
Robot rights – a legal necessity or ethical absurdity?

The question of robot rights has emerged as a much more nuanced topic than it first appears. We examine the pros and cons.

We’re almost two decades past a world where machines like HAL 9000 gained enough self-awareness to commit spaceship mutiny, but dystopian visions of robots run amok re-emerge every time Boston Dynamics releases a promotional video.

Given the fears of robot-induced mass unemployment or autonomous warriors of unchecked destructive power, it might seem ironic that the issue of robot rights has become a matter of serious policy debate.

I’ll admit to being skeptical when reading a recent column decrying the plight of poor, abused robots, but upon further review, the question of whether autonomous, adaptive, i.e. learning, machines have or need rights is nuanced and worthy of debate. The legal status of quasi-intelligent machines capable of independent action is sure to vex developers, manufacturers, politicians and legal scholars for many years as technological advances outpace our legal and regulatory frameworks. Here’s a preview of a discussion that’s sure to heat up in 2019.

What are rights and how can machines possess them?

My initial reaction on encountering the concept of robot rights was somewhere between skepticism and derision, since I reflexively conflated “rights” with human or ethical rights. These are the rights of the Enlightenment, the Declaration of Independence and the Bill of Rights, but they are only one category, namely natural rights, within a broader philosophical and legal framework.

The notion that any unconscious machine, regardless of its patina of intelligence, has the same innate claim on life, liberty, self-determination and the pursuit of happiness as a human being seems preposterous. However, our judicial systems also confer legal rights and obligations on individuals and other entities, notably corporations, that are independent of one’s entitlements as a human being.

I was drawn down the natural rights rabbit hole by a recent article by Andrew Sherman, an attorney specializing in business law and IP, that argued for extending workplace protections, aka workers’ rights, to robots. While not a full-fledged human rights argument, the examples cited in the piece came close to equating the plight of the abused laborers that led to the union movement more than a century ago with the situation facing robotic workers in the future. Here’s a sample (emphasis added),

By the year 2025, robots and machines driven by artificial intelligence are predicted to perform half of all productive functions in the workplace. What is not clear is whether the robots will have any worker rights.

Humans already have shown hatred toward robots, often kicking robot police resources over or knocking down delivery bots in hopes of reclaiming a feeling of security or superiority. … What is new is that it will only be a matter of time before the automated creatures will ‘feel’ this hostility and/or feel the need to retaliate.

The last sentence is highly debatable, perhaps the influence of too many sci-fi movies, since the prospect of machines that develop independent emotional feelings (not just mimicry), as opposed to the sensory perception they already possess, is remote. (Here’s a decent summary of the debate.) Sherman then stumbles into a valid point, just not one justified by his previous argument.

These acts of hostility and violence have no current legal consequence — machines have no protected legal rights. But as robots develop more advanced artificial intelligence empowering them to think and act like humans, legal standards need to change.

The legal standards pertaining to robots and quasi-intelligent algorithms probably are inadequate; however, the more pressing issues pertain to legal rights, not natural or human rights. Unfortunately, he conflates the two while raising a mix of legitimate (emphasized) and specious questions.

Few are considering this trend from the perspective of the rights of our automated coworkers. What legal standing should the robot in the cubicle next to you have from a labor, employment, civil rights or criminal law perspective, and as a citizen? … Will humans discriminate against the machines? Will workplace violence or intolerance be tolerated against robots? … Should robots be compensated for their work? How and when? Are they eligible for vacation or medical benefits? What if a robot causes harm to a coworker or customer? Who’s responsible? Will robots be protected by unions? If a robot “invents” something in the workplace, or improves a product or service of the company, who or what will own the intellectual property?

The notion that robots as currently or foreseeably constituted need civil rights or HR benefits is absurd; however, questions of criminal and tort responsibility for behavior by or toward robots, and of IP ownership, need updating for an era of autonomous, adaptive, ‘intelligent’ machines.

Corporate personhood as a model for robot rights

The robot rights debate was ignited in 2017 by an EU Parliament report with recommendations to the Commission on Civil Law Rules on Robotics. Section 56 proposes a reasonable approach given the state of current robotic technology based on deep learning algorithms (emphasis added).

Considers that, in principle, once the parties bearing the ultimate responsibility have been identified, their liability should be proportional to the actual level of instructions given to the robot and of its degree of autonomy, so that the greater a robot’s learning capability or autonomy, and the longer a robot’s training, the greater the responsibility of its trainer should be; notes, in particular, that skills resulting from “training” given to a robot should be not confused with skills depending strictly on its self-learning abilities when seeking to identify the person to whom the robot’s harmful behaviour is actually attributable; notes that at least at the present stage the responsibility must lie with a human and not a robot.
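To make that proportionality principle concrete, here is a minimal sketch of how such an apportionment rule might look. The report states a principle, not a formula, so everything below — the parties (manufacturer, trainer, operator), the function name and the linear split — is a hypothetical illustration, not anything the report prescribes:

```python
# Toy illustration of the Section 56 proportionality principle.
# The report states a principle, not a formula; the parties, weights
# and linear split below are hypothetical assumptions for illustration.

def apportion_liability(autonomy: float, training_share: float) -> dict:
    """Split responsibility for a robot's harmful act.

    autonomy       -- 0.0 (fully scripted) to 1.0 (fully self-learning)
    training_share -- fraction of the robot's relevant skills acquired
                      through explicit training rather than self-learning
    """
    # The less autonomous the robot, the more its behaviour traces back
    # to the manufacturer's explicit instructions.
    manufacturer = 1.0 - autonomy
    # Of the autonomous remainder, trained skills point to the trainer...
    trainer = autonomy * training_share
    # ...and self-learned skills to whoever deployed and operates it.
    operator = autonomy * (1.0 - training_share)
    return {"manufacturer": manufacturer,
            "trainer": trainer,
            "operator": operator}

# A highly autonomous, heavily trained robot: the trainer carries the
# largest share, echoing the report's "the longer a robot's training,
# the greater the responsibility of its trainer" language.
print(apportion_liability(autonomy=0.75, training_share=0.75))
# {'manufacturer': 0.25, 'trainer': 0.5625, 'operator': 0.1875}
```

Any real apportionment rule would of course be a policy choice rather than an engineering one; the point is simply that “proportional to autonomy and training” is specific enough to be operationalized.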

However, Section 59 (f) contains the most controversial proposal, the one that raises the rights issue,

Creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.

Critics fear that granting robots personhood is just a way for manufacturers to shirk responsibility for what are ultimately defects in software and training data. However, the idea of granting legal personhood to various entities goes back in U.S. law to the seminal Santa Clara County v. Southern Pacific Railroad Co. Supreme Court decision that extended the equal protection rights of the Fourteenth Amendment to corporations and established the legal foundation for corporate personhood. As robots assume more tasks, some entailing life-and-death decisions for the humans around them, it seems reasonable to establish a legal framework that addresses and assigns responsibility for their actions and inactions.

Separating the philosophical from the legal

The most thorough treatment of robot rights to date is a book of the same name by David Gunkel, a professor of media studies at Northern Illinois University. In an interview for NIU, Gunkel contends that the issue of rights arises from society’s evolving view of the status of autonomous machines, which has moved them into a gray area between natural persons and things. He also correctly notes that the legal definition of “person” is not limited to human beings. In Gunkel’s words (emphasis added),

‘Person’ is a socially constructed moral and legal category that applies to a wide range of different kinds of entities and not just human individuals. In fact, we already live in a world overrun by artificial entities that have the rights (and the responsibilities) of a person—the limited liability corporation. IBM, Amazon, Microsoft and McDonalds are all legal persons with rights similar to what you and I are granted under the law—the right to free speech, the right to defend ourselves from accusations, the right to religion, etc. If IBM is a legally recognized person with rights, it is possible that Watson—IBM’s AI—might also qualify for the same kind of status and protections.

Gunkel proposes a different way of viewing “the social position and status of technological artifacts” based, not on what something is, i.e. conscious, self-aware, capable of pain or emotions, but on how humans interact with it, a conceptual model called “the relational turn.” As Gunkel puts it,

The relational turn puts the how before the what. As we encounter and interact with others—whether they are humans, animals, the natural environment or robots — these other entities are first and foremost situated in relationship to us. Consequently, the question of social and moral status does not necessarily depend on what the other is but on how she/he/it stands before us and how we decide, in “the face of the other,” to respond. Importantly, this alternative is not offered as the ultimate solution or as a kind of ‘moral theory of everything.’ It is put forward as a kind of counterweight to provide new ways of thinking about our moral and legal obligations to others.

The relational model might be useful for addressing the social and legal norms for human-robot interactions as the machines become more advanced and adaptable, but it falls short when considering the criminal and tort responsibilities of autonomous, adaptive machines.

My take

Using the relational yardstick, the more humans treat robots like a social peer, friend or colleague, the more rights they should be accorded by our legal system. Of course, the same argument has been made for animal rights, where, at least with domesticated animals, the relational interactions are far stronger across cultures and generations.

While the model might be useful for establishing norms as certain types of robots or machines become human companions, I believe it doesn’t adequately address the more pressing issues of liability for personal and property damage or of IP creation and theft. No matter how much we treat a robot like a human colleague or friend, should one go HAL 9000, can it be guilty of premeditated murder, or is that the responsibility of the developer and trainer?

I believe we are still many, many years from robots capable of such actions that would force us to face questions of individual or natural rights. Nor do I buy the argument that malicious, gratuitous damage to a robot is some form of hate crime against what are still unfeeling (in the emotional, not sensory sense) automatons. Current property damage and liability statutes seem adequate to cover the occasional Luddite or, conversely, malfunctioning droid.

Instead, I think Section 56 of the EU report is the proper approach, namely establishing accountability laws and damage mitigation structures (like insurance) that reflect the differences between traditional machines and autonomous, adaptive, ‘intelligent’ robots and the algorithms that power them. These must be extended with provisions that define ownership of any IP such machines might create in the course of their normal use, IP that is clearly distinct from the underlying algorithms controlling them.

Society and robot developers must also address the rights of humans when dealing with robots, particularly to allay the dystopian fears of rogue destructive behavior. Here, Asimov’s Three Laws of Robotics are a philosophical foundation from which to build a code of robotic ethics.

SOURCE: diginomica.com