Editor’s Note: Mark Goldfeder, senior lecturer at Emory University School of Law and senior fellow at the Center for the Study of Law and Religion, is the author of a forthcoming book on robots in the law. The opinions expressed in this commentary are solely those of the author.
Computer program passes the Turing Test for artificial intelligence for the first time
Mark Goldfeder: It's a sign that the age of robots has truly arrived
He says it's time to rewrite the law to permit recognition of robots as persons
Goldfeder: A robot could be held liable for damages -- and it might need insurance
For the first time, a computer program passed the Turing Test for artificial intelligence. A computer on Saturday was able to trick one-third of a team of researchers convened by the University of Reading into believing it was human – in this case a 13-year-old boy named Eugene.
The Turing Test, named for British mathematician Alan Turing, is often thought of as the benchmark test for true machine intelligence. Since he introduced it in 1950, thousands of scientific teams have tried to create something capable of passing, but until now none had succeeded.
And that outcome means we need to start grappling with whether machines with artificial intelligence should be considered persons, as far as the law is concerned.
In 1920, Karel Capek introduced the mainstream world to the concept of artificial people in his play “Rossum’s Universal Robots” (the word robot comes from the Czech word for serf labor). Since then, society has been fascinated by the idea of a robot walking among us, or even crossing over into personhood like a modern-day Pinocchio.
The fascination continues; just take a look at this year’s box office. In the recent film “Transcendence,” Johnny Depp starred as a sentient machine. In the critically acclaimed “Her,” Joaquin Phoenix’s character fell in love with an advanced operating system named Samantha. Coming attractions include more installments in the rebooted “RoboCop” franchise; “Star Wars: Episode VII,” with its universally lovable droids; and, of course, “Terminator 5.”
A question at the heart of all these movies is this: At what point does a computer move from property to personhood?
Granting robots legal personhood in the near future makes sense. Artificial intelligence is already part of our daily lives. Bots are selling stuff on eBay and Amazon, and semiautonomous agents are determining our eligibility for Medicare. Predator drones require less and less supervision, and robotic workers in factories have become more commonplace. Google is testing self-driving cars, and General Motors has announced that it expects semiautonomous vehicles to be on the road by 2020.
When the robot messes up, as it inevitably will, who exactly is to blame? The programmer who sold the machine? The site owner who had nothing to do with the mechanical failure? The second party, who assumed the risk of dealing with the robot? What happens when a robotic car slams into another vehicle, or even just runs a red light?
Liability is why some robots should be granted legal personhood. As a legal person, the robot could carry insurance purchased by its employer. As an autonomous actor, it could indemnify others from paying for its mistakes, giving the system a sense of fairness and ensuring commerce could proceed unchecked by the twin fears of financial ruin and of not being able to collect. We as a society have given robots power, and with that power should come the responsibility of personhood.
From the practical legal perspective, robots could and should be people. As it turns out, they can already officially fool us into thinking that they are, which should only strengthen their case.
The notion of personhood has expanded significantly, albeit slowly, over the last few thousand years. Throughout history, women, children and slaves have all at times been considered property rather than persons. The category of persons recognized in the courts has grown to include natural persons beyond free men (such as women, slaves, human aliens, illegitimate children and minors) as well as unnatural or juridical persons, such as corporations, labor unions, nursing homes, municipalities and government units.
Legal personality makes no claim about morality, sentience or vitality. To be a legal person is to have the capability of possessing legal rights and duties within a certain legal system, such as the right to enter into contracts, own property, sue and be sued. Not all legal persons have the same rights and obligations, and some entities are only considered “persons” for some matters and not others.
Just last month, the Supreme Court heard arguments in the Hobby Lobby case about whether a corporation is person enough to ask for a religious exemption.
New categories of personhood are matters of decision, not discovery. The establishment of personhood is an assessment made to grant an entity rights and obligations, regardless of how it looks and whether it could pass for human.
To make the case for granting personhood to robots, it’s not necessary to show that they can function as persons in all the ways that a “person” may be understood by a legal system. It’s enough to show that they may be considered persons for a particular set of actions in a way that makes the most sense legally and logically.