It’s morning in Canada, circa 2030. Meet the Jones family starting its day.
Jenny Jones is off to kindergarten with her robot nanny, Pearl. Pearl walks, talks and can watch out for traffic. More important, she is soft, warm and huggable and never gets tired of reading Love You Forever. No wonder Jenny sometimes calls her Mommy.
Jenny’s father, Mason, in his self-driving car on the way to work, is checking with his robot law clerk in Mumbai about searching law libraries for precedents. (Driving one’s own car is illegal now: people have too many accidents.) Suddenly, a baby carriage rolls off the sidewalk and toward the car. If the car swerves, it will hit a cyclist. Which will it choose?
Mason’s latest client, Jacob, fatally shot his fembot. Jacob never had much luck with women. Then he bought Roxxxy, the ideal mate, he thought, until she fell in love with the robot gardener next door. Should the charge be manslaughter or disturbing the peace?
Jenny’s mother, Emma, a doctor, is feeding her patient’s symptoms into the diagnostic machine, a successor to the Jeopardy-winning computer Watson. It spits out a treatment. Emma doesn’t agree with it, but she’s not expected to think for herself, not anymore.
Jenny’s grandmother, Karen, lives at the Happy Endings nursing home. Her robot nurse, Cybel, is getting her up, helping her dress, doling out her daily pills. Cybel is a good listener and never lets on that Karen has told the stories about the olden days 20 times before. Her work is a great help to the family; they don’t feel guilty about not visiting. Except now Karen is talking about changing her will.
Jenny’s uncle, Lucas, is a field commander in the Fifth Iraq War. Well, sort of. From a desk in Ottawa, he dispatches robot soldiers to attack and fire on anything suspicious. Which might mean anything that moves. He’s more worried that an enemy computer nerd might hack into the software and reprogram all the robots to fight for the opposing side.
This is not science fiction. All of these artificially intelligent machines are in production or on the drawing boards. More are on the way, whether we like it or not, and they come with a bag of ethical issues that we haven’t even begun to think about. Our use of robots today is where we were with personal computers 40 years ago, says Bill Gates, founder of Microsoft. And he should know. Even before 2030, robots may be in our homes, helping with housework, caring for our children and elders, providing companionship and perhaps sex. Outside the home, computers may take over 47 percent of our jobs, according to an Oxford University estimate. “We are on a precipice never known in history of delegating our tasks to machines,” says Ian Kerr, Canada Research Chair in ethics, law and technology at the University of Ottawa.
What happens to the people who used to do those jobs? What does it say about us as a society if we surrender the care of our children and elderly to robots? Is there something wrong about sending machines to kill people? And if we can design robots that are even more intelligent than we are, should we? Physicist Stephen Hawking told the BBC last December that robots could spell “the end of the human race.” But what’s to stop us?
Every few centuries, we discover a new way of killing each other that’s a game changer: the crossbow, gunpowder, nuclear weapons and now robot soldiers. They fearlessly obey orders and never go AWOL, rape, plunder, exact revenge, suffer post-traumatic stress disorder or require pensions. Most of all, they spare us the unsettling sight of body bags coming home.
Some forms of robots are on the front lines already — an estimated 12,000 in the Middle East, according to ethicist Peter Singer — on sentry duty, clearing minefields, exploring hiding places. Rocket-firing drones, which cost $14 million, are replacing F-35 fighters, which cost $180 million and seem to kill just as many people. Unfortunately, in the Middle East, hundreds of those killed have been innocent bystanders.
Of course, the robots are expected to be programmed to follow the rules of civilized war, if that’s not an oxymoron. Roboticist Ronald Arkin at the Georgia Institute of Technology, with millions in grants from the U.S. military, is building an “ethical governor” into robots that might minimize casualties. He’s tested it in simulated combat. Whether or not it’s possible to make a perfectly ethical system, the goal is to design one that performs better than humans, especially in reducing war crimes — and sadly, that may be a low hurdle. (Canada’s military is doing no research into developing lethal autonomous weapons but opposes any United Nations limit on their use.)
Even if robot soldiers can be programmed to act more ethically than humans, is it ethical to use them? To give machines the right to kill is to treat the enemy as less than human. When all we risk is hardware, going to war will no longer be a last resort.
At a meeting this past April, the United Nations was urged to ban this technology before it is out of the box, and in July more than a thousand robotics experts warned in an open letter that a military artificial intelligence “arms race” must be stopped. Lots of luck with that. Dozens of countries have military robot programs under way. If they believe it’s in their interest to have autonomous soldiers, they will have them. The UN’s special rapporteur Christof Heyns admitted as much, in cagier language, in his report: “Experience shows that when technology that provides a perceived advantage over an adversary is available, initial intentions are often cast aside.”
If killing machines are a future challenge, self-driving cars and trucks are already here. An Ontario pilot project will see autonomous cars tootling around public roads come January; the mining company Rio Tinto has more than 50 autonomous trucks operating in Australia. At this year’s Consumer Electronics Show in Las Vegas, a half-dozen automakers announced plans for self-driving vehicles road-ready between 2017 and 2018. They are programmed to stay in a lane, avoid oncoming traffic, obey speed limits, stop at red lights. But how can we program them for the challenge Mason’s car faced? Can we even agree on what a human should choose — the cyclist or the baby?
Alan Winfield of Bristol Robotics Laboratory in England tested a similar challenge on a robot programmed to obey what science fiction writer Isaac Asimov postulated as the first law of robots: do no harm to humans or, through inaction, allow a human to be harmed. In the test, a proxy for a human was sent rolling toward a hole. The robot pushed it out of danger. Then, two dummies headed for the hole at the same time. The robot had to choose. Sometimes it saved one; a few times it saved both; but in 14 out of 33 trials it took so long making up its mind that both fell in.
There will be accidents with driverless cars. Software fails. The unpredictable happens. But there will be fewer accidents than with tired, tipsy, distracted human drivers. The U.S. investment firm Morgan Stanley estimates total yearly savings for the United States from a driverless takeover — including fuel, repairs, hospital bills and the value of time saved — at $1.3 trillion. Hard to resist, that. How about truckers? It’s the second most common job for men in Canada, with some 253,000 employed. Dumping them will cut costs by 40 percent, says Paul Godsmark of the Canadian Automated Vehicles Centre of Excellence, and “by 2030 I’m guessing it’s game over for truck drivers.”
Saving money is hard to resist, and nowhere more than in hospitals facing the soaring cost of care. Trolleys called Tug already roll through U.S. hospitals, clocking an average of 20 kilometres a day, delivering meals, linens and drugs, picking up waste and laundry. They’re cheaper than people, and they work 24-7. Too bad about the humans they replace.
Childlike robots roam the wards of the Alberta Children’s Hospital in Calgary, and researcher Tanya Beran found they reduced children’s anxiety about flu shots by 50 percent. Friendly machines are also being used in therapy. Kids with autism talk to them, play with them. Robots aren’t threatening or judgmental.
The diagnostic descendant of Watson, the Jeopardy-winning computer, can cull more than two million pages in medical journals, search up to 1.5 million patient records and in seconds propose a diagnosis and treatment. Its success rate in diagnosing lung cancer, for example, is claimed to be 90 percent, compared to 50 percent for humans. What worries Ian Kerr about this is that when we delegate decision-making to computers, we may make ourselves dependent on them at the expense of our own critical thinking. “Doctors may be obliged to follow the machines,” he told me, “and that can be a dilemma like the angst of Abraham: do I listen to authority, or do I do what’s in my heart?”
Along with driverless cars on the road, by 2030 we’ll likely have a robot in the house, like Pearl, Cybel or Roxxxy, to meet our needs for care, companionship and even sex. One hundred different models of social robots are already available; the Japan Robot Association estimates a $50 billion market for them by 2025. And more interesting ethical issues come up when we use them to solve social problems.
For Mason’s client Jacob, his problem was attracting a girlfriend. But even in 2015, he could have had his robotic pick. Roxxxy, a product of True Companion, a New Jersey company, comes customized in looks (blond, brunette, Asian, black), personality (naive, experienced) and many other imaginable, and unprintable, ways. Covered in synthetic skin, she shifts between moods (sleepy, conversational, excited), “just like real people!” the website claims. While she won’t discuss existentialism in Camus, she can carry on a conversation about as deep as you’d get at a noisy cocktail party. “Sex only goes so far; then you want to be able to talk to the person,” sexbot inventor Douglas Hines explains. Price depends on your choice of features, but the deluxe version (with listening ability) sells for US$6,995. And Roxxxy could soon have a host of rivals, as other sex-mannequin companies are actively investing in robotic technology. The market at the moment seems strongest in Japan where, perhaps not by coincidence, the birth rate is close to the world’s lowest.
In a United Kingdom survey last year, one in five of those questioned said they would have sex with a robot. No worries about disease or emotional complications. As David Levy, a British roboticist, told a writer for LiveScience, “Once you have a story like ‘I had sex with a robot, and it was great!’ appear someplace like Cosmo magazine, I’d expect many people to jump on the bandwagon.”
The downsides? Less work for prostitutes. Whether you view prostitution as exploitative or as a legitimate career choice, it still pays the bills for those who need it. Occasional frustration: “Not tonight, dear; I’m recharging.” And something seems basically wrong about an industry selling the ideal of womanhood as life-sized Barbies, beautiful and brainless; an industry that validates using women — robotic or real — as creations for male pleasure.
Turning over our children to robot nannies is a distant prospect in Canada. But Japan (which discourages immigration) needs them, and glowing customer reviews testify how completely children bond with them. Maybe it’s no different from having a human nanny. Or maybe there’s something harmful in one’s primary attachment being to a machine. We may not know until Pearl’s charges grow up and visit a psychiatrist.
The biggest argument for robot caregivers for the elderly is that they are needed. They are already in use in Japan, where seniors will make up nearly 40 percent of the population by 2060, leaving not enough young people to go around. In Canada, too, we’re all aging, and a reliable robot may be better than an abusive or neglectful person or none at all. Engineering professor Goldie Nejat at the University of Toronto is developing health-care robots for the elderly, at home or in hospitals, including Tangy, who debuted at a Toronto seniors’ home this year as the bingo caller.
For Mason’s mother, Karen, a visit from her son would be better, but a friendly robot is second best. Cybel can keep her company, make sure she takes her pills, call the doctor if she seems ill. It’s Mason who is missing something, David Deane of the Atlantic School of Theology argues. “When we care for those in need, it heals us,” he tells me. “Their vulnerability is a gift to others. We are called by Christ to visit the sick. Saying yes to that call is vital to who we are as humans.”
Should we worry about the rise of the robots? Stuart Russell, a pioneer in artificial intelligence at the University of California at Berkeley, thinks so. In an open letter earlier this year, he asked for more research aimed at ensuring that AI systems do what we want them to do. Thousands have signed it, and by last March, 300 research groups had applied to work on keeping artificial intelligence “beneficial.”
But Stephen Hawking’s fear of out-of-control robots enslaving humans is a long way off. We have other things to worry about.
One is that the more we let machines diagnose illness, drive our cars and provide our services, the more we make ourselves dependent on them. If they break down, will we still have the skills they replaced? Already parallel parking is a dying art. Inuit hunters with a GPS lose their tracking skills.
A second concern is that robots are taking our jobs — dirty and dangerous ones like repairing sewers and clearing landmines, low-skill ones like flipping hamburgers and high-skill ones like laboratory analyses. Getting humans out of the loop is more efficient. It saves money. Are those our only values? Are we ready for the consequences when a few get rich on the new technology and the rest face a jobless future? Surely that is something the church, among others, should be thinking about.
The third threat is not whether robots can be programmed to be “good” — if we can agree on what “good” means (philosophers have been debating that for 2,500 years). Rather, it is what robots do to us as humans. As machines more and more approximate human abilities, will we think of ourselves more and more as machines? Robots can already reason, decide, act, co-operate with each other, maybe even reproduce. (A University of Cambridge scientist has demonstrated rudimentary robot “evolution”: how a “mother” robot can assemble “babies,” evaluate them and select the best to improve each generation.) Some developments in the works, cautions Carnegie Mellon professor Illah Nourbakhsh, are so significant they could lead to “a collective identity crisis over what it means to be human.”
Still, as Kerr says, “There’s a lot more to being human than carrying out operations.” Imagination, poetry, awe, love: there’s no software for that. So far.

Patricia Clarke is a writer and editor in Toronto.