In 2015 I was watching a dress rehearsal for a play about love, loss and aging. In a climactic scene, the lead actress gesticulated and shouted, while her co-star cowered before her—stuttering, repeating himself, faltering in the face of her tirade. It was a beautiful, humanizing moment, pulling the audience into his distress and confusion. But it was off-script, and this actor wasn’t supposed to improvise. We paused while an engineer tinkered with his programming.
The play was Spillikin by Jon Welch, a playwright known for tackling of-the-moment social issues in an unflinching, bitterly comic style. Beyond its considerable theatrical merits, this production had the additional draw of starring a six-foot-tall android. Until recently I’d been employed as a robotics researcher at the company that supplied this humanoid, which gained me a backstage pass and a few evenings at the pub with Jon, exploring how a robot might inhabit—or fail to inhabit—the role of companion and friend. Spillikin offered a prescient look at a near future of aging isolation, without family or social networks to buttress against the oncoming dark, but it also had me thinking: can technology possibly fill this gap?
Sally, the main character of Spillikin, is not disabled, but she is aging. Formerly a fiery, foul-mouthed rock ’n’ roll music journalist, she is restricted by encroaching Alzheimer’s to an ever-shrinking existence in an increasingly empty house. Her husband was a researcher in artificial intelligence, who on his death left her a gift—a robot caretaker, imbued with his memories and personal quirks. Watching, you have to ask: Is this pseudo-companionship enough for the once-brilliant Sally? Will being patronized and soothed by a repetitive mechanical facsimile enhance or erode her declining faculties?
No longer the domain of science fiction, these questions are now highly relevant to both commercial robotics and the care sector. In 15 years, the percentage of the population over 65 will more than double in Europe, Japan and the U.S. A tenfold increase in care workers will be required, at a time when the sector is relentlessly shrinking. At first glance, this could be a perfect opportunity for robots to fill a genuine social need—entrepreneurs and tech evangelists frequently talk of machines tackling the “dangerous and demeaning work” of carrying and cleaning patients.
But we are decades away from even small-scale employment of such devices, which require many cycles of testing, approval, and certification. A rule of thumb in robotics is that a robot must work 40–50 percent of the time if it’s a proof of concept, 90 percent of the time in a non-critical commercial setting (e.g. a toy), and 99.9 percent of the time in a critical industrial setting. And that’s for robots never intended to touch or handle a human being. What kind of failure rate should we accept when the consequences might involve injury to our most vulnerable?
Aware of these issues, but hyper-alert to a looming market opening, manufacturers are pushing forward to establish robots in non-physical, non-critical support roles, such as digital companions or animatronic pets. These robots are designed to entertain people, listen, soothe or perform as a telepresence interface for absent relatives, leaving human caregivers to get on with the more physically challenging tasks of emptying bedpans and administering medication.
These business initiatives assume that human interactions are not a pleasure for caregivers, but a tedious imposition that cuts productivity, efficiency and (in private institutions) profits. In a sector that already struggles with low pay and recruitment, removing the opportunity to bond with patients may not be the most enticing proposition for future nursing students and professionals. Another concern is the lack of long-term research on whether displacing social and physical contact onto machines is psychologically detrimental. Prisons in the U.S. have been gradually replacing in-person visitation with supervised Skype meetings, but inmate advocacy groups say this is unacceptably cruel. Can something be a cruelty for prisoners, yet adequate for the elderly and disabled?
The issues raised above, and by Spillikin, are ones that urgently need answering before the juggernaut of technology irrevocably reshapes the care sector. To try to tackle them, I’ve joined with an international team of social scientists, health and policy experts, and roboticists. We want to encourage governments to consider the possible positive and deleterious effects of robot caregivers on the long-term mental health of the elderly and disabled, and to create regulatory frameworks to protect citizens pushed into a role as guinea pigs on the vanguard of a social revolution they may not want.
We are also interested in exploring alternatives that could mitigate the oncoming demographic crisis. Finland, for example, has pioneered a scheme where young people get subsidized housing in eldercare facilities, in return for spending time with the residents. This kind of complex social interaction is far more stimulating than anything that can presently be provided by technology. In industrial settings, the idea of a “robot tax” has been floated—taxing companies that move to automation at the expense of skilled or unskilled labor, and using those proceeds to provide training and support in affected communities. A similar policy around social robotics could yield benefits.
Perhaps, as boomers age and extended families become sparser, robots really will be a boon. Ultimately, the future offered by Welch’s play is a bittersweet one. Sally might no longer have her husband, but she has her memories—memories shared by the homunculus he left in his place, and Welch suggests that the non-sentient comfort of a metal hand may be better than nothing. But the question we should be asking is whether these are our only choices, or whether we can take advantage of technological advances without abandoning real human needs.