An autistic boy stares off into the distance when you try to talk to him. A stroke victim needs constant encouragement to perform exhausting, boring physical exercises. A lonely elderly woman is slipping toward a vegetative existence for lack of any social ties.
These are just three examples of people who desperately need person-to-person interaction -- not just casual social friends, but someone who will work with them intensely, hour after hour. Someone constantly encouraging, constantly engaged, constantly attentive; someone who never shows boredom, discouragement, frustration. Someone a bit like a saint.
Or maybe a robot.
In a recent New Yorker article, medical professor Jerome Groopman discusses the emerging field -- and developing technology -- of "socially assistive robots." These are machines designed to interact with human beings. Whether the robots move or are stationary, speak or make noises, appear human or remain simply machine-like, they are all designed to work with patients, often tirelessly doing or saying the same thing over and over, giving the patient the unending support and encouragement that he or she needs. Ideally, the robot performs the same tasks that human therapists perform now, but does them indefinitely, beyond normal human endurance.
The simpler robots, such as some used in physical therapy, merely measure the effort the patient exerts and offer pre-recorded verbal encouragement. More sophisticated robots, working with children suffering from autism, use their half-machine, half-human character to lure kids -- kids who are often comfortable with machines but have difficulty sharing interests and information with other humans -- into child-robot relationships that, it's hoped, will evolve into improved relationships with other children. Similarly, robots working with the elderly are designed to fill a vacuum in their lives, giving them someone with whom to bond -- a bond that may also encourage them to bond among themselves (a function that animal pets often perform as well).
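To make the simpler case concrete, here is a rough sketch -- my own illustration, not anything described in Groopman's article -- of how a physical-therapy robot might pair an effort sensor with canned encouragement. The sensor function, the phrase list, and the threshold are all hypothetical.

```python
# Hypothetical sketch: a therapy robot that samples patient effort and plays
# a pre-recorded encouragement whenever the effort crosses a threshold.
import random
import time

ENCOURAGEMENTS = ["Great work!", "Keep it up!", "You're almost there!"]

def read_effort_sensor() -> float:
    """Stand-in for a real force or EMG sensor; returns effort normalized to 0..1."""
    return random.random()

def therapy_session(duration_s: float = 60.0, effort_threshold: float = 0.6) -> None:
    """Sample effort once per second and respond whenever it exceeds the threshold."""
    end = time.time() + duration_s
    while time.time() < end:
        if read_effort_sensor() >= effort_threshold:
            print(random.choice(ENCOURAGEMENTS))  # stand-in for audio playback
        time.sleep(1.0)

if __name__ == "__main__":
    therapy_session(duration_s=5.0)
```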
Dr. Groopman notes that while some of these robots may be fairly simple in their intellectual attributes, others are being designed with the ability to learn from experience and, based on that learning, to modify their own behavior and responses. For example, some robots that work with recovering stroke victims can determine to what extent a particular patient is introverted or extroverted. The more introverted the patient, the more physical distance the robot keeps, the lower the pitch of its "voice," the more slowly it speaks, and the more encouragement it offers.
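In programming terms, that kind of adaptation could be as simple as mapping an estimated introversion score onto a handful of interaction parameters. The sketch below is purely my own illustration -- the score range, the parameter names, and the linear scaling are assumptions, not details from the article.

```python
# Hypothetical sketch: scale the robot's distance, voice pitch, speech rate,
# and encouragement frequency from an estimated introversion score in [0, 1].
from dataclasses import dataclass

@dataclass
class InteractionStyle:
    distance_m: float              # how far the robot stands from the patient
    voice_pitch_hz: float          # fundamental frequency of the synthesized voice
    speech_rate_wpm: float         # words per minute
    encouragements_per_min: float  # how often the robot offers encouragement

def style_for(introversion: float) -> InteractionStyle:
    """Linearly interpolate interaction parameters from an introversion score."""
    t = max(0.0, min(1.0, introversion))       # clamp to [0, 1]
    return InteractionStyle(
        distance_m=1.0 + 1.5 * t,              # introverts get more personal space
        voice_pitch_hz=220.0 - 60.0 * t,       # lower pitch for introverts
        speech_rate_wpm=160.0 - 50.0 * t,      # slower speech for introverts
        encouragements_per_min=2.0 + 4.0 * t,  # more frequent encouragement
    )

print(style_for(0.8))  # parameters for a fairly introverted patient
```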
In real life, we have clearly taken the first, halting steps toward the artificial intelligence (A.I.) we have been reading about in science fiction for decades. In many ways, these developments seem miraculous and wonderful, but science fiction has trained our minds to suspect that it will all go wrong in the end.
Groopman mentions the "uncanny valley" effect: the uneasiness or revulsion people feel when a robot seems almost, but not entirely, human. At present, designers avoid this effect by deliberately building non-human characteristics into their robots. But the effect may not always be a problem. Already, elderly patients have been observed looking forward eagerly to spending time with their robots. One researcher observed a woman who called a robot her "grandchild."
"Others said that they would like to arrange their schedule around singing to the robot ... it's the high point of their day." ... "One woman spun quite a yarn," Mataric said. "They had whole internal stories about how the robot fit into their lives, however unreal those stories may be."
Here, the robot is not just a tool that facilitates amused interaction among elderly women; interaction with the robot itself appears to have become their prime social objective. "We were wired through evolution to feel that when something looks us in the eye, then someone is at home in it." These women find their robots better company than their own peers.
And can we blame them? Isn't this really what we all would like? A totally non-judgmental friend who is constantly encouraging and forever patient, who has only our best interests at heart? One to whom we can tell the same story over and over, and get the same appreciative laugh every time? One who always gives, and demands nothing from us in return?
In other words, aren't these robots way too good to be enjoyed only by the disabled, the elderly, the autistic? As they develop further, as they are made ever more human, as they become ever more skilled at being the kind of buddies we always craved, surely -- human nature being what it is -- there will develop black markets in robots: robots obtained without a prescription, dubious claims of a medical need for robots. In the end, doesn't the battle over marijuana show that sooner or later we'll get what we crave -- robots on demand?
Everyone will have his own personal robot or robots. Where the economy demands interpersonal cooperation and empathy that we no longer have the patience or skills to develop ourselves, we will gladly delegate to our robots the task of working things out among themselves on our behalf.
At the end of his article, Groopman quotes an MIT psychology professor who expresses fear that patients now being helped by robots will find it too easy to limit their social life to interaction with their robots. Look at email, he argues. Has email facilitated human contact, or replaced it? What the professor sees as a threat, however, mankind may well find a blessing. Human relations are messy. Robot relations will be tidy and warm and fulfilling. If there are conflicts between persons, between nations, let the robots sort it out -- we trust them.
Hey, I read sci-fi! I go to movies! I already know the ending to this story!
Jerome Groopman, "Robots That Care," The New Yorker (Nov. 2, 2009)
2 comments:
There are quite a number of scientists who view artificial intelligence as the next step in human evolution. The logic, it seems, is that if we create the robots to be more and more human-like, eventually they will be able to take the reins and reprogram themselves to be even better.
On a related note, I read about an interesting thought experiment a few years ago: there will soon exist technology by which you can replace a single neuron in your brain with a mechanical component. If someone were to do this, in stages, to their entire brain, would they become non-human? If so, at what point?
I can't give you any citations, but I think they already have developed quite a few robots with a limited capacity to reprogram themselves. Nothing too dramatic yet, however.
Your thought experiment is very cool. Something I've thought about for a long time, although usually in the context of somehow doing a complete mechanical copy of a person's brain -- the copy would then have all of the person's memories up until the instant that it was created. But I like your approach -- a gradual replacement of neurons and synapses, one by one.
From a legal point of view, I guess there's no problem. The person would stay "human" even if the entire brain were replaced, just as when legs, arms, hearts, etc., are replaced. But philosophically and religiously, the question seems huge.
Fascinating stuff being done in this area.