Even five years ago, technology seemed external, a servant. These days, what's so striking is not only technology's ubiquity but also its intimacy.
On the Internet, people create imaginary identities in virtual worlds and spend hours playing out parallel lives. Children bond with artificial pets that ask for their care and affection. A new generation contemplates a life of wearable computing, finding it natural to think of their eyeglasses as screen monitors, their bodies as elements of cyborg selves. What will it mean to people when their primary daily companion is a robotic dog? Or to a hospital patient when her health care attendant is built in the form of a robot nurse? Both as consumers and as businesspeople, we need to take a closer look at the psychological effects of the technologies we're using today and of the innovations just around the corner.
Diane L. Coutu met with Sherry Turkle, the Abby Rockefeller Mauzé Professor in the Program in Science, Technology, and Society at MIT. Turkle is widely considered one of the most distinguished scholars in the area of how technology influences human identity.
Few people are as well qualified as Turkle to understand what happens when mind meets machine. Trained as a sociologist and psychologist, she has spent more than twenty years closely observing how people interact with and relate to computers and other high-tech products. The author of two groundbreaking books on people's relationship to computers, The Second Self: Computers and the Human Spirit and Life on the Screen: Identity in the Age of the Internet, Turkle is currently working on the third book in what she calls her "computational trilogy," under the working title Intimate Machines. At her home in Boston, she spoke with Coutu about the psychological dynamics between people and technology in an age when technology is increasingly redefining what it means to be human.
· · · ·
Coutu: You're at the frontier of research being done on computers and their effects on society. What has changed in the past few decades?
Turkle: To be in computing in 1980, you had to be a computer scientist. But if you're an architect now, you're in computing. Physicians are in computing. Businesspeople are certainly in computing. In a way, we're all in computing; that's just inevitable. And this means that the power of the computer, with its gifts of simulation and visualization, to change our habits of thought extends across the culture.
My most recent work reflects that transformation. I have turned my attention from computer scientists to builders, designers, physicians, executives, and to people, generally, in their everyday lives. Computer software changes how architects think about buildings, surgeons about bodies, and CEOs about businesses. It also changes how teachers think about teaching and how their students think about learning. In all of these cases, the challenge is to deeply understand the personal effects of the technology in order to make it better serve our human purposes.
A good example of such a challenge is the way we use PowerPoint presentation software, which was originally designed for business applications but which has become one of the most popular pieces of educational software. In my own observations of PowerPoint in the classroom, I'm left with many positive impressions. Just as it does in business settings, it helps some students organize their thoughts more effectively and serves as an excellent note-taking device. But as a thinking technology for elementary school children, it has limitations. It doesn't encourage students to begin a conversation; rather, it encourages them to make points. It is designed to confer authority on the presenter, but giving a third or a fourth grader that sense of presumed authority is often counterproductive. The PowerPoint aesthetic of bullet points does not easily encourage the give-and-take of ideas, some of them messy and unformed. The opportunity here is to acknowledge that PowerPoint, like so many other computational technologies, is not just a tool but an evocative object that affects our habits of mind. We need to meet the challenge of using computers to develop the kinds of mind tools that will support the most appropriate and stimulating conversations possible in elementary and middle schools. But the simple importation of a technology perfectly designed for the sociology of the boardroom does not meet that challenge.
If a technology as simple as PowerPoint can raise such difficult questions, how are people going to cope with the really complex issues waiting for us down the road, questions that go far more to the heart of what we consider our specific rights and responsibilities as human beings? Would we want, for example, to replace a human being with a robot nanny? Indeed, the robot nanny might be more interactive and stimulating than many human beings. Yet the idea of a child bonding with a robot that presents itself as a companion seems chilling.
We are ill prepared for the new psychological world we are creating. We make objects that are emotionally powerful; at the same time, we say things such as "technology is just a tool" that deny the power of our creations both over us as individuals and over our culture.
Q: If we can relate to machines as psychological beings, do we have a moral responsibility to them?
A: Instead of trying to get a "right" answer to the question of our moral responsibility to machines, we need to establish the boundaries at which our machines begin to have those competencies that allow them to tug at our emotions.
In this respect, I found one woman's comment on AIBO, Sony's robot dog, especially striking in terms of what it might augur for the future of person-machine relationships: "[AIBO] is better than a real dog. It won't do dangerous things, and it won't betray you. Also, it won't die suddenly and make you feel very sad." The possibility of engaging emotionally with creatures that will not die, whose loss we will never need to face, presents dramatic questions. The sight of children and the elderly exchanging tenderness with robotic pets brings philosophy down to earth. In the end, the question is not whether children will come to love their toy robots more than their parents, but what loving itself will come to mean.
Q: What sort of relational technologies might a manager turn to?
A: We've already developed machines that can assess a person's emotional state. So for example, a machine could measure a corporate vice president's galvanic skin response, temperature, and degree of pupil dilation precisely and noninvasively. And then it might say, "Mary, you are very tense this morning. It is not good for the organization for you to be doing X right now. Why don't you try Y?" This is the kind of thing that we are going to see in the business world because machines are so good at measuring certain kinds of emotional states. Many people try to hide their emotions from other people, but machines can't be easily fooled by human dissembling.
Q: So could machines take over specific managerial functions? For example, might it be better to be fired by a robot?
A: Well, we need to draw lines between different kinds of functions, and they won't be straight lines. We need to know what business functions can be better served by a machine. There are aspects of training that machines excel at (for example, providing information), but there are aspects of mentoring that are about encouragement and creating a relationship, so you might want to have another person in that role. Again, we learn about ourselves by thinking about where machines seem to fit and where they don't. Most people would not want a machine to notify them of a death; there is a universal sense that such a moment is a sacred space that needs to be shared with another person who understands its meaning. Similarly, some people would argue that having a machine fire someone would show lack of respect. But others would argue that it might let the worker who is being fired save face.
Related to that, it's interesting to remember that in the mid-1960s, computer scientist Joseph Weizenbaum wrote the ELIZA program, which was "taught" to speak English and "make conversation" by playing the role of a therapist. The computer's technique was mainly to mirror what its clients said to it. ELIZA was not a sophisticated program, but people's experiences with it foreshadowed something important. Although computer programs today are no more able to understand or empathize with human problems than they were forty years ago, attitudes toward talking things over with a machine have gotten more and more positive. The idea of the nonjudgmental computer, a confidential "ear" and information resource, seems increasingly appealing. When I've found sympathy for the idea of computer judges, it is usually because people fear that human judges are biased along lines of gender, race, or class. Clearly, it will be a while before people say they prefer to be given job counseling or to be fired by a robot, but it's not a hard stretch of the imagination.