9/30 – Alone Together: Introduction, and Chapters 1, 2, 3, 4, & 5

In this session we discuss the initial chapters of Sherry Turkle’s book. Each student should comment on this entry by 5 pm PDT on Monday, September 30th.


Filed under Class sessions

10 responses to “9/30 – Alone Together: Introduction, and Chapters 1, 2, 3, 4, & 5”

  1. Turkle’s arguments naturally come from the perspective of a psychoanalytically trained psychologist, and as someone who is interested in the philosophy of technology, I thought I would try to recast some of her main points in a more philosophical light. For the sake of concision, I want to focus on two related points:

    1) “I once described the computer as a second self, a mirror of mind. Now the metaphor no longer goes far enough. Our new devices provide space for the emergence of a new state of the self, itself, split between the screen and the physical real, wired into existence through technology” (16).
    2) “AIBO is “kind of alive” because it can function “as if it has a brain and heart”….In the role of selfobject, another person is experienced as part of one’s self, [wherein they are] cast in the role of what one needs…Relational artifacts…clearly present themselves as candidates for the role of selfobject” (54-56).

    I would argue that relational artifacts such as computers and computerized toys (like AIBO) are intended by their designers to be representations of certain aspects of the human experience, such as emotion and sentience. The distinction between reality and representation has been obfuscated in contemporary society. In his seminal treatise “Simulacra and Simulations”, French philosopher and social theorist Jean Baudrillard asserts that post-modern society has become so reliant upon models that all contact with the real world that preceded the map has been lost. In other words, meaning and reality have been replaced entirely by signs and symbols, and human experience has become a simulation of reality. While Baudrillard’s philosophical motivations are primarily political, I believe they apply to the above two quotes.
    In the first quote, Turkle asserts that technology has become more than a mere extension of ourselves; it is instead tightly interwoven into the fabric of our existence. Turkle follows this up by saying that “technology has become like a phantom limb, it is so much a part of them….Gradually, we come to see our online life as life itself” (17). This description mirrors Baudrillard’s prediction that cultural products will produce an alternative “hyperreality” for consumers, such that their lives will become so artificial that any reference to reality will be phrased in artificial, “hyperreal” terms. This is exemplified in the second quote, since AIBO is viewed by some children who play with him as alive enough to have emotion and to be considered “real”. However, this notion of “real” is phrased in entirely artificial terms, because the children have to rely on the phrase “as if” in order to describe AIBO’s perceived sentience. Moreover, the selfobject, such as AIBO, is an instantiation of the “hyperreal” in that it is an object of reality (namely, a conglomeration of electronic parts) whose purpose is only to artificially extend another object (the experience of its human owners). But a relationship between a person and a selfobject is nothing more than a relationship involving one person – inevitably empty.

  2. ato1120

    Whenever I am reading a text, I am very conscious of the author and the context in which they are writing. I am happiest when I can forget about the author and just read the argument, but I am most engaged when my mind cannot be derailed from the author’s methods. In Alone Together, as Turkle explored how sociable robots affect children, I could not stop thinking about the children whose lives she entered and rapidly exited. How can we know what long-term effects trouble these children after weeks of caring for and nurturing their robots, only to have them removed from their lives forever?

    She outlines fascinating trends that emerged as increasingly complex sociable robots entered the lives of Americans. She notes that even amongst educated adults, the simple distinction between biological and mechanical is not enough to stop the owner of a sociable robot from caring for it on an emotional level or from facing ethical dilemmas when one is mistreated.

    My concern, again, is for the children involved in these experiments. Turkle found some truly fascinating results. As children navigate the difference between something like Furbies™ and biological creatures, she notes, “if you focus on a Furby’s mechanical side, you can enjoy some of the pleasures of companionship without the risks of attachment to a pet or a person” (44). But we know, even without her brief and troubling case studies, that many children did risk strong attachment to these sociable robots. Consider Callie, the child of two overworked and unavailable parents. She saw My Real Baby as “appreciating her motherly love” (77) and was able to pour her love and care into the robot. She comforted and reassured the robot while going through a non-trivial amount of grief at her own loss. Turkle uses Callie as a case study, noting, “at ten, ministering to her robots, Callie reminds us of our vulnerability to them” (79), but that vulnerability is driven by Turkle’s work! Not all children bounce back from the severance of that kind of connection. Although I am impressed by and in awe of Turkle’s findings, I am disturbed that we only take note of these distress signals and do not care to follow up and take more care in these studies. I cannot offer a way to explore these relationships without causing some damage, but I propose that looking for one deserves more time and attention in this text.

  3. This text has been fascinating, particularly because I often think about the way that technology influences our presence not just in human-human relationships but also in our lives in general. I have two broad thoughts about this week’s excerpt: one about how well Turkle has actually justified her claim that the current direction of our “robotic moment” is harmful, and one about how our changing emotional relationship with technology might actually change our rational relationship with it as well.

    On Turkle’s explanation of emotional harm: Progressing through this excerpt, it appears that Turkle is postulating, at least to some extent, that authenticity and “genuine” relationships are both good for humans and the best style of relationship for our psychological development. Turkle is a psychoanalytically trained psychologist, this book has received wide acclaim from sources that include many psychological researchers, and Turkle spent quite a bit of time writing it and illustrating particular emotional states of mind with examples (so far mostly involving children), so I think it’s reasonable to expect that there is a good foundation for this. However, it seems to me that even if it’s granted that technology can cause us to lose some amount of this authenticity, Turkle has not really explored the magnitude of these harms. Perhaps our increasing need to control the tempo of our relationships and our willingness to “fill in the blanks” as we treat robots as alive-type creatures really does make us feel more isolated. But does it affect us all in the same way, and to the same degree? We see examples like Miriam with her Paro (9) and the Furby children in Chapter 2 that suggest there is a certain positive emotional connection that can be drawn between human and machine, even if that connection isn’t actually bidirectional. These examples are certainly not the only ones, and they offer tangible evidence that there can be something personally meaningful in these types of relationships. Moving forward, I would love to see Turkle talk more about how the emotional costs of technology stack up against the individual benefits that different types of people get from it. My intuition is that there are many people out there who, for a range of reasons (their home environments, their neurological predispositions, etc.), could seriously struggle to interact with people in the “classical” ways, will not receive treatment or help for it and arguably don’t need it, and could benefit a lot from this type of technology while suffering relatively little compared to where they would be without it.

    On our changing rational relationships with technology: Turkle does a wonderful job spelling out the chronology of how children change their understanding of aliveness when confronted with new machines in Chapters 2 and 3. In particular, the way that children adjust their conception of aliveness when performing surgery on a Furby in Chapter 2 – some believing that the robot skeleton underneath the fur is “the goblin” and the skin is the Furby’s “ghost” (43) – shows how the rational frameworks they use to determine aliveness can change as their emotional relationship with a technology grows stronger. Of course, these are children whose rationality is still in its formative stages, so this example probably wouldn’t carry over as strongly to adults. Yet looking in the Introduction at how adults come to see their lives as inseparable from being connected to their phones, this effect seems to carry over at least a little. If our emotional attachments can cause us to develop new ways of thinking about aliveness, then perhaps our current conception of aliveness is limited because most people haven’t seriously introspected about the aliveness of social robots. Indeed, we might have very little idea of how we’ll think of aliveness, and how much it will matter to us, in the future.

  4. When people are asked to describe whether the robots used in the studies are alive, a common theme arises: the robots are described as both creature and machine; alive and nonliving; real and fake. The sense of aliveness comes from many areas: displaying emotions, a sense of love, programmed intelligence, awareness, personality, etc. However, Turkle expresses her discontent that actual “aliveness seemed to have no intrinsic value” among the children she observed (4). She resists referring to robots as creatures, always using this term in quotes. She rejects the idea of a machine as a creature. With this distinction, she idealizes the potential of human-to-human interaction and downplays the unpredictability of human-to-robot interaction.

    Turkle describes companionship with robots as “risk free.” This view paints robots as free from error and always acting in accordance with how the human wishes. It’s important to point out that robots are mechanical, and therefore break. Robots are programmed by humans, who make errors, and therefore will have bugs in their software. Turkle even explores how people react to robots breaking down. Children tried to rationalize malfunctioning behavior, but sometimes they turned these malfunctions on themselves. In the observations of Cog, when it was not fully functional, children became very self-conscious that the robot didn’t like them (97). This fear is far from the “risk free” promise of robots. Robots can lose their personalities; they can break; they can ignore you; they can die. Robots have complexities similar to humans. They age. They aren’t one-dimensional.

    Another issue with Turkle’s idealization of human-to-human interaction is that she focuses on what people should do and how those interactions should be. Turkle downplays the relationships people can have with machines because they carry no risk, whereas a relationship with a person “makes us subject to rejection” and with that “opens us to deeply knowing another” (66). People often have to open themselves up to rejection with one another. It’s not as simple as every human interaction making a person vulnerable. The person has to open themselves up to vulnerability first. It’s not uncommon to have superficial interactions with other people. Even Turkle notes that robots don’t fool us into thinking we are communicating with them; we fool ourselves (20). However, that’s not unique to robots. People can fool themselves into thinking they belong and are friends with actual people, when in reality those relationships can be superficial. People fool themselves into thinking they are loved or liked as well.

  5. hramesh2013

    Though Sherry Turkle does, overall, a fine job explaining how we use technology to control how much intimacy we want, she focuses too much on robots. In the introduction of her book, Alone Together, she says teenagers are “drawn to the comfort of connection without the demands of intimacy” because “these young people have grown up with social robots” (Turkle 10). While the former claim may be true, it seems problematic to imply that everyone grew up playing with these toys. While I, someone who grew up in a major city in the United States, have definitely heard of Furby and Tamagotchi, these robotic toys did not seem to be as pervasive as Turkle makes them out to be. At least a decade ago, most children still played tag, hide-and-go-seek, and Monopoly together, or Pokemon on their Gameboy when alone. If they happened to own robotic toys, those were only options among many other non-robotic toys. Turkle does explicitly state that her book is not about robots but rather about “how we are changed as technology offers us substitutes for connecting with each other face-to-face” (Turkle 11). I am hoping, as I go on to read the rest of the book, that the robot is only a vehicle for her point about how we try to establish intimacy with technological devices.
    That being said, her findings from her experiments involving robots and children are fascinating and do give insight into what it would be like if robots intruded into our lives. I was especially taken with her discussion of AIBO, where she describes the conflicted feelings her younger test subjects had when they roughly handled their AIBOs. Turkle talks about Henry, who makes two contradictory claims: one being that his robotic dog doesn’t have feelings and the other being that the dog prefers him to his friends. This encounter exemplifies the ambivalent state of mind many of us will have one day, when we begin to incorporate robots into our lives in a major way and build far more advanced intelligent machines.

  6. maggiesko

    Sherry Turkle refers to the “robotic moment” as our emotional and philosophical readiness to embrace relationships with sociable robots as a natural part of life; to accept them as partners in play, love, and hardship, and even as substitutes for fellow humans. But this convenience of having machines take care of us comes with a tradeoff that might be difficult to accept: that we ourselves can be replaced as friends, caregivers, and lovers in other people’s lives.
    The children Turkle interviews are generally accepting and surprisingly pragmatic when considering the idea of having a robot companion. Most of them see in a robot “someone” who can prepare better meals, react reliably in emergencies, and be “more present” and engaged than preoccupied babysitters. They see the potential for a risk-free relationship, deprived of disappointment and hardship, and embrace the idea of having a predictable companion: “Programming means that robots can be trusted…It is easier to trust a robot than a person” (p. 71).
    But the children’s ambivalence about sociable robots becomes obvious in their thoughts on the possibility of robots spending time with their grandparents. From friendly and predictable companions, machines suddenly turn into a threat and into competition for the love and affection of the people dear to them. The idea of her grandmother spending time with My Real Baby makes Chelsea uneasy and jealous, to the point where she perceives the robot as potentially superior to her: “I don’t like it that I can be replaced by a robot, but I see how I could be…It is better that grandma be lonely than forget us because she is playing with her robot” (p. 75). In general, the concept that one’s place in life can be filled by another person can be quite painful. The possible reality of being substituted by a machine can be even more shocking and troubling, especially to a child. At this point in the book it would have been very interesting to read parents’ opinions on the possibility of having a robot take on some of their parental duties. We see that Chelsea’s mother is taken by the idea of bringing a My Real Baby robot to her own mother in a nursing home, but it is interesting to wonder whether she would have provided her daughter with a robotic babysitter or companion, had such a possibility existed.

  7. Primarily in the Introduction, Turkle makes an interesting point that our connections to machines are not attributable solely to the sophistication of the machines themselves, but also to the human mind’s desire and effort to bridge the gap. As demonstrated by the examples in the first five chapters, none of the technologies were ultimately that sophisticated; however, their anthropomorphism, combined with their greater sophistication relative to more archaic forms of toy, made them more human to children.

    While Turkle focuses on this phenomenon and evaluates a child’s perception of the objects, she does not empirically compare the relationships between children and inanimate objects versus children and robotic toys versus children and other children.

    Also, in times of play, children are likely to attribute more human-like attributes and dispositions to the object of their play than they actually comprehend the object merits. This is simply the nature of imagination and child’s play.

    Turkle errs on the side of characterizing attachment to robotic toys as scarily human and nearly indistinguishable (from a child’s perspective) from actual human attachment. However, a concern with this argument may be that the limited vocabulary of children, coupled with a desire to emulate their peers and elders, convinces them to use vernacular that is more human to describe robotic interactions.

    When coupled with Turkle’s theory of bridging the gap, how much of a child’s interaction with their robotic toys can be attributed to this phenomenon?

  8. Turkle is unfairly skeptical of the potential for emotionally intelligent inanimate objects, going so far as to claim that such robots may be socially pernicious. While I accept her qualms about the possibility that robots could negatively impact our relationships with others, I question the drastic conclusions that stem from her skepticism. For example, Turkle claims that machines will continually underperform in light of our expectations of true friendship and connection; consequently, she is fearful of such connections demeaning what is currently thought of as true friendship (101).

    This claim does not sit well with Turkle’s earlier observation that “the audience gasped” (53) at the first display of AIBO’s ability to feign shame, hanging its head low. This astonishment is representative of a fundamental disconnect in Turkle’s earlier comment. Perhaps initially, at the first exploration of artificial intelligence as a field, we expected true friendship or true intelligence from pieces of machinery. But it would be fallacious to assume that our expectations have not empirically adjusted in the past few decades. Today, Siri – a robot without any physical presence and with only the ability to fetch a few details of information through voice commands – is viewed with awe the first time someone speaks to her. We do not expect true friendship, because we are delighted the second she retorts with a quip, rather than the expected error screen or message.

    Turkle’s other claim – that the connections and relationships we forge with machines may change those we currently have – holds some validity. However, I contend that her insinuation that such change necessarily equates to degradation is invalid. Note that I am not arguing about the case of replaced interaction (where one spends time interacting with a robot that would otherwise have been spent forging or building on pre-existing relationships). Rather, I believe that if relationships are maintained with robots in addition to those with other human beings, in some balanced manner, the outcomes for humans will almost always be positive. Having a permanent, unassuming companion could serve as a solution to social anxiety or other emotional issues in children and could be a social asset to the elderly who may be suffering from loneliness, particularly after the passing of a loved one. Both of these cases (and many more) help humans, and both are dependent on users being able to foster a strong emotional connection with their mechanical counterparts.

    I do believe some of the alarm Turkle voices may stem from her observation of significantly excessive interaction with robots, to the point that it may have replaced time spent interacting with family or friends. I would caution that such an observation may be a bias stemming from the research itself, much of which involved conversations with children who were given a new toy to play with for a limited period of time. In such a case, could we expect them not to play with it excessively?

  9. NOTE TO STUDENTS IN SYMSYS 201: Scores need to be sent to me only for student comments posted in response to the readings. This comment and any others about additional topics do not need to be scored (unless you want to, of course. 🙂 ).

    At the end of class last night I mentioned fmr. Lt. Col. Dave Grossman’s writing in On Killing and elsewhere about what are called “firing ratios” — the percentage of troops who fire their weapons in a given combat scenario. The figures I gave (10% during WWII and 85% today) I got from Prof. Michael Nagler in a course he taught at UC Berkeley (Introduction to Nonviolence, PACS 164A) which I watched online. I don’t know the exact video or location of this reference, but it was part of the 2006 course that is on Youtube (starting with http://www.youtube.com/watch?v=H4F8kJchX4I).

    I decided to look into this further. It turns out that Grossman’s original source was S.L.A. Marshall’s 1968 book Men Against Fire: The Problem of Battle Command in Future War. In a review of Grossman’s book for the Canadian Military Journal, Robert Engen writes: “However, the claims that Marshall made, supposedly based upon his interview data, proved to be controversial. He claimed that only 15 to 25 percent of even the best-trained soldiers would ever fire their weapons in combat,” and that “the ratio had risen to 55 percent of soldiers firing their weapons by the Korean War, and was over 90 percent in Vietnam,” but, according to Engen, there are “no surviving notes or documentation that would substantiate his claims, and no corroborating evidence from Marshall’s companions, there is only Marshall’s word that his claims regarding the ratio of fire were supported by the empirical evidence of his interviews” (http://www.journal.dnd.ca/vo9/no2/16-engen-eng.asp). Grossman wrote a response in the same journal, noting that he had previously written about the controversy surrounding Marshall’s claims and arguing that there is abundant other evidence to support the truth of both the “resistance to killing” phenomenon and the effectiveness of conditioning in dramatically reducing it (http://www.journal.forces.gc.ca/vo9/18-grossman-eng.asp).

    Grossman has become a crusader against violent video games and other forms of entertainment (see e.g. http://www.killology.com/tricitynews.htm) and has drawn much criticism as a result (see e.g. http://www.freerepublic.com/focus/news/2975074/posts) [note that neither of the previous two links are from academic, peer-reviewed sources].

    For our purposes in this class, a key issue is whether information/communication technology plays a dramatic role in training soldiers to kill. Grossman believes it does, although the conditioning training he says was used to increase firing ratios among U.S. troops in the Korean and Vietnam Wars could not have involved video games. I would need to learn much more about the details of research in this area to have a stronger opinion about the role of technology in such training. Although I mentioned Grossman in class, I am also skeptical of attempts to ban violent video games.

  10. I found this first section of the book to be surprisingly forthcoming and informative. Turkle wastes no time in presenting factually supported hypotheses. One such argument is that the attachment we feel toward robots, the catalyst for the “robotic moment”, is a projection of our own self.
    This connection is made from the outset of the book, when Turkle describes elderly patients and their interactions with Paro the robot seal. “In attempting to provide the comfort she believes it needs, she comforts herself” (9) foreshadows how the author will analyze the causes and effects of human-robot interactions. When first reading this, I couldn’t help but compare the roots underlying human-to-robot interaction with those of human-to-human interaction. In the case of Paro, the elderly woman, Miriam, projects a need onto the seal stemming from her own discomfort. It seems the empathy serving as a basis for this interaction is also present in purely human interactions. It is our mechanism for understanding and communicating in ways that language cannot describe. It is past experiences and learned truths melding individualized realities into a collective understanding. While the robotic moment differs from the human moment in its number of active conscious connections, the human mechanism used to create both remains the same.
    Turkle reinforces this point throughout the opening chapters, positing that our tools of interaction are innate. The audience is informed that “children’s attachments speak not simply to what the robots offer but to what children are missing” (87). Revealing another aspect of our projected feeling, Turkle explains that children’s use of empathy is a mutually beneficial instinct designed for early survival. Whether it is with a human or a robot, the idea for interaction with either is first created in response to a need within an individual. The engagements of human-to-human and human-to-robot differ only in how they are received, and in whether the second party is able to reciprocate “feeling.” I argue that in both cases, the cause for interaction is the same: the empathy of an individual.
