12/2 – Student-led Discussions II

In this final session we will be hearing presentations about, and discussing, 4 books:

  1. Heidi Boghosian (2013), Spying on Democracy: Government Surveillance, Corporate Power, and Public Resistance  [led by David Olson]
  2. Jim Blascovich and Jeremy Bailenson (2011/2012), Infinite Reality: The Hidden Blueprint of Our Virtual Lives [led by ericw213]
  3. Ray Kurzweil (2012), How to Create a Mind: The Secret of Human Thought Revealed  [led by ilyagaidarov]
  4. Jaron Lanier (2013), Who Owns the Future?  [led by maggiesko]

Presenters, please review the presentation guidelines for tips on preparing and leading your presentation/discussion. Excerpts should be sent out preferably by Friday, November 22nd, and definitely no later than Monday, November 25th.

Everyone: Read the distributed excerpts. Blog responses may be posted about any or all of the readings other than the one you are presenting.


7 responses to “12/2 – Student-led Discussions II”

  1. In the excerpt from “How to Create a Mind”, Kurzweil provides arguments in support of his view (call it View 1) that

    View 1: “Consciousness is an emergent property of a complex physical system” (6).

    Since most discussions of consciousness tend to make subtle assumptions, I thought I would analyze and criticize some of the assumptions that Kurzweil makes for this view. In fact, consistent with View 1, “an ant has some level of consciousness too….The ant colony, on the other hand, could be considered to have a higher level of consciousness than the individual ant; it is certainly more intelligent than a lone ant. By this reckoning, a computer that is successfully emulating the complexity of a human brain would also have the same emergent consciousness as a human” (6). But here I am confused. While I can intuitively see why an ant colony would be considered more conscious than an individual ant (namely, more parts in a system can potentially give rise to more complex behavior), I do not see any clear set of criteria that Kurzweil proposes by which to deem one thing “more conscious” than another. One potential response to my criticism would be Kurzweil’s reformulation of View 1, call it View 1a.

    View 1a: Consciousness is a physical system “that has ‘qualia’” (6).

    Therefore, it seems that the major criterion for assessing whether one system is more conscious than another is whether it has more qualia. But qualia, as Kurzweil rightfully acknowledges, is elusive to define. For instance, if one tries to define it as “conscious experiences” (6), then someone who can only see black-and-white could form conscious associations as to what the color red may be (e.g. it is the color associated with apples, fire, violence, roses, love, etc.), but that would not enable her to actually experience red (as we do). Nonetheless, the person who sees only in black-and-white still has some experience of red, even though it may differ from that of people with normal vision. Therefore, Kurzweil posits that another possible definition of qualia is “the feeling of an experience” (7). But again, as Kurzweil mentions, this definition is problematic since “feeling”, “consciousness”, and “having an experience” are all synonymous, so we are led to circularity.

    Thus, Kurzweil returns to his main inquiry for this section: “Who or what is consciousness?…I maintain that these questions can never be fully resolved through science. In other words, there are no falsifiable experiments that we can contemplate that would resolve them, not without making philosophical assumptions. If we were building a consciousness detector,…American philosopher Daniel Dennett (born in 1942) would be more flexible on substrate, but might want to determine whether or not the system contained a model of itself and its own performance. That view comes closer to my own, but at its core is still a philosophical assumption” (7). In support of this position, Kurzweil goes on to criticize Hameroff and Penrose’s proposals for scientific theories that link consciousness to some measurable attribute.

    It seems, then, that in Kurzweil’s view there is no precisely defined criterion for measuring whether one system has more consciousness than another. But if this is the case, then what serves as motivation for Views 1 and 1a?

    He writes that “the reality is that these theories are all leaps of faith, and I would add that where consciousness is concerned, the guiding principle is “you gotta have faith”—that is, we each need a leap of faith as to who or what is conscious, and who and what we are as conscious beings” (10). But even though I accept that qualia is hard to define (and perhaps impossible to objectively measure), Kurzweil could at least have provided concrete reasons why his theory of mind should be accepted over others, or why I as a reader should take the “leap of faith” and accept his theory instead of someone else’s.

    Let’s say I do take the “leap of faith that a nonbiological entity that is convincing in its reactions to qualia is actually conscious, then consider what that implies: namely that consciousness is an emergent property of the overall pattern of an entity, and not the substrate it runs on” (11). Then it seems logical to accept the premise (as Kurzweil does) that You 2 is conscious, since the gradual introduction of nonbiological systems into our brains would be no different from the natural replacement of our biological cells. But without accepting the leap of faith, there seems to be no way to justify why You 2 is just as conscious as You.

    As a result, Kurzweil’s argument essentially boils down to my accepting his belief that qualia is the distinguishing factor in deeming one system just as conscious as another, without any satisfactory justification as to why I should make that leap of faith in the first place.

  2. A. To

    In Chapter 5 (“Spying on Children”) of Spying on Democracy, Boghosian highlights the pervasive corporate presence in the lives of today’s children. Kids are constantly bombarded with advertisements in ways that are dangerously manipulative to what is a particularly impressionable audience. She writes about the Children’s Online Privacy Protection Act (COPPA), along with other legislation intended to protect children from manipulation and from sharing their private information (which may later be used against them). But how can we protect children from this kind of onslaught without the ability to keep up with new forms of advertisement? COPPA requires “verifiable parental consent” before collecting and using information provided by children under 13, but most of the time it is laughably easy for a child to lie about their age on a website.

    It is easy to characterize the villain as a looming corporate giant trying to manipulate children into getting their parents to spend money, but Boghosian also reminds us that with the explosion of mobile apps, those who are molding children into consumers at a young age may have no idea what they are doing. “The [mobile app] industry’s growth is fueled largely by small businesses, first time developers, and even high school students without access to either legal counsel or privacy experts” (129). Just this past week I watched a 3-year-old family member playing through what looked like a simple (and certainly not corporate-sponsored) mobile game. As eager as she was to impress me with her skills, when ads popped up she clicked through as fast as she could, scaring me when she nearly made a $99 purchase. I’m certain that the developers of the overly simplistic game have no real investment in whatever it was she almost bought, but the carelessness of that ad, and of the others she probably encounters, is a completely unknown quantity. The research lags behind, and we don’t know what that ubiquity of flash ads will do to children at such a young age. Though she is still too young to read, and though her parents try to severely limit her time borrowing “mommy’s phone,” I’m worried that the brightly lit ads that overtake her visual senses every two minutes may be molding her sponge-like mind into a consumer before her first day of preschool. And as she gets older, how will ads become more sophisticated? We already know that advertisers exercise the ability to tailor advertisements based on a user’s data – their interests, web activity, and online posts. What will that tailoring do to someone who grows up their entire life with ads designed with one particular target in mind?

  3. In the excerpt provided from “Spying on Children,” Boghosian explores the extent to which young children are exposed to corporate targeting and informational surveillance. One of the early statistics that Boghosian highlights is striking: “In a conservative calculation of children’s exposure to television advertising, the Federal Trade Commission estimated that children ages two through eleven watched nearly 26,000 ads in 2004” (123). Keeping in mind that this total is from television alone, that the amount of advertising displayed online is magnitudes greater, and that there is even more advertising now than there was then, children are clearly exposed to an enormous number of advertisements every year. On its own, this advertising is not necessarily insidious, although it likely predisposes children to buy into the commercialism that comes with the corporate capitalism we saw in McChesney’s piece. Whether this seriously harms their well-being – and whether regulating these advertisements more heavily would avoid those harms – is of course debatable.

    Yet Boghosian points out that this targeting is tied to other norms of surveillance that can harm children in more immediate ways. The first is the loss of privacy that comes from knowing someone who is targeted by a corporation in some commercial way. In one example, Boghosian notes that “fourteen groups cited five corporations that used theme websites aimed at the young to engage in commercial exploitation of children by enticing them to play online games or engage in online activities and then encourage them to share their experiences by giving email addresses of their friends” (124-125). In this case, not only do children unsuspectingly have their contact information passed on to corporations, but those children can then themselves be targeted by these corporations with other ad campaigns, exposing them to commercialism that they may not want. The second harm is the loss of privacy from even greater degrees of surveillance. As Boghosian observes of a school district that gave laptops to its students, “in a remarkable display of audacity, school district administrators were remotely activating built-in laptop cameras to watch students’ behavior in the privacy of their own homes” (131). The visceral reaction that many people had when this revelation went public shows that people attach a great degree of value to at least some aspects of their privacy. The question then becomes whether the subtle forms of targeting and surveillance that advertising campaigns employ perpetuate the types of manipulation and privacy violation that we are not okay with as a society.

    I think these observations are fascinating, and in particular I wonder what tangible effects increased exposure to advertising and surveillance in youth has on how much a person accepts commercialism and values their privacy later in life. In light of this, I think it’s unfortunate but expected that “the United States lags behind other industrialized democracies in regulating children’s exposure to corporate persuasion” (124), hinting at McChesney’s observation that many important policy debates on technology and society are either opaque or nonexistent. More discussion and debate could be the key to creating interest in investigating the tangible effects of advertising on children’s development. Whether this discussion will ever happen is up in the air.

  4. hramesh2013

    Ray Kurzweil’s book, How to Create a Mind, provides a good review of some of the currently existing philosophies of consciousness. He brings in everyday concerns to show why the “hard problem” of consciousness, as termed by Chalmers, is important to consider. One such concern is whether fetuses have consciousness and, by extension, whether abortion really is murder. Kurzweil asserts that:
    “We would be well advised not to dismiss the concept [of consciousness] too easily as just a polite debate between philosophers – which, incidentally, dates back two thousand years to the Platonic dialogues. The idea of consciousness underlies our moral system, and our legal system in turn is loosely built on those moral beliefs.”
    Furthermore, he says these moral beliefs ultimately dictate who we judge to be conscious.
    Kurzweil says that pro-life supporters go beyond the “issue of consciousness” and lambast anything that endangers the potential for a fetus to become a conscious person, which can range from straight-up abortion to the morning-after pill (labeled as the abortion pill). Furthermore, he points out that pro-life supporters ultimately want to award the fetus the same “right to life” as someone in a coma, someone who was once conscious but now is not. Lastly, Kurzweil points out the hypocrisy, if pro-life supporters adopt the panprotopsychist approach that recognizes consciousness in all entities, of treating human or human-like lives, such as those of apes, with sanctity while treating insects with utter disgust.
    These blurry questions of who or what is conscious and what makes them conscious are important, Kurzweil emphasizes, because how we answer them shapes our beliefs about morality and legality. The worst thing to do, he implies, would be to reject them as irrelevant. Regardless of what side of the abortion spectrum one is on, one would be more informed about the debate if one were to look at it through the lens of the consciousness problem.

  5. I found that the excerpt from “Who Owns the Future” by Jaron Lanier provided an interesting companion to McChesney’s fear for the future of the Internet. McChesney talked a lot about his fears of capitalism’s influence on the internet, and Lanier voices a similar fear. He mentions the discrimination of a “siren server.” He draws a comparison to Maxwell’s Demon, which was thought to provide unlimited energy but whose separation of cold and hot particles ended up using more energy. He says the internet acts in a similar manner: what we see may be advantageous in the near term, but it might not be in the long term. I think it’s interesting to think about technology in this way. People often say new technology is disruptive technology, and that generally comes off as a positive thing. However, it’s worth taking a moment to think about the long-term effects.

    One part of his excerpt I found interesting was the reference to Amazon and how it is hurting small businesses. Lanier mentions that Amazon would have price triggers that always made books cheapest on Amazon, so if there were a sale going on somewhere, Amazon would beat the price. I don’t think Amazon does that anymore, or maybe there is just nowhere that can compete with Amazon’s prices. However, I think it does create a dangerous space. I’ve heard before that Amazon is building this huge infrastructure, lowering prices, and wiping out competitors. In the meantime, they are slowly becoming a monopoly, and at some point they will have complete pricing power. It’s just a matter of when they decide to flip the switch and raise prices. I think McChesney would share this fear, because he also mentions the rise of monopolies in the internet age. Amazon’s low prices may seem appealing in the short term, but they may be dangerous in the long term.

  6. Both the Infinite Reality and How to Create a Mind excerpts attempt to address a similar question, but through different means. In How to Create a Mind, Kurzweil addresses the “problem of consciousness” – the conundrum of mind versus brain and the innate human sentiment that there is something inherently different between what we experience and what a robot that is inhuman but indiscernibly human-like does not “experience.” Kurzweil attributes this difference to consciousness and postulates, through several thought experiments, that if the difference is indistinguishable, then there is no difference. In Infinite Reality, rather than looking at the mind, the authors investigate the virtual worlds that we spend increasing amounts of time in, as well as virtual worlds that are indistinguishable from – or sometimes even greater than – reality. The difference between the real and the virtual, in this case, is again consciousness.

    Both authors are concerned with the inability to define consciousness (something philosophers have also struggled with for centuries) in a way that distinguishes a ‘false’ perception of consciousness from a more biological consciousness. Both passages raise the question: are the virtual and consciousness really different? And if so, can we or will we ever be able to define them such that we may discern one from the other?

    Kurzweil theorizes that we will never be able to define consciousness in such a way, and that we already take a leap of faith that it exists without being able to iron out a definition. Because of this leap of faith, Kurzweil believes the chasm between consciousness and believably emulated consciousness will close: “my position is that I will accept nonbiological entities that are fully convincing in their emotional reactions to be conscious persons”. Infinite Reality, on the other hand, simply outlines examples (often fictional) where virtual reality and reality itself converge and one is nearly indistinguishable from the other. This is a cautionary tale as opposed to a hypothesis.

    In Infinite Reality, the authors discuss the movie Avatar, in which the character desires their virtual reality so much that to them it becomes real. Similarly, Kurzweil discusses how humans ascribe human-like attributes to robotic entities: “I am seeking to connect with attributes that I can relate to in myself and other people.” Turkle, earlier in the course, was also concerned about this issue. My question is, does our desire to understand consciousness motivate us to attribute consciousness to non-conscious things? Since these other objects (robots, animals, etc.) and realities are similar enough to us, we are motivated to add missing details to stories to create the most coherent narratives. If so, what are the implications of this when, as Kurzweil suggests, AI emulates humans more and more accurately? Finally, is it plausible both that one day AI could emulate humans so precisely, and that we could not discern who or what is or is not conscious?

  7. maggiesko

    In the excerpt from “How to Create a Mind,” Kurzweil presents different theories on the nature of consciousness, among them his own idea that a conscious entity is one that has “qualia”: that it can have conscious experiences and a feeling of experience. What I found interesting is the similarity between Turkle’s definition of AI in Alone Together and Kurzweil’s idea of when a non-biological entity might be perceived as conscious. Turkle notes that “we are coming to a parallel definition of artificial emotion as the art of ‘getting machines to express things that would be considered feelings if expressed by people’” (Turkle, 63). And Kurzweil builds on this notion: “Once machines do succeed in being convincing when they speak of their qualia and conscious experiences, they will indeed constitute conscious persons.” What is common between Turkle and Kurzweil is the importance of the entity displaying emotions in a human-like way. And even though Alone Together is a human-centric book with a different viewpoint from How to Create a Mind, I think that the experiences of Turkle’s subjects when interacting with robots are not that distant from Kurzweil’s definition of qualia. Many children and some of the elderly described by Turkle appeared to perceive the sociable robots they were given as though they experienced emotions. Children would feel pity for their malfunctioning Furbies, explain the mechanical fault as an illness, and take care of the toy; the elderly would try to calm down their “upset” baby robots and try to cheer them up. Perhaps if Turkle’s subjects were introduced to the idea of qualia, they might have answered that the robots they interacted with definitely possessed it.
    And even when we are not directly interacting with robots, as Kurzweil notes, we still accept robots from futuristic fiction as having consciousness. Even though today’s robots are still far from being the sophisticated non-biological entities described in How to Create a Mind, we can almost be persuaded (as is evident from Alone Together) that a machine could be conscious.
