11/14/2017 – Life 3.0 (continued)

This week we continue reading and discussing Max Tegmark’s book Life 3.0: Being Human in the Age of Artificial Intelligence, chapters 4-6. Students in the class: please post your blog comment on this entry by Tuesday, November 14, at 6pm.



Filed under Class sessions

16 responses to “11/14/2017 – Life 3.0 (continued)”

  1. I found chapter 4’s discussion of the ‘intelligence explosion’ fascinating, but also somewhat unclear in its conclusion. It is certainly possible that a ‘fast takeoff’ of AI happens, and that such a takeoff is accompanied by a unipolar result (where Prometheus or equivalent takes over the world). The problem here, for me, is what exactly we should do about it. While thinking about the potential rise of AI in the abstract – or the energy- and data-related questions that arise therefrom – is interesting, thus far the book has (perhaps deliberately) stepped away from specific conclusions about how we might prepare for any of the consequences of AI.

    This ambivalence stems, in my opinion, mostly from what UK central banker Mervyn King termed “radical uncertainty”: originally a financial term, but applicable here too, describing situations where we are oblivious not only to the probabilities we ought to assign to various outcomes, but also to the set of possible outcomes itself. The sheer size and scope of the options in chapter 5 – or the dozens of ways in which Prometheus might be able to hack its way out of an enclosed system – suggest that creating an exhaustive list of outcomes is a pipe dream.

    Worse still, there seems to me to be no generalisable prescription. In finance, for example, general uncertainty about risk can be mitigated across the board by lowering leverage ratios; in AI, restricting killer robots *might* curtail specific advances in dangerous fields, but it also means we accumulate no academic knowledge about sensible restraining mechanisms for AI soldiers, which in turn means that if a general AI *does* arise, we have less expertise for curtailing its worst effects in those fields. Before we can say much about AI, we need to see what it can do.

  2. Meg Verity Elli Saunders

    In Chapter 5, Tegmark explores several alternative possibilities regarding the creation of Life 3.0, listed as AI aftermath scenarios (162). Each has its own appeal (some more than others), but not one is free of significant potential drawbacks and challenges, so no possibility appears to be a complete utopia. Indeed, some of them sound very much like dystopias, such as the “Zookeeper” scenario, where an AI keeps a small number of humans around to function much like nearly extinct animals in a zoo. Tegmark concludes that we need to “deepen this conversation about our future goals, so that we know in which direction to steer…let’s not squander [the future] by drifting like a rudderless ship” (201). The need to plan certainly seems reasonable. However, there is a serious difficulty we might encounter whilst attempting to plan the future of AI, which is that our current understanding of AI is very limited. This is made clear by Tegmark’s suggestions as to what could go wrong with the hypothetical AI example of Prometheus. At one point he says that “a confined, super-intelligent machine may well use its intellectual superpowers to outwit its human jailers by some method that they (or we) can’t currently imagine” (140). Tegmark alludes again to the idea that, due to its highly superior intelligence, an AI could overpower us in ways that we cannot comprehend when he talks about “Conqueror” AIs, saying that one such AI could eliminate us “probably by a method that we wouldn’t even understand” (185). Given our very limited understanding of what an AI might be able to do, I am skeptical of the notion that we will be able to anticipate what an AI we create might do, and use it to engineer our future.

  3. tamara2205

    In Chapter 5, Tegmark delineates a number of tentative AI aftermath scenarios, spanning an incredibly diverse spectrum of possibilities. In doing so, he does a good job of remaining unbiased towards any particular outcome and of basing his hypotheses about the potential powers of AI on diverse AI research. However, the scenarios involve not only hypotheses about how advanced the technology itself might become, but also assumptions about what people will think of the AI and how they will interact with it and with each other. While I appreciate Tegmark’s well-informed consideration of different forms of the technology (e.g. cyborgs, uploads, or software without “a permanent physical form” on page 164), I believe his views on people’s attitudes towards the AI in each scenario are largely speculative. For example, Tegmark states that people would be “quite happy with their lives” (165) in the Libertarian Utopia, “share their good ideas” (173) in the Egalitarian Utopia, and view the AI in the Benevolent Dictator scenario “as a good thing” (169). These hypotheses about the social and psychological implications of AI seem largely unsupported by scientific research and quite subjective — people might not actually choose to share good ideas or feel happy, and may react to the scenarios quite differently from how Tegmark anticipates. However, I believe Tegmark’s need to fill in the social aspects of the scenarios with subjective predictions speaks less to a personal failing than to the general lack of research on the societal implications of AI. Therefore, I believe social science researchers should be as encouraged as traditional AI researchers to study the potential impacts of this technology on both individuals and societal structure as a whole, and to do so before the power of AI overrides our wisdom about how to use it.

  4. In Chapter 5, Tegmark runs through a handful of scenarios of reality after the creation of superintelligent AI. I became particularly skeptical of the “Descendants” scenario (pp. 188-9). This imagined future describes a transition from humans to AI ruling the world; the primary difference between this scenario and the “Conquerors” scenario is that instead of a hostile takeover by AI, the AI provides humans with a graceful exit.

    What struck me as paradoxical was the pairing of a one-child policy with the general sentiment of being the “most fortunate generation ever” (188). China’s infamous one-child policy — called “the Great Wall of family planning” — has shown that instituting such a scheme causes immense psychological and emotional distress to both parents and children as a byproduct. Studies have found that citizens who grew up under China’s one-child policy are “less trusting, less trustworthy, less competitive, more risk-averse, more pessimistic and less conscientious” than their counterparts (Cameron & Erkal 2013).

    Researchers posit this is because China’s one-child policy caused parents to micromanage and obsess over their only child, creating a generation of what people call “China’s little emperors” (Arnold 2013). Others suggest that because of the policy, parents were less likely to teach their kids to be imaginative, trusting and unselfish. Either way, China serves as a case study suggesting that the one-child policy outlined in the “Descendants” scenario would be vigorously opposed, as it causes vast psychological strife and despair.

    Arnold (2013): https://goo.gl/SQH7Lc
    Cameron & Erkal (2013): https://goo.gl/c8a4UR

  5. rhwang2

    In chapter 6, Tegmark explores the interesting possibility of halting major AI adoption before the scenarios he explores earlier in the book can become a reality. It’s an interesting question: is it realistic to think that we can really alter the role of AI in society? Regarding the possibility of halting AI usage, Tegmark seems to agree with Ray Kurzweil and Eric Drexler that the only remotely realistic solution is a global totalitarian state. He argues that “the reason is simple economics: if some but not all relinquish a transformative technology, then the nations or groups that defect will gradually gain enough wealth and power to take over” (Kindle location 3465). To illustrate this point, he gives the example of how the Chinese lost the First Opium War because they had not developed firearms as aggressively as the British. Although I find this reasoning convincing, I find the prospect of creating a global totalitarian state in the current international system highly unrealistic. Pursuing this goal would require the unification of many countries that maintain nuclear arsenals and wish to remain sovereign.
    Tegmark’s point on economics is intriguing, though, given that economics could very likely play an influential role in determining how our relationship with AI plays out. In particular, if economic incentives are always in place, could we really alter the rules so that our path with AI deviates from the path those incentives encourage? For example, one could argue that nuclear proliferation was inevitable given the nature of the incentives in place.

  6. In Chapter 5, Tegmark discusses scenarios for superhuman AI aftermath. Scenarios differ in controlling agent, degree of control, human safety, and human happiness, as seen in Table 5.2 (163). Interestingly, on http://www.ageofai.org, where readers can vote on preferred scenarios, the Egalitarian Utopia leads by a significant margin over the runner-up, the Libertarian Utopia, which is in turn substantially more popular than all other scenarios. Both of these “Utopias” involve humans and technology “coexisting peacefully”; however, the level of technology that exists and the provisions that allow for coexistence differ in each: in the Egalitarian Utopia, superintelligence creation is repressed, but humans, cyborgs, and uploads coexist due to “property abolition and guaranteed income”, whereas in the Libertarian Utopia, superintelligence, along with humans, cyborgs, and uploads, coexist “thanks to property rights” (162).

    What is thought-provoking here is that an Egalitarian Utopia with superintelligence is not an available option; in other words, Tegmark implies that the only way superintelligence can coexist with humans is by upholding the property rights of each species, limiting the “recursive self-improvement” and “intelligence explosion” that might lead the superintelligence to take authoritarian control of society (176). Given the increasingly popular policy of UBI based on recent polls* and the significant public affinity for Egalitarian-Utopia policies on ageofai.org, it seems that incorporating technological constraints and regulations to preemptively limit a superhuman intelligence explosion should be a consideration for policy-makers. However, most plausible constraints, such as limits on information transfer to the superhuman intelligence or caps on AI-related research, seem staunchly anti-egalitarian. This leaves me inclined to believe that Tegmark’s sans-AI version of the Egalitarian Utopia is the only plausible one, which leads me to question whether future applications of egalitarian policies should be reconsidered because of this flaw and its seemingly crucial prohibition on AI-technology development.


    * In “a census-representative survey of 11,021 people across the 28 EU countries completed in March 2017 by Dalia Research”, 68% of the sample voted for UBI, and 31% wanted UBI to be in place as soon as possible. See:
    Holmes, Anisa. “31% of Europeans Want Basic Income as Soon as Possible.” Dalia Research, 16 Oct. 2017.

    “The Future of AI: What Do You Think?” Ageofai.org, Future of Life Institute.

  7. Among the possibilities for our future relationship with AI, Max Tegmark considers the prospect that we might create a general artificial intelligence whose purpose is to keep human lives meaningful by acting in ways that uphold human self-determination. Such a “protector god,” as Tegmark describes it, would maximize “human happiness only through interventions that preserve our feeling of being in control of our own destiny, and hiding well enough that many humans even doubt its existence” (177). As opposed to an AI that satisfies every hedonistic whim that might entice us, this protector god would organize society in a way that surreptitiously allows human progress to seemingly occur through human means.
    Concerning the extent of such an AI’s influence, I would imagine that it would still allow humans to use computers as tools to further our knowledge about the universe. However, it seems clear to me that scientific progress would eventually run into problems that would be insurmountable without the aid of superintelligence in some form or another. How would such a protector god cope with this dilemma? On one hand, this god must act in a manner that cloaks its existence. At the same time, blocking such progress might result in dissatisfaction with life, at least among those individuals interested in pushing the envelope of our knowledge. I can think of at least three distinct scenarios that might resolve this tension. First — and I believe this scenario to be the most likely — the superintelligence’s scientific knowledge is always expanding faster than humanity’s. Second, the protector god actively sabotages attempts to create superintelligence, like Tegmark’s gatekeeper. Third, humanity pushes forward and forces a confrontation with our protector god, and some new paradigm emerges, possibly resulting in the elimination of humanity as we know it.

  8. Caroline Cin-kay Ho

    In “Aftermath: The Next 10,000 Years,” Tegmark discusses the potential of “mixed zones” filled with cyborgs and uploads in his conception of a libertarian utopia, stating that these upgraded humans would be “liberated… from the need to work and freed… up to enjoy the amazing abundance of cheap machine-produced goods and services” (166). This made me wonder about a similar world in which superintelligent AI has not yet been achieved but humans can significantly alter their own “hardware” with cybernetic modifications and uploads. While the apparent “promise of techno-bliss and life extension” (167) from cybernetic upgrades initially seems highly attractive, as Tegmark himself notes, “If this technology arrives, however, it’s far from clear that it will be available to everybody” (167). In a libertarian society, there is no guarantee that all people would be given access to the same opportunities for modification or life extension. Furthermore, as there would likely be a very high demand for (subjective) immortality, a decrease in the cost of producing the technology required to modify or upload a human might not have an impact on the sticker price, preventing all but the extremely wealthy from obtaining it. One might imagine a nightmare scenario of extremely exacerbated inequality in which wealthy humans are able to modify themselves to become near-superintelligent and ultimately use their higher intelligence to seize, and then justify, total control over society, while unmodified humans are treated as “sub-human” due to their lower intelligence and have little, if any, input in governance. Ultimately, the availability of such extreme modifications might pose a grave threat to democracy and self-determination, leaving a class of god-like cyborgs in power while disenfranchising those not wealthy enough to obtain modifications.

  9. The concept I found most fascinating in this week’s reading of Tegmark’s Life 3.0 was that artificially intelligent entities might act like mercenaries for hire. In the sub-section on totalitarianism in Chapter 4, Tegmark writes, “Another possible twist on the Omega scenario is that, without advance warning, heavily armed federal agents swarm their corporate headquarters and arrest the Omegas for threatening national security, seize their technology and deploy it for government use” (137). This is an incredibly useful thought experiment, because we often imagine AI in the hands of private corporations. But what happens when that powerful, unchecked ability, let’s say, to collect data, falls into the sovereign’s hands? Facebook and other data behemoths are resistant to regulation and antitrust intervention on the grounds that the government should not interfere with freedom of expression. Tegmark’s AI example, however, points to the idea that any large amassing of data in the “safe” realm of the private sphere is dangerous because state actors may swiftly make use of said data for anti-speech purposes. Authoritarianism in industry can bleed into authoritarian government. This is a thesis sociologist Zeynep Tufekci develops across her writing (book presentation forthcoming!). Tegmark pushes the experiment further, proposing that foreign governments could also appropriate that which was once private. Look no further than the recent Russian Facebook ads example for evidence of the feasibility of this claim. Of course Tegmark is speaking in terms of AI and not social media, but the principle sticks: “No matter how noble the CEO’s intentions were, the final decision about how Prometheus is used may not be his to make” (138).

  10. SM201

    In chapters 4-6, Tegmark describes methods by which AI could advance human civilization beyond our current environment – technologically and geographically. Contrary to many domain experts, however, Tegmark claims that “by limiting attention to technology invented by humans, they’ve overestimated the time needed to approach the physical limits of what’s possible” (227).

    This claim raises two points. First, given that we have only a rudimentary understanding of the development and capabilities of superintelligent technology, it is difficult to quantify how much faster we will approach these practical/physical limits. Second, it’s unclear whether these limits (as we currently understand them) will hold as our scientific understanding improves.

    On the first point, Tegmark might argue that he’s not dictating an absolute timeframe for superintelligence reaching the physical limits; rather, he’s simply disputing the scientists’ timeframe based on their failure to consider all possibilities. Despite his penchant for quantification, Tegmark would likely agree that accurate prediction is extremely challenging and is often biased by historical trends in progress (e.g. Moore’s Law) (217).

    On the point about inherently unknown limits, Tegmark generally makes an effort to quantitatively substantiate his claims (e.g. that the currently believed physical limit on computation is 5×10^50 operations per second). However, he himself concedes that “humans need to be humble and acknowledge that there are basic things that we still don’t understand” (231) – and I suspect that he would continue to stand by this argument.
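    For context, the 5×10^50 figure appears to come from the Margolus–Levitin quantum speed limit applied to the mass-energy of one kilogram of matter – the basis of Seth Lloyd’s “ultimate laptop” estimate. A rough back-of-the-envelope reconstruction (my own sketch, not a calculation spelled out in these terms in the comment or the book):

    operations/second ≤ 2E/(πħ) = 2mc²/(πħ) ≈ (2 × 9×10^16 J) / (π × 1.05×10^-34 J·s) ≈ 5×10^50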

    Based on Tegmark’s claims and my own, these questions don’t appear to be obviously answerable. In terms of next steps, perhaps the best approach is simply to iteratively improve our predictions of the limits, and of our progress towards them, as new evidence becomes available over time, “embracing technology and proceeding…with caution, foresight, and careful planning” (246).

  11. Divya Siddarth

    I went to a talk yesterday and found a particular concept really interesting – ‘when we talk about merging with machines, we aren’t merging with machines in the abstract. We end up to some extent merging with the companies or the people that create those machines’ (paraphrased). In these chapters of Life 3.0, Tegmark outlines numerous scenarios of AI ‘breaking out’, as well as several directions in which the intersections of AI and humans (and cyborgs, uploads, etc.) could play out. However, he often frames this with phrases like “what will happen if humanity succeeds in building human-level AGI” (156), and by asking questions like whether “humans exist, be in control, be safe, be happy”, etc. (163). I want to give more thought to the questions of which humans will be building the AGI, and which humans will be in control, or safe, or happy. We’ve talked about encoded bias in our previous classes, and that bias depends on which person, entity, or corporation ends up building the AGI under discussion. Inequality obviously exists in humanity now, and it seems silly to believe it will not propagate unless something is done – especially since, given the way the world is currently structured, it is likely that any successful AGI will be built by a very particular sector of the population.

    Tegmark points out that an AI breaking out can stem from its competence at its goals, even if those goals are to ‘better humanity’, because it may realize it is more efficient to work without human instructions (139-149). However, even assuming this fairly best-case scenario is what happens, the definition of ‘betterment of humanity’ would depend largely on which humans were making those decisions, which could lead to bias that (if the scenarios play into the creation of superintelligence) we would never even see, let alone fix. I think being more cautious about treating humanity as a single bloc will be a crucial part of the ongoing conversations around AI that Tegmark hopes to engender.

  12. Alex Gill

    At the end of chapter 6, Tegmark argues against the attitude that life almost certainly exists outside Earth. Even though he admits he may be wrong in thinking Earth holds all the life in the universe, Tegmark argues that “it’s at the very least a possibility that we can’t currently dismiss, which gives us a moral imperative to play it safe and not drive our civilization extinct” (241). Later, Tegmark comments that “if we don’t keep improving our technology, the question isn’t whether humanity will go extinct but how.” Furthermore, he votes to embrace technology (with caution of course) so that life can continue (246).

    Here, I wish Tegmark had expanded more upon how far humans are morally obligated to pursue technology that could possibly prevent extinction. If an asteroid hits Earth tomorrow, surely we have not made a moral mistake by failing to develop technology that prevents it. However, I can imagine a hypothetical scenario in which we are only 200 years from building that technology. If humanity focuses more on art and enjoyment than on improving technology, and an asteroid hits us in 300 years, have we then made an immoral decision?

    I understand why Tegmark is against humanity collectively working against technological development; technology is likely our only hope for even the slightest chance of averting extinction. However, I am unclear on the moral line between actively driving civilization to extinction and passively failing to prevent it. Developing technology that could in some way save us from a calamity would inevitably require massive amounts of resources. If not enough of humanity is up for the challenge of devoting those resources, I don’t think their “allowance” of extinction is necessarily immoral. So I ask Tegmark: how much, and how quickly, are we morally obligated to develop technology?

  13. Julia Thompson

    In Chapter 5, Tegmark describes a wide variety of power structures involving AI. One important consideration is how these structures would interact with each other. Depending on its individual goals, an AI could attempt to exert control throughout the universe, and there could be multiple superintelligent AIs trying to do this simultaneously. The “overlap regions” (240) become crucial when two or more AIs meet. In a situation such as a libertarian utopia, this would likely not be a problem, as the societies would be based on the ability to “coexist peacefully.” Situations with a benevolent dictator would also likely remain peaceful, as the AI’s control is focused on its own humans, and the same would be true for a protector god. However, a gatekeeper would be a major problem, as its main goal is to prevent the creation of another superintelligence. If it attempts to destroy or harm a superintelligence that has already been created, this could spark a war or similar destruction. Interaction or encounter with another superintelligence could liberate the enslaved god, wreaking havoc on its society. Meanwhile, the zookeeper and descendants scenarios, while they do not include extreme violence toward humans, center on an AI that is completely in control and effectively omnipotent. A threat to its control would likely trigger an attempt to add the threat to the zoo or eliminate it as peacefully as possible. Finally, in the case of conquerors, a violent and controlling AI would likely start wars with another superintelligence, leading to harm or destruction of both. Due to these different responses to encounters between superintelligent AIs, some of the more peaceful and desirable options have significant flaws in terms of security. Even one AI with goals oriented toward power and control can eliminate other kinds of AI that have greater potential to be beneficial or peaceful.

  14. sandipsrinivas1

    In chapter five’s discussion of the possible outcomes of superintelligent AI, Tegmark provides an objective look at the pros and cons of particular resultant societies. However, one thing that I believe deserves more attention, in order to enhance our understanding of each scenario, is the discussion of values: what kinds of societal goals people choose for themselves.
    This is addressed at the beginning of the chapter, in which Tegmark states some of his guiding questions as “Do you want humans or machines to be in control?” and “Do you want a civilization striving toward a greater purpose…or are you OK with future life forms that appear content even if you view their goals as pointlessly banal?” (162).
    Tegmark’s descriptions treat this idea in a little more depth later on. For instance, while examining the downsides of the sector system in the Benevolent Dictator world, Tegmark writes: “Although people can create artificial challenges…everyone knows that there is no true challenge, merely entertainment” (172). Later on, he discusses the Protector God model, and how even though more human agency might be present in this model, it could come at the expense of safety and stability.
    While these points are significant and well-stated, I believe that Tegmark should dedicate more time to this idea. In a sense, I believe that all subsequent discussions of the existence of AI are predicated first on this fundamental question of what people want the purpose of their civilization to be and how much control they want to hold in the process. I look forward to seeing whether or not Tegmark circles back to this idea at a later point.

  15. The “Intelligence Explosion” chapter was quite entertaining. On one level, it seems quite compelling to think about all the possible scenarios that AGI could engender. However, all that tinkering is based on the premise that we make those extrapolations with our own cognition. Hence, every single scenario we try to anticipate might be far from the reality of general intelligence. Cockroaches, I expect, can’t even fathom the inner complexities of our lives, just as we will potentially not fathom superintelligence even if we try to inject our human values – values that even we can’t really agree on. Our moral values, and in general our view of the world, are a highly colored picture of the actual reality out there, and everything we strive toward is still imprisoned within the constraints of our own cognition and the figments of our evolving collective psyche. I would have enjoyed a discussion of how pointless all those predictions could be. Also, assuming that there is no free will, the aggregate of information exchanges is merely another form of life’s expression, or a complexification of it. I find it quite pretentious sometimes to picture the human species as the creator of another “species,” as if we decided the fate of the world with our own agency, when in fact we seem to be merely vehicles of an evolutionary process we can’t predict, understand, or justify.

  16. Question 6 of Tegmark’s seven-question “alignment test” (162), “Do you want life spreading into the cosmos?” stood out to me as odd. Compared to post-AGI economics and humanity’s loss of technological control, specific applications of AGI seemed tangential. Surely, even:
    “Should ‘we’ pick a code of ethics among multiple cultures/times (or perhaps a general/non-relative ethics) for AGI, or should AGI be responsible for reasoning about its own ethics?”
    pokes at more worthwhile issues.

    It was only after reading Tegmark’s answer that I finally understood why he included this question. Tegmark believes that “without technology . . . life in our Universe [is] merely a brief and transient flash . . . in a near eternity of meaninglessness experienced by nobody” (246). So baked into his simple answer – Yes – are a bunch of other opinions about what our goals are and what they should be:
    – we seek to maximize happiness and minimize suffering
    – we push boundaries for fulfillment
    – we should embrace being ambitious as an advanced life form (204)
    – we should hold the ability of intelligent life to experience the universe as precious
    – we should not let the universe’s resources go to waste

    One’s answer reflects how they think of humanity’s place in the cosmos, the value they place on life as a whole, and what ideals they think are ultimately worthwhile. Beneficial AI would have to conform to that answer, both now and far into the future, and anyone who wants to work towards beneficial AI needs to have their answer ready.

    Thinking about the role of superintelligent AGI on the scale of billions of years still seems silly to someone with techno-skeptical leanings like me (though it’s perhaps not so surprising coming from a cosmologist), but I’ve gained a deeper appreciation for what Tegmark wants to accomplish.
