11/2 – Superintelligence chapters 1-5

This week we begin our discussion of Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies, focusing on chapters 1-5.



Filed under Class sessions

15 responses to “11/2 – Superintelligence chapters 1-5”

  1. Bostrom has not put much effort into giving a precise definition of general intelligence. I don’t blame him for this at all, as it allows for a significantly less restricted discussion of the potential paths toward superintelligence (AI, whole-brain emulation, etc.) as well as the three modes of superintelligence that he discusses, among other things. However, the fact that a precise definition of general intelligence is not necessary for discussing these matters certainly does not mean that a precise definition lacks serious implications for them.

    For example, I think that creativity is a relevant factor of general intelligence. Of course a precise definition of creativity is itself pretty important, but not necessary here. If creativity is a necessary factor of general intelligence, then it is obviously a prerequisite for superintelligence. This implies, I think, that creativity factors into the most probable path toward superintelligence. For example, while whole-brain emulation is guaranteed to exhibit at least the level of creativity of the brain it models (with an enhancement proportional to that of every other intelligence attribute), the AI path runs into the roadblock of needing to actually implement creativity within the software.

    Another important question is the relationship between general intelligence and emotional intelligence. I think one could argue that the two might be necessarily intertwined, or at least tightly correlated. But one could also reason that they are completely distinct. Again, a whole-brain emulation would obviously come packaged with emotional superintelligence, while AI might not require it at all. If we desire our superintelligent system to be emotionally intelligent (or specifically not), then we surely need to consider the definition of general intelligence.

  2. acamperi

    In his fourth chapter, “Kinetics of an intelligence explosion,” Bostrom talks about the different speeds at which a superintelligence may develop, distinguishing between slow, moderate, and fast takeoffs (pages 63-64). In a slow takeoff, humans would have ample warning to prepare before a superintelligence appeared; in a moderate one they would have less time; and a fast takeoff would take place in a matter of minutes or hours. I will attempt here to show why I think that, in reality, it can only be a fast takeoff.

    One of the first quotes Bostrom includes is from I. J. Good, who states that once we reach a baseline of general AI, these machines will be able to build more intelligent AI themselves, and this process will result in a sort of “intelligence explosion” characterized by an exponential growth in the processing power of these machines (page 4). Computers already possess narrow AI, wherein they can complete very specific tasks very well, but more importantly very rapidly (depending on the processing power of the specific computer). Therefore, in my opinion, the only thing needed in the mix is some sort of general intelligence, or common sense, and artificial intelligence will take off at faster speeds than we can fathom. Once we reach that baseline of artificial intelligence, what Bostrom calls “human-level machine intelligence” (page 19), such a machine will immediately realize its potential and start to develop new artificial intelligences at an increasingly rapid pace; the process will be totally out of human hands and will happen much faster than we can anticipate.

    Therefore, while we may be able to track the advancement of AI up to the HLMI level, once it reaches that general AI stage I believe it will have a very fast takeoff.
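
    To make this intuition concrete, here is a toy numerical sketch of the relation Bostrom gives in chapter 4 (rate of change in intelligence = optimization power / recalcitrance), under the illustrative assumption that the system’s own intelligence feeds back into optimization power. Every constant here is invented for the sketch, not taken from the book:

    ```python
    # Toy sketch of Bostrom's takeoff relation (ch. 4):
    #   rate of change in intelligence = optimization power / recalcitrance
    # All constants are illustrative assumptions, not figures from the book.

    def simulate_takeoff(human_opt=1.0, recalcitrance=10.0, feedback=0.05,
                         hlmi_level=100.0, dt=0.01, t_max=500.0):
        """Integrate dI/dt = (human_opt + feedback * I) / recalcitrance."""
        intelligence, t, t_hlmi = 1.0, 0.0, None
        while t < t_max:
            # Once the feedback term dominates the human contribution,
            # growth turns exponential: the 'fast takeoff' regime.
            opt_power = human_opt + feedback * intelligence
            intelligence += dt * opt_power / recalcitrance
            t += dt
            if t_hlmi is None and intelligence >= hlmi_level:
                t_hlmi = t  # human-level machine intelligence reached
        return intelligence, t_hlmi

    final, t_hlmi = simulate_takeoff()
    print(f"HLMI reached at t = {t_hlmi:.1f}; intelligence at t_max = {final:.0f}")
    ```

    In this toy model, growth is roughly linear while human engineers dominate and exponential once the system’s own contribution takes over, which is exactly the transition I am pointing at.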

  3. sgussman

    In chapter 3 of Superintelligence, Nick Bostrom discusses different types of superintelligence and argues that digital intelligence has a greater aptitude than organic intelligence. Bostrom divides superintelligence into three distinct categories: speed, collective, and quality. Speed superintelligence utilizes technology to quickly run sequential calculations, collective superintelligence uses a massive parallelization approach, and quality superintelligence is vastly qualitatively smarter than any human. Furthermore, he posits that a superintelligence of one type, say speed, would be capable of creating any of the other types (in this case via a brute-force approach).

    However, I think that he’s overly optimistic about the advantages of silicon intelligence, and I take issue with a speed-based superintelligence. Bostrom’s thought process is that faster computers can run more calculations, increasing their intelligence in proportion to the increase in speed. That, however, begs the question: Bostrom approaches the problem believing that the computer is already intelligent, hence speeding it up by orders of magnitude would make it superintelligent. If one rejects the notion that said computer is currently intelligent, then speeding it up does not necessarily make it superintelligent. My intention is not to reject that computers are capable of having any intelligence (being unsure of my own views on the subject), but to raise the objection that a speed-based superintelligence may not in fact be intelligent. It’s easy to imagine an infinitely fast computer that brute-forces an answer to any problem. After looking under the hood, it feels inaccurate to call this computer intelligent, since it’s simply exploring decision trees in search of an optimal solution.
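
    To see what I mean by “under the hood,” here is a minimal sketch of that kind of brute force: an exhaustive minimax search over a game tree. The toy game and payoff below are hypothetical placeholders; real game-playing programs add heuristics and pruning, but the core loop is just this mechanical exploration:

    ```python
    # Exhaustive minimax: explore every line of play, back up the best value.
    # The toy game and payoff below are hypothetical, purely for illustration.

    def minimax(state, maximizing, children, score):
        kids = children(state)
        if not kids:                 # leaf: no moves left, evaluate the position
            return score(state)
        values = [minimax(k, not maximizing, children, score) for k in kids]
        return max(values) if maximizing else min(values)

    # Toy game: a state is the list of moves so far; play ends after 3 moves.
    children = lambda s: [s + [m] for m in (0, 1)] if len(s) < 3 else []
    score = lambda s: sum(s) - 1.5   # arbitrary payoff for the first player

    print(minimax([], True, children, score))  # optimal value for first player
    ```

    Nothing in this loop looks like understanding; it is enumeration plus comparison, which is why speed alone leaves me unconvinced.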

  4. Thus far, I have been amazed at Bostrom’s demonstrated skill in laying out such a complicated intellectual topic in the most accessible yet informative manner possible. In this comment, I will focus on Chapter 3, where he discusses the different forms in which superintelligence could manifest itself. Towards the end of the chapter (59), Bostrom outlines the various advantages that a form of digital intelligence would have over human intelligence, referencing several points such as speed of computation, duplicability, etc. While every point he makes here is properly substantiated, as a reader I noticed he failed to address any potential weaknesses of digital intelligence. For example, presumably a digital form of intelligence would have to rely on some source of power (i.e., electricity) which could be easily disrupted. Perhaps this is not a valid weakness; maybe there are none. Yet I was left wondering whether there were, and I found Bostrom’s failure to at least address the issue a tiny bit misleading.

    I cannot totally crucify Bostrom for this little misstep. In the end, his goal is to raise awareness about this issue (as exemplified by the fable), and that is best done by addressing certain points and leaving out others. I commend Bostrom for prefacing his book with a very self-aware acknowledgement stating that “…I believe that my book is likely to be seriously wrong and misleading…” (viii). All in all, I believe Bostrom would have built up more ethos by addressing the weaknesses of digital intelligence, even if it somewhat detracts from his overarching goal.

  5. I’m interested in Bostrom’s assertion that human intelligence is really the best baseline to constitute what is essentially a “lower bound” for superintelligence. Specifically, I wonder if the limit of collective intelligence currently attainable by groups of individuals, particularly those that are tight-knit and well-managed, is a more fitting measure. It seems to me that this collective intelligence could dwarf individual intelligence, and thus a machine would have to exceed this limit in order to truly reach an unforeseen domain.

    Bostrom’s greatest argument in favor of using individual intelligence as opposed to the collective is the idea of how a “solitary genius” might be better suited to accomplishing some tasks than “an office building full of [experts]” (58). I see the wisdom in this argument, but I think that its implications may be less wide-ranging than Bostrom suggests. Certain tasks do require the cohesion of thought that only an individual can currently provide, but it seems to me that most of the feats of intelligence we need to be concerned with are much better accomplished by groups. I would imagine, for instance, that only the wisdom of many can produce the first AI smarter than a human brain, so I’m not convinced that this AI’s advantage will guarantee it can produce an even smarter one (at least without the help of intelligent peers).

    Interestingly, fictional depictions of superintelligent AIs often stress the power of collective intelligence as well. In the Terminator series, “Skynet” is overcome by a group of normal humans led by an effective, unifying leader. Obviously this is just a story, but nothing about it seems utterly unreasonable to me, and it certainly raises the question of whether an AI would need to surpass this collective level to truly become a real concern for humanity.

  6. The overarching principle that emerges in the beginning chapters of this book is the idea that superintelligence is quite possible; it is just a matter of how or when it will arise, which we can at the least do our best to predict.

    Bostrom outlines the five possible pathways to superintelligence: AI, whole brain emulation, biological cognition (i.e. enhancement of human intelligence), brain-computer interfaces, and networks/organizations. With multiple different paths to achieve it, there seems to be a higher chance of it happening even if one path doesn’t work out. The feasibility of each pathway must then be analyzed. Bostrom points out in Chapter 4 that recalcitrance along the paths to superintelligence not involving advanced machine intelligence is high. A quicker takeoff “involves machine intelligence; and there we find that recalcitrance at the critical juncture seems low” (66). The first milestone in brain emulation would be an insect brain emulation, which is actually quite plausible in the near future. An example of low recalcitrance in AI is that “if human-level AI is delayed because one key insight long eludes programmers, then when the final breakthrough occurs, the AI might leapfrog from below to radically above human level without even touching the intermediary rungs” (69). The evidence of lower recalcitrance in machine intelligence suggests that brain emulation and AI may be the most plausible pathways.

    The question that follows the feasibility of superintelligence is the implications of its outcomes. Bostrom suggests that it will be powerful, but the question that remains is how this power will manifest itself. Bostrom alludes to the possibility of catastrophic results, as well as acceptable or uncertain outcomes, and I will look to the remaining chapters to gain more clarity on this question.

  7. ccibils

    As far as I was aware, the idea of approximating the date when a superintelligence would be created involved a lot of handwaving. It is most certainly unclear when the next big step in AI will be made that would enable some form of digital intelligence to rise above our own. However, I was extremely happy to see that Bostrom does lay out a sane baseline: full brain simulation. I had not given much thought to full brain simulation, but if indeed we can simulate a brain and accelerate its thinking (speed superintelligence, as he calls it), then we can have a much clearer picture of when this might happen.

    With full brain simulation, we know what we have yet to observe, the technologies that need to be developed are much clearer, and once those exist we would know how much of the brain needs to be simulated. Knowing things like Moore’s law, this becomes much easier to assess (see the rough sketch below). In my opinion, full brain simulations are much scarier than digital intelligence simply because, just like normal brains, they will malfunction, and they will inherit the psychological problems that we, their parent race, have.
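
    As a rough illustration of the Moore’s-law point, the arithmetic is simple once you assume a compute requirement and a doubling time. Both figures in this sketch are assumptions for illustration (published estimates of the compute needed for brain emulation span many orders of magnitude):

    ```python
    # Back-of-the-envelope: years until whole-brain emulation is computationally
    # feasible, given an assumed requirement and a Moore's-law doubling time.
    import math

    wbe_flops_needed = 1e18  # assumed compute for real-time brain emulation
    available_flops = 1e16   # assumed current top-supercomputer throughput
    doubling_years = 1.5     # classic Moore's-law doubling period

    doublings = math.log2(wbe_flops_needed / available_flops)
    years = doublings * doubling_years
    print(f"~{doublings:.1f} doublings needed, i.e. roughly {years:.0f} years")
    ```

    The point is not the specific number but that, unlike pure AI forecasting, each input here is in principle measurable.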

    I am much more scared of a moody and psychologically unstable (imagine bipolar) superintelligence than one with a well-understood reward function. One might argue that we could avoid this by figuring out how it would reproduce, but understanding that would require even more simulation and even more observation of biological mechanisms such as genetics and genetic transcription.

  8. In Chapter 1, Bostrom includes data and a written segment about computers defeating humans in mind games, notably checkers and chess. From a scientific standpoint, these events obviously have significance for the particular algorithmic abilities of machines (even if not for artificial capacities for general intelligence, as Bostrom describes). However, an additional component of these computational victories was how much of a sociological phenomenon they became.

    Deep Blue’s defeat of Garry Kasparov in a six-game set of chess matches captivated the entire world; stories continue to be told of the 1997 rematch between the world champion and the IBM computer (there even exists a 2003 documentary on the lead-up to and coverage of the matches). This put the advance of machine intelligence into the hands of the general public for consumption, and also allowed it to be conveyed in layman’s terms: Kasparov spoke at length about his engagement with the machine after the experience, stating that he “could feel—could smell—a new kind of intelligence across the table” (1).

    There was also a great amount of speculation, especially from Kasparov, that IBM was cheating and allowing a human to help Deep Blue strategize. He became suspicious of “exceedingly human-like moves” Deep Blue was making that, from an algorithmic view, made absolutely no sense; these moves are now attributed to a bug in Deep Blue’s code (2). IBM did not release the data printouts of Deep Blue’s thinking processes from the game, garnering more skepticism.

    From a philosophical standpoint, the event landed, as a New York Times roundup put it, with “the impact of a Greek tragedy” (3).

    (1) http://time.com/3705316/deep-blue-kasparov/
    (2) http://www.wired.com/2012/09/deep-blue-computer-bug/
    (3) http://www.nytimes.com/1997/05/12/nyregion/swift-and-slashing-computer-topples-kasparov.html

  9. Jack Cook

    Bostrom’s discussion of purely machine or software-based AI seems too speculative to be usefully predictive. While I do have a lot of patience for experts using current data to try to predict long-term trends, I questioned the value of a lot of his delving. Why painstakingly try to describe how true AI might arise when not even the most savvy experts can guess at either its arrival date or what its architecture will look like?

    His foray into human-based superintelligence (ch. 2) was most interesting to me, if only because we already live in a world where human enhancements that we, as people born into economically developed nations, consider “primitive” (e.g. eyeglasses, prosthetic limbs, Internet connections) are advanced compared to what the average human in a developing nation has at their disposal. We are already in a situation in which a minority of the global population enjoys the cognition-enhancing benefits of education, diet, and medicine. Bostrom does bring to light the moral qualm that many on our planet do not even have access to basic nutrition, but he doesn’t incorporate the disparate trends we see in enhancement availability into his main argument, because he is more focused on the enormous gap between human intelligence and superintelligence than on the tiny gap between the intelligence of any two humans alive today. He ignores the moral qualms and takes an interesting pragmatic approach: genetic enhancement is bound to become an unavoidable, international tech race for increased economic productivity. To me this likely-seeming conclusion says a lot about how unlikely enforcement, regulation, or ethical restraint of AI will be as long as there is an economic advantage to pursuing it.

  10. Jamison Elizabeth Searles

    In Chapter 2, Bostrom discusses possible paths to superintelligence. In his consideration of biological cognition, he touches on the implications of human genetic manipulation and selection. He lists a variety of gains from selecting among embryos, such as an increase in average IQ, enhanced beauty and attractiveness, support for more stable societies, and a decreased prevalence of disease.

    Though these advantages are great and Bostrom seems convinced of the feasibility of cognitive enhancement, I believe the ethical and moral consequences of such a practice limit the likelihood of global application of these technologies.

    First, as Bostrom notes, the relationship between child and parent would be complicated and might lead to a lessened sense of connection and duty to protect. Similar to the way that employing algorithms exclusively to handle college admissions would damage an admitted student’s sense of attachment to a university, both a parent and child might feel less attached to each other if the child represented an optimized version of the parents’ possible offspring.

    Next, if countries were to incentivize residents to adopt these technologies, they might be able to customize the selection algorithms to produce populations less likely to revolt or question authority, leading to an imbalance of power and possibly a Brave New World type of society.

    Also, when these technologies become available, they will likely be expensive and inaccessible, leading to a widening of the gap between more affluent and less affluent populations. Last, if these technologies produced criminal individuals, what would be the legal implications?

  11. What fascinates me about this book is the idea of superintelligence as something to be wary of. I recall watching the latest Avengers movie (Tony Stark programs a superintelligent machine, which inevitably goes horribly “evil,” and he manages to build another superintelligent machine with a strong sense of human-like morality that helps them defeat the evil one). I laughed during the movie because, given my understanding of AI, we’re quite a ways away from anything similar to superintelligence. Yet Bostrom argues that we’re closer than one might think, and that even the potential of such superintelligence would certainly mean huge changes for us as a species, good or bad.

    Bostrom’s arguments have been quite compelling and eye-opening for me thus far. He at times touches on moral or ethical issues that may arise, such as when writing about the enhancement of humans toward superintelligence by eugenics and other related practices (36 onward). He also remarks on the diversity that would remain intact despite “spell-checking” the genes of an embryo (41), which was interesting and important, but he fails to mention the inherent stratification this sort of thing may lead to, since these technologies would be accessible only to those who are informed and have the resources to attain such enhancement. One can argue that this stratification could have an even larger impact because of the resulting superintelligence; who knows whether superintelligent humans would retain moral/ethical views (and perhaps have enhanced views in this regard) or whether the result would instead be a more sinister, troubling stratification than what we have today.

    A tangential question Bostrom may answer later on in the text is how we might prevent a catastrophic superintelligence, perhaps through learning how to encode a sense of ethics/morality.

  12. Betsy Alegria

    “There is no reason to expect a generic AI to be motivated by love or hate or pride or other such common human sentiments: these complex adaptations would require deliberate expensive effort to recreate in AIs.”

    In the third chapter, we discussed the definition of superintelligence. Speed superintelligence is an intellect that is just like a human mind but faster, collective superintelligence is a system achieving superior performance by aggregating large numbers of smaller intelligences, and quality superintelligence is at least as fast as a human mind and vastly qualitatively smarter. While reading this chapter, I kept thinking back to the statement from the first chapter about how a generic AI should not be expected to be motivated by common human sentiments. When I read this, I got the impression that 1) it would take too much time and energy to recreate these sentiments in AIs, and 2) these common human sentiments are not what makes us humans intelligent. Although I agree that human sentiments alone are not what make us intelligent, I do think they play a role in our decision making and intelligence.

    For example, imagine someone who risks their life to save a loved one because they know that the other person has children and a family to take care of. I wonder if Bostrom would consider that decision, clearly influenced by love, to be intelligent. That thought also raises the notion that there is a difference between an intelligent decision and a correct/right/morally sound decision, and the question of what the repercussions might be of not expecting a generic AI to be motivated by common human sentiments.

  13. In Chapter 3, Bostrom discusses three major forms of superintelligence: speed, collective, and quality. I’m particularly interested in collective superintelligence because it offers an opportunity to create progress in a decentralized way that I think is more conducive to transparency and good governance than speed or quality superintelligence. Bostrom points out that if we had more Newtons and Einsteins, “[new] ideas and technologies would be developed at a furious pace, and global civilization…would constitute a loosely integrated collective superintelligence” (56). He proposes this scenario within a hypothetical where we have a planet, MegaEarth, with “the same level of communication and coordination technologies that we currently have on the real Earth but with a population one million times as large” (56). Disappointingly, he offers no suggestion that we could have the same population with a different level of communication and coordination technologies, a form of organization that is not limited to hypotheticals.

    As Bostrom notes two pages earlier, our planet is governed by a “division of labor: different engineers work on different components of the spacecraft; different staffs operate different restaurants” (54). What he doesn’t acknowledge is that most of the intellectual work that governs our society is done by a very small segment of the population (Einstein and Newton were given opportunities), while the vast majority of people in the world never get the chance to participate in globally important intellectual work. Centralization of authority and intellectual property can be extremely dangerous, as Bostrom discusses in Chapter 5: “Achieving…collaboration on a project that has enormous security implications would be more difficult” (86) than for past projects like the International Space Station. A participatory system of decision-making built on a collective and transparent intelligence may be necessary to ensure that superintelligence is developed in the common interest.

  14. In chapter 3 of Superintelligence, Bostrom discusses different types of superintelligence, which he defines as regular intelligence with marked improvements in either speed, collective communication, or quality. He then discusses how society may transition to take advantage of the collective benefits of the presence of superintelligence.

    Bostrom’s examination of these matters feels premature. A fundamental – perhaps the single most fundamental – point that Bostrom seems to neglect is what the word intelligence means. Understanding what intelligence means – or at least what Bostrom interprets it to mean – is critical. While it is well and good to talk of different methods of amplifying intelligence to create superintelligence, such conversation would appear to require an understanding, or at least a working consensus, of what intelligence actually is.

    Bostrom appears to examine the definition of intelligence in the “State of the Art” section of chapter 1, where he considers manifestations of intelligence in game-playing AIs, face-recognition and minimum-distance algorithms, and neural networks. However, Bostrom seems to sidestep the epistemological nature of intelligence. For me, a true superintelligence is artistic and able to be creative, capable of displaying not just algorithmic capabilities but also emotional intelligence.

    It is very possible I have misinterpreted the intention of the first five chapters of Bostrom’s book, and that its purpose is to discuss the coming of artificial intelligence as we understand it today from the computer science perspective. Yet it would seem that such a comprehensive discussion of concepts around the nature and replicability of intelligence would be well served by a detailed examination of what Bostrom sees intelligence to be.

  15. The introductory chapter of Superintelligence is rather rambling and hard to put together, but gets interesting towards the end. In it, Bostrom examines the history and current uses of Artificial Intelligence and introduces the topic of exploration for this book – when will we reach superintelligence, and what will the outcome of this be?

    I am excited by this opening but also rather suspicious. After all, Bostrom does mention that there is a greater risk of nuclear annihilation than of our being enslaved by computers. I still feel that this is all a rather fanciful topic, and that Bostrom needs to justify his reasoning for this topic of discussion far more in the opening chapter. He mentions that AI is used in everything from hearing aids to Wall Street algorithms, implying that because it is so widespread, it has the power to dominate us. But I feel something is very wrong with the fundamental point he leads to.

    After all, we know that there have been two “AI winters,” when people realized that the era’s methods of symbol manipulation were fruitless. Current AI is based on big data, and is miles away from the ideas surrounding the early days of AI. What is to say that we won’t soon find the next limit of AI based on current methods and enter another long AI winter? Furthermore, the idea of a computer actually seeing itself as a being and preventing humans from blocking its agenda is so fanciful and far from current technology that we surely have much more important things to talk about. Thus I believe that the first chapter needs to do much more to justify why a program that sees itself as a conscious being with an understanding of how the world works is AI-complete.
