11/7/2017 – Life 3.0 (beginning)

This week we begin reading and discussing Max Tegmark’s book Life 3.0: Being Human in the Age of Artificial Intelligence, Prelude and chapters 1-3. Students in the class: please post your blog comment on this entry by Tuesday, November 7, at 6pm.


18 Comments

Filed under Class sessions

18 responses to “11/7/2017 – Life 3.0 (beginning)”

  1. Two problems that struck me as urgent in our reality of near-term AI were how we would (a) define a machine as a single entity and (b) hold a machine accountable for a crime.

    We consider each human or Life 2.0 a single entity because each human has the same body their whole life. Though this body changes and grows over a human’s lifetime, it fundamentally houses the same, though ever-developing, software that makes each person worthy of their own identity. In this way, we can assign names to people that will last their whole lifetime.

    Tegmark posits Life 3.0 would be able to change both their software and biological hardware (p. 29). This would mean that a self-driving car, for instance, could deconstruct and turn into an army of toasters; the whole would become many. Or, conversely, the self-driving car could reconstruct into a wing of an airplane. In this way, it would be a part that is incorporated into a whole.

    Because machines lack a consistent physical identity that we as humans have in our bodies, I am mystified by the prospect of self-driving cars that hold their own insurance proposed by legal scholar David Vladeck (109). How are we to hold something that is not necessarily a single entity accountable? If the self-driving car were to deconstruct and become the army of toasters, would we punish each of these toasters? To me, it seems these toasters are now fundamentally a different type of entity than the self-driving car that committed the act. What I struggle to see is how we can attribute ownership to amorphous beings. Perhaps the future requires the conception of a new way of thinking about identity and ownership of actions.

  2. In chapter 2 we are told “intelligence […] can’t be measured by a single IQ, only by an ability spectrum across all goals”. I followed the footnote (location 920 on Kindle), which critiques the notion of an “athletic quotient” to judge aptitude across all sports. No doubt, Tegmark is right that IQ does not describe everything; but IQ does, in fact, describe rather a lot about individuals, or at least certainly more than the “very specific goals” that neural nets can accomplish. Rather, high IQ has been observed across a variety of human fields, and is regarded (at least by some people, some of the time) as a “strong predictor of life outcomes” (https://goo.gl/Zr4koq).

    There is an interesting thought experiment to be had: could we construct a hypothetical AI-powered machine that both answered IQ tests well (at, say, a 120+ level), and was useless at other general-intelligence tasks? It seems to me that the answer is yes, given the ability of AI to be specifically but not generally intelligent in the status quo. Yet humans that do well in IQ tests do seem to demonstrate (on average) marginally greater general intelligence than other humans.

    Having thought for a while longer, however, it seems almost impossible to conceptualise a test that would genuinely detect general intelligence. The Turing test, as Tegmark points out, is more a proxy for fallibility than intelligence; the Winograd Schema Challenge cited in a linguistic context is interesting, but winnable by language-focussed AI over general intelligence; and some sort of Searle-like ‘Chinese room’ test to determine whether an AI understands what it is doing is open to a machine’s lying. As a result, while the example of Prometheus in the introduction is intuitively persuasive, I am left wondering how exactly we would ‘prove’ it is generally intelligent.
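
    To make the Winograd idea concrete, here is a minimal sketch using the classic trophy/suitcase schema (the toy baseline below is my own illustration, not part of the official challenge): the two sentences differ by a single word, and a system that leans on surface statistics rather than commonsense can get at most one of them right.

```python
# A Winograd schema: two sentences that differ by one word ("large" vs.
# "small"), and that word flips which noun the ambiguous pronoun "it"
# refers to. Answering both correctly seems to require commonsense.
schema = [
    {
        "sentence": "The trophy does not fit into the suitcase because it is too large.",
        "candidates": ["trophy", "suitcase"],
        "answer": "trophy",
    },
    {
        "sentence": "The trophy does not fit into the suitcase because it is too small.",
        "candidates": ["trophy", "suitcase"],
        "answer": "suitcase",
    },
]

def nearest_noun_baseline(item):
    """Toy heuristic: resolve 'it' to whichever candidate noun was
    mentioned most recently before the pronoun. Because it ignores the
    swapped word, it gives the same answer for both variants."""
    text = item["sentence"].lower()
    pronoun_pos = text.index(" it ")
    return max(item["candidates"], key=lambda noun: text.rfind(noun, 0, pronoun_pos))

for item in schema:
    guess = nearest_noun_baseline(item)
    print(f"guess: {guess:8s}  correct: {item['answer']:8s}  {item['sentence']}")
```

    As expected, the heuristic scores only at chance across the pair, which is exactly why the schema format is harder to game than a single-sentence test.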

  3. tamara2205

    When delineating the common myths about superintelligent AI, Tegmark classifies the fear of “AI turning conscious” as a “mythical worry” (41). He justifies this classification with the example of getting hit by a driverless car: once you’ve been hit, it really does not matter whether the car “subjectively feels conscious” (43). Furthermore, Tegmark states that “what will affect us humans is what superintelligent AI does, not how it subjectively feels” (43), implying that worrying about questions of consciousness is less pertinent to our lives than tangible consequences of action.

    If we consider human society’s legal system, however, we see that consideration of emotions and motivation decides the destiny of millions. Regardless of how having consciousness might affect AI’s legality, I disagree with Tegmark’s classification and believe that conscious AI is very much a valid concern that could tangibly affect human lives. Were it the car’s conscious decision to hit us, it might be more likely to do it again due to some hostile motives or emotions. Furthermore, having conscious AI would challenge the rest of Tegmark’s classification and make the risk of malevolence and destructive goal-directedness (which, in Tegmark’s repeated efforts to avoid anthropocentric bias, pays the price of being defined in the most narrow, physical sense of the word on page 44) a legitimate possibility.

    Finally, it has been shown that emotions, despite posing risks of bias, are critical for effective human decision making regardless of our information processing capabilities*. Thus, human-level AI should by its definition have the capacity to feel (or simulate such a process). As superintelligence goes beyond the human, it does not need such conscious capabilities. This raises the question: if not consciousness, what will give superintelligence the edge over human-level AI? Could consciousness be a drawback in disguise, preventing humans from achieving superintelligence?

    *http://nymag.com/scienceofus/2016/06/how-only-using-logic-destroyed-a-man.html

  4. “Our Judiciary Neural Net has sentenced you to 20 years imprisonment on charges of grand theft auto and larceny.” According to Max Tegmark in his book Life 3.0, a sentence such as this may be uttered in our society’s AI-driven future, given the ever-growing set of occupations replaceable by machine. Tegmark reflects on this possibility of AI in law enforcement with marked apprehension, for judicial oversight can be life-altering to the highest degree. In many ways, Tegmark’s concerns about these intersections between technology and law are mirrored in Weapons of Math Destruction by Cathy O’Neil, in which we saw how algorithms could inadvertently tip the scales of justice in favor of dominant socioeconomic classes, contributing to systemic inequality in disturbingly systematic ways.

    While discussing such morbid possibilities, Tegmark brings up an issue that will come up repeatedly should we proceed with allowing computers to make complex judgements about social phenomena. Tegmark asks, “If defendants wish to know why they were convicted, shouldn’t they have the right to a better answer than ‘we trained the system on lots of data, and this is what it decided’?” (106). In the case of a human judge, the judge can quickly inform the defendant of how she weighed the evidence, what deductions were made, etc. At least presently, such a reply isn’t possible, given that artificial introspection seems to be out of reach for at least the near future. This problem of translating a neural net’s “thought process” into something ascertainable by humans will be an issue for robots on all sides of the law. Consider a powerful AI that chooses to sacrifice one human while saving another. We would surely want some kind of human-intelligible account from that AI regarding why it chose to do what it did.

  5. Caroline Cin-kay Ho

    In “The Near Future,” Tegmark outlines the debate over regulation of autonomous weapons systems, mentioning a ban on AWS as a potential option (116). For a variety of practical reasons, I believe that such a ban would be infeasible. Now, Tegmark states that “many experts argue that the bans on biological and chemical weapons were valuable… because the bans caused severe stigmatization” (116). However, I am skeptical as to whether a ban on AWS would produce the same stigma – I cannot imagine a nation attempting to defend its use of biological weapons in violation of the ban by arguing, “It was more humane than the alternative!”; I can, however, imagine a nation arguing this for its use of AWS (as Tegmark states, “no… civilians need get killed” (110)).
    Furthermore, Tegmark later notes the difficulty of “enforc[ing] a treaty given that most components of an autonomous weapon have a dual civilian use as well” (116). This reminded me of a SymSys 1 lecture on deep learning last year, during which my classmates and I watched a video in which an AI played a first-person shooter. While the room initially filled with amazed whispers, it gradually went dead silent as the agent mowed down enemies with deadly precision. The same thought was in all our heads – “What would happen if the same algorithm were applied in the real world?” Quite likely, it would be extremely difficult to distinguish the code of the lethal version from that of the video game version. When we combine the difficulty of inspecting software with the problem of determining the purpose of hardware (i.e., “a drone that can deliver Amazon packages and one that can deliver bombs” (116)), developing a procedure for routine inspection (as is typical with nuclear weapons checks) seems futile.

  6. Ch3 of Max Tegmark’s Life 3.0 paints a naïve picture of the potential of basic income to mitigate the negative impact of a robot-dominated workforce. Basic income is an interesting political issue in that it draws proponents from usually antagonistic camps: leftists seeking to decouple personhood and productivity, and libertarians aiming to promote innovation by minimizing government. Though Tegmark does not mention these unlikely bedfellows by name, he employs both groups’ arguments, and in so doing illustrates the shallowness of thought in both.

    To start with the libertarian argument, Tegmark writes, “Technological progress can end up providing many valuable products and services for free even without government intervention” (127). He goes on to list a number of free features online that people previously had to buy. The idea that the Internet and its services exist without “government intervention” does not pan out: it was the DoD that funded most of the basic research that developed packet switching and led to the ARPANET, the predecessor of the Internet. This is all to say: suggesting that the state-funded social safety net might be replaced with internet services is myopic, given that the government was crucially involved in developing said technology.

    In presenting a leftist argument for BI, Tegmark writes that, with basic income, “At a minimum, it should be possible to make everyone as happy as if they had their personal dream job, but once one breaks free of the constraint that everyone’s activities must generate income, the sky’s the limit” (129). In all previous technological revolutions, wealth gets concentrated at the top and does not “make everyone happy.” Why would AI in the workforce be any different? Ultimately, we should be skeptical of the power of BI to upend existing inequities, let alone address a robo-workforce.

  7. sandipsrinivas1

    Max Tegmark makes his purpose in “Life 3.0” quite clear following his vivid depiction of life in a Prometheus-driven world. He wants to provide readers with information about one of life’s most crucial unwritten tales, “the tale of our own future with AI,” while asking readers: “How would you like it to play out?” (21).
    In his exposition of the AI frontier, Tegmark introduces a number of challenging ideas that make it difficult for one to fully entrench themselves in any of the pro-AI schools of thought. For example, when discussing the intersection of AI and cybersecurity, Tegmark brings up the unsettling idea that “In the ongoing computer-security arms race between offense and defense, there’s so far little indication that defense is winning” (104). This idea is perhaps the most concerning of those brought up by Tegmark thus far. It traces back to the ideas of the Puerto Rico conference mentioned in the book’s first chapter–AI for social good needs to be created with good intent, and with the host of people in the cyber world with bad intent, the idea of better and smarter AI seems more dangerous.
    Tegmark also expresses concern over the challenges surrounding liability that AI’s integration into human activity can raise. The most concrete of these comes in his discussion of self-driving cars, in which he poses the question: “if a self-driving car causes an accident, who should be liable–its occupants, its owner or its manufacturer?” (109).
    The unifying idea with both of these points is that in Tegmark’s quest to empower readers to decide how they want AI to factor into the future, he raises legitimate issues with AI that make the premise of a totally AI-dominated world dangerous.

  8. rhwang2

    In the third chapter of Life 3.0, Tegmark poses the important question: if artificial intelligence were to replace many more jobs, what will and what should our societal structure look like? Although such developments are highly likely to grow the total economic pie, Tegmark considers several arguments for why the economic benefits will accrue to an even smaller minority of the population. First, lower-skill, lower-paying jobs are more likely to be replaced. Second, the efficiency benefits will accrue to capital holders. Third, the digital age has magnified the economic potential of “superstars,” be it J.K. Rowling or Mark Zuckerberg.
    One solution the author explores is the concept of basic income, “where everyone receives a monthly payment with no preconditions or requirements whatsoever” (Location 2371, Kindle). However, I question whether it is really ideal to guarantee income with no preconditions whatsoever. It may be more societally beneficial for basic income to be tied to some amount of work and/or behavioral requirements. Guaranteeing basic income could potentially prevent society from redistributing labor in a productive manner. In particular, I worry about the values that would be encouraged by a basic income, which basically rewards people for doing nothing. Tegmark somewhat acknowledges the concern, though, writing “welfare payments have also been criticized for disincentivizing work, but this of course becomes irrelevant in a jobless future where nobody works” (Location 2321). I’m skeptical of Tegmark’s temporary assumption here that there will simply be no work. People could be redistributed towards harder-to-replace unskilled labor jobs (for example, sanitation or infrastructure construction). After all, this is not the first time society has had to redistribute a large portion of the labor force (think farming).

  9. SM201

    In Chapter 3, Tegmark describes the [almost unbelievably] rapid progress of AI development – primarily as a function of raw algorithmic advances (e.g. the move towards neural nets). More consequentially, however, Tegmark highlights the role that an AI’s digital ecosystem can play in accelerating the AI’s learning. In addition to specifically describing AI-training platforms such as OpenAI’s Universe, Tegmark generally notes, “To speed things up and reduce the risk of getting stuck or damaging themselves…they would probably do the first stages of their learning in virtual reality.” (86)

    I don’t think Tegmark underestimates the difficulty in creating platforms like Universe – after all, he himself notes that he is continuously surprised by the rate of development (87). Given this difficulty, one natural question is as follows: how should developers efficiently allocate their resources between attempting to make algorithmic advances, versus developing increasingly sophisticated AI-training platforms?

    From my perspective, we don’t have enough information to answer this question. If you imagine that algorithmic and platform development both have curves of diminishing marginal improvements (eliminating “low-hanging fruit”), it’s difficult to know exactly “where” on each curve we are – just that we are continuously improving. As such, allocation of resources is left to the interests of individual developers in the absence of knowing a “better” allocation of resources – which we may never actually know.
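
    As a rough illustration of why this matters, here is a toy sketch (the curve shapes and numbers are invented purely for illustration) in which the “best” split of a fixed effort budget between algorithms and platforms depends entirely on coefficients we do not actually know:

```python
import math

# Toy model: progress from x units of effort follows a diminishing-returns
# curve a * log(1 + x). The coefficients a_alg and a_plat (how much
# low-hanging fruit remains in algorithms vs. training platforms) are
# exactly the unknowns in question.
def total_progress(split, budget, a_alg, a_plat):
    return (a_alg * math.log(1 + split * budget)
            + a_plat * math.log(1 + (1 - split) * budget))

def best_split(budget, a_alg, a_plat, steps=1000):
    candidates = (i / steps for i in range(steps + 1))
    return max(candidates, key=lambda s: total_progress(s, budget, a_alg, a_plat))

# Two equally plausible guesses about the unknown coefficients imply
# very different "optimal" allocations of the same effort budget.
print(best_split(budget=10, a_alg=3.0, a_plat=1.0))  # ~0.8: favor algorithms
print(best_split(budget=10, a_alg=1.0, a_plat=3.0))  # ~0.2: favor platforms
```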

    From Tegmark’s perspective, I think he would agree that choosing between algorithms and platforms requires more information – in the first three chapters at least, he doesn’t have an answer. However, he might emphasize that knowing “where on the curve” we are isn’t strictly necessary; developing both in harmony (without completely neglecting one or the other) can lead to progress at least “until AI matches human abilities on most tasks” (92) – with which I agree.

  10. Meg Verity Elli Saunders

    Tegmark discusses the future of artificial intelligence enthusiastically, suggesting that there are countless possibilities for what that future might look like, and that the decision rests with us. He asks “What sort of future do you want?…Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth?…What will it mean to be human in the age of artificial intelligence? What would you like it to mean…?” (38) He shows himself to be dedicated to the goal of encouraging people to discuss and collaborate on building a beneficial future with AI by managing, with his beneficial-AI campaign, to bring some of AI’s most respected thinkers together in Puerto Rico to reach the consensus that the goal of AI should be to create beneficial intelligence.
    His proposal certainly sounds reasonable and exciting. However, one concern I have with this is that there seem to be so many different possible directions in which AI could go, and it seems unlikely that a consensus will ever be reached on a specific plan of action that pleases everybody. If it is impossible for everyone in a society to be politically like-minded, how could they possibly agree on something as controversial as AI? It seems, therefore, that either some people will be forced to accept a future that they did not choose, or groups of people with different preferences will go off and create their own divisions of AI. If the impact of developing Life 3.0 is truly as great as Tegmark suggests with the example of Prometheus, due to the “intelligence explosion” resulting from “recursive self-improvement” (4), then the friction created by conflicting visions of the future of AI could very quickly reach a fatal scale.

  11. Alex Gill

    In Chapter 3 of Life 3.0, Tegmark explores the implications of AI developments on career fields. As he explored the process of unchecked technological growth and automation, I saw many similarities between the impact of automation on career fields and the WMDs from O’Neil’s book. It seems to me that for young people choosing careers, the “model” that automation follows is a sort of WMD against career fields.
    The first characteristic of a WMD is a lack of transparency, and the impact of automation on various career fields is certainly opaque. Tegmark distills some useful questions one can pose to calculate the risk of automation to a certain profession, but this approach is certainly not widely known to the general public. Because it is so opaque which career fields are most vulnerable to automation, we can check the first box of WMD characteristics.
    Second, automation has unfair and damaging effects on populations. It is well catalogued that traditionally lower-income jobs are usually the first to be upended by automation, and the well-educated are most insulated from it. Because automation unfairly targets less privileged communities, it exemplifies the second characteristic of a WMD.
    Lastly, automation has demonstrated the third WMD requirement: the capacity to grow exponentially. There are very few, if any, checks or balances against the automation of professions. On the contrary, companies welcome automation, as it helps increase efficiency and decrease costs. With the private sector constantly pushing for automation in every possible arena, not only does it have the *capacity* to grow exponentially but it already *has* grown exponentially.
    Although it’s unclear if we can classify automation as a “model” or pinpoint who is responsible for monitoring the dangers of automation, perhaps we can approach combatting the negative impacts of automation with strategies used against WMDs.

  12. Max Tegmark discusses the potential for AI to disrupt the justice system in the form of “Robojudges,” which may be “more efficient and fairer, by virtue of being unbiased, competent, and transparent” (106). Conversely, Tegmark also highlights some characteristics of robojudges that mirror those of Cathy O’Neil’s WMDs: first, that decision-making opacity undermines robojudges’ credibility in the public eye, and second, that robojudges’ deep neural learning may induce pernicious feedback loops. Thus, Tegmark suggests that we should question “[whether] we want to give the final say to machines” in our legal system (107).

    Based on Tegmark’s AI-safety benchmarks of “verification, validation, security, and control” (94), it seems the answer is simply that legal decision-making machines should have no role in having the ultimate say in cases, because of our inability to explain their opaque outcomes. Rather, they should provide decision support to human judges, who have the ability to provide formal rationale for their decision-making processes, an implementation similar to the employee schedule-prediction software employed by Belk from my last comment. A real-world example is the AI attorney assistant ROSS, which “allows the legal team to upvote and downvote excerpts based on the robot’s interpretation of the question…Lawyers can either enforce ROSS’ hypothesis or get it to question its hypothesis” (Turner), and ROSS trains via machine learning to improve its hypotheses. This data-backed predictive assistance applied to a judge context* would provide efficiency to judges without turning courts into black-box verdict factories. Granted, by leaving training and ultimate authority to human judges, human bias and feedback loops are undoubtedly still present — however, by relying on the facts of each case rather than considering recidivism predictions and other biased predictors, AI judge-assistance could provide a fact-based hypothesis to judges that could challenge human biases.

    Turner, Karen. “Meet ‘Ross,’ the Newly Hired Legal Robot.” The Washington Post. WP Company, 16 May 2016. Web. 5 November 2017.

    *See this example study of predictive AI used in a judge context, which predicted judicial decisions of the European Court of Human Rights (ECtHR) with 79% accuracy using only text analysis of cases prepared by the court:
    Aletras N, Tsarapatsanis D, Preoţiuc-Pietro D, Lampos V. (2016) Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective. PeerJ Computer Science 2:e93 https://doi.org/10.7717/peerj-cs.93
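
    To give a sense of what text-only prediction looks like in practice, here is a minimal, generic sketch (a bag-of-n-grams linear classifier with invented placeholder data, not the authors’ actual pipeline or dataset):

```python
# Generic sketch of text-only outcome prediction: represent each case
# document as TF-IDF n-gram features and fit a linear classifier.
# The texts and labels below are invented placeholders; in the cited
# study, the inputs were sections of real ECtHR judgments and the label
# was whether the court found a violation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "the applicant alleged a violation of article 3 during detention",
    "the government argued the complaint was manifestly ill-founded",
    "the applicant complained of inhuman and degrading treatment",
    "the domestic courts provided an effective remedy to the applicant",
]
labels = [1, 0, 1, 0]  # 1 = violation found, 0 = no violation

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram and bigram features
    LogisticRegression(),
)
model.fit(texts, labels)

# The model predicts an outcome from the words of a new case alone,
# with no representation of the underlying legal reasoning.
print(model.predict(["the applicant alleged degrading treatment in detention"]))
```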

  13. Bradley Richard Knox

    In Chapter 3, Tegmark tells an interesting story about how an AI machine (AlphaGo) was able to beat a world champion Go player, showing how AI was becoming increasingly intelligent. However, the most interesting part of the story was not that AlphaGo was able to win the game, but that it won using a strategy that was believed to be crazy and was essentially unprecedented. As the world’s top-ranked Go player put it: “humanity has played Go for thousands of years, and yet, as AI has shown us, we have not yet even scratched the surface” (89). Essentially, the conventional way that humans have gone about playing Go has not really changed over the centuries, whether because humans were not bold enough to try something new, or because they could not predict far enough in advance to make such a unique play. This story shows that computers have the potential to see and try things that humans have never even considered. This idea is incredibly encouraging, and is a microcosm of Tegmark’s general theory of Life 3.0.
    This story is encouraging because it shows that intelligent machines have the ability to challenge many things that humans widely believe to be the best ways to behave, govern, invest, or handle countless other aspects of daily life, potentially solving many of the problems that mankind faces today. It could be that there is some system of governance or political ideology that would be more efficient and lessen the divide within our country, or a simpler, cheaper way to lessen air pollution. The point is that as humans we do not know what an intelligent machine will come up with, but such machines have already shown the ability to challenge and expand the limits of human creativity and problem solving.

  14. What I found most fascinating about Life 3.0 is that if we are to define life as a “process that can retain its complexity and replicate” (p. 39), then Life 3.0 is a foreseeable update to Life 2.0. Essentially, the development of Life 3.0 is just the response to the continuous evolutionary drive of creation and the complexification of things, so that “hydrogen…, given enough time, turns into people” (p. 49). Interestingly, it does seem like much of the thinking about this development is geared towards us (i.e., what’s good and bad, what causes happiness and suffering), and it appears to be useful to think about the implications of intelligent AI systems on humanity. However, if we were to step back, it seems like we have a tendency to talk about AI as if we have individual power over it, or as if we are the creators of it, when inherently we are merely the vehicles for even greater complexification of things. In other words, everything evolves. Assuming that there is no free will, our individual actions are just proxies for the potential reorganization of complex systems. In that sense, the long-term implications of AI systems seem almost irrelevant (though entertaining to discuss) since, much like bacteria, we would not be able to fathom the intelligence or mode of being of a more intelligent system. Our actions and relationship with the world are dictated by evolutionary constraints that we can’t change, and it seems natural to project our modes of interacting with the world onto more intelligent systems even though those projections are likely to be nonsense.

  15. Divya Siddarth

    Reading Life 3.0 immediately after Weapons of Math Destruction, alarm bells naturally went off in my head throughout the prelude. There is almost no WMD imaginable that would be bigger and more deadly than this kind of superintelligence – massively opaque, massively powerful, and operational at massive scale. The prelude was fiction, but the first few chapters discussed how it might translate into some form of reality. What struck me about these chapters is how secondary a concern the opacity of the discussed systems seemed to be. This opacity is mentioned first with regards to neural networks – “just as we don’t fully understand how our children learn, we still don’t fully understand how such neural networks learn, and why they occasionally fail” (79). However, while children can share with us their learning processes and pitfalls, neural nets obviously cannot. As more examples of AI are mentioned, starting with DeepMind’s game-playing AI and extending into AI that could be used for healthcare, finance, enforcing and creating the law, and other hugely important fields, there are only brief mentions of how, so far, the AI “doesn’t understand what it’s saying in any meaningful sense…just discovers patterns and relations” (92). In that excerpt, Tegmark actually cites this as a reason to not be immediately concerned that AI will replace us (or at least him), since it cannot ‘understand’, though it can complete various tasks. Coming off of O’Neil’s book, I am inclined to take the opposite view – I think we should be wary of this opacity and the fact that we have no idea how these programs learn or what they are learning. One potential way to combat this comes from a different kind of transparency – at least if we don’t know how the program works, the code should be available, as it was when DeepMind “published their method and shared their code” (85), removing some of the mystery surrounding their AI and addressing some of these opacity concerns.

  16. Julia Thompson

    One of the main themes of Life 3.0 so far is accountability involving AI. A large part of this is how to hold robots and AI accountable for their actions. There are four major problems that Tegmark seems to identify. One is the potential for changing shape, which is included in the definition of Life 3.0. If a robot completely changes its shape, especially in a way that would prevent its crime from recurring, it may not be fair to hold the new form accountable or even to consider it fundamentally the same. A second, directly referenced in the section on automated weapons, is the capability of a single set of hardware to have many kinds of software, as in the examples of the drones delivering packages and bombs. These two related problems call into question whether hardware or software determines identity in a form of life that can change both.
    A third problem is the enforcement of laws themselves. With robojudges in place for criminal justice, the control over robots that is needed would likely not be possible through traditional means. Either a new structure would be necessary for the robots, or humans would have to remain involved in the existing systems, affecting the operation and power of robojudges. Finally, legally treating an AI as human, allowing it its own insurance and representation, would create issues surrounding voting, citizenship, and more. It would also bring consciousness, motives, and the possibility of recidivism into the conversation. The identity of an AI, once established as some degree of life, would be bound up with its software and hardware, which would be determined by the AI itself. A case involving such an entity might then not fit into any existing system at all. In general, holding an AI accountable would be very difficult.

  17. While I agree that AI can lead to “more devastating cyberwar” (118), I don’t think Tegmark goes far enough in describing the potential impact of AI-powered cyberwar.

    Cyberwar (and by extension, information war) occupies a niche just underneath kinetic warfare. For example, let’s consider how the US could retaliate against such an attack. Would the US have been justified in firing a missile at Russia in response to presidential campaign hacking? No. How about if Russia exploited trading algorithms to destabilize the US economy? Arguably still no. If you were a world leader with political motivations to attack the US, sending in a small team of experts to cripple US war readiness and soft power, with only weak repercussions, is a good idea.

    All three of our books have touched upon how the intelligence of current technology can be turned against us. As time goes on, more of our economy and social institutions depend on these technologies to handle ever-growing scale and demand. However, AI is an extremely specialized field. Only a small percentage of our population is working on it and understands its development beyond what’s normally talked about in the media. Fewer still can actually affect its development or exploit it.

    Cyberwar has already created a world where every nation-state is locked in constant conflict, a not-quite-battle which can’t be resolved by conventional means. It has generated an environment where the nations which can best utilize their talent at home and recruit talent abroad hold a distinct advantage. And should we begin to bring an even more specialized technology like AI closer to citizens and critical infrastructure, we should expect only a handful of superpowers to possess the ability to exploit it – the exclusivity of the nuclear club, without the mutually assured destruction to prevent its widespread use.

  18. At the beginning of Chapter 2, Tegmark asserts his definition of intelligence to be “the ability to accomplish complex goals” (50). While this covers an incredibly broad number of situations, such as the goal to “apply knowledge and skills,” it seems to be missing more passive or goalless objectives. One could argue that all human intelligence incorporates some sort of goal, even if it is implicit and not thought of, but let’s set this objection to the side for now.

    Computers or machines can process broad and narrow objectives, even potentially to the level of artificial general intelligence (52). On the other hand, humans seem to interpret a large amount of external stimuli without any specific intention. For example, someone with passive attention could interpret an ad on a webpage without having the goal of reading everything on the page. This passive interpretation may store memory in an auto-associative way, but the algorithm for doing so is unknown. This could be interpreted as a type of learning. As Tegmark states, however, even machine learning is only getting better at the computations it desires (81).

    Not only does this somewhat contradict Tegmark’s definition of intelligence, but it also raises the question of whether our advances in technology will be held back by our insufficient scientific understanding of the brain and ourselves. While NAND gates may be able to compute any well-defined function, it seems that we are a long way from properly defining many aspects of human intelligence (64).
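
    To make the NAND point concrete, here is a small sketch of the standard constructions (my own illustration, not anything taken from the book) showing how other Boolean operations, and hence arbitrary well-defined Boolean functions, can be built from NAND alone:

```python
# NAND is functionally complete: every Boolean function can be built
# from it. Standard constructions for NOT, AND, OR, and XOR:
def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# Truth-table check against Python's built-in Boolean operators.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor_(a, b) == (a != b)
print("All NAND constructions verified.")
```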
