11/28/2017 – Life 3.0 (conclusion)

In this session we finish reading and discussing Max Tegmark’s book Life 3.0: Being Human in the Age of Artificial Intelligence, chapters 7-8 and Epilogue. Students in the class: please post your blog comment on this entry by Tuesday, November 28, at 6pm.

 



Filed under Class sessions

17 responses to “11/28/2017 – Life 3.0 (conclusion)”

  1. Caroline Cin-kay Ho

    In “Goals”, Tegmark proposes a “minimum set of ethical principles” which he claims have “broad agreement”: utilitarianism, diversity, autonomy, and legacy (271). In order to be as inclusive as possible, he defines utilitarianism as the principle that “positive conscious experiences should be maximized and suffering should be minimized” (271). While this might seem intuitive, I think that Tegmark accepts such a principle without serious consideration of the dangers of utilitarianism. Putting aside standard arguments that utilitarianism sacrifices the few for the many to maximize utility, I will argue that a utilitarian AI’s actions would likely be less understandable and thus perhaps less fixable than a deontological or Kantian AI. Being focused on consequences of actions rather than actions themselves, a utilitarian AI must deal with probabilities of various outcomes. Hence, unless it can perfectly predict the future, there is a chance it will take actions which lead to unexpectedly bad outcomes. In contrast, a Kantian AI would likely be fairly explicitly rule-following due to its focus on the morality of actions in themselves, which would allow it to provide understandable reasons for actions (at worst, actions would be “expectedly” rather than unexpectedly bad). Of course, the conclusions reached by the AI might be wrong – as Tegmark notes, Kant used categorical imperatives to determine “that wives, servants, and children are owned in a way similar to objects” (269). It is worth noting, however, that this conclusion is generally not accepted by modern Kantians; engineers might likewise simply adjust incorrect conclusions reached by the AI. Ultimately, when given a difficult decision, a deontological rationale of the form “[Action] is inherently wrong” would be far easier to accept or correct than a consequentialist rationale of the form “[Action] is wrong since it will *probably* produce less utility overall than the alternatives.”
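    To make this contrast concrete, here is a rough, purely illustrative sketch (the actions, probabilities, and utilities are all invented, not Tegmark’s): a utilitarian agent picks whatever maximizes expected utility, while a deontological agent first rules out actions that are forbidden in themselves, which makes its rationale far easier to state and audit.

    ```python
    # Illustrative toy only: invented actions, probabilities, utilities, and rules.
    outcomes = {
        # action: list of (probability, utility) pairs over possible outcomes
        "divert_resources": [(0.95, 10), (0.05, -20)],  # usually good, small chance of harm
        "do_nothing":       [(1.0, 1)],                 # safe, modest utility
    }

    forbidden = {"divert_resources"}  # a Kantian-style rule: wrong in itself, regardless of outcome

    def expected_utility(action):
        return sum(p * u for p, u in outcomes[action])

    # Utilitarian agent: best expected outcome, even if it can go unexpectedly badly.
    utilitarian_choice = max(outcomes, key=expected_utility)

    # Deontological agent: filter out forbidden actions first, then choose.
    permitted = [a for a in outcomes if a not in forbidden]
    deontological_choice = max(permitted, key=expected_utility)

    print(utilitarian_choice)    # "divert_resources" (expected utility 8.5 vs 1.0)
    print(deontological_choice)  # "do_nothing" -- justified by an explicit, auditable rule
    ```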

  2. Chapter 7, to me, gets to the heart of the ‘problem’ of general AI: as Tegmark rightly points out, setting the goal of maximising a score in Pong or some similar game is rather easier than maximising the goal of ‘life’. Worse still, even if we dismiss Tegmark’s concerns about animal rights or changing rights over time (Kindle location 4935) and presume rights are human-only and static through time (not that I believe this!), it seems *even then* that we could not settle on a goal. We regularly sacrifice the rights to thinking, learning, communicating and property ownership in the name of enhancing others’ freedoms – we confiscate land, zone property, do not have unlimited healthcare and education spending, and invade foreign countries at the cost of some short-term life (at a minimum).

    Deciding how to trade these liberties off, especially between different people and groups, is exceptionally complex. In fact, I am intuitively terrified of a machine that claimed to find ‘moral answers’ to these quandaries: an AI-powered moral arbiter would be the ultimate positive feedback loop, and arguably the worst imaginable WMD (to hark back to O’Neil briefly).

    This quandary only reinforces my prior preference for specialised over general AI systems, but if there *is* some way for a machine to ‘learn’ morality, I think observing millions of people in different contexts – and the praise or condemnation they receive – might be a reasonable way to start. After all, the democratic system we use to make all the tradeoffs I discussed above is, at its core, a mathematical average of millions of individual subjective decisions on how to weigh different liberties. While I am wary of machines that attempt to do the same computationally, it is probably the least alarming solution to me.
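    To give a crude sense of what I mean by a “mathematical average” (the liberties and numbers below are entirely invented), the simplest possible starting point would be to average how individuals weigh different liberties:

    ```python
    # Crude, invented illustration: each person weights several liberties, and the
    # "democratic" aggregate is just the per-liberty average across people.
    from statistics import mean

    people = [
        {"speech": 0.9, "property": 0.6, "healthcare": 0.8},
        {"speech": 0.7, "property": 0.9, "healthcare": 0.5},
        {"speech": 0.8, "property": 0.4, "healthcare": 0.9},
    ]

    aggregate = {lib: mean(person[lib] for person in people) for lib in people[0]}
    print(aggregate)  # {'speech': 0.8, 'property': 0.633..., 'healthcare': 0.733...}
    ```

    Of course, actually observing praise and condemnation in context would be vastly more complicated than this, which is exactly why I remain wary.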

  3. In Chapter 7: Goals, Tegmark questions how we can ensure that a recursively self-improving superintelligent AI will retain the goals we encourage it to adopt. Just as humans often shift their goals after self-reflection or after learning new information, an AI can do the same. “How many adults do you know who are motivated by the Teletubbies?” Tegmark asks rhetorically (p. 267).

    The question of goal retention is impossible to answer without understanding which AI Aftermath Scenario (listed on p. 162) we will assume; the issues of goal preservation and AI aftermath seem to go hand in hand. The ultimate entity in power – be it human or AI – will be the one to decide the world’s rulebook and the population’s desired goals.

    For instance, if we end up in the Libertarian utopia scenario, AI might see it as essential to keep the peace between humans, cyborgs, uploads, and superintelligences by retaining the same traditional goals as humans. But if we are unlucky and end up in the Conquerors aftermath scenario, the AI – having rid the earth of humans – has no motivation to keep the same goals as humans and will likely not retain them.

    Goal preservation influences the AI Aftermath Scenario and vice versa. So, if we succeed in building “self-improving AI that’s guaranteed to retain human-friendly goals forever,” we can ensure that the human race never faces annihilation. As Tegmark mentions, this problem remains unsolved. But it is one that demands our full attention, as it is poised to affect the future of humanity dramatically.

  4. rhwang2

    In chapter 7 of Life 3.0, Tegmark discusses the important issue of aligning AI with our collective interests, given its potentially large eventual influence on society. We need AI not only to adopt our goals but also to interpret them as we intend and to retain them as the AI becomes more knowledgeable about reality. Although we may be able to specify some goals for an intelligent general AI, it may be the case that the AI, regardless of its given goals, will converge on the same set of subgoals, such as self-preservation, goal retention, and capability enhancement.
    In my opinion, for an AI with subgoals such as self-preservation and resource acquisition, weakening the AI’s abilities is a must for preventing a dangerous singularity. One can easily imagine the dangers of a powerful entity hell-bent on self-preservation. Even for seemingly more benevolent goals, the unintended consequences could be quite harmful. For example, what if we were to implement an AI that was overly incentivized to maximize, say, equality of outcome or freedom? Part of the solution, implicitly expressed by Tegmark, is that we imbue the AI with multiple goals/principles that balance each other out. This gets at one of the major questions of chapter 7: how are we to design an AI with the right set of goals so as to minimize the damage to humanity?
    Given the critical nature of goal alignment for AI, I wonder if we should even allow AI to determine its own goals. On one hand, it is very difficult to program, say, empathy into an artificial intelligence without a technique like inverse reinforcement learning; a toy sketch of that idea follows below. On the other hand, how do we know that a human- or even AI-determined mix of goals won’t have a disastrous overweighting?
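    As a very rough sketch of the inverse-reinforcement-learning idea (a caricature, not Tegmark’s proposal or any standard algorithm such as maximum-entropy IRL; every feature, choice, and candidate weighting below is invented): rather than being handed a reward function, the system watches what a human chooses and infers which reward weights best explain those choices.

    ```python
    # Toy "inverse RL": infer which reward weights best explain observed human choices.
    # All features, demonstrations, and candidate weightings are invented.
    import itertools

    # Each option is described by (comfort, safety, speed) features.
    options = {
        "drive_fast": (0.2, 0.1, 0.9),
        "drive_slow": (0.6, 0.9, 0.2),
        "take_train": (0.8, 0.8, 0.5),
    }

    # Observed demonstrations: what the human actually chose on several occasions.
    demonstrations = ["take_train", "take_train", "drive_slow", "take_train"]

    def best_option(weights):
        score = lambda name: sum(w * f for w, f in zip(weights, options[name]))
        return max(options, key=score)

    # Search a small grid of candidate weightings and keep the one whose induced
    # "optimal" choice agrees most often with the human's demonstrated behavior.
    grid = [w for w in itertools.product([0.0, 0.5, 1.0], repeat=3) if any(w)]
    inferred = max(grid, key=lambda w: sum(best_option(w) == d for d in demonstrations))

    print(inferred)  # a weighting under which the human's usual choice ("take_train") is optimal
    ```

    Even in this toy, several different weightings explain the data equally well, which is a small-scale version of my worry about disastrous overweighting.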

  5. In the epilogue, Tegmark outlines the Asilomar AI Principles governing AI development. Their existence demonstrates Erik Brynjolfsson’s “mindful optimism” (333), as it suggests that their proponents foresee the potential to adhere to them. However, given the trajectory of AI development, will these tenets be attainable at all?

    A focus of the Principles is transparency, the antithesis of O’Neil’s black-box WMDs. Unfortunately, if AI development is left unstifled, it seems unlikely that AI will veer away from opacity. Take, for example, Judicial Transparency, which requires from AI a “satisfactory explanation auditable by a competent human authority” (330). Being understandable and auditable, however, sacrifices AI efficiency. If an AI isn’t more efficient at accomplishing its task than a human, why develop it at all?

    Given that efficiency is a main driver of automation, it seems that transparency would be overlooked in favor of efficiency. Tegmark himself emphasizes that this is “exacerbated by the recent success of neural networks, which often outperform traditional easy-to-understand AI algorithms at the price of inscrutability.” (106) With AGI’s ability to learn exponentially, it would develop and utilize algorithms faster than humanly understandable. If an AGI had simultaneous goals of performing above-human-efficiency judicial decision-making and being humanly comprehensible, we would inevitably hinder its long-term ability to achieve the former as we would stifle efficiency to human-level understanding.

    One way to avoid this trade-off is to develop a beyond-human-intelligence AI with the ability to provide human-level rationale, “reducing what a computer knew into a single conclusion that a human could grasp and consider,” according to a NYT article (Kuang). This article discusses a public project by Google scientists that explains “How neural networks build up their understanding of images” (Olah) — pertinent here because it suggests that these Principles are perhaps attainable with the aid of safety-minded technological advancement. A rough sketch of that kind of technique follows the references below.

    Kuang, Cliff. “Can A.I. Be Taught to Explain Itself?” The New York Times, 21 Nov. 2017, http://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html.

    Olah, Chris, et al. “Feature Visualization.” Distill, 20 Nov. 2017, distill.pub/2017/feature-visualization/.
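    Here is a rough, purely illustrative sketch of the kind of technique the Distill article describes (feature visualization via activation maximization). The specific model, layer index, channel, and hyperparameters are arbitrary picks on my part, and the code assumes PyTorch/torchvision with a downloadable pretrained VGG-16 — it is not code from either source above.

    ```python
    # Sketch of activation maximization: optimize an input image so that it strongly
    # excites one channel of an intermediate layer, giving a human-viewable hint of
    # what that part of the network has learned to detect.
    import torch
    import torchvision.models as models

    model = models.vgg16(pretrained=True).eval()  # any pretrained conv net would do
    layer_index = 10   # an arbitrary mid-level conv layer inside model.features
    channel = 5        # an arbitrary feature channel to visualize

    image = torch.randn(1, 3, 128, 128, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([image], lr=0.05)

    for step in range(200):
        optimizer.zero_grad()
        x = image
        for i, layer in enumerate(model.features):
            x = layer(x)
            if i == layer_index:
                break
        loss = -x[0, channel].mean()  # negative sign: we maximize the activation
        loss.backward()
        optimizer.step()

    # `image` now roughly depicts what excites that channel -- a tiny example of
    # "reducing what a computer knew" into something a human can look at.
    ```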

  6. sandipsrinivas1

    In chapter seven, Tegmark discusses many issues that arise from analysis of goals in AI. One of the more compelling and path-changing ideas is the orthogonality thesis, stated by Nick Bostrom, that says “that the ultimate goals of a system can be independent of its intelligence” (276).
    Tegmark uses the orthogonality thesis to further his claim that “the ultimate goals of life in our cosmos aren’t predestined, but that we have the power to shape them” (276). This suggests, as Tegmark later says, that the goals of the future may diverge rather than converge, and that any singular goal lies in the past rather than the future.
    In some sense, this makes the idea of unleashing superintelligent AI to discover their own goals even more dangerous. The intractability of these systems coupled with the fear that comes from not knowing whether these goals will align with our own is a legitimate concern, even if such AI systems may be “fully unfettered from prior goals” (276). However, what is promising is the idea that specialized AI systems may be useful in determining not ultimate goals, but rather goals at lower levels.
    Even with a predisposition toward specialized AI over general systems that aim at one ultimate goal, there are still countless problems to work through, and Tegmark mentions many of them throughout the rest of the chapter. However, I believe the orthogonality thesis tells us that instead of trying to determine the ultimate goals at the top tier of the pyramid, we should think about subgoals and perhaps less complex problems for specific AI machines.

  7. Tegmark’s Chapter 7, “Goals,” exemplifies a blind spot running throughout Life 3.0. Tegmark’s prevailing concern is how to align AGI’s goals with human goals – what Eliezer Yudkowsky calls “friendly AI.” To that I say: why would we want AGI to conform to human goals? For most of history, human goals have included domination through violence and inequity. The seeds of this mindset are built into the very language Tegmark uses to describe our relationship to AI: “In the inverse reinforcement-learning approach, a core idea is that the AI is trying to maximize not the goal-satisfaction of itself, but that of its human owner” (262). The notion that humans will own AI at best fails to imagine beyond the master/slave paradigm, and at worst inscribes into AI the concept of domination through property ownership. This could be disastrous for human autonomy and dignity. In other words, if the fundamental value built into our relationship with AI is ownership, doesn’t that become a “human-friendly value,” and thus create the potential for AI to treat us as property? Tegmark reasserts: “Unless we put great care into endowing it with human-friendly goals, things are likely to end badly for us” (267). Rather, things could end badly for us because of the limitations of our “human-friendly goals.” Humans dominate other humans all the time. We need not use the analogy of how humans treat animals to imagine such a scenario (“People don’t think twice about flooding anthills to build hydroelectric dams, so let’s not place humanity in the position of those ants” (260)). Despite the doom and gloom of the critique I’ve levied, I’m hopeful, because if the AI will indeed learn through the inverse reinforcement approach, we humans might have even more motivation to start living up to ethical principles in the present, so that AI will be nicer to us than we have been to each other.

  8. SM201

    In the epilogue, Tegmark describes the role of professional collaboration (e.g. the FLI conference) in advancing AI safety. More importantly, however, Tegmark notes that he disallowed media coverage of the FLI event, claiming that “media coverage inadvertently made people across the opinion spectrum upset at one another, fueling misunderstandings by publishing only their most provocative-sounding quotes.” (322)

    Tegmark’s choice to ban the media evokes two related points. First, is there value in banning media from AI safety conferences? More consequentially, can the media be incentivized away from simply engendering controversy in this area?

    In my view, disallowing media coverage has clear pros and cons. Given that AI safety is a highly “fear-mongered” topic, even mildly provocative statements are low-hanging fruit for journalists. However, one potential tradeoff is decreased exposure to thoughtful discussion for the lay-public, which may eventually be hit hardest by AI advances. The best-case scenario involves incentivizing the media to transparently convey the content of these important discussions. Unfortunately, as we saw in The Attention Merchants, media content has become a “race-to-the-bottom” – making this an extremely tough task.

    Tegmark might argue that, given the field’s infancy, he disallowed media to give AI experts a chance to define the foundations of the AI safety discussion. Once this core has been established, it may become harder for the media to pervert the topic for their own ends. This is a valid argument. However, he would likely agree that changing the media’s actual incentives is challenging.

    Notably, it’s not impossible. For example, one could establish greater fact-checking for media outlets (think Better Business Bureau ratings, but for news sources). Moreover, governments could impose penalties on outlets which fear-monger in topics of national safety (including AI). These solutions aren’t perfect, but may at least be first steps towards alleviating Tegmark’s tradeoff.

  9. tamara2205

    Throughout the book, and most notably in Chapter 7, Tegmark posits that to achieve beneficial AI we must ensure that its goals are aligned with ours. However, most of the time he talks about beneficial AI (and the future of life with AI in general), Tegmark almost exclusively means superintelligence and seldom references human-level, general, or universal intelligence (all defined on page 39). Having now finished the book, I believe Tegmark’s work could have benefited greatly in applicability and depth had he also considered these more readily attainable forms of AI (such as human-level AI) and discussed whether they might be less recalcitrant to the alignment of their goals with ours. Thus, before we jump to speculation about the future of life with superintelligence — as temptingly visionary as it may seem — it would be useful to ascertain how we might achieve goal alignment in human-level AI first.

    For starters, aligning our goals with other intelligent humans is often hard enough and succeeds at varying rates. Given that human-level AI implies the same level of intelligence as that of humans, any difference in success rate could be quite telling and could shift the technological goals we are moving toward. If we cannot make people share our goals any more reliably than we can make human-level AI (or a superintelligence) do so, then maybe intelligence is not really that significant a factor in goal understanding and alignment. And if the immense problem-solving capabilities of superintelligence are useless (and sometimes destructive, as Tegmark cautions with his ant scenario on page 268) unless it shares our goals and executes them safely, then maybe pushing for superintelligence is a colossal waste of time that would be better spent working toward something more manageable and closer to humanity than we think.

  10. Divya Siddarth

    Tegmark consistently emphasizes the importance of aligning the goals of a constructed superintelligence with the goals of humanity, calling this alignment “crucial to the future of life” (249). However, in his exploration of goals, he focuses on extremely macro-level concepts – from ‘matter intent on maximizing dissipation’ to ‘machines built to help humans’ (275). Without a discussion of much more granular goals, however, I think this project of goals alignment, as laid out in the Asilomar principles under Ethics and Values (330), is crippled from the start. I appreciate the inclusion of the ‘sub-goal’ concept (264), but feel that even the stated sub-goals (resource acquisition, etc) are at too large and abstract a level to be productive in developing safe AI principles, or improving the lives of people.

    It is not even a particularly simple matter for two intelligent humans to align their goals perfectly; doing so requires much discussion not only of theoretical and abstract ethics but also of practical ethics and how one’s values and principles manifest in day-to-day action. While I think the Future of Life Institute and Tegmark himself are working on important problems regarding AI safety, it may be more productive to first discuss human-centered safety, goals, and ethics, and to reach a consensus in that regard (amongst nations and peoples). It seems crucial to develop practical principles and solutions to the ethical quandaries over which humans clearly differ (as we can see manifested in everything from partisanship domestically to violence and colonization abroad). Otherwise, it will fall to the first developers of the AI, or the Future of Life Institute, or some other small group to implement their own conception of this consensus, and it will be impossible to undo the bias thereby introduced.

  11. Bradley Richard Knox

    I think one of the most interesting debates presented in the entire book is in Chapter 7, where Tegmark essentially tries to figure out which ethical theories need to be considered when creating an AI system that must conduct itself in a certain way and decide its own best course of action. Tegmark brings up utilitarianism, the general idea that the main goal for humanity is to maximize value and good and to minimize suffering and evil. Under this way of thinking, every decision is a decision about the future, where the only things that matter are the consequences of one’s actions. Although this seems like a very selfless way of living, putting it into the hands of something that would strictly follow this theory could lead to the norm that evil actions must be carried out for the greater good, especially if the goals of the AI stray from those its human programmers intended.
    In my opinion, what makes an ethical theory well constructed is that it is applicable in all situations, especially when it is being programmed into a computer, where the input is all the computer initially has to work with. For this reason, I propose an ethical theory more in tune with Nagel’s moral absolutism, which states that humans should behave within a very specific framework of behaviors they can and cannot partake in, and that one is not allowed to stray from that framework under almost any circumstance. To me, with an existential threat such as AI, this seems like a much safer alternative than ceding the power of weighing the pros and cons of a behavior to a system that we do not fully understand.

  12. In his chapter on consciousness, Tegmark raises the issue of whether a complex, highly integrated electronic brain would actually be conscious or just a p-zombie. In response to this concern, Chalmers and Shanahan have created a thought experiment in which one considers what would happen if one were to slowly replace one’s brain with electronic analogues that perfectly mirror the relevant functions of individual neurons, down to their specific wirings and weights. Would such an individual slowly lose subjective awareness without reporting any changes in behavior? If so, then the individual’s change in experience seems to contradict the absence of corresponding expressions of concern or terror at this loss of consciousness!

    In Douglas Hofstadter’s book I Am a Strange Loop, Hofstadter argues that we shouldn’t privilege the substrate to such a high degree. After all, individual neurons aren’t any more meaningful than individual notes in a song. Rather, what matters are the high-level events occurring within a system. To shed light on the nuances of thinking this way, Hofstadter discusses the deep bond that he and his late wife shared. To Hofstadter, this relationship implies that some part of his wife really exists within him and within other close companions of hers, given that such individuals are capable of imitating her quirks and relating many of her memories. One might be skeptical of the degree to which she could actually be said to still possess self-identity, and indeed Hofstadter concedes that whatever sense of her identity remains is seriously reduced. Returning to Life 3.0, if we assume that Hofstadter is correct that awareness lies in the pattern and not the substrate, then there is no conceivable reason why a complex AI couldn’t attain our level of awareness.

  13. Meg Verity Elli Saunders

    In the concluding page of his chapter on consciousness, Tegmark refers to two opposing physicists’ views about the universe. The first, which belongs to Steven Weinberg, is that “The more the universe seems comprehensible, the more it also seems pointless.” The second, attributed to Freeman Dyson, is that although the universe used to be pointless, “life is now filling it with ever more meaning, with the best yet to come if life succeeds in spreading throughout the cosmos” (314). The implication of this view is that experience is what gives the universe value and meaning. This premise is probably the most salient argument in favour of striving to create conscious AI, and it challenges the idea that the conversation about consciousness is a pointless one. However, the prospect of creating conscious AIs will present a host of difficulties. For example, it seems very hard to determine whether or not a machine is conscious. As someone like Giulio Tononi would say, “if you upload yourself into a future high-powered robot that accurately simulates every single one of your neurons and synapses, then even if this digital clone looks, talks and acts indistinguishably from you…it will be an unconscious zombie…” (306). This assertion is based on the notion Tegmark labels the “integration principle” (304): the logic-gate connections of a robot we could build today would not provide enough integration to satisfy the conditions for consciousness. Perhaps this means that the mission to start building such AI should be put on standby, at least until we have developed technology that allows us to build sufficiently integrated systems. Otherwise we run the risk of creating AIs that exhibit the characteristics of conscious lifeforms but are actually unconscious. Such an outcome would render the production of AIs pointless, according to Dyson’s premise.

  14. Julia Thompson

    Tegmark’s arguments for ensuring that a superintelligence shares human goals are not compatible with his support of utilitarianism as a human goal. His definition of human goals focuses on “shared preferences” (261), which may conflict with utilitarianism. For instance, in an environment with limited resources and a conscious superintelligence, the superintelligence could conclude that its use of resources would maximize “positive conscious experiences” (271), and this could lead to its subgoal of resource acquisition interfering with the same subgoal of humans. This could end in a Conquerors situation, where AI “decides that humans are a threat/nuisance/waste of resources” (162) because it could use the resources more effectively toward its goal of maximizing positive subjective experience. Alternatively, since the Descendants scenario would have the same result as Conquerors, with added positive conscious experience for the humans, the AI could choose that. In either case, the goal of utilitarianism would undermine human goals by making human self-preservation impossible.
    An alternative would be the benevolent dictator. By running a society that supports human happiness and includes autonomy and diversity, and in which some zones comply with the legacy principle (271), the AI would satisfy all four principles. However, human goals of power and control would likely be impossible, as the AI “runs society” (162). Also, according to the survey on futureoflife.org, egalitarian utopia is the most popular scenario, followed by libertarian utopia; benevolent dictator is not in the top five. In that case, the definition of human goals in terms of “shared preferences” (261) would mean that the benevolent dictator would not have human goals. Either human goals or the inclusion of utilitarianism as a goal would then have to be altered. Utilitarian scenarios, with or without humans, are incompatible with human goals as defined in Life 3.0.

  15. Alex Gill

    The Asilomar AI Principles wonderfully distill the most pertinent considerations that society must weigh as we move forward with AI. Along with each individual principle helping to guide research forward, the combinations of various principles highlight some of the more precarious, conflicting goals we must balance. Most notably, the combination of goals #12 and #14, Personal Privacy and Shared Benefit, shows not only how our various goals can conflict, but also how powerful they are when working in unison.
    Each of the books we’ve read thus far has in some way or another addressed the constant battle between effective technology and personal privacy. Tegmark himself discusses the issue thoroughly throughout his book, with examples such as the potential for future surveillance states in the age of AI. In this case, improvements in crime prevention could increase safety and benefit society as a whole. However, these technologies conflict with values of personal privacy, and we may have to sacrifice benefits that would improve many people’s lives in order to maintain privacy.
    On the other hand, previous books we’ve read have shown how protecting privacy can also work to increase shared benefit. Tim Wu explored the hoards of data that companies like Facebook collect on users to increase and maintain user attention. These algorithms take away the privacy of the masses yet only benefit companies that own them. Improving privacy restrictions in these cases can help to empower as many users as possible. Similarly, Cathy O’Neil demonstrates the importance of transparency in data management and the need for people to have a right to access the data they generate in order to promote fairness.
    Because these principles can occasionally conflict, it is important for us to consider how to use them in unison to guide us.

  16. “‘If . . . subjective experiences are irrelevant, [the] challenge is to explain why torture or rape are wrong without reference to any subjective experience’” (283).

    You’re on, Tegmark.

    Suppose A is leaving town. A has pushed away everyone in their life, so no others are concerned about A’s well-being, and no others depend on A. As A reaches the remote outskirts of town, A mysteriously dies and falls into a ditch. No one can see the body unless they walk right up to it.

    B happens to stumble upon A’s dead body. B, for whatever reason, decides to have their way with A’s body, doing whatever unsavory things to it. Fulfilled, B buries the remains where no one will discover them.

    Subjective experience plays no role in the situation above. A wasn’t conscious when the act happened and had no possibility of becoming conscious in the future. Since no one cares about A, there’s no impact on anyone else. One can even say B did some good by burying A’s body where it wouldn’t spread disease or offend a passerby.

    Yet, something about what B did seems wrong, doesn’t it? A core element of morality doesn’t depend on consciousness – just as torturing or raping someone is wrong even if they are unconscious or somehow experience no aftereffects.

    I think Tegmark’s fixation on artificial consciousness gets in his way. Rather than asking questions about consciousness, we should focus on what it means to exercise power morally, whether those involved are conscious beings or unconscious systems. One possible framework for this “exercise of power” may come from the feminist branch of philosophy and its concept of power-as-empowerment [1]. An AGI focused on empowering humans opens up possibilities for it to limit itself and to define more pro-social goals.

    [1] https://plato.stanford.edu/entries/feminist-power/#PowEmp

  17. In his last two chapters before the epilogue, Tegmark discusses goal-oriented processes in AI systems and many of the questions surrounding consciousness and AI. Although he does mention that the degree of consciousness an intelligent system has will affect how society approaches our goals, specifically ethical goals, it would be beneficial to further analyze the connection between consciousness and goal alignment (271).

    One specific aspect of this discussion that seems particularly relevant is the concept of separating the intelligence of a system from its goal of supporting its own life (262).
    As Tegmark goes through his four requirements for a subjective experience – the ability to remember, compute, learn, and experience – it seems that subjective experiences strictly pertain to one’s own experiences, except when one is being empathetic (303). If conscious AI systems were created, but their goals were aligned with ours to avoid the problems of superintelligent systems, their experience of consciousness would be connected to ours in an odd way. Rather than having the primary conscious and unconscious goal of supporting its own life, such a system’s goal would be to use its consciousness to better understand that of humans. All in all, its consciousness would revolve around empathy.

    Returning to some of the ethical questions Tegmark discusses: designing an AI to be conscious without accounting for it in society could be akin to slavery and could change our interpretation of utilitarianism. However, a holistically empathetic, conscious system could avoid this problem, because its experiences would depend on the utility of events for the humans it serves. In other words, these systems would have a better subjective experience if the societal system favored humans.
