Intro

AI development has reached a point where it is no longer solely the concern of researchers, this paper being a case in point. The impacts of greater information processing, robotics, automation, and autonomous systems have far-reaching consequences that extend to all present societies and likely all future generations. The ability of citizens to inform themselves and speak intelligently on the matter is crucial for devising inclusive policy responses to changes in economics, politics, and culture. Consider this just one more crank’s foray into an increasingly fashionable field (Bostrom 2014, 376).

Two presentations with different conclusions on the level of concern we should harbor regarding superintelligence got me thinking: which one should I trust? Really, which one should we all pay more heed to? In the spirit of mashups I took these two TED talks on the topic and pitted them against each other in a fabricated debate, à la Intelligence Squared.

What follows is a summary of the two viewpoints, a listing of salient arguments, a section on points of agreement, and a look at where they diverge, so as to tackle the substance of the positions, or lack thereof. The critique of the positions is backed primarily by a literature review that includes content from videos and podcasts for added context.

That was the plan in any case. Things got interesting the further we dug into one of the talks.

Summary of Positions

Pollyanna - Grady Booch; Don’t fear superintelligent AI

Summary: Modern movies have played on “our fears of being subjugated by some unfeeling, artificial intelligence who is indifferent to our humanity.” Grady Booch believes “such fears are unfounded.” The advance of computing has given him confidence that AI can be built. The list of activities and abilities is long, growing in sophistication, and ever expanding. Systems are able to take in data from millions of devices and predict failure, communicate with humans in natural language, “recognize objects, identify emotions,… set goals,… and learn along the way.” A theory of mind is not far behind, and “we must learn how to [build systems that have an ethical and moral foundation]”.

New technologies incite trepidation that has so far proven overblown, as it did with cars, telephones, and the written word. There has been some degree of truth to our fears, but more often “these technologies brought to us things that extended the human experience in some profound ways.”

AI should not be feared “because it will eventually embody some of our values”, in much the same way we train systems to recognize objects and play games by learning from the examples and goals we provide. Additionally, the bodies of human knowledge we expose systems to, such as law, come imbued with human values the systems cannot help but pick up along the way, e.g. “[a] sense of mercy and justice that is part of the law.”

Two overblown concerns revolve around rogue agents, who cannot possibly misuse AI on their own given the amount and sophistication of resources needed (which no lone wolf would have at their disposal), and powerful AI systems with goals contrary to human needs, because “super knowing is very different from super doing.” The AIs being built will not “control the weather,… direct the tides, [nor] command us capricious, chaotic humans.” Furthermore, any AI resembling these apprehensions would have to compete with us economically, and “we can always unplug them.”

We are part of “an incredible journey of co-evolution with our machines” which will further extend human experience. AI/superintelligence worries are a dangerous distraction from very real issues already with us today: declining demand for human labor, the sensibility needed to educate the world while respecting our differences, extending and expanding human life, and quite possibly working out how to “take us to the stars.” It is an exciting time and we are just at the beginning.

Cassandra - Sam Harris; Can we build AI without losing control over it?

Summary: The two scenarios are that we continue to work on and improve artificial intelligence, or that we stop. Given the importance, power, and desirability of greater intelligence, what would be required for the latter is frightening to contemplate and would likely entail a serious derailment of our society/civilization. Assuming we do not destroy or irreversibly set back our civilization, we will continue to press forward. However, this alternative is not necessarily a cheery one.

Given that intelligence is nothing more than information processing, that we have machines capable of such processing, that we continue to improve this mechanism, and that we are likely not the pinnacle of intelligence, it would seem only a matter of time before we bring about superintelligence.

Should we create a viable artificial general intelligence (AGI), given the processing speed and capabilities inherent in it, it would be fair to assume this intelligence would explore the possibilities and improve itself (its software, code structure, and hardware configuration), not to mention create ever smarter machines, which in turn build still smarter machines.

The prospect of such a resource raises concerns across a number of areas: economic winner-take-all scenarios, political oppression, and possible first strikes by rival countries to suppress an AGI’s release, to name just the near-term, human-created concerns. Beyond this is the worry of an AGI that does not share our values and possibly does not recognize our “worth”. Any divergence in interests could resemble the relationship between a building developer and an anthill, with humans in the role of the ants.

Salient Points

Major Pollyanna points:

  • AI will be built
  • Values need to be installed so as to have our best interests in mind
  • Rogue agents/lone wolves will not be able to unleash this capability
  • Super knowing different from super doing
  • AI would have to compete with us economically
  • We can always unplug them
  • AI/superintelligence worries are a dangerous distraction from the risks already here with us today

Major Cassandra points:

  • AI is coming; fast or slow, it’s just a matter of time
  • Assuming we don’t destroy ourselves we will continue to push forward
  • We are probably not the pinnacle of intelligence
  • Only a matter of time before we have a self-improving machine
  • Impact of such an AI, or its mere prospect: first-strike attacks, hyper concentration of wealth
  • Divergence in values or view on our worth could be catastrophic to humanity

Convergence

Before getting contentious I will outline the points of agreement between the two presentations.

  1. Both speakers believe that AI can be built. Harris adds the insightful caveat that, given the importance of having more intelligence, the only thing likely to keep us from bringing about AI is societal collapse.
  2. The second point of agreement is the assumption that we presently have, or soon will have, some general idea of how to create increasingly smart machines. Booch is bolstered by the technological advances he has seen, and Harris points out that intelligence may very well be nothing more than information processing.
  3. A third area of convergence is the idea of imbuing these powerful machines with values aligned with our own. Booch is sanguine on this point, believing it to be an extension of supervised learning. Harris does not disclose his level of optimism here, though this viewer was left feeling he was distinctly less confident. He does conjure up the stakes by pointing out how we treat ants, neither with undue disdain nor with comparable moral worth, and suggesting this relationship might describe the AI-human relationship should we misjudge.
  4. There appears to be tacit agreement on the possibility of machines having superior “knowledge” to humans. This seems uncontroversial even today in narrow ways (chess, Go, spreadsheets, etc.) and for what appears to be around the corner (self-driving cars helping to mitigate accident deaths). Booch at least entertains the idea by reassuring the audience that such capability does not outright translate into “super doing”, while Harris explicitly states the likelihood that we are not the pinnacle of intelligence, further framing the stakes.
  5. The final point of convergence is the recognition that the path to AI is already affecting us today, and that some versions of the future can be glimpsed in the labor disruption and capital/resource concentration currently occurring and expected to accelerate if unimpeded.

“ATTACK” on Harris

I found myself agreeing with the Harris position before coming across these two talks. It is important to put that out there to state any potential bias clearly, though some of the other resources I consulted in preparation for this piece did persuade me to back off my “eccentric” worries (Agar 2016) about granting too much volition to our machines (Brooks 2014).

Harris does a good bit of storytelling, laying out his premises, stringing together the various pieces, and elegantly leading us by the hand to his conclusions, all the while skirting the details and instead taking certain assumptions (about information processing and its tie to intelligence; about market forces and labor conditions) as the common baseline.

Each of these assumptions requires its own set of arguments to substantiate, most notably the self-improvement claim (Horvitz and Selman 2009). Granted, this is practically impossible to do in so short a time, and difficult to pull off so smoothly regardless of the time given.

Harris is so well spoken, measured, and logical that the listener is left wondering whether we are not already in the presence of an artificial intelligence. His presentation is well thought out and succinct, hitting on many relevant and frightening issues with an economy of words and time. If this is how AI is going to be, it’s not so bad. However, if you want to see the subtle ways machines are still far below our capabilities, we need look no further than our other speaker, who through verbal misdirection does an even more impressive job of instilling fear of artificial general intelligence.

Well, hand me a badge and call me Agent Kujan

The intention of this piece was to bring in two presentations for an interrogation of their positions and determine who proved the more reliable witness. By bringing in opposing views we might better triangulate what our proper level of AI concern should be.

I began the piece by placing Booch in the role of Pollyanna for obvious reasons. Though acknowledging future concerns and present issues, he is downright giddy about the prospects of AI. His bubbly demeanor is undeniably contagious, and I for one could not help being disarmed by his friendly manner. Yet by the end of his talk I was left feeling more worried than I had been by Harris’ presentation, and Harris is supposed to be the wet blanket of the two.

Upon looking at the details, the various ways in which Booch’s presentation is incomplete, misleading, contradictory, and superficial become obvious in a way not fully appreciated when just passively listening to him. The doubts left by these open-ended positions were what made me uneasy and first raised my suspicions. What I thought were so many false notes in his talk made me realize that I was listening without hearing him; he was playing a different melody than the one he was advertising.

Booch does a masterful job of playing the lovable uncle with an “aw shucks” air about him. He delivers hefty subjects with a light touch and easy air, jumping from topic to topic but all the while keeping the thread. But be not deceived by our bumbling hero, for he is as wily as a fox, as attentive as Columbo. Yes, this befuddled-seeming, self-described geek is a rhetorical master. Counter to the straightforward approach of laying out one’s position and backing it with strong arguments, relying on evidence and tightly knit logic, Booch pantomimes such an approach only to undermine the position he claims so earnestly to hold.

At first you cannot help but fall for the act. His endearing grin and non-deceptive eyes make you want to believe that this is the kind of person who says what he means and means what he says. Oh no, my innocent reader, do not be fooled by this avuncular presentation. If you listen closely, pay attention, and actually hear the things he professes, it is clear that he could not possibly believe them. Rather, Booch is roping us in with his descriptions, each one just a bit off but none so egregious as to raise alarms on its own.

[Image: the Kobayashi coffee mug from The Usual Suspects (MGM Studios)]

When you step back to see what he has laid out and the picture he’s painted, you can’t help but feel like the dupe. And faster than you can say Kobayashi you realize that he has taken you in and gained your confidence. That optimism he was building up in you about not fearing AI is a house of cards. At this point our cheery disposition comes tumbling down, and with it the curtain from our eyes. Booch has played us in order to do us a favor: to instill in us a healthy fear about our uncertain future in the company of intelligent machines.

Don’t believe me? Let’s tie up those loose ends together.

The greatest trick the Devil ever pulled…

The way I hear it… (Incomplete)

Let’s start with the big one, which coincidentally occurs early in the talk. Booch enumerates the many things computers can and will be able to do. The rattling off of tasks and services illustrates the awesome array of activities we see and can reasonably expect to be exposed to. The wrap-up rhetorical questions to this list culminate with the big one: building systems with an ethical and moral foundation. The punchline: “this we must learn to do.” While the previous examples are in the bag, illustrating the (cognitive-like) capabilities and expected increase in abilities, the question of whether these machines will be aligned with our values is left open-ended.

A look at the literature, regardless of the side taken, demonstrates that no one considers this a simple task. For me this is the linchpin of the argument, and while the speaker gets credit for bringing it up, acknowledging that it has not been resolved, and stressing (by using the word “must”) the importance of the task, he has to be held equally accountable for how he addresses possible avenues to a solution, or fails to. Booch suggests one such avenue by analogy to how computers are now trained to recognize images. Instead of instructing or hard-coding the answers, we “teach” the machines to recognize flowers by feeding them thousands or millions of images. This example of computer vision and image classification is an interesting take, especially for audience members unfamiliar with machine learning,1 but beyond hinting at a possible avenue it raises many concerns about its applicability to ethical behavior. The reasons are varied and numerous. Human behavior, ethics, values, and desires are diverse, contradictory, multi-dimensional, variable, time dependent, and difficult to even define. Computers still get image recognition wrong; what are the prospects for, or our confidence in, their ability to make ethical decisions in line with our own?
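To make Booch’s analogy concrete for readers unfamiliar with machine learning, here is a minimal sketch of the kind of “teaching by example” he describes. The library (scikit-learn) and dataset (its bundled handwritten digits, standing in for the flowers of the talk) are my choices for illustration, not anything from the presentation:

    # A toy supervised classifier: it learns from labeled examples rather than
    # hand-coded rules. scikit-learn and the digits dataset are illustrative
    # substitutions, not anything named in the talk.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()  # ~1,800 labeled 8x8 images of handwritten digits

    # Hold out a quarter of the examples to see how well the learned "rules"
    # generalize to images the system has never seen.
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0
    )

    clf = LogisticRegression(max_iter=5000)  # a simple model mapping pixels to labels
    clf.fit(X_train, y_train)                # "teaching" by example

    print(f"Held-out accuracy: {clf.score(X_test, y_test):.2%}")
    # Typically around 96%: high, but not perfect.

Even in this friendly setting, where every image has exactly one right answer, the learned classifier errs on a few percent of unseen examples; the worry above is what that error rate means when the labels are contested moral judgments rather than digits.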

A statement that we will return to in its fuller context, but whose spirit is retained here and which exemplifies another incomplete promise, is that these machines “will eventually embody some of our values.” I cannot speak for the reader, but for me the two scary words here are “eventually” and “some”. With respect to the former, we have to wonder what timescale we are speaking of, what effective capabilities these machines will have in the meanwhile, and what their ability to act in the world at large will be. It is worth wondering what state humanity will be in by the time the machines “eventually” embody these values. As for the latter scary word, “some”, we have to worry whether the machines get the good parts, and even then we are certainly not in the clear. The fear of unintended consequences (Amodei et al. 2016) from a machine that sticks to the letter of the law, if you will, can be quite frightening (not a few sci-fi books and movies explore this idea). Even before AI came into the picture, we humans were wary of unencumbered power that was equally unsophisticated; think of your Greek myths and genie stories (not to mention a few good jokes).

The speaker celebrates the progress of humanity through time by way of certain technologies. He references previous fears that were partially realized but on the whole eclipsed by gains in human experience and overall well-being. AI would undeniably have some of the same mixed results, but the scale, sophistication, and speed of this latest advance is so different that it may not follow the trajectories of past technologies (Bostrom 2013), which, while monumental in their own right, played out in slower, more localized, uneven ways. There was time to digest and readjust. We may have no chance to do either this time.

Back when I was picking beans in Guatemala (Misleading)

In this section we witness two primary tricks by which the presenter misdirects the audience. The first is the flexible use of the terms AI and superintelligence. The title of the talk mentions superintelligence directly, and toward the end of the talk Bostrom is referenced by name. Additionally, details sprinkled throughout suggest Booch has this concept in mind, at times. At other times he steps away from an AGI/superintelligence framework and instead describes something rather more narrow, apparently vacillating as it suits his argument.

Inconsistent

The first hint at this tactic is when he refers to “a hard engineering problem with elements of AI”. If we are talking about just more powerful computers, more clever algorithms, or access to more data (differences in quantity, not quality), then we may be sure that we have narrow AI in mind (Brooks 2014). That is hardly the foremost concern in people’s minds, especially after the speaker references movies such as “The Terminator” and “The Matrix” to set the stage. This is a subtle case of bait and switch. The title grabs you, the intro primes you, and then we are talking about flower classification and friendlier-tuned personal assistants we can take on our journey to Mars. None of these near-term, narrow-AI issues and challenges are of direct interest or concern to Bostrom’s work on superintelligence.

Irrelevant

The second way we are misled is in the descriptions of what would be required for an AI (presumably an AGI now) to have the abilities at the root of our “fears”. In an effort to put us at ease and lend some perspective, we are reassured that “[w]e are not building AIs that control the weather, that direct the tides, that command us capricious, chaotic humans.” Huh? I am not sure where Booch gets this picture from. I do not recognize it from either the movies he mentions or Bostrom’s book. This line of argument appears to be a red herring and an unnecessary distraction (like the second half of this sentence).

Booch gets a bit more specific when he revisits this premise and outlines what would be required for superintelligence to pose a threat, using “The Terminator” as his inspiration. Perhaps he has seen a sequel or prequel I am not aware of, for he yet again sets the bar ridiculously high for what would be required for machines to pose a threat. Booch talks of a superintelligence that commands human will and directs every device in every corner of the world.2 This isn’t just to be a stickler but to be true to the thought experiments that all good science fiction stories provide. Skynet “comes online” and attacks the humans. There is no control of human will to speak of. Perhaps there is an awareness of its targets’ motivations, making it more formidable than a blind attacker. I bring this up so as not to let the speaker’s overstatement of the requirements for a dangerous superintelligence pass unchallenged. If you have one minute you can remind yourself that, having become self-aware, Skynet proceeds to fight back and protect itself, making calculated moves to engender nuclear apocalypse. It, Skynet, never requires godly powers. Initially it is just another player.

Booch retains his confidence and easy manner throughout, almost pulling off a Jedi mind trick.

A rumor’s not a rumor that doesn’t die… (Contradictory)

Loosely tying together the previous two sections, elements of each combine to raise an issue of consistency, or at least of “consistency” as a theme and the contradictions it introduces. Several times we are told about eventual value embedding and that, through exposure to our legal system, “a sense of our values” will also come along for the ride. That hardly seems to be in question or worth arguing about. How much, in what combination, and whether it will be sufficient are the more interesting topics (Russell et al. 2015).

Nevertheless, we leave these behind in order to juxtapose this passing along of our values with the red herring of not building AIs that control the weather and the tides. Alongside this ridiculous threshold we also have mention of another feat that is, at least by association, implied to be just as difficult to master: “command[ing] us capricious, chaotic beings”.3 This line of reasoning would appear to have it both ways.

We as humans are predictable and orderly enough to learn ethics and morals from, but not so orderly as to be held under a controlling sway. It’s difficult to put down my pint calmly enough not to spill any of this delicious beer on my way to raising both hands in exasperation at the nerve of this man pulling such a line of reasoning. Presumably there is enough signal within the noise of human behavior, plus the corpus of statutory records, to help a machine understand and get a sense of our values, but the same signal and recorded behavior (digital breadcrumbs included) does not cut through for purposes of control.

First of all, I do not see how one is necessarily more difficult than the other. Let’s grant that Booch yet again has some outrageously high bar in mind, and that perhaps he is referring to an AI that through auditory or text communication alone would be able to control all of humanity. (A superintelligence limited to a screen output may [initially] have far less success at a takeover than at showing itself adept at making ethical decisions. However, the contrast in stakes between actions limited to a screen readout and acting in the world at large is quite dramatic.) Fine, you win that point. But once again, why would an AI, or any malevolent actor for that matter, machine or otherwise, need that level of ability to further its goals?

Human history is chock full of oppression, abuse, coercion, and exploitation. Unfortunately, another characteristic that comes along for the ride to accompany death, famine, war, and conquest is the paucity of revolt we would have hoped to see for our ancestors’ sake. Time and again we have been shown the relative ease with which a people may be subjugated, and for horrendously long periods of time. Moreover, beyond outright brutality there are other, less violently coercive measures, such as drugs, comfort, and distraction. Basically, bread and circuses.

To highlight again the contradiction of the position, consider also the standards being applied. It would appear far more likely that a superintelligence would be able to abuse and take advantage of us, purposefully or incidentally (paper clips, anyone?), than that it would be in line with our values as we would wish them to be interpreted and applied, now that we are in the less advantageous role. Let us also remember, and while this is not a direct counter to the presentation, it is in line with the theme of contradictions, that our actions as humans are very often at odds with the things we profess to stand for (see Zimbardo’s Lucifer Effect and Milgram’s obedience-to-authority experiments). Many of us (past, present, and no doubt future, collectively and as individuals) have paid lip service to our higher standards and aspirations, holding to them only when in a wanting position, and having no qualms about setting aside those very same ideals and laws when it has suited our needs and proven expeditious to our desires. To circle back to the presentation: have we not provided a machine intelligence with ample evidence of how to subjugate us humans simply by teaching it our history?

A man can convince anyone he’s somebody else, but never himself. (Superficial)

And now we arrive at the most fun and lighthearted of the sections of objections. Why should we be any more somber in our criticism than the speaker is about his topic? This may prove to be a serious subject, perhaps the most consequential of the 21st century as Bostrom reminds us (though Mother Nature and our changing of the climate may have something to say about that), but there is no reason to take ourselves too seriously here.

It is in this section that our hero begins to lose steam and inspiration. The effort of keeping up the charade seems to be taking its toll, and Booch begins to “show” his hand. Some of these defenses are so silly that the ridiculousness of the position obscures the lack of conviction in the performance.

As with the previous section, we will take the lines as they appear in the talk. First up is the assumption that “these kinds of substances are much larger [than an internet virus], and we’ll see them coming.” Comedic gold; verbal slapstick, really. In a list of befuddling comments this one leaves me downright speechless. Taking his statement literally would hardly be sillier than the more likely intended meaning: that an AI would be difficult or impossible to hide, and that we would therefore see it coming, though what safeguard that awareness would provide remains open to debate. Why the possibility of such a program being obscured from detection appears unlikely to our speaker I cannot exactly say (remember, I’m left speechless).

“Super knowing is very different than super doing,” until it’s not. The concept of keeping a superintelligence boxed in, and the challenges of doing so, are taken up in several places, including Bostrom’s book (Bostrom 2014; Yudkowsky 2002). Moreover, sandboxing such an entity would mitigate risks but would not in itself resolve the danger.

Values are the crux of this whole thing. How to instantiate an agent with our well-being “in mind” is what it’s all about. After all, the benefits to be accrued from a friendly/benevolent superintelligence are astronomical and unlikely to be limited even by our imagination. Yet our well-being balances on this fulcrum, and “teaching [machines] a sense of our values” is a scary proposition when you think about it for more than a minute. What comes along with that “sense”? Historical biases, capricious and arbitrary decisions, racial and gender inequalities, class discrepancies, etc.

There be lone wolves and rogue nations out there, but fear not, for they cannot possibly bring about AI on their own. However, the technical thresholds for implementing many machine learning and AI technologies are falling daily. What once required the ability of a wunderkind is now taught in online courses. There is presently a “wide availability of high-quality machine learning and scientific calculation libraries” (Bostrom 2014, 363), and we have good reason to believe this will continue to be the case, all just a git repo clone away. Additionally, this makes no allowance for the “progress” that “legitimate” nations are making on the weapons front. Perhaps our fears are incorrectly focused.

The most quixotic statement has to be the one about AI “hav[ing] to compete with human economies, and thereby compete for resources with us.” I cannot tell if this is a point for or against his argument. An AI advanced enough to compete with us would dust us! I suppose I know one movie prequel Booch has missed which speaks directly to this issue: The Second Renaissance, Parts I and II of “The Animatrix”. Spoiler alert: we lose. Beyond the deficiency in intelligence horsepower and speed, we also have to deal with concerns about contiguity of objectives, a.k.a. goal coordination (Bostrom 2014, 74). AIs can far more easily align their actions. Meanwhile, we have people today willing to sell nuclear weapons to extremists. Lenin may prove prescient when it comes to a human-AI showdown, having suggested that the capitalists would sell them the rope they would be hanged with.

Lastly, “don’t tell Siri this - we can always unplug them.” Another highly speculative and overly sanguine prediction. Even if we had the option to unplug such a system, it is far more likely we would find ourselves in the predicament of having integrated it with other (critical) infrastructure and systems we depend on, leaving us in the fragile condition of a cure as bad as the disease: all our systems (food, energy, water, transportation, trade, finance) grinding to a (temporary) halt.

How do you shoot the devil in the back? What if you miss?

And so there you have it. What should have been a constructed debate between presentations of opposing views turned into a one-sided piece. I suppose we are still left with the option of deciding who did it best. Taste plays into the verdict, but even if you prefer your arguments tightly presented, logically tied together, and rationally delivered, there is something to be said for the conniving manner of the other approach. When executed correctly, it is hard to deny that it is the more effective rhetorical device. After all, we are not discussing mathematical proofs here. This is where Booch appears to understand his audience better than Harris.

Booch knows he is dealing with chaotic and capricious beings, to borrow his terms. The best way to sway us humans may not always be (rarely is, really) the direct and factual approach. Instead, by grabbing our emotions a speaker may more effectively move the audience. Booch does this by luring us in with his deceptively earnest delivery. Little by little he wraps us in his position. You believe that he is weaving a network of anti-fears, but all the while he is taking the arguments to ever more ridiculous positions (reductio ad absurdum).

At some point a sympathetic listener finds herself going from an unconscious nodding to a barely perceptible shaking of her head. Before the realization has registered rationally, the unease brought on by the absurd progression of assumptions has turned those feelings of reassurance into queasiness. The listener has been wrapped up in an indefensibly sanguine hold, caught in a web of superficial palliatives. This is something to fear after all, and retaining cozy feelings to the contrary will only make the realization that much worse.

Thank you, Booch, for your masterful work. Your final proposition might be honest4 or ironic, but your method of building up to it is profoundly instructive.5


Notes

1 To be clear, the idea of leveraging ML techniques/technology to help address the alignment problem is well supported. See Russell et al. 2015; Goertzel and Pitt 2012; Bostrom 2003.
2 I believe our speaker has his stories mixed up. For total control, see I Have No Mouth, and I Must Scream.
3 Is this a reference to nano manufacturing and control (Bostrom 2013)? If so, the “commanding” need not come before the takeover but could follow just as easily afterward or, as I get into below, there might be no need for that level of control and/or intrusion.
4 AI Risk Denier (Pistono and Yampolskiy 2016).
5 To paraphrase a William James endnote from The Varieties of Religious Experience.


REFERENCES

Agar, N. 2016. Don’t Worry about Superintelligence. Journal of Evolution and Technology 26(1) (February): 73-82. Available at http://jetpress.org/v26.1/agar.htm (accessed April 14, 2017).

Amodei, D. et al. 2016. Concrete Problems in AI Safety. arXiv:1606.06565v2 [cs.AI]. Available at https://arxiv.org/abs/1606.06565 (accessed April 14, 2017).

Bostrom, N. 2014. Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.

Bostrom, N. 2003. Ethical Issues in Advanced Artificial Intelligence. Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence 2(I): 12-17. Available at http://www.nickbostrom.com/ethics/ai.html#_ftn1 (accessed February 27, 2017).

Brooks, R. 2014. Artificial intelligence is a tool, not a threat. Rethink Robotics (blog). http://www.rethinkrobotics.com/blog/artificial-intelligence-tool-threat/ (accessed June 6, 2017).

Goertzel, B., and J. Pitt. 2012. Nine ways to bias open-source AGI toward friendliness. Journal of Evolution and Technology 22(1) (February): 116–31. Available at http://jetpress.org/v22/goertzel-pitt.pdf (accessed June 6, 2017).

Horvitz, E., and B. Selman. 2009. Interim Report from the Panel Chairs. AAAI Presidential Panel on Long Term AI Futures. AAAI Panel held 21–22 February, Pacific Grove, CA. Available at www.aaai.org/Organization/Panel/panel-note.pdf (accessed July 20, 2017).

Muehlhauser, L., and N. Bostrom. 2014. Why we need friendly AI. Think 13(36): 41-47. doi:10.1017/S1477175613000316. Available at http://journals.cambridge.org/abstract_S1477175613000316 (accessed March 2, 2017).

Pistono, F., and R. V. Yampolskiy. 2016. Unethical Research: How to Create a Malevolent Artificial Intelligence. arXiv:1605.02817 [cs.AI]. Available at https://arxiv.org/abs/1605.02817 (accessed April 14, 2017).

Russell, S., D. Dewey, and M. Tegmark. 2015. Research Priorities for Robust and Beneficial Artificial Intelligence. Association for the Advancement of Artificial Intelligence. Available at https://futureoflife.org/data/documents/research_priorities.pdf (accessed July 8, 2017).

Yudkowsky, E. 2002. The AI Box Experiment. Webpage. Available at http://sysopmind.com/essays/aibox.html (accessed July 13, 2017).