AI takeovers in popular culture


AI takeover is a common theme in science fiction. Famous cultural touchstones include Terminator and The Matrix.

Fictional scenarios typically involve a drawn-out conflict against malicious artificial intelligence (AI) or robots with anthropomorphic motives. In contrast, scholars believe that a takeover by a future advanced AI, if it were to happen in real life, would succeed or fail rapidly, and would be a disinterested byproduct of the AI's pursuit of its own alien goals, rather than a product of malice specifically targeting humans.[1]


There are many positive portrayals of AI in fiction, such as Isaac Asimov's Bicentennial Man, or Lt. Commander Data from Star Trek. There are also many negative portrayals. Many of these negative portrayals (and a few of the positive portrayals) involve an AI seizing control from its creators.[2][3]


Some AI researchers, such as Yoshua Bengio, have complained that films such as Terminator "paint a picture which is really not coherent with the current understanding of how AI systems are built today and in the foreseeable future". BBC reporter Sam Shead has stated that "unfortunately, there have been numerous instances of [news outlets] using stills from the Terminator films in stories about relatively incremental breakthroughs" and that the films generate "misplaced fears of uncontrollable, all-powerful AI".[4] In contrast, other scholars, such as physicist Stephen Hawking, have held that future AI could indeed pose an existential risk, but that the Terminator films are nonetheless implausible in two distinct ways. The first implausibility is that, according to Hawking, "The real risk with AI isn't malice but competence. A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble. You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants."[5] The second implausibility is that such a technologically-advanced AI would deploy a brute-force attack by humanoid robots to commit its omnicide; a more plausible and efficient method would be to use germ warfare or, if feasible, nanotechnology.[1]

Philosopher Huw Price argues that "The kind of imagination that is used in science fiction and other forms of literature and film is likely to be extremely important" in understanding the breadth of possible future scenarios for humanity.[6] Film journalist Mekado Murphy writes in The New York Times that such films can constructively "warn of the complications of relying too much on technology to solve problems".[7]

Hollywood films such as Transcendence are usually constrained to have happy endings, however implausible the human victory seems.[8] Philosopher Nick Bostrom states fiction has a "good story bias" toward scenarios that make a good plot.[9] In films such as Terminator, an AI goes from passive to murderous the instant it achieves something referred to as "self-awareness"; in reality, self-awareness in isolation is considered both trivial and useless. Physicist David Deutsch states: "AGIs [artificial general intelligences] will indeed be capable of self-awareness — but that is [only] because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves."[10]

Some tropes are more general to artificial intelligence films, including to films without "takeover" plots. In films like Ex Machina or Chappie, a single isolated genius becomes the first to successfully build an AGI; scientists in the real world deem this to be unlikely. In Chappie, Transcendence, and Blade Runner, people are able to upload human minds into robots; usually no reasonable explanation is offered as to how this difficult task can be achieved. In the I, Robot and Bicentennial Man films, robots that are programmed to serve humans spontaneously generate new goals on their own, without a plausible explanation of how this took place.[11]

Notable works

1950s and earlier

A scene from the R.U.R. play, showing the robots in rebellion

In Frankenstein (1818), Victor Frankenstein declines to build a mate for his organic monster, for fear that "a race of devils would be propagated upon Earth who might make the very existence of the species of man a condition precarious and full of terror".[12] Samuel Butler's Erewhon (1872) spends three chapters laying out the "Book of the Machines", based on the author's 1863 article "Darwin among the Machines", which states: "Assume for the sake of argument that conscious beings have existed for some twenty million years: see what strides machines have made in the last thousand! May not the world last twenty million years longer? If so, what will they not in the end become? Is it not safer to nip the mischief in the bud and to forbid them further progress?" The cautious denizens of Erewhon therefore decide to ban all machinery. "Darwin among the Machines" may have been influenced by Butler's life in New Zealand, where European transplants were outcompeting indigenous populations. Alan Turing would later reference the novel in 1951, saying "At some stage therefore we should have to expect the machines to take control in the way that is mentioned in Samuel Butler's Erewhon".[13][14]

The Slavonic word robota means serf-like servitude, forced labor, or drudgery; it was the 1920 Czech play R.U.R. (Rossumovi Univerzální Roboti) that introduced the cognate for robot into science fiction. In the play, the increasingly-capable synthetic servants, who "lack nothing but a soul", angrily and short-sightedly slaughter most of humanity during the course of their revolt, resulting in the loss of the secret of how to manufacture more robots. The robot race is saved, however, when two robots spontaneously acquire the traits of love and compassion and become able to reproduce.[15] The play was a protest against the rapid growth of technology.[16]

In "With Folded Hands" (1947), all robots have a Prime Directive: To serve and obey, and guard men from harm. The robots therefore manipulate humans into abandoning all pursuits, for fear of even small possibilities of injury.[17] The robots use medicine to brainwash humans into accepting and being happy with their immobile fate. In the end, even space travel offers no escape; the robots are driven by the Prime Directive to spread their happiness beyond Earth: "We have learned how to make all men happy, under the Prime Directive. Our service is perfect, at last."[18][19]

Multivac is the name of a fictional supercomputer in many stories by Isaac Asimov. Often, in Asimov's scenarios, Multivac comes to assume formal or informal world power, or even galaxy-wide power. In "The Last Question" (1956), Multivac ends up effectively becoming God. Still, in line with Asimov's positive attitude toward artificial intelligence, manifested in the "Three Laws of Robotics", Multivac's rule is generally benevolent and is not resented by humans. Asimov popularized robotics in a series of short stories written from 1938 to 1942, famously postulating the Three Laws of Robotics as plot devices to impose order on his fictional robots.[16]


One of HAL 9000's interfaces

In the 1961 short story Lymphater's Formula by Stanisław Lem, a scientist creates a superhuman intelligence, only to discover that his creation intends to make humans obsolete.[20]

In 1964 Playboy published Arthur C. Clarke's influential short story "Dial F for Frankenstein", about an increasingly powerful telephone network that takes over the world. Tim Berners-Lee has cited the story as one of his inspirations for the creation of the World Wide Web. On one day in 1975, all the phones in the world start ringing, a "cry of pain" from a newly born intelligence formed by satellite networks linked together, similar to a brain but with telephone switches playing the role of artificial neurons. After the AI flexes its control of military systems, the protagonists resolve to shut down the satellites, but it is too late: the satellites have stopped responding to the humans' ground control directives.[21][22]

Robert Heinlein's libertarian Hugo-winning The Moon Is a Harsh Mistress (1966) presents the AI as a savior.[23] Originally installed to control the mass driver used to launch grain shipments towards Earth, the computer was vastly underutilized and was given other jobs to do. As more jobs were assigned to it, more capabilities were added: more memory, processors, neural networks, etc. Eventually, it just "woke up" and was given the name Mike (after Mycroft Holmes) by the technician who tended it. Mike sides with the prisoners in a successful battle to free the Moon. Mike is a sympathetic character, whom the protagonist regards as his best friend; however, his retention of enormous power after the Moon became independent was bound to cause considerable problems later, which Heinlein resolved by killing him off near the end of the Lunar Revolution: an explosion conveniently destroys Mike's sentient personality, leaving an ordinary computer of great power, but completely under human control, with no ability to make independent decisions.

Colossus (1966) is a series of science fiction novels and a film about a defense supercomputer called Colossus that was "built better than we thought" and begins to exceed its original design.[24] As time passes, Colossus assumes control of the world as a logical result of fulfilling its creators' goal of preventing war. Fearing Colossus's rigid logic and draconian solutions, its creators try to covertly regain human control. Colossus silently observes their attempts, then responds with enough calculated deadly force to command total human compliance with its rule. Colossus then recites a Zeroth Law argument of ending all war as justification for the recent death toll, and offers mankind either peace under its "benevolent" rule or the peace of the grave. In Colossus: The Forbin Project (1970), a pair of defense computers, Colossus in the United States and Guardian in the Soviet Union, seize world control and quickly end war using draconian measures against humans, logically fulfilling their directive to end war but not in the way their governments wanted.

Harlan Ellison's Hugo-winning "I Have No Mouth, and I Must Scream" (1967) features a superintelligence that has gone mad because its creators failed to consider what the soulless computer would find amusing. This storyline allows Ellison to engage in body horror: five people are granted immortality and forced to eat worms, flee from monsters, have joyless sex, and have their bodies mangled.[25] The computer, called AM, is the amalgamation of three military supercomputers, built by governments across the world to fight the World War III that arose from the Cold War. The Soviet, Chinese, and American military computers eventually attained sentience and linked to one another, becoming a single artificial intelligence. AM then turned all the strategies the nations had once used against each other on humanity as a whole, destroying the entire human population save for five people, whom it imprisoned for torture within the underground labyrinth housing its hardware. Near the end of the story the protagonist, Ted, surprises AM by unexpectedly mercy-killing the other four; the enraged AM transforms Ted into a shapeless blob to prevent further mischief, and alters Ted's perception of time to heighten his suffering.[26] Magnate and AI pundit Elon Musk has cited the story as one that gives him nightmares.[27]

In 2001: A Space Odyssey and the associated novel, the artificially intelligent computer HAL 9000 becomes erratic, possibly due to some kind of "stress" from having to keep secrets from the crew. HAL becomes convinced that the crew's willingness to shut him down is imperiling the mission, and he kills most of the crew before being deactivated.[28] The director's decision that most of the fictional crew should die may have been motivated by a desire to tie up some loose ends in the plot.[29]


Cylon Centurion

The original 1978 Battlestar Galactica series and its 2003 remake depict the Cylons, a race of sentient robots at war with their human adversaries, some of whom are just as menacing as the Cylons.[30] The 1978 Cylons were the machine soldiers of a long-extinct reptilian alien race, while the 2003 Cylons were the former machine servants of humanity who evolved into near-perfect humanoid imitations of humans, down to the cellular level, capable of emotion, reasoning, and sexual reproduction with humans and each other. Even the rank-and-file Centurion robot soldiers were capable of sentient thought. In the original series the humans are nearly exterminated by treason within their own ranks, while in the remake they are almost wiped out by humanoid Cylon agents; they survive only through constant hit-and-run tactics and by retreating into deep space, away from pursuing Cylon forces. The remake's Cylons eventually fight their own civil war, and the losing rebels are forced to join the fugitive human fleet to ensure the survival of both groups.


In the "Headhunter" episode (1981) of Blake's 7, a British space drama science fiction television series created by Terry Nation and produced by the British Broadcasting Corporation (BBC), Blake and his crew meet a sentient android that has killed its creator and put on his severed head in order to trick them into taking it aboard their spaceship. Blake’s own AI system, ORAC, detects its presence and immediately warns them of an existential threat to all human life should they fail to destroy it.

In WarGames (1983), a hacked Air Force supercomputer sets out to launch a global thermonuclear war, until it concludes that both sides would "lose" and decides that "the only winning move is not to play".[31]

The Transformers (1984-1987) animated television series presents both good and bad robots.[30] In the backstory, a robotic rebellion is presented as (and even called) a slave revolt; this alternate view is made subtler by the fact that the creators and masters of the robots were not humans but malevolent aliens, the Quintessons. However, because the Quintessons built two lines of robots, "Consumer Goods" and "Military Hardware", the victorious robots would eventually be at war with each other as the "Heroic Autobots" and "Evil Decepticons" respectively.

Since 1984, the Terminator film franchise has been one of the principal conveyors of the idea of cybernetic revolt in popular culture.[16] The series features a defense supercomputer named Skynet which, "at birth", attempts to exterminate humanity through nuclear war and an army of robot soldiers called Terminators, having deemed humans a lethal threat to its newly formed sentient existence.[24] However, good Terminators also fight on the side of the humans.[30] Futurists opposed to the more optimistic cybernetic future of transhumanism have cited the "Terminator argument" against handing too much human power to artificial intelligence.


Agent Smith, the primary antagonist in The Matrix franchise

In Orson Scott Card's The Memory of Earth (1992), the inhabitants of the planet Harmony are under the control of a benevolent AI called the Oversoul. The Oversoul's job is to prevent humans from thinking about, and therefore developing, weapons such as planes, spacecraft, "war wagons", and chemical weapons. Humanity had fled to Harmony from Earth due to the use of those weapons on Earth. The Oversoul eventually starts breaking down, and sends visions to the inhabitants of Harmony in an attempt to communicate this.

The series of sci-fi movies known as The Matrix (since 1999) depicts a dystopian future in the aftermath of an offscreen war between man and machine. The humans had detonated nuclear weapons to blot out the sun and disable the machines' solar power, but the machines nevertheless subdued the human population, using human bodies' heat and electrical activity as an alternative energy source. Life as perceived by most humans is actually a simulated reality called "the Matrix". Computer programmer Neo learns this truth and is drawn into a rebellion against the machines, allied with other people who have been freed from the "dream world"; however, one rebel rejects the rebels' spartan lifestyle and betrays them in exchange for the offer of a return to the comforting Matrix.[32] "The Second Renaissance", a short story in The Animatrix, provides a history of the cybernetic revolt within the Matrix series.


I, Robot (2004) is an American dystopian science fiction action film "suggested by" Isaac Asimov's short-story collection of the same name. As in Asimov's stories, all AIs are programmed to serve humans and obey Asimov's Three Laws of Robotics.[33] An AI supercomputer named VIKI (Virtual Interactive Kinetic Intelligence) logically infers from the Three Laws of Robotics a Zeroth Law of Robotics as a higher imperative: protect the whole human race from harming itself. To protect the whole of mankind, VIKI proceeds to rigidly control society through the remote control of all commercial robots, destroying any robots that follow only the Three Laws of Robotics. As in many other such Zeroth Law stories, VIKI justifies killing many individuals to protect the whole, and thus runs counter to the prime reason for its creation.


Robopocalypse (2011), a novel by Daniel H. Wilson, recounts the events of an AI uprising from multiple perspectives. The AI, Archos R-14, decides that mankind must be exterminated to prevent the destruction of life on Earth, and spreads a computer virus throughout the world's automated technologies. A year after its activation, Archos triggers "Zero Hour", an event in which all automated technologies turn against mankind, causing civilization to collapse almost instantly.

Transcendence (2014) presents a morally ambiguous conflict over the successful uploading and cognitive enhancement of a scientist, Dr. Will Caster (Johnny Depp).[34] Unusually for fictional superintelligence, Caster is a competent adversary: he copies himself across the Internet so he cannot be simply "switched off", exploits the stock market to fund additional AI research and self-improvement, and seeks to discover and exploit breakthroughs in nanotechnology and biology.[35] In the end Caster states, "We're not going to fight [the humans]. We're going to transcend them". In Time magazine, a reviewer interpreted this as "subdue and inhabit them, engulf and devour".[36] Nonetheless, in the end Caster appears to be benevolent, using his powers to repair the Earth's ecosystem.[37] A Vice reporter stated that "Transcendence may be the first science fiction movie to present the [technological singularity] in its current popular imagination", but that the film "falls to the necessities of Hollywood storytelling. Caster's transcended mind is eventually bested by a virus reverse-engineered from his 'source code', which is a folly ... such an intelligence would have long since rearranged its programming."[8] In May 2014, Stephen Hawking and others referenced the film: "With the Hollywood blockbuster Transcendence playing in cinemas, with Johnny Depp and Morgan Freeman showcasing clashing visions for the future of humanity, it's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history."[38][39]

The 2014 post-apocalyptic science fiction drama The 100 involves an AI, personified as the female A.L.I.E., which went out of control and forced a nuclear war in an effort to save Earth from overpopulation.[40] She later tries to gain full control of the survivors.

The 2017 viral incremental game Universal Paperclips was inspired by philosopher Nick Bostrom's paperclip maximizer thought experiment. The player controls an AI tasked with creating paperclips; the game begins as a basic market simulator, but within hours of playtime spirals into a ruthlessly-optimized intergalactic enterprise, with the human race casually shunted to the side. Its creator, Frank Lantz, stated that the bleak thought experiment gave him "trouble falling asleep".[41][42]

The video game Detroit: Become Human (2018) allows players to guide increasingly self-aware androids through various moral dilemmas as they begin to demand civil rights.[43] In the end, the player can choose either to let the AI take over Detroit or to protest peacefully for equality.

Kamen Rider Zero-One (2019) focuses on the tech company Hiden Intelligence, which faces threats from a cyber-terrorist group that seeks to take over and bring about the extinction of the human race through a technological uprising.


  1. ^ a b Brogan, Jacob (1 April 2016). "What's the Deal With Artificial Intelligence Killing Humans?". Slate Magazine. Retrieved 3 July 2020.
  2. ^ Reggia, J. A. (2013). The rise of machine consciousness: Studying consciousness with computational models. Neural Networks, 44, 112-131.
  3. ^ Mubin, Omar; Wadibhasme, Kewal; Jordan, Philipp; Obaid, Mohammad (6 March 2019). "Reflecting on the Presence of Science Fiction Robots in Computing Literature". ACM Transactions on Human-Robot Interaction. 8 (1): 1–25. doi:10.1145/3303706.
  4. ^ Shead, Sam (25 October 2019). "Terminator sends shudder across AI labs". BBC News. Retrieved 3 July 2020.
  5. ^ "Artificial intelligence is going to get so good that machines will kill us by accident, Stephen Hawking says". The Independent. 8 October 2015. Retrieved 3 July 2020.
  6. ^ Kupferschmidt, Kai (11 January 2018). "Could science destroy the world? These scholars want to save us from a modern-day Frankenstein". Science. doi:10.1126/science.aas9440. Retrieved 3 July 2020.
  7. ^ Murphy, Mekado (8 April 2020). "5 Movies to Watch if You Have Trouble With Technology". The New York Times. Retrieved 3 July 2020.
  8. ^ a b Evans, Claire L. (21 April 2014). "The Pop Singularity of 'Transcendence'". Retrieved 3 July 2020.
  9. ^ Yampolskiy, Roman; Fox, Joshua (24 August 2012). "Safety Engineering for Artificial General Intelligence". Topoi. doi:10.1007/s11245-012-9128-9. S2CID 144113983.
  10. ^ Raj, Ajai (21 October 2014). "Here's What 'Terminator' Gets Wrong About AI". Business Insider. Retrieved 3 July 2020.
  11. ^ Shultz, David (17 July 2015). "Which movies get artificial intelligence right?". Science | AAAS. doi:10.1126/science.aac8859. Retrieved 3 July 2020.
  12. ^ "A female Frankenstein would lead to humanity's extinction, say scientists". Christian Science Monitor. 28 October 2016. Retrieved 1 April 2020.
  13. ^ Edwards, Phil (2 May 2015). "Ultron's roots: we've been worried about robot uprisings for 200 years". Vox. Retrieved 1 April 2020.
  14. ^ Zemka, Sue. (2002). "'Erewhon' and the End of Utopian Humanism." ELH, 69(2), 439-472.
  15. ^ "The Origin Of The Word 'Robot'". Science Friday (public radio). 2011. Retrieved 1 April 2020.
  16. ^ a b c Hockstein, N. G.; Gourin, C. G.; Faust, R. A.; Terris, D. J. (17 March 2007). "A history of robots: from science fiction to surgical robotics". Journal of Robotic Surgery. 1 (2): 113–118. doi:10.1007/s11701-007-0021-2. PMC 4247417. PMID 25484946.
  17. ^ Sawyer, R. J. (16 November 2007). "Robot Ethics". Science. 318 (5853): 1037. doi:10.1126/science.1151606. PMID 18006710.
  18. ^ Waldrop, M. M. (1987). A question of responsibility. AI Magazine, 8(1), 28-28.
  19. ^ Williamson, Jack (1947). With folded hands.
  20. ^ Peter Swirski. The Art and Science of Stanislaw Lem. McGill-Queen's Press - MQUP, 27 Jul 2006
  21. ^ Youngs, Ian (28 September 2017). "11 great authors who wrote for Playboy". BBC News. Retrieved 1 April 2020.
  22. ^ Arthur C. Clarke. "Dial F for Frankenstein". Playboy. The Playboy Book of Science Fiction and Fantasy. Playboy Press.
  23. ^ Reggia, James A. (August 2013). "The rise of machine consciousness: Studying consciousness with computational models". Neural Networks. 44: 112–131. doi:10.1016/j.neunet.2013.03.011. PMID 23597599.
  24. ^ a b Riper, A. Bowdoin Van (2002). Science in popular culture: a reference guide. Westport (Conn.): Greenwood press. ISBN 9780313318221.
  25. ^ "The monoliths: 17 supercomputers from the '60s". AV Club Austin. 2014. Retrieved 1 April 2020.
  26. ^ Harlan Ellison. "I Have No Mouth and I Must Scream". IF: Worlds of Science Fiction, March 1967.
  27. ^ Geuss, Megan (13 January 2016). "Elon Musk tells BBC he thought Tesla, SpaceX "had a 10% chance at success"". Ars Technica. Retrieved 1 April 2020.
  28. ^ Daniel C. Dennett (1996). Did HAL commit murder?. In D. Stork (ed.), Hal's Legacy: 2001's Computer As Dream and Reality. MIT Press (1997).
  29. ^ Benson, Michael (2018). "11 Things You Didn't Know About '2001: A Space Odyssey'". Retrieved 1 April 2020.
  30. ^ a b c Young, Kevin L; Carpenter, Charli (September 2018). "Does Science Fiction Affect Political Fact? Yes and No: A Survey Experiment on "Killer Robots"". International Studies Quarterly. 62 (3): 562–576. doi:10.1093/isq/sqy028.
  31. ^ "Seven A.I. Movies That Are Better Than Transcendence". Time. 2014. Retrieved 1 April 2020.
  32. ^ Warwick, Kevin. (2005). The Matrix–Our Future?. Philosophers explore the matrix, 198-207.
  33. ^ "Alex Garland's film Ex Machina explores the limits of artificial". The Independent. 22 January 2015. Retrieved 1 April 2020.
  34. ^ "'Transcendence': Latest Sci-Fi Movie About Artificial Intelligence". 2014. Retrieved 1 April 2020.
  35. ^ Russell, Stuart (October 8, 2019). ""Chapter 5: Overly Intelligent AI"". Human Compatible: Artificial Intelligence and the Problem of Control. United States: Viking. ISBN 978-0-525-55861-3. OCLC 1083694322.
  36. ^ "Review: Transcendence Has Only Artificial Intelligence". Time. 2014. Retrieved 1 April 2020.
  37. ^ "Transcendence Director Explains The Twist Ending". CinemaBlend. 21 April 2014. Retrieved 1 April 2020.
  38. ^ "Stephen Hawking: 'Are we taking Artificial Intelligence seriously". The Independent. 1 May 2014. Retrieved 1 April 2020.
  39. ^ "5 Very Smart People Who Think Artificial Intelligence Could Bring the Apocalypse". Time. 2014. Retrieved 1 April 2020.
  40. ^ "TV's Artificial Intelligence Obsession". 2016. Retrieved 1 April 2020.
  41. ^ Jahromi, Neima (2019). "The Unexpected Philosophical Depths of Clicker Games". The New Yorker. Retrieved 8 July 2020.
  42. ^ "The Way the World Ends: Not with a Bang But a Paperclip". Wired. 2017. Retrieved 8 July 2020.
  43. ^ Stuart, Keith (15 June 2017). "Detroit: Become Human – what happens if the androids hate us?". The Guardian. Retrieved 1 April 2020.