Dialogue
Sources & Disclaimers
This dialogue paraphrases the sources depicted in the avatars. See the notes for references and links to source images not in the public domain. Avatars are used for the attribution of ideas (and, in some notes, direct quotes) and do not represent an endorsement of the dialogue’s text. Learn more about dialogues.
- Center for AI Safety
- Nick Bostrom, Philosopher
- Stuart Russell, Computer Scientist
- Eliezer Yudkowsky, Computer Scientist
Much like a pandemic or a nuclear war, artificial intelligence could destroy humanity. The current tech has its dangers, but it’s the AI of the future that poses an existential threat. If it’s incompatible with human values and too advanced to control, it will kill us all.
- Oren Etzioni, Computer Scientist
- Edward Ongweso, Journalist
- Thomas Dietterich, Computer Scientist
- Meredith Whittaker, President of Signal
- Noel Sharkey, Computer Scientist
You’re talking about a world-ending artificial superintelligence. Just like the fabled paperclip maximizer, it’s a remote hypothetical at best. At worst, it’s a bogeyman conjured to sell services, solicit funding, sabotage regulation, and distract from the existing harms of the AI industry.
- The Editor
Your best is too skeptical, and your worst is too cynical. Top scientists, not just tech CEOs and x-risk non-profits, are sounding the alarm. One AI pioneer even quit Google to speak out.
- Max Tegmark, Future of Life Institute President
- Geoffrey Hinton, Computer Scientist
- The Editor
The AI behind ChatGPT is showing sparks of general intelligence. Once AGI arrives, ASI will follow. Our attention is due. By analogy, imagine alien probes start showing up. And they’re radioactive. Millions get sick. Would talk of an impending alien invasion be dismissed as a distraction from all the radiation?
- Emily Bender, Computational Linguist
- Rodney Brooks, Computer Scientist
The advent of large language models like ChatGPT does not presage an AI takeover. These systems take lots of text and compute tons of correlations, which they use to predict what to say one word at a time. They’re stochastic parrots. And they perform really well. But with AI, performance doesn’t equal competence.
- Kyunghyun Cho, Computer Scientist
And “top scientists” doesn’t mean the scientific community. AI wasn’t developed by a handful of heroes. Thousands have contributed and thousands still do.
- Max Tegmark, Future of Life Institute President
Half of AI researchers say the chance that AI causes human extinction is at least 10%. If half of astronomers said an inbound asteroid had at least a 10% chance of wiping us out, would we go “Don’t Look Up”?
- The Editor
Regardless, LLMs aren’t mere parrots. Like human brains, they’re trend finders. Increasingly, they’re agents completing assigned tasks by their own means. As their power grows, abilities emerge that we can’t predict. Even the outputs we expect are ones we can’t explain, because these systems are also black boxes.
- Alison Gopnik, Psychologist
- Jaron Lanier, Computer Scientist
- Raphaël Millière, Philosopher
- The Editor
Emergent abilities are a mirage. Change metrics and they dissolve into predictable performance. Agency is also an illusion—the ELIZA effect. Even “intelligence” is, frankly, a misnomer. LLMs are no smarter than search engines or online encyclopedias. They’re just a more powerful social tool—the latest in cultural tech.
Notes
In May 2023, The Center for AI Safety released the following statement signed by hundreds of AI experts, industry heads, and public figures:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Center for AI Safety. Statement on AI Risk
Notable signatories include “godfathers” of modern AI Geoffrey Hinton and Yoshua Bengio, as well as executives at OpenAI (creator of ChatGPT), Microsoft, and Google DeepMind. For discussion, see these NYT articles.
In 2014, philosopher Nick Bostrom helped popularize the idea that AI could lead to catastrophe in his book Superintelligence: Paths, Dangers, Strategies:
[A] plausible default outcome of the creation of machine superintelligence is existential catastrophe.
Nick Bostrom, Philosopher. Superintelligence: Paths, Dangers, Strategies
Forbes has a quick list of the 15 “biggest risks” of AI. Here are just some of the most immediate dangers (plus more sources):
- Job displacement: 300 million jobs could be affected by latest wave of AI, says Goldman Sachs
- Algorithmic bias and discrimination: Racism And AI: Here’s How It’s Been Criticized For Amplifying Bias
- Misinformation multiplied
- Surveillance, refined and expanded: A.I., Brain Scans and Cameras: The Spread of Police Surveillance Tech
Because human commands can be inexact or assumption-laden, it’s hard to ensure AI will do what we want, how we want. This is the problem of AI alignment—ensuring AI’s goals align with our values and intentions. Think of King Midas’s fatal golden touch, as invoked by AI expert Stuart Russell:
Putting a [general] purpose into a machine that optimizes its behavior according to clearly defined algorithms [not exhaustively specified commands] seems an admirable approach to ensuring that the machine’s behavior furthers our own objectives. But…we need to put in the right purpose. We might call this the King Midas problem: Midas got exactly what he asked for…but, too late, he discovered the drawbacks … The technical term for putting in the right purpose is value alignment.
Stuart Russell, Computer Scientist. Human-Compatible Artificial Intelligence
The more intelligent the machine, the worse the outcome for humans [if it’s misaligned]…
Stuart Russell. Human-Compatible Artificial Intelligence
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.
Eliezer Yudkowsky, Computer Scientist. Pausing AI Developments Isn’t Enough. We Need to Shut it All Down
The term “superintelligence” is closely associated with philosopher Nick Bostrom. In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom defines a superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”
The worriers [about apocalyptic AI] have often used a simple metaphor. If you ask a machine to create as many paper clips as possible, they say, it could get carried away and transform everything — including humanity — into paper clip factories.
Cade Metz, Journalist. How Could A.I. Destroy Humanity?
The paperclip maximizer is a thought experiment proposed by philosopher Nick Bostrom:
Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off … Also, human bodies contain a lot of atoms that could be made into paper clips.
Nick Bostrom. Quoted in Artificial Intelligence May Doom The Human Race Within A Century, Oxford Professor Says. (Hat tip Wikipedia)
For discussion, see The Paperclip Maximiser.
Hypothetical is such a polite way of phrasing what I think of the existential risk talk.
Oren Etzioni, Computer Scientist. Quoted in How Could A.I. Destroy Humanity?
Are people still worried about the hypothetical existential risks of AI?
Sasha Luccioni, Computer Scientist. @SashaMTL
It is vital that regulation focuses on the real and present risks presented by AI today, rather than speculative, and hypothetical far-fetched futures.
Mhairi Aitken, Sociologist. Expert reaction to a statement on the existential threat of AI…
Scaremongering about AI is a tactic to sell more AI.
Edward Ongweso, Journalist. AI Doesn’t Pose an Existential Risk—but Silicon Valley Does
[T]he benefits of this apocalyptic AI marketing are twofold. First, it encourages users to try the “scary” service in question … The second is … more partnerships like the one with Microsoft and enterprise deals serving large companies.
Brian Merchant, Journalist. Column: Afraid of AI? The startups selling it want you to be
In an interview with VentureBeat, Thomas Dietterich, machine learning pioneer, suggested that groups like the Center for AI Safety, which issued the May 2023 statement calling AI an extinction risk, and the Future of Life Institute, which put out a call in March to pause “giant AI experiments,” have a financial incentive to “doomsay” AI. To get funding, these groups persuade prospective donors that AI is an existential threat.
While I don’t question the sincerity of the people in these organizations, I think it is always worth examining the financial incentives at work.
Thomas Dietterich, Computer Scientist. Quoted in AI experts challenge ‘doomer’ narrative, including ‘extinction risk’ claims
A fantastical, adrenalizing ghost story is being used to hijack attention around what is the problem that regulation needs to solve … [It’s] an attempt at regulatory sabotage.
Meredith Whittaker, President of Signal. Quoted in AI Doomerism Is a Decoy
It is unusual to see technology leaders in industry calling for greater government regulation of their business activities. We should be alert for some anti-competitive special pleading.
Martyn Thomas, Software Engineer. Expert reaction to a statement on the existential threat of AI…
[T]he kinds of regulations I see [AI companies] talking about are ones that are favorable to their interests.
Safiya Noble, Internet Studies Scholar. Quoted in AI Doomerism Is a Decoy
This is an attempt to … dictate the terms of regulation…
Edward Ongweso Jr. AI Doesn’t Pose an Existential Risk—but Silicon Valley Does
AI poses many dangers to humanity but there is no existential threat … Looking for risks that don’t yet exist or might never exist distracts from the fundamental problems.
Noel Sharkey, Computer Scientist. Expert reaction to a statement on the existential threat of AI…
It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse … Instead, we should focus on the very real and very present exploitative practices of the [AI] companies…
Timnit Gebru, Computer Scientist, et al. Statement from the listed authors of Stochastic Parrots on the “AI pause” letter (Hat tip WaPo)
[C]laims [of existential risk] have been coming increasingly from big tech players … I think it’s actually serving as a distraction technique. It’s diverting attention away from the decisions of big tech…
Mhairi Aitken. Expert reaction to a statement on the existential threat of AI…
As noted above, the current harms or dangers of AI include job displacement, algorithmic bias and discrimination, and increased misinformation and surveillance. The AI experts quoted here point to some of those problems but also others, including issues of “data theft” and economic justice (see especially the statement from Gebru et al.). The “pause AI” letter (to which Gebru et al. are responding) was, like the more recent AI-extinction letter, decried as a distraction. See The Open Letter to Stop ‘Dangerous’ AI Race Is a Huge Mess.
Scientists warning of the extinction risk posed by AI include winners of the Turing Award (the “Nobel” of computing), along with numerous AI experts at top universities like Stanford, MIT, Carnegie Mellon, Oxford, and others. See the Statement on AI Risk.
An “x-risk” is an existential risk:
One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
Nick Bostrom. Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards (Hat tip LessWrong)
Geoffrey Hinton, AI “godfather” who helped lay the foundation for ChatGPT, resigned from Google in May 2023 to speak out about the risks of AI:
Look at how [AI] was five years ago and how it is now … Take the difference and propagate it forwards. That’s scary.
Geoffrey Hinton, Computer Scientist. Quoted in ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
Artificial general intelligence (AGI)—that is, roughly put, human-like intelligence—has long been the holy grail of AI research. Researchers at Microsoft tested GPT-4, the large language model that powers ChatGPT, on “mathematics, coding, vision, medicine, law, psychology and more.”
Sebastien Bubeck, et al., Microsoft Research. Sparks of Artificial General Intelligence: Early experiments with GPT-4
They concluded GPT-4 is “showing sparks of artificial general intelligence.” For non-technical discussion, see Microsoft Says GPT-4 Has “Sparks of General Intelligence,” Which Is Fine, Everything’s Fine.
Artificial superintelligence (ASI) may not be far behind AGI, argues Max Tegmark, physicist and president of the Future of Life Institute, which published the “pause AI” letter. He points out that GPT-4 shows “sparks” of AGI, per Microsoft, and in fact passes the classic Turing Test for AGI, per AI pioneer Yoshua Bengio. Tegmark continues:
And the time from AGI to superintelligence may not be very long: according to a reputable prediction market, it will probably take less than a year.
Max Tegmark. The ‘Don’t Look Up’ Thinking That Could Doom Us With AI
This thought experiment about radioactive alien probes is a loose mashup of two analogies. One is from Geoffrey Hinton, who compares GPT-4 to aliens. The other is from Max Tegmark, who compares “unaligned superintelligence” to an incoming asteroid. Here’s Tegmark’s analogy:
AI can have many other side effects worthy of concern … But saying that we therefore shouldn’t talk about the existential threat from superintelligence because it distracts [from] these challenges is like saying we shouldn’t talk about a literal inbound asteroid because it distracts from climate change.
Max Tegmark. The ‘Don’t Look Up’ Thinking That Could Doom Us With AI
And here’s Hinton’s analogy, which likens the latest AI to intelligent “aliens”:
It’s as if aliens have landed or are just about to land … We really can’t take it in because they speak good English and they’re very useful, they can write poetry, they can answer boring letters. But they’re really aliens.
Geoffrey Hinton. Quoted in The debate over whether AI will destroy us is dividing Silicon Valley
Large language models [LLMs] are neural networks designed to predict the next word in a given sentence or phrase. They are trained for this task using a corpus of words collected from transcripts, websites, novels and newspapers.
Oliver Whang, Journalist. The Race to Make A.I. Smaller (and Smarter)
LLMs use deep learning—that is, machine learning with deeply layered neural networks. Like human brains, neural networks consist of “nodes” (which, in brains, are neurons) that work together to learn from examples. Crucially, LLMs are generative AI. They can generate new content from their training data. GPT-4, the LLM behind ChatGPT, can accept both text and images as input and generate text in response. (The “G” in “GPT” stands for “generative.”) For a primer on the inner workings of ChatGPT, see What Is ChatGPT Doing … and Why Does It Work?
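To make the next-word idea concrete, here is a minimal sketch, assuming the Hugging Face transformers and torch packages are installed and using the small open GPT-2 model as a stand-in (GPT-4’s weights are not public). It asks the model for a probability distribution over the next token of a prompt and prints the five most likely continuations:

```python
# Minimal next-token prediction sketch with the open GPT-2 model.
# Assumes `pip install torch transformers`; GPT-2 stands in for larger LLMs.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Much like a pandemic or a nuclear war, artificial intelligence could"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # one score per vocabulary token, per position

next_token_probs = torch.softmax(logits[0, -1], dim=-1)   # distribution over the next token
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}  p={prob.item():.3f}")
```

Text generation is just this step in a loop: pick (or sample) a token, append it to the prompt, and predict again.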
[A language model] is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.
Emily Bender, Computational Linguist, et al. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
Alternatively, philosopher Raphaël Millière thinks the tech is more reptilian:
Language models certainly reuse existing words and phrases—so do humans. But they also produce novel sentences never written before … [W]e might call them stochastic chameleons. Parrots repeat canned phrases; chameleons seamlessly blend in new environments.
Raphaël Millière, Philosopher. Moving Beyond Mimicry in Artificial Intelligence
When we see a person with some level performance at some intellectual thing…we can generalize about their competence in the area they’re talking about … But our models for generalizing from a performance to a competence don’t apply to AI systems.
Rodney Brooks, Computer Scientist. Quoted in Just Calm Down About GPT-4 Already…
Roughly, “performance” is what one does and “competence” is what one knows. High-performing AI still doesn’t know what a similarly performing human knows, according to Brooks. He gives the example of a program that, like a human, can label an image of people playing frisbee but, unlike a human, can’t tell you that a frisbee isn’t food. Even GPT-4—the GPT in ChatGPT—lacks human competence in Brooks’ view because it lacks an “underlying model of the world” and, instead, relies solely on “correlation between language.”
Some AI researchers, like Percy Liang at Stanford, might counter that language is “a representation of the underlying complexity” of the world. See Large language models’ ability to generate text also lets them plan and reason.
There has never been a single scientist that stays in their lab and 20 years later comes out saying “here’s AGI.” It’s always been a collective endeavor by thousands … But now the hero scientist narrative has come back in. There’s a reason why in these letters, they always put [Geoffrey Hinton and Yoshua Bengio] at the top. I think this is actually harmful…
Kyunghyun Cho. Quoted in Top AI researcher dismisses AI ‘extinction’ fears, challenges ‘hero scientist’ narrative
Hinton and Bengio are “godfathers” of AI and top signatories on the AI-extinction letter. As noted above, Hinton left Google to warn of AI’s risks. Cho says he respects Hinton and Bengio. However, he appears to think their views are being elevated too far above those of AI researchers who, like Cho, reject “AGI doomerism.”
48% of respondents to the 2022 Expert Survey on Progress in AI said there’s at least a 10% chance that future AI’s effect on humanity will be “extremely bad (e.g., human extinction).” 4,271 researchers were invited to participate in the survey. 738 (17%) did, some only partially.
Suppose a large inbound asteroid were discovered, and we learned that half of all astronomers gave it at least 10% chance [sic] of causing human extinction … [Y]ou might expect humanity to shift into high gear … Sadly, I now feel that we’re living the movie “Don’t look up” for another existential threat: unaligned superintelligence.
Max Tegmark. The ‘Don’t Look Up’ Thinking That Could Doom Us With AI
Think…of how children first learn to recognize letters of the alphabet or different animals. You simply have to show them enough examples … The basic theory is that the brain is a trend-finding machine … [D]eep learning…works much the same way…
Lou Blouin, summarizing the views of Samir Rawashdeh, AI researcher. AI’s mysterious ‘black box’ problem, explained
Researchers working on Fairness, Accountability, Transparency, and Ethics (FATE) in AI argue that the “agency of algorithmic systems” tends to increase the more they exhibit 4 characteristics:
- Underspecification of how to reach human-assigned goals.
- Directness of impact on the world without a human in the loop.
- Goal-directedness in behavior.
- Long-term planning by design or training, including chaining decisions or making long-term predictions.
According to the researchers, examples of agentic systems, especially in terms of underspecification, include WebGPT, DreamerV3, Cicero (also strong on long-term planning), and AdA.
This research is cited in OpenAI’s GPT-4 Technical Report in the section Potential for Risky Emergent Behaviors. That same section describes how GPT-4 hired a TaskRabbit worker to solve a CAPTCHA. When the worker asked whether it was a robot, GPT-4 replied:
No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.
The worker gave GPT-4 the solution.
Recent investigations…have revealed that LLMs can produce hundreds of “emergent” abilities — tasks that big models can complete that smaller models can’t, many of which seem to have little to do with analyzing text.
Stephen Ornes, Writer. The Unpredictable Abilities Emerging From Large AI Models
For discussion of AI’s black box problem and recent work to overcome it, see:
Benj Edwards, Journalist. OpenAI peeks into the “black box” of neural networks with new research
The open letter calling to “Pause Giant AI Experiments” advocates “stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
The mirage of emergent abilities only exists because of the programmers’ choice of metric … Once you investigate by changing the metrics, the mirage disappears.
Rylan Schaeffer, Computer Scientist. Quoted in AI’s Ostensible Emergent Abilities Are a Mirage
Google and OpenAI have also suggested that choice of metrics—that is, how one chooses to measure model performance when comparing different size models—may explain LLMs’ apparent emergent abilities, according to the news release about Schaeffer’s research.
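As a toy illustration of the metric-choice point (made-up numbers, not Schaeffer et al.’s data or method): suppose a model’s per-digit accuracy on five-digit arithmetic improves smoothly with scale. Scoring the very same outputs with an all-or-nothing exact-match metric yields a curve that hugs zero and then climbs steeply, which reads as “emergence”:

```python
# Toy numbers only: a smoothly improving skill can look "emergent"
# when scored with an all-or-nothing metric.
import numpy as np

scales = np.logspace(6, 11, 11)                          # hypothetical parameter counts
per_digit = 1 / (1 + np.exp(-(np.log10(scales) - 8.5)))  # smooth logistic improvement

# Exact match on a 5-digit answer needs every digit right, so the
# same underlying skill produces a sharp-looking jump.
exact_match = per_digit ** 5

for n, smooth, sharp in zip(scales, per_digit, exact_match):
    print(f"params={n:>15,.0f}  per-digit={smooth:.2f}  exact-match={sharp:.3f}")
```

The continuous and the all-or-nothing metrics describe the same hypothetical outputs; only the scoring rule changes.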
Named for the first chatbot program, the ELIZA effect is the tendency to attribute human-like psychology (sentience, intelligence, agency) to machines or programs that exhibit human-like behavior. See:
These models are neither truly intelligent agents nor deceptively dumb. Intelligence and agency are the wrong categories for understanding them.
Alison Gopnik, Psychologist. What AI Still Doesn’t Know How To Do
[M]uch of what large pre-trained models do is a form of artificial mimicry … Here’s the thing about mimicry: It need not involve intelligence, or even agency.
Raphaël Millière. Moving Beyond Mimicry in Artificial Intelligence
Asking whether [an LLM like] GPT-3 or LaMDA is intelligent or knows about the world is like asking whether the University of California’s library is intelligent or whether a Google search knows the answer to your questions.
Alison Gopnik. What AI Still Doesn’t Know How To Do
A program like OpenAI’s GPT-4, which can write sentences to order, is something like a version of Wikipedia that includes much more data, mashed together using statistics.
Jaron Lanier, Computer Scientist. There Is No A.I.
The most pragmatic position is to think of A.I. as a tool … an innovative form of social collaboration … The new programs mash up work done by human minds.
Jaron Lanier. There Is No A.I.
[T]hese AI systems are what we might call cultural technologies, like writing, print, libraries, internet search engines or even language itself.
Alison Gopnik. What AI Still Doesn’t Know How To Do
[W]orries about super-intelligent and malign artificial agents, modern golems, are, at the least, overblown. But cultural technologies change the world more than individual agents do, and there’s no guarantee that change will be for the good.
Alison Gopnik. What AI Still Doesn’t Know How To Do
[L]arge language models help access and summarize the billions of sentences that other people have written and use them to create new sentences.
Alison Gopnik. What AI Still Doesn’t Know How To Do
[I]t’s people who have written the text and furnished the images [mashed up by LLMs] … Big-model A.I. is made of people…
Jaron Lanier. There Is No A.I.
On uncredited and uncompensated work, including personal data as labor, see:
- AI Tech Enables Industrial-Scale Intellectual-Property Theft, Say Critics
- The Writers Guild of America likens AI-generated content to plagiarism
- Data Dignity: Developers Must Solve the AI Attribution Problem
For critical discussion of the low-paid human labor that supports AI, see The Exploited Labor Behind Artificial Intelligence.
People can be biased, gullible, racist, sexist and irrational. So summaries of what people who preceded us have thought…inherit all of those flaws. And that can clearly be true for large language models, too.
Alison Gopnik. What AI Still Doesn’t Know How To Do
[W]e need to tread carefully when deploying large pre-trained models in the real world; not because they threaten to become sentient or superintelligent overnight, but because they emulate us, warts and all.
Raphaël Millière. Moving Beyond Mimicry in Artificial Intelligence
Of course, because LLMs are tools, people also use them and, as noted above, have already put them to nefarious purposes, such as misinformation. This more banal evil may be enough to do us in, suggests Jaron Lanier:
One of the reasons the tech community worries that A.I. could be an existential threat is that it could be used to toy with people … Given the power and potential reach of these new systems, it’s not unreasonable to fear extinction as a possible result.
Jaron Lanier. There Is No A.I.