The long-term future of AI
In 1965, I. J. Good's article Speculations Concerning the First Ultraintelligent Machine included the following remark:
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."
For most of the history of AI, this issue has been ignored. Indeed,
Good himself continues, "It is curious that this point is made so
seldom outside of science fiction." As the capabilities of AI systems
improve, however, and as the transition of AI into broad areas of
human life leads to huge increases in research investment, it is
inevitable that the field will have to begin to take itself
seriously. The field has operated for over 50 years on one simple assumption: the more intelligent, the better.
To this must be conjoined an overriding concern for the benefit of humanity. The argument is very simple:
- AI is likely to succeed.
- Unconstrained success brings huge risks and huge benefits.
- What can we do now to improve the chances of reaping the benefits and avoiding the risks?

Several organizations are already considering these questions, including the Future of Humanity Institute at Oxford, the Centre for the Study of Existential Risk at Cambridge, the Machine Intelligence Research Institute in Berkeley, and the Future of Life Institute at Harvard/MIT. I serve on the Advisory Boards of CSER, FLI, and MIRI.
Just as nuclear fusion researchers consider the problem
of containment of fusion reactions as one of the primary
problems of their field, it seems inevitable that issues of control
and safety will become central to AI as the field matures. The
research questions are beginning to be formulated and range from
highly technical (foundational issues of rationality and utility,
provable properties of agents, etc.) to broadly philosophical.
Media, publications, etc.
- Stuart Russell, The Future of AI: What if We Succeed?, panel at IJCAI 13, Beijing, August 9, 2013.
- The Computer Whiz on Robo-Mysteries, interview by Tishani Doshi, The New Indian Express, March 2, 2014.
- Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek, "Transcending Complacency on Superintelligent Machines," Huffington Post, April 19, 2014.
- Stuart Russell, Transcendence: An AI Researcher Enjoys Watching His Own Execution, Huffington Post, April 29, 2014.
- Workshop on the Future of Artificial Intelligence held at AAMAS 14, Paris, May 6, 2014.
- Interview with Stuart Russell and Christof Koch on the movie Transcendence, NPR Science Friday, May 9, 2014.
- Interview on the long-term future of AI with Stuart Russell, on Canadian Broadcasting Corporation's Spark with Nora Young, May 31, 2014. [transcript]
- Stuart Russell, Of Myths and Moonshine, contribution to the conversation on The Myth of AI on edge.org.
- Stuart Russell and more than 7000 others, Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter, January, 2015.
- Value Alignment, Berkeley IdeasLab Debate Presentation at the World Economic Forum, Davos, January 21, 2015.
- Panel discussion live on NHK TV (Japan), World Economic Forum, Davos, January 22, 2015.
- Interview on Hub Culture TV, World Economic Forum, Davos, January 23, 2015.
- Our Fear of Artificial Intelligence, by Paul Ford, MIT Technology Review, February 1, 2015.
- Stuart Russell, Will they make us better people?, contribution to the Annual Question, 2015 on edge.org.
- Invasion of the Friendly Movie Robots, by Don Steinberg, Wall Street Journal, February 26, 2015.
- The Future of Artificial Intelligence, with Stuart Russell, Eric Horvitz, Max Tegmark, on NPR Science Friday, April 10, 2015.
- Concerns of an Artificial Intelligence Pioneer, by Natalie Wolchover, Quanta Magazine, April 21, 2015.
- How smart is today's artificial intelligence?, PBS Newshour, May 8, 2015.
- Will your job get outsourced to a robot?, PBS Newshour, May 20, 2015.
- Stuart Russell, The Long-Term Future of (Artificial) Intelligence, video of talk at the Centre for the Study of Existential Risks (Cambridge), May 15, 2015.
- Professor Stuart Russell's talk at the Centre for the Study of Existential Risks (Cambridge), by Calum Chace, May 15, 2015.
- The Good, The Bad and The Robot: Experts Are Trying to Make Machines Be 'Moral', by Coby McDonald, California Magazine, June 7, 2015.
- How Smart Should We Allow Robots to Get?, Science Friday, June 9, 2015.
- The ethics of AI: how to stop your robot cooking your cat, by John Havens, The Guardian, June 23, 2015.
- On AMC's 'Humans,' Wrong Approach to Robots May Be Just What Real Humans Need, by Hilary Brueck, Forbes Magazine, June 28, 2015.
- Are Super Intelligent Computers Really A Threat to Humanity?, panel discussion
at the Information Technology and Innovation Foundation, Washington, DC, June 30, 2015. Subsequent media coverage:
- What the debacle of climate change can teach us about the dangers of artificial intelligence, by Matt McFarland, Washington Post, July 1, 2015.
- The Terminator question: Scientists downplay the risks of superintelligent computers, by Yuan Gu, PC World, July 1, 2015.
- Robot apocalypse unlikely, but researchers need to understand AI risks, by Grant Gross, IDG News Service, July 1, 2015.
- Should We Fear "Terminator"-Style Robot Uprisings? A Washington Think Tank Discusses, by Graham Vyse, InsideSources, July 1, 2015.
- How Do We Stop Artificial Intelligence from Overpowering Humans?, by Hallie Golden, NextGov, July 1, 2015.
- The Real Threat Posed by Powerful Computers, by Quentin Hardy, New York Times, July 11, 2015.
- Which movies get artificial intelligence right?, by David Shultz, July 17, 2015.
- Fears of an AI pioneer, by John Bohannon, Science, Vol. 349 no. 6245, July 17, 2015, 252.
- Tech experts voice concern over artificial intelligence, by Karina Huber, CCTV, July 23, 2015.
- Artificial Intelligence expert likens AI dangers to nuclear weapons, by Mark Stockley, Naked Security, July 24, 2015.
- Intelligent robots don't need to be conscious to turn against us, interview by Guia Marie del Prado, Tech Insider, August 9, 2015.
- Stuart Russell, Moral Philosophy Will Become Part of the Tech Industry, Time, September 15, 2015.
- Is it in the best interest of AI not to kill us all?, Hopes and Fears blog, September 25, 2015.
- Artificial intelligence: Should we be as terrified as Elon Musk and Bill Gates?, by Jason Hiner, ZDNet, October 20, 2015.
- 19 A.I. experts reveal the biggest myths about robots, interview by Guia Marie del Prado, Business Insider, October 20, 2015.
- Big Think: Moral Philosophy Will Be Big Business in Tech, interview by Queena Kim, KQED Radio, October 25, 2015.
- This is what will happen when robots take over the world, interview by Szu Ping Chan, Daily Telegraph, November 21, 2015.
- Advancing Artificial Intelligence: An Interview with Stuart Jonathan Russell, PhD, interview by Katlyn Nemani, CardioSource World News (American College of Cardiology), Vol. 4 No. 10, pages 48-50, October, 2015.
- Stuart Russell, Future of Artificial Intelligence and the Human Race, TEDxYouth@EB, Ecole Bilingue de Berkeley, November 14, 2015.
- The End of Employment (trailer), short documentary film by Lena Halberstadt, released December 21, 2015.
- What's So Exciting About AI? Conversations at the Nobel Week Dialogue, blog post by
Meia Chita-Tegmark, Huffington Post, December 22, 2015.
- Stuart Russell, Tom Dietterich, Eric Horvitz, Bart Selman, Francesca Rossi, Demis Hassabis, Shane Legg, Mustafa Suleyman, Dileep George, and Scott Phoenix,
Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter,
AI Magazine, Vol. 36, No. 4, 2015.
- Stuart Russell, Daniel Dewey, and Max Tegmark,
Research Priorities for Robust and Beneficial Artificial Intelligence,
AI Magazine, Vol. 36, No. 4, 2015.
- Stuart Russell and others, The State of Artificial Intelligence,
panel session moderated by Jen Moon, World Economic Forum, January 20, 2016.
- A world where everyone has a robot: why 2040 could blow your mind, interview by Declan Butler, Nature, February 24, 2016.
- Digital Genies, interview by Jacob Brogan, Slate, April 22, 2016.
- Anticipating artificial intelligence, editorial in Nature, April 26, 2016.
- Artificial Intelligence: Friend or Foe?, panel discussion at Milken Institute Global Conference, Los Angeles, May 2, 2016.
- Scientists Warn AI Can Be Dangerous as Well as Helpful to Humans, interview by Elizabeth Lee, Voice of America, May 11, 2016.
- We can't prevent AI changing the world but we can stop robots cooking cats, by Nick Heath, TechRepublic, June 6, 2016.
- Google Tackles Challenge of How to Build an Honest Robot, by Jack Clark, Bloomberg News, June 21, 2016.
- UC Berkeley launches Center for Human-Compatible Artificial Intelligence, by Jeffrey Norris, Berkeley News, August 29, 2016.
- Funding Announcement from Open Philanthropy Project, August 30, 2016.
- The rise of robots: forget evil AI - the real risk is far more insidious, by Olivia Solon, The Guardian, August 30, 2016.
- New Center for Human-Compatible AI, by Ariel Conn, Future of Life Institute, August 30, 2016.
- Could artificial intelligence help humanity? Two California universities think so, by Amina Khan, Los Angeles Times, August 31, 2016.
- Why Artificial Intelligence Needs Some Sort of Moral Code, by Jon Vanian, Fortune, September 5, 2016.
- How Tech Giants Are Devising Real Ethics for Artificial Intelligence, by John Markoff, New York Times, August 31, 2016.