
Elon Musk Is Wrong Again. AI Isn’t More Dangerous Than North Korea.

By Michael L. Littman
August 15, 2017, 3:22 PM ET

Elon Musk’s recent remark on Twitter that artificial intelligence (AI) is more dangerous than North Korea is based on his bedrock belief in the power of thought. But this philosophy has a dark side.

If you believe that a good idea can take over the world and if you conjecture that computers can or will have ideas, then you have to consider the possibility that computers may one day take over the world. This logic has taken root in Musk’s mind and, as someone who turns ideas into action for a living, he wants to make sure you get on board too. But he’s wrong, and you shouldn’t believe his apocalyptic warnings.

Here’s the story Musk wants you to know but hasn’t been able to boil down to a single tweet. By dint of clever ideas, hard work, and significant investment, computers are getting faster and more capable. In the last few years, some famously hard computational problems have been mastered, including identifying objects in images, recognizing the words that people say, and outsmarting human champions in games like Go. If machine learning researchers can create programs that can replace captioners, transcriptionists, and board game masters, maybe it won’t be long before they can replace themselves. And, once computer programs are in the business of redesigning themselves, each time they make themselves better, they make themselves better at making themselves better.

The resulting “intelligence explosion” would leave computers in a position of power, where they, not humans, control our future. Their objectives, even if benign when the machines were young, could be threatening to our very existence in the hands of an intellect dwarfing our own. That’s why Musk thinks this issue is so much bigger than war with North Korea. The loss of a handful of major cities wouldn’t be permanent, whereas human extinction by a system seeking to improve its own capabilities by turning us into computational components in its mega-brain—that would be forever.

Musk’s comparison, however, grossly overestimates the likelihood of an intelligence explosion. His primary mistake is extrapolating from the recent successes of machine learning to the eventual development of general intelligence. But machine learning is not as dangerous as it might look on the surface.

For example, you may see a machine perform a task that appears to be superhuman and immediately be impressed. When people learn to understand speech or play games, they do so in the context of the full range of human experiences. Thus when you see something that can respond to questions or beat you soundly in a board game, it is not unreasonable to infer that it also possesses a range of other human capacities. But that’s not how these systems work.

In a nutshell, here’s the methodology that has been successful for building advanced systems of late: First, people decide what problem they want to solve and they express it in the form of a piece of code called an objective function—a way for the system to score itself on the task. They then assemble perhaps millions of examples of precisely the kind of behavior they want their system to exhibit. After that they design the structure of their AI system and tune it to maximize the objective function through a combination of human insight and powerful optimization algorithms.
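The steps above can be sketched in miniature. This is a toy illustration, not any real system: every name here (the `objective_function`, the linear model, the made-up data) is invented for the example. It shows the shape of the pipeline — assemble examples of the desired behavior, define a score, then tune parameters to maximize that score with an optimizer (here, crude random hill climbing standing in for the powerful algorithms practitioners actually use):

```python
import random

random.seed(0)

# Step 1: assemble examples of exactly the behavior we want —
# inputs paired with desired outputs (here, the rule y = 2x + 1).
examples = [(x, 2.0 * x + 1.0) for x in range(-10, 11)]

def objective_function(params):
    """Score the system on the chosen task; higher is better.
    Here: negative mean squared error of a linear model."""
    w, b = params
    error = sum((w * x + b - y) ** 2 for x, y in examples) / len(examples)
    return -error

# Step 2: tune the system's parameters to maximize the objective.
params = [0.0, 0.0]
best = objective_function(params)
for _ in range(20000):
    candidate = [p + random.gauss(0, 0.1) for p in params]
    score = objective_function(candidate)
    if score > best:           # keep any change that improves the score
        params, best = candidate, score

# The tuned model now performs well — but only on this one task.
# Nothing it learned transfers to any other problem.
```

The point of the sketch is the last comment: the optimizer recovers the rule it was scored on and nothing else, which is exactly why a new task means starting the process over.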

At the end of this process, they get a system that, often, can exhibit superhuman performance. But the performance is on the particular task that was selected at the beginning. If you want the system to do something else, you probably will need to start the whole process over from scratch. Moreover, the game of life does not have a clear objective function—current methodologies are not suited to creating a broadly intelligent machine.

Someday we may inhabit a world with intelligent machines. But we will develop together and will have a billion decisions to make that shape how that world develops. We shouldn’t let our fears prevent us from moving forward technologically.

Michael L. Littman is a professor of computer science at Brown University and co-director of Brown’s Humanity Centered Robotics Initiative.

