For most of my life, I’ve been disappointed in robots. Movies always depicted them as walking, talking, humanoid, smart—and cool. But for decades, real robots have been little more than assembly-line arms at car factories.

In the past three years, though, something has shifted. Self-driving cars have logged nearly two million miles on public roads. Drones have gotten smart enough to avoid hitting things. And two-legged, walking robots are suddenly real.

Now luminaries—including Bill Gates, Stephen Hawking and Elon Musk—are speaking out about the dangers of our increasingly smart machines. “Full artificial intelligence could spell the end of the human race,” Hawking has told the BBC.

It’s one thing for an easily spooked public to mistrust artificial intelligence. But Gates, Hawking and Musk?

As it turns out, all three were responding to an initiative by Massachusetts Institute of Technology professor Max Tegmark. In 2014 he co-founded the Future of Life Institute, whose purpose is to consider the dark side of artificial intelligence.

“When we invented less powerful technology, like fire,” Tegmark told me, “we screwed up a bunch of times; then we invented the fire extinguisher. Done. But with more powerful technologies like human-level artificial intelligence, we want to get things right the first time.”

The worry is that once AI gets smart enough, it will be able to improve its own software, over and over again, perhaps every hour or even every minute. It will quickly become so much smarter than humans that—well, we don’t actually know. “It could be wonderful, or it could be pretty bad,” Tegmark says.

In many of Isaac Asimov’s futuristic tales, humans programmed robots with the Three Laws of Robotics. For example: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Wouldn’t that kind of software safeguard work?

“The funny thing about Asimov’s novels,” Tegmark says, “is almost all of the Three Laws stories are about how something goes wrong with them.”

Programming machines to obey us precisely can backfire in unexpected ways. “If you tell your super-AI car to get to the airport as fast as possible, it’ll get you there—but you’ll arrive chased by helicopters and covered in vomit,” he warns. Not exactly as intended.

But there are bigger dangers. In July, Tegmark’s group released an open letter expressing alarm over the rising threat of autonomous weapons—a terrorist’s dream. (Hawking, Musk and Apple co-founder Steve Wozniak were among the letter’s 2,500 co-signers.) The United Nations is discussing a ban on AI weapons.

On a more day-to-day scale, robots will likely take even more of our jobs. The first to go, of course, will be the ones that are the most repetitive or the most easily automated, such as store clerks, tax preparers and paralegals. (Some Japanese banks already employ robots to assist customers.) “If you teach kindergarten or you’re a massage therapist, you’ll get to keep your job a lot longer,” Tegmark says. He imagines that, finances aside, the loss of jobs will also mean a loss of human fulfillment. “Today so much of our sense of purpose comes from our jobs. We should think hard about the sort of jobs that we would like to keep doing and getting our identity from. Education? The arts, culture, service jobs? Or what, exactly?”

Such alarm bells prompted Musk (co-founder of Tesla Motors and founder of SpaceX) to donate $10 million to the Future of Life Institute (and serve, with Hawking and others, as a scientific adviser for the cause). The group has so far received hundreds of research-grant proposals, funded dozens of them and held major meetings on the topic.

The message, in the end, is not that AI will lead us inevitably to doomsday or a life of ennui but that our contemplation of its effects should keep pace with rapid developments in AI itself. “AI also has enormous upsides—potential to cure all diseases, eliminate poverty, help life spread into the cosmos—if we get it right. Let’s not just drift into this like a sailboat without its sail up properly. Let’s chart our course, carefully planned,” Tegmark says.