Elon Musk tells American governors why governments should be “proactive” about managing AI risks
In 2014, Elon Musk’s warnings about the risks of AI helped spark the debate over what steps, if any, governments and industry bodies should take to regulate the development of AI. Three years later, he’s still voicing his concerns, and this weekend he brought them up with some of the most influential politicians in America.
In a speech before the National Governors Association at their summer retreat in Rhode Island, Musk said that governments need to be proactive when it comes to managing the public risks of AI:
On the artificial intelligence front, I have exposure to the very most cutting edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see like robots going down the streets killing people, they don’t know how to react because it seems so ethereal. . . . AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.
Normally, the way regulation is set up is that a whole bunch of bad things happen and there’s public outcry, and then after many years a regulatory agency [is] set up to regulate the industry. There’s a bunch of opposition from companies who don’t like being told what to do by regulators. It takes forever. But in the past there has been [things that are] bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization, in a way that car accidents, airplane crashes, faulty drugs, or bad food were not. [Those] were harmful to a set of individuals within society, of course, but they were not harmful to society as a whole. AI is a fundamental existential risk for human civilization. And I don’t think people fully appreciate that.
It’s not fun being regulated. It can be pretty irksome. In the car business, we get regulated by the Department of Transportation, by the EPA and a bunch of others. And there’s regulatory agencies in every country. In space, we get regulated by the FAA. But if you ask the average person, “Hey, do you want to get rid of the FAA? And just take a chance on manufacturers not cutting corners on aircraft because profits were down that quarter?” It’s like, “Hell no. That sounds terrible.”
I think even people who are pretty libertarian, free market, I think they are probably like, “Yeah, we should keep an eye on the aircraft companies and make sure they’re building good aircraft.” There’s a role for regulators that is very important. I’m against overregulation, for sure, but we have to get on that with AI, pronto…
This is really like the scariest problem to me. I really think we need government regulation here, just ensuring the public good is served. Because you’ve got companies that kind of have to race to build AI or they’re going to be made uncompetitive. Essentially, if your competitor is racing to build AI and you don’t, they will crush you. So then you’re like, “We don’t want to be crushed, so I guess we need to build it, too.” That’s where you need the regulators to come in and say, “You all need to really just pause and make sure this is safe. Once the regulators are convinced that it is safe, then you can go. But otherwise, slow down.” You need the regulators to do that for all the teams in the game. Otherwise, your shareholders will be saying, “Why aren’t you developing AI faster? Because your competitor is.”
Later, Musk got a rather self-promotional “question” from Arizona Governor Doug Ducey. Ducey started by plugging his own work in cutting regulations and then, well, never really asked a question:
GOV. DUCEY: I was surprised by your suggestion to bring regulations before we know exactly what we are dealing with, with AI. I have heard the example used if I were to come up with a colorless, odorless, tasteless gas that was explosive, people would say, “Well, you have to ban that,” and then we would have no natural gas. You have given some of these examples of how AI can be an existential threat… [But] typically, policymakers don’t get in front of entrepreneurs or innovators.
Musk’s response to Ducey’s non-question was spot on:
MUSK: Well, I think the first order of business would be to gain insight. Right now, the government does not even have insight . . . into the status of AI activity. Make sure the situation is understood. Once it is, then put regulations in place to ensure public safety. That’s it.
And for sure, the companies doing AI–well, most of them, not mine–will squawk and say, “This is really going to stifle innovation, blah, blah, blah [sic]. It is going to move to China.” It won’t. Has Boeing moved to China? Nope. They’re building Boeing aircraft here. Same on cars. And so the notion that if you establish a regulatory regime, that companies will simply move to countries with lower regulatory requirements is false on the face of it, because none of them do. Unless it’s really overbearing, but that’s not what I’m talking about here. I was talking about making sure there is awareness at the government level. I think that once there is awareness, people will be extremely afraid, as they should be.
Two points from Musk’s speech deserve emphasis. First, he accurately points out that there is no real evidence that industries flee in the face of reasonable regulations. That is true even when the regulation is quite intrusive, as in the case of the pharmaceutical, automotive, and aerospace industries. And, not for nothing, those companies do quite well for themselves financially. Do they pass on the costs of regulation to consumers? Sure. But I think consumers are generally okay with paying more for a flight if the higher price allows them to be reasonably sure that their plane won’t be involved in a mid-air collision. (It’s been nearly 30 years since an airline flight in the United States was last involved in a mid-air collision, and 8 years since anyone died on a US airline flight. Not bad for government work.)
Second, no one is talking about FAA/FDA/NHTSA-style regulation of the AI industry anyway, at least not in the foreseeable future. The talk is instead about making sure that governments inform themselves about both the risks and benefits of AI development. Musk believes that educational process would itself be sufficient to convince governments of the need for AI regulations. My take is subtly different: governments should educate themselves now so that if (read: when) the technology reaches a point where it could pose a public risk (which isn’t the case yet, and likely won’t be for at least the next 5 or so years), they will be well-positioned to implement effective regulations that ensure public safety.
You can watch the entire video of Musk’s remarks here.