Poll shows that support for national and international regulation of AI is broad, but is it deep?

Source: Calvin and Hobbes, Bill Watterson, Oct 27, 1987


Yesterday, Morning Consult released perhaps the most wide-ranging public survey ever conducted on AI-related issues.  In the poll, 2200 Americans answered 39 poll questions about AI (plus a number of questions on other issues).

The headline result that Morning Consult is highlighting is that overwhelming majorities of respondents supported national regulation (71% support) and international regulation (67%) of AI.  Thirty-seven percent strongly support national regulation, compared to just 4% who strongly oppose it (for international, those numbers were 35% and 5%, respectively).

Perhaps even more strikingly, the proportion of respondents who support regulation was very consistent across political and socioeconomic lines.  A full 74% of Republicans, 73% of Democrats, and 65% of independents support national regulations, as do 69% of people making less than $50k/yr, 73% making $50k-$100k, and 65% of those who make more than $100k.  Education likewise matters little: 70% of people without a college degree support national regulation, along with 74% of college grads and 70% of respondents with post-graduate degrees.  Women (75%) were slightly more likely to support such regulations than men (67%).

(Interestingly, the biggest “outlier” demographic group in terms of supporting regulation was…Jewish people.  Only 56% of Jewish respondents support national regulations for AI, by far the smallest proportion of any group.  The difference is largely attributable to the fact that more than a quarter of Jewish respondents weren’t sure if they supported regulation or not (compared to 15% of respondents as a whole).  The most pro-regulation groups were Republican women (80%) and those with blue-collar jobs (76%).)

Support for international regulations was only slightly lower: 67% of respondents overall, with a similar level of consistency among different demographic groups.

The poll’s AI regulation results are interesting, to be sure, but the responses to a number of other questions in the poll are also worth highlighting.

  • How much have you seen, read, or heard about A.I.?: A solid 21% of respondents said that they had heard “nothing at all” and 27% answered “not much.”  This jibes with my impressions of public consciousness on the issue, but it also suggests that a good many people support regulating AI despite not knowing much about it.  There are no cross-tabs for different poll questions, so there is no way to tell whether support for regulation rises or falls with familiarity, but my gut tells me that higher familiarity with the technology correlates with lower support for regulation.
  • As you may know, A.I. is the science and engineering of making intelligent machines that can perform computational tasks which normally require human intelligence. Do you think we should increase or decrease our reliance on A.I.?: Equal proportions of people answered “increase” and “decrease,” with 39% each.  Incidentally, Morning Consult stole the circular definition of “artificial intelligence” that I used in my (shameless plug alert!) Regulating AI paper.  I ain’t mad, though.  Circular definitions are the only ones that work for AI.
    • Respondents were also about equally split on whether AI was safe (41%) or unsafe (38%).
  • 57% of respondents said that their lives were already being affected by AI; just 20% said their lives had not yet been affected.
  • A long series of questions focused on whether respondents would “feel comfortable or uncomfortable delegating the following tasks to a computer with artificial intelligence.”  Unsurprisingly, people were more comfortable delegating mundane tasks than they were with tasks affecting their safety or personal life. Some of the more interesting responses:
    • Driving a car: 28% comfortable, 67% uncomfortable
    • Flying an airplane: 23% comfortable, 70% uncomfortable (including 53% “very uncomfortable”)
      • It was especially interesting to see that this drew some of the strongest negative responses, given how long commercial planes have used autopilot systems.
    • Medical diagnosis: 27% comfortable, 65% uncomfortable
    • Performing surgery: 22% comfortable, 69% uncomfortable (including 51% “very uncomfortable”)
    • Picking your romantic partner: 23% comfortable, 68% uncomfortable
    • Cleaning your house: 61% comfortable, 31% uncomfortable
    • Cooking meals: 45% comfortable, 47% uncomfortable
  • Another series of questions focused on “whether each statement makes you more or less likely to support further A.I. research.”
    • A.I. can replace human beings in many labor intensive tasks: 40% more likely, 41% less likely
    • Robots can cause mass unemployment: 31% more likely (??), 51% less likely
    • Machines may become smart enough to control humans: 22% more likely, 57% less likely
  • Do you agree or disagree that A.I. is humanity’s greatest existential threat?: 50% agree, 31% disagree

So what’s the takeaway from all this?  Well, certainly from a law-and-AI perspective, the strong support for regulation is the most interesting result.  That being said, while support for regulation is quite broad, it does not appear to be especially deep.  Just over a third of respondents strongly support regulation, which is nothing to sniff at, but it is not enough to make this a campaign issue anytime soon.

Given that nearly half of respondents knew little to nothing about AI, that number could be highly volatile.  Support could rise or fall quite quickly if AI’s encroachment into the human world continues apace.  Which direction it goes will depend on whether AI is mainly seen as something that makes our lives easier or as something that puts our lives (or our livelihoods) at risk.

Given US Treasury Secretary Steve Mnuchin’s recent comments dismissing the potential impact of AI on the labor market, it seems unlikely that AI regulation is coming to the US for at least the next 4 years.  The EU has shown some interest in AI-related issues, but they seem to have plenty else on their plate at the moment, and I doubt that AI regulation becomes a European priority.  The same can be said of Australia, Japan, South Korea, and China (although China’s state-driven economic model makes them something of a special case).

That means that despite the broad support for AI regulation, we’re unlikely to see any actual regulations coming down national or international government pipelines over the next few years.  Private sector and industry groups seem to have a window of at least a few years to establish their own system(s) of ethics and self-regulation before they need to worry about the government getting involved.

But that window could close in a hurry.  A major book, documentary, or news story can turn fence-sitters into strong proponents of regulation.  It was no coincidence that the US passed the National Traffic and Motor Vehicle Safety Act just one year after Ralph Nader published Unsafe at Any Speed.  The American auto industry quickly went from being almost completely free of national regulation to being one of the most heavily regulated industries in the world.  The same thing could happen to the burgeoning AI industry if it ignores safety concerns just because they don’t seem to pose a business problem right now.  So if Silicon Valley AI companies want to avoid the same fate as Detroit, they will need to figure out a way to effectively police themselves.

3 comments

  • Daniel Schiff

    Helpful review, thank you.

    Some very funny bits with a sizable % of people wanting AI to control humans and cause mass unemployment. That begs some deeper qualitative study. And of course the irony with people opposing automated airplanes – suggestive of the idea that what people perceive to be AI is a sliding scale.

    Thanks for keeping us updated!

  • Daniel Schiff

    Oh a clarifying thought on the support of further research RE AI controlling humans and causing mass unemployment.

    Some % perhaps want to shut the research down to avoid those risks.

    Another % perhaps want to continue the research in order to *mitigate* those risks.

    What do you think? If that’s correct, the questions need to be redefined to keep these contrasting strategies from being conflated.

    • Matt

      Good points. Given the fairly low level of public awareness of what AI actually is and what it can do, I would imagine that it would be difficult to gauge deeper feelings and fears through a traditional scientific opinion poll. One way to structure it to glean more meaningful information might be to do the mass opinion poll, ask respondents if they would be willing to participate in a follow-up qualitative study, and then select a random subset of the willing for focus groups, interviews, etc. I suspect the answers would be quite revealing.

      Frankly, the mere fact that pollsters are showing an interest in this is itself a positive sign, simply because of the near-complete absence of reliable empirical research on public perceptions of AI, so I’ll take what I can get.
