Elon Musk tells American governors why governments should be “proactive” about managing AI risks

In 2014, Elon Musk’s warnings about the risks of AI helped spark the debate over what steps, if any, governments and industry bodies should take to regulate AI development.  Three years later, he’s still voicing those concerns, and this weekend he brought them up with some of the most influential politicians in America.

In a speech before the National Governors Association at its summer retreat in Rhode Island, Musk said that governments need to be proactive in managing the public risks of AI:

» Read more

Law and AI Quick Hits: Canada Day / Fourth of July edition

Credit: Randy Glasbergen

Here’s a quick roundup of law- and policy-relevant AI stories from the past couple weeks.

A British privacy watchdog ruled that a group of London hospitals violated patient privacy laws by sharing information with Google DeepMind.  Given the major tech companies’ constant push for access to data (in no small part because more data is crucial in the age of learning AI systems), expect to see many more data privacy disputes like this in the future.

Canada’s CTV reports on the continued push by some AI experts for “explainable” and “transparent” AI systems, and on the skepticism of other experts about the feasibility of building AI systems that can “show their work” in a useful way.  Peter Norvig points to a potentially interesting workaround:

» Read more