Notes from the 2016 Governance of Emerging Technologies Conference
This past week, I attended the fourth annual Governance of Emerging Technologies Conference at Arizona State's Sandra Day O'Connor College of Law. The symposium included a number of concurrent sessions, so I had to miss several presentations I wanted to see, but the ones I did manage to catch were very informative. Here are some thoughts.
The conference was a sobering reminder of why AI is not a major topic on the agenda of governments and international organizations around the world: there are a whole lot of emerging technologies posing new ethical questions and creating new sources of risk. Nanotechnology, bioengineering, and the "Internet of Things" are all raising new issues that policymakers must analyze. To make matters worse, governments the world over are not even acting with the necessary urgency on comparatively longstanding sources of catastrophic risk such as climate change, global financial instability, political and social instability in the Middle East, and both civil and military nuclear security. So it shouldn't be surprising that AI is not at the top of the agenda in Washington, Brussels, Beijing, or anywhere else outside Silicon Valley, and there is no obvious way to make AI writ large a higher policy priority in the immediate future without engaging in disingenuous scaremongering.