Notes from the 2016 Governance of Emerging Technologies Conference

Source: Frank Cotham/The New Yorker

This past week, I attended the fourth annual Governance of Emerging Technologies Conference at Arizona State’s Sandra Day O’Connor College of Law.  The symposium’s format included a number of sessions that ran concurrently, so I ended up missing several presentations I wanted to see.  But the ones I did manage to catch were very informative.  Here are some thoughts.

The conference was a sobering reminder of why AI is not a major topic on the agenda of governments and international organizations around the world: there are a whole lot of emerging technologies posing new ethical questions and creating new sources of risk.  Nanotechnology, bioengineering, and the “Internet of Things” are all raising new issues that policymakers must analyze.  To make matters worse, governments the world over are not acting with the necessary urgency even on comparatively longstanding sources of catastrophic risk such as climate change, global financial instability, political and social instability in the Middle East, and both civil and military nuclear security.  So it shouldn’t be surprising that AI is not at the top of the agenda in Washington, Brussels, Beijing, or anywhere else outside Silicon Valley, and there is no obvious way to make AI writ large a higher policy priority in the immediate future without engaging in disingenuous scaremongering.

Autonomous vehicles probably will be a high policy priority simply because of their potential economic and social impact.  Maybe autonomous weapon systems (AWSs) will be a higher priority soon as well, but the deliberate pace at which military tech development occurs probably means we won’t see widespread deployment of such systems for at least several years, if not more than a decade.  That reduces the sense of urgency surrounding AWS regulation.  Maybe the use of AI in medicine will get a few glances as well.  But beyond those narrow applications, it’s hard to think of many areas in which we might see policy movement on AI during the next 5-10 years, at least at the level of national legislatures and international bodies.

That being said, here is an overview of some of the presentations from the GET conference:

  • I missed the presentation of Chris Jenks of SMU Law on offensive autonomous weapons (with a particular focus on naval tech), but he was kind enough to send me his slides.  Jenks has made the point that defensive AWSs have actually been in place for years and that offensive AWSs may be coming in the near future–if for no other reason than that continued advances in defensive AWSs will probably bring militaries to the point where only an offensive AWS could overcome those defenses.  His latest paper, False Rubicons, Moral Panic & Conceptual Cul-De-Sacs: Critiquing & Reframing the Call to Ban Lethal Autonomous Weapons, is available on SSRN.
  • Other AWS presentations came from Colonel/Dr. Metodi Hadji-Janev of the Macedonian Military Academy, who discussed AWSs in the context of the Law of Armed Conflict; and the duo of David Danks (Carnegie Mellon) and Heather Roff (ASU/Oxford), who discussed routes to “trust” with respect to AWSs–that is, how soldiers might gain familiarity and confidence when they deploy (or are deployed alongside) AWSs.  [Side note: Col. Hadji-Janev looked like a boss in a blue suede blazer and red slacks.]
  • Yaniv Heled of Georgia State gave a comprehensive overview of the current state of the law on autonomous vehicles along with an analysis of where the law might go from here.
  • Other noteworthy AI-related presentations that I caught came from W. Nicholson Price (University of New Hampshire) on medical AI systems; Deven Desai (Georgia Tech) on governing algorithms; Peter Asaro (The New School) on liability for autonomous weapons; Wendell Wallach (Yale, et al.) on the ethics and governance of AI innovation, and on how his joint proposal with Gary Marchant (ASU) for an AI Governance Coordinating Committee may be becoming a reality; Kendra Chilson (ASU) on decision-making; and Uriel Eldan (Zvi Meitar Institute, Israel) on the coming of robolawyers.
  • AI-relevant keynote talks came from Daniel Christensen of Intel on the emerging Internet of Things and Kay Firth-Butterfield, who discussed Lucid.ai’s Ethics Advisory Panel, which she heads.
  • One non-AI-specific presentation that I really enjoyed came from ASU’s Majia Nadesan, titled “New Technologies and Catastrophic Risks: Hubris in the Anthropocene.”  It included a depressing overview of some recent man-made catastrophes that could have been prevented by competent regulation, such as the BP oil spill, the financial crisis, and the Fukushima Daiichi nuclear disaster.  She made some great points about large corporations’ adeptness at externalizing the risks arising from their operations–a phenomenon with disturbing implications for AI and other emerging technologies whose risk profiles are not fully understood.
