Law and AI Quick Hits: September 26-30, 2016
A short round-up of recent news of interest to Law and AI.
In the Financial Times, John Thornhill writes on "the darker side of AI if left unmanaged: the impact on jobs, inequality, ethics, privacy and democratic expression." Thornhill takes several proverbial pages from the Stanford 100-year study on AI, but does not ultimately offer his own view of what effective AI "management" might look like.
Patrick Tucker writes in Defense One that a survey funded by the Future of Life Institute found "that the U.S. military more commonly uses AI not to help but to replace human operators, and, increasingly, human decision making." In the process, he gives voice to fears held by many people (well, at least by me) about how an autonomous weapons arms race might play out:
Today, the United States continues to affirm that it isn’t interested in removing the human decision-maker from “the loop” in offensive operations like drone strikes (at least not completely). That moral stand might begin to look like a strategic disadvantage against an adversary that can fire much faster, conduct more operations, hit more targets in a smaller amount of time by removing the human from loop.
Microsoft CEO Satya Nadella sat down for an interview with Dave Gershgorn of Quartz. Among other things, Nadella discusses the lessons Microsoft learned from Tay the Racist Chatbot, namely the need to build "resiliency" into learning AI systems to protect them from threats that might cause them to "learn" bad things. In the case of Tay, Microsoft failed to make the chatbot resilient to trolls, with results that were at once amusing and troubling.