A Look at Law & AI in 2018
AI was busy in 2018. With the year coming to a close, let’s look at three important developments in law and AI, and consider what they might imply for the coming year.
The Regulation Debate
Perhaps the biggest issue facing law and AI can be broadly put as “regulation.” More precisely, will governments regulate AI, and if so, how? This overarching question permeates the field and touches many different specific issues.
The United States government has been reluctant to regulate AI. Last month, at the FCC’s “Forum on Artificial Intelligence and Machine Learning,” FCC Chairman Ajit Pai stated that the government should exercise “regulatory humility” when dealing with AI. In other words, a hands-off approach. The reason, he said, is that “early [regulatory] intervention can forestall or even foreclose certain paths to innovation.”
Chairman Pai’s comments echo an earlier remark from Treasury Secretary Steven Mnuchin. When asked in 2017 about whether the government should be concerned with AI displacing jobs, he responded, “It’s not even on our radar screen.”
Regulation of AI is, however, on the public’s radar screen. Around the time of Secretary Mnuchin’s comment, my co-editor Matt wrote about a survey of Americans regarding AI regulation. Respondents largely supported regulating AI, at both the national (71%) and international (67%) levels.
In 2018, even some leading tech companies began pushing for AI regulation. In July, Microsoft’s President and Chief Legal Officer Brad Smith asserted that there should be “public regulation” of facial recognition technology. Just this month, he wrote a follow-up article confirming this view. The core issues to consider, he stated, are:
- The risk of bias and discrimination in facial recognition tech
- The potential for intrusions of privacy, and
- Mass surveillance that might encroach on democratic freedom
While researchers are diligently working to address these challenges, Mr. Smith stated, “deficiencies remain.” Thus, conceding that commercial firms may not sufficiently self-regulate due to “competitive dynamics,” Mr. Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”
Facial recognition is just one example. In this and many other AI fields, there is a growing disconnect. On one hand, the government is expressly taking a hands-off approach. On the other, individuals, advocates, and even some commercial firms see a need for regulation.
As Matt said in his most recent post, it’s hard to predict the exact impact AI will have. In some cases, as with autonomous vehicles, AI has not lived up to the hype that it would cause rapid, ubiquitous, dramatic change. Thus, governments may “have inadvertently gotten it right by mostly” avoiding regulation so far. But in 2019, as AI continues to evolve and impact us in unexpected ways, look for the disconnect to grow between a “hands-off” government and supporters of regulation.
Data Protection
Data is, of course, the lifeblood of AI. Powerful machine learning algorithms need data to learn. We’ve known data is important since “big data” entered the global lexicon, but only recently have average consumers become aware of just how critical their data can be.
For years, “free” services like search engines, email, and social media proliferated. But “free” is a misnomer. Consumers use such services without paying money, but they pay in other ways: typically, they are exposed to ads and allow collection of their data. The collected data, in turn, is of immense value to service providers, who can sell it, use it for targeted advertising, or train machine learning algorithms with it. As an example of this last category, Facebook used 3.5 billion publicly shared images, with their user-supplied hashtags as labels, to train deep learning networks for image recognition.
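To make that last example concrete, here is a toy Python (PyTorch) sketch of weakly supervised training in the same spirit: user-supplied hashtags stand in for hand-curated labels. Everything in it, from the hashtag vocabulary to the tiny network and the random stand-in “images,” is invented for illustration; it is not Facebook’s actual pipeline.

```python
# Toy weakly supervised training: user hashtags serve as class labels.
# (Hypothetical throughout; not Facebook's actual system.)
import torch
import torch.nn as nn

# Hypothetical hashtag vocabulary standing in for curated class labels.
HASHTAGS = ["#dog", "#cat", "#beach", "#sunset"]
LABEL_INDEX = {tag: i for i, tag in enumerate(HASHTAGS)}

# Stand-in "photos": random tensors shaped like small RGB images, each
# paired with the hashtag a user happened to attach.
images = torch.randn(32, 3, 64, 64)
tags = ["#dog", "#cat", "#beach", "#sunset"] * 8
labels = torch.tensor([LABEL_INDEX[t] for t in tags])

# A deliberately tiny convolutional classifier.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, len(HASHTAGS)),
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step: the network learns to predict hashtags from pixels.
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```

The takeaway is that labels users attach for free can substitute for expensive hand annotation, which is a big part of why this kind of data is so valuable to service providers.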
2018 saw a surge in public awareness about data privacy. From the Cambridge Analytica scandal to countless security breaches that exposed user data, individuals learned that data misuse can cause serious problems.
2018 also saw government action. The European Union’s General Data Protection Regulation (“GDPR”), enacted in 2016, took effect this year with important implications for businesses operating internationally. California passed a similar measure–the California Consumer Privacy Act of 2018 (“CCPA”), which will take effect in 2020. These laws regulate how companies can collect and sell personal data, and give individuals substantive rights regarding their data.
Further, startups are using emerging technologies like blockchain to give individuals power over their data. Companies like Doc.ai, Datum, Wibson, and Ocean Protocol let users sell their personal data for cryptocurrency or other benefits. (Doc.ai, which involves users supplying personal medical information for use in neural networks, is particularly interesting.)
And Sir Tim Berners-Lee is developing a decentralized internet ecosystem called Solid. (Yes, building a “decentralized internet” is also a plotline in HBO’s Silicon Valley.) According to Solid’s website, it will let users control what happens with their data, and developers can build apps that “leverage[]” the data while preserving individual rights.
These services are still works-in-progress. Tellingly, a Wired journalist reported earning only “approximately 0.3 cents” from selling his data for cryptocurrency. And it bears noting that, before AI or blockchain rose to prominence, other startups tried to empower consumers through “data marketplaces,” largely to no avail. (See this article.) But current efforts underscore that rapid change is occurring, both in regulation and the private sector, surrounding the data that fuels artificial intelligence.
In 2019, we will be watching for additional government and private efforts to protect privacy and prevent data misuse, as well as legal disputes regarding laws like the GDPR and CCPA.
Policing and Surveillance
Another hot topic in 2018 was the use of AI in policing and surveillance. Law enforcement officials and civil rights advocates each make valid points on this topic. On one hand, AI can help solve or prevent crime in ways not previously possible. On the other hand, AI should not be used for improper discrimination or unwarranted privacy intrusions.
Here are a few recent examples of how AI has impacted policing and surveillance:
- Automated license plate readers take images of vehicle license plates, capture the date, time, and GPS coordinates of each sighting, and upload them to a database that law enforcement can access. Vendors such as Vigilant Solutions also purport to offer “powerful analytics that make sense” of the data. (A simplified sketch of such a record appears after this list.)
- “Predictive policing” uses data sources such as crime databases and social media to predict where crime is likely to occur, and which individuals are likely to commit violent crimes.
- Amazon licenses face recognition software to law enforcement. While employees voiced concerns, Amazon stated it would continue such licensing. (Amazon and the ACLU engaged in a back-and-forth about whether the tech is flawed and biased.)
- Google used AI to help the Pentagon analyze drone footage (see “Project Maven”). Google employees pushed back, concerned that the military would weaponize AI in connection with drone strikes. In response, Google declined to renew its contract with the Pentagon and published ethics guidelines for using AI.
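Returning to the license plate example above, here is a minimal, hypothetical Python sketch of what a single plate “read” might look like as data, and how pooled reads become a location history. The schema and field names are invented for illustration and do not reflect any vendor’s actual format.

```python
# Hypothetical schema for an automated license plate reader record, plus a
# query showing how pooled reads become a location history. All field names
# are invented; this is not any vendor's actual format.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class PlateRead:
    plate: str            # text recognized from the plate image
    timestamp: datetime   # when the camera captured the read
    lat: float            # GPS latitude of the reader
    lon: float            # GPS longitude of the reader
    camera_id: str        # which reader produced the record

def location_history(reads: List[PlateRead], plate: str) -> List[PlateRead]:
    """Return every sighting of one plate, oldest first."""
    return sorted((r for r in reads if r.plate == plate),
                  key=lambda r: r.timestamp)

# Two reads of the same plate already sketch a daily travel pattern.
db = [
    PlateRead("7ABC123", datetime(2018, 12, 3, 8, 5), 34.05, -118.24, "cam-01"),
    PlateRead("7ABC123", datetime(2018, 12, 3, 17, 40), 34.10, -118.33, "cam-07"),
]
for r in location_history(db, "7ABC123"):
    print(r.timestamp, r.lat, r.lon)
```

Even this toy query illustrates both sides of the debate: the same aggregation that helps investigators find a suspect’s vehicle also turns scattered snapshots into a travel pattern for anyone in the database.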
In matters like these, balancing the relevant interests is not an easy task. In 2019, look for use of AI in policing and surveillance to continue, and for legal and ethical disputes over the propriety of such uses. Along the way, officials and tech companies will hopefully recognize the need to balance competing interests, so that AI can help law enforcement and the military without harming private rights.
Conclusion
Of course, this is far from an exhaustive list of law and AI issues that arose in 2018, or that might dominate news cycles in 2019. We will keep watching this evolving space and trying to provide you with information and insight.
We hope you all have a Happy New Year! See you in 2019.