Poll shows that support for national and international regulation of AI is broad, but is it deep?

Source: Calvin and Hobbes, Bill Watterson, Oct 27, 1987


Yesterday, Morning Consult released perhaps the most wide-ranging public survey ever conducted on AI-related issues.  In the poll, 2,200 Americans answered 39 questions about AI (plus a number of questions on other issues).

The headline result that Morning Consult is highlighting is that overwhelming majorities of respondents supported national regulation (71% support) and international regulation (67%) of AI.  Thirty-seven percent strongly support national regulation, compared to just 4% who strongly oppose it (for international, those numbers were 35% and 5%, respectively).

Perhaps even more strikingly, the proportion of respondents who support regulation was very consistent across political and socioeconomic lines.  A full 74% of Republicans, 73% of Democrats, and 65% of independents support national regulation, as do 69% of people making less than $50k/yr, 73% of those making $50k-$100k, and 65% of those making more than $100k.  Education likewise makes little difference: 70% of people without a college degree support national regulation, along with 74% of college grads and 70% of respondents with post-graduate degrees.  Women (75%) were slightly more likely than men (67%) to support such regulation.

» Read more

The Return of the Blog: WeRobot 2017


After a long layoff, Law and AI returns with some brief takes on the 6th annual WeRobot Conference, which was held this past weekend at Yale Law School’s Information Society Project.  If you want a true blow-by-blow account of the proceedings, check out Amanda Levendowski’s Twitter feed.  Consider the below a summary of things that piqued my interest, which will not necessarily be the same as the things that prove to be the most important technical or policy takeaways from the conference.

Luisa Scarcella and Michaela Georgina Lexer: The effects of artificial intelligence on labor markets – A critical analysis of solution models from a tax law and social security law perspective

(Paper, Presentation)

Ms. Scarcella and Ms. Lexer presented perhaps the most topically unique paper of the conference.  Their paper addresses the potential macroeconomic, social, and government-finance impacts of automation.

» Read more

Bias


An interesting pair of stories popped up over the past month covering how the use of AI could affect bias in our society.  This is a fascinating topic from a “law and AI” standpoint due to the sheer number of laws in place worldwide that prohibit certain forms of bias and discrimination in a variety of settings, ranging from employment to hotel accommodations to the awarding of government contracts.

At first blush, one might think that having an automated system make decisions would reduce the risk of bias, or at least those forms of bias that the law prohibits.  After all, such a system would not be susceptible to many of the most obvious types of biases and prejudices that afflict human decision-makers.  A machine would not have a financial interest in the outcome of any decision (at least not yet), nor would it be susceptible to the dark impulses of racism and sexism.  A machine likewise would presumably be less susceptible to, if not immune from, the more subtle and sometimes even unconscious manifestations of bias that emotion-driven humans exhibit.

Those advantages led Sharon Florentine to pen an article published last month in CIO with a bold headline: “How artificial intelligence can eliminate bias in hiring.”  The title was probably clickbait to a certain extent, because the article itself was fairly measured in assessing the potential impact of AI on workplace discrimination.  Its thesis is that AI systems could be used indirectly to reduce bias, by using machine learning to “be an objective observer to screen for bias patterns.”  In other words, AI systems could act as something of a bias detector, raising alerts when a person’s or company’s decision-making patterns show signs of bias or prejudice.
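
For readers who want to see the “bias detector” idea in concrete terms, here is a toy sketch of what such a screen might look like at its simplest: compare selection rates across groups in a decision log and flag any group whose rate falls well below the best-treated group’s.  The data, group labels, and 0.8 cutoff (a rule of thumb borrowed from disparate-impact analysis) are purely illustrative; the CIO article does not describe an actual implementation.

```python
from collections import Counter

def flag_selection_disparity(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate.  `decisions` is a list of (group, selected) pairs.
    The 0.8 cutoff is a common rule of thumb, used here purely for illustration."""
    applicants, selections = Counter(), Counter()
    for group, selected in decisions:
        applicants[group] += 1
        if selected:
            selections[group] += 1
    rates = {g: selections[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: rate for g, rate in rates.items() if rate < threshold * best}

# Hypothetical hiring log: (applicant group, whether the applicant was hired)
log = [("A", True), ("A", True), ("A", False),
       ("B", False), ("B", False), ("B", True)]
print(flag_selection_disparity(log))  # {'B': 0.333...} -- group B gets flagged
```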


Kristian Hammond over at TechCrunch, on the other hand, wrote an article describing how AI systems can actually generate or reinforce bias.  The article goes over five potential sources of bias in AI systems:

  • “Data-driven bias.”  This occurs when an AI system that learns from a “training set” of data is fed a skewed or unrepresentative training set (a toy sketch of this appears after the list).  Think of the Beauty.ai “pageant.”
  • “Bias from interaction.”  This occurs when a machine that learns from interactions with other users ends up incorporating those users’ biases.  Tay the Racist Chatbot is an obvious example of this.
  • “Emergent bias.”  Think of this as self-reinforcing bias.  It’s what happens when Facebook’s news feed algorithms recognize that a particular user likes reading articles from a particular political viewpoint and, because they are programmed to predict what that user might want to read next, end up giving the user more and more stories from that viewpoint.  It seems to me that this is pretty much an extension of the first two types of bias.
  • “Similarity bias.”  Hammond’s description makes this sound very similar to emergent bias, using the example of Google News, which will often turn up similar stories in response to a user search query.  This can often lead to the presentation of many stories written from the same point of view, to the exclusion of stories written from a contrary point of view.
  • “Conflicting goals bias.”  I honestly have no idea what this one is about.  The example Hammond provides does not give me a clear sense of what this type of bias is supposed to be.
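
To make the first item on that list concrete, here is a toy, entirely synthetic sketch of data-driven bias.  A simple scorer “learns” a prototype from a training set that is 95% group A, and as a result it systematically rates held-out group B examples lower, even though group membership never appears anywhere in the scoring rule.  Everything here (the features, the groups, and the scoring approach) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors for two groups (think of them as crude stand-ins
# for face embeddings); the groups occupy different regions of feature space.
group_a = rng.normal(loc=0.0, scale=0.3, size=(1000, 8))
group_b = rng.normal(loc=1.0, scale=0.3, size=(1000, 8))

# Skewed training set: 95% of the examples the scorer "learns" from are group A.
training_set = np.vstack([group_a[:950], group_b[:50]])

# A toy scorer: rate a new example by its closeness to the average training example.
prototype = training_set.mean(axis=0)

def score(x):
    return -np.linalg.norm(x - prototype)  # higher (less negative) = "better"

# The scorer rates held-out group B examples systematically lower -- not because
# of any rule about group membership, but because of the skewed training mix.
print("mean score, group A:", round(float(np.mean([score(x) for x in group_a[950:]])), 2))
print("mean score, group B:", round(float(np.mean([score(x) for x in group_b[950:]])), 2))
```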

Hammond ended on a positive note, observing that awareness of these potential sources of bias will allow us to design around them: “Perhaps we will never be able to create systems and tools that are perfectly objective, but at least they will be less biased than we are.”

I have a feeling Hammond’s piece was meant to be much longer but ultimately was cut down for readability.  I’d be interested to see a longer exploration of this subject because of the obvious legal implications of AI-generated bias…especially given that I will be writing a paper on the subject for this year’s WeRobot conference.

Two Law and AI Quick Hits

Note: Updates to Law and AI will be sporadic over the next few weeks, and there will likely be only one or two posts before mid-December.  The pace will pick up around the New Year.


By far the biggest story this fall in the world of law and AI was the October 12 release of the White House’s report on the future of artificial intelligence.  The report does not really break any new ground, but that’s hardly surprising given the breadth of the topic and the nature of these types of executive branch reports.   At some point before the New Year, I’ll post a more in-depth analysis of the report’s “AI and Regulation” segment.  For now, it’s worth noting a few of the law-relevant recommendations made in the report:

Recommendation 2: Federal agencies should prioritize open training data and open data standards in AI. The government should emphasize the release of datasets that enable the use of AI to address social challenges. Potential steps may include developing an “Open Data for AI” initiative with the objective of releasing a significant number of government data sets to accelerate AI research and galvanize the use of open data standards and best practices across government, academia, and the private sector.

 

Recommendation 5: Agencies should draw on appropriate technical expertise at the senior level when setting regulatory policy for AI-enabled products. Effective regulation of AI enabled products requires collaboration between agency leadership, staff knowledgeable about the existing regulatory framework and regulatory practices generally, and technical experts with knowledge of AI. Agency leadership should take steps to recruit the necessary technical talent, or identify it in existing agency staff, and should ensure that there are sufficient technical “seats at the table” in regulatory policy discussions.

 

Recommendation 18: Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.

 

Recommendation 23: The U.S. Government should complete the development of a single, governmentwide policy, consistent with international humanitarian law, on autonomous and semi-autonomous weapons.


Another story that caught my eye was a consumer survey on AI in society published in the Harvard Business Review.  Some notable findings:

  • Far more consumers see AI’s impact on society as positive (45%) than negative (7%)
  • AI is on most people’s radar: “Nearly six in 10 (59%) said they had seen or read something about AI or had some personal experience with it in the 30 days prior to taking our survey.”
  • A majority of respondents are open to having AI systems perform a wide variety of service industry tasks, including elder care, health advice, financial guidance, cooking, teaching, policing, driving, and providing legal advice.
  • Unsurprisingly, then, job loss due to AI/automation was the most significant concern noted in the study.
  • “[T]he other great concern was increased opportunity for criminality. Half of our respondents noted being very concerned about cyber attacks (53%) and stolen data or invasion of privacy (52%). Fewer saw AI as having the ability to improve social equality (26%).”

Overall, then, consumers seem to have fairly positive views regarding AI.  Time will tell if that optimism increases or decreases as AI becomes more ubiquitous.

Could autonomous vehicles put personal injury lawyers out of business?


A couple weeks ago, CNBC published a thought-provoking commentary by Vasant Dhar on autonomous vehicles (AVs).  Much of the hype surrounding AVs has focused on their ability to prevent accidents.  This makes sense given that, as Dhar notes, “more than 90 percent of accidents result from human impairment, such as drunk driving or road rage, errant pedestrians, or just plain bad driving.”

But Dhar also points out a somewhat less obvious but equally probable consequence of the rise of AVs: Because the sensors and onboard systems in AVs will collect and record massive amounts of data, AVs will greatly simplify the assignment of liability when accidents occur.  This could eliminate the raison d’être for no-fault insurance and liability, which remains the rule in many U.S. states and Canadian provinces.  It would also reduce the uncertainty that leads to costly litigation in jurisdictions where the assignment of liability for a car accident depends on who is found to be at fault.  As Dhar explains:

Big data from onboard systems changes everything because we now have the ability to know the physics associated with accidents . . . .

The ever-increasing numbers of sensors on roads and vehicles move us towards a world of complete information where causes of accidents will be determined more reliably and fault easier to establish. With the detail and transparency that big data provides, no fault accidents will not be an option.

This may strike panic in the hearts of personal injury attorneys, but it would be good news for pretty much everyone else.

In the long run, increasing automation and the introduction of more sophisticated vehicle sensor systems will also have positive downstream effects. As Dhar notes, “the massive increase in data collection from vehicles . . . will happen irrespective of whether vehicles are ever fully autonomous” as manufacturers continue to load more sophisticated sensors and systems on human-driven cars.  This data “could be used to design incentives and reward desirable driving practices in the emerging hybrid world of human and driverless vehicles. In other words, better data could induce better driving practices and lead to safer transportation with significantly lower insurance and overall costs to society.”

You can read Dhar’s full commentary here.

 

Drone Law Today podcast: Where Artificial Intelligence Goes from Here


I had the pleasure of doing a podcast interview with Steve Hogan of Drone Law Today last week, and we had a fascinating, wide-ranging discussion on the future of artificial intelligence.  The podcast episode is now available on Drone Law Today’s website.  If you haven’t heard the podcast before, check it out and subscribe–there are not a lot of regularly updated resources out there for people interested in the intersection of law and emerging technologies, and Steve’s podcast is a great one.

California takes AI marketing off Autopilot



California issued new draft regulations this week regarding the marketing of autonomous vehicles.  The draft includes a not-very-subtle dig at Tesla’s much-ballyhooed “Autopilot” system.

As background, the new California regulations specify in greater detail than before what type of technology counts as “autonomous” in the context of cars.  Specifically, they provide that an “autonomous vehicle” must qualify as Level 3, Level 4, or Level 5 under the Society of Automotive Engineers (SAE) framework, which classifies vehicles along a 0-to-5 scale, with 5 being fully autonomous and 0 being a standard, fully human-controlled car.

The kicker came in the very last section of the draft regulations:

» Read more

Law and AI Quick Hits: September 26-30, 2016

Credit: Charles Schulz



A short round-up of recent news of interest to Law and AI.

In the Financial Times, John Thornhill writes on “the darker side of AI if left unmanaged: the impact on jobs, inequality, ethics, privacy and democratic expression.”  Thornhill takes several proverbial pages from the Stanford 100-year study on AI, but does not ultimately offer his own view of what effective AI “management” might look like.


Patrick Tucker writes in Defense One that a survey funded by the Future of Life Institute found “that the U.S. military more commonly uses AI not to help but to replace human operators, and, increasingly, human decision making.”  In the process, he gives voice to the fears held by many people (well, at least by me) of how an autonomous weapons arms race might play out:

Today, the United States continues to affirm that it isn’t interested in removing the human decision-maker from “the loop” in offensive operations like drone strikes (at least not completely). That moral stand might begin to look like a strategic disadvantage against an adversary that can fire much faster, conduct more operations, hit more targets in a smaller amount of time by removing the human from loop.


Microsoft CEO Satya Nadella sat down for an interview with Dave Gershgorn of Quartz.  Among other things, Nadella discusses the lessons Microsoft learned from Tay the Racist Chatbot–namely the need to build “resiliency” into learning AI systems to protect them from threats that might cause them to “learn” bad things.  In the case of Tay, Microsoft failed to make the chatbot resilient to trolls, with results that were at once amusing and troubling.

The Partnership on AI: A step in the right direction



Well, by far the biggest AI news story to hit the papers this week was the announcement that a collection of tech industry heavyweights–Microsoft, IBM, Amazon, Facebook, and Google–are joining forces to form a “Partnership on AI”:

The group’s goal is to create the first industry-led consortium that would also include academic and nonprofit researchers, leading the effort to essentially ensure AI’s trustworthiness: driving research toward technologies that are ethical, secure and reliable — that help rather than hurt — while also helping to diffuse fears and misperceptions about it.

 

“We plan to discuss, we plan to publish, we plan to also potentially sponsor some research projects that dive into specific issues,” Banavar says, “but foremost, this is a platform for open discussion across industry.”

There’s no question this is welcome news.  Each of the five companies that formed this group has been part of the “AI arms race” that has played out over the past few years, as major tech companies have invested massive amounts of money in expanding their AI research, both by acquiring other companies and by recruiting talent.  To a mostly-outside observer such as myself, it seemed for a time like the arms race was becoming an end unto itself–companies were making huge investments in AI without thinking about the long-term implications of AI development.  The Partnership is a good sign that the titans of tech are, indeed, seeing the bigger picture.

» Read more

A peek at how AI could inadvertently reinforce discriminatory policies

Source: Before It's News



The most interesting story that came up during Law and AI’s little hiatus came from decidedly outside the usual topics covered here–the world of beauty pageants.  Well, sort of:

An online beauty contest called Beauty.ai, run by Youth Laboratories . . . ., solicited 600,000 entries by saying they would be graded by artificial intelligence. The algorithm would look at wrinkles, face symmetry, amount of pimples and blemishes, race, and perceived age.

Sounds harmless enough, aside from the whole “we’re teaching computers to objectify women” aspect.  But the results of this contest carry some troubling implications.

Of the 44 winners in the pageant, 36 (or 82%) were white.  In other words, white people were disproportionately represented among the pageant’s “winners.”  This couldn’t help but remind me of discrimination law.  The algorithm’s beauty assessments had what lawyers would recognize as a disparate impact–that is, although the algorithm seemed objective and non-discriminatory at first glance, it ultimately favored whites at the expense of other racial groups.

The concept of disparate impact is best known in employment law and in college admissions, where a company or college can be liable for discrimination if its policies have a disproportionate negative impact on protected groups, even if the people who came up with the policy had no discriminatory intent. For example, a hypothetical engineering company might select which applicants to interview for a set of open job positions by coming up with a formula that awards 1 point to an applicant with a college degree in engineering, 3 points for a Master’s degree, 6 points for a doctorate, and additional points for certain prestigious fellowships.  Facially, this system appears neutral in terms of race, gender, and socioeconomic status.  But in its outcomes, it may (and probably would) end up having a disparate impact if the components of the test score are things that wealthy white men are disproportionately more likely to have due to their social and economic advantages.
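
For concreteness, here is what that hypothetical scoring formula might look like in code.  The point values for degrees come from the example above; the per-fellowship weighting is invented, since the example doesn’t specify one.  The point is that nothing in the formula mentions race, gender, or wealth, yet its inputs may correlate with all three.

```python
def screening_score(applicant):
    """Hypothetical 'facially neutral' interview-screening formula from the
    example above.  `applicant` is a dict of credentials."""
    score = 0
    if applicant.get("engineering_degree"):
        score += 1
    if applicant.get("masters"):
        score += 3
    if applicant.get("doctorate"):
        score += 6
    # The example mentions "additional points for certain prestigious
    # fellowships" without giving a number; 2 per fellowship is an invented value.
    score += 2 * len(applicant.get("prestigious_fellowships", []))
    return score

# Two hypothetical applicants: identical formula, very different opportunities
# to have accumulated the credentials the formula rewards.
print(screening_score({"engineering_degree": True, "doctorate": True,
                       "prestigious_fellowships": ["X", "Y"]}))   # 11
print(screening_score({"engineering_degree": True}))              # 1
```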

The easiest way to get around this problem might be to use a quota–i.e., set aside a certain proportion of the positions for applicants from underserved minority groups and then apply the ‘objective’ test to rate applicants within each group.  But such overt quotas are also illegal (according to the Supreme Court) because they constitute disparate treatment.  What about awarding “bonus points” under the objective test to people from disadvantaged groups?  Well, that would also be disparate treatment.  Certainly, nothing prevents an employer from using race as, to borrow a phrase from Equal Protection law, a subjective “plus factor” to help ensure diversity.  But you can’t assign a specific number related to the race or gender of applicants.  The bottom line is that the law likes to keep assessments very subjective when they involve sensitive personal characteristics such as race and gender.

Which brings us back to AI.  You can have an algorithm that approximates or simulates a subjective assessment, but you still have to find a way to program that assessment into the AI–which means reducing the subjective assessment to an objective, concrete form.  It would be difficult, if not impossible, to program a truly subjective set of criteria into an AI system, because a subjective algorithm is almost a contradiction in terms.

Fortunately for Beauty.ai, it can probably solve its particular “disparate impact” problem without having the algorithm discriminate based on race.  The reason why Beauty.ai generated a disproportionate number of white winners is that the data sets (i.e. images of people) that were used to build the AI’s ‘objective’ algorithm for assessing beauty consisted primarily of images of white people.

As a result, the algorithm’s accuracy drops when it runs into images of people who don’t fit the patterns in the data set that was used to prime it.  To fix that, the humans just need to include a more diverse data set–and since humans are doing that bit, the process of choosing who is included in the original data set can be subjective, even if the algorithm that uses the data set cannot be.
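
As a toy illustration of that fix, using a synthetic setup similar to the data-driven-bias sketch earlier on this page: when the training mix is broadened, the group-level score gap produced by a skewed prototype largely disappears.  Again, everything here is invented for illustration; it is not Beauty.ai’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def group_mean_scores(n_train_a, n_train_b):
    """Train a toy prototype scorer on a given mix of group A and group B
    examples, then return the mean scores for held-out members of each group."""
    group_a = rng.normal(0.0, 0.3, size=(1000, 8))   # synthetic group A features
    group_b = rng.normal(1.0, 0.3, size=(1000, 8))   # synthetic group B features
    prototype = np.vstack([group_a[:n_train_a], group_b[:n_train_b]]).mean(axis=0)
    score = lambda x: -np.linalg.norm(x - prototype)
    mean_a = float(np.mean([score(x) for x in group_a[950:]]))
    mean_b = float(np.mean([score(x) for x in group_b[950:]]))
    return round(mean_a, 2), round(mean_b, 2)

print("skewed training mix   (950 A /  50 B):", group_mean_scores(950, 50))
print("balanced training mix (500 A / 500 B):", group_mean_scores(500, 500))
# With the skewed mix, group B's mean score sits far below group A's; with the
# balanced mix the two groups score roughly the same (the crude single-prototype
# scorer simply sits between them).
```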

For various reasons, however, it would be difficult to replicate that process in the contexts of employment, college admissions, and other socially and economically vital spheres.  I’ll be exploring this topic in greater detail in a forthcoming law practice article that should be appearing this winter.  Stay tuned!
