Questions from a young reader

Tom Toles, The Buffalo News, 1997


Last week I got an email from Will, an 8th Grader from Big D (little A, double L, A, S).  He is in a class where the students get to choose a topic to write about, and he chose AI because he had “always wondered about what makes a machine better than humans in an area.”

Will emailed me wanting to know if I could answer some questions he had about AI and its impact on our society.  I happily agreed, and he responded by sending five excellent questions.  After getting approval from Will and his teacher (thanks, Ms. Peterson!), I am posting Will’s questions and my responses below.  (I also sent Will an email with much shorter responses so that he wouldn’t fall asleep halfway through my answers).

Here they are:

What are your thoughts on the rapidly increasing investment in AI of huge companies such as Google and Microsoft?

This is one of the hottest topics in the world of AI policy right now.  In some ways, the investment in AI by these companies is a good thing.  There are so many things we could do with better AI systems, from having more accurate weather forecasts to reducing traffic on highways to helping doctors come up with better diagnoses when someone is sick.  Those things would bring great benefits to lots of people, and they could happen much more quickly if big companies focus their time and money on improving AI.

On the other hand, there are always dangers when big companies get too much power.  The usual way that we deal with those dangers has been through government action.  But modern AI technologies are very complicated—so complicated that sometimes even the people who design them may not totally understand why they do what they do!  It is hard to come up with good rules for things that no one completely understands.

» Read more

California’s latest autonomous vehicle regulations

Credit: Mike Keefe


The ABA’s Science & Technology Law section has an AI and Robotics committee that holds a monthly teleconference “meetup” where a guest speaker presents on an AI/Robotics-related legal issue.  From here forward, I’ll be making a brief post on each monthly meetup.

For the April meetup, Michele Kyrouz gave a presentation on California’s updated autonomous vehicle (AV) regulations.  I wrote a post last fall discussing the new rules governing AV advertising and marketing, and intended to do a longer post discussing the regulation changes as a whole.  This month’s meetup gave me the kick in the pants I needed to actually do that.

» Read more

WeRobot 2017: Fault, liability, and regulation


The last panel of WeRobot 2017 produced what were perhaps my two favorite papers presented at the conference: “An Education Theory of Fault for Autonomous Systems” by Bill Smart and Cindy Grimm of Oregon State University’s Robotics Program and Woodrow Hartzog of Samford University’s Cumberland School of Law, and “Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence,” by Michael Guihot, Anne Matthew, and Nicolas Suzor of the Queensland University of Technology.

It’s not surprising that both of these papers made an impression on me because each dealt with topics near and dear to my nerdy heart.  “An Education Theory of Fault” addresses the thorny issue of how to determine culpability and responsibility when an autonomous system causes harm, in light of the inherent difficulty in predicting how such systems will operate.  “Nudging Robots” deals with the equally challenging issue of how to design a regulatory system that can manage the risks associated with AI.  Not incidentally, those are perhaps the two issues to which I have devoted the most attention in my own writings (both blog and scholarly).  And these two papers represent some of the strongest analysis I have seen on those issues.

» Read more

Poll shows that support for national and international regulation of AI is broad, but is it deep?

Source: Calvin and Hobbes, Bill Watterson, Oct 27, 1987


Yesterday, Morning Consult released perhaps the most wide-ranging public survey ever conducted on AI-related issues.  In the poll, 2,200 Americans answered 39 questions about AI (plus a number of questions on other issues).

The headline result that Morning Consult is highlighting is that overwhelming majorities of respondents supported national regulation (71% support) and international regulation (67%) of AI.  Thirty-seven percent strongly support national regulation, compared to just 4% who strongly oppose it (for international, those numbers were 35% and 5%, respectively).

Perhaps even more strikingly, the proportion of respondents who support regulation was very consistent across political and socioeconomic lines.  A full 74% of Republicans, 73% of Democrats, and 65% of independents support national regulations, as do 69% of people making less than $50k/yr, 73% making $50k-$100k, and 65% of those who make more than $100k.  Education likewise matters little: 70% of people without a college degree support national regulation, along with 74% of college grads and 70% of respondents with post-graduate degrees.  Women (75%) were slightly more likely to support such regulations than men (67%).
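
For a rough sense of how much weight those subgroup differences can bear, here is a back-of-the-envelope margin-of-error calculation.  This is my own illustration, not Morning Consult’s methodology: it assumes simple random sampling (which online panels only approximate), and the 700-person subgroup is a hypothetical size, since the poll’s actual subgroup counts aren’t reproduced here.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p from a sample of
    size n, assuming simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# Full sample: 2,200 respondents, 71% support for national regulation.
print(round(margin_of_error(0.71, 2200), 3))  # ~0.019, roughly +/- 2 points

# A hypothetical subgroup of 700 respondents has a wider margin.
print(round(margin_of_error(0.71, 700), 3))   # ~0.034, roughly +/- 3 points
```

On that rough math, the eight-point gap between women and men is probably meaningful, while differences of a few points between, say, income brackets are within the noise.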

» Read more

The Return of the Blog: WeRobot 2017


After a long layoff, Law and AI returns with some brief takes on the 6th annual WeRobot Conference, which was held this past weekend at Yale Law School’s Information Society Project.  If you want a true blow-by-blow account of the proceedings, check out Amanda Levendowski’s Twitter feed.  Consider the below a summary of things that piqued my interest, which will not necessarily be the same as the things that prove to be the most important technical or policy takeaways from the conference.

Luisa Scarcella and Michaela Georgina Lexer: The effects of artificial intelligence on labor markets – A critical analysis of solution models from a tax law and social security law perspective

(Paper, Presentation)

Ms. Scarcella and Ms. Lexer presented the paper with perhaps the most distinctive topic of the conference.  Their paper addresses the potential macroeconomic, social, and government-finance impacts of automation.

» Read more

Bias


An interesting pair of stories popped up over the past month covering how the use of AI could affect bias in our society.  This is a fascinating topic from a “law and AI” standpoint due to the sheer number of laws in place worldwide that prohibit certain forms of bias and discrimination in a variety of settings, ranging from employment to hotel accommodations to the awarding of government contracts.

At first blush, one might think that having an automated system make decisions would reduce the risk of bias, or at least those forms of bias that the law prohibits.  After all, such a system would not be susceptible to many of the most obvious types of biases and prejudices that afflict human decision-makers.  A machine would not have a financial interest in the outcome of any decision (at least not yet), nor would it be susceptible to the dark impulses of racism and sexism.  A machine likewise would presumably be less susceptible to, if not immune from, the more subtle and sometimes even unconscious manifestations of bias that emotion-driven humans exhibit.

Those advantages led Sharon Florentine to pen an article published last month in CIO with a bold headline: “How artificial intelligence can eliminate bias in hiring.”  That title was probably clickbait to a certain extent because the article itself was fairly measured in its assessment of the potential impact of AI on workplace discrimination.  The thesis of the article is that AI systems could be used indirectly to reduce bias by using machine learning to “be an objective observer to screen for bias patterns.”  In other words, AI systems could act as something of a bias detector, raising alerts when a person’s or company’s decision-making patterns display signs of bias or prejudice.
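
The article doesn’t describe a concrete implementation, but a very simple version of this kind of bias detector might look something like the sketch below, which applies the familiar “four-fifths rule” from U.S. employment-discrimination practice to a log of hiring decisions.  To be clear, this is my own illustrative sketch, not anything from Florentine’s article: the decision log and the flag_disparate_impact helper are hypothetical, and a real screening tool would use far more sophisticated statistics.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute each group's hire rate from (group, hired) records."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, was_hired in decisions:
        total[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / total[g] for g in total}

def flag_disparate_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    best-off group's rate (the classic four-fifths rule of thumb)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Hypothetical decision log: (applicant group, hired?)
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False), ("B", False)]
print(flag_disparate_impact(log))  # {'B': 0.375}: a pattern worth a closer look
```

The point is simply that a machine can watch human decisions at a scale and consistency that a human compliance officer cannot, which is the sense in which the article claims AI could reduce bias.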


Kristian Hammond over at TechCrunch, on the other hand, wrote an article explaining how AI systems can actually generate or reinforce bias.  Hammond identifies five potential sources of bias in AI systems:

  • “Data-driven bias.”  This occurs when an AI system that learns from a “training set” of data is fed a skewed or unrepresentative training set.  Think of the Beauty.ai “pageant.”  (A toy illustration of this appears just after this list.)
  • “Bias from interaction.”  This occurs when a machine that learns from interactions with other users ends up incorporating those users’ biases.  Tay the Racist Chatbot is an obvious example of this.
  • “Emergent bias.”  Think of this as self-reinforcing bias.  It’s what happens when Facebook’s news feed algorithms recognize that a particular user likes reading articles from a particular political viewpoint and, because they are programmed to predict what that user might want to read next, end up giving the user more and more stories from that viewpoint.  It seems to me that this is pretty much an extension of the first two types of bias.
  • “Similarity bias.”  Hammond’s description makes this sound very similar to emergent bias, using the example of Google News, which will often turn up similar stories in response to a user search query.  This can often result in many stories written from the same point of view being presented while stories written from a contrary point of view are excluded.
  • “Conflicting goals bias.”  I honestly have no idea what this one is about.  The example Hammond provides does not give me a clear sense of what this type of bias is supposed to be.
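
As promised above, here is a toy illustration of the first category, data-driven bias.  It is my own sketch rather than anything from Hammond’s article: a “model” trained on a skewed set of past hiring decisions simply reproduces the skew in those decisions.

```python
from collections import Counter

def train_majority_classifier(examples):
    """'Learn' the most common label for each feature value; a stand-in for a
    real model, used here to show that the output mirrors the training data."""
    by_feature = {}
    for feature, label in examples:
        by_feature.setdefault(feature, Counter())[label] += 1
    return {f: counts.most_common(1)[0][0] for f, counts in by_feature.items()}

# Hypothetical skewed training set: candidates from school X were mostly hired
# in the past, candidates from school Y mostly were not.
skewed_training_set = [
    ("school_X", "hire"), ("school_X", "hire"), ("school_X", "no_hire"),
    ("school_Y", "no_hire"), ("school_Y", "no_hire"), ("school_Y", "hire"),
]

model = train_majority_classifier(skewed_training_set)
print(model)  # {'school_X': 'hire', 'school_Y': 'no_hire'}: the skew is now policy
```

Nothing in the “model” is malicious; it just faithfully encodes whatever patterns, fair or unfair, its training set contains, which is exactly the worry behind the Beauty.ai example.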

Hammond ended on a positive note, noting that knowledge of these potential sources of bias will allow us to design around them, stating, “Perhaps we will never be able to create systems and tools that are perfectly objective, but at least they will be less biased than we are.”

I have a feeling Hammond’s piece was meant to be much longer but ultimately was cut down for readability.  I’d be interested to see a longer exploration of this subject because of the obvious legal implications of AI-generated bias…especially given that I will be writing a paper on the subject for this year’s WeRobot conference.

Two Law and AI Quick Hits

Note: Updates will be sporadic on Law and AI during the next few weeks, and there will likely be only one or two posts before mid-December.  The pace will pick up around the New Year.


By far the biggest story this fall in the world of law and AI was the October 12 release of the White House’s report on the future of artificial intelligence.  The report does not really break any new ground, but that’s hardly surprising given the breadth of the topic and the nature of these types of executive branch reports.   At some point before the New Year, I’ll post a more in-depth analysis of the report’s “AI and Regulation” segment.  For now, it’s worth noting a few of the law-relevant recommendations made in the report:

Recommendation 2: Federal agencies should prioritize open training data and open data standards in AI. The government should emphasize the release of datasets that enable the use of AI to address social challenges. Potential steps may include developing an “Open Data for AI” initiative with the objective of releasing a significant number of government data sets to accelerate AI research and galvanize the use of open data standards and best practices across government, academia, and the private sector.

 

Recommendation 5: Agencies should draw on appropriate technical expertise at the senior level when setting regulatory policy for AI-enabled products. Effective regulation of AI enabled products requires collaboration between agency leadership, staff knowledgeable about the existing regulatory framework and regulatory practices generally, and technical experts with knowledge of AI. Agency leadership should take steps to recruit the necessary technical talent, or identify it in existing agency staff, and should ensure that there are sufficient technical “seats at the table” in regulatory policy discussions.

 

Recommendation 18: Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.

 

Recommendation 23: The U.S. Government should complete the development of a single, governmentwide policy, consistent with international humanitarian law, on autonomous and semi-autonomous weapons.


Another story that caught my eye was a survey of consumers published in the Harvard Business Review on AI in society.  Some notable findings:

  • Far more consumers see AI’s impact on society as positive (45%) than negative (7%).
  • AI is on most people’s radar: “Nearly six in 10 (59%) said they had seen or read something about AI or had some personal experience with it in the 30 days prior to taking our survey.”
  • A majority of respondents are open to having AI systems perform a wide variety of service industry tasks, including elder care, health advice, financial guidance, cooking, teaching, policing, driving, and providing legal advice.
  • Unsurprisingly, then, job loss due to AI/automation was the most significant concern noted in the study.
  • “[T]he other great concern was increased opportunity for criminality. Half of our respondents noted being very concerned about cyber attacks (53%) and stolen data or invasion of privacy (52%). Fewer saw AI as having the ability to improve social equality (26%).”

Overall, then, consumers seem to have fairly positive views regarding AI.  Time will tell if that optimism increases or decreases as AI becomes more ubiquitous.

Could autonomous vehicles put personal injury lawyers out of business?


A couple of weeks ago, CNBC published a thought-provoking commentary by Vasant Dhar on autonomous vehicles (AVs).  Much of the hype surrounding AVs has focused on their ability to prevent accidents.  This makes sense given that, as Dhar notes, “more than 90 percent of accidents result from human impairment, such as drunk driving or road rage, errant pedestrians, or just plain bad driving.”

But Dhar also points out a somewhat less obvious but equally probable consequence of the rise of AVs: Because the sensors and onboard systems in AVs will collect and record massive amounts of data, AVs will greatly simplify the assignment of liability when accidents occur.  This could eliminate the raison d’être of no-fault insurance and liability, which remains the rule in many U.S. states and Canadian provinces.  It would also reduce the uncertainty that leads to costly litigation in jurisdictions where the assignment of liability for a car accident depends on who is found to be at fault.  As Dhar explains:

Big data from onboard systems changes everything because we now have the ability to know the physics associated with accidents . . . .

The ever-increasing numbers of sensors on roads and vehicles move us towards a world of complete information where causes of accidents will be determined more reliably and fault easier to establish. With the detail and transparency that big data provides, no fault accidents will not be an option.

This may strike panic in the hearts of personal injury attorneys, but it would be good news for pretty much everyone else.

In the long run, increasing automation and the introduction of more sophisticated vehicle sensor systems will also have positive downstream effects.  As Dhar notes, “the massive increase in data collection from vehicles . . . will happen irrespective of whether vehicles are ever fully autonomous” as manufacturers continue to load more sophisticated sensors and systems on human-driven cars.  This data “could be used to design incentives and reward desirable driving practices in the emerging hybrid world of human and driverless vehicles. In other words, better data could induce better driving practices and lead to safer transportation with significantly lower insurance and overall costs to society.”

You can read Dhar’s full commentary here.

 

Drone Law Today podcast: Where Artificial Intelligence Goes from Here


I had the pleasure of doing a podcast interview with Steve Hogan of Drone Law Today last week, and we had a fascinating, wide-ranging discussion on the future of artificial intelligence.  The podcast episode is now available on Drone Law Today’s website.  If you haven’t heard the podcast before, check it out and subscribe–there are not a lot of regularly updated resources out there for people interested in the intersection of law and emerging technologies, and Steve’s podcast is a great one.

California takes AI marketing off Autopilot



California issued new draft regulations this week regarding the marketing of autonomous vehicles.  The draft includes a not-very-subtle dig at Tesla’s much-ballyhooed “Autopilot” system.

As background, the new California regulations specify in greater detail than before what type of technology counts as “autonomous” in the context of cars.  Specifically, they provide that an “autonomous vehicle” must qualify as Level 3, Level 4, or Level 5 under the Society of Automotive Engineers (SAE) framework.  That framework classifies vehicles along a 0-to-5 scale, with 5 being fully autonomous and 0 being a standard, fully human-controlled car.
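
For readers who want the framework in concrete terms, here is a rough sketch of how the SAE levels map onto the draft rule’s cutoff.  The level descriptions are my own paraphrase of the SAE J3016 framework, and the qualifies_as_autonomous helper is purely illustrative, not language from the regulations.

```python
# Paraphrased SAE J3016 automation levels (not the regulatory text itself)
SAE_LEVELS = {
    0: "No automation: the human driver does everything",
    1: "Driver assistance: e.g., adaptive cruise control",
    2: "Partial automation: combined steering and speed control, driver must monitor",
    3: "Conditional automation: system drives, human must be ready to take over",
    4: "High automation: no human attention needed within a defined operating domain",
    5: "Full automation: no human driver needed anywhere",
}

def qualifies_as_autonomous(sae_level: int) -> bool:
    """Illustrative check mirroring the draft rule's cutoff: only SAE Levels
    3 through 5 count as 'autonomous vehicles'."""
    return sae_level >= 3

print(qualifies_as_autonomous(2))  # False (Autopilot is generally described as Level 2)
print(qualifies_as_autonomous(4))  # True
```

On that scale, a Level 2 driver-assistance system, however capable, falls outside the regulatory definition of “autonomous,” which is what sets up the dig at Tesla noted above.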

The kicker came in the very last section of the draft regulations:

» Read more
