Note: Updates will be sporadic on Law and AI during the next few weeks, and there will likely be only one or two posts before mid-December. The pace will pick up around the New Year.
By far the biggest story this fall in the world of law and AI was the October 12 release of the White House’s report on the future of artificial intelligence. The report does not really break any new ground, but that’s hardly surprising given the breadth of the topic and the nature of these types of executive branch reports. At some point before the New Year, I’ll post a more in-depth analysis of the report’s “AI and Regulation” segment. For now, it’s worth noting a few of the law-relevant recommendations made in the report:
Recommendation 2: Federal agencies should prioritize open training data and open data standards in AI. The government should emphasize the release of datasets that enable the use of AI to address social challenges. Potential steps may include developing an “Open Data for AI” initiative with the objective of releasing a significant number of government data sets to accelerate AI research and galvanize the use of open data standards and best practices across government, academia, and the private sector.
Recommendation 5: Agencies should draw on appropriate technical expertise at the senior level when setting regulatory policy for AI-enabled products. Effective regulation of AI-enabled products requires collaboration between agency leadership, staff knowledgeable about the existing regulatory framework and regulatory practices generally, and technical experts with knowledge of AI. Agency leadership should take steps to recruit the necessary technical talent, or identify it in existing agency staff, and should ensure that there are sufficient technical “seats at the table” in regulatory policy discussions.
Recommendation 18: Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.
Recommendation 23: The U.S. Government should complete the development of a single, government-wide policy, consistent with international humanitarian law, on autonomous and semi-autonomous weapons.
Another story that caught my eye was a consumer survey on AI in society published in the Harvard Business Review. Some notable findings:
- Far more consumers see AI’s impact on society as positive (45%) than negative (7%)
- AI is on most people’s radar: “Nearly six in 10 (59%) said they had seen or read something about AI or had some personal experience with it in the 30 days prior to taking our survey.”
- A majority of respondents are open to having AI systems perform a wide variety of service industry tasks, including elder care, health advice, financial guidance, cooking, teaching, policing, driving, and providing legal advice.
- Unsurprisingly, then, job loss due to AI/automation was the most significant concern noted in the study.
- “[T]he other great concern was increased opportunity for criminality. Half of our respondents noted being very concerned about cyber attacks (53%) and stolen data or invasion of privacy (52%). Fewer saw AI as having the ability to improve social equality (26%).”
Overall, then, consumers seem to have fairly positive views regarding AI. Time will tell whether that optimism grows or fades as AI becomes more ubiquitous.