The OECD Principles, the AI Initiative Act, and introducing a new guest contributor

There were a couple of significant developments in the AI policy world this week. First, the Organisation for Economic Co-operation and Development (OECD) adopted and published its “Principles on AI.” That same day, a bipartisan trio of Senators introduced the Artificial Intelligence Initiative Act (AI-IA) (link to PDF of bill), which would establish a national AI strategy in the United States comparable to those adopted by Germany, Japan, France, South Korea, and China.

Not unlike the principles released at the Asilomar conference in 2017, the OECD Principles are pretty high-level, focusing primarily on the macro-scale ethical and economic implications of AI and the ways in which AI development can be nudged in a direction beneficial to humanity. They don’t really get into the nitty-gritty of public risk management for AI. Section 2 of the Principles makes a number of broad recommendations for national governments under these headings:

  • Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
  • Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
  • Ensure a policy environment that will open the way to deployment of trustworthy AI systems.
  • Empower people with the skills for AI and support workers for a fair transition.
  • Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.

As for the AI-IA, the US is certainly long overdue to develop a national AI strategy, as the list of countries in the first paragraph suggests. But there have been many false starts on AI policy at the federal level over the past few years that have fizzled or stalled without bearing any substantive legal or policy fruit. We’re only two years removed from the Treasury Secretary saying that job losses due to automation are “not even on our radar screen.” It’s promising that the AI-IA has sponsors from both sides of the aisle, but if you’ve been paying any attention to American politics recently (and particularly Tuesday), it’s probably best to keep expectations in check. (Funny typo: the name of the bill’s file on the Senate website is “ArtificialIntellifence.” [Cue joke about how we should master human intelligence before turning to AI.])

It was interesting to compare how each of these documents defines artificial intelligence. Here’s the definition from the OECD Principles:

AI system: An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.

And from the AI-IA:

(1) ARTIFICIAL INTELLIGENCE.—The term ‘‘artificial intelligence’’ includes the following:
(A) Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
(B) An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
(C) An artificial system designed to think or act like a human, including cognitive architectures and neural networks.
(D) A set of techniques, including machine learning, that is designed to approximate a cognitive task.
(E) An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision-making, and acting.

* * *

(9) MACHINE LEARNING.—The term ‘‘machine learning’’ means a subfield of artificial intelligence that is characterized by giving computers the autonomous ability to progressively optimize performance of a specific task based on data without being explicitly programmed.

Both definitions are obviously quite broad and would be of questionable utility from a legal standpoint. But honestly, that’s going to be the case for any definition of AI right now.
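
Still, the machine learning definition in the bill does gesture at something concrete. To see what “progressively optimize performance of a specific task based on data without being explicitly programmed” looks like in practice, here’s a minimal sketch in Python: ordinary gradient descent fitting a line to toy data. Everything in it (the data, the variable names, the learning rate) is my own illustration, not anything drawn from the bill or the OECD document.

```python
# A toy instance of the bill's machine-learning definition: the program
# "progressively optimize[s] performance of a specific task based on data"
# rather than being explicitly programmed with the answer.
# All names and values here are illustrative only.

# Training data generated by a hidden rule, y = 3x + 1, that the
# learner never sees directly.
data = [(x, 3 * x + 1) for x in range(10)]

w, b = 0.0, 0.0   # model parameters; the program starts knowing nothing
lr = 0.01         # learning rate: how far to nudge parameters per example

for epoch in range(1000):
    for x, y in data:
        pred = w * x + b     # the model's current guess
        error = pred - y     # how wrong that guess is
        w -= lr * error * x  # adjust parameters to shrink the error,
        b -= lr * error      # i.e., "progressively optimize performance"

print(f"learned w = {w:.2f}, b = {b:.2f}")  # converges toward w = 3, b = 1
```

Nothing in that code states the rule y = 3x + 1; the program recovers it from the data alone, which is precisely the “without being explicitly programmed” clause. Of course, whether a statute drawn that broadly sweeps in every curve-fitting script ever written is exactly the kind of line-drawing problem regulators will face.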


The next post you’ll see will be from Nareissa Smith. Nareissa is a graduate of Spelman College and Howard University School of Law. After completing two judicial clerkships (including one for Judge (Ret.) Gregory M. Sleet, for whom I clerked at the start of my own legal career), Nareissa worked as a law professor for over ten years. Her courses included Constitutional Law, Criminal Procedure, and Critical Race Theory. Now, Nareissa works as a freelance journalist. You can reach her at nareissa.smith@gmail.com or contact her via Twitter (@NareissasNotes). Hopefully her next post will be the first of many!
