The Partnership on AI: A step in the right direction



Well, by far the biggest AI news story to hit the papers this week was the announcement that a group of tech industry heavyweights (Microsoft, IBM, Amazon, Facebook, and Google) is joining forces to form a “Partnership on AI”:

The group’s goal is to create the first industry-led consortium that would also include academic and nonprofit researchers, leading the effort to essentially ensure AI’s trustworthiness: driving research toward technologies that are ethical, secure and reliable — that help rather than hurt — while also helping to diffuse fears and misperceptions about it.


“We plan to discuss, we plan to publish, we plan to also potentially sponsor some research projects that dive into specific issues,” Banavar says, “but foremost, this is a platform for open discussion across industry.”

There’s no question this is welcome news.  Each of the five companies that formed this group has been part of the “AI arms race” that has played out over the past few years, as major tech companies have poured massive amounts of money into expanding their AI research, both by acquiring other companies and by recruiting talent.  To a mostly-outside observer such as myself, it seemed for a time like the arms race was becoming an end unto itself: companies were making huge investments in AI without much thought for the long-term implications of AI development.  The Partnership is a good sign that the titans of tech are, indeed, seeing the bigger picture.

The Partnership’s website is already live and has a list of tenets that will guide the Partnership’s work.  I’ll highlight the portions that (predictably) made my ears perk up:

  1. We will seek to ensure that AI technologies benefit and empower as many people as possible.
  2. We will educate and listen to the public and actively engage stakeholders to seek their feedback on our focus, inform them of our work, and address their questions.
  3. We are committed to open research and dialog on the ethical, social, economic, and legal implications of AI.
  4. We believe that AI research and development efforts need to be actively engaged with and accountable to a broad range of stakeholders.
  5. We will engage with and have representation from stakeholders in the business community to help ensure that domain-specific concerns and opportunities are understood and addressed.
  6. We will work to maximize the benefits and address the potential challenges of AI technologies, by:
    1. Working to protect the privacy and security of individuals.
    2. Striving to understand and respect the interests of all parties that may be impacted by AI advances.
    3. Working to ensure that AI research and engineering communities remain socially responsible, sensitive, and engaged directly with the potential influences of AI technologies on wider society.
    4. Ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints.
    5. Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.
  7. We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.
  8. We strive to create a culture of cooperation, trust, and openness among AI scientists and engineers to help us all better achieve these goals.

That’s a pretty darned good list, if you ask me.

As a general rule, I’m actually skeptical of self-regulation as a long-term solution to the risks posed by a new technology or industry.  Almost by definition, industry self-regulation is voluntary.  As a result, while self-regulation tends to work reasonably well in growing industries where profit margins are high, it can easily break down once market forces prompt industry participants to push against, and eventually break through, the limits imposed by the industry’s governing body.

But self-regulation does fill a critical role in developing industries where traditional (i.e., governmental) regulators have neither the technical knowledge necessary to identify risks nor the legal framework necessary to effectively manage those risks.  AI fits that description to a T.  Industry-wide cooperation and the development of best practices may well be the only way to push the development and deployment of AI in a direction that is good for society as a whole.

There are still some major names missing from the Partnership, most notably Apple and OpenAI.  The latter’s mission seems very much in harmony with that of the new Partnership, so hopefully the two will be talking even if OpenAI never formally joins.  But regardless, the Partnership is an exciting development for geeks like me who spend their time worrying about whether the people developing AI systems are keeping broader social, ethical, and legal considerations in mind.
