IBM’s Response to the Federal Government’s Request for Information on AI
As discussed in a prior post, the White House Office of Science and Technology Policy (OSTP) published a request for information (RFI) on AI back in June. IBM released a response that was the subject of a very positive write-up on TechCrunch. As the TechCrunch piece correctly notes, most of IBM's responses are informative and interesting, nicely summarizing many of the key topics and concerns that come up regularly at the conferences I've attended.
But their coverage of the legal and governance implications of AI was disappointing. Perhaps IBM was just being cautious because they don't want to say anything that could invite closer government regulation or draw the attention of plaintiffs' lawyers, but their write-up on the subject was quite vague and somewhat off-topic.
Here’s what IBM wrote:
[C]ognitive systems are being designed to boost the creativity and productivity of scientists and researchers to make new discoveries. As with these beneficial developments, society’s guiding principle should be to approach cognitive computing with appropriate checks and balances that encourage responsible innovations, reaping the benefits while protecting society. Right now, policymakers can take these concrete steps and seize the opportunities at hand.
Elevate The Dialogue: The business and societal impact of cognitive systems is large and growing, and taking responsibility must be the foundation of dialogue leading to a policy agenda. Topics include:
* Algorithmic Responsibility – establishing practices and protocols to build understanding and trust in the construction and workings of fundamental algorithms in software code, while preserving proprietary and confidential business information.
* Individual Privacy – establishing strong, sensible protections for individual privacy.
* Jobs and Workforce Transformation – new job creation, and workers with skills to fill them.
* Safety – protecting decision making based on morals and ethics, and establishing controlling principles for autonomous systems.
Learn Beyond the Headlines: Given the hype around AI, papers such as the one published recently by ITIF can be a tremendous resource for policymakers looking to understand the reality of the technology, how it is progressing, how it is being applied and how social concerns have been recognized and can be addressed.
Focus on Skills: We must educate and train people with the high-tech skills that will be required in a new era of data intense jobs. Today’s education systems are fundamentally misaligned from the needs of the labor market, and society needs refreshed curriculum and career training programs to fill the new and better paying jobs that will become available as a result of advances in cognitive systems.
Really, this doesn't seem to be about AI's legal and governance implications so much as improving the way the public receives information about and discusses AI. It reads more like a media pamphlet than an effort to advance the still-nascent discussion of how legal and regulatory institutions around the world should respond to AI. Don't get me wrong, I completely agree that educating the media and the public about the true benefits and risks of AI is an important and worthy goal. It's just not what I think the OSTP had in mind when it asked for public comments on the legal and governance implications of AI. Given IBM's influence in the world of technology in general and AI in particular, this feels like an unfortunate missed opportunity.
That being said, IBM’s response as a whole is definitely worth a read if you want a primer on the current state and near-term future of AI.
(Not to get on a soapbox, but had I written a comment in response to the RFI (life got in the way of my submitting one before the July 22 deadline), I would have focused on the need to (1) ensure human accountability for AI systems and (2) create clear lines of legal responsibility when AI systems cause harm. I'm particularly concerned about systems that rely heavily on machine learning. For systems whose long-term operations will largely be a function of what they "learn" from end users, applying the usual rules of products liability seems harsh: manufacturers could be held responsible for harms that are due primarily to what end users and other humans "taught" the system, rather than to any "defect" in its design or manufacture. But in the absence of strict liability, victims of AI-caused harm may be left without a source of compensation. Thus the dilemma.)