Could we be entering an AI-powered arms race in cyberwarfare?

Soon to be obsolete?


Much has been made of the possibility that AI-powered autonomous weapons will become a factor in conventional warfare in the coming years. But in the sphere of cyberwarfare, AI is already starting to play a major role, as laid out in an article in this week’s Christian Science Monitor.

Many nations–most notably Russia and China–already employ armies of hackers to conduct operations in the cybersphere against other countries.  The US Department of Defense’s response might be a harbinger of things to come:

[T]he allure of machines quickly fixing vulnerabilities has led the Defense Advanced Research Projects Agency (DARPA), the Defense Department’s technology lab, to organize the first-ever hacking competition that pits automated supercomputers against each other at next month’s Black Hat cybersecurity conference in Las Vegas.

With the contest, DARPA is aiming to find new ways to quickly identify and eliminate software flaws that can be exploited by hackers, says DARPA program manager Mike Walker.

“We want to build autonomous systems that can arrive at their own insights, do their own analysis, make their own risk equity decisions of when to patch and how to manage that process,” said Walker.
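To give a flavor of what automated flaw-finding looks like at its simplest, here is a toy sketch (in Python, entirely my own illustration and nothing like the actual systems DARPA’s contestants will field): a fuzzer that bombards a deliberately buggy stand-in parser with randomized inputs and records the ones that crash it. Finding the crash is the easy half of Walker’s vision; autonomously patching it is the hard part.

```python
import random
import string

def fragile_parser(data: str) -> int:
    """Stand-in target program with a planted flaw."""
    # Planted bug: chokes on null bytes and overlong inputs.
    if "\x00" in data or len(data) > 64:
        raise ValueError("malformed input")
    return len(data)

def fuzz(target, trials: int = 10_000) -> list:
    """Throw randomized inputs at `target` and collect the ones that crash it."""
    alphabet = string.printable + "\x00"
    crashes = []
    for _ in range(trials):
        candidate = "".join(random.choices(alphabet, k=random.randint(0, 80)))
        try:
            target(candidate)
        except Exception as exc:
            crashes.append((candidate, exc))
    return crashes

if __name__ == "__main__":
    found = fuzz(fragile_parser)
    print(f"{len(found)} crashing inputs found in 10,000 trials")
```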

One of the big concerns about deploying autonomous weapon systems (AWSs) in the physical world is that doing so will lead to an arms race. Starting in the Cold War, the development of more advanced missile defense systems spurred the development of more advanced missiles, which in turn led to even more advanced missile defense systems, and so on. It is easy to see how the same dynamic could play out with AWSs: because they can react on far shorter timescales than human soldiers, the technology may quickly reach a point where the only effective counter to an enemy’s offensive AWS is a defensive AWS, kickstarting a cycle of ever-more-advanced AWS development.

The fear is that AWSs might render human military decisionmaking obsolete, with human commanders unable to intervene quickly enough to meaningfully affect combat operations between AWSs.

The cyberwarfare arena might be a testing ground for that “AI arms race” theory.  If state-backed hackers respond to AI-powered cybersecurity systems by developing new AI-powered hacking technologies, what happens next might prove an ominous preview of what could happen someday in the world of physical warfare.

Selective Revelation: Should we let robojudges issue surveillance and search warrants?

Credit: SimplySteno Court Reporting Blog


AI systems have an increasing ability to perform legal tasks that were once the exclusive province of lawyers. Anecdotally, both lawyers and the general public seem to be growing more comfortable with the idea that legal grunt work–drafting contracts, reviewing voluminous documents, and the like–can be performed by computers with varying levels of (human) lawyer oversight. But the idea of a machine acting as a judge is another matter entirely; people seem far less keen on assigning machines the task of making subjective legal decisions on matters such as liability, guilt, and punishment.

Consequently, I was intrigued when Thomas Dietterich pointed me to the work of computer scientist Dr. Latanya Sweeney on “selective revelation.” Sweeney, who serves as Professor of Government and Technology in Residence at Harvard, developed selective revelation as a method of what she terms “privacy-preserving surveillance,” i.e., balancing privacy protection against the need for surveillance entities to collect and share electronic data that might reveal security threats or criminal activity.

She proposes, in essence, creating a computer model that would mimic, albeit in a nuanced fashion, the balancing test that human judges undertake when determining whether to authorize a wiretap or issue a search warrant:
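To make the concept concrete, here is a minimal sketch of how such a gatekeeper might be mechanized. This is my own illustration, not Sweeney’s actual model: the fields, thresholds, and the `evidence_score` parameter are all invented stand-ins for the legal standards a human judge would apply.

```python
from dataclasses import dataclass

@dataclass
class Record:
    name: str
    address: str
    activity: str  # the behavior that triggered investigative interest

def selectively_reveal(record: Record, evidence_score: float) -> dict:
    """Release identifying detail in proportion to the strength of the showing.

    `evidence_score` (0.0-1.0) stands in for how compelling the agency's
    justification is; the cutoffs are invented for illustration and would,
    in a real system, encode the legal standards a judge applies.
    """
    if evidence_score < 0.3:
        # Weak showing: anonymous, activity-level data only.
        return {"activity": record.activity}
    if evidence_score < 0.7:
        # Intermediate showing: coarse location, still no identity.
        return {"activity": record.activity,
                "area": record.address.split(",")[-1].strip()}
    # Strong showing, akin to probable cause: full identification.
    return {"activity": record.activity,
            "name": record.name,
            "address": record.address}

suspect = Record("Jane Doe", "12 Elm St, Springfield",
                 "bulk purchases of restricted chemicals")
print(selectively_reveal(suspect, 0.2))   # anonymous view
print(selectively_reveal(suspect, 0.9))   # fully identified view
```

The hard policy question, of course, is where those thresholds get set and by whom, which is precisely the judgment judges exercise today.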

» Read more

AI in the Legal Workplace: Collaboration or Competition?

Source: Dilbert


As AI systems become more widespread and versatile, they will undoubtedly have a major impact on our workforce and economy. On a macro scale–that is, across the labor market as a whole–whether AI’s impact will be positive or negative is very much an open debate. The same is true of AI’s impact on many specific occupations. According to a widely cited 2013 Oxford study by Carl Frey and Michael Osborne, roughly half of jobs in the United States are “vulnerable” to automation. But whether AI systems prove “good” or “bad” for workers in a specific profession will depend in large part on whether AI serves as a complement to human workers or acts as a replacement for them.

In the legal profession, for instance, the rise of predictive coding and improved scan-and-search software has given law firms the option of automating some of the most time-consuming (and therefore expensive) aspects of identifying relevant documents during litigation, a task known as document review. Document review has long been bread-and-butter work for young lawyers, especially at firms handling complex litigation, which can require poring over thousands or even millions of pages of documents.
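For readers unfamiliar with how predictive coding works under the hood: it is, at bottom, supervised text classification. Here is a minimal sketch (assuming the scikit-learn library; the toy documents and relevance labels are invented for illustration) in which a model trained on a small attorney-coded seed set ranks unreviewed documents by likely relevance:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set: a few documents attorneys have already coded by hand.
seed_docs = [
    "Q3 merger negotiations with Acme, draft term sheet attached",
    "Lunch order for the holiday party",
    "Revised indemnification clause for the Acme agreement",
    "Fantasy football league standings",
]
seed_labels = [1, 0, 1, 0]  # 1 = relevant to the litigation, 0 = not

# Unreviewed documents the model will rank for human review.
unreviewed = [
    "Acme due diligence checklist, privileged and confidential",
    "Parking garage closed Friday for maintenance",
]

vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

# Score unreviewed documents by predicted probability of relevance.
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```

In real e-discovery workflows the loop is iterative: reviewers code the documents the model scores highest, the model retrains on those new labels, and the cycle repeats. That is exactly where the collaboration-versus-competition question arises: the model multiplies each reviewer’s reach, but it also shrinks the number of reviewers needed.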

» Read more