Selective Revelation: Should we let robojudges issue surveillance and search warrants?

Credit: SimplySteno Court Reporting Blog

AI systems have an increasing ability to perform legal tasks that were once the exclusive province of lawyers.  Anecdotally, both lawyers and the general public seem to be getting more comfortable with the idea that legal grunt work (drafting contracts, reviewing voluminous documents, and the like) can be performed by computers with varying levels of human lawyer oversight.  But the idea of a machine acting as a judge is another matter entirely; people seem far less keen on assigning to machines the task of making subjective legal decisions on matters such as liability, guilt, and punishment.

Consequently, I was intrigued when Thomas Dietterich pointed me to the work of computer scientist Dr. Latanya Sweeney on “selective revelation.”  Sweeney, who serves as Professor of Government and Technology in Residence at Harvard, came up with selective revelation as a method of what she terms “privacy-preserving surveillance”: balancing privacy protection against the need for surveillance entities to collect and share electronic data that might reveal potential security threats or criminal activity.

She proposes, in essence, creating a computer model that would mimic, albeit in a nuanced fashion, the balancing test that human judges undertake when determining whether to authorize a wiretap or issue a search warrant:

When a law officer wants to intrude on a person’s private life or affairs, she needs a search warrant, which may be issued by a human judge. In the general case, an officer appears before the judge and reports either facts for which she has first-hand knowledge or facts that she was told through an informant. Typically, the judge in making a decision uses a two-prong test to answer: (1) what is the basis of the knowledge; and (2) is the source believable. This process can be modeled in technology by replacing the officer with anomaly or data mining algorithms, and the informant with data provided from various data sources. The human judge is replaced with a combination of contracts with the original data collectors and a technologically enforceable policy statement having preset levels to match the identifiability of the provided information with the minimal information needed by the algorithm.
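
Rendered as code, the machine analogue of that two-prong test might look something like the sketch below.  To be clear, this is my own illustration, not Sweeney’s implementation: the `AnomalyReport` fields, the threshold values, and the way the two prongs are scored are all invented for exposition.

```python
from dataclasses import dataclass

@dataclass
class AnomalyReport:
    """Stands in for the officer's testimony: the output of a detection algorithm."""
    score: float               # strength of the anomaly signal, 0.0 to 1.0
    detector_accuracy: float   # validated accuracy of the algorithm ("basis of knowledge")
    source_reliability: float  # contractual trust in the data source ("is the source believable")

def two_prong_test(report: AnomalyReport,
                   min_basis: float = 0.7,
                   min_credibility: float = 0.8) -> bool:
    """A machine analogue of the judge's two-prong test.

    Prong 1 (basis of knowledge): is the anomaly signal itself strong enough?
    Prong 2 (believability): are the detector and its data sources trustworthy?
    The thresholds here are illustrative placeholders, not values from Sweeney's work.
    """
    basis_ok = report.score >= min_basis
    credible = report.detector_accuracy * report.source_reliability >= min_credibility
    return basis_ok and credible
```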

The system would work something like this: during normal operation, only fully anonymized data can be revealed.  If the data sources available to the system indicate the presence of unusual activity, slightly less anonymous (more identifiable) data is revealed.  The higher the system’s confidence that suspicious or criminal activity is afoot, the more detail and personally identifiable information the system reveals.
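
As a rough sketch of that sliding scale (the tier names and confidence cutoffs here are invented for illustration; Sweeney’s own work defines the actual levels of identifiability), the policy layer might map the system’s confidence to a version of the data:

```python
from enum import Enum

class Identifiability(Enum):
    """Versions of the data, least to most identifying (labels are mine, not Sweeney's)."""
    ANONYMIZED = 1            # aggregates only; normal operation
    PSEUDONYMIZED = 2         # consistent tokens stand in for identities
    PARTIALLY_IDENTIFIED = 3  # quasi-identifiers (e.g., ZIP code, birth year) revealed
    FULLY_IDENTIFIED = 4      # explicit identities revealed

def revelation_level(confidence: float) -> Identifiability:
    """Map the system's confidence that suspicious activity is afoot to the
    most identifiable version of the data it may release.  Cutoffs are invented."""
    if confidence < 0.5:
        return Identifiability.ANONYMIZED
    elif confidence < 0.75:
        return Identifiability.PSEUDONYMIZED
    elif confidence < 0.9:
        return Identifiability.PARTIALLY_IDENTIFIED
    return Identifiability.FULLY_IDENTIFIED
```

Note that the function always returns some version of the data; what varies is how identifiable that version is.  That is precisely the contrast with a binary warrant decision that Sweeney draws below.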

As this description suggests, “Selective Revelation differs from the probable cause predicate in that decisions are based on a sliding scale of identifiability and not a binary one”:

When a human judge makes a search warrant decision, the result is typically binary–access is granted or not. But under Selective Revelation, the result is nuanced. The decision determines which version of the data will be provided (from anonymous to explicitly identifiable), not whether data will be provided at all.

Why might this be a desirable system for issuing surveillance warrants?  Well, one key shortcoming of the human legal system in the digital age is that computers (and the humans operating them) can act and react on timescales far shorter than even the most diligent human law enforcement officials and regulators can match.  That is one reason financial regulators have increasingly embraced stock exchange trading curbs (colloquially known as “circuit breakers”) that automatically suspend trading when unusual changes in the market are detected: human regulators simply lack the capacity to intervene quickly enough in an era of high-speed algorithmic trading.  As algorithm-driven economic activity becomes more prevalent, we may need algorithm-driven law enforcement tools to play a role in preventing illegal activity, and algorithmic “judges” to make quick decisions on whether and when those tools can be deployed.
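
The circuit-breaker logic itself is almost trivially simple, which is part of its appeal as a model.  A minimal sketch, assuming a single reference price and a flat percentage trigger (the 7% default loosely echoes the U.S. market-wide Level 1 halt, though the real rules are considerably more involved):

```python
def circuit_breaker_tripped(reference_price: float, current_price: float,
                            halt_threshold: float = 0.07) -> bool:
    """Return True if trading should halt: the price has fallen by more than
    `halt_threshold` from the reference price.  The 7% default loosely mirrors
    the U.S. market-wide Level 1 breaker; actual exchange rules are more complex."""
    decline = (reference_price - current_price) / reference_price
    return decline >= halt_threshold
```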

This would, of course, raise all sorts of interesting constitutional issues in the United States, not to mention moral and ethical concerns the world over about whether these are the sorts of decisions we want machines to make.  But who knows?  Given the furore that arose over incidents such as the Edward Snowden leaks, maybe people will some day prefer dispassionate computers to human judges when it comes to deciding whether to collect and reveal personal information.
