On subjectivity, AI systems, and legal decision-making

[Comic omitted; source: Dilbert]


The latest entry in my series of posts on autonomous weapon systems (AWSs) suggested that it would be exceedingly difficult to ensure that AWSs complied with the laws of war.  A key reason for this difficulty is that the laws of war depend heavily on subjective determinations.  One might easily expand this point and argue that AI systems cannot–or should not–make any decisions that require interpreting or applying law because such legal determinations are inherently subjective.

Ever the former judicial clerk, I can’t resist pausing for a moment to define my terms.  “Subjective” can have subtly different meanings depending on the context.  Here, I’m using the term to mean something that is a matter of opinion rather than a matter of fact.  In law, I would say that identifying what words are used in the Second Amendment is an objective matter; discerning what those words mean is a subjective matter.  All nine justices who decided DC v. Heller (and indeed, anyone with access to an accurate copy of the Bill of Rights) agreed that the Second Amendment reads: “A well regulated militia, being necessary to the security of a free state, the right of the people to keep and bear arms, shall not be infringed.”  They disagreed quite sharply about what those words mean and how they relate to each other.  (Legal experts even disagree on what the commas in the Second Amendment mean.)

Given that definition of “subjective,” here are some observations.

First, reducing laws to programmable computer code would, in and of itself, represent an important value choice regarding what the law is.  Many lawyers, judges, and legal scholars–including Ronald Dworkin, probably the most important legal philosopher of the past 50 years–dispute the notion that fixed textual sources of law such as statutes and judicial opinions can be separated from broader policy and ethical considerations.  To that sizable group of legal professionals, there is no effective way to formalize and encode law unless you can first formalize and encode concepts like justice, fairness, and morality–and good luck getting any sort of general agreement on how to formalize those concepts.  So if AI designers simply reduced statutes and judicial opinions to 1s and 0s (or qubits, if that’s your thing) and encoded them into an AI system, they would have implicitly decided that the specific content of those statutes and opinions, rather than broader ethical and policy considerations, should guide the AI’s legal decisions.  Put another way, the AI designers would have made a subjective choice regarding what the law is.

A related issue in common law legal systems is the difficulty in determining exactly which parts of a judicial opinion are legally binding and which are not.  Portions of a judicial opinion that discuss the law but may not be binding include background information on how a law developed, assumptions for the sake of argument (a distinction that the news media really needs to learn), and obiter dicta (a fuzzy term that basically means something like “a side comment” or “said in passing”).  Lawyers often squabble over whether a particular portion of a judicial opinion is binding or the degree to which an opinion’s holding depended on the specific facts of the case.  Here too, the very question of “what the law is” can be highly subjective and fluid.

Also, the very act of interpreting the text of a statute, case, or constitutional provision would seem to be subjective, because how that text should be interpreted–and what outside sources (dictionaries, earlier versions of the law, etc.) are relevant to that interpretation–is a matter of opinion.  Even if virtually all judges and lawyers agree on how to interpret a particular law (and please, wake me up when that happens), that interpretation would still be a matter of opinion, and the plural of “opinion” is not “fact.”  Consequently, the meaning of even the most straightforward of laws is “subjective” to some degree.


Despite this, many laws are fairly straightforward, which is why many simple legal transactions can be conducted using fill-in-the-blank forms by people with no legal education.  Consequently, there probably is a way to create AI systems that can make reasonably reliable legal determinations in many situations.  Recent news stories about the chatbot that can file parking ticket appeals by asking a series of yes/no questions illustrate how such simple legal tasks can be automated.  People might scoff at the chatbot’s 47% success rate, but from my admittedly limited experience sitting through traffic court proceedings, 47% is probably not much different from the success rate of human lawyers on traffic appeals.
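To make this concrete, here is a minimal sketch of what a yes/no decision-tree interview of this kind might look like.  The questions, defense categories, and triggering logic are hypothetical stand-ins of my own, not the actual chatbot’s script:

```python
# A minimal sketch of a yes/no parking-appeal interview. The questions and
# defense categories below are illustrative placeholders, not any real
# product's logic.

QUESTIONS = [
    ("Were the parking signs at the location visible and legible?", "unclear_signage"),
    ("Was the parking meter working when you parked?", "broken_meter"),
    ("Was your vehicle being actively loaded or unloaded at the time?", "loading"),
]

def run_appeal_interview() -> list[str]:
    """Walk through the yes/no questions and collect applicable defenses."""
    defenses = []
    for question, defense in QUESTIONS:
        answer = input(question + " (y/n) ").strip().lower()
        # A "no" to the first two questions, or a "yes" to the third,
        # suggests a recognized defense may apply.
        triggered = (answer == "y") if defense == "loading" else (answer == "n")
        if triggered:
            defenses.append(defense)
    return defenses

if __name__ == "__main__":
    found = run_appeal_interview()
    if found:
        print("Potential grounds for appeal:", ", ".join(found))
    else:
        print("No standard defense identified; an appeal seems unlikely to succeed.")
```

The real product presumably layers document generation and filing on top of an interview like this, but the branching interview is the part that automates the legal judgment.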

For murkier and/or more subjective areas of law, the use of predictive coding to identify relevant documents during document review already provides a glimpse of how AI systems can be programmed to make subjective determinations.  Predictive coding programs are designed to search through electronically stored sets of documents and identify those that might be relevant to an ongoing lawsuit.  Before going through the entire set of documents, which can be millions of pages in complex lawsuits, the predictive coding program provides a human lawyer with a “seed set” of sample documents and then engages in a form of machine learning:

This sample set of documents is reviewed by subject matter experts. The determinations made on the seed set comprise the primary reference data to teach the predictive coding machine how to recognize patterns of relevance in the larger document set. Based on the calls made in the documents in the seed set, the computer will be able to predict categorizations for the remaining documents in the larger universe.
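A simplified sketch of that workflow might look like the following, using scikit-learn as a stand-in for a commercial predictive coding engine (an assumption on my part; vendors use their own proprietary models and features):

```python
# A toy version of predictive coding: experts label a small "seed set" as
# relevant (1) or not relevant (0), a classifier learns from those calls,
# and the model then scores the much larger remaining document universe.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set, as coded by subject matter experts.
seed_docs = [
    "Email discussing the disputed merger terms",
    "Quarterly cafeteria menu announcement",
    "Memo summarizing merger due diligence findings",
    "Holiday party planning thread",
]
seed_labels = [1, 0, 1, 0]  # 1 = relevant to the lawsuit, 0 = not relevant

# Learn term-weight features from the seed set and fit a classifier.
vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

# Score the (in practice, vastly larger) remaining universe of documents.
remaining_docs = [
    "Draft merger agreement circulated to the board",
    "Reminder: update your parking permit",
]
scores = model.predict_proba(vectorizer.transform(remaining_docs))[:, 1]
for doc, score in zip(remaining_docs, scores):
    print(f"relevance {score:.2f}  {doc}")
```

In real document review the seed set runs to hundreds or thousands of expert-coded documents, and the resulting scores are typically validated by sampling before anyone relies on them.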

It is not difficult to imagine similar machine learning techniques being applied to other subjective legal determinations, such as whether a particular statement in an opinion is legally binding.  If advances in natural language processing and machine learning continue apace, an AI system given access to a broad enough “data set” of black-letter law and an opportunity to go through many iterations of “seed set” review by human legal experts might be able to distinguish the binding portions of judicial opinions from the non-binding portions with a degree of accuracy comparable to human lawyers.
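As a rough illustration, here is a toy version of that iterative loop, with uncertainty sampling standing in for successive rounds of expert “seed set” review.  The expert_label() helper and the sample sentences are hypothetical placeholders, not an established legal-tech pipeline:

```python
# Toy active-learning loop for labeling opinion sentences as binding
# ("holding") or not ("dictum"). Each round, the model asks the human
# expert about the sentence it is least sure of, then retrains.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def expert_label(sentence: str) -> int:
    """Stand-in for a human legal expert's call: 1 = holding, 0 = dictum."""
    return int("we hold" in sentence.lower())  # toy heuristic for the demo

sentences = [
    "We hold that the statute applies retroactively.",
    "The dissent below raised an interesting policy point.",
    "We hold that the defendant waived this argument.",
    "One might imagine a different rule in admiralty cases.",
    "The history of this doctrine dates to the nineteenth century.",
    "We hold that the injunction was properly granted.",
]

# Start with a small labeled seed set; the rest is the unlabeled pool.
labeled_idx = [0, 1]
pool_idx = [2, 3, 4, 5]
labels = {i: expert_label(sentences[i]) for i in labeled_idx}

vectorizer = TfidfVectorizer().fit(sentences)
X = vectorizer.transform(sentences)

for _ in range(2):  # each pass corresponds to one round of expert review
    model = LogisticRegression().fit(X[labeled_idx], [labels[i] for i in labeled_idx])
    # Query the expert about the pool sentence the model is least sure of.
    probs = model.predict_proba(X[pool_idx])[:, 1]
    uncertain = pool_idx[int(np.argmin(np.abs(probs - 0.5)))]
    labels[uncertain] = expert_label(sentences[uncertain])
    labeled_idx.append(uncertain)
    pool_idx.remove(uncertain)

# Retrain on everything labeled so far and label the remaining pool.
model = LogisticRegression().fit(X[labeled_idx], [labels[i] for i in labeled_idx])
for i in pool_idx:
    pred = model.predict(X[i])[0]
    print(f"{'holding' if pred else 'dictum':7s}  {sentences[i]}")
```

Each pass through the loop corresponds to one round of human review; over enough rounds, the model’s calls should converge toward the experts’ judgments, at least on the points where the experts agree with one another.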


The above points largely relate to whether AI systems can be programmed to make reliable legal determinations given the challenges presented by the subjective aspects of law.  That does not really answer the question of whether AI programs should be making such determinations.

For example, the “subjectivity” objection to autonomous weapon systems does not stem simply from a belief that there would be no way for a machine to “learn” the concept of proportionality.  It also–and maybe primarily–stems from a belief that delegating the decision to kill to a machine would represent an unacceptable abdication of human moral responsibility.  Think of what the human review of a “seed set” of proportionality judgments would have to look like.  Would the deaths of 100 civilians be a “proportional” price for capturing a key bridge?  Would the destruction of an entire block of civilian houses be acceptable if one (but only one) of those houses served as an enemy weapons depot?  We may prefer to call the proportionality decision “subjective” and leave it at that, just to avoid the grim calculus that would be required to reduce concepts like proportionality to computer code.

The same concerns would arise if we were to allow AI systems to make legal decisions in other areas of law.  One post in the not-too-distant future [update: linked here] will discuss the concept of selective revelation, a process by which an AI system would be programmed to make probable cause determinations and authorize the collection of data for surveillance purposes.  Do we want to reduce the concept of probable cause to a computer model so that a machine can, in effect, issue search warrants and invade people’s privacy?

These questions tie in closely to the research of Vasant Dhar, whose work focuses on identifying when computers make better decisions than humans.  When it comes to law, I suspect the degree to which a legal decision depends on “subjective” considerations will play a large role in determining whether that decision is one that can be made by an AI system–and how comfortable we are with delegating the decision to a machine.
