Doctors and Lawyers: There’s an AI app for that (but not really)

Source: twentysomethinglawyer.wordpress.com

Over the past few weeks, stories have broken suggesting that AI is breaking through into two of the world’s most venerable professions: law and medicine. A couple of weeks ago, news outlets reported that a major law firm had hired an AI-based “lawyer,” and the Daily Mail ran a story this weekend on a new health app called Check, declaring: “It’s man versus robot in the battle of the doctors: World’s first ‘artificial intelligence’ medic set to be pitted against the real thing in landmark experiment for medicine.”  As always, the media headlines make these technologies sound much more impressive than they actually are.  Both sound like more convenient versions of existing tools that doctors, lawyers, and non-professionals alike already use on a daily basis.

As the Daily Mail headline’s shift from “doctor” to “medic” suggests, the actual function of the Check app is not equivalent to that of a trained physician–and if you read the first two paragraphs of the article, it is clearly not even close.  Check will not convert your iPhone into Baymax, much less House:

The smartphone app has been designed to act like a triage nurse, asking a series of questions to advise users whether their problem is nothing to worry about, something they should consult their GP about, or a matter that requires calling 999*.

* Dialing 999 in the UK is, I take it, like dialing 911 in the US.

So basically, Check is a more user-friendly version of the WebMD Symptom Checker, which was itself an easier-to-use version of the symptom-based decision trees that can be found in home medical companion books. (I wore out my family’s copy of the AMA’s Home Medical Encyclopedia when I was a teenage hypochondriac.)  It offers no recommended treatment or even, as far as I can tell, a possible diagnosis–which actually makes it sound less useful than its internet and book-based predecessors.
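To see how modest this kind of tool really is, here is a toy sketch of the symptom-based decision tree that apps like Check and the WebMD Symptom Checker automate. The questions and rules below are invented for illustration (and are obviously not medical advice); the real app presumably uses a far larger tree:

```python
# A hypothetical triage decision tree: each node asks a yes/no question,
# and each leaf is one of the three recommendations Check reportedly gives.
TRIAGE_TREE = {
    "question": "Is the person unconscious or struggling to breathe?",
    "yes": "Call 999",
    "no": {
        "question": "Has the symptom lasted more than a week?",
        "yes": "Consult your GP",
        "no": "Nothing to worry about; monitor at home",
    },
}

def triage(answers):
    """Walk the tree with a sequence of 'yes'/'no' answers; return advice."""
    node = TRIAGE_TREE
    for answer in answers:
        node = node[answer]
        if isinstance(node, str):  # reached a leaf: a recommendation
            return node
    raise ValueError("Not enough answers to reach a recommendation")
```

Note what the sketch does not do: it never names a condition or suggests a treatment, which is exactly the limitation described above.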

It can, of course, offer an edge in speed because computers can process information far faster than any human and because AI systems can analyze more complex and nuanced information than a decision tree.  That is useful, but it’s not akin to what a medic or triage nurse can do–and it certainly does not have functions that make it remotely comparable to a trained physician.

The supposed AI “lawyer,” dubbed “ROSS” in honor of the company that designed it, has somewhat more impressive capabilities:

Ross, “the world’s first artificially intelligent attorney” built on IBM’s cognitive computer Watson, was designed to read and understand language, postulate hypotheses when asked questions, research, and then generate responses (along with references and citations) to back up its conclusions.

Ross also learns from experience, gaining speed and knowledge the more you interact with it.

“You ask your questions in plain English, as you would a colleague, and ROSS then reads through the entire body of law and returns a cited answer and topical readings from legislation, case law and secondary sources to get you up-to-speed quickly,” the website says. “In addition, ROSS monitors the law around the clock to notify you of new court decisions that can affect your case.”

In other words, ROSS is a legal research tool that can learn, with the added ability to provide “answers” to natural-language questions, presumably by culling key quotes from relevant sources of law and citing those sources.  I strongly suspect that the supporting citations would be far more useful than the actual “answers” ROSS provides, because context is everything when assessing the significance and meaning of a particular portion of a case or statute–as stated in a prior post, the very nature of law is subjective, so there is often a high degree of uncertainty in any statement of “what the law is.”  ROSS does not appear to identify and explain such sources of legal uncertainty, or to formulate the best way to frame the law for use in a particular case–the main things lawyers are paid to do.  So while ROSS seems like a pretty nifty step up from Westlaw and Lexis, its skillset still falls far short of a decent paralegal’s, much less a lawyer’s.
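At its core, the “cited answer” behavior described above is a retrieval problem: match a question against a corpus of legal texts and return the best passage with its citation. Here is a deliberately crude sketch using word overlap as the ranking signal; the cases are invented, and a real system like ROSS presumably uses far richer language models than this:

```python
# A toy "cited answer" search: rank (citation, passage) pairs by word
# overlap with the question and return the top passage with its citation.
# Both cases below are hypothetical examples, not real authorities.
CORPUS = [
    ("Smith v. Jones, 123 F.3d 456 (2d Cir. 1997)",
     "A contract requires offer acceptance and consideration"),
    ("Doe v. Roe, 789 P.2d 101 (Cal. 1990)",
     "Negligence requires duty breach causation and damages"),
]

def search(question, corpus=CORPUS):
    """Return the best-matching passage, quoted with its citation."""
    q_words = set(question.lower().split())
    scored = []
    for citation, text in corpus:
        overlap = len(q_words & set(text.lower().split()))
        scored.append((overlap, citation, text))
    scored.sort(reverse=True)           # highest word overlap first
    _, citation, text = scored[0]
    return f'"{text}" ({citation})'     # a quote plus a citation--no analysis
```

The point of the sketch is the return value: a quote and a citation, with no assessment of whether the authority is still good law or how it applies to the facts at hand–which is why the citations, not the “answers,” carry the value.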

And Lexis and Westlaw will probably catch up with ROSS before too long (or maybe they have already, if ROSS’s capabilities are less impressive than they sound).  Westlaw and Lexis both already have search tools that can highlight the relevant portions of cases in response to a user’s natural-language questions.  WestlawNext also has an algorithmic “Research Recommendations” tool that directs users to relevant legal sources based on prior searches.  These tools are still clunky (Boolean and Key Number searches remain a far faster way of finding relevant cases), but I would imagine Westlaw and Lexis will improve them over time and give them machine learning capabilities if they haven’t done so already.  That will make Westlaw and Lexis more useful to lawyers and reduce the time lawyers spend on research, but it would not convert Westlaw and Lexis into robo-lawyers.

I have little doubt that AI systems will eventually get to a point where they can meet or exceed the abilities of human professionals in law and medicine–maybe even in our lifetimes.  But be skeptical of any news stories claiming that we’re already there.
