AI systems are increasingly able to perform legal tasks that were once the exclusive province of lawyers. Anecdotally, both lawyers and the general public seem to be growing more comfortable with the idea that legal grunt work (drafting contracts, reviewing voluminous documents, and the like) can be performed by computers with varying levels of human lawyer oversight. But the idea of a machine acting as a judge is another matter entirely; people don’t seem keen on assigning to machines the task of making subjective legal decisions on matters such as liability, guilt, and punishment.
Consequently, I was intrigued when Thomas Dietterich pointed me to the work of computer scientist Dr. Latanya Sweeney on “selective revelation.” Sweeney, who serves as Professor of Government and Technology in Residence at Harvard, developed selective revelation as a method of what she terms “privacy-preserving surveillance”: balancing privacy protection against the need for surveillance entities to collect and share electronic data that might reveal potential security threats or criminal activity.
She proposes, in essence, creating a computer model that would mimic, albeit in a nuanced fashion, the balancing test that human judges undertake when determining whether to authorize a wiretap or issue a search warrant: