Law and AI Quick Hits: Canada Day / Fourth of July edition


Here’s a quick roundup of law- and policy-relevant AI stories from the past couple weeks.

A British privacy watchdog ruled that a group of London hospitals violated patient privacy laws by sharing information with Google DeepMind.  Given the constant push by all the major tech companies for access to data (in no small part because more data is crucial in the age of learning AI systems), expect to see many more data privacy disputes like this in the future.


Canada’s CTV reports on the continued push by some AI experts for “explainable” and “transparent” AI systems, as well as other experts’ skepticism about the feasibility of building AI systems that can “show their work” in a useful way.  Peter Norvig points to a potentially interesting workaround:

“[W]hat cognitive psychologists have discovered is that when you ask a human [about how and why they made a decision,] you’re not really getting at the decision process. They make a decision first, and then you ask, and then they generate an explanation and that may not be the true explanation,” he said at an event in June in Sydney, Australia.

“So we might end up being in the same place with machine learning where we train one system to get an answer and then we train another system to say – given the input of this first system, now it’s your job to generate an explanation.”

Norvig suggests looking for patterns in the decisions themselves, rather than the inner workings behind them.
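
To make that idea concrete, here is a minimal sketch of the two-system setup Norvig describes, using a “surrogate” model: train an opaque model to get the answers, then train a simple, interpretable model on those answers whose only job is to generate the explanation. The library (scikit-learn) and dataset are illustrative assumptions, not anything drawn from the CTV piece.

```python
# A minimal sketch of the two-system setup: System 1 answers, System 2 explains.
# scikit-learn and the breast-cancer dataset are illustrative choices only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# System 1: an opaque model trained to get the answer.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# System 2: an interpretable surrogate trained on System 1's decisions
# (not the true labels); its job is to explain, not to predict.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The surrogate's rules stand in as the generated "explanation" of the black box.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

Per-decision tools such as LIME and SHAP take the same post-hoc approach a step further, generating an explanation for each individual prediction rather than for the model as a whole.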


Wired ran a story on how AI could make it far easier to forge virtually anything, be it handwriting, voices, or even video footage:

In the future, realistic-looking and -sounding fakes will constantly confront people. Awash in audio, video, images, and documents, many real but some fake, people will struggle to know whom and what to trust.

***

At the current pace of progress, it may be as little as two or three years before realistic audio forgeries are good enough to fool the untrained ear, and only five or 10 years before forgeries can fool at least some types of forensic analysis. When tools for producing fake video perform at higher quality than today’s CGI and are simultaneously available to untrained amateurs, these forgeries might comprise a large part of the information ecosystem.

Needless to say, the widespread availability of these technologies would “transform the meaning of evidence and truth in domains across journalism, government communications, testimony in criminal justice, and, of course, national security.”  As if fake news weren’t bad enough.


The Independent reports that AI systems are starting to outperform human experts in another area, choosing which embryos to implant during in vitro fertilization:

During the process, AI was “trained” in what a good embryo looks like from a series of images.

AI is able to recognise and quantify 24 image characteristics of embryos that are invisible to the human eye.

These include the size of the embryo, texture of the image and biological characteristics such as the number and homogeneity of cells.

The tests were on cattle embryos rather than human ones, a potentially significant fact that the headline failed to mention.  Still, it’s another reminder that machine learning technology opens up nearly limitless possibilities for automating tasks that have long been the exclusive domain of highly educated humans.


The headline of this story is that NASA’s Frontier Development Lab is using AI to study potential methods for defending Earth against asteroid and comet strikes.  Not prominently mentioned is that the FDL is also exploring other applications of AI that (hopefully) will see far more frequent use, such as searching for water sources on the moon and providing warnings of solar storms.
