On AI, prescription drugs, and managing the risks of things we don’t understand

Last month, Technology Review published a good article discussing the “dark secret at the heart of AI”–namely, that “[n]o one really knows how the most advanced algorithms do what they do.” The opacity of algorithmic systems has long drawn attention and criticism, but the concern has broadened and deepened in the past few years, as breakthroughs in “deep learning” have led to a rapid increase in the sophistication of AI. These deep learning systems operate using deep neural networks designed to roughly simulate the way the human brain works–or, to be more precise, the way the human brain works as we currently understand it.
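To give a concrete sense of what that means in practice, the toy sketch below is my own illustration, not something drawn from the Technology Review piece: it uses only NumPy to train a tiny neural network on a simple task (XOR, chosen purely for illustration). The point is that the finished system’s behavior ends up encoded entirely in learned numeric weights rather than in rules anyone wrote by hand.

import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function, which no single linear rule can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights -- no human writes these values by hand.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: the network's current "guess" at the answers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight slightly to reduce squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # typically close to [0, 1, 1, 0]
print(W1)                    # the learned "logic" is just these numbers

Even in this four-neuron toy, the logic of the system lives in those weight matrices rather than in any legible instruction; scale the same idea up to millions of parameters and the opacity problem described above comes into focus.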
Such systems can effectively “program themselves,” creating much or most of the code through which they operate. That code can be so complex that even the people who built and initially programmed the system may not be able to fully explain why it does what it does: