The Boston-based Future of Life Institute, backed by a $10 million donation from Elon Musk, recently announced its list of 37 winners of research grants in the field of artificial intelligence. Spurred by concerns from luminaries such as Musk, Stephen Hawking and Bill Gates that we’re ill-prepared for the coming age of machine super-intelligence, the grants — ranging in size from $20,000 to $1.5 million — are part of a bigger plan to prevent AI from wrecking the planet.
At the very least, one hopes, the ideas and concepts being explored in these winning AI grants might help prevent some of the “unintended” and “disastrous” consequences the Future of Life Institute has warned about — such as robot homicides in factories or road collisions involving self-driving cars.
1. Keeping super-smart weapons systems under human control
When most people think about killer AI taking over the planet, they usually think of a “Terminator”-like scenario populated by rogue cyborgs, Skynet and an epic battle between man and machine. While even the Future of Life Institute admits that a “Terminator” future for AI confuses fact with fiction, there is a real need to make sure that super-smart autonomous weapons systems don’t start overriding their human masters.
That might be why one of the grants highlighted by the Future of Life Institute was a $136,918 award to University of Denver visiting professor Heather Roff Perkins, who is studying the links between “Lethal Autonomous Weapons, AI and Meaningful Human Control.” According to the project’s summary, once autonomous weapons systems (think military drones and battlefield bots) become superintelligent, there’s always a risk that they will start to slip the bonds of human control and, in so doing, “change the future of conflict.”
2. Making AI systems explain their decisions to humans in excruciating detail
At some point, computers are going to far surpass the intellectual capacity of their human operators. When that day comes, we’re going to need to know how they think — all the little assumptions, inferences and predictions that go into their final decisions. That’s especially true for complex autonomous AI systems that integrate sensors, computers and actuators, all of which can process and act on far more data than humans could analyze on their own.
To that end, Professor Manuela Veloso of Carnegie Mellon University received a $200,000 grant to find ways to make complex AI systems explain their decisions to humans. As she suggests, the only way to make these AI systems truly accepted and trusted is to make them completely transparent in their decision-making. That may not be a big deal if it’s a matter of asking your Internet of Things device why it turned off the lights at home, but it’s a much bigger deal if you’re relying on an AI medical assistant to prescribe medications or treatments.