Thursday, 7 March 2019

A new tool from Google and OpenAI lets us better see through the eyes of artificial intelligence

What does the world look like to AI?
Researchers have puzzled over this for decades, but in recent years, the question has become more pressing. Machine vision systems are being deployed in more and more areas of life, from health care to self-driving cars, but “seeing” through the eyes of a machine — understanding why it classified one person as a pedestrian but another as a signpost — is still a challenge. Our inability to do so could have serious, even fatal, consequences. Some would argue it already has, given the deaths involving self-driving cars.
New research from Google and nonprofit lab OpenAI hopes to further pry open the black box of AI vision by mapping the visual data these systems use to understand the world. The method, dubbed “Activation Atlases,” lets researchers analyze the workings of individual algorithms, unveiling not only the abstract shapes, colors, and patterns they recognize, but also how they combine these elements to identify specific objects, animals, and scenes.
Google’s Shan Carter, a lead researcher on the work, told The Verge that whereas previous research had been like revealing individual letters in algorithms’ visual alphabet, Activation Atlases offer something closer to a whole dictionary, showing how the letters are put together to make actual words. “So within an image category like ‘shark,’ for example, there will be lots of activations that contribute to it, like ‘teeth’ and ‘water,’” says Carter.
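To make the idea concrete, here is a minimal sketch of the kind of grid-averaging that underlies an activation atlas: collect a layer’s activation vectors across many images, project them to 2D, bin the points into a grid, and average the activations in each cell. This is an illustrative toy, not the researchers’ implementation — the published work projects with UMAP and renders each cell with feature visualization, while this sketch uses synthetic data and a simple PCA (via SVD) to stay self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
# Pretend we ran 1,000 images through a network and saved the
# activations of one 512-unit layer (synthetic stand-in data here).
activations = rng.normal(size=(1000, 512))

# Project the activation vectors to 2D with PCA (top two components).
centered = activations - activations.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # shape (1000, 2)

# Bin the 2D points into a grid; each occupied cell becomes one tile.
grid_size = 8
lo, hi = coords.min(axis=0), coords.max(axis=0)
cell_idx = np.floor((coords - lo) / (hi - lo + 1e-9) * grid_size).astype(int)

cells = {}
for idx, act in zip(map(tuple, cell_idx), activations):
    cells.setdefault(idx, []).append(act)

# Average the activations in each cell; in the real method, each of
# these averaged vectors is rendered as an image via feature
# visualization to show what that region of activation space "means".
atlas_tiles = {cell: np.mean(acts, axis=0) for cell, acts in cells.items()}
print(len(atlas_tiles), "occupied grid cells")
```

In the real atlas, nearby tiles show smoothly related concepts — the “teeth” and “water” activations Carter mentions would appear as neighboring regions that both contribute to the “shark” class.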
The work is not necessarily a huge breakthrough, but it’s a step forward in a wider field of research known as “feature visualization.” Ramprasaath Selvaraju, a PhD student at Georgia Tech who was not involved in the work, said the research was “extremely fascinating” and had combined a number of existing ideas to create a new “incredibly useful” tool.
Selvaraju told The Verge that, in the future, work like this will have many uses, helping us to build more efficient and advanced algorithms as well as improve their safety and remove bias by letting researchers peer inside. “Due to the inherent complex nature [of neural networks], they lack interpretability,” says Selvaraju. But in the future, he says, when such networks are routinely used to steer cars and guide robots, this will be a necessity.
