What does the world look like to AI?
Researchers have puzzled over this for decades, but in recent years, the question has become more pressing. Machine vision systems are being deployed in more and more areas of life, from health care to self-driving cars, but “seeing” through the eyes of a machine — understanding why it classified that person as a pedestrian but that one as a signpost — is still a challenge. Our inability to do so could have serious, even fatal, consequences. Some would argue it already has, pointing to deaths involving self-driving cars.
New research from Google and nonprofit lab OpenAI hopes to further pry open the black box of AI vision by mapping the visual data these systems use to understand the world. The method, dubbed “Activation Atlases,” lets researchers analyze the workings of individual algorithms, unveiling not only the abstract shapes, colors, and patterns they recognize, but also how they combine these elements to identify specific objects, animals, and scenes.
Google’s Shan Carter, a lead researcher on the work, told The Verge that if previous research had been like revealing individual letters in algorithms’ visual alphabet, Activation Atlases offers something closer to a whole dictionary, showing how letters are put together to make actual words. “So within an image category like ‘shark,’ for example, there will be lots of activations that contribute to it, like ‘teeth’ and ‘water,’” says Carter.
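To give a rough sense of what building such an atlas involves, here is a minimal sketch of the activation-collection and projection steps, assuming PyTorch, torchvision, and scikit-learn are available. The layer choice, image paths, and t-SNE settings below are placeholders for illustration; the published work used GoogLeNet (InceptionV1) together with its own feature-visualization tooling, so this should be read as an approximation of the idea, not the authors' implementation.

```python
# A minimal sketch of the idea behind an activation atlas, assuming PyTorch,
# torchvision, and scikit-learn are installed. The layer choice, image paths,
# and t-SNE settings are placeholders; the published work used InceptionV1
# (GoogLeNet) with its own feature-visualization tooling.
import torch
from PIL import Image
from sklearn.manifold import TSNE
from torchvision import models, transforms

# Pretrained classifier; hook one hidden layer to collect its activations.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1).eval()
collected = []

def grab(_module, _inputs, output):
    # Average over spatial positions so each image yields one activation vector.
    collected.append(output.mean(dim=(2, 3)).detach())

hook = model.inception4d.register_forward_hook(grab)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Placeholder: in practice activations are sampled from many thousands of
# images, not a handful.
image_paths = ["example_shark.jpg", "example_signpost.jpg"]
with torch.no_grad():
    for path in image_paths:
        model(preprocess(Image.open(path).convert("RGB")).unsqueeze(0))
hook.remove()

# Project the activation vectors to 2D. An atlas then bins this plane into a
# grid and renders a feature visualization of each cell's average activation,
# which is the part that produces recognizable tiles like "teeth" or "water".
vectors = torch.cat(collected).numpy()
layout = TSNE(n_components=2,
              perplexity=min(30, len(image_paths) - 1)).fit_transform(vectors)
print(layout.shape)  # (number of images, 2)
```

Rendering the per-cell images typically relies on optimization-based feature visualization, which is outside the scope of this sketch.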
The work is not necessarily a huge breakthrough, but it’s a step forward in a wider field of research known as “feature visualization.” Ramprasaath Selvaraju, a PhD student at Georgia Tech who was not involved in the work, said the research was “extremely fascinating” and had combined a number of existing ideas to create a new “incredibly useful” tool.
Selvaraju told The Verge that, in the future, work like this will have many uses, helping us to build more efficient and advanced algorithms as well as improve their safety and remove bias by letting researchers peer inside. “Due to the inherent complex nature [of neural networks], they lack interpretability,” says Selvaraju. But in the future, he says, when such networks are routinely used to steer cars and guide robots, that interpretability will be a necessity.