Explaining the magic

Watching artificial intelligence (AI) work at scale is impressive. Algorithms turn text into speech that is nearly indistinguishable from a human voice, recognize cats in billions of images, and generate “deep fake” videos that sneakily place famous faces onto imposter bodies.

However, mapping the workings of neural networks to human-understandable terms is a notoriously difficult problem. An image classifier can recognize stop signs, but it isn’t using the mechanisms humans use (octagonal shape, red color, a familiar four-letter word, placement by the side of the road) to make that judgment call. Its statistical innards deliver great results at scale, but they can also produce confounding mistakes or be duped by clever manipulation.
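
That “clever manipulation” has a concrete form in the research literature: adversarial examples, inputs perturbed just enough to flip a model’s answer while looking unchanged to a person. As a rough sketch of the idea (an illustration only, not STR’s code; PyTorch, a stock ResNet-18, and the placeholder inputs are all our assumptions), the classic fast gradient sign method nudges every pixel slightly in whichever direction most increases the model’s error:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Any pretrained classifier works; ResNet-18 is just a convenient stand-in.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """Return a copy of `image` perturbed so the model is more likely
    to mislabel it, while the change stays invisible to a human."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    return (image + epsilon * image.grad.sign()).detach()

# Demo with a random tensor as a stand-in for a real, preprocessed photo.
x = torch.randn(1, 3, 224, 224)
y = torch.tensor([919])            # arbitrary ImageNet class index
x_adv = fgsm_attack(x, y)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```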

At STR, we are using our artificial intelligence platform to deliver real-world results while pushing the frontiers of research. One capability we have developed provides interpretable reasoning for a face recognition system. First, we built a platform that can identify faces and classify them as real or imposter, as shown in this image of a real Obama and an imposter. Then, we built a tool that identifies just what makes an imposter image so fishy, finding, for example, that something definitely isn’t right with those eyes. We even made a short video and source code available for those who are particularly curious.
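
STR’s released video and source code aren’t reproduced here, but the core idea behind this kind of explanation tool can be sketched in a few lines of gradient-based saliency (a minimal illustration under our own assumptions; the toy classifier and all names are placeholders, not STR’s implementation): the gradient of the “imposter” score with respect to the input pixels highlights which regions, such as the eyes, drive the model’s suspicion.

```python
import torch
import torch.nn as nn

# Placeholder stand-in for a trained real-vs-imposter face classifier.
classifier = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
classifier.eval()

def saliency_map(face, target_class=1):
    """Return a per-pixel map of how strongly each pixel drives the
    "imposter" score; bright regions are the ones the model finds fishy."""
    face = face.clone().detach().requires_grad_(True)
    score = classifier(face)[0, target_class]
    score.backward()
    # Max over color channels gives one heat value per pixel.
    return face.grad.abs().max(dim=1)[0].squeeze(0)

face = torch.randn(1, 3, 112, 112)  # stand-in for an aligned face crop
heat = saliency_map(face)
print(heat.shape)  # (112, 112) heat map, same size as the face crop
```

In practice, a map like this is overlaid on the face so an analyst can see at a glance where the model’s suspicion concentrates.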

In an era of free-flowing information and misinformation, using more-explainable AI to help sort through truth is a powerful tool.