Teaching machines to see illusions may help computer vision get smarter

Do you remember the kind of optical illusions you probably first saw as a kid, the ones that use some combination of color, light, and patterns to create images that deceive or mislead our brains? It turns out that such illusions, where perception doesn’t match up with reality, may in fact be a feature of the brain rather than a bug. And teaching a machine to recognize the same kind of illusions may result in smarter image recognition.

This is what computer vision experts from Brown University have been busy working on. They are teaching computers to see context-dependent optical illusions, in the hope of creating smarter, more brain-like artificial vision algorithms that prove more robust in the real world.

“Computer vision has become ubiquitous, from self-driving cars parsing a stop sign to medical software looking for tumors in an ultrasound,” David Mely, one of the cognitive science researchers who worked on the project, now at the artificial intelligence company Vicarious, told Digital Trends. “However, those systems have weaknesses stemming from the fact that they are modeled after an outdated blueprint of how our brains work. Integrating newly understood mechanisms from neuroscience, like those featured in our work, may help make those computer vision systems safer. Much of the brain remains poorly understood, and further research at the confluence of brains and machines may help unlock further fundamental advances in computer vision.”

In their work, the team used a computational model to explore and replicate the ways that neurons interact with one another when viewing an illusion. They created a model of the feedback connections between neurons, mirroring those in the human brain, that responds differently depending on the context. The hope is that this will help with tasks like color differentiation; for example, it could help a robot designed to pick red berries to identify those berries even when the scene is bathed in red light, as might happen at sunset.
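To make the berry-picking example concrete, here is a minimal Python sketch of the underlying idea, not the team’s actual model: the color cast is estimated from the surrounding pixels (a simple gray-world assumption) and divided out of the center pixel, a rough stand-in for the kind of context-dependent, divisive feedback described above. The function name and sample values are purely illustrative.

```python
# A minimal sketch (not the authors' model) of how surround context can
# "discount" a global color cast, in the spirit of the berry-picking example.
import numpy as np

def discount_illuminant(center_rgb, surround_rgb):
    """Normalize a center pixel by the average color of its surround."""
    illuminant = surround_rgb.reshape(-1, 3).mean(axis=0)  # estimate the color cast
    illuminant = np.clip(illuminant, 1e-6, None)           # avoid division by zero
    corrected = center_rgb / illuminant                    # divisive normalization by context
    return corrected / corrected.max()                     # rescale to [0, 1]

# A "red berry" pixel and its surround, both bathed in reddish light.
berry = np.array([0.9, 0.3, 0.25])
surround = np.full((5, 5, 3), [0.8, 0.45, 0.4])  # reddish background

print(discount_illuminant(berry, surround))  # the berry still reads as redder than its surround
```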

“A lot of intricate brain circuitry exists to support such forms of contextual integration, and our study proposes a theory of how this circuitry works across receptive field types, and how its presence is revealed in phenomena called optical illusions,” Mely continued. “Studies like ours, that use computer models to explain how the brain sees, are necessary to enhance existing computer vision systems: many of them, like most deep neural networks, still lack the most basic forms of contextual integration.”

While the project is still in its infancy, the team has already translated the neural circuit into a modern machine learning module. When tested on a task involving contour detection and contour tracing, the circuit vastly outperformed modern computer vision approaches.
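For readers curious what such a module might look like in practice, below is a minimal PyTorch sketch of the general idea of a recurrent contextual layer, in which each unit’s response is repeatedly modulated by its spatial surround. The class name, kernel sizes, and update rule here are illustrative assumptions, not the Brown team’s published circuit.

```python
# A minimal sketch of a recurrent "contextual" module: a feature map is
# repeatedly modulated by convolutions over its own neighborhood, so each
# unit's response depends on its surround. Illustrative only.
import torch
import torch.nn as nn

class ContextualModule(nn.Module):
    def __init__(self, channels, steps=4):
        super().__init__()
        self.steps = steps
        # Horizontal connections: units gather evidence from their spatial surround.
        self.suppress = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.facilitate = nn.Conv2d(channels, channels, kernel_size=5, padding=2)

    def forward(self, x):
        h = torch.zeros_like(x)
        for _ in range(self.steps):
            # Suppressive surround: dampens responses where the context disagrees.
            inhibited = torch.relu(x - torch.relu(self.suppress(h)))
            # Facilitatory surround: boosts responses along consistent structure
            # (e.g., collinear edges), which is what helps with contour tracing.
            h = torch.relu(inhibited + torch.relu(self.facilitate(inhibited)))
        return h

# Drop-in usage on a batch of feature maps from an ordinary convolutional layer.
features = torch.randn(1, 16, 32, 32)
print(ContextualModule(channels=16)(features).shape)  # torch.Size([1, 16, 32, 32])
```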
