The filmgoers didn’t flinch at the scene of the dapper man planting a time bomb in the trunk of the convertible, or tense up as the unsuspecting driver and his beautiful blonde companion drove slowly through the town teeming with pedestrians, or jump out of their seats when the bomb exploded in fiery carnage. And they sure as heck weren’t wowed by the technical artistry of this famous opening shot of Orson Welles’ 1958 noir masterpiece, “Touch of Evil,” a single three-minute take that ratchets up the suspense to 11 on a scale of 1 to 10.

In fairness, lab mice aren’t cineastes. But where the rodents fell short as film critics, they more than delivered as portals into the brain. As the mice watched the film clip, scientists eavesdropped on each one’s visual cortex. By the end of the study, the textbook understanding of how the brain “sees” had been as badly damaged as the “Touch of Evil” convertible, scientists reported on Monday.

The new insights into the workings of the visual cortex, they said, could improve technologies as diverse as self-driving cars and brain prostheses to let the blind see.

“Neuroscience lets us make better object recognition systems” for, say, self-driving cars and artificial intelligence-based diagnostics, said Joel Zylberberg of York University, an expert on machine learning and neuroscience who was not involved in the new research. “But computer vision has been hampered by an insufficient understanding of visual processing in the brain.” The “unprecedented” findings in the new study, he said, promise to change that.

The textbook understanding of how the brain sees, starting with streams of photons landing on the retina, reflects research from the 1960s that won two of its pioneers a Nobel Prize in Physiology or Medicine in 1981. It basically holds that neurons in the primary visual cortex, where the signals go first, respond to edges: vertical edges, horizontal edges, and every edge orientation in between, moving and static. We see a laptop screen because of how its edges abut what’s behind it, and sidewalks because of where their edges touch the curb’s. Higher-order brain systems take these rudimentary perceptions and process them into the perception of a scene or object.
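
The textbook model is easy to caricature in code. Below is a minimal, hypothetical Python sketch (not from the study; all parameter values are illustrative) in which each model neuron is an oriented Gabor filter, the standard mathematical stand-in for an edge-selective cell: it fires strongly for an edge at its preferred orientation and barely at all for one rotated 90 degrees.

```python
import numpy as np

def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=3.0):
    """Odd-phase Gabor filter: the standard stand-in for a textbook
    edge-selective neuron in primary visual cortex."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the filter prefers edges at angle theta.
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian window
    carrier = np.sin(2 * np.pi * x_rot / wavelength)    # odd-phase grating
    return envelope * carrier

def model_response(patch, theta):
    """Simulated 'firing rate': dot product of an image patch and the filter."""
    return float(np.sum(patch * gabor_kernel(theta=theta)))

# A light-dark vertical edge drives the vertically tuned model neuron
# strongly, while the horizontally tuned one stays nearly silent.
patch = np.zeros((21, 21))
patch[:, 11:] = 1.0  # right half bright: a vertical edge
print(model_response(patch, theta=0.0))        # vertical tuning: large response
print(model_response(patch, theta=np.pi / 2))  # horizontal tuning: ~0
```

In the textbook picture, banks of such filters at every orientation and position are the first stage of seeing.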

It’s been known for more than a decade that this textbook model is partly wrong and largely incomplete, said neurobiologist Saskia de Vries of the Allen Institute for Brain Science, who led the mouse-vision study. To see if she could do better, she and her colleagues showed mice simple gratings (lots of edges), moving gratings, 118 photos, and the “Touch of Evil” opening, recording the resulting electrical activity from hundreds of neurons in six regions of each mouse’s visual cortex.

The visual features the neurons responded to showed that the textbook model “doesn’t hold up very well,” de Vries said. Only about 10% of the mice’s visual neurons responded to specific kinds of edges (straight or tilted, horizontal or vertical, sharp or blurry, fat or slender), as the textbook version of the visual cortex predicts, she and her colleagues reported in Nature Neuroscience. Instead, some responded only to movements of facial muscles, and others to several features rather than to a single kind of edge. Still others, the researchers speculate, might even respond to sounds.

“Touch of Evil” elicited responses from the greatest number of neurons. That makes sense. In Welles’ opening scene, the camera zooms in and out and sweeps across the scene, and different people and objects move into and out of the frame, a smorgasbord of imagery that should cover just about everything a visual cortex might need to process. But textbooks say that fewer neurons respond to complex visual scenes than to the simpler, edge-based elements the scenes are made of. The Allen Institute team found the opposite: Static gratings interested the fewest neurons; Welles had a much bigger fan base.

All told, 77% of neurons throughout the mice’s visual cortex responded to at least one thing the scientists showed them. But in some neighborhoods, only 33% did. The rest seemed to be on strike.

That’s not supposed to happen either. “That’s a huge finding,” said neuroscientist Bruno Olshausen of the University of California, Berkeley, who has argued that neuroscience understands no more than 20% of how the visual cortex actually operates. Every visual neuron supposedly responds to some kind of edge, “so what are these silent neurons doing?” Olshausen asked. “Assuming the finding isn’t an artifact, that’s a huge population of [visual] neurons that aren’t doing vision. This should be a wake-up call to everyone in the field. Something is dramatically wrong with the standard model.”

The surprise finding, he added, makes this “a tour de force and a first in neuroscience, to systematically characterize such a large population of neurons across different layers, areas and using different stimuli. The data will be invaluable to theorists and modelers for years to come.”

It could be that the scientists didn’t show the mice images with the particular feature that these unresponsive visual neurons notice. But that seems unlikely, given the diversity of images: butterflies, leopards, fences, mountains, trees, leaves, rocks, sidewalks, windows, staircases, pencils, and more. Instead, de Vries said, “I think it’s a reflection that other things are going on in the visual cortex,” like “visual” neurons processing sound or something else non-visual.

Since machine-vision developers take their cues from how brains see, the Allen Institute results, if confirmed, carry an important message, York’s Zylberberg said. “It shows that there isn’t a mess of [undifferentiated neurons] … doing all the same thing, which is what we put into our systems now. Instead, there’s at least 10 different types of visual neurons” that respond to specific aspects of the visual world—a complexity that computerized object-recognition systems might profitably emulate.
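
To make that concrete: here is a purely illustrative Python sketch (the unit types and names are hypothetical, not the study’s taxonomy) of a visual front end built from several kinds of units, such as an edge unit, a motion unit, and a mixed-feature unit, rather than one undifferentiated bank of identical filters.

```python
import numpy as np

def edge_unit(frame):
    """Responds to vertical edges via left-right luminance differences."""
    return float(np.abs(np.diff(frame, axis=1)).mean())

def motion_unit(prev, curr):
    """Responds to change between consecutive frames, whatever the content."""
    return float(np.abs(curr - prev).mean())

def mixed_feature_unit(frame):
    """Responds to a conjunction of features (here: brightness and texture)."""
    return float(frame.mean() * frame.std())

def front_end(prev, curr):
    """Feature vector from a heterogeneous population of unit types."""
    return np.array([
        edge_unit(curr),
        motion_unit(prev, curr),
        mixed_feature_unit(curr),
    ])

# Two random arrays stand in for consecutive movie frames.
rng = np.random.default_rng(0)
prev, curr = rng.random((2, 64, 64))
print(front_end(prev, curr))
```

A conventional object-recognition network instead begins with a single homogeneous layer of learned local filters; the study hints that mixing qualitatively different unit types, as the cortex apparently does, could be a richer starting point.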

As for the scientists’ choice of flicks, “we picked ‘Touch of Evil’ because we were looking for a movie clip that had a lot of diverse motion without camera cuts,” de Vries said.

Republished with permission from STAT. This article originally appeared on December 16, 2019.