It’s a miracle that people aren’t constantly getting into car accidents.
Whizzing along at 65 miles per hour, a driver's brain rapidly decodes millions of photons' worth of information from the eyes, then must use that information to instantly figure out where the car is and where it needs to go. Is that a pedestrian approaching the sidewalk or a mailbox? Do I need to take this offramp or the next one? What color is the traffic light up ahead?
Most motorists, miraculously, get to work or school without a scratch.
After nearly a decade's worth of research, Duke scientists have figured out how the brain juggles all of this so effortlessly and tirelessly, and in a surprisingly simple way: by making quick, low-level judgments from the visual evidence itself, rather than building an elaborate model of the world, to help form a clear view of the road ahead. The new findings expand the understanding of how the brain sees the world, and might one day help clinicians better understand what goes awry in people with psychiatric conditions marked by perceptual problems, like schizophrenia.
Most neuroscientists think our brain cells figure out what we're looking at by quickly comparing what's in front of us to past experience and prior knowledge. Like a biological detective, the brain might determine you are looking at a house by drawing on past experiences of neighborhoods you have been in and houses you have lived in. Enthusiasts of this Bayesian theory have long reasoned that these quick, probability-based analyses are what help people see a stable world despite sensory and motor noise from eye movements and constant environmental uncertainties, like glare from the sun or a moving crowd in the background.
A recent paper in the online journal eNeuro, however, suggests neuroscientists have overlooked a simpler explanation: that brain cells are also rapidly decoding a constant stream of information from the eyes using simple pattern recognition, like determining you’re looking at a house from the visual evidence of windows, a tall rectangular opening, and a manicured lawn.
“That discriminative model has some advantages because it’s really quick, logical, and flexible,” said Marc Sommer, Ph.D., a professor of biomedical engineering at Duke and senior author of the new study. “You can learn the boundaries between decisions, and you can apply all sorts of statistical pattern-matching at a very low level. You don’t have to create a model of the world, which is a big task for a brain.”
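To make the contrast concrete, here is a minimal, purely illustrative sketch in Python. It is not the model or task from the study, and every name in it (gaussian_pdf, bayesian_observer, discriminative_observer, the "did the target move?" framing, and all of the numbers) is invented for this example. The Bayesian route weighs the noisy evidence against a prior over world states before deciding; the discriminative route simply applies a learned decision boundary to the evidence, with no world model required.

import math
import random

# Toy illustration only (not the study's task or analysis): decide whether a
# target "moved" based on a single noisy measurement of its displacement.

def gaussian_pdf(x, mu, sigma):
    # Density of a normal distribution with mean mu and standard deviation sigma.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def bayesian_observer(measurement, sensory_sigma, prior_moved=0.2, typical_jump=3.0):
    # Generative/Bayesian route: combine a prior over world states with the
    # likelihood of the measurement under each state, then pick the state
    # with the higher posterior probability.
    posterior_moved = gaussian_pdf(measurement, typical_jump, sensory_sigma) * prior_moved
    posterior_still = gaussian_pdf(measurement, 0.0, sensory_sigma) * (1.0 - prior_moved)
    return posterior_moved > posterior_still

def discriminative_observer(measurement, threshold=1.5):
    # Discriminative route: no model of the world, just a fixed decision
    # boundary learned from past examples and applied directly to the evidence.
    return measurement > threshold

random.seed(0)
for noise in (0.5, 1.0, 2.0):
    # Simulate trials where the target jumps by 3 units on 20% of trials.
    moved = [random.random() < 0.2 for _ in range(5000)]
    readings = [(3.0 if m else 0.0) + random.gauss(0.0, noise) for m in moved]
    bayes_acc = sum(bayesian_observer(r, noise) == m for r, m in zip(readings, moved)) / len(moved)
    disc_acc = sum(discriminative_observer(r) == m for r, m in zip(readings, moved)) / len(moved)
    print(f"noise={noise:.1f}  bayesian={bayes_acc:.2f}  discriminative={disc_acc:.2f}")

The detail worth noticing is that the Bayesian observer's decision criterion shifts as the sensory noise grows, because the prior gets relatively more weight when the evidence is unreliable, while the discriminative observer's boundary stays put. That difference is, broadly speaking, the kind of behavioral signature the Duke team's eye tests were designed to tease apart.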
Sommer initially hoped to confirm the general consensus in neuroscience—that the brain builds on a working model of the world instead of recognizing patterns from the ground up. But after putting the Bayesian theory to the test with Duke neurobiology alumna Divya Subramanian, Ph.D., now a postdoctoral researcher at the National Institutes of Health, he's hoping to extend their newfound results to other processes in the brain.
To ferret out which theory would hold up, Sommer and Subramanian recruited 45 adults for an eye test. Participants looked at a computer screen and were quizzed about where a shape on the screen had moved to, or whether it had moved at all. Throughout the test, Subramanian subtly made the movements trickier and less obvious, tweaking everything from the shape's contrast to the shape itself, to tease out how the brain compensates as uncertainty increases.
After scoring the eye exams, Sommer and Subramanian were surprised to find that the brain didn’t solely rely on a Bayesian approach.
People scored worse when the visual noise was dialed up, but only when they were asked where the target had moved to. Scores were mostly unaffected by noisier scenes when people were asked whether a shape had moved at all, suggesting that, to the team's surprise, people don't always lean on prior experience when they are more uncertain about what they are seeing, the way our biological detective would.
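One way to see why the noise manipulation is diagnostic is to write down the textbook Bayesian-observer prediction for a simple localization judgment. This generic Gaussian form is standard in the field and is offered here only as an illustration, not as the specific model fit in the eNeuro paper: the perceived position \(\hat{x}\) is a reliability-weighted average of the noisy sensory reading \(x_{\text{sense}}\) and the prior expectation \(\mu_{\text{prior}}\),

\[
\hat{x} \;=\; \frac{\sigma_{\text{sense}}^{-2}\, x_{\text{sense}} \;+\; \sigma_{\text{prior}}^{-2}\, \mu_{\text{prior}}}{\sigma_{\text{sense}}^{-2} \;+\; \sigma_{\text{prior}}^{-2}}.
\]

As the sensory noise \(\sigma_{\text{sense}}\) grows, the weight on the prior grows with it, so a Bayesian observer's reports should drift systematically toward expectation under noisier viewing. A discriminative read-out, which applies a learned decision rule directly to the evidence, carries no such built-in drift. That, in broad strokes, is why judgments that shrugged off the added noise pointed the team away from a purely Bayesian account of this kind of detection.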
The team spent the next several years poring over the results and replicating their findings "three times to believe it," Subramanian said, but every analysis led to the same conclusion: for some forms of perception, brain cells stick to low-level patterns to draw conclusions about the world around them.
“You can collect data forever and ever. And at some point, you just realize you have enough,” Sommer said.
Sommer now plans to challenge the dogma in other sensory systems, like the processing of spoken language, to see whether beloved theories hold up to the scrutiny of testing.
The hope is that by understanding how the brain solves other perceptual problems, Sommer and others can better understand psychiatric and motor disorders, like Parkinson’s disease, schizophrenia, or obsessive-compulsive disorder, and develop more effective treatments as a result.
“There are some sub-circuits of the brain that are probably pretty well-understood to be involved with these disorders. That’s a biological description,” Sommer said. “And there’s also neurotransmitter deficits, like lacking dopamine in Parkinson’s. That’s a chemical explanation. But there are very few big-picture explanations of why people have certain psychiatric or motor disorders.”
More information:
Divya Subramanian et al., Bayesian and Discriminative Models for Active Visual Perception across Saccades, eNeuro (2023). DOI: 10.1523/ENEURO.0403-22.2023