I Am, therefore I See: How The Brain Edits Reality on The Fly
- Le Nguyen
- Oct 9
- 9 min read
Through novel experiments and cutting-edge brain imaging, Dr. Sarah Shomstein reveals how our mind silently sorts the sensory chaos around us—and what it means for everyday attention.
Imagine walking through Lafayette Square in Washington, DC, on a chilly fall morning. You see the crisp blue sky hanging above, watch vivacious commuters in heels hurry past or glide by on scooters in flannel suits, and notice glossy plumes of goldenrod unfurling as they spiral upward. You hear impatient cars honking from afar, catch the chirp of hungry fledglings in their nest high up in the branches, and overhear an elderly man behind his newspaper on a bench grumbling that a hot dog now costs five dollars. Filling your nostrils is the aroma of freshly brewed French-roast coffee from a corner café, mingled with buttery croissants and, if you’re unlucky, the reek of a nearby garbage truck. Despite the chaos, your entire being is locked onto one thing—the Andrew Jackson statue you came for—so you fail to register the guy in an inflatable T. rex costume strolling through the crosswalk.
Find it hard to believe? See for yourself in the video below.
Did you miss the gorilla? If so, you’re not alone. In the 1999 psychological experiment this video comes from, conducted by psychologists Daniel Simons and Christopher Chabris, roughly half of the viewers failed to notice the gorilla despite its continuous presence for nearly nine seconds. The experiment, famously known as “The Invisible Gorilla Experiment,” tested whether a tightly focused goal, such as counting passes, could cause people to overlook something blatantly obvious right in front of them. What Simons and Chabris found ran counter to conventional expectations: rather than demonstrating an all-encompassing awareness, the results upended the notion that we perceive everything around us and exposed just how narrow attention can be. And while the clip is visual, the same selectivity operates in hearing and smell.
Think, for example, of the croissant smell from the café as you’re standing in Lafayette Square. With your attention on the statue, your mind automatically dials down irrelevant signals, even the lure of a buttery breakfast. Psychologists refer to this phenomenon as “attentional selection,” a process by which the mind prioritizes certain sensory inputs and suppresses others, thereby allocating limited processing resources where they matter most. In other words, what we focus on, and what we simultaneously ignore, reflects our task priorities and thus strongly shapes what we perceive and remember. This partly unconscious filtering has been a focus of research for decades. Some research has used behavioral and eye-tracking measures to show that people can be steered by cues they don’t consciously register.1,2 Recent studies combining electroencephalograms (EEGs) and pupillometry (the measurement of pupil size) have begun to track such unconscious filtering more directly.3,4
At the George Washington University, Professor of Cognitive Neuroscience Dr. Sarah Shomstein takes a complementary approach. As director of the Attention and Cognition Laboratory, she pairs behavioral tasks and standardized questionnaires with functional magnetic resonance imaging (fMRI), a method that tracks blood-oxygen-level-dependent signals as an indirect index of neural activity. This combination enables her to map out the brain’s labyrinthine pathways that underlie behavior, providing a deeper understanding of not only the psychological tendencies but also the neurological mechanisms of attentional selection.

From Medicine to the Frontier of Attention Research
Dr. Sarah Shomstein did not begin her journey intending to become a neuroscientist. As a freshman at Carnegie Mellon University in the mid-1990s, she found herself on the pre-med track. In addition to taking courses in biology, chemistry, physics, and math, she worked part-time in a psychology lab, where she studied the frontal cortex of patients with schizophrenia in a psychiatric hospital. The lab was among the first in the U.S. to use fMRI to study cognition, and Shomstein’s role as a research assistant gave her a front-row seat to the brain’s wonders and vulnerabilities. Fascinated but realizing she didn’t want to build a career solely around treating clinical populations, she pivoted toward understanding how a “normal” nervous system operates.
That pivot quickly became a calling. By her sophomore year, Shomstein had committed to studying cognitive neuroscience and joined a lab investigating attention and perception in stroke patients with localized brain damage, some of whom presented with visual agnosia—the irreversible loss of object recognition despite intact vision. For a curious, enthusiastic young Shomstein, “that was the most interesting thing ever.” Following a summer of intense research at NYU’s Center for Neuroscience through the Howard Hughes Fellowship, she returned to her final year of college convinced that graduate school would be her next path.
By the time she completed her PhD in Cognitive and Brain Sciences at Johns Hopkins University in 2003, Dr. Shomstein had been at the forefront of attention research for nearly a decade, publishing a plethora of high-impact papers that spanned psychology, neuroimaging, and computation. That vantage point gave her a panoramic view of the evolution of cognitive neuroscience. “The pace of discovery has accelerated almost exponentially. I think we now have a better handle on the basic cognitive function. And now is a really exciting time where we can ask more complex questions about the brain,” Dr. Shomstein said, noting that recent breakthroughs have pushed the boundary of inquiry beyond the basics toward higher-order cognitive functions. Among the most intensively studied domains is decision-making, as it is central to identity, agency, and survival. With the advent of new technologies and tools that enable scientists to probe the brain with greater precision and scale, questions about consciousness and the mechanisms that support decision-making are coming into sharper focus.
One key mechanism driving decision-making is attention. “For the past 30 years, we’ve tried to map the regions involved in attention,” Dr. Shomstein said. As we focus on places, objects, or patterns in our environment, different circuits in the brain compute those signals before fusing them into what Dr. Shomstein calls an “attentional priority map,” a running shortlist of what matters now. Without that triage, she added, “it’s too much information. Our brain can’t process it all.” In other words, by narrowing the flood of sensory and internal information, attention provides us with a manageable, prioritized set of options. This filtering system ultimately forms the basis on which we make a choice. Without that selective gatekeeping, decision-making would be slower, noisier, and far less accurate.
Mapping a Labyrinth
Dr. Shomstein’s early fMRI work helped pin down how the brain allocates attention across senses and space. In one foundational study, she and her team cued people to shift attention between vision and audition and found increased activity in the auditory cortex when attention moved to sound while the visual cortex quieted, and vice versa.5 This showed that the brain doesn’t run everything at full blast all the time; it turns some dials up and others down depending on what you’re trying to notice. Building on this discovery, Dr. Shomstein published another paper showing that the posterior parietal cortex supports two kinds of attention at once: “where” something is and “what” something is. It works with a wider control system to switch between those priorities as the task changes. Think of it like a smart scoreboard that constantly ranks what’s most important and sends your attention there when several things compete for your eyes and mind.
A decade later, Dr. Shomstein and her colleagues pulled together results from brain scans in people and physiology studies in animals and made a simple case: the brain’s “priority map” in the parietal cortex doesn’t just track where things are.6 It also tracks what matters (like a face, a color, or a moving object). Even those “what” priorities end up being handled in a way the brain can use for aiming attention in space. This helps explain why goals like “find the red jacket” still show up as location-based patterns in the brain during real tasks.

Bringing Context Back Into Focus
In her recent line of research, Dr. Shomstein and her team set out to challenge one of attention science’s bedrock assumptions: that “task-irrelevant” information is simply ignored by the brain. In their Nature paper titled “Task-Irrelevant Semantic Relationship Between Objects and Scene Influences Attentional Allocation,”7 they showed that objects semantically consistent with a scene attract attention, even when they’re irrelevant to the task, and that this “fits-the-scene” advantage is detectable in early visual brain activity. Imagine you’re waiting at a crosswalk, focused on the “walk” signal. You think you’re tuning out everything else—the parked cars, trees, shop signs, chatter of passers-by—but your visual system is still quietly organizing all of that. In their study, participants viewed scenes like offices, bathrooms, and kitchens while performing a simple visual task. When the target appeared on an object that ‘fit’ the scene (like a computer mouse in an office or toothpaste in a bathroom), responses were faster than when it appeared on an object that didn’t belong. The finding suggests there may be no such thing as truly “irrelevant” information; even background details we’re not acting on still shape the early stages of perception and, ultimately, what we become aware of.
For Dr. Shomstein, the driver isn’t about words so much as lived experience. “It’s about you knowing that these things are related,” she said. “It has nothing to do with language itself. It has to do with your knowledge of that.” In other words, it isn’t about words, but learned pairings: we know a mouse goes with a computer, a plant with its pot, a brush with lipstick. Attention rides on that conceptual knowledge, so even without language, anyone can have the same experience as long as they have encountered two objects that frequently co-occur.
Dr. Shomstein and her team pushed this idea further in a second study, “Predicting Attentional Allocation in Real-World Environments: The Need to Investigate Semantic Guidance.”8 Across four experiments, her lab demonstrated that the semantic associations we carry across sight, sound, and even smell exert a compounding bias on attention in real-world settings. When all of those signals line up, we’re quick to recognize and pay attention to a target object. “Our environments are multi-sensory,” Dr. Shomstein said. “It’s not just a picture. When you walk into a café, it’s the sound, the smells, and other environmental signals that impinge on you that help you find the cookie or understand where your friend might be sitting.” Dr. Shomstein argued that we can’t understand a cognitive function by studying vision, or any other sense, in isolation. Instead, we need to move beyond the tidy, single-sensory experiments that dominate traditional research, integrating multiple senses to better capture naturalistic behavior and to develop multisensory accounts of attention.
Looking Ahead: The Human-AI Frontier
After 25 years in the field, Dr. Shomstein sees the next phase of attention research as delivering a fuller account of how attention works and turning those insights into better training, smarter interfaces, and practical habits for focusing in an age of constant digital distractions. As machine learning has surged in recent years, AI’s presence is unmistakable, and so are its effects on cognitive performance and attention spans. In medicine, security, and other high-stakes fields, AI is being used for monotonous, time-consuming tasks, from reading medical scans to reviewing surveillance footage, flagging abnormalities for human review.

One question that arises is how this would shape, sharpen, or skew our focus. “The person's attention is affected by the help from the intelligent agent,” Dr. Shomstein said. In some cases, she noted, trusting the AI can yield amazing results, such as catching a subtle cancerous lesion or a flicker in a dark corner of a video. In other cases, over-trusting the AI can tunnel attention to its cues, causing unflagged problems to be missed. In her lab, Dr. Shomstein and her team are adapting measures from social psychology, along with pupillometry, to study whether a person’s physiological state reveals when they’re over- or under-relying on machine suggestions.
“What we're trying to understand right now in the lab is whether we can measure the size of the pupil as a measure of trust.”
A Collective Endeavor
Dr. Shomstein emphasizes that science is a team sport. “We do it together,” she said. In her lab, graduate students and postdoctoral fellows are co-investigators, not just assistants, training to ask their own complex questions and developing novel approaches to find the answers. This collaborative ethos mirrors the very phenomenon she studies: separate systems working together, integrating multiple signals to produce something greater than the sum of their parts. By tracing how the brain edits reality on the fly, Dr. Shomstein is pushing the science of attention toward the multi-sensory environments where we actually live. In doing so, she is helping to redraw the map of how we see and what we miss in an age when our perceptual worlds are more crowded, more mediated, and more augmented than ever before.