From field to lab: How people with vision loss judge if an approaching car will reach them
Patricia DeLucia has dedicated decades to studying something most people never notice—how we judge when a potential collision is imminent, a skill essential for staying safe. Yet the questions that drive her work began long before she joined Rice University as a professor of psychological sciences.
“I grew up playing sports, and on the field, collision judgment is everything—knowing if a ball is heading your way, whether a teammate is about to collide with you, or if there’s enough time to move,” she explains. “I didn’t realize it at the time, but those experiences shaped the entire direction of my research.”
That enduring curiosity eventually led DeLucia to explore collision judgments among people with visual impairment, focusing on age-related macular degeneration (AMD).
Published in PLOS ONE, the study used a virtual reality setup to assess how adults with AMD and those with normal vision predicted when an approaching vehicle would reach their position. The VR system, adapted from a setup at Johannes Gutenberg University Mainz, combined visual simulations with realistic car sounds. Participants experienced an approaching vehicle through sight alone, sound alone, or both until it disappeared, then pressed a button at the moment they judged the vehicle would have reached them. The research team spanned several sites in the United States and Europe and received funding from the National Eye Institute at the National Institutes of Health.
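This kind of task is known in the perception literature as a prediction-motion, or time-to-contact (TTC), judgment. As a rough illustration of how such a judgment can be scored, here is a minimal sketch with hypothetical numbers and function names; it is not the paper's actual analysis code:

```python
def ttc_at_occlusion(distance_m: float, speed_mps: float) -> float:
    """Actual time-to-contact when the vehicle disappears:
    remaining distance divided by (assumed constant) approach speed."""
    return distance_m / speed_mps

def judgment_error(occlusion_time_s: float, button_press_time_s: float,
                   distance_m: float, speed_mps: float) -> float:
    """Signed error of a prediction-motion judgment.

    The participant watches/hears the approach until occlusion, then
    presses a button when they think the now-hidden vehicle would have
    reached them. Positive error means the press came too late
    (overestimate); negative means too early (underestimate).
    """
    actual_ttc = ttc_at_occlusion(distance_m, speed_mps)
    judged_ttc = button_press_time_s - occlusion_time_s
    return judged_ttc - actual_ttc

# Example: a car occluded 30 m away at 10 m/s has an actual TTC of 3.0 s.
# A button press 2.5 s after occlusion underestimates arrival by 0.5 s.
print(judgment_error(occlusion_time_s=5.0, button_press_time_s=7.5,
                     distance_m=30.0, speed_mps=10.0))  # -0.5
```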
The researchers aimed to determine whether individuals with impaired vision lean more on auditory cues and whether combining sight and sound offers an advantage over vision alone. Few studies have examined collision judgments in visually impaired populations, even though tasks like crossing a street or navigating busy environments clearly depend on this ability.
The team anticipated that even with reduced central vision, people with AMD would still rely at least partially on their remaining visual information rather than depending solely on sound.
To the researchers’ surprise, participants with AMD in both eyes performed similarly to those with normal vision when estimating the vehicle’s arrival time. They achieved comparable accuracy, though they placed greater weight on less reliable cues.
Even when central vision was compromised, participants continued to use visual information and, when available, integrated both modalities.
“Individuals with impaired vision didn’t rely on auditory information alone; they combined visual and auditory cues,” DeLucia notes.
When only sight or only sound was presented, both groups displayed perceptual biases identified in prior work by DeLucia and Oberfeld: louder vehicles were perceived to arrive sooner than quieter ones, and larger vehicles seemed to reach them more quickly than smaller ones. These heuristics appeared somewhat more often in the AMD group, likely due to limited access to fine details, though the effect sizes remained small.
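The size bias is often attributed to observers relying on raw optical size rather than the size-invariant ratio of an object's optical angle to its rate of expansion (the classic tau variable). A minimal sketch, using hypothetical vehicle dimensions rather than values from the study, shows why a larger vehicle at the same actual TTC can look closer:

```python
import numpy as np

def visual_angle(width_m: float, distance_m: float) -> float:
    """Optical angle (radians) subtended by an object of a given
    width viewed head-on at a given distance."""
    return 2 * np.arctan(width_m / (2 * distance_m))

# Two vehicles with identical actual TTC (30 m away at 10 m/s -> 3 s),
# but different physical widths.
small_car = visual_angle(width_m=1.7, distance_m=30.0)
large_truck = visual_angle(width_m=2.6, distance_m=30.0)

print(f"car:   {np.degrees(small_car):.2f} deg")    # ~3.25 deg
print(f"truck: {np.degrees(large_truck):.2f} deg")  # ~4.96 deg
# An observer who leans on raw optical size treats the truck as
# closer, and therefore as arriving sooner, even though both
# vehicles would reach the observer at exactly the same moment.
```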
Using an advanced audiovisual simulation and tailored data analysis, the researchers obtained a more detailed picture of how pedestrians use auditory and visual information to judge the timing of an approaching vehicle than previous studies could provide.
The researchers also predicted that combining sight and sound would improve accuracy, but the anticipated multimodal advantage did not materialize for either group: judging with both vision and hearing was no more accurate than judging with vision alone.
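One standard way to formalize why such an advantage was expected, though not necessarily the model tested in this study, comes from optimal cue-integration accounts: if independent visual and auditory estimates are combined by reliability weighting, the combined estimate should be at least as precise as the better single cue.

```python
def combined_variance(var_visual: float, var_auditory: float) -> float:
    """Variance of a reliability-weighted (maximum-likelihood)
    combination of two independent cues. The result is always
    less than or equal to the smaller of the two input variances."""
    return (var_visual * var_auditory) / (var_visual + var_auditory)

# Example (hypothetical numbers): visual TTC estimates with variance
# 0.4 s^2 and auditory estimates with 1.0 s^2 should combine to
# ~0.29 s^2, better than either cue alone.
print(combined_variance(0.4, 1.0))  # 0.2857...
```

Under this account, the absence of a measured benefit suggests participants did not weight the two cues optimally, or that vision simply dominated the judgment.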
DeLucia stresses that clinical measures such as visual acuity do not always predict real-world performance.
“There isn’t a one-to-one relationship between how severe eye disease is and daily functional vision,” she explains. “Some individuals may have substantial retinal damage yet retain fairly good visual acuity, while still facing challenges in everyday tasks that rely on vision.”
This disconnect may help explain why some AMD participants performed nearly as well as those without impairment.
The researchers caution that these results do not imply that people with vision loss can safely navigate traffic as if they were sighted. The study used VR scenes with a single vehicle on a straightforward road, a simplification of real traffic. Further work is needed to determine whether the findings hold in more complex situations, such as multiple cars that accelerate or decelerate, or quieter electric vehicles.
Nevertheless, the researchers hope the findings will guide future mobility, rehabilitation, and safety efforts, ultimately supporting greater independence for people with visual impairment.
This project was supported by the National Eye Institute of the NIH under grant number R01EY030961. The authors’ views are their own and do not necessarily reflect those of the NIH.