Scientific

Dr. Shrikant Bharadwaj of the L V Prasad Eye Institute, Hyderabad, India

Temporal instabilities in the human eye’s auto-focus mechanism: characteristics, source and impact on vision

Our eyes are never at rest. Between microsaccadic eye movements and the microfluctuations of the eye's autofocus mechanism (ocular accommodation), our visual system constantly encounters time-varying information, even during a supposedly "steady-state" fixation epoch. This talk will focus on the temporal instability of the eye's accommodation, as observed under physiological conditions and in a condition of binocular vision dysfunction. The talk will be divided into two parts: the first will describe the characteristics of these instabilities and their putative source in the neural control of accommodation; the second will describe their impact on vision and a modeling exercise undertaken to decode putative decision strategies for optimizing vision during such epochs. https://www.lvpei.org/about-us/our-team/research/shrikant-bharadwaj
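
As background for the first part, here is a minimal Python sketch of the kind of temporal instability involved: a steady-state accommodative response perturbed by low- and high-frequency fluctuation components. All amplitudes, frequencies, and the sampling rate below are illustrative placeholders, not values from the talk.

# Illustrative simulation of accommodative microfluctuations (placeholder values,
# not data from the talk): a steady-state response perturbed by a low-frequency
# drift component and a higher-frequency (~1.5 Hz) component.
import numpy as np

fs = 50.0                      # sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)   # 30 s of "steady-state" fixation

steady_state = 2.0                            # mean accommodative response (diopters), assumed
lff = 0.15 * np.sin(2 * np.pi * 0.3 * t)      # low-frequency fluctuation
hff = 0.05 * np.sin(2 * np.pi * 1.5 * t)      # higher-frequency fluctuation
noise = np.random.normal(0, 0.02, t.size)     # measurement noise

response = steady_state + lff + hff + noise

# RMS of the fluctuation about the mean, a common summary of instability
rms = np.sqrt(np.mean((response - response.mean()) ** 2))
print(f"RMS microfluctuation: {rms:.3f} D")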

Brandon Biggs, Engineer; MDes, Inclusive Design, OCAD University; BA, Music, CSU East Bay; AA, Music, Foothill College

The Digital Drawing WYSIWYG Editor for Blind Users is Here

Digital drawing has historically been a significant challenge for blind individuals, with previous solutions requiring extensive imagination and technical skills. However, the Coughlan lab has been developing a groundbreaking “what you hear is what you get” drawing tool, revolutionizing this field. This innovative tool, powered by Audiom, enables blind users to create and edit complex shapes, maps, and art through auditory feedback. Users navigate a canvas using arrow keys, dropping points to form shapes and lines, which are then audibly represented, allowing for an intuitive and accessible drawing experience. Additionally, the tool supports the addition of sounds to objects, enhancing the creative process. This advancement not only facilitates artistic expression among the blind community but also offers educational applications, such as geometry assignments. With the ability to instantly share or export creations, this shape editor represents a significant leap forward in making digital drawing accessible to blind individuals.  Smith-Kettlewell Eye Research Institute is Listening to the Future of Navigation – Meet Engineer, Brandon Biggs (blindabilities.com)
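
For readers curious how such an interaction might look in code, below is a minimal sketch of the arrow-key, point-dropping interaction model described above. It is not the Audiom implementation; speak() simply prints where real auditory feedback (tones, spoken coordinates) would occur.

# Minimal sketch of the "drop points with arrow keys" interaction model described
# above. NOT the Audiom implementation; speak() stands in for auditory feedback.
def speak(message: str) -> None:
    print(f"[audio] {message}")

class AuditoryCanvas:
    MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

    def __init__(self, width: int = 20, height: int = 20):
        self.width, self.height = width, height
        self.cursor = [0, 0]
        self.points: list[tuple[int, int]] = []

    def move(self, direction: str) -> None:
        dx, dy = self.MOVES[direction]
        self.cursor[0] = max(0, min(self.width - 1, self.cursor[0] + dx))
        self.cursor[1] = max(0, min(self.height - 1, self.cursor[1] + dy))
        speak(f"cursor at {self.cursor[0]}, {self.cursor[1]}")

    def drop_point(self) -> None:
        self.points.append(tuple(self.cursor))
        speak(f"point {len(self.points)} dropped")

    def close_shape(self) -> None:
        if len(self.points) >= 3:
            speak(f"shape closed with {len(self.points)} vertices")

canvas = AuditoryCanvas()
for step in ["right", "right", "down"]:
    canvas.move(step)
canvas.drop_point()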

Andrea Narcisi, Research Scholar

Point-and-Tap Interaction for Acquiring Detailed Information about Tactile Graphics and 3D Models

I will present a system based on an iPhone app developed in the Coughlan Lab. This system is a novel “Point-and-Tap” interface that enables people who are blind or visually impaired (BVI) to easily acquire multiple levels of information about tactile graphics and 3D models. The interface uses an iPhone’s depth and color cameras to track the user’s hands while they interact with a model. To get basic information about a feature of interest on the model read aloud, the user points to the feature with their index finger. For additional information, the user lifts their index finger and taps the feature again. This process can be repeated multiple times to access additional levels of information. No audio labels are triggered unless the user makes a pointing gesture, which allows the user to explore the model freely with one or both hands. In addition, multiple taps can be issued in rapid succession to skip through to the desired information (an utterance in progress is halted whenever the fingertip is lifted off the feature), which is much faster than having to listen to all levels of information being played aloud in succession to reach the desired level. Experiments with BVI participants demonstrate that the approach is practical, easy to learn, and effective.
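
A schematic Python sketch (with hypothetical feature labels) of the level-advancing logic the abstract describes: each lift-and-tap on the same feature advances one information level, and lifting the fingertip halts the utterance in progress. This illustrates the interaction logic only; it is not the Coughlan Lab code.

# Schematic sketch of the Point-and-Tap level-advancing logic (placeholder names;
# not the Coughlan Lab implementation). Each lift-and-tap on the same feature
# advances one level; lifting the fingertip halts any utterance in progress.
class PointAndTap:
    def __init__(self, feature_info: dict[str, list[str]]):
        self.feature_info = feature_info      # feature name -> ordered info levels
        self.current_feature = None
        self.level = 0

    def on_touch(self, feature: str) -> str:
        """Fingertip lands on a feature: advance a level if it is the same feature."""
        if feature == self.current_feature:
            self.level = min(self.level + 1, len(self.feature_info[feature]) - 1)
        else:
            self.current_feature, self.level = feature, 0
        return self.feature_info[feature][self.level]    # utterance to speak

    def on_lift(self) -> None:
        """Fingertip lifts off: halt any utterance in progress (placeholder)."""
        print("[tts] stop current utterance")

# Hypothetical example feature with three information levels:
info = {"exit_sign": ["Exit sign", "Leads to the main stairwell", "Stairwell exits onto the street"]}
ui = PointAndTap(info)
print(ui.on_touch("exit_sign"))   # first level
ui.on_lift()
print(ui.on_touch("exit_sign"))   # second level, after lifting and tapping again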

Christian Sinnott, Post-Doctoral Research Fellow

Investigating natural head movement and its role in spatial orientation perception: Insight from 50 hours of data collection

Movement is ubiquitous in everyday life, and accounting for our physical position as we move through the world is a constant process. Over the lifespan, experience in estimating one’s position accumulates, and the nervous system’s representation of this prior experience is thought to inform current perception of spatial orientation. Broadly, spatial orientation perception is a multimodal sensory process. The nervous system rapidly monitors, interprets, and integrates sensory information from various sources in an efficient, statistically optimal manner to estimate an organism’s position in its environment. In humans, key information in this process comes from the visual and vestibular systems, which use head-based sense organs. While statistics of natural visual and vestibular stimuli have been characterized, unconstrained head movement and position, which may drive correlated dynamics across these head-based senses in the real world, have not. Furthermore, head-based sensory cues essential to human spatial orientation perception, like estimation of one’s head orientation relative to gravity and heading (the direction of linear velocity in a head-based coordinate system), have not been robustly measured in unconstrained, natural behaviors. Measurement of these head-based sensory cues in naturalistic settings, even if incomplete, will likely comprise a portion of the behaviors that make up one’s total prior experience, and the quantitative characteristics of these behaviors may explain previously observed patterns of bias in verticality and heading perception. In this brown bag, I will discuss methods of motion tracking in and out of the lab, my previous work to characterize natural statistics of head orientation and heading over 50 hours of human activity using these methods, work to use these natural statistics to constrain Bayesian models of sensory processing, and future research and applications that might leverage these data and approaches. Christian Sinnott | Smith-Kettlewell (ski.org)
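
As a toy illustration of the Bayesian framing mentioned above, the snippet below combines a Gaussian prior over head pitch (standing in for accumulated natural statistics) with a single noisy measurement using the standard precision-weighted rule. All numbers are placeholders rather than statistics from the 50-hour dataset.

# Toy illustration of Bayesian combination of a prior over head orientation with a
# noisy sensory measurement. Values are placeholders, not statistics from the talk.
import numpy as np

# Prior over head pitch (deg), standing in for accumulated natural statistics:
prior_mean, prior_sd = 0.0, 10.0          # e.g., head usually near upright (assumed)
# Noisy vestibular-like measurement of current pitch:
meas, meas_sd = 20.0, 15.0

# For Gaussian prior and likelihood, the posterior mean is the precision-weighted
# average — the classic "statistically optimal" estimate.
w_prior = 1 / prior_sd**2
w_meas = 1 / meas_sd**2
post_mean = (w_prior * prior_mean + w_meas * meas) / (w_prior + w_meas)
post_sd = np.sqrt(1 / (w_prior + w_meas))

print(f"posterior estimate: {post_mean:.1f} deg (sd {post_sd:.1f})")
# The estimate is pulled toward the prior mean — the same mechanism proposed to
# explain biases in verticality and heading perception.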

Emily Cooper, Assistant Professor at UC Berkeley School of Optometry

A real-world visual illusion

I will describe our research into a surprising visual illusion in which humans misperceive the shape of a highly familiar object in a highly familiar context: their own mobile phone while they hold it in their hand. Unlike many other illusions that rely on controlling visual information, this shape illusion is robust in fully natural conditions, and it requires only that one eye’s retinal image is slightly minified. Our investigations indicate that this illusion results from a failure of the visual system to discard distorted binocular cues for object slant, even if the distorted slant does not reach awareness. This failure challenges our current understanding of sensory cue combination and offers practical insight into the perceptual effects of prescription spectacles. https://vcresearch.berkeley.edu/faculty/emily-cooper
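
For intuition about why a slight minification of one eye’s image can produce a large perceived slant, here is a back-of-envelope calculation using a standard approximation from the binocular slant literature (predicted slant ≈ atan((D/I)·ln HSR)). This is offered as illustration only, is not necessarily the model used in the study, and the viewing distance and magnification values are assumptions.

# Back-of-envelope illustration (not the paper's model): a standard approximation
# predicts slant about a vertical axis from the horizontal size ratio (HSR)
# between the two eyes' images: slant ≈ atan((D / I) * ln(HSR)),
# where D is viewing distance and I is interocular separation.
import math

D = 0.40                           # viewing distance to the phone (m), assumed
I = 0.062                          # interocular separation (m), typical adult value
minification = 0.04                # 4% minification of one eye's image, assumed
hsr = 1.0 - minification           # horizontal size ratio between the eyes' images

predicted_slant_deg = math.degrees(math.atan((D / I) * math.log(hsr)))
print(f"predicted slant: {predicted_slant_deg:.1f} deg")
# Under this approximation, even ~4% minification predicts roughly 15 degrees of
# slant at a 40 cm viewing distance — easily large enough to notice on a phone.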

Adrien Chopin, Ph.D., Postdoctoral Researcher

Unraveling Binocular Rivalry Dynamics for Amblyopia Detection: From Pilot Study to R21 Proposal

Unilateral amblyopia, a leading cause of childhood vision loss, demands early intervention for optimal treatment outcomes. Unfortunately, accurate screening tools specifically designed for preschoolers remain elusive. This talk explores the potential of binocular rivalry dynamics as a novel biomarker for amblyopia detection, bridging the gap between a promising pilot study and a future R21 proposal.

Pilot Study: Our initial investigation revealed distinct binocular rivalry patterns in amblyopic adults compared to controls. Amblyopic individuals exhibited fewer reversals between rivalrous images presented to each eye, with a higher prevalence of incomplete reversals suggesting disrupted binocular processing. These findings highlight the potential of rivalry dynamics as a diagnostic tool.

R21 Proposal: Building upon these initial findings, our R21 proposal aims to:
· Validate and refine rivalry-based diagnostics: We will expand the study population to include more amblyopia subtypes and controls, exploring both natural viewing conditions and balanced interocular contrast presentations. Additionally, we will optimize the minimum data collection duration for accurate classification, aiming for a sub-5-minute test.
· Develop an objective eye-movement measure: We will introduce optokinetic nystagmus (OKN) as an objective indicator of rivalry states, potentially eliminating the need for subjective reports, a challenge in preschoolers.

By investigating signatures of abnormal rivalry dynamics through both subjective perception and objective OKN measures, we seek to establish a reliable, non-invasive method for amblyopia detection. This combined approach holds significant promise for developing a rapid, objective vision screening tool specifically designed for preschoolers. Early detection of amblyopia may pave the way for timely intervention, ultimately improving visual outcomes and enhancing children’s quality of life. https://yua.jul.mybluehost.me/directory/adrien-chopin
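
To make the proposed measures concrete, here is an illustrative computation of reversal rate and the fraction of incomplete reversals from a coded sequence of perceptual reports. The coding scheme and data are hypothetical and do not reflect the study’s analysis pipeline.

# Illustrative computation of rivalry summary measures (reversal rate and the
# fraction of incomplete reversals) from a coded report sequence. The coding and
# data are hypothetical, not the study's pipeline.
# 'L'/'R' = left/right eye's image dominant, 'M' = mixed/piecemeal percept.
reports = ["L", "L", "M", "R", "R", "M", "L", "M", "M", "L"]   # one report per second
duration_s = len(reports)

complete, incomplete = 0, 0
last_dominant = reports[0] if reports[0] in ("L", "R") else None

for a, b in zip(reports, reports[1:]):
    if a == b or b == "M":
        continue                       # no change, or entering a mixed episode
    if last_dominant is None or b != last_dominant:
        complete += 1                  # dominance switched to the other eye
    else:
        incomplete += 1                # returned to the same eye after a mixed episode
    last_dominant = b

total = complete + incomplete
print(f"complete: {complete}, incomplete: {incomplete}, "
      f"incomplete fraction: {incomplete / total:.2f}, rate: {complete / duration_s:.2f}/s")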

[Photo: Dr. Fletcher speaking in front of a screen]

The 19th Annual Meeting of the Low Vision Rehabilitation Study Group

Sponsored by The Smith-Kettlewell Eye Research Institute (SKERI) & the CPMC Dept. of Ophthalmology
Hosted by Don Fletcher, Ron Cole, Gus Colenbrander, Tiffany Chan, and Annemarie Rossi
The meeting will be held at SKERI – 2318 Fillmore St., San Francisco, CA 94115

Agenda: Doors open at 8 AM for breakfast and settling in. 3-hour discussion groups on both days from 9 AM to noon & 1:30 to 4:30 PM. Lunch is on your own.

Purpose: An informal gathering of clinicians/clinical researchers in low vision rehab.
· Discuss problem cases
· Share techniques
· Brainstorm ideas for new treatments or investigations
· Enjoy collegiality

Participants: Anyone actively involved in vision rehabilitation.

Registration: Contact Don Fletcher at floridafletch@msn.com to save a spot for Friday/Saturday.

Format: Informal
· No invited speakers
· Bring a case or technique to discuss
· No set agenda – we will divide the time between all comers
· If time allows, we can discuss and solve all the problems facing the field

Promise: We won’t always agree, but we’ll have a good time as a group that has a common interest/passion.

Kassandra Lee, Ph.D., Neuroscience Department at UNR

Improving recognition of cluttered objects using motion parallax in simulated prosthetic vision

The efficacy of visual prostheses in object recognition is limited. While various limitations exist, here we focus on reducing the impact of background clutter on object recognition. We have proposed the use of motion parallax via head-mounted camera lateral scanning and computationally stabilizing the object of interest (OI) to support neural background decluttering. We mimicked the proposed effect using simulations in a head-mounted display (HMD), and tested object recognition in normally sighted subjects. Images (24° field of view) were captured from multiple viewpoints and presented at a low resolution (20×20). All viewpoints were centered on the OI. Experimental conditions (2×3) included: clutter (with or without) × head scanning (single viewpoint, nine coherent viewpoints corresponding to subjects’ head positions, and nine randomly associated viewpoints). Subjects utilized lateral head movements to view OIs on the HMD and report what they thought the OI was. The median recognition rate without clutter was 40% for all head scanning conditions. Performance with clutter dropped to 10% in the static condition, but improved to 20% with the coherent and random head scanning (corrected p = 0.005 and p = 0.049, respectively). Background decluttering using motion parallax cues, but not the coherent multiple views of the OI, improved object recognition in low-resolution images. The improvement did not fully eliminate the impact of background. Motion parallax is an effective but incomplete decluttering solution for object recognition with visual prostheses. https://www.unr.edu/neuroscience/people/students/kassandra-lee
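
Below is a simplified sketch of the viewpoint-selection logic implied by the coherent and random head-scanning conditions: lateral head position is binned into one of nine precaptured viewpoints, either in spatial order (coherent) or via a shuffled mapping (random). The scan range and mapping details are assumptions, not the experiment’s code.

# Simplified sketch of viewpoint selection for the coherent vs. random
# head-scanning conditions. Values are illustrative; not the experiment's code.
import random

N_VIEWS = 9                    # nine precaptured viewpoints, all centered on the OI
SCAN_RANGE_M = 0.30            # lateral head-scan range mapped onto the viewpoints (assumed)

def viewpoint_index(head_x_m: float) -> int:
    """Bin lateral head position (m, relative to scan center) into a viewpoint index."""
    frac = (head_x_m + SCAN_RANGE_M / 2) / SCAN_RANGE_M
    return max(0, min(N_VIEWS - 1, int(frac * N_VIEWS)))

# Coherent condition: head position maps onto viewpoints in spatial order.
coherent = list(range(N_VIEWS))
# Random condition: the same nine viewpoints, but shuffled with respect to head position.
shuffled = coherent[:]
random.shuffle(shuffled)

for head_x in (-0.12, 0.0, 0.12):
    idx = viewpoint_index(head_x)
    print(f"head at {head_x:+.2f} m -> coherent view {coherent[idx]}, random view {shuffled[idx]}")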

Brian Szekely, Graduate Student

Vision across the gait cycle

Walking, a fundamental human activity, presents challenges to visual stability due to motion blur induced by eye movements. The goal of my research is to elucidate the intricate relationship between locomotion, visual function, and oculomotor behavior. These physiological processes, coupled with neural oscillations arising from the periodic nature of walking, may influence visual function. To assess the impact of these behavioral and physiological aspects across the gait cycle, I use various psychophysical techniques, including binocular rivalry and contrast sensitivity, to discern differences in visual processes. With the use of virtual reality, optical tracking systems, and eye-tracking technology, I investigate these visual functions at different locomotor phases: heel strike, single stance, double stance, and toe-off. Additionally, I am actively developing saccade-detection techniques specific to natural locomotion. Current saccade-detection methods, designed for stationary behavior, exhibit poor performance during the unconstrained head and body movements typical of natural walking. These comprehensive investigations aim to provide valuable insights into the interplay between locomotion and visual-motor processes. The findings may offer implications for understanding how the human nervous system adapts to maintain stable vision during walking. Moreover, this research could inform future studies in fields such as rehabilitation and applications in virtual reality. https://www.unr.edu/neuroscience/people/students/brian-szekely
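
As context for the saccade-detection problem described above, here is a minimal sketch of a conventional fixed-velocity-threshold detector of the kind that performs poorly during natural locomotion; the threshold, sampling rate, and signal are placeholders.

# Minimal sketch of a conventional fixed-velocity-threshold saccade detector, the
# kind of stationary-behavior method the abstract says breaks down during walking.
# Threshold, sampling rate, and signal are placeholders.
import numpy as np

fs = 500.0                          # eye-tracker sampling rate (Hz), assumed
threshold_deg_s = 30.0              # fixed velocity threshold, a common choice

t = np.arange(0, 1, 1 / fs)
gaze_deg = np.zeros_like(t)
gaze_deg[t >= 0.5] = 8.0                          # an 8-degree step standing in for a saccade
gaze_deg += 0.2 * np.sin(2 * np.pi * 2 * t)       # slow oscillation standing in for gait-related gaze modulation

velocity = np.abs(np.gradient(gaze_deg, 1 / fs))  # deg/s
is_saccade = velocity > threshold_deg_s
print(f"samples flagged as saccade: {is_saccade.sum()}")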

Intern (Research)

Object Recognition to Support Indoor Navigation for Travelers with Visual Impairments

Independent wayfinding is a major challenge for blind and visually impaired (BVI) travelers. I describe recent work on improvements to real-time object recognition algorithms that will be used to support an accessible navigation app for BVI travelers in indoor environments. Recognition of key visual landmarks (such as Exit signs and artwork) in an environment provides information about the user’s current location, and improvements to the recognition algorithms will enhance the localization process used in the navigation app.
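
As a toy illustration of how a recognized landmark can support localization, the sketch below nudges a drifted position estimate toward a detected landmark’s mapped location. The landmark names, coordinates, and blending rule are invented for illustration and are not the navigation app’s algorithm.

# Toy sketch of landmark-aided localization: a recognized landmark's mapped
# location pulls the current position estimate toward it. Landmark names,
# coordinates, and the blending rule are invented; not the app's algorithm.
import math

landmark_map = {                 # mapped landmark positions in floor coordinates (m)
    "exit_sign_3F": (12.0, 4.5),
    "lobby_artwork": (2.0, 1.0),
}

def update_position(estimate, detected_landmark, camera_range_m=3.0, weight=0.7):
    """Blend the dead-reckoned estimate with the detected landmark's mapped position."""
    lx, ly = landmark_map[detected_landmark]
    ex, ey = estimate
    if math.dist(estimate, (lx, ly)) > camera_range_m * 3:
        return estimate          # implausible detection; keep the current estimate
    return ((1 - weight) * ex + weight * lx,
            (1 - weight) * ey + weight * ly)

estimate = (10.0, 6.0)           # drifted dead-reckoning estimate
estimate = update_position(estimate, "exit_sign_3F")
print(f"updated estimate: ({estimate[0]:.1f}, {estimate[1]:.1f})")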