Scientific

Zoom Brown Bag: A Virtual Environment for Training Blind People to Learn to Use Camera-Based Navigation Systems

Abstract: Assistive navigation systems for blind people take images or videos as input for tasks such as locating the user, recognizing objects, and detecting obstacles. Although the quality of the images and videos significantly affects system performance, manipulating a camera to capture a clear, properly framed image is challenging for blind users. In this research, we explore the interactions between a camera and blind users in assistive navigation systems through interviews with blind participants and with researchers in human-computer interaction and computer vision. We further develop a virtual environment in which blind users can train themselves to manipulate a camera, so that they can understand and effectively use the gestures identified in the interviews, such as scanning their environment with a camera and maintaining a desired camera position or orientation. This presentation shares the results of the interviews, the method used to implement the virtual environment, and a plan for evaluating the virtual environment through a user study. https://yua.jul.mybluehost.me/users/jonggi-hong
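The abstract does not say how the trainer's feedback is implemented. Purely as an illustration of the kind of pose-feedback loop such a virtual environment needs, here is a minimal Python sketch; the function names, the 5-degree tolerance, and the cue wording are all invented for this example, not the authors' implementation.

```python
import math

# Illustrative sketch only: one way a training environment could turn a
# simulated camera pose into verbal feedback. The tolerance and cue
# wording are assumptions, not the authors' implementation.

def feedback(pitch, yaw, target_pitch=0.0, target_yaw=0.0, tolerance_deg=5.0):
    """Map the current camera pose (degrees) to a simple verbal cue."""
    error = math.hypot(pitch - target_pitch, yaw - target_yaw)
    if error <= tolerance_deg:
        return "hold steady"
    hints = []
    if pitch > target_pitch + tolerance_deg:
        hints.append("tilt down")
    elif pitch < target_pitch - tolerance_deg:
        hints.append("tilt up")
    if yaw > target_yaw + tolerance_deg:
        hints.append("turn left")
    elif yaw < target_yaw - tolerance_deg:
        hints.append("turn right")
    return ", ".join(hints) or "almost there"

print(feedback(pitch=12.0, yaw=-3.0))  # -> "tilt down"
```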

Postponed: Tuesday, Zoom Colloquium: Making Calculus Accessible

Abstract – When Isaac Newton developed calculus in the 1600s, he was trying to tie together math and physics in an intuitive, geometrical way. But over time, math and physics teaching became heavily weighted toward algebra and away from geometrical problem solving. However, many practicing mathematicians and physicists develop their intuition geometrically first and do the algebra later. Joan Horvath and Rich Cameron’s upcoming book, Make: Calculus, imagines how Newton might have used 3D printed models, construction toys, programming, craft materials, and a dash of electronics to teach calculus concepts intuitively. The book relies on as little algebra as possible while retaining enough to allow comparison with a traditional curriculum. The 3D printable models are written in OpenSCAD, the text-based, open-source CAD program. The models will be released in an open-source repository when the book is published, and are designed to be edited, explored, and customized by teachers and learners. Joan and Rich will also address how they think about the tactile storytelling of their models. They hope their work will make calculus more accessible, in the broadest sense of the word, enabling more people to start on the road to STEM careers.

Joan Horvath and Rich Cameron are the co-founders of Pasadena-based Nonscriptum LLC, 3D printing and maker tech consultants, trainers, and authors. Their eight previous books include Make: Geometry, which developed a similar repository of models for middle and high school math in collaboration with the SKI “3Ps” project. They have also authored popular LinkedIn Learning courses on additive manufacturing, and run several related (currently virtual) Meetup groups.
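As a taste of the geometry-first view of integration the abstract describes (and only as an illustration; this example is not from the book), the short Python sketch below approximates the volume of a solid of revolution by stacking thin discs.

```python
import math

# Illustrative sketch (not from the book): the disc method treats a solid
# of revolution as a stack of thin circular slices, the geometric picture
# behind the integral of pi * f(x)^2 dx. Here, y = sqrt(x) rotated about
# the x-axis on [0, 1], whose exact volume is pi/2.

def disc_volume(f, a, b, n=100_000):
    """Approximate the volume of revolution of f about the x-axis."""
    dx = (b - a) / n
    return sum(math.pi * f(a + (i + 0.5) * dx) ** 2 * dx for i in range(n))

approx = disc_volume(math.sqrt, 0.0, 1.0)
print(f"{approx:.6f} vs exact {math.pi / 2:.6f}")
```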

Zoom Colloquium: Braille Literacy Rates in the U.S.: Knowing What We Don’t Know

For almost as long as braille has existed, researchers, advocates, educators, and innovators have been influenced by assumptions or beliefs about rates of braille readership. However, despite repeated claims in the media and in advocacy materials, U.S. braille literacy statistics have proven difficult to substantiate and clarify. In this session, authors Rebecca Sheffield, Frances Mary D’Andrea, and Sarah Chatfield will discuss their systematic literature review, which began in 2015 in collaboration with Smith-Kettlewell scientist Valerie Morash. The research findings raise numerous questions, including:

• In the absence of current, reliable data on braille literacy, what evidence is there about the demand for braille-related innovations and research?
• What lessons should we take from the proliferation of unsupported claims about braille literacy rates?
• How has the nature of being a “braille reader” changed with the advent of technology?
• How might researchers approach agreeing on definitions and gathering useful data on braille readership rates?

Sheffield, R. M., D’Andrea, F. M., Morash, V., & Chatfield, S. (2022). How many braille readers? Policy, politics, and perception. Journal of Visual Impairment & Blindness, 116(1), 14–25. https://doi.org/10.1177/0145482X211071125

Zoom Brown Bag: Studying sensory reweighting via the perception of gravity in aging with central vision loss

Abstract: To interact with the world around us, we must accurately perceive our environment and how we are moving within it. For this, visual, vestibular, and somatosensory (proprioceptive and tactile) inputs must be integrated and appropriately (re)weighted depending on signal reliability and on environmental and task demands; sensory reweighting is therefore a dynamic process. However, age-related sensory deficits are thought to lead older adults to systematically up-weight visual information. This visual dependence in older age is associated with alterations in body coordination, adaptation difficulties, impaired balance, and falls, and such limitations can be debilitating when visual information is reduced or unreliable, as in the case of central visual field loss due to age-related macular degeneration (AMD). It is unclear whether aging allows for sensory adaptations that compensate for vision loss in AMD; visual dependence may persist, whereby affected individuals rely increasingly on the very sense that is failing them. We are therefore examining sensory reweighting in AMD with classic measures of subjective visual vertical estimation, an essential aspect of space perception and postural control. Before studying the complex case of AMD, where aging, vision loss, and an eccentric oculomotor reference frame may all play a part, we first examine whether the use of eccentric viewing strategies alone affects verticality judgments. Individuals with binocular central field loss commonly employ an eccentric preferred retinal locus (PRL) in their better eye, and since eye position signals also contribute to space perception and postural orientation and control, the consequences of AMD may extend beyond visual and oculomotor tasks. Thus, in addition to investigating the potential influence of eye eccentricity on verticality judgments in younger adults with no vision deficits, we are studying individuals with monocular AMD, who can serve as their own controls. Preliminary results suggest that older adults with AMD rely on visual context in their subjective vertical estimation despite their vision loss, and that eccentric viewing alters verticality perception. The potential interaction of eye orientation and contextual visual information will be essential to consider in designing rehabilitation protocols for individuals with AMD. https://yua.jul.mybluehost.me/users/catherine-agathos
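For readers unfamiliar with the reweighting idea, the standard formal account is reliability-weighted cue integration: each sensory estimate is weighted by the inverse of its variance. The sketch below illustrates that textbook model with invented numbers; it is not the study's analysis.

```python
# Textbook sketch of reliability-weighted cue integration: each cue is
# weighted by the inverse of its variance. All numbers below are invented
# for illustration; the study measures subjective visual vertical, not
# these values.

def combine(cues):
    """Fuse (estimate, variance) pairs from independent sensory channels."""
    weights = [1.0 / var for _, var in cues]
    total = sum(weights)
    fused = sum(w * est for w, (est, _) in zip(weights, cues)) / total
    return fused, 1.0 / total  # fused estimate and its (smaller) variance

# Hypothetical tilt-from-vertical estimates in degrees:
# (visual, vestibular, somatosensory), each with its own variance.
cues = [(4.0, 1.0), (0.5, 4.0), (1.0, 9.0)]
fused, var = combine(cues)
print(f"fused tilt: {fused:.2f} deg, variance {var:.2f}")
# The low-variance visual cue dominates: up-weighting vision pulls the
# fused estimate toward it, the "visual dependence" at issue here.
```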

Zoom Brown Bag: Modeling the impairment of smooth pursuit eye movements in macular degeneration

Abstract – Age-related macular degeneration (AMD) is the most prevalent cause of central visual field loss. Since the fovea (the oculomotor locus) is often impaired, individuals with AMD typically have difficulties with saccadic and smooth pursuit eye movements (Verghese, Vullings, & Shanidze, 2021). We propose that smooth pursuit eye movements are impaired in macular degeneration due to two factors: 1) the transient disappearance of the target into the scotoma, and 2) noise that depends on the eccentricity of the oculomotor locus from the target. To assess this claim, we measured performance in a perceptual baseball task in which observers had to determine whether a target would cross or miss a rectangular region (the plate) after being extinguished (Kim, Badler, & Heinen, 2005), when instructed either to fixate a marker or to smoothly track the target. We recorded eye movements of 4 AMD eyes and 6 control eyes with simulated scotomata (matched to those of individual AMD participants) during the task. We found that controls with simulated scotomata discriminated strikes from balls better than AMD participants, particularly in the smooth pursuit condition. We also developed a model that predicted performance on the task using the portions of the target trajectory visible given the scotoma, and position uncertainty given the eccentricity of the eye from the target. The model showed a trend similar to the participant results, with better discrimination for simulations using control eye position data (foveal oculomotor locus) than for AMD data (peripheral oculomotor loci). However, the model’s discrimination performance was generally better than actual participant performance. These findings suggest that while the disappearance of the target into the scotoma and the noise due to the eccentricity of the peripheral oculomotor locus both affect perceptual discrimination in AMD, these factors only partially account for the impairments. https://yua.jul.mybluehost.me/users/jason-rubinstein
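A rough sketch of the two model ingredients the abstract names (target invisibility inside the scotoma, and eccentricity-dependent position noise) might look like the following. The geometry, noise scaling, and linear-extrapolation decision rule are simplifying assumptions invented for illustration, not the authors' model.

```python
import random

# Simplified sketch: (1) the target is unobservable while inside the
# scotoma, and (2) position samples get noisier with eccentricity from
# the oculomotor locus. The extrapolation to the plate is a stand-in
# decision rule, not the authors' model.

def observe(xs, ys, eye_x, eye_y, scotoma_radius, noise_per_deg):
    """Return noisy samples of the target path, dropping occluded points."""
    samples = []
    for x, y in zip(xs, ys):
        ecc = ((x - eye_x) ** 2 + (y - eye_y) ** 2) ** 0.5
        if ecc < scotoma_radius:       # hidden inside the scotoma
            continue
        sigma = noise_per_deg * ecc    # noise grows with eccentricity
        samples.append((x + random.gauss(0, sigma), y + random.gauss(0, sigma)))
    return samples

def predicts_strike(samples, plate_lo, plate_hi):
    """Extrapolate the visible samples to the plate at x = 0."""
    if len(samples) < 2:
        return random.random() < 0.5   # no usable information: guess
    (x0, y0), (x1, y1) = samples[0], samples[-1]
    if x1 == x0:
        return plate_lo <= y1 <= plate_hi
    y_plate = y0 + (0.0 - x0) * (y1 - y0) / (x1 - x0)
    return plate_lo <= y_plate <= plate_hi

xs = [float(10 - i) for i in range(10)]   # target approaching the plate
ys = [2.0] * 10
seen = observe(xs, ys, eye_x=5.0, eye_y=2.0, scotoma_radius=2.0, noise_per_deg=0.05)
print(predicts_strike(seen, plate_lo=0.0, plate_hi=3.0))
```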

Zoom Meeting: Career Paths Outside of Academia

Vision Research at Apple (12:00 – 12:35 pm)
Speakers: Andrew Watson – Distinguished Chief Vision Scientist, Apple; Laura Walker – Sr. Engineer, Visual Health, Apple

Apple brings together the smartest and most talented people to make incredible products that impact lives around the globe. The role of vision science is critical to the visual experience our displays provide. Our team works with almost every team across Apple to ensure our displays and algorithms are delighting the human visual system. In this lunchtime chat, we will discuss ways in which vision and human perception experts impact the products we use every day.

Panel: Transitioning to Industry (12:35 – 1:15 pm)
Panelists: Zheng Ma – Research Scientist, Meta (Facebook); Natalie Stepien-Bernabe – Human Factors Senior Scientist, Exponent

Panelists with PhDs in vision science and psychology will discuss the paths they took to develop their careers in technology and scientific consulting. After sharing their experiences, the panelists will take questions from the audience.

Zoom Brown Bag: The development of visual temporal processing

Abstract – The visual system must organize dynamic input into meaningful percepts across time, balancing stability against sensitivity to change. The Temporal Integration Window (TIW) has been hypothesized to underlie this balance: if two or more stimuli fall within the same TIW, they are integrated into a single percept; those that fall in different windows are segmented. Visual TIWs have mainly been studied in adults, showing average windows of 65 ms; it is unclear how temporal windows develop throughout early childhood. Differences in TIWs can influence high-level cognitive and perceptual processes that require well-adapted timing, such as object individuation, apparent motion, action sequence perception, language processing, action planning, and pragmatic aspects of communication such as interactional synchrony. Because of the fundamental role temporal processing plays in visual perception, it is important, then, to understand the trajectory of how TIWs change not only over typical development (TD) but also in neurodevelopmental disorders such as autism spectrum disorder (ASD). My work has uncovered the developmental trajectory of visual temporal processing in young children with and without autism, and has mapped the development of peak alpha frequency – a potential neural correlate of visual temporal processing. https://yua.jul.mybluehost.me/users/julie-freschl
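To make the windowing idea concrete, here is a toy Python illustration of TIW-based grouping. The grouping rule is a simplification invented for this example; only the 65 ms adult average comes from the abstract.

```python
# Toy illustration of TIW-based grouping: onsets falling within the same
# window fuse into one percept; later onsets open a new window. The 65 ms
# default is the adult average cited above; the grouping rule itself is a
# simplification invented for this example.

def group_by_tiw(onsets_ms, tiw_ms=65.0):
    """Group stimulus onsets (ms) into percepts, one list per window."""
    percepts = []
    for t in sorted(onsets_ms):
        if percepts and t - percepts[-1][0] <= tiw_ms:
            percepts[-1].append(t)    # within the current window: integrate
        else:
            percepts.append([t])      # outside it: segment into a new percept
    return percepts

# Two flashes 40 ms apart fuse; a third flash at 150 ms is seen separately.
print(group_by_tiw([0, 40, 150]))  # -> [[0, 40], [150]]
```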

Zoom Brown Bag: Eye Movement During Object Search and Its Comparison to Free Viewing

Abstract – Eye movement is an observable behavior related to visual attention, which can be divided into two types: a bottom-up process driven solely by the visual input, and a top-down process influenced by the behavioral goal. These two types of attention are largely considered to correspond to the eye movements made during free viewing and visual search tasks, respectively. Recent developments in deep learning provide the opportunity to train models of fixation prediction and compare their performance. However, most visual search studies that have recorded eye movements have been small-scale efforts limited to dozens or a few hundred unique search images. There is no image dataset labeled with search fixations that is large and general enough for training deep network models, nor are there parallel datasets of search and free-viewing behavior to provide a direct comparison between these two tasks on the same images. To fill this gap, we created COCO-Search18 and COCO-FreeView, large-scale datasets of eye fixations from people either searching for a target object or freely viewing the same images. We characterized eye movement behaviors in both datasets and trained deep network models to predict fixations on a disjoint test dataset. We also collected COCO-CursorSearch, a third parallel dataset using the same images and 18 target categories as COCO-Search18, but with people using a “foveated” mouse-deblurring paradigm to manually search for targets. We validated our mouse-movement approximation of search fixations and will discuss the potential that online data collection has for modeling attention. https://yua.jul.mybluehost.me/users/yupei-chen
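The abstract does not specify evaluation metrics, but one common way fixation-prediction models are scored is Normalized Scanpath Saliency (NSS). The sketch below shows that generic metric with stand-in data; it is not code from the COCO-Search18 pipeline.

```python
import numpy as np

# Generic metric sketch, not code from the COCO-Search18/COCO-FreeView
# pipeline: Normalized Scanpath Saliency (NSS) z-scores a predicted
# saliency map and averages it at the human fixation locations.

def nss(saliency_map, fixations):
    """Mean z-scored saliency at fixated (row, col) locations."""
    z = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    return float(np.mean([z[r, c] for r, c in fixations]))

rng = np.random.default_rng(0)
pred = rng.random((240, 320))          # stand-in for a model's saliency map
fixs = [(120, 160), (60, 200)]         # stand-in human fixations
print(f"NSS = {nss(pred, fixs):.3f}")  # values above 0 beat chance
```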

Zoom Brown Bag: "Solving Perplexing PowerPoint Puzzles: A presentation within a presentation."

Zoom Brown Bag: “Solving Perplexing PowerPoint Puzzles: A presentation within a presentation.”

Abstract – This seminar will cover many fun, sometimes perplexing, features of PowerPoint. Topics include cropping videos, animating text, sharpening shapes, taming icons, punching holes in shapes (!), working with layers, and managing morphs. All of these will be demonstrated with reference to a PowerPoint presentation about what parents of children with CVI have reported regarding their children’s ability to read – research coming out of Dr. Arvind Chandna’s lab.

Announcing the ‘Virtual’ Seventeenth Annual Meeting of the Low Vision Rehabilitation Study Group. It’s a 2-Day Event (Feb. 4th & 5th), both days 9:00 am – 12:00 pm

Purpose: An informal gathering of clinicians/clinical researchers in low vision rehabilitation
• Discuss problem cases
• Share techniques
• Brainstorm ideas for new treatments or investigations
• Enjoy collegiality

Location: the easy chair at your house
• Hosted by Don Fletcher, Ron Cole, Gus Colenbrander, Tiffany Chan, and Annemarie Rossi
• Sponsored by Smith-Kettlewell Eye Research Institute (SKERI) and CPMC Dept. of Ophthalmology

Dates: February 4th and 5th, 2022
• Friday, February 4th, 9 AM to 12 noon Pacific Time
• Saturday, February 5th, 9 AM to 12 noon Pacific Time

Who is invited:
• Anyone actively involved in vision rehabilitation
• NOT newcomers wanting to get started (sorry – get your feet wet, then join us)

Registration fee: NONE (zero, no charge, $0.00 – what a deal!)
• Contact Don Fletcher at floridafletch@msn.com to save a spot for Friday/Saturday

Attire: Something nice enough to turn the Zoom camera on

Format: Informal
• No invited speakers
• Bring a case or technique to discuss
• There is no set agenda – we will divide the time among all comers
• If time allows, we can discuss and solve all the problems facing the field

Promise: We won’t always agree, but we’ll have a good time as a group with a common interest/passion.

Donald C. Fletcher, MD
https://yua.jul.mybluehost.me/users/don-fletcher