
Dragan Ahmetovic

Using digital documents with mathematical content for people with visual impairments

Reading scientific content within digital documents is often challenging for people with visual impairments. Textual information can be accessed easily through assistive tools such as refreshable braille displays or screen readers. However, mathematical formulae and graphs are difficult to translate into a screen-readable form while preserving the expressiveness of the original format. Providing broadly usable content is a burden left to document authors, and it requires substantial time and effort. Authors are also frequently unaware of the need for document usability, and they may lack the know-how to create blind-friendly documents. As a result, scientific content is rarely provided in a blind-friendly format, which creates a barrier to STEM education and employment for people with visual impairments. At the laboratory “S. Polin” of the University of Turin, Italy, our research focuses on assistive technologies that enable the use of digital documents with scientific content by people with visual impairments. I will present two of our most recent works. Axessibility is a LaTeX package for generating PDF documents in which mathematical formulae can be read by people with visual impairments using braille displays or screen readers, without requiring the author to add blind-friendly content manually. AudioFunctions.web is a web-based system that enables blind people to explore mathematical function graphs, using sonification, earcons, and speech synthesis to convey the overall shape of a function graph, its key points of interest, and accurate quantitative information at any given point.
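
As a minimal sketch of how Axessibility is used (assuming the package’s standard name on CTAN; the options a particular document needs may differ), a formula written in an ordinary LaTeX math environment becomes available to assistive technology simply by loading the package in the preamble:

    \documentclass{article}
    % Loading axessibility makes formulae written in standard math environments
    % also available as hidden textual replacements in the generated PDF, which
    % screen readers and refreshable braille displays can read.
    \usepackage{axessibility}

    \begin{document}
    The quadratic formula, readable both visually and through assistive technology:
    \begin{equation}
      x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
    \end{equation}
    \end{document}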

Special Time: 11:00 AM, Zoom Colloquium: OKO – app that uses computer vision to assist blind and visually impaired people

AYES is a Belgium-based company co-founded by three computer scientists. Our journey started when our visually impaired family friend, Bram, told us about the challenges he faced while navigating outdoors. We quickly realized that current assistive technologies are outdated and could benefit from artificial intelligence. For that reason, we started developing a mobile application, called OKO, that uses the smartphone camera and computer vision to assist people in their daily lives. Crossing the street is a very stressful task when there is no pedestrian signal installed that blind or low-vision people can detect. For that reason, we’ve developed a feature capable of detecting the pedestrian traffic light. So far, we have identified 70,000+ safe crossings. Our goal now is to bring this technology to the USA, since many cities are experiencing difficulties with installing pedestrian signals that can be used by all. In recent months we’ve also added public transport recognition, which identifies the bus number and destination to get people on the right bus. In the future, we’ll add more computer vision features to deliver an even better navigation experience.

Hybrid Colloquium: Making Calculus Accessible

Hybrid Colloquium: Calculus for the Blind and Visually-Impaired

Abstract: When Isaac Newton developed calculus in the 1600s, he was trying to tie together math and physics in an intuitive, geometrical way. But over time, math and physics teaching became heavily weighted toward algebra and away from geometrical problem-solving. Yet many practicing mathematicians and physicists develop their intuition geometrically first and do the algebra later. Joan Horvath and Rich Cameron’s new book, Make: Calculus, imagines how Newton might have used 3D printed models, LEGO bricks, programming, craft materials, and a dash of electronics to teach calculus concepts intuitively with hands-on models. The book relies on as little algebra as possible while retaining enough to allow comparison with a traditional curriculum. The 3D printable models are written in OpenSCAD, the text-based, open-source CAD program. The models live in an open-source repository and are designed to be edited, explored, and customized by teachers and learners. Joan and Rich will also address how they think about the tactile storytelling of their models. They hope their work will enable more people to master calculus and start on the road to STEM careers. Make: Calculus is available in a softcover print version, in a PDF/epub3 bundle in which the epub3 with MathML equations has been optimized for screen readers (the Thorium epub3 reader is recommended), and in Kindle format. Joan and Rich will talk about some of the technology gaps they encountered while trying to keep a book with calculus equations usable by blind and visually-impaired students. Joan Horvath and Rich Cameron are the co-founders of Pasadena-based Nonscriptum LLC, which provides 3D printing and maker tech consulting and training. Their eight previous books include Make: Geometry, which developed a similar repository of models for middle and high-school math in collaboration with the SKI “3Ps” project. They have also authored popular LinkedIn Learning courses on additive manufacturing and run several related (currently virtual) Meetup groups.

Catherine Agathos, Post-Doctoral Research Fellow

Implications of age-related central visual field loss for spatial orientation and postural control

Multisensory integration is essential for postural control and safe navigation. The vestibular system plays a crucial role in these processes, contributing to functions important for maintaining one’s autonomy, such as balance, oculomotor control, and spatial orientation. Healthy aging, however, is accompanied by a decline in many perceptual, cognitive, and motor abilities, potentially leading to a loss of autonomy and increased health risks, most notably falls. Age-related macular degeneration (AMD), the leading cause of irreversible visual impairment in older adults in industrialized countries, adds another layer of complexity to these age-related changes. Binocular AMD results in central visual field loss, forcing individuals to adopt eccentric viewing strategies and develop preferred retinal loci (PRLs) in the peripheral retina. This adaptation calls for significant changes in oculomotor and eye-head coordination that require recalibration with the vestibular system in particular. The interplay of visual impairment, altered oculomotor function, and age-related sensorimotor deficits likely contributes to the balance and mobility difficulties reported in this population, though the mechanisms are poorly understood. For instance, how does the combined loss of central vision and the associated oculomotor changes affect visual sampling during motion, space representation, and the integration of visual and body-based signals necessary for accurate perception and adaptive movements? Little is known about whether and how older individuals with AMD adapt to a peripheral PRL in the context of spatial orientation and postural control. In this talk, I will present relevant work from the literature and my studies in healthy aging and central field loss. I will then introduce an R01 proposal evolving from this research, which examines the recalibration of different sensorimotor systems to a PRL in individuals with AMD. The proposal focuses on three key areas: 1) head stabilization and the exploitation of residual vision, 2) gaze-direction-induced illusions in spatial orientation perception, and 3) postural adaptation to eccentrically viewed optic flow stimuli. This research aims to bridge the knowledge gap in multisensory integration for balance, mobility, and fall risk in AMD. Ultimately, the goal is to inform the development of appropriate interventions, aids, and rehabilitation strategies for this population.

Dr. Hari Palani, Principal Researcher & CEO

Multisensory Information Access and AI: Advancing Opportunities for the Visually Impaired

Lack of timely access to information is a significant problem for the nearly 24 million blind or visually impaired (BVI) individuals in the U.S. and 285 million globally. Significant progress has been made in ensuring that BVI individuals receive the accommodations they need. However, very little work has been done toward making critical materials accessible in real time, especially non-textual materials such as math expressions, graphical representations, and maps. Current methods for authoring, converting, and producing accessible versions of non-textual materials require significant human, time, and financial resources. The process also depends on people with rare and specialized knowledge, such as braille transcribers and tactile graphic designers. My research program is aimed at addressing these issues and is unified by two complementary strands of applied science: (1) multisensory information access and (2) AI. In this talk, I will first introduce the notion of multisensory information access, with a focus on how it shapes human perception, cognition, and behavior. Then, using these findings as a foundation, I will show how we can use (and are using) AI to enable multisensory information access in educational and navigational settings, addressing the long-standing accessibility issues of BVI individuals. https://www.unarlabs.com/

Advanced Psychophysical Methods for Comprehensive Visual Function Assessment

Assessing visual function is a fundamental aspect of eye research. However, existing tests are often limited by their design for in-clinic use, the need for trained personnel, the time they require, and the coarse resolution of their results. This presentation will review the current limitations of vision assessment and introduce a range of new visual function tools developed to address these challenges. Specifically, it will describe various rapid, generalizable psychophysical paradigms capable of constructing personalized performance models. Additionally, it will cover tools for continuous measurement of perceptual multistability. The advent of these tools has secondary effects, such as enabling the use of machine learning to detect novel categories of atypical vision and to identify redundant and predictive visual functions for specific populations. The presentation will include examples from typical clinical populations, including individuals with refractive errors, color vision deficits, amblyopia, albinism, and retinal disorders. Moreover, the talk will advocate for expanding vision assessments beyond conventional screening of, e.g., acuity, contrast, or color, highlighting the importance of evaluating other visual modalities such as form, motion, face, and object perception and their clinical relevance.
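
As a generic illustration of the kind of rapid, adaptive psychophysical procedure alluded to above (a hedged sketch, not the specific tools presented in the talk), a simple 1-up/2-down staircase adjusts stimulus intensity trial by trial and converges near the observer’s threshold:

    import random

    def run_staircase(true_threshold=0.3, start=1.0, step=0.05, n_reversals=8):
        """1-up/2-down staircase: intensity steps down after two consecutive
        correct responses and up after each error, converging near threshold."""
        intensity, streak, direction, reversals = start, 0, 0, []
        while len(reversals) < n_reversals:
            # Simulated observer: detects the stimulus when intensity exceeds
            # the true threshold, with a little response noise.
            correct = intensity + random.gauss(0, 0.02) > true_threshold
            if correct:
                streak += 1
                if streak == 2:                 # two correct -> step down
                    streak = 0
                    if direction == +1:         # direction change = reversal
                        reversals.append(intensity)
                    direction = -1
                    intensity = max(0.0, intensity - step)
            else:
                streak = 0                      # one error -> step up
                if direction == -1:
                    reversals.append(intensity)
                direction = +1
                intensity += step
        return sum(reversals) / len(reversals)  # threshold estimate

    print(round(run_staircase(), 3))

Averaging the intensities at the reversal points gives a quick threshold estimate; replacing the simulated observer with real responses turns the same loop into a usable test.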

Radoslaw Cichy, Professor in the Department of Education and Psychology at the Free University of Berlin and PI of the Neural Dynamics of Visual Cognition group

Deep neural networks as scientific models of vision

Artificial deep neural networks (DNNs) are used in many different ways to address scientific questions about how biological vision works. Despite their wide use in this context, their scientific value is periodically questioned. I will argue that DNNs serve vision science well in three ways: for prediction, for explanation, and for exploration. I will illustrate these claims with recently published and ongoing projects in the lab. I will also propose future steps to accelerate progress. https://www.ewi-psy.fu-berlin.de/en/psychologie/arbeitsbereiche/neural_dyn_of_vis_cog/team_v2/group_leader/rm_cichy/index.html
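
As one hedged, concrete example of using DNNs for prediction (a generic sketch with random stand-in data, not a specific project from the lab), representational similarity analysis compares the response geometry of a network layer with that of measured brain responses to the same images:

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_images = 50

    # Stand-ins for real data: one DNN layer's activations and brain responses
    # (e.g., fMRI voxel patterns) to the same 50 images.
    dnn_layer = rng.normal(size=(n_images, 4096))
    brain_data = rng.normal(size=(n_images, 200))

    # Representational dissimilarity matrices: pairwise distances between the
    # responses to every pair of images, computed in each space separately.
    rdm_dnn = pdist(dnn_layer, metric="correlation")
    rdm_brain = pdist(brain_data, metric="correlation")

    # The rank correlation between the two RDMs quantifies how well the DNN
    # layer predicts the representational geometry of the brain data.
    rho, p = spearmanr(rdm_dnn, rdm_brain)
    print(f"RDM correlation: rho={rho:.3f}, p={p:.3f}")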

Sarika Gopalakrishnan, PhD, FAAO, Post-doctoral Research Fellow, Envision Research Institute

The role of virtual reality and augmented reality technologies in low vision rehabilitation

Visual impairment refers to a condition in which a person’s eyesight cannot be improved with medical treatment. Such individuals face difficulties performing activities of daily living independently and require assistance with tasks they cannot carry out because of low vision or blindness. Virtual reality (VR) technology can be used to understand the visual performance of people with low vision in real-world scenarios. VR scenarios provide a more realistic way of measuring visual parameters such as visual acuity, contrast, eye and head movements, and visual search than clinical settings. Augmented reality (AR) technology can help analyze functional vision during daily activities. AR can also help improve the visual functions of people with low vision, such as distance and near visual acuity and distance and near contrast sensitivity, and it can enhance functional vision activities such as reading, writing, watching television, working with computers, identifying currency, and finding objects in a crowd. In short, AR can greatly enhance the visual experience of people with low vision. In this presentation, we will discuss the applications of virtual reality and augmented reality technology in the field of low vision rehabilitation. https://research.envisionus.com/Team/Sarika-Gopalakrishnan,-PhD,-FAAO
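
As a hedged illustration of the kind of enhancement an AR display could apply for a viewer with reduced contrast sensitivity (a generic sketch using OpenCV, not the specific systems discussed in the talk), the snippet below boosts local contrast and overlays edges on a camera image:

    import cv2

    def enhance_frame(frame, clip_limit=3.0, edge_weight=0.5):
        """Boost local luminance contrast with CLAHE and overlay detected edges,
        a simple stand-in for AR contrast enhancement for low vision."""
        lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
        l = clahe.apply(l)                       # equalize luminance locally
        enhanced = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
        edges = cv2.Canny(l, 50, 150)            # emphasize object boundaries
        edges_bgr = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
        return cv2.addWeighted(enhanced, 1.0, edges_bgr, edge_weight, 0)

    # Example: enhance a single image; an AR system would run this per frame.
    image = cv2.imread("scene.jpg")              # hypothetical input file
    if image is not None:
        cv2.imwrite("scene_enhanced.jpg", enhance_frame(image))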