CamIO Hands


Principal Investigator: James Coughlan

This project builds on the CamIO project to provide point-and-tap interactions allowing a user to acquire detailed information about tactile graphics and 3D models. 

The interface uses an iPhone’s depth and color cameras to track the user’s hands while they interact with a model. When the user points to a feature of interest on the model with their index finger, the system reads aloud basic information about that feature. For additional information, the user lifts their index finger and taps the feature again. This process can be repeated multiple times to access additional levels of information. For instance, tapping once on a region in a tactile map could trigger the name of the region, with subsequent taps eliciting the population, area, climate, etc.

No audio labels are triggered unless the user makes a pointing gesture, which allows the user to explore the model freely with one or both hands. Multiple taps can be used to skip through information levels quickly, with each tap interrupting the current utterance. This allows users to reach the desired level of information more quickly than listening to all levels in sequence.
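To make the tap-to-advance behavior concrete, here is a minimal Swift sketch of that logic, not the authors' implementation: each tap on the same feature advances one information level and interrupts the current utterance, while tapping a new feature restarts at the basic label. The type and method names (TapLevelAnnouncer, didTap) are hypothetical; only AVSpeechSynthesizer is a real API.

```swift
import AVFoundation

// Hypothetical sketch of the multi-level point-and-tap logic described above.
final class TapLevelAnnouncer {
    private let synthesizer = AVSpeechSynthesizer()
    private var currentFeature: String?
    private var level = 0

    // Maps a feature name to its ordered information levels,
    // e.g. ["Mars", "Fourth planet from the Sun", ...].
    private let labels: [String: [String]]

    init(labels: [String: [String]]) {
        self.labels = labels
    }

    // Called whenever the gesture tracker detects a tap on a feature.
    func didTap(feature: String) {
        if feature == currentFeature {
            level += 1              // repeated tap: next level of detail
        } else {
            currentFeature = feature
            level = 0               // new feature: start at the basic label
        }
        guard let levels = labels[feature], level < levels.count else { return }

        // Interrupt the current utterance so rapid taps skip ahead
        // instead of queueing announcements.
        _ = synthesizer.stopSpeaking(at: .immediate)
        synthesizer.speak(AVSpeechUtterance(string: levels[level]))
    }
}
```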

Pilot experiments with both sighted and blind and visually impaired (BVI) participants, followed by formal experiments with an additional six BVI participants, demonstrate the effectiveness of the interface. See a video demonstration here.

A manuscript describing the interface and the experiments has been accepted as a peer-reviewed publication at the upcoming International Conference on Computers Helping People with Special Needs (ICCHP ’24): A. Narcisi, H. Shen, D. Ahmetovic, S. Mascetti and J. Coughlan. “Accessible Point-and-Tap Interaction for Acquiring Detailed Information about Tactile Graphics and 3D Models.” July 2024. (Preprint pdf here.)

Moreover, the interface will be released as a free iOS app, allowing people with high-end iOS devices that include depth cameras to create their own tactile graphics and audio labels for these graphics. In this version of the app, we recommend that the tactile graphics be chosen from the following colors in this fluorescent cardstock product: green, yellow, orange, blue and magenta, all against a white background. The app currently supports four pre-defined tactile models (pdf’s are here: training, inner solar system, rockets, British Isles). Future releases of the app will allow users to label rigid objects with any colors.
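The app's actual label format is not specified here; purely as an illustration, user-authored audio labels for a color-coded tactile graphic might be organized as ordered information levels per region color, reusing the hypothetical TapLevelAnnouncer sketched above.

```swift
// Hypothetical example label set for an inner-solar-system graphic,
// keyed by the recommended cardstock colors. The real app's label
// format and content may differ.
let solarSystemLabels: [String: [String]] = [
    "yellow":  ["Sun", "A G-type main-sequence star"],
    "blue":    ["Mercury", "Smallest planet in the solar system"],
    "magenta": ["Venus", "Hottest planet in the solar system"],
    "green":   ["Earth", "Third planet from the Sun"],
    "orange":  ["Mars", "Fourth planet from the Sun"]
]

let announcer = TapLevelAnnouncer(labels: solarSystemLabels)
announcer.didTap(feature: "orange")  // speaks "Mars"
announcer.didTap(feature: "orange")  // interrupts, speaks the next level
```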

If you would like to test a beta version of the app, please email Andrea Narcisi at andrea.narcisi@studenti.unimi.it .

Projects

Image: a person points a stylus at a plastic model of a biological cell; a webcam (not pictured) views the model and stylus, and a computer connected to the webcam announces "Nucleolus."

CamIO

CamIO (short for “Camera Input-Output”) is a system to make physical objects (such as documents, maps, devices and 3D models) accessible to blind and visually impaired persons, by providing real-time audio feedback in response to the location on an object that the user points to.

Labs

Coughlan Lab photo, L to R: Huiying Shen, Ali Cheraghi, Brandon Biggs, James Coughlan, Charity Pitcher-Cooper, Giovanni Fusco

Coughlan Lab

The goal of our laboratory is to develop and test assistive technology for blind and visually impaired persons that is enabled by computer vision and other sensor technologies.

News

Smith-Kettlewell Eye Research Institute to Illuminate Adaptive Strategies in Vision Impairment at IMRF Symposium

The Smith-Kettlewell Eye Research Institute played a key role at this year’s International Multisensory Research Forum (IMRF) in Reno, Nevada, beginning June 17, 2024, with an all-SKERI symposium, “Shifting Sensory Reliance: Adaptive Strategies in Vision Impairment and Blindness.”…