Interactive audio-haptic perception for blind and low-vision users
PI: Kathleen McKeown
Co-PIs: Tatiana Aloi Emmanouil, CUNY; Nikolaus Kriegeskorte, Columbia; Brian Smith, Columbia; Carl Vondrick, Columbia
Abstract
A picture is worth a thousand words, but only if the viewer can experience it. We propose a project to help blind and low-vision (BLV) people experience works of art, including their sensory and semantic dimensions, their emotional valence, the artist’s intentions, and other contextual information. Our project brings together research from vision, language, and interaction design, all informed by cognitive science, to develop AI Art Immersion for the visually impaired. We envision a multimodal system that guides the user through an immersive exploration of artwork by combining text, speech, and haptic feedback. Our approach will convey different thematic aspects of an artwork, including its color, texture, depicted objects, evoked imagery, and interpretation. Our research will focus on AI generative techniques that are informed by how humans best experience artwork, enabling a personalized approach that weaves together multiple modalities in ways that are appropriate for individual end users. Our project will result in a use-case system that demonstrates the benefit of integrating AI with insights from cognitive science and neuroscience.
Publications
In progress
Resources
In progress
