Distinguished Seminar Series

Location: Kavli Auditorium at Zuckerman Institute
Time: 3:00pm to 4:00pm
Zoom: Link available upon request at [email protected]

Speaker: Leila Wehbe is an associate professor in the Machine Learning Department and the Neuroscience Institute at Carnegie Mellon University. Her work sits at the interface of cognitive neuroscience and computer science, combining naturalistic functional imaging with machine learning both to improve our understanding of the brain and to gain insights for building better artificial systems. She is the recipient of an NSF CAREER award, a Google Faculty Research Award, and an NIH CRCNS R01. Previously, she was a postdoctoral researcher at UC Berkeley and obtained her PhD from Carnegie Mellon University.
Title: Model prediction error reveals separate mechanisms for integrating multi-modal information in the human cortex
Abstract: Language comprehension engages much of the human cortex, extending beyond the canonical language system. Yet in everyday life, language unfolds alongside other modalities, such as vision, that recruit these same distributed areas. Because language is often studied in isolation, we still know little about how the brain coordinates and integrates multimodal representations. In this talk, we use fMRI data from participants viewing 37 hours of TV series and movies to model the interaction of auditory and visual input. Using encoding models that predict brain activity from each stream, we introduce a framework based on prediction error that reveals how individual brain regions combine multimodal information.
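
For attendees unfamiliar with voxelwise encoding models, the sketch below illustrates the general idea of fitting per-modality encoding models and comparing their prediction errors. It is not the speaker's pipeline; the feature matrices, data shapes, and the ridge-regression setup are hypothetical placeholders chosen only to make the example runnable.

```python
# Minimal sketch (hypothetical data, not the speaker's code) of voxelwise
# encoding models with a prediction-error comparison across modalities.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trs, n_voxels = 2000, 500                        # fMRI time points and voxels (made up)
audio_feats = rng.standard_normal((n_trs, 128))    # placeholder auditory/language features
visual_feats = rng.standard_normal((n_trs, 256))   # placeholder visual features
bold = rng.standard_normal((n_trs, n_voxels))      # placeholder BOLD responses

def fit_and_error(features, bold):
    """Fit a ridge encoding model and return per-voxel held-out squared error."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, bold, test_size=0.25, shuffle=False
    )
    model = RidgeCV(alphas=np.logspace(0, 4, 10)).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    return ((y_te - pred) ** 2).mean(axis=0)

err_audio = fit_and_error(audio_feats, bold)
err_visual = fit_and_error(visual_feats, bold)
err_joint = fit_and_error(np.hstack([audio_feats, visual_feats]), bold)

# Voxels where the joint model beats the better single-modality model are
# candidates for multimodal integration; comparing err_audio and err_visual
# indicates which stream dominates in a given region.
integration_gain = np.minimum(err_audio, err_visual) - err_joint
```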
