Distinguished Seminar Series

Location: Kavli Auditorium at Zuckerman Institute (9th Floor)
Date: 4/24/2026
Time: 3:00pm to 4:00pm
Zoom: Upon request @ [email protected]
Title and Abstract: TBD

Location: Kavli Auditorium at Zuckerman Institute (9th Floor)
Date: 5/21/2026
Time: 3:00pm to 4:00pm
Zoom: Upon request @ [email protected]
Title and Abstract: TBD

Location: Kavli Auditorium at Zuckerman Institute
Time: 3:00pm to 4:00pm
Zoom: Upon request @ [email protected]
Leila Wehbe is an associate professor in the Machine Learning Department and the Neuroscience Institute at Carnegie Mellon University. Her work sits at the interface of cognitive neuroscience and computer science, combining naturalistic functional imaging with machine learning both to improve our understanding of the brain and to gain insights for building better artificial systems. She is the recipient of an NSF CAREER award, a Google Faculty Research Award and an NIH CRCNS R01. Previously, she was a postdoctoral researcher at UC Berkeley and obtained her PhD from Carnegie Mellon University.
Title: Model prediction error reveals separate mechanisms for integrating multi-modal information in the human cortex
Abstract: Language comprehension engages much of the human cortex, extending beyond the canonical language system. Yet in everyday life, language unfolds alongside other modalities, such as vision, that recruit these same distributed areas. Because language is often studied in isolation, we still know little about how the brain coordinates and integrates multimodal representations. In this talk, we use fMRI data from participants viewing 37 hours of TV series and movies to model the interaction of auditory and visual input. Using encoding models that predict brain activity from each stream, we introduce a framework based on prediction error that reveals how individual brain regions combine multimodal information.
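As background for the encoding-model framework described in the abstract, the sketch below is a minimal, hypothetical illustration (random placeholder arrays, not the speakers' actual data or pipeline): it fits separate ridge-regression encoding models for assumed auditory and visual feature sets and compares their held-out, per-voxel prediction errors as a rough index of modality preference.

```python
# Minimal, hypothetical sketch of a prediction-error comparison between
# auditory-only and visual-only encoding models. All arrays are random
# placeholders; a real analysis would use fMRI responses and stimulus features.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trs, n_voxels = 1000, 200                   # fMRI time points and voxels (toy sizes)
audio_feats = rng.normal(size=(n_trs, 50))    # e.g. speech/semantic features per time point
visual_feats = rng.normal(size=(n_trs, 50))   # e.g. frame-level visual features
bold = rng.normal(size=(n_trs, n_voxels))     # placeholder brain responses

def voxelwise_error(features, responses, n_train=800):
    """Fit a ridge encoding model and return per-voxel squared error on held-out data."""
    model = Ridge(alpha=10.0).fit(features[:n_train], responses[:n_train])
    prediction = model.predict(features[n_train:])
    return np.mean((prediction - responses[n_train:]) ** 2, axis=0)

err_audio = voxelwise_error(audio_feats, bold)
err_visual = voxelwise_error(visual_feats, bold)

# Voxels whose responses are predicted much better from one modality's features
# hint at a modality preference; comparable errors hint at multimodal integration.
modality_index = err_audio - err_visual
print(modality_index[:10])
```

A real analysis would of course use cross-validated, time-lagged feature spaces and measured fMRI responses; the sketch only illustrates the per-voxel error comparison at the heart of such a framework.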

Marlene Behrmann joined the Department of Ophthalmology at the University of Pittsburgh School of Medicine, where she holds the John and Clelia Sheppard Chair, in 2022. She also holds the position of Emeritus Professor at Carnegie Mellon University. Dr. Behrmann’s research is concerned with the psychological and neural bases of visual processing, with specific attention to the mechanisms by which the signals from the eye are transformed into meaningful percepts by the brain. She adopts an interdisciplinary approach combining computational, neuropsychological and neuroimaging studies with adults and children in health and disease. Examples of her recent studies include investigations of the cortical visual system in paediatric patients following hemispherectomy, identifying mechanisms of plasticity and elucidating the potential for cortical reorganization; she has also studied visual cortical function in individuals with inherited retinal dystrophy. Dr. Behrmann was elected a member of the Society of Experimental Psychologists in 2008, was inducted into the National Academy of Sciences in 2015 and was elected to the American Academy of Arts and Sciences in 2019. Dr. Behrmann has received many awards, including the Presidential Early Career Award for Scientists and Engineers, the APA Distinguished Scientific Award for Early Career Contributions and the Fred Kavli Distinguished Career Contributions Award from the Cognitive Neuroscience Society.
Title: The development, hemispheric organization, and plasticity of high-level vision
Abstract:
Adults recognize complex visual inputs, such as faces and words, with remarkable speed, accuracy and ease, but a full understanding of these abilities is still lacking. Much prior research has favoured a binary separation of faces and words, with the right hemisphere specialized for the representation of faces and the left hemisphere specialized for the representation of words. Close scrutiny of the data, however, suggests a more graded and distributed hemispheric organization, as well as differing hemispheric profiles across individuals. Combining detailed behavioral data with structural and functional imaging data reveals how the distribution of function both within and between the two cerebral hemispheres emerges over the course of development, and a computational account of this mature organization is offered and tested. Provocatively, this mature profile is more malleable than previously thought, and cross-sectional and longitudinal data acquired from individuals with hemispherectomy reveal how a single hemisphere can subserve both visual classes. Together, the findings support a view of cortical visual organization (and perhaps the organization of other functions too) as plastic and dynamic, both within and between hemispheres.

ARNI was thrilled to co-host Dr. Danqi Chen, Assistant Professor of Computer Science at Princeton University and Associate Director of Princeton Language and Intelligence, for an insightful lecture as part of Columbia Engineering's Lecture Series in AI.
In her talk, “Training Language Models in Academia: Research Questions and Opportunities,” Dr. Chen shared her exciting research on advancing the open language model ecosystem. From developing smaller yet highly capable models like Sheared LLaMA to exploring long-context models like ProLong, she demonstrated how academic innovation can lead the way in making AI more efficient, scalable, and accessible.
Read more about Danqi Chen's talk: Link

Title: Do contemporary, machine-executable models (aka digital twins) of the primate ventral visual system unlock the ability to non-invasively, beneficially modulate high level brain states?
Abstract:
In this talk, I will first briefly review the story of how neuroscience, cognitive science and computer science (“AI”) converged to create specific, image-computable, deep neural network models intended to appropriately abstract, emulate and explain the mechanisms of primate core visual object identification and categorization behaviors. Based on a large body of primate neurophysiological and behavioral data, some of these network models are now the most accurate emulators of the primate ventral visual stream — they closely approximate both its internal neural mechanisms and how those mechanisms support the ability of humans and other primates to rapidly and accurately infer object identity, position, pose, etc. from the set of pixels (image) received during typical natural viewing.
Because these leading neuroscientific emulator models — aka “digital twins” — are fully observable and machine-executable, they offer predictive and potential application power that our field’s prior conceptual models did not. I will describe two recent examples from our team. First, the current leading digital twins predict that the brain’s high-level visual neurons (inferior temporal cortex, IT) should be highly susceptible to “adversarial attacks,” in which an agent (the adversary) aims to strongly disrupt the normal neural response (here, neural firing rate) to any given natural image via small-magnitude, targeted changes to that image. We verified this surprising prediction in monkey IT neurons. Second, we show how we can turn this result around and extend it: instead of making adversarial “attacks,” we propose using digital twin models to support non-invasive, beneficial brain modulation. Specifically, we show that we can use a digital twin to design spatial patterns of light energy that, when applied to the organism’s retina in the context of ongoing natural visual processing, result in precise modulation (i.e., rate bias) of the response pattern of a population of IT neurons (where any intended modulation pattern is chosen ahead of time by the scientist). Because the IT visual neural populations are known to directly modulate downstream neural circuits involved in mood and anxiety, we speculate that this could provide a new, non-invasive avenue toward potential future human clinical benefit.
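As a loose illustration of the abstract's central idea (and emphatically not the speaker's actual models or experiments), the hypothetical sketch below treats a small, untrained neural network as a stand-in "digital twin" of a visual neuron population and takes gradient steps on an input image to nudge the predicted population response toward a scientist-chosen target pattern; the same differentiable machinery underlies both adversarial perturbations and the proposed beneficial modulation.

```python
# Hypothetical sketch: use a differentiable "digital twin" of a neural
# population to design a small image perturbation that drives the predicted
# responses toward a chosen target pattern. The twin here is an untrained
# toy network, not an actual model of IT cortex.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for an image-computable model of a population of visual neurons.
twin = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(32),          # 32 "neurons" in the modeled population
)

image = torch.rand(1, 3, 64, 64)              # placeholder for a natural image
target = torch.rand(32)                       # desired population response pattern
delta = torch.zeros_like(image, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for _ in range(200):
    opt.zero_grad()
    response = twin(torch.clamp(image + delta, 0.0, 1.0)).squeeze(0)
    loss = torch.nn.functional.mse_loss(response, target)
    loss.backward()
    opt.step()
    with torch.no_grad():                     # keep the perturbation small in magnitude
        delta.clamp_(-0.05, 0.05)

print(f"final mismatch to target pattern: {loss.item():.4f}")
```

The only point of the sketch is that a fully observable, machine-executable model lets the desired response pattern be specified ahead of time and the stimulus change be computed by optimization, which is what distinguishes this approach from prior, non-executable conceptual models.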


