Speaker: Bryan Li – ARNI Frontier Models for Neuroscience and Behavior Working Group

November 5 @ 2:00 pm - 3:00 pm
Bio
Bryan Li is completing his PhD in NeuroAI at the University of Edinburgh, under the supervision of Arno Onken and Nathalie Rochefort. His main PhD project focuses on building deep learning-based encoding models of the visual cortex that accurately predict neural activity in response to arbitrary visual stimuli. Recently, he joined Dario Farina’s lab at Imperial College London as an Encode Fellow, working on neuromotor interfacing and decoding.
Title
Movie-trained transformer reveals novel response properties to dynamic stimuli in mouse visual cortex
Preprint: https://www.biorxiv.org/content/10.1101/2025.09.16.676524v2
Abstract
Understanding how the brain encodes complex, dynamic visual stimuli remains a fundamental challenge in neuroscience. Here, we introduce ViV1T, a transformer-based model trained on natural movies to predict neuronal responses in mouse primary visual cortex (V1). ViV1T outperformed state-of-the-art models in predicting responses to both natural and artificial dynamic stimuli, while requiring fewer parameters and less runtime. Despite being trained exclusively on natural movies and lacking explicit feedback mechanisms, ViV1T accurately captured core V1 properties, including orientation and direction selectivity as well as contextual modulation. ViV1T also revealed novel functional features. The model predicted a wider range of contextual responses to natural and model-generated surround stimuli than to traditional gratings, with novel model-generated dynamic stimuli eliciting maximal V1 responses. It also predicted that dynamic surrounds elicit stronger contextual modulation than static surrounds. Finally, the model identified a subpopulation of neurons whose surround modulation is contrast dependent, switching their response to surround stimuli from inhibition to excitation as contrast decreases. These predictions were validated through semi-closed-loop in vivo recordings. Overall, ViV1T establishes a powerful, data-driven framework for understanding how sensory brain areas process dynamic visual information across space and time.
Zoom link: available upon request from [email protected]

Details

  • Date: November 5
  • Time: 2:00 pm - 3:00 pm

Venue

  • Zuckerman Institute – L5-084
  • 3227 Broadway
    New York, NY, United States