Internal Working Group Speakers
Frontier Models for Neuroscience and Behavior

Date: November 5, 2025
Bio
Bryan Li is completing his PhD in NeuroAI at the University of Edinburgh, under the supervision of Arno Onken and Nathalie Rochefort. His main PhD project focuses on building deep learning-based encoding models of the visual cortex that accurately predict neural activity in response to arbitrary visual stimuli. Recently, he joined Dario Farina’s lab at Imperial College London as an Encode Fellow, working on neuromotor interfacing and decoding.
Title: Movie-trained transformer reveals novel response properties to dynamic stimuli in mouse visual cortex (https://www.biorxiv.org/content/10.1101/2025.09.16.676524v2)
Abstract: Understanding how the brain encodes complex, dynamic visual stimuli remains a fundamental challenge in neuroscience. Here, we introduce ViV1T, a transformer-based model trained on natural movies to predict neuronal responses in mouse primary visual cortex (V1). ViV1T outperformed state-of-the-art models in predicting responses to both natural and artificial dynamic stimuli, while requiring fewer parameters and shorter runtimes. Although trained exclusively on natural movies and lacking explicit feedback mechanisms, ViV1T accurately captured core V1 properties, including orientation and direction selectivity as well as contextual modulation. ViV1T also revealed novel functional features. The model predicted a wider range of contextual responses when using natural and model-generated surround stimuli compared to traditional gratings, with novel model-generated dynamic stimuli eliciting maximal V1 responses. ViV1T further predicted that dynamic surrounds elicit stronger contextual modulation than static surrounds. Finally, the model identified a subpopulation of neurons that exhibit contrast-dependent surround modulation, switching their response to surround stimuli from inhibition to excitation as contrast decreases. These predictions were validated through semi-closed-loop in vivo recordings. Overall, ViV1T establishes a powerful, data-driven framework for understanding how sensory brain areas process dynamic visual information across space and time.
Multi-resource-cost Optimization of Neural Network Models

Date: December 10, 2025
Time: 11:00am
Location: Zuckerman Institute L5-116
Title: Economics of temporal evidence integration
Abstract: The temporal integration of sensory information is an important aspect of many human decision tasks. I will present results of ongoing research in my laboratory aimed at understanding the dynamic processes underlying evidence integration. In particular, I will discuss a novel resource-rational model that treats the representation, integration, and maintenance of sensory evidence as actively controlled, performance-effort trade-off mechanisms. Validated against data from various behavioral experiments, the model provides not only a normative explanation for observed non-linear dynamics in evidence integration but also a parsimonious account of individual tendencies toward recency or primacy behavior. As the work is ongoing and unpublished, I look forward to an engaged discussion with the audience.
Zoom Link: Upon request @ [email protected]