Internal Working Group Speakers

Frontier Models for Neuroscience and Behavior

Hubert Banville

Date: May 11, 2026
Location: Virtual
Time: 3:00pm

Title: A foundation model of vision, audition, and language for in-silico neuroscience

Abstract: Cognitive neuroscience is fragmented into specialized models, each tailored to a specific experimental paradigm, which has prevented a unified model of cognition in the human brain. Here, we introduce TRIBE v2, a tri-modal (video, audio, and language) foundation model capable of predicting human brain activity under a variety of naturalistic and experimental conditions. Leveraging a unified dataset of over 1,000 hours of fMRI across 720 subjects, we demonstrate that our model accurately predicts high-resolution brain responses to novel stimuli, tasks, and subjects, surpassing traditional linear encoding models with several-fold improvements in accuracy. Critically, TRIBE v2 enables in silico experimentation: tested on seminal visual and neuro-linguistic paradigms, it recovers a variety of results established by decades of empirical research. Finally, by extracting interpretable latent features, TRIBE v2 reveals the fine-grained topography of multisensory integration. These results establish artificial intelligence as a unifying framework for exploring the functional organization of the human brain.

We will be hosting Hubert Banville from Meta, who will discuss the team's latest TRIBE fMRI foundation model.

Zoom Link: Upon request @ [email protected]
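As background for the comparison the abstract draws, here is a minimal, hypothetical sketch of the kind of "traditional linear encoding model" that serves as the baseline: ridge regression from stimulus features to voxel responses, scored by voxel-wise correlation on held-out time points. All names, shapes, and data below are invented for illustration; this is not TRIBE's actual pipeline.

    import numpy as np
    from sklearn.linear_model import RidgeCV

    # Hypothetical shapes: X holds stimulus features (e.g., frozen video/audio/
    # language embeddings) at T fMRI time points; Y holds voxel responses.
    rng = np.random.default_rng(0)
    T, F, V = 1000, 512, 2000          # time points, features, voxels (made up)
    X = rng.standard_normal((T, F))    # stand-in for extracted stimulus features
    Y = rng.standard_normal((T, V))    # stand-in for preprocessed BOLD responses

    # Classic encoding-model recipe: one ridge regression per voxel, with the
    # regularization strength chosen by cross-validation.
    model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X[:800], Y[:800])
    pred = model.predict(X[800:])

    # Voxel-wise encoding accuracy: Pearson r between predicted and held-out BOLD.
    r = [np.corrcoef(pred[:, v], Y[800:, v])[0, 1] for v in range(V)]
    print(f"median voxel-wise r: {np.median(r):.3f}")

The "several-fold improvements" claimed in the abstract are measured against voxel-wise accuracies of exactly this kind of linear baseline.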


Multi-resource-cost Optimization of Neural Network Models

Hadi Vafaii

Date: April 7, 2026
Location: ZI L3-079
Time: 1:00pm

Title: Metabolic cost of information processing in Poisson variational autoencoders

Abstract: Computation in biological systems is fundamentally energy-constrained, yet standard theories of computation treat energy as freely available. Here, we argue that variational free energy minimization under a Poisson assumption offers a principled path toward an energy-aware theory of computation. Our key observation is that the Kullback-Leibler (KL) divergence term in the Poisson free energy objective becomes proportional to the prior firing rates of model neurons, yielding an emergent metabolic cost term that penalizes high baseline activity. This structure couples an abstract information-theoretic quantity — the coding rate — to a concrete biophysical variable — the firing rate — which enables a trade-off between coding fidelity and energy expenditure. Such a coupling arises naturally in the Poisson variational autoencoder (P-VAE; a brain-inspired generative model that encodes inputs as discrete spike counts and recovers a spiking form of sparse coding as a special case) but is absent from standard Gaussian VAEs. To demonstrate that this metabolic cost structure is unique to the Poisson formulation, we compare the P-VAE against GReLU-VAE, a Gaussian VAE with ReLU rectification applied to latent samples, which controls for the non-negativity constraint. Across a systematic sweep of the KL term weighting coefficient β and latent dimensionality, we find that increasing β monotonically increases sparsity and reduces average spiking activity in the P-VAE. In contrast, GReLU-VAE representations remain unchanged, confirming that the effect is specific to Poisson statistics rather than a byproduct of non-negative representations. These results establish Poisson variational inference as a promising foundation for a resource-constrained theory of computation.
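For readers who want the central step spelled out, below is a minimal derivation sketch using the standard closed form of the KL divergence between two Poisson distributions. The gain parameterization \lambda = \delta r (posterior rate as a multiplicative modulation of the prior rate r) is our illustrative notation, not necessarily the speaker's.

    % KL divergence between a Poisson posterior q = \mathrm{Pois}(\lambda)
    % and a Poisson prior p = \mathrm{Pois}(r) (standard closed form):
    \mathrm{KL}\left[\, q \,\|\, p \,\right] = \lambda \log\frac{\lambda}{r} - \lambda + r
    % Writing the posterior rate as a gain on the prior, \lambda = \delta r:
    \mathrm{KL} = r \left( \delta \log \delta - \delta + 1 \right) \propto r

Since r is the prior (baseline) firing rate, the KL penalty grows linearly with baseline activity, which is the emergent metabolic cost term the abstract describes.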

Zoom Link: Upon request @ [email protected]

Language and Vision

Sara Gong

Date: April 27, 2026
Location: Virtual
Time: 3:00pm

Title and Abstract: TBD

Zoom Link: Upon request @ [email protected]