BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARNI
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251112T160000
DTEND;TZID=America/New_York:20251112T170000
DTSTAMP:20260403T165948Z
CREATED:20251112T151131Z
LAST-MODIFIED:20251112T151131Z
UID:2032-1762963200-1762966800@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group
DESCRIPTION:Next Meeting Info\n\n\nDate: Wednesday\, Nov 12\nTime: 4pm-5pm\nRoom: CEPSR 620\nZoom: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-continual-learning-working-group-3/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251107T160000
DTEND;TZID=America/New_York:20251107T170000
DTSTAMP:20260403T165948Z
CREATED:20251103T193100Z
LAST-MODIFIED:20251103T193100Z
UID:2024-1762531200-1762534800@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Continuation from prior meetings about benchmarks and competition proposals. \nZoom Link: upon request @ ARNI@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-5/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251105T140000
DTEND;TZID=America/New_York:20251105T150000
DTSTAMP:20260403T165948Z
CREATED:20251022T211425Z
LAST-MODIFIED:20251028T143328Z
UID:2019-1762351200-1762354800@arni-institute.org
SUMMARY:Speaker: Bryan Li - ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Bio\nBryan Li is completing his PhD in NeuroAI at the University of Edinburgh\, under the supervision of Arno Onken and Nathalie Rochefort. His main PhD project focuses on building deep learning-based encoding models of the visual cortex that accurately predict neural activity in response to arbitrary visual stimuli. Recently\, he joined Dario Farina’s lab at Imperial College London as an Encode Fellow\, working on neuromotor interfacing and decoding.\n\nTitle (https://www.biorxiv.org/content/10.1101/2025.09.16.676524v2)\nMovie-trained transformer reveals novel response properties to dynamic stimuli in mouse visual cortex\n\nAbstract\nUnderstanding how the brain encodes complex\, dynamic visual stimuli remains a fundamental challenge in neuroscience. Here\, we introduce ViV1T\, a transformer-based model trained on natural movies to predict neuronal responses in mouse primary visual cortex (V1). ViV1T outperformed state-of-the-art models in predicting responses to both natural and artificial dynamic stimuli\, while requiring fewer parameters and reducing runtime. Despite being trained exclusively on natural movies\, ViV1T accurately captured core V1 properties\, including orientation and direction selectivity as well as contextual modulation\, despite lacking explicit feedback mechanisms. ViV1T also revealed novel functional features. The model predicted a wider range of contextual responses when using natural and model-generated surround stimuli compared to traditional gratings\, with novel model-generated dynamic stimuli eliciting maximal V1 responses. ViV1T also predicted that dynamic surrounds elicited stronger contextual modulation than static surrounds. Finally\, the model identified a subpopulation of neurons that exhibit contrast-dependent surround modulation\, switching their response to surround stimuli from inhibition to excitation when contrast decreases. These predictions were validated through semi-closed-loop in vivo recordings. Overall\, ViV1T establishes a powerful\, data-driven framework for understanding how brain sensory areas process dynamic visual information across space and time.\n\nZoom link: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-frontier-models-for-neuroscience-and-behavior-working-group-2/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251105T113000
DTEND;TZID=America/New_York:20251105T123000
DTSTAMP:20260403T165948Z
CREATED:20251105T152843Z
LAST-MODIFIED:20251105T152843Z
UID:2028-1762342200-1762345800@arni-institute.org
SUMMARY:CTN: Yael Niv
DESCRIPTION:Seminar Time: 11:30am\nDate: Fri 11/7/25\nSeminar Location: JLG\, L5-084\nHost: Weijia Zhang\n\n\n\nTitle: Latent causes\, prediction errors\, and the organization of memory\n\nAbstract: No two events are alike. But still\, we learn\, which means that we implicitly decide what events are similar enough that experience with one can inform us about what to do in another. We have suggested that this relies on parsing of incoming information into “clusters” according to inferred hidden (latent) causes. Moreover\, we have suggested that unexpected information (that is\, a prediction error) is key to this separation into clusters. In this talk\, I will demonstrate these ideas through behavioral experiments showing evidence for clustering and illustrate the effects of prediction errors on the organization of memory. I will then tie the different findings together into a hypothesis about how information about events is organized in our brain.
URL:https://arni-institute.org/event/ctn-yael-niv/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251029T150000
DTEND;TZID=America/New_York:20251029T163000
DTSTAMP:20260403T165948Z
CREATED:20251008T214532Z
LAST-MODIFIED:20251015T155911Z
UID:2010-1761750000-1761755400@arni-institute.org
SUMMARY:ARNI Distinguished Seminar Series: Leila Wehbe
DESCRIPTION:Bio: Leila Wehbe is an associate professor in the Machine Learning Department and the Neuroscience Institute at Carnegie Mellon University. Her work is at the interface of cognitive neuroscience and computer science. It combines naturalistic functional imaging with machine learning both to improve our understanding of the brain and to find insight to build better artificial systems. She is the recipient of an NSF CAREER award\, a Google faculty research award and an NIH CRCNS R01. Previously\, she was a postdoctoral researcher at UC Berkeley and obtained her PhD from Carnegie Mellon University. \nTitle: Model prediction error reveals separate mechanisms for integrating multi-modal information in the human cortex \nAbstract: Language comprehension engages much of the human cortex\, extending beyond the canonical language system. Yet in everyday life\, language unfolds alongside other modalities\, such as vision\, that recruit these same distributed areas. Because language is often studied in isolation\, we still know little about how the brain coordinates and integrates multimodal representations. In this talk\, we use fMRI data from participants viewing 37 hours of TV series and movies to model the interaction of auditory and visual input. Using encoding models that predict brain activity from each stream\, we introduce a framework based on prediction error that reveals how individual brain regions combine multimodal information.
URL:https://arni-institute.org/event/arni-distinguished-seminar-series-leila-wehbe/
LOCATION:Zuckerman Institute- Kavli Auditorium 9th Fl\, 3227 Broadway\, NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251027T150000
DTEND;TZID=America/New_York:20251027T160000
DTSTAMP:20260403T165948Z
CREATED:20251022T210922Z
LAST-MODIFIED:20251022T210922Z
UID:2018-1761577200-1761580800@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group
DESCRIPTION:Next Meeting Info\n\n\nDate: Monday\, October 27\nTime: 3-4pm\nRoom: CEPSR 620\n\nZoom: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-continual-learning-working-group-2/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251024T140000
DTEND;TZID=America/New_York:20251024T150000
DTSTAMP:20260403T165948Z
CREATED:20251022T132648Z
LAST-MODIFIED:20251022T132648Z
UID:2016-1761314400-1761318000@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Continuation from prior meetings about benchmarks and competition proposals. \nZoom Link: upon request @ ARNI@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-4/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251024T113000
DTEND;TZID=America/New_York:20251024T130000
DTSTAMP:20260403T165948Z
CREATED:20250917T182936Z
LAST-MODIFIED:20251021T182719Z
UID:1992-1761305400-1761310800@arni-institute.org
SUMMARY:CTN: Anna Schapiro
DESCRIPTION:Title: Learning representations of specifics and generalities over time\n\nAbstract: There is a fundamental tension between storing discrete traces of individual experiences\, which allows recall of particular moments in our past without interference\, and extracting regularities across these experiences\, which supports generalization and prediction in similar situations in the future. One influential proposal for how the brain resolves this tension is that it separates the processes anatomically into Complementary Learning Systems\, with the hippocampus rapidly encoding individual episodes and the neocortex slowly extracting regularities over days\, months\, and years. But this does not explain our ability to learn and generalize from new regularities in our environment quickly\, often within minutes. We have put forward a neural network model of the hippocampus that suggests that the hippocampus itself may contain complementary learning systems\, with one pathway specializing in the rapid learning of regularities and a separate pathway handling the region’s classic episodic memory functions. This proposal has broad implications for how we rapidly learn novel information of specific and generalized types\, which we test across statistical learning\, inference\, and category learning paradigms. We also explore how this system interacts with slower-learning neocortical memory systems\, with empirical and modeling investigations into how hippocampal replay shapes neocortical representations during sleep. Together\, the work helps us understand how structured information in our environment is initially encoded and how it then transforms over time.\nZoom: Available upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/ctn-anna-schapiro/
LOCATION:Zuckerman Institute- Kavli Auditorium 9th Fl\, 3227 Broadway\, NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251021T130000
DTEND;TZID=America/New_York:20251021T150000
DTSTAMP:20260403T165948Z
CREATED:20251008T213444Z
LAST-MODIFIED:20251008T221401Z
UID:2007-1761051600-1761058800@arni-institute.org
SUMMARY:Speaker: Jascha Achterberg – ARNI WG Multi-resource-cost optimization of neural network models
DESCRIPTION:Title:\nBuilding the brain’s efficient system-level architecture: optimisations across space\, time\, and multiple regions \nAbstract:\nThe computations a brain can perform are fundamentally constrained by physical realities: energetic resources are limited\, and time is precious. To understand why the brain works the way it does\, we must understand its function in the context of these constraints. Prior modeling work has successfully demonstrated how spatial energetic constraints drive structure-function co-optimization\, giving rise to many of the architectural features we observe across areas of neuroscience. By incorporating such physical constraints\, we can build complex systems-level models that are meaningfully constrained by physically measurable factors rather than arbitrary design choices. \nIn this talk\, I will expand on these spatial frameworks by introducing new work on temporal processing and signal precision constraints in neural networks. I will demonstrate how different optimization strategies within individual regions can be combined in heterogeneous multi-region models\, revealing how the brain trades off resource use across tasks and situations. Finally\, I will show how space and time interact in surprising ways to achieve efficient computation — principles that apply not only to the brain but to any large-scale distributed computing system. Together\, these advances bring us closer to understanding the general principles that enable sophisticated intelligence to emerge from physically and energetically constrained computing systems.
URL:https://arni-institute.org/event/speaker-jascha-achterberg-arni-wg-multi-resource-cost-optimization-of-neural-network-models/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251017T113000
DTEND;TZID=America/New_York:20251017T130000
DTSTAMP:20260403T165948Z
CREATED:20250923T155504Z
LAST-MODIFIED:20250923T155517Z
UID:2001-1760700600-1760706000@arni-institute.org
SUMMARY:CTN: Ilana Witten
DESCRIPTION:Title and Abstract: TBD
URL:https://arni-institute.org/event/ctn-illana-witten/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251010T113000
DTEND;TZID=America/New_York:20251010T130000
DTSTAMP:20260403T165948Z
CREATED:20250923T155316Z
LAST-MODIFIED:20251007T171403Z
UID:1999-1760095800-1760101200@arni-institute.org
SUMMARY:CTN: Maryam Shanechi
DESCRIPTION:Title: Dynamical models of neural-behavioral data with application to AI-driven neurotechnology \nAbstract: A major challenge in neuroAI is to model\, decode\, and modulate the activity of large populations of neurons that underlie our brain’s functions and dysfunctions. Toward addressing this challenge\, I will present our work on novel dynamical models of neural-behavioral data and applying them to enable a new generation of brain-computer interfaces for disorders such as major depression. First\, I will present a novel dynamical modeling framework that jointly describes neural-behavioral data\, dissociates behaviorally relevant neural dynamics\, and learns them more accurately. Then\, I will show how we can also predict the effect of inputs\, such as sensory stimuli or neurostimulation\, to dissociate intrinsic and input-driven neural dynamics. I further present how these models can incorporate multiple spatiotemporal scales of brain activity simultaneously\, from spikes to LFP to brain-wide neuroimaging. Finally\, I will discuss the challenge of developing AI algorithms for neurotechnology. I will present a framework that combines neural networks with stochastic state-space models to enable accurate yet flexible inference of brain states causally\, non-causally\, and even with missing neural samples. The above dynamical models can enable next-generation AI-driven neurotechnologies that restore lost motor and emotional function in diverse brain disorders such as paralysis and major depression.
URL:https://arni-institute.org/event/ctn-maryam-shanechi/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251008T140000
DTEND;TZID=America/New_York:20251008T150000
DTSTAMP:20260403T165948Z
CREATED:20250917T150021Z
LAST-MODIFIED:20251006T170343Z
UID:1991-1759932000-1759935600@arni-institute.org
SUMMARY:ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Title:\nOmniMouse: Scaling properties of multi-modal\, multi-task Brain Models on 150B Neural Tokens \nAbstract:\nScaling data and artificial neural networks has transformed AI\, driving breakthroughs in language and vision. Whether similar principles apply to modeling brain activity remains unclear. Here we leveraged a dataset of 3.3 million neurons from the visual cortex of 78 mice across 323 sessions\, totaling more than 150 billion neural tokens recorded during natural movies\, images and parametric stimuli\, and behavior. We train multi-modal\, multi-task transformer models (1M–300M parameters) that support three regimes flexibly at test time: neural prediction (predicting neuronal responses from sensory input and behavior)\, behavioral decoding (predicting behavior from neural activity)\, neural forecasting (predicting future activity from current neural dynamics)\, or any combination of the three. We find that performance scales reliably with more data\, but gains from increasing model size saturate — suggesting that current brain models are limited by data rather than compute. This inverts the standard AI scaling story: in language and computer vision\, massive datasets make parameter scaling the primary driver of progress\, whereas in brain modeling — even in the mouse visual cortex\, a relatively simple and low-resolution system — models remain data-limited despite vast recordings. These findings highlight the need for richer stimuli\, tasks\, and larger-scale recordings to build brain foundation models. The observation of systematic scaling raises the possibility of phase transitions in neural modeling\, where larger and richer datasets might unlock qualitatively new capabilities\, paralleling the emergent properties seen in large language models. \nZoom: Upon request @ arni@columbia.edu \n 
URL:https://arni-institute.org/event/arni-frontier-models-for-neuroscience-and-behavior-working-group/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251003T140000
DTEND;TZID=America/New_York:20251003T150000
DTSTAMP:20260403T165948Z
CREATED:20250922T183104Z
LAST-MODIFIED:20250922T183104Z
UID:1994-1759500000-1759503600@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:First Biological Learning working group session of fall 2025! Pingsheng Li\, PhD student with Blake Richards at MILA\, will be presenting Log-Normal Multiplicative Dynamics for Stable Low-Precision Training of Large Networks.\n \nBrief discussion of how the group can collaborate on a project: define a benchmark with some metrics we care about\, then break into pods that will each develop methods toward solving the benchmark tasks. \nGoogle Meet: upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-3/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251003T113000
DTEND;TZID=America/New_York:20251003T130000
DTSTAMP:20260403T165948Z
CREATED:20250923T155204Z
LAST-MODIFIED:20250923T155204Z
UID:1997-1759491000-1759496400@arni-institute.org
SUMMARY:CTN: Reza Shadmehr
DESCRIPTION:Title and Abstract: TBD
URL:https://arni-institute.org/event/ctn-reza-shadmehr/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250929T150000
DTEND;TZID=America/New_York:20250929T160000
DTSTAMP:20260403T165948Z
CREATED:20250917T145723Z
LAST-MODIFIED:20250922T184156Z
UID:1990-1759158000-1759161600@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group Project
DESCRIPTION:Next Monday\, September 29\, we will continue our fall semester program with a group workshop on the topic of neural memory models. First\, we will have a presentation from group member Max Bennett about our ongoing work on generalized neural memory systems that perform flexible updates based on learning instructions specified in natural language. After the presentation\, we will spend some time in open discussion of memory models\, and hopefully discuss potential (interdisciplinary) projects for the group.\n\nNext Meeting Info\n\n\nDate: Monday\, September 29\nTime: 3-4pm\nRoom: CEPSR 620\nZoom: upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-continual-learning-working-group-project-12/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250926T113000
DTEND;TZID=America/New_York:20250926T130000
DTSTAMP:20260403T165948Z
CREATED:20250902T200421Z
LAST-MODIFIED:20250923T155034Z
UID:1971-1758886200-1758891600@arni-institute.org
SUMMARY:CTN: Ann Kennedy
DESCRIPTION:Title: Neural computations underlying the regulation of motivated behavior \nAbstract: As we interact with the world around us\, we experience a constant stream of sensory inputs\, and must generate a constant stream of behavioral actions. What makes brains more than simple input-output machines is their capacity to integrate sensory inputs with an animal’s own internal motivational state to produce behavior that is flexible and adaptive. In this talk\, I will present three recent stories from the lab exploring the dynamics and modulation of motivational states. First\, working with neural recordings from a hypothalamic nucleus involved in regulation of aggression\, I show how we relate the dynamical properties of neural populations to escalation of an aggressive motivational state. Next\, using methods from control theory and reinforcement learning\, I show that different sites of modulation within a neural circuit produce different resulting effects on behavior and neural activity. Finally\, I will show how theoretical models can reveal unexpected effects of neuromodulation on the dynamic regimes of recurrent neural networks\, illuminating the ways in which the brain might use small molecules to reshape its activity and thus modify behavior.
URL:https://arni-institute.org/event/ctn-ann-kennedy/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250922T150000
DTEND;TZID=America/New_York:20250922T160000
DTSTAMP:20260403T165948Z
CREATED:20250904T153706Z
LAST-MODIFIED:20250904T204820Z
UID:1977-1758553200-1758556800@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group Project
DESCRIPTION:Zoom Link: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-continual-learning-working-group-project-10/
LOCATION:CSB 480\, Mudd Building\, 500 W 120th Street
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250919T113000
DTEND;TZID=America/New_York:20250919T130000
DTSTAMP:20260403T165948Z
CREATED:20250902T200250Z
LAST-MODIFIED:20250917T183026Z
UID:1969-1758281400-1758286800@arni-institute.org
SUMMARY:CTN: Dani Bassett
DESCRIPTION:Title and Abstract: TBD \nZoom:\nMeeting ID: 993 3345 6502\nPasscode: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/ctn-dani-bassett/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250915T150000
DTEND;TZID=America/New_York:20250915T160000
DTSTAMP:20260403T165948Z
CREATED:20250910T204507Z
LAST-MODIFIED:20250910T204507Z
UID:1988-1757948400-1757952000@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group Project
DESCRIPTION:Monday (9/15) the Continual Learning Group will have a presentation from group member Yunfan Zhang.  Yunfan will be sharing his ongoing work on developing a continual learning benchmark based on deriving up-to-date facts from news over time.\n\n\nDate: Monday\, September 15\nTime: 3-4pm\nRoom: CSB 480\nZoom: upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-continual-learning-working-group-project-11/
LOCATION:CSB 480\, Mudd Building\, 500 W 120th Street
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250912T113000
DTEND;TZID=America/New_York:20250912T130000
DTSTAMP:20260403T165948Z
CREATED:20250902T195737Z
LAST-MODIFIED:20250909T154600Z
UID:1963-1757676600-1757682000@arni-institute.org
SUMMARY:CTN: Naomi Leonard
DESCRIPTION:Title:\nFast and Flexible Group Decision-Making \nAbstract:\nA wide range of animals live and move in groups. Many animals do better in groups than alone when\, for example\, foraging for food\, migrating\, and avoiding predators. A key to group success is social interaction. Less well understood is how a group\, with no centralized control\, is capable of the fast and flexible decision-making required to carry out its tasks in an environment with uncertainty\, variability\, and rapid change. I will introduce an approach to modeling group decision-making dynamics that draws on biophysical models from computational neuroscience. Analysis of our model provides new insights into fast and flexible decision-making: how indecision can be broken as fast as it becomes costly\, how sensitivity to stimulus can be tuned as context and environment change\, how social heterogeneity can enhance stability and flexibility\, and how excitability (spiking) provides further agility and frugality. I will discuss the significance of these results for the study and design of collective intelligence in nature and technology.
URL:https://arni-institute.org/event/ctn-naomi-leonard/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250908T150000
DTEND;TZID=America/New_York:20250908T160000
DTSTAMP:20260403T165948Z
CREATED:20250904T153618Z
LAST-MODIFIED:20250904T204910Z
UID:1976-1757343600-1757347200@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group Project
DESCRIPTION:Zoom link: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-continual-learning-working-group-project-9/
LOCATION:CSB 480\, Mudd Building\, 500 W 120th Street
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250905T113000
DTEND;TZID=America/New_York:20250905T130000
DTSTAMP:20260403T165948Z
CREATED:20250902T145542Z
LAST-MODIFIED:20250902T145542Z
UID:1960-1757071800-1757077200@arni-institute.org
SUMMARY:CTN: Christine Constantinople
DESCRIPTION:Title: Neural circuit mechanisms of value-based decision-making \nAbstract: \nThe value of the environment determines animals’ motivational states and sets expectations for error-based learning. But how are values computed? We developed a novel temporal wagering task with latent structure\, and used high-throughput behavioral training to obtain well-powered behavioral datasets from hundreds of rats that learned the structure of the task. We found that rats use distinct value computations for sequential decisions within single trials. Moreover\, these sequential decisions are supported by different brain regions\, suggesting that distinct neural circuits support specific types of value computations. I will discuss our ongoing efforts to delineate how distributed circuits in the orbitofrontal cortex and striatum coordinate complex value-based decisions.
URL:https://arni-institute.org/event/ctn-christine-constantinople/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250827T140000
DTEND;TZID=America/New_York:20250827T160000
DTSTAMP:20260403T165948Z
CREATED:20250826T144924Z
LAST-MODIFIED:20250826T144924Z
UID:1952-1756303200-1756310400@arni-institute.org
SUMMARY:Speakers: Vinam Arora and Ji Xia – ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Title and Abstracts:  \n1st Speaker: Vinam Arora\, UPenn\nTitle: Know Thyself by Knowing Others: Learning Neuron Identity from Population Context\nAbstract: Identifying the functional identity of individual neurons is essential for interpreting circuit dynamics\, yet remains a major challenge in large-scale in vivo recordings where anatomical and molecular labels are often unavailable. Here we introduce NuCLR\, a self-supervised framework that learns context-aware representations of neuron identity by modeling each neuron’s role within the broader population. NuCLR employs a spatiotemporal transformer that captures both within-neuron dynamics and across-neuron interactions\, and is trained with a sample-wise contrastive objective that encourages stable\, discriminative embeddings across time. Across multiple open-access datasets\, NuCLR outperforms prior methods in both cell type and brain region classification. It enables zero-shot generalization to entirely new populations—without retraining or access to stimulus labels—offering a scalable approach for real-time\, functional decoding of neuron identity across diverse experimental settings. \n2nd Speaker: Ji Xia\, Columbia\nTitle: Inpainting the neural picture: Inferring Unrecorded Brain Area Dynamics from Multi-Animal Datasets.\nAbstract: Understanding how the brain drives memory-guided movements requires recording neural activity from the motor cortex and interconnected subcortical areas. Neuropixels probes now allow simultaneous recordings from subsets of these areas\, but no single session captures all areas of interest\, and different neurons are sampled from each area across sessions. This poses a key challenge: how to integrate neural data across sessions to reconstruct the complete multi-area picture. We address this with a transformer-based autoencoder that aligns neural activity into a shared latent space across sessions and animals\, separately for each brain area\, including those not recorded in a given session. This approach enables single-trial analysis of multi-area neural dynamics from all areas of interest. I am now working on improving this method\, and will discuss both its present challenges and promising directions for future work. \nZoom: Upon request @ arni@columbia.edu.
URL:https://arni-institute.org/event/speakers-vinam-arora-and-ji-xia-arni-frontier-models-for-neuroscience-and-behavior-working-group/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250807T143000
DTEND;TZID=America/New_York:20250807T163000
DTSTAMP:20260403T165948Z
CREATED:20250703T154507Z
LAST-MODIFIED:20250729T153708Z
UID:1845-1754577000-1754584200@arni-institute.org
SUMMARY:Speaker: Kwabena Boahen – ARNI WG Multi-resource-cost optimization of neural network models
DESCRIPTION:Title: From 2D Chips to 3D Brains \nAbstract: \nArtificial intelligence (AI) realizes a synaptocentric conception of the learning brain with dot-products and advances by performing twice as many multiplications every two months. But the semiconductor industry tiles twice as many multipliers on a chip only every two years. Moreover\, the returns from tiling these multipliers ever more densely now diminish\, because signals must travel relatively farther and farther\, expending energy and exhausting heat that scales quadratically. As a result\, communication is now much more expensive than computation. Much more so than in biological brains\, where energy-use scales linearly rather than quadratically with neuron count. That allows an 86-billion-neuron human brain to use as little power as a single lightbulb (25W) rather than as much as the entire US (3TW). Hence\, rescaling a chip’s energy-use from quadratic to linear is critical to scale AI sustainably from a trillion (10^12) parameters (mouse scale) today to a quadrillion (10^15) parameters (human scale) in the next five years. But this would require communication cost to be reduced radically. Towards that end\, I will present a recent re-conception of the brain’s fundamental unit of computation that sparsifies signals by moving away from synaptocentric learning with dot-products to dendrocentric learning with sequence detectors. \nZoom: Request @ ARNI@columbia.edu
URL:https://arni-institute.org/event/speaker-arni-wg-multi-resource-cost-optimization-of-neural-network-models/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250730T140000
DTEND;TZID=America/New_York:20250730T160000
DTSTAMP:20260403T165948
CREATED:20250703T150730Z
LAST-MODIFIED:20250723T174753Z
UID:1842-1753884000-1753891200@arni-institute.org
SUMMARY:Speaker: Memming Park – ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Title: Meta-dynamical state space modeling for integrative neural data analysis \nAbstract:\nUncovering the organizing principles of neural systems requires integrating information across diverse datasets—each alone offering a limited view and signal-to-noise ratio\, but together revealing coherent dynamical structures. We present a meta-dynamical state-space modeling framework that learns a shared solution space of neural dynamics from heterogeneous recordings across sessions\, animals\, and tasks. By capturing cross-dataset similarity and variability on a low-dimensional manifold that spans a space of dynamical systems\, our approach enables few-shot inference\, rapid adaptation to new recordings\, and discovery of latent dynamical motifs that underlie behavior. We demonstrate its utility in modeling motor cortex activity\, revealing dynamics that generalize across individuals and track the change in dynamics during learning. We argue that for understanding neural computation and real-time neuroscience applications\, our approach is well-suited as a foundation model for integrative neuroscience. \nZoom: Request @ arni@columbia.edu
URL:https://arni-institute.org/event/speaker-memming-park-arni-frontier-models-for-neuroscience-and-behavior-working-group/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250715T180000
DTEND;TZID=America/New_York:20250715T193000
DTSTAMP:20260403T165948
CREATED:20250709T154233Z
LAST-MODIFIED:20250709T154233Z
UID:1871-1752602400-1752607800@arni-institute.org
SUMMARY:AI and Neuroscience/Cognitive Science Activities Brainstorming
DESCRIPTION:ARNI will host an informal brainstorming session on July 15th (ZI Education Lab) focused on developing AI and neuroscience/cognitive science activities for K–12 students. The goal is to create engaging ways to help young learners better understand the brain and artificial intelligence. Trainees are encouraged to attend—if you’re interested in making an impact on youth education\, this is a great opportunity to get involved. Join if you are free! There will be free pizza! \nThis is the registration form!
URL:https://arni-institute.org/event/ai-and-neuroscience-cognitive-science-activities-brainstorming-2/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250709T110000
DTEND;TZID=America/New_York:20250709T123000
DTSTAMP:20260403T165948
CREATED:20250709T153527Z
LAST-MODIFIED:20250709T153527Z
UID:1868-1752058800-1752064200@arni-institute.org
SUMMARY:Biological Learning Working Group Meeting
DESCRIPTION:Continuation of the prior meeting. \nZoom: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/biological-learning-working-group-meeting/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250627T113000
DTEND;TZID=America/New_York:20250627T133000
DTSTAMP:20260403T165948
CREATED:20250407T145614Z
LAST-MODIFIED:20250611T182613Z
UID:1629-1751023800-1751031000@arni-institute.org
SUMMARY:CTN: Blake Richards
DESCRIPTION:Title: Brain-like learning with exponentiated gradients \nAbstract: Computational neuroscience relies on gradient descent (GD) for training artificial neural network (ANN) models of the brain. The advantage of GD is that it is effective at learning difficult tasks. However\, it produces ANNs that are a poor phenomenological fit to biology\, making them less relevant as models of the brain. Specifically\, it violates Dale’s law\, by allowing synapses to change from excitatory to inhibitory\, and leads to synaptic weights that are not log-normally distributed\, contradicting experimental data. Here\, starting from first principles of optimisation theory\, we present an alternative learning algorithm\, exponentiated gradient (EG)\, that respects Dale’s Law and produces log-normal weights\, without losing the power of learning with gradients. We also show that in biologically relevant settings EG outperforms GD\, including learning from sparsely relevant signals and dealing with synaptic pruning. Altogether\, our results show that EG is a superior learning algorithm for modelling the brain with ANNs. \nZoom Link: By Request
URL:https://arni-institute.org/event/ctn-blake-richards/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250625T140000
DTEND;TZID=America/New_York:20250625T153000
DTSTAMP:20260403T165948
CREATED:20250604T173448Z
LAST-MODIFIED:20250703T150230Z
UID:1787-1750860000-1750865400@arni-institute.org
SUMMARY:Speaker: Dr. Guillaume Lajoie - ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Title: POSSM: Generalizable\, real-time neural decoding with hybrid state-space models \nAbstract: \nReal-time decoding of neural spiking data is a core aspect of neurotechnology applications such as brain-computer interfaces\, where models are subject to strict latency constraints. Traditional methods\, including simple recurrent neural networks\, are fast and lightweight but are less equipped for generalization to unseen data. In contrast\, recent Transformer-based approaches leverage large-scale neural datasets to attain strong generalization performance. However\, these models typically have much larger computational requirements and are not suitable for settings requiring low latency or limited memory. To address these shortcomings\, we present POSSM\, a novel architecture that combines individual spike tokenization and an input cross-attention module with a recurrent state-space model (SSM) backbone\, thereby enabling (1) fast and causal online prediction on neural activity and (2) efficient generalization to new sessions\, individuals\, and tasks through multi-dataset pre-training. We evaluate our model’s performance in terms of decoding accuracy and inference speed on monkey reaching datasets\, and show that it extends to clinical applications\, namely handwriting and speech decoding. Notably\, we demonstrate that pre-training on monkey motor-cortical recordings improves decoding performance on the human handwriting task\, highlighting the exciting potential for cross-species transfer. In all of these tasks\, we find that POSSM achieves comparable decoding accuracy with state-of-the-art Transformers\, at a fraction of the inference cost. These results suggest that hybrid SSMs may be the key to bridging the gap between accuracy\, inference speed\, and generalization when training neural decoders for real-time\, closed-loop applications. \nZoom Link: Request via email arni@columbia.edu
URL:https://arni-institute.org/event/speaker-dr-guillaume-lajoie-arni-frontier-models-for-neuroscience-and-behavior-working-group/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
CATEGORIES:ARNI Frontier Models for Neuroscience and Behavior Working Group
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250620T110000
DTEND;TZID=America/New_York:20250620T130000
DTSTAMP:20260403T165948
CREATED:20250407T145439Z
LAST-MODIFIED:20250421T152336Z
UID:1627-1750417200-1750424400@arni-institute.org
SUMMARY:CTN: Andrew Saxe
DESCRIPTION:Zoom Link: https://columbiauniversity.zoom.us/j/92032394293?pwd=ZkQBLK7LrSU7ku2zkvXTd2QEw4WUSn.1
URL:https://arni-institute.org/event/cnt-andrew-saxe/
LOCATION:NY
END:VEVENT
END:VCALENDAR