BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARNI
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20270314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20271107T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260410T160000
DTEND;TZID=America/New_York:20260410T170000
DTSTAMP:20260403T141351
CREATED:20260330T140055Z
LAST-MODIFIED:20260330T140055Z
UID:2421-1775836800-1775840400@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Continuation of prior meetings. \nZoom: Upon request @arni@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-9/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260409T150000
DTEND;TZID=America/New_York:20260409T160000
DTSTAMP:20260403T141351
CREATED:20260327T173721Z
LAST-MODIFIED:20260327T173721Z
UID:2418-1775746800-1775750400@arni-institute.org
SUMMARY:Speaker: Mengye Ren - ARNI Continual Learning Working Group Meeting
DESCRIPTION:Mengye Ren
URL:https://arni-institute.org/event/speaker-mengye-ren-arni-continual-learning-working-group-meeting/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260330T150000
DTEND;TZID=America/New_York:20260330T160000
DTSTAMP:20260403T141351
CREATED:20260331T161001Z
LAST-MODIFIED:20260331T161001Z
UID:2422-1774882800-1774886400@arni-institute.org
SUMMARY:Speaker: Josue Ortega Caro - ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Time: 30th March. 3pm EDT\n\nTitle: Large scale models for spatiotemporal data.\nSpeaker: Josue Ortega Caro https://josueortc.github.io/\nAbstract: Spatiotemporal and multimodal datasets contain structured variability distributed across space\, time\, and measurement modality\, motivating modeling approaches that can learn representations directly from large-scale data. Inspired by video foundation models\, we study how the masked autoencoder training objective can learn shared structure across heterogeneous observations while preserving modality-specific information\, and how training these models requires multiple engineering methods for scaling. Furthermore\, we show that self-attention supports the emergence of interpretable structure by decomposing the learned representations based on the variability across samples. These results suggest that large-scale self-supervised learning provides a unified approach for modeling high-dimensional dynamical systems while enabling interpretation of the learned representations.
URL:https://arni-institute.org/event/speaker-josue-ortega-caro-arni-frontier-models-for-neuroscience-and-behavior-working-group/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260327T160000
DTEND;TZID=America/New_York:20260327T170000
DTSTAMP:20260403T141351
CREATED:20260324T150404Z
LAST-MODIFIED:20260324T150404Z
UID:2411-1774627200-1774630800@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Continuation of prior meetings. \nZoom: Upon request @arni@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-8/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260327T110000
DTEND;TZID=America/New_York:20260327T120000
DTSTAMP:20260403T141351
CREATED:20260217T161534Z
LAST-MODIFIED:20260324T150421Z
UID:2372-1774609200-1774612800@arni-institute.org
SUMMARY:Lecture Series in AI: Richard Zemel
DESCRIPTION:General website \nTitle: Integrating Past and Present in Continual Learning \nAbstract: Continual learning aims to bridge the gap between typical human and machine-learning environments. The continual setting does not have separate training and testing phases\, and instead models are evaluated online while learning novel concepts and tasks. The most capable current AI systems struggle to learn new knowledge sequentially without forgetting old knowledge. Challenging research questions include how to rapidly assess a learner system’s abilities and how to most efficiently train it to improve on a sequence of tasks. I will describe recent progress on these questions\, across various research groups in ARNI\, our NSF AI Institute for Artificial and Natural Intelligence. Finally\, we will consider open issues and challenges in continual learning. \nBio: Richard Zemel is the Trianthe Dakolias Professor of Engineering and Applied Science in the Computer Science Department at Columbia University. \nHe is the Director of the NSF AI Institute for Artificial and Natural Intelligence (ARNI)\, and was the co-founder and inaugural Research Director of the Vector Institute for Artificial Intelligence. His awards include an AI Lifetime Achievement Award (CAIA) and a Pioneer of AI Award (NVIDIA). His research contributions include foundational work on systems that learn useful representations of data with little or no supervision; graph-based machine learning; and algorithms for fair and robust machine learning.
URL:https://arni-institute.org/event/lecture-series-in-ai-richard-zemel/
LOCATION:Davis Auditorium\, 530 W 120th St\, New York\, NY\, 10027
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260324T150000
DTEND;TZID=America/New_York:20260324T170000
DTSTAMP:20260403T141351
CREATED:20260224T152401Z
LAST-MODIFIED:20260224T152401Z
UID:2397-1774364400-1774371600@arni-institute.org
SUMMARY:Speaker: Vijay Balasubramanian - ARNI WG Multi-resource-cost optimization of neural network models
DESCRIPTION:Title and Abstract: TBD
URL:https://arni-institute.org/event/speaker-vijay-balasubramanian-arni-wg-multi-resource-cost-optimization-of-neural-network-models/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260323T150000
DTEND;TZID=America/New_York:20260323T160000
DTSTAMP:20260403T141351
CREATED:20260303T195824Z
LAST-MODIFIED:20260303T195824Z
UID:2404-1774278000-1774281600@arni-institute.org
SUMMARY:Speaker: Katherine Xu - Language and Vision Working Group
DESCRIPTION:Title: Are Vision-Language Models Checking or Looking?\n\nAbstract:\nToday’s AI vision systems are trained on vast amounts of data\, yet it remains unclear whether they simply retrieve memorized answers or actively reason. We conjecture that hallucinations and limited creativity in these models stem from an over-reliance on superficial “checking” rather than active “looking.” Checking retrieves the most probable memorized association\, which often fails when novel inputs mismatch stored patterns. In contrast\, looking involves reasoning on the fly by iteratively sampling information\, revising interpretations\, and integrating evidence across modalities. First\, I will share our recent work on Vibe Spaces for creatively connecting visual concepts. Second\, I will propose visual humor as a lens to probe these cross-modal reasoning deficits. I will conclude with early findings from my ongoing research to open a discussion on potential collaborative directions for our working group.\n\nZoom: upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/speaker-katherine-xu-language-and-vision-working-group/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260320T113000
DTEND;TZID=America/New_York:20260320T130000
DTSTAMP:20260403T141351
CREATED:20260217T161957Z
LAST-MODIFIED:20260217T161957Z
UID:2376-1774006200-1774011600@arni-institute.org
SUMMARY:CTN: Farzaneh Najafi
DESCRIPTION:
URL:https://arni-institute.org/event/ctn-farzaneh-najafi/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260312T150000
DTEND;TZID=America/New_York:20260312T163000
DTSTAMP:20260403T141351
CREATED:20260217T161352Z
LAST-MODIFIED:20260323T184008Z
UID:2370-1773327600-1773333000@arni-institute.org
SUMMARY:Speaker: Xuexin Wei - ARNI WG Multi-resource-cost optimization of neural network models
DESCRIPTION:Title: Constraints of efficient neural computation \nAbstract: Neural systems adapt to the statistical structure of the environment to support behavior. While it is generally recognized that such adaptation is subject to various biological constraints (such as noise\, metabolism\, wiring cost)\, how these constraints determine the optimal neural computation remains unclear. For the first part of this talk\, I will discuss theories of efficient coding based on consideration of metabolic cost and neural noise. For the second part\, I will present ongoing work on how the geometry of the stimulus manifold shapes the structure of neural code. In particular\, using the processing of heading direction as an example\, I will show that the asymmetry of the stimulus manifold naturally accounts for key properties of heading direction encoding in macaque MST.
URL:https://arni-institute.org/event/speaker-xuexin-wei-arni-wg-multi-resource-cost-optimization-of-neural-network-models/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260306T113000
DTEND;TZID=America/New_York:20260306T130000
DTSTAMP:20260403T141351
CREATED:20260211T211748Z
LAST-MODIFIED:20260303T195851Z
UID:2354-1772796600-1772802000@arni-institute.org
SUMMARY:CTN: Andreas Tolias
DESCRIPTION:Title: Foundation models of the brain \nAbstract: ‘You … your memories and ambitions\, your sense of personal identity and free will\, are in fact no more than the behavior of a vast assembly of nerve cells …’ Crick’s words capture the profound challenge of decrypting the neural code. This challenge has long been hindered by our limited ability to record activity from large neuronal populations under the complex\, variable conditions in which brains evolve\, and by our limited capacity to model the intricate relationships between stimuli\, behaviors\, and neural activity. Recent breakthroughs are starting to overcome these barriers. Cutting-edge technologies now enable large-scale recordings\, while AI can construct predictive brain models that link stimuli\, neural activity\, and behavior. These digital twins open the door to limitless in silico experiments\, testing theories that are otherwise impossible at scale in living brains. I will discuss our work in creating these digital twins and uncovering neural representation mechanisms\, which we validate with closed-loop experiments. \nZoom link: Request @arni@columbia.edu
URL:https://arni-institute.org/event/ctn-andreas-tolias/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260305T150000
DTEND;TZID=America/New_York:20260305T160000
DTSTAMP:20260403T141351
CREATED:20260303T201102Z
LAST-MODIFIED:20260303T201102Z
UID:2407-1772722800-1772726400@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group Meeting
DESCRIPTION:Next session March 5th
URL:https://arni-institute.org/event/arni-continual-learning-working-group-meeting-2/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260304T090000
DTEND;TZID=America/New_York:20260304T180000
DTSTAMP:20260403T141351
CREATED:20260224T170430Z
LAST-MODIFIED:20260224T170430Z
UID:2401-1772614800-1772647200@arni-institute.org
SUMMARY:Columbia University AI Summit - Reimagining Teaching and Learning in the Age of AI: An AI and Education Forum
DESCRIPTION:Date: Wednesday\, March 4\, 2026\nTime: 9:00 AM – 6:30 PM\nLocation: Faculty Room\, Low Memorial Library\nAddress: 535 W. 116th Street\, New York\, NY 10027 – Visitor Information \nOverview\nHow does a university learn and adapt as AI becomes woven into teaching\, learning\, and intellectual life? This program invites both celebration of innovative experimentation and collective reflection on the challenging questions ahead.\nReimagining Teaching and Learning in the Age of AI brings together educational and school leadership\, faculty\, students\, and invited experts to listen\, examine\, and explore how AI is influencing teaching and learning at Columbia. The program will explore what is already unfolding in today’s classrooms\, how the University is building responsible foundations for AI in education\, and the shared questions that will shape higher education. \nThe goal of the forum is to create space for dialogue and reflection across the Columbia community through a range of sessions\, including student and faculty panels\, interactive demonstrations\, a national keynote\, and a student-led debate. Rather than offering definitive answers\, we seek to deepen understanding of how the University can evolve while preserving its core educational mission in an AI-driven landscape. \nRegister and more information: HERE
URL:https://arni-institute.org/event/columbia-university-ai-summit-reimagining-teaching-and-learning-in-the-age-of-ai-an-ai-and-education-forum/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260302T150000
DTEND;TZID=America/New_York:20260302T160000
DTSTAMP:20260403T141351
CREATED:20260224T154023Z
LAST-MODIFIED:20260224T154023Z
UID:2399-1772463600-1772467200@arni-institute.org
SUMMARY:Speaker: Jorge Menendez – ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Date and time: Monday\, March 2\, from 3–4 PM.\nMeeting Link: Upon request @arni@columbia.edu\nSpeakers: Jorge Menendez\, Research Scientist at CTRL-Labs\, and Trung Le\, postdoc in Prof. Chethan Pandarinath’s group. \nTitle: A generic non-invasive neuromotor interface for human-computer interaction\nSince the advent of computing\, humans have sought computer input technologies that are expressive\, intuitive and universal. While diverse modalities have been developed\, including keyboards\, mice and touchscreens\, they require interaction with a device that can be limiting\, especially in on-the-go scenarios. Gesture-based systems use cameras or inertial sensors to avoid an intermediary device\, but tend to perform well only for unobscured movements. By contrast\, brain–computer or neuromotor interfaces that directly interface with the body’s electrical signalling have been imagined to solve the interface problem\, but high-bandwidth communication has been demonstrated only using invasive interfaces with bespoke decoders designed for single individuals. Here\, we describe the development of a generic non-invasive neuromotor interface that enables computer input decoded from surface electromyography (sEMG). We developed a highly sensitive\, easily donned sEMG wristband and a scalable infrastructure for collecting training data from thousands of consenting participants. Together\, these data enabled us to develop generic sEMG decoding models that generalize across people. Test users demonstrate a closed-loop median performance of gesture decoding of 0.66 target acquisitions per second in a continuous navigation task\, 0.88 gesture detections per second in a discrete-gesture task and handwriting at 20.9 words per minute. We demonstrate that the decoding performance of handwriting models can be further improved by 16% by personalizing sEMG decoding models. 
 To our knowledge\, this is the first high-bandwidth neuromotor interface with performant out-of-the-box generalization across people. \nTitle: SPINT: Spatial Permutation-Invariant Neural Transformer for Consistent Intracortical Motor Decoding\nIntracortical Brain-Computer Interfaces (iBCI) decode behavior from neural population activity to restore motor functions and communication abilities in individuals with motor impairments. A central challenge for long-term iBCI deployment is the nonstationarity of neural recordings\, where the composition and tuning profiles of the recorded populations are unstable across recording sessions. Existing approaches attempt to address this issue by explicit alignment techniques; however\, they rely on fixed neural identities and require test-time labels or parameter updates\, limiting their generalization across sessions and imposing additional computational burden during deployment. In this work\, we address the problem of cross-session nonstationarity in long-term iBCI systems and introduce SPINT – a Spatial Permutation-Invariant Neural Transformer framework for behavioral decoding that operates directly on unordered sets of neural units. Central to our approach is a novel context-dependent positional embedding scheme that dynamically infers unit-specific identities\, enabling flexible generalization across recording sessions. SPINT supports inference on variable-size populations and allows few-shot\, gradient-free adaptation using a small amount of unlabeled data from the test session. We evaluate SPINT on three multi-session datasets from the FALCON Benchmark\, covering continuous motor decoding tasks in human and non-human primates. SPINT demonstrates robust cross-session generalization\, outperforming existing zero-shot and few-shot unsupervised baselines while eliminating the need for test-time alignment and fine-tuning. 
 Our work contributes an initial step toward a robust and scalable neural decoding framework for long-term iBCI applications.
URL:https://arni-institute.org/event/speaker-jorge-menendez-arni-frontier-models-for-neuroscience-and-behavior-working-group/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260227T113000
DTEND;TZID=America/New_York:20260227T130000
DTSTAMP:20260403T141351
CREATED:20260217T161738Z
LAST-MODIFIED:20260224T165808Z
UID:2373-1772191800-1772197200@arni-institute.org
SUMMARY:CTN: Denise Cai
DESCRIPTION:Denise Cai \nTitle: Dynamic neural ensembles support memory stability and flexibility across the lifetime \nAbstract: Creating stable memories is critical for survival. An animal relies on past learning to navigate its environment\, avoid dangerous situations\, and find needed resources. Because the environment is dynamic\, stable memories must be updated with new information to enable responses to changing threats (a specific danger) and rewards (such as food and water). The brain circuits involved in memory and learning require both stability and flexibility. We found that traumatic experiences can alter past memories and have long-lasting changes to how future memories are encoded. This has important implications for how the brain stably stores and flexibly updates memories across the lifetime.
URL:https://arni-institute.org/event/ctn-denise-cai/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260220T160000
DTEND;TZID=America/New_York:20260220T170000
DTSTAMP:20260403T141351
CREATED:20260219T150147Z
LAST-MODIFIED:20260219T150154Z
UID:2391-1771603200-1771606800@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Continuation from prior meetings \nZoom Link- Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-7/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260220T113000
DTEND;TZID=America/New_York:20260220T130000
DTSTAMP:20260403T141351
CREATED:20260217T161139Z
LAST-MODIFIED:20260220T192450Z
UID:2368-1771587000-1771592400@arni-institute.org
SUMMARY:CTN: Mitra Javadzadeh
DESCRIPTION:Title: Inter-area connectivity and the emergence of multi-timescale cortical dynamics \nAbstract: The brain generates behaviors spanning a wide range of timescales\, from rapid sensory responses to the slow integrative processes underlying cognition. How does the anatomical connectivity of the neocortex give rise to such flexible\, multi-timescale dynamics? In this talk\, I will examine how the parcellation of the neocortex into specialized areas\, coupled through reciprocal connections\, structures its dynamical landscape. \nIn Part I\, I will present experimental and modeling work combining simultaneous multi-area recordings in mouse visual cortex with focal optogenetic perturbations and biologically constrained latent circuit models. We show that reciprocal excitatory connections between primary (V1) and higher visual cortex (LM) generate an approximate line attractor in their joint dynamics. These dynamics selectively slow the decay of activity patterns that encode stimulus features consistently across areas\, promoting the gradual emergence of cross-area consensus. \nIn Part II\, I extend these findings within an analytical framework for balanced multi-area networks. We show how the structure and asymmetry of feedforward and feedback connectivity between cortical areas tune their contribution to globally consistent activity patterns. This framework makes model-free predictions on the organization of timescales across the neocortex. Furthermore\, we validate these predictions with new experiments using switchable optoGPCRs to selectively disrupt long-range cortical communication. \nTogether\, these results link anatomical connectivity to collective cortical computation\, providing a theory for how distributed brain areas reconcile information through structured multi-area dynamics.
URL:https://arni-institute.org/event/ctn-mitra-javadzadeh/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260219T150000
DTEND;TZID=America/New_York:20260219T160000
DTSTAMP:20260403T141351
CREATED:20260217T160830Z
LAST-MODIFIED:20260217T160830Z
UID:2367-1771513200-1771516800@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group Meeting
DESCRIPTION:We are aiming to accelerate progress on the benchmark\, and will demo a working prototype very soon. If you are interested in contributing to our project\, we strongly encourage you to participate so that we can finalize and implement our plan of action for the coming months.
URL:https://arni-institute.org/event/arni-continual-learning-working-group-meeting/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260213T113000
DTEND;TZID=America/New_York:20260213T130000
DTSTAMP:20260403T141351
CREATED:20260211T195441Z
LAST-MODIFIED:20260211T201644Z
UID:2351-1770982200-1770987600@arni-institute.org
SUMMARY:CTN: SueYeon Chung
DESCRIPTION:SueYeon Chung \nTitle: Computing with Neural Manifolds: A Multi-Scale Framework for Understanding Biological and Artificial Neural Networks \nAbstract: Recent breakthroughs in experimental neuroscience and machine learning have opened new frontiers in understanding the computational principles governing neural circuits and artificial neural networks (ANNs). Both biological and artificial systems exhibit an astonishing degree of orchestrated information processing capabilities across multiple scales – from the microscopic responses of individual neurons to the emergent macroscopic phenomena of cognition and task functions. At the mesoscopic scale\, the structures of neuron population activities manifest themselves as neural representations. Neural computation can be viewed as a series of transformations of these representations through various processing stages of the brain. The primary focus of my lab’s research is to develop theories of neural representations that describe the principles of neural coding and\, importantly\, capture the complex structure of real data from both biological and artificial systems. \nIn this talk\, I will present three related approaches that leverage techniques from statistical physics\, machine learning\, and geometry to study the multi-scale nature of neural computation. First\, I will introduce new theories based on statistical physics and convex geometry that connect complex geometric structures that arise from neural responses (i.e.\, neural manifolds) to the efficiency of neural representations in implementing a task. Second\, I will employ these theories to analyze how these representations evolve across scales\, shaped by the properties of single neurons\, learning dynamics\, and the transformations across distinct brain regions. Finally\, I will show how these insights extend efficient coding principles beyond early sensory stages\, linking representational geometry to efficient task implementations. 
 This framework not only helps interpret and compare models of brain data but also offers a principled approach to designing ANN models for higher-level vision. This perspective opens new opportunities for using neuroscience-inspired principles to guide the development of intelligent systems.
URL:https://arni-institute.org/event/ctn-sueyeon-chung/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260206T160000
DTEND;TZID=America/New_York:20260206T170000
DTSTAMP:20260403T141351
CREATED:20260204T163514Z
LAST-MODIFIED:20260204T163514Z
UID:2335-1770393600-1770397200@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:1) Charlotte onboarded vision and audio MNIST dataloaders. She’s focusing on predictive coding for sequential tasks (e.g.\, audio data/moving MNIST).\n2) Todd built multi-modal predictive coding baselines showing that unsupervised representation learning is possible here.\n3) Eivinas proposed a backprop-based autoencoder and a Hebbian-based autoencoder (maybe Exponentiated Gradients?).\n4) Nihal offered to onboard 3D MNIST and work on Hebbian learning rules. \nThere was also discussion of software engineering conventions (e.g.\, GitHub practices\, configuration tooling\, etc.). \nVirtual Link: request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-6/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260206T113000
DTEND;TZID=America/New_York:20260206T130000
DTSTAMP:20260403T141351
CREATED:20260204T163719Z
LAST-MODIFIED:20260204T163719Z
UID:2336-1770377400-1770382800@arni-institute.org
SUMMARY:CTN: Herbert Zheng Wu
DESCRIPTION:Herbert Zheng Wu \nTitle: Neural Basis of Leader–Follower Dynamics in Cooperative Behavior \n\nAbstract: Cooperation allows social species to achieve outcomes that individuals cannot accomplish alone. Even in simple groups\, cooperative behavior often depends on complementary social roles such as leaders and followers\, yet the neural computations supporting these dynamic relationships are not well understood. Our lab investigates how the brain represents social partners\, coordinates shared goals\, and flexibly allocates control across individuals. Using a new mouse paradigm that captures naturalistic leader–follower behavior during joint foraging\, we combine large-scale neural recording\, circuit perturbation\, and computational modeling to dissect the mechanisms that enable cooperative decision-making. We find that activity in the medial prefrontal cortex reflects both the individual’s role and the evolving social context\, integrating self- and partner-related information to guide coordinated action. To probe latent strategies of the animals\, we developed a multi-agent inverse reinforcement learning framework that infers the individual goals governing joint behavior\, which closely mirror and are decodable from prefrontal activity. Together\, these studies aim to reveal general principles by which distributed brain networks support higher-order social cognition and collective behavior.
URL:https://arni-institute.org/event/ctn-herbert-zheng-wu/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260202T150000
DTEND;TZID=America/New_York:20260202T160000
DTSTAMP:20260403T141351
CREATED:20260130T145319Z
LAST-MODIFIED:20260130T145319Z
UID:2310-1770044400-1770048000@arni-institute.org
SUMMARY:Speaker: Thuy Nguyen – ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Title: A multimodal sleep foundation model for disease prediction \nAbstract: Sleep is a fundamental biological process with broad implications for physical and mental health\, yet its complex relationship with disease remains poorly understood. Polysomnography (PSG)—the gold standard for sleep analysis—captures rich physiological signals but is underutilized due to challenges in standardization\, generalizability and multimodal integration. To address these challenges\, we developed SleepFM\, a multimodal sleep foundation model trained with a new contrastive learning approach that accommodates multiple PSG configurations. Trained on a curated dataset of over 585\,000 hours of PSG recordings from approximately 65\,000 participants across several cohorts\, SleepFM produces latent sleep representations that capture the physiological and temporal structure of sleep and enable accurate prediction of future disease risk. From one night of sleep\, SleepFM accurately predicts 130 conditions with a C-Index of at least 0.75 (Bonferroni-corrected P < 0.01)\, including all-cause mortality (C-Index\, 0.84)\, dementia (0.85)\, myocardial infarction (0.81)\, heart failure (0.80)\, chronic kidney disease (0.79)\, stroke (0.78) and atrial fibrillation (0.78). Moreover\, the model demonstrates strong transfer learning performance on a dataset from the Sleep Heart Health Study—a dataset that was excluded from pretraining—and performs competitively with specialized sleep-staging models such as U-Sleep and YASA on common sleep analysis tasks\, achieving mean F1 scores of 0.70–0.78 for sleep staging and accuracies of 0.69 and 0.87 for classifying sleep apnea severity and presence. This work shows that foundation models can learn the language of sleep from multimodal sleep recordings\, enabling scalable\, label-efficient analysis and disease prediction.
URL:https://arni-institute.org/event/speaker-thuy-nguyen-arni-frontier-models-for-neuroscience-and-behavior-working-group/
LOCATION:MILA\, A14
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260123T120000
DTEND;TZID=America/New_York:20260123T130000
DTSTAMP:20260403T141351
CREATED:20260113T193600Z
LAST-MODIFIED:20260113T193600Z
UID:2194-1769169600-1769173200@arni-institute.org
SUMMARY:Language and Vision Working Group
DESCRIPTION:Initial Meeting! \nAbout: \nThe ARNI Language & Vision Working Group aims to bring together researchers across neuroscience\, cognitive science\, computer science\, and AI to collaboratively advance our understanding of how humans and machines construct multimodal experiences. Its goal is to create a space for discussing ongoing language- and vision-focused projects\, identifying natural points of overlap\, and transforming them into larger\, interdisciplinary initiatives. Grounded in the idea that language and vision form a dynamic\, symbiotic system rather than isolated modules\, the group seeks to explore how this integration is represented in the brain and in the machine. Strengthening collaboration between these domains is essential for building the next generation of AI systems that learn from continual\, multimodal input\, reflect human cognitive principles\, and ultimately support real-world human needs. \nMore questions: Contact Anna Krason (akrason@gc.cuny.edu) \nZoom: upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/language-and-vision-working-group/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260123T113000
DTEND;TZID=America/New_York:20260123T130000
DTSTAMP:20260403T141351
CREATED:20260113T192529Z
LAST-MODIFIED:20260121T163725Z
UID:2193-1769167800-1769173200@arni-institute.org
SUMMARY:CTN: Scott Linderman
DESCRIPTION:Title: When and How to Parallelize Seemingly Sequential Models\n \nAbstract: Transformers have become the de facto model for sequential data in large part because they are well adapted to modern hardware: At training time\, the loss can be evaluated in parallel over the sequence length on GPUs and TPUs. By contrast\, evaluating nonlinear recurrent neural networks (RNNs) appears to be an inherently sequential problem. However\, recent advances like DEER (arXiv:2309.12252) and DeepPCR (arXiv:2309.16318) have shown that evaluating a nonlinear recursion can be recast as solving a parallelizable optimization problem\, and sometimes this approach can yield dramatic speed-ups in wall-clock time. However\, the factors that govern the difficulty of these optimization problems remain unclear\, limiting the larger adoption of the technique. I will present a recent line of work from my lab that further develops these methods in both theory and practice. We establish a precise relationship between the dynamics of a nonlinear system and the conditioning of its corresponding optimization formulation. We show that the predictability of a system\, defined as the degree to which small perturbations in state influence future behavior\, impacts the number of optimization steps required for evaluation. In predictable systems\, the state trajectory can be computed in O(log^2 T) time\, where T is the sequence length\, a major improvement over the conventional sequential approach. In contrast\, chaotic or unpredictable systems exhibit poor conditioning\, with the consequence that parallel evaluation converges too slowly to be useful. We validate our claims through extensive experiments\, with a particular emphasis on parallelizing nonlinear RNNs and Markov chain Monte Carlo (MCMC) algorithms for Bayesian statistics. I will provide practical guidance on when nonlinear dynamical systems can be efficiently parallelized\, and highlight predictability as a key design principle for parallelizable models.
URL:https://arni-institute.org/event/ctn-scott-linderman/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260122T150000
DTEND;TZID=America/New_York:20260122T153000
DTSTAMP:20260403T141351
CREATED:20260121T163934Z
LAST-MODIFIED:20260121T163934Z
UID:2248-1769094000-1769095800@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:To kick off its activities for this year\, the Continual Learning Working Group will meet on Thursday\, Jan 22 at 3pm on Zoom and in CEPSR 620.\n\nThe meeting will be brief\, and we’ll discuss our agenda and scheduling for the coming semester\, including our goals for the benchmark project.\nJoin via Zoom: Upon request @arni@columbia.edu
URL:https://arni-institute.org/event/continual-learning-working-group-11/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260120T150000
DTEND;TZID=America/New_York:20260120T160000
DTSTAMP:20260403T141351
CREATED:20251211T202316Z
LAST-MODIFIED:20260113T180504Z
UID:2162-1768921200-1768924800@arni-institute.org
SUMMARY:Speaker: Xaq Pitkow ARNI WG Multi-resource-cost optimization of neural network models
DESCRIPTION:Title: Frugal Inference for Control\n\nAbstract: A key challenge in advancing artificial intelligence is achieving the right balance between utility maximization and resource use by both external movement and internal computation. While this trade-off has been studied in fully observable settings\, our understanding of resource efficiency in partially observable environments remains limited. Motivated by this challenge\, we develop a version of the POMDP framework where the information gained through inference is treated as a resource that must be optimized alongside task performance and motion effort. By solving this problem in environments described by linear-Gaussian dynamics\, we uncover fundamental principles of resource efficiency. Our study reveals a phase transition in the inference\, switching from a Bayes-optimal approach to one that strategically leaves some uncertainty unresolved. This frugal behavior gives rise to a structured family of equally effective strategies\, facilitating adaptation to later objectives and constraints overlooked during the original optimization. We illustrate the applicability of our framework and the generality of the principles we derived using two nonlinear tasks. Overall\, this work provides a foundation for a new type of rational computation that both brains and machines could use for effective but resource-efficient control under uncertainty.\n\nZoom Link: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/speaker-xaq-pitkow-arni-wg-multi-resource-cost-optimization-of-neural-network-models/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260116T113000
DTEND;TZID=America/New_York:20260116T130000
DTSTAMP:20260403T141351
CREATED:20260113T180942Z
LAST-MODIFIED:20260113T180942Z
UID:2191-1768563000-1768568400@arni-institute.org
SUMMARY:CTN: Nao Uchida
DESCRIPTION:Title: A normative perspective on diversity of dopamine neurons \nAbstract: TBD
URL:https://arni-institute.org/event/ctn-nao-uchida/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251219T113000
DTEND;TZID=America/New_York:20251219T130000
DTSTAMP:20260403T141351
CREATED:20251209T200448Z
LAST-MODIFIED:20251216T170437Z
UID:2159-1766143800-1766149200@arni-institute.org
SUMMARY:CTN: Roozbeh Kiani
DESCRIPTION:Seminar Time: 11:30am\nDate: 12/19/25\nSeminar Location: JLG\, L5-084\nHost: Tahereh Toosi\n\n \n\nTitle: Flexible decision-making: policies and rules\n \nAbstract: Flexible behavior requires flexible decision-making. We adapt seamlessly to changing environments—adjusting biases\, altering decision rules\, and inferring hidden task contexts—often without explicit cues. In this talk\, I will outline a framework that formalizes different levels of this flexibility and show how these adjustments are implemented in neural codes across the frontoparietal cortex. I will highlight three forms of decision flexibility: (1) Bias adjustments\, driven by asymmetric rewards\, shift neural activity along the decision variable axis;  (2) Rule changes\, such as varying sensory weights in a multi-feature discrimination task\, produce rotational changes in the population geometry\, supporting rapid changes in decision policy; and (3) Hierarchical inference\, where animals infer hidden contexts to adapt to task structure\, is reflected in the emergence of latent variables represented in distributed subspaces.
URL:https://arni-institute.org/event/ctn-roozbeh-kiani/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251212T113000
DTEND;TZID=America/New_York:20251212T130000
DTSTAMP:20260403T141351
CREATED:20251209T200320Z
LAST-MODIFIED:20251209T200320Z
UID:2157-1765539000-1765544400@arni-institute.org
SUMMARY:CTN: Mehrdad Jazayeri
DESCRIPTION:Title: Adaptive problem solving in the primate frontal cortex\n\nAbstract: Humans excel at solving problems adaptively. When missing the bus to an appointment\, for instance\, we might wait for the next one\, call a taxi\, cancel\, or reschedule\, depending on the situation. This ability to assess context and choose a suitable strategy is central to intelligence\, yet its neural and computational foundations remain poorly understood. To address this gap\, we trained monkeys on a challenging decision-making task that could be solved using multiple strategies\, providing a controlled setting to study strategic flexibility. Behaviorally\, the animals performed accurately and generalized to new conditions\, but their choices were inconsistent with any single policy\, suggesting the use of internally generated strategies. Large-scale electrophysiological recordings from the dorsomedial frontal cortex revealed that population activity unfolded along distinct neural trajectories corresponding to different strategies. The structure of these trajectories—set by the organization of initial neural states and their subsequent evolution—showed that animals assessed the problem and engaged distinct\, rationally structured computational algorithms. A latent behavioral model grounded in these neural dynamics predicted the animals’ choices more accurately than any fixed-strategy model\, providing a direct link between cortical population activity and adaptive decision-making. Together\, these findings reveal a neurophysiological mechanism for strategic decision-making and offer a mechanistic understanding of the neural basis of adaptive problem solving.
URL:https://arni-institute.org/event/ctn-mehrdad-jazayeri/
LOCATION:Zuckerman Institute- Kavli Auditorium 9th Fl\, 3227 Broadway\, NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251211T150000
DTEND;TZID=America/New_York:20251211T160000
DTSTAMP:20260403T141351
CREATED:20251209T181210Z
LAST-MODIFIED:20251209T181210Z
UID:2095-1765465200-1765468800@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group
DESCRIPTION:Next Meeting Info\n\n\nDate: Thursday\, Dec 11\nTime: 3pm-4pm\nRoom: CEPSR 620\nZoom: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-continual-learning-working-group-4/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251210T110000
DTEND;TZID=America/New_York:20251210T120000
DTSTAMP:20260403T141351
CREATED:20251125T161729Z
LAST-MODIFIED:20251201T155241Z
UID:2039-1765364400-1765368000@arni-institute.org
SUMMARY:Speaker: Alan Stocker ARNI WG Multi-resource-cost optimization of neural network models
DESCRIPTION:Alan Stocker\nProfessor of Psychology at UPenn \nTitle: Economics of temporal evidence integration \nAbstract: The temporal integration of sensory information is an important aspect of many human decision tasks. I will present results of ongoing research in my laboratory aimed at understanding the dynamic processes underlying evidence integration. In particular\, I will discuss a novel resource-rational model that treats both the representation as well as the integration and maintenance of sensory evidence as actively controlled\, performance-effort trade-off mechanisms. Validated against data from various behavioral experiments\, the model not only provides a normative explanation for observed non-linear dynamics in evidence integration but also a parsimonious explanation for individual tendencies for recency or primacy behavior. As the work is ongoing and unpublished\, I am looking forward to an engaged discussion with the audience. \nZoom Link: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/speaker-alan-stocker-arni-wg-multi-resource-cost-optimization-of-neural-network-models/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
END:VCALENDAR