BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARNI
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20270314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20271107T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260302T150000
DTEND;TZID=America/New_York:20260302T160000
DTSTAMP:20260403T154234Z
CREATED:20260224T154023Z
LAST-MODIFIED:20260224T154023Z
UID:2399-1772463600-1772467200@arni-institute.org
SUMMARY:Speaker: Jorge Menendez – ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Date and time: Monday\, March 2\, from 3–4 PM.\nMeeting Link: Upon request @arni@columbia.edu\nSpeakers: Jorge Menendez\, Research Scientist at CTRL-Labs\, and Trung Le\, postdoc in Prof. Chethan Pandarinath’s group. \nTitle: A generic non-invasive neuromotor interface for human-computer interaction\nSince the advent of computing\, humans have sought computer input technologies that are expressive\, intuitive and universal. While diverse modalities have been developed\, including keyboards\, mice and touchscreens\, they require interaction with a device that can be limiting\, especially in on-the-go scenarios. Gesture-based systems use cameras or inertial sensors to avoid an intermediary device\, but tend to perform well only for unobscured movements. By contrast\, brain–computer or neuromotor interfaces that directly interface with the body’s electrical signalling have been imagined to solve the interface problem\, but high-bandwidth communication has been demonstrated only using invasive interfaces with bespoke decoders designed for single individuals. Here\, we describe the development of a generic non-invasive neuromotor interface that enables computer input decoded from surface electromyography (sEMG). We developed a highly sensitive\, easily donned sEMG wristband and a scalable infrastructure for collecting training data from thousands of consenting participants. Together\, these data enabled us to develop generic sEMG decoding models that generalize across people. Test users demonstrate a closed-loop median performance of gesture decoding of 0.66 target acquisitions per second in a continuous navigation task\, 0.88 gesture detections per second in a discrete-gesture task and handwriting at 20.9 words per minute. We demonstrate that the decoding performance of handwriting models can be further improved by 16% by personalizing sEMG decoding models. 
To our knowledge\, this is the first high-bandwidth neuromotor interface with performant out-of-the-box generalization across people. \nTitle: SPINT: Spatial Permutation-Invariant Neural Transformer for Consistent Intracortical Motor Decoding\nIntracortical Brain-Computer Interfaces (iBCI) decode behavior from neural population activity to restore motor functions and communication abilities in individuals with motor impairments. A central challenge for long-term iBCI deployment is the nonstationarity of neural recordings\, where the composition and tuning profiles of the recorded populations are unstable across recording sessions. Existing approaches attempt to address this issue by explicit alignment techniques; however\, they rely on fixed neural identities and require test-time labels or parameter updates\, limiting their generalization across sessions and imposing additional computational burden during deployment. In this work\, we address the problem of cross-session nonstationarity in long-term iBCI systems and introduce SPINT – a Spatial Permutation-Invariant Neural Transformer framework for behavioral decoding that operates directly on unordered sets of neural units. Central to our approach is a novel context-dependent positional embedding scheme that dynamically infers unit-specific identities\, enabling flexible generalization across recording sessions. SPINT supports inference on variable-size populations and allows few-shot\, gradient-free adaptation using a small amount of unlabeled data from the test session. We evaluate SPINT on three multi-session datasets from the FALCON Benchmark\, covering continuous motor decoding tasks in human and non-human primates. SPINT demonstrates robust cross-session generalization\, outperforming existing zero-shot and few-shot unsupervised baselines while eliminating the need for test-time alignment and fine-tuning. 
Our work contributes an initial step toward a robust and scalable neural decoding framework for long-term iBCI applications.
URL:https://arni-institute.org/event/speaker-jorge-menendez-arni-frontier-models-for-neuroscience-and-behavior-working-group/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260227T113000
DTEND;TZID=America/New_York:20260227T130000
DTSTAMP:20260403T154234Z
CREATED:20260217T161738Z
LAST-MODIFIED:20260224T165808Z
UID:2373-1772191800-1772197200@arni-institute.org
SUMMARY:CTN: Denise Cai
DESCRIPTION:Denise Cai \nTitle: Dynamic neural ensembles support memory stability and flexibility across the lifetime \nAbstract: Creating stable memories is critical for survival. An animal relies on past learning to navigate its environment\, avoid dangerous situations\, and find needed resources. Because the environment is dynamic\, stable memories must be updated with new information to enable responses to changing threats (a specific danger) and rewards (such as food and water). The brain circuits involved in memory and learning require both stability and flexibility. We found that traumatic experiences can alter past memories and produce long-lasting changes in how future memories are encoded. This has important implications for how the brain stably stores and flexibly updates memories across the lifetime.
URL:https://arni-institute.org/event/ctn-denise-cai/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260220T160000
DTEND;TZID=America/New_York:20260220T170000
DTSTAMP:20260403T154234Z
CREATED:20260219T150147Z
LAST-MODIFIED:20260219T150154Z
UID:2391-1771603200-1771606800@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Continuation from prior meetings \nZoom Link: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-7/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260220T113000
DTEND;TZID=America/New_York:20260220T130000
DTSTAMP:20260403T154234Z
CREATED:20260217T161139Z
LAST-MODIFIED:20260220T192450Z
UID:2368-1771587000-1771592400@arni-institute.org
SUMMARY:CTN: Mitra Javadzadeh
DESCRIPTION:Title: Inter-area connectivity and the emergence of multi-timescale cortical dynamics \nAbstract: The brain generates behaviors spanning a wide range of timescales\, from rapid sensory responses to the slow integrative processes underlying cognition. How does the anatomical connectivity of the neocortex give rise to such flexible\, multi-timescale dynamics? In this talk\, I will examine how the parcellation of the neocortex into specialized areas\, coupled through reciprocal connections\, structures its dynamical landscape. \nIn Part I\, I will present experimental and modeling work combining simultaneous multi-area recordings in mouse visual cortex with focal optogenetic perturbations and biologically constrained latent circuit models. We show that reciprocal excitatory connections between primary (V1) and higher visual cortex (LM) generate an approximate line attractor in their joint dynamics. These dynamics selectively slow the decay of activity patterns that encode stimulus features consistently across areas\, promoting the gradual emergence of cross-area consensus. \nIn Part II\, I extend these findings within an analytical framework for balanced multi-area networks. We show how the structure and asymmetry of feedforward and feedback connectivity between cortical areas tune their contribution to globally consistent activity patterns. This framework makes model-free predictions on the organization of timescales across the neocortex. Furthermore\, we validate these predictions with new experiments using switchable optoGPCRs to selectively disrupt long-range cortical communication. \nTogether\, these results link anatomical connectivity to collective cortical computation\, providing a theory for how distributed brain areas reconcile information through structured multi-area dynamics.
URL:https://arni-institute.org/event/ctn-mitra-javadzadeh/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260219T150000
DTEND;TZID=America/New_York:20260219T160000
DTSTAMP:20260403T154234Z
CREATED:20260217T160830Z
LAST-MODIFIED:20260217T160830Z
UID:2367-1771513200-1771516800@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group Meeting
DESCRIPTION:We are aiming to accelerate progress on the benchmark\, and will demo a working prototype very soon. If you are interested in contributing to our project\, we strongly encourage you to participate so that we can finalize and implement our plan of action for the coming few months.
URL:https://arni-institute.org/event/arni-continual-learning-working-group-meeting/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260213T113000
DTEND;TZID=America/New_York:20260213T130000
DTSTAMP:20260403T154234Z
CREATED:20260211T195441Z
LAST-MODIFIED:20260211T201644Z
UID:2351-1770982200-1770987600@arni-institute.org
SUMMARY:CTN: SueYeon Chung
DESCRIPTION:SueYeon Chung \nTitle: Computing with Neural Manifolds: A Multi-Scale Framework for Understanding Biological and Artificial Neural Networks \nAbstract: Recent breakthroughs in experimental neuroscience and machine learning have opened new frontiers in understanding the computational principles governing neural circuits and artificial neural networks (ANNs). Both biological and artificial systems exhibit an astonishing degree of orchestrated information processing capabilities across multiple scales – from the microscopic responses of individual neurons to the emergent macroscopic phenomena of cognition and task functions. At the mesoscopic scale\, the structures of neuron population activities manifest themselves as neural representations. Neural computation can be viewed as a series of transformations of these representations through various processing stages of the brain. The primary focus of my lab’s research is to develop theories of neural representations that describe the principles of neural coding and\, importantly\, capture the complex structure of real data from both biological and artificial systems. \nIn this talk\, I will present three related approaches that leverage techniques from statistical physics\, machine learning\, and geometry to study the multi-scale nature of neural computation. First\, I will introduce new theories based on statistical physics and convex geometry that connect complex geometric structures that arise from neural responses (i.e.\, neural manifolds) to the efficiency of neural representations in implementing a task. Second\, I will employ these theories to analyze how these representations evolve across scales\, shaped by the properties of single neurons\, learning dynamics\, and the transformations across distinct brain regions. Finally\, I will show how these insights extend efficient coding principles beyond early sensory stages\, linking representational geometry to efficient task implementations. 
This framework not only helps interpret and compare models of brain data but also offers a principled approach to designing ANN models for higher-level vision. This perspective opens new opportunities for using neuroscience-inspired principles to guide the development of intelligent systems.
URL:https://arni-institute.org/event/ctn-sueyeon-chung/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260206T160000
DTEND;TZID=America/New_York:20260206T170000
DTSTAMP:20260403T154234Z
CREATED:20260204T163514Z
LAST-MODIFIED:20260204T163514Z
UID:2335-1770393600-1770397200@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:1) Charlotte onboarded vision and audio MNIST dataloaders. She’s focusing on predictive coding for sequential tasks (e.g.\, audio data/moving MNIST).\n2) Todd built multi-modal predictive coding baselines showing that unsupervised representation learning is possible here.\n3) Eivinas proposed a backprop-based autoencoder and a Hebbian-based autoencoder (maybe Exponentiated Gradients?).\n4) Nihal offered to onboard 3D MNIST and work on Hebbian learning rules. \nThere was also discussion of software engineering conventions (e.g.\, GitHub practices\, configuration tooling\, etc.). \nVirtual Link: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-6/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260206T113000
DTEND;TZID=America/New_York:20260206T130000
DTSTAMP:20260403T154234Z
CREATED:20260204T163719Z
LAST-MODIFIED:20260204T163719Z
UID:2336-1770377400-1770382800@arni-institute.org
SUMMARY:CTN: Herbert Zheng Wu
DESCRIPTION:Herbert Zheng Wu \nTitle: Neural Basis of Leader–Follower Dynamics in Cooperative Behavior \n\nAbstract: Cooperation allows social species to achieve outcomes that individuals cannot accomplish alone. Even in simple groups\, cooperative behavior often depends on complementary social roles such as leaders and followers\, yet the neural computations supporting these dynamic relationships are not well understood. Our lab investigates how the brain represents social partners\, coordinates shared goals\, and flexibly allocates control across individuals. Using a new mouse paradigm that captures naturalistic leader–follower behavior during joint foraging\, we combine large-scale neural recording\, circuit perturbation\, and computational modeling to dissect the mechanisms that enable cooperative decision-making. We find that activity in the medial prefrontal cortex reflects both the individual’s role and the evolving social context\, integrating self- and partner-related information to guide coordinated action. To probe latent strategies of the animals\, we developed a multi-agent inverse reinforcement learning framework that infers the individual goals governing joint behavior\, which closely mirror and are decodable from prefrontal activity. Together\, these studies aim to reveal general principles by which distributed brain networks support higher-order social cognition and collective behavior.
URL:https://arni-institute.org/event/ctn-herbert-zheng-wu/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260202T150000
DTEND;TZID=America/New_York:20260202T160000
DTSTAMP:20260403T154234Z
CREATED:20260130T145319Z
LAST-MODIFIED:20260130T145319Z
UID:2310-1770044400-1770048000@arni-institute.org
SUMMARY:Speaker: Thuy Nguyen – ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Title: A multimodal sleep foundation model for disease prediction \nAbstract: Sleep is a fundamental biological process with broad implications for physical and mental health\, yet its complex relationship with disease remains poorly understood. Polysomnography (PSG)—the gold standard for sleep analysis—captures rich physiological signals but is underutilized due to challenges in standardization\, generalizability and multimodal integration. To address these challenges\, we developed SleepFM\, a multimodal sleep foundation model trained with a new contrastive learning approach that accommodates multiple PSG configurations. Trained on a curated dataset of over 585\,000 hours of PSG recordings from approximately 65\,000 participants across several cohorts\, SleepFM produces latent sleep representations that capture the physiological and temporal structure of sleep and enable accurate prediction of future disease risk. From one night of sleep\, SleepFM accurately predicts 130 conditions with a C-Index of at least 0.75 (Bonferroni-corrected P < 0.01)\, including all-cause mortality (C-Index\, 0.84)\, dementia (0.85)\, myocardial infarction (0.81)\, heart failure (0.80)\, chronic kidney disease (0.79)\, stroke (0.78) and atrial fibrillation (0.78). Moreover\, the model demonstrates strong transfer learning performance on a dataset from the Sleep Heart Health Study—a dataset that was excluded from pretraining—and performs competitively with specialized sleep-staging models such as U-Sleep and YASA on common sleep analysis tasks\, achieving mean F1 scores of 0.70–0.78 for sleep staging and accuracies of 0.69 and 0.87 for classifying sleep apnea severity and presence. This work shows that foundation models can learn the language of sleep from multimodal sleep recordings\, enabling scalable\, label-efficient analysis and disease prediction.
URL:https://arni-institute.org/event/speaker-thuy-nguyen-arni-frontier-models-for-neuroscience-and-behavior-working-group/
LOCATION:MILA\, A14
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260123T120000
DTEND;TZID=America/New_York:20260123T130000
DTSTAMP:20260403T154234Z
CREATED:20260113T193600Z
LAST-MODIFIED:20260113T193600Z
UID:2194-1769169600-1769173200@arni-institute.org
SUMMARY:Language and Vision Working Group
DESCRIPTION:Initial Meeting! \nAbout: \nThe ARNI Language & Vision Working Group aims to bring together researchers across neuroscience\, cognitive science\, computer science\, and AI to collaboratively advance our understanding of how humans and machines construct multimodal experiences. Its goal is to create a space for discussing ongoing language- and vision-focused projects\, identifying natural points of overlap\, and transforming them into larger\, interdisciplinary initiatives. Grounded in the idea that language and vision form a dynamic\, symbiotic system rather than isolated modules\, the group seeks to explore how this integration is represented in the brain and in the machine. Strengthening collaboration between these domains is essential for building the next generation of AI systems that learn from continual\, multimodal input\, reflect human cognitive principles\, and ultimately support real-world human needs. \nMore questions: Contact Anna Krason (akrason@gc.cuny.edu) \nZoom: upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/language-and-vision-working-group/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260123T113000
DTEND;TZID=America/New_York:20260123T130000
DTSTAMP:20260403T154234Z
CREATED:20260113T192529Z
LAST-MODIFIED:20260121T163725Z
UID:2193-1769167800-1769173200@arni-institute.org
SUMMARY:CTN: Scott Linderman
DESCRIPTION:Title: When and How to Parallelize Seemingly Sequential Models\n \nAbstract: Transformers have become the de facto model for sequential data in large part because they are well adapted to modern hardware: At training time\, the loss can be evaluated in parallel over the sequence length on GPUs and TPUs. By contrast\, evaluating nonlinear recurrent neural networks (RNNs) appears to be an inherently sequential problem. However\, recent advances like DEER (arXiv:2309.12252) and DeepPCR (arXiv:2309.16318) have shown that evaluating a nonlinear recursion can be recast as solving a parallelizable optimization problem\, and sometimes this approach can yield dramatic speed-ups in wall-clock time. However\, the factors that govern the difficulty of these optimization problems remain unclear\, limiting the larger adoption of the technique. I will present a recent line of work from my lab that further develops these methods in both theory and practice. We establish a precise relationship between the dynamics of a nonlinear system and the conditioning of its corresponding optimization formulation. We show that the predictability of a system\, defined as the degree to which small perturbations in state influence future behavior\, impacts the number of optimization steps required for evaluation. In predictable systems\, the state trajectory can be computed in O(log2T) time\, where T is the sequence length\, a major improvement over the conventional sequential approach. In contrast\, chaotic or unpredictable systems exhibit poor conditioning\, with the consequence that parallel evaluation converges too slowly to be useful. We validate our claims through extensive experiments\, with a particular emphasis on parallelizing nonlinear RNNs and Markov chain Monte Carlo (MCMC) algorithms for Bayesian statistics. 
I will provide practical guidance on when nonlinear dynamical systems can be efficiently parallelized\, and highlight predictability as a key design principle for parallelizable models.
URL:https://arni-institute.org/event/ctn-scott-linderman/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260122T150000
DTEND;TZID=America/New_York:20260122T153000
DTSTAMP:20260403T154234Z
CREATED:20260121T163934Z
LAST-MODIFIED:20260121T163934Z
UID:2248-1769094000-1769095800@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:To kick off activities for this year\, the Continual Learning Working Group will meet on Thursday\, Jan 22 at 3pm on Zoom and in CEPSR 620.\n\nThe meeting will be brief\, and we’ll discuss our agenda and scheduling for the coming semester\, including our goals for the benchmark project.\nJoin via Zoom: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/continual-learning-working-group-11/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260120T150000
DTEND;TZID=America/New_York:20260120T160000
DTSTAMP:20260403T154234Z
CREATED:20251211T202316Z
LAST-MODIFIED:20260113T180504Z
UID:2162-1768921200-1768924800@arni-institute.org
SUMMARY:Speaker: Xaq Pitkow – ARNI WG Multi-resource-cost optimization of neural network models
DESCRIPTION:Title: Frugal Inference for Control\n\nAbstract: A key challenge in advancing artificial intelligence is achieving the right balance between utility maximization and resource use by both external movement and internal computation. While this trade-off has been studied in fully observable settings\, our understanding of resource efficiency in partially observable environments remains limited. Motivated by this challenge\, we develop a version of the POMDP framework where the information gained through inference is treated as a resource that must be optimized alongside task performance and motion effort. By solving this problem in environments described by linear-Gaussian dynamics\, we uncover fundamental principles of resource efficiency. Our study reveals a phase transition in the inference\, switching from a Bayes-optimal approach to one that strategically leaves some uncertainty unresolved. This frugal behavior gives rise to a structured family of equally effective strategies\, facilitating adaptation to later objectives and constraints overlooked during the original optimization. We illustrate the applicability of our framework and the generality of the principles we derived using two nonlinear tasks. Overall\, this work provides a foundation for a new type of rational computation that both brains and machines could use for effective but resource-efficient control under uncertainty.\n\nZoom Link: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/speaker-xaq-pitkow-arni-wg-multi-resource-cost-optimization-of-neural-network-models/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260116T113000
DTEND;TZID=America/New_York:20260116T130000
DTSTAMP:20260403T154234Z
CREATED:20260113T180942Z
LAST-MODIFIED:20260113T180942Z
UID:2191-1768563000-1768568400@arni-institute.org
SUMMARY:CTN: Nao Uchida
DESCRIPTION:Title: A normative perspective on diversity of dopamine neurons \nAbstract: TBD
URL:https://arni-institute.org/event/ctn-nao-uchida/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251219T113000
DTEND;TZID=America/New_York:20251219T130000
DTSTAMP:20260403T154234Z
CREATED:20251209T200448Z
LAST-MODIFIED:20251216T170437Z
UID:2159-1766143800-1766149200@arni-institute.org
SUMMARY:CTN: Roozbeh Kiani
DESCRIPTION:Seminar Time: 11:30am\nDate: 12/19/25\nSeminar Location: JLG\, L5-084\nHost: Tahereh Toosi\n\nTitle: Flexible decision-making: policies and rules\n\nAbstract: Flexible behavior requires flexible decision-making. We adapt seamlessly to changing environments—adjusting biases\, altering decision rules\, and inferring hidden task contexts—often without explicit cues. In this talk\, I will outline a framework that formalizes different levels of this flexibility and show how these adjustments are implemented in neural codes across the frontoparietal cortex. I will highlight three forms of decision flexibility: (1) Bias adjustments\, driven by asymmetric rewards\, shift neural activity along the decision variable axis; (2) Rule changes\, such as varying sensory weights in a multi-feature discrimination task\, produce rotational changes in the population geometry\, supporting rapid changes in decision policy; and (3) Hierarchical inference\, where animals infer hidden contexts to adapt to task structure\, is reflected in the emergence of latent variables represented in distributed subspaces.
URL:https://arni-institute.org/event/ctn-roozbeh-kiani/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251212T113000
DTEND;TZID=America/New_York:20251212T130000
DTSTAMP:20260403T154234Z
CREATED:20251209T200320Z
LAST-MODIFIED:20251209T200320Z
UID:2157-1765539000-1765544400@arni-institute.org
SUMMARY:CTN: Mehrdad Jazayeri
DESCRIPTION:Title: Adaptive problem solving in the primate frontal cortex\n\nAbstract: Humans excel at solving problems adaptively. When missing the bus to an appointment\, for instance\, we might wait for the next one\, call a taxi\, cancel\, or reschedule\, depending on the situation. This ability to assess context and choose a suitable strategy is central to intelligence\, yet its neural and computational foundations remain poorly understood. To address this gap\, we trained monkeys on a challenging decision-making task that could be solved using multiple strategies\, providing a controlled setting to study strategic flexibility. Behaviorally\, the animals performed accurately and generalized to new conditions\, but their choices were inconsistent with any single policy\, suggesting the use of internally generated strategies. Large-scale electrophysiological recordings from the dorsomedial frontal cortex revealed that population activity unfolded along distinct neural trajectories corresponding to different strategies. The structure of these trajectories—set by the organization of initial neural states and their subsequent evolution—showed that animals assessed the problem and engaged distinct\, rationally structured computational algorithms. A latent behavioral model grounded in these neural dynamics predicted the animals’ choices more accurately than any fixed-strategy model\, providing a direct link between cortical population activity and adaptive decision-making. Together\, these findings reveal a neurophysiological mechanism for strategic decision-making and offer a mechanistic understanding of the neural basis of adaptive problem solving.
URL:https://arni-institute.org/event/ctn-mehrdad-jazayeri/
LOCATION:Zuckerman Institute- Kavli Auditorium 9th Fl\, 3227 Broadway\, NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251211T150000
DTEND;TZID=America/New_York:20251211T160000
DTSTAMP:20260403T154234Z
CREATED:20251209T181210Z
LAST-MODIFIED:20251209T181210Z
UID:2095-1765465200-1765468800@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group
DESCRIPTION:Next Meeting Info\n\n\nDate: Thursday\, Dec 11\nTime: 3pm-4pm\nRoom: CEPSR 620\nZoom: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-continual-learning-working-group-4/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251210T110000
DTEND;TZID=America/New_York:20251210T120000
DTSTAMP:20260403T154234Z
CREATED:20251125T161729Z
LAST-MODIFIED:20251201T155241Z
UID:2039-1765364400-1765368000@arni-institute.org
SUMMARY:Speaker: Alan Stocker – ARNI WG Multi-resource-cost optimization of neural network models
DESCRIPTION:Alan Stocker\nProfessor of Psychology at UPenn \nTitle: Economics of temporal evidence integration \nAbstract: The temporal integration of sensory information is an important aspect of many human decision tasks. I will present results of ongoing research in my laboratory aimed at understanding the dynamic processes underlying evidence integration. In particular\, I will discuss a novel resource-rational model that treats both the representation as well as the integration and maintenance of sensory evidence as actively controlled\, performance-effort trade-off mechanisms. Validated against data from various behavioral experiments\, the model not only provides a normative explanation for observed non-linear dynamics in evidence integration but also a parsimonious explanation for individual tendencies for recency or primacy behavior. As the work is ongoing and unpublished\, I am looking forward to an engaged discussion with the audience. \nZoom Link: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/speaker-alan-stocker-arni-wg-multi-resource-cost-optimization-of-neural-network-models/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251121T113000
DTEND;TZID=America/New_York:20251121T123000
DTSTAMP:20260403T154234Z
CREATED:20251105T152923Z
LAST-MODIFIED:20251118T184741Z
UID:2030-1763724600-1763728200@arni-institute.org
SUMMARY:CTN: Karel Svoboda
DESCRIPTION:Seminar Time: 11:30am\nDate: 11/21/25\nSeminar Location: JLG\, L5-084\nHost: Ji Xia\n\nTitle: Illuminating synaptic learning \nAbstract: How do synapses in the middle of the brain know how to adjust their weight to advance a behavioral goal? This is referred to as the synaptic ‘credit assignment problem’. A large variety of synaptic learning rules have been proposed\, mainly in the context of artificial neural networks. The most powerful learning rules (e.g. back-propagation of error) are thought to be biologically implausible\, whereas the widely studied biological learning rules (Hebbian) are insufficient for goal-directed learning. I will describe ongoing work\, both experimental and theoretical\, focused on understanding learning at the level of circuits and synapses in the motor cortex.
URL:https://arni-institute.org/event/ctn-karel-svoboda/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251117T080000
DTEND;TZID=America/New_York:20251118T130000
DTSTAMP:20260403T154234
CREATED:20251112T151814Z
LAST-MODIFIED:20251112T151821Z
UID:2033-1763366400-1763470800@arni-institute.org
SUMMARY:ARNI Annual Retreat 2025
DESCRIPTION:Join us to celebrate our many accomplishments as we wrap up year two and carry our momentum into year three. \nWe anticipate engaging discussions in the working groups and panels as we explore future directions for ARNI. \nWe also want to highlight the participation of Bing Brunton\, Jim DiCarlo\, and Thomas Reardon from our External Advisory Board. \nBy registration only!
URL:https://arni-institute.org/event/arni-annual-retreat-2025/
LOCATION:NY
ORGANIZER;CN="ARNI":MAILTO:arni@columbia.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251113T143000
DTEND;TZID=America/New_York:20251113T153000
DTSTAMP:20260403T154234
CREATED:20251103T193526Z
LAST-MODIFIED:20251103T193526Z
UID:2025-1763044200-1763047800@arni-institute.org
SUMMARY:Carl Vondrick Hosts Talk with Aaron Hertzmann (Adobe)
DESCRIPTION:Aaron Hertzmann \nWhy Do Pictures Work? Explanations From Real-World Vision\nSpeaker: Aaron Hertzmann (Adobe)\nHost: Carl Vondrick\nDate: Thursday\, November 13\, 2025\nTime: 2:30 PM\nLocation: CSB 453\n\nAbstract: I outline possible answers to the long-standing question of why pictures work: why can people look at a painting or photograph\, and see a depicted subject\, rather than just marks on a page or lights on a display? Observers with no prior experience with pictures can understand some kinds of pictures\, indicating that picture understanding is not solely a product of experience or culture. I argue that picture perception can be explained as a product of several properties of real-world vision. First\, the fact that humans can understand certain real-world phenomena—refraction\, reflection\, cast shadows—as simultaneously surface phenomena but also images of an underlying cause explains why we can see pictures as depictions and not just markings. Second\, the fact that viewers can understand real-world scenes with unfamiliar combinations of objects explains our ability to understand many different styles of depiction. For example\, we can understand black-and-white photos of people because\, in real-world vision\, we could recognize a familiar person who had been painted gray. Third\, our robustness to visual defects and other difficult viewing conditions explains our ability to understand styles of pictorial textures\, like paint strokes. Extensions of these basic ideas can explain depiction in many different visual styles\, including photographic tone reproduction\, line drawings\, silhouettes\, cartoons\, painterly styles\, and more. The proposed models of picture understanding could significantly inform future analysis of perceptual mechanisms\, picture aesthetics\, and the nature of different styles of depiction.\n\nZoom Link: Upon request @arni@columbia.edu
URL:https://arni-institute.org/event/carl-vondick-hosts-talk-with-aaron-hertzmann-adobe/
LOCATION:CSB 453\, Mudd Building\, 500 W 120th Street
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251112T160000
DTEND;TZID=America/New_York:20251112T170000
DTSTAMP:20260403T154234
CREATED:20251112T151131Z
LAST-MODIFIED:20251112T151131Z
UID:2032-1762963200-1762966800@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group
DESCRIPTION:Next Meeting Info\n\n\nDate: Wednesday\, Nov 12\nTime: 4pm-5pm\nRoom: CEPSR 620\nZoom: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-continual-learning-working-group-3/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251107T160000
DTEND;TZID=America/New_York:20251107T170000
DTSTAMP:20260403T154234
CREATED:20251103T193100Z
LAST-MODIFIED:20251103T193100Z
UID:2024-1762531200-1762534800@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Continuation from prior meetings about benchmarks and competition proposals. \nZoom Link: upon request @ ARNI@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-5/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251105T140000
DTEND;TZID=America/New_York:20251105T150000
DTSTAMP:20260403T154234
CREATED:20251022T211425Z
LAST-MODIFIED:20251028T143328Z
UID:2019-1762351200-1762354800@arni-institute.org
SUMMARY:Speaker: Bryan Li - ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Bio\nBryan Li is completing his PhD in NeuroAI at the University of Edinburgh\, under the supervision of Arno Onken and Nathalie Rochefort. His main PhD project focuses on building deep learning-based encoding models of the visual cortex that accurately predict neural activity in response to arbitrary visual stimuli. Recently\, he joined Dario Farina’s lab at Imperial College London as an Encode Fellow\, working on neuromotor interfacing and decoding.\n\nTitle (https://www.biorxiv.org/content/10.1101/2025.09.16.676524v2)\nMovie-trained transformer reveals novel response properties to dynamic stimuli in mouse visual cortex\n\nAbstract\nUnderstanding how the brain encodes complex\, dynamic visual stimuli remains a fundamental challenge in neuroscience. Here\, we introduce ViV1T\, a transformer-based model trained on natural movies to predict neuronal responses in mouse primary visual cortex (V1). ViV1T outperformed state-of-the-art models in predicting responses to both natural and artificial dynamic stimuli\, while requiring fewer parameters and reducing runtime. Despite being trained exclusively on natural movies\, ViV1T accurately captured core V1 properties\, including orientation and direction selectivity as well as contextual modulation\, despite lacking explicit feedback mechanisms. ViV1T also revealed novel functional features. The model predicted a wider range of contextual responses when using natural and model-generated surround stimuli compared to traditional gratings\, with novel model-generated dynamic stimuli eliciting maximal V1 responses. ViV1T also predicted that dynamic surrounds elicited stronger contextual modulation than static surrounds. Finally\, the model identified a subpopulation of neurons that exhibit contrast-dependent surround modulation\, switching their response to surround stimuli from inhibition to excitation when contrast decreases. These predictions were validated through semi-closed-loop in vivo recordings. Overall\, ViV1T establishes a powerful\, data-driven framework for understanding how brain sensory areas process dynamic visual information across space and time.\n\nZoom link: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-frontier-models-for-neuroscience-and-behavior-working-group-2/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251105T113000
DTEND;TZID=America/New_York:20251105T123000
DTSTAMP:20260403T154234
CREATED:20251105T152843Z
LAST-MODIFIED:20251105T152843Z
UID:2028-1762342200-1762345800@arni-institute.org
SUMMARY:CTN: Yael Niv
DESCRIPTION:Seminar Time: 11:30am\nDate: Fri 11/7/25\nSeminar Location: JLG\, L5-084\nHost: Weijia Zhang\n\n\n\nTitle: Latent causes\, prediction errors\, and the organization of memory\n\nAbstract: No two events are alike. But still\, we learn\, which means that we implicitly decide what events are similar enough that experience with one can inform us about what to do in another. We have suggested that this relies on parsing of incoming information into “clusters” according to inferred hidden (latent) causes. Moreover\, we have suggested that unexpected information (that is\, a prediction error) is key to this separation into clusters. In this talk\, I will demonstrate these ideas through behavioral experiments showing evidence for clustering and illustrate the effects of prediction errors on the organization of memory. I will then tie the different findings together into a hypothesis about how information about events is organized in our brain.
URL:https://arni-institute.org/event/ctn-yael-niv/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251029T150000
DTEND;TZID=America/New_York:20251029T163000
DTSTAMP:20260403T154234
CREATED:20251008T214532Z
LAST-MODIFIED:20251015T155911Z
UID:2010-1761750000-1761755400@arni-institute.org
SUMMARY:ARNI Distinguished Seminar Series: Leila Wehbe
DESCRIPTION:Bio: Leila Wehbe is an associate professor in the Machine Learning Department and the Neuroscience Institute at Carnegie Mellon University. Her work is at the interface of cognitive neuroscience and computer science. It combines naturalistic functional imaging with machine learning both to improve our understanding of the brain and to gain insight for building better artificial systems. She is the recipient of an NSF CAREER award\, a Google faculty research award and an NIH CRCNS R01. Previously\, she was a postdoctoral researcher at UC Berkeley and obtained her PhD from Carnegie Mellon University. \nTitle: Model prediction error reveals separate mechanisms for integrating multi-modal information in the human cortex \nAbstract: Language comprehension engages much of the human cortex\, extending beyond the canonical language system. Yet in everyday life\, language unfolds alongside other modalities\, such as vision\, that recruit these same distributed areas. Because language is often studied in isolation\, we still know little about how the brain coordinates and integrates multimodal representations. In this talk\, we use fMRI data from participants viewing 37 hours of TV series and movies to model the interaction of auditory and visual input. Using encoding models that predict brain activity from each stream\, we introduce a framework based on prediction error that reveals how individual brain regions combine multimodal information.
URL:https://arni-institute.org/event/arni-distinguished-seminar-series-leila-wehbe/
LOCATION:Zuckerman Institute- Kavli Auditorium 9th Fl\, 3227 Broadway\, NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251027T150000
DTEND;TZID=America/New_York:20251027T160000
DTSTAMP:20260403T154234
CREATED:20251022T210922Z
LAST-MODIFIED:20251022T210922Z
UID:2018-1761577200-1761580800@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group
DESCRIPTION:Next Meeting Info\n\n\nDate: Monday\, October 27\nTime: 3-4pm\nRoom: CEPSR 620\n\nZoom: Upon request @arni@columbia.edu
URL:https://arni-institute.org/event/arni-continual-learning-working-group-2/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251024T140000
DTEND;TZID=America/New_York:20251024T150000
DTSTAMP:20260403T154234
CREATED:20251022T132648Z
LAST-MODIFIED:20251022T132648Z
UID:2016-1761314400-1761318000@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Continuation from prior meetings about benchmarks and competition proposals. \nZoom Link: upon request @ ARNI@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-4/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251024T113000
DTEND;TZID=America/New_York:20251024T130000
DTSTAMP:20260403T154234
CREATED:20250917T182936Z
LAST-MODIFIED:20251021T182719Z
UID:1992-1761305400-1761310800@arni-institute.org
SUMMARY:CTN: Anna Schapiro
DESCRIPTION:Title: Learning representations of specifics and generalities over time\n\nAbstract: There is a fundamental tension between storing discrete traces of individual experiences\, which allows recall of particular moments in our past without interference\, and extracting regularities across these experiences\, which supports generalization and prediction in similar situations in the future. One influential proposal for how the brain resolves this tension is that it separates the processes anatomically into Complementary Learning Systems\, with the hippocampus rapidly encoding individual episodes and the neocortex slowly extracting regularities over days\, months\, and years. But this does not explain our ability to learn and generalize from new regularities in our environment quickly\, often within minutes. We have put forward a neural network model of the hippocampus that suggests that the hippocampus itself may contain complementary learning systems\, with one pathway specializing in the rapid learning of regularities and a separate pathway handling the region’s classic episodic memory functions. This proposal has broad implications for how we rapidly learn novel information of specific and generalized types\, which we test across statistical learning\, inference\, and category learning paradigms. We also explore how this system interacts with slower-learning neocortical memory systems\, with empirical and modeling investigations into how hippocampal replay shapes neocortical representations during sleep. Together\, the work helps us understand how structured information in our environment is initially encoded and how it then transforms over time.\nZoom: Available upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/ctn-anna-schapiro/
LOCATION:Zuckerman Institute- Kavli Auditorium 9th Fl\, 3227 Broadway\, NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251021T130000
DTEND;TZID=America/New_York:20251021T150000
DTSTAMP:20260403T154234
CREATED:20251008T213444Z
LAST-MODIFIED:20251008T221401Z
UID:2007-1761051600-1761058800@arni-institute.org
SUMMARY:Speaker: Jascha Achterberg – ARNI WG Multi-resource-cost optimization of neural network models
DESCRIPTION:Title:\nBuilding the brain’s efficient system-level architecture: optimisations across space\, time\, and multiple regions \nAbstract:\nThe computations a brain can perform are fundamentally constrained by physical realities: energetic resources are limited\, and time is precious. To understand why the brain works the way it does\, we must understand its function in the context of these constraints. Prior modeling work has successfully demonstrated how spatial energetic constraints drive structure-function co-optimization\, giving rise to many of the architectural features we observe across areas of neuroscience. By incorporating such physical constraints\, we can build complex systems-level models that are meaningfully constrained by physically measurable factors rather than arbitrary design choices. \nIn this talk\, I will expand on these spatial frameworks by introducing new work on temporal processing and signal precision constraints in neural networks. I will demonstrate how different optimization strategies within individual regions can be combined in heterogeneous multi-region models\, revealing how the brain trades off resource use across tasks and situations. Finally\, I will show how space and time interact in surprising ways to achieve efficient computation — principles that apply not only to the brain but to any large-scale distributed computing system. Together\, these advances bring us closer to understanding the general principles that enable sophisticated intelligence to emerge from physically and energetically constrained computing systems.
URL:https://arni-institute.org/event/speaker-jascha-achterberg-arni-wg-multi-resource-cost-optimization-of-neural-network-models/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
END:VCALENDAR