BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20270314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20271107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260206T113000
DTEND;TZID=America/New_York:20260206T130000
DTSTAMP:20260404T113309Z
CREATED:20260204T163719Z
LAST-MODIFIED:20260204T163719Z
UID:2336-1770377400-1770382800@arni-institute.org
SUMMARY:CTN: Herbert Zheng Wu
DESCRIPTION:Herbert Zheng Wu \nTitle: Neural Basis of Leader–Follower Dynamics in Cooperative Behavior \n\nAbstract: Cooperation allows social species to achieve outcomes that individuals cannot accomplish alone. Even in simple groups\, cooperative behavior often depends on complementary social roles such as leaders and followers\, yet the neural computations supporting these dynamic relationships are not well understood. Our lab investigates how the brain represents social partners\, coordinates shared goals\, and flexibly allocates control across individuals. Using a new mouse paradigm that captures naturalistic leader–follower behavior during joint foraging\, we combine large-scale neural recording\, circuit perturbation\, and computational modeling to dissect the mechanisms that enable cooperative decision-making. We find that activity in the medial prefrontal cortex reflects both the individual’s role and the evolving social context\, integrating self- and partner-related information to guide coordinated action. To probe latent strategies of the animals\, we developed a multi-agent inverse reinforcement learning framework that infers the individual goals governing joint behavior\, which closely mirror and are decodable from prefrontal activity. Together\, these studies aim to reveal general principles by which distributed brain networks support higher-order social cognition and collective behavior.
URL:https://arni-institute.org/event/ctn-herbert-zheng-wu/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260206T160000
DTEND;TZID=America/New_York:20260206T170000
DTSTAMP:20260404T113309Z
CREATED:20260204T163514Z
LAST-MODIFIED:20260204T163514Z
UID:2335-1770393600-1770397200@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:1) Charlotte onboarded vision and audio MNIST dataloaders. She’s focusing on predictive coding for sequential tasks (e.g.\, audio data/moving MNIST).\n2) Todd built multi-modal predictive coding baselines showing that unsupervised representation learning is possible here.\n3) Eivinas proposed a backprop-based autoencoder and a Hebbian-based autoencoder (maybe Exponentiated Gradients?).\n4) Nihal offered to onboard 3D MNIST and work on Hebbian learning rules. \nThere was also discussion of software engineering conventions (e.g.\, GitHub practices\, configuration tooling\, etc.). \nVirtual Link: request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-6/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260213T113000
DTEND;TZID=America/New_York:20260213T130000
DTSTAMP:20260404T113309Z
CREATED:20260211T195441Z
LAST-MODIFIED:20260211T201644Z
UID:2351-1770982200-1770987600@arni-institute.org
SUMMARY:CTN: SueYeon Chung
DESCRIPTION:SueYeon Chung \nTitle: Computing with Neural Manifolds: A Multi-Scale Framework for Understanding Biological and Artificial Neural Networks \nAbstract: Recent breakthroughs in experimental neuroscience and machine learning have opened new frontiers in understanding the computational principles governing neural circuits and artificial neural networks (ANNs). Both biological and artificial systems exhibit an astonishing degree of orchestrated information processing capabilities across multiple scales – from the microscopic responses of individual neurons to the emergent macroscopic phenomena of cognition and task functions. At the mesoscopic scale\, the structures of neuron population activities manifest themselves as neural representations. Neural computation can be viewed as a series of transformations of these representations through various processing stages of the brain. The primary focus of my lab’s research is to develop theories of neural representations that describe the principles of neural coding and\, importantly\, capture the complex structure of real data from both biological and artificial systems. \nIn this talk\, I will present three related approaches that leverage techniques from statistical physics\, machine learning\, and geometry to study the multi-scale nature of neural computation. First\, I will introduce new theories based on statistical physics and convex geometry that connect complex geometric structures that arise from neural responses (i.e.\, neural manifolds) to the efficiency of neural representations in implementing a task. Second\, I will employ these theories to analyze how these representations evolve across scales\, shaped by the properties of single neurons\, learning dynamics\, and the transformations across distinct brain regions. Finally\, I will show how these insights extend efficient coding principles beyond early sensory stages\, linking representational geometry to efficient task implementations. 
 This framework not only helps interpret and compare models of brain data but also offers a principled approach to designing ANN models for higher-level vision. This perspective opens new opportunities for using neuroscience-inspired principles to guide the development of intelligent systems.
URL:https://arni-institute.org/event/ctn-sueyeon-chung/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260219T150000
DTEND;TZID=America/New_York:20260219T160000
DTSTAMP:20260404T113309Z
CREATED:20260217T160830Z
LAST-MODIFIED:20260217T160830Z
UID:2367-1771513200-1771516800@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group Meeting
DESCRIPTION:We are aiming to accelerate progress on the benchmark\, and will demo a working prototype very soon. If you are interested in contributing to our project\, we strongly encourage you to participate so that we can finalize and implement our plan of action for the coming few months.
URL:https://arni-institute.org/event/arni-continual-learning-working-group-meeting/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260220T113000
DTEND;TZID=America/New_York:20260220T130000
DTSTAMP:20260404T113309Z
CREATED:20260217T161139Z
LAST-MODIFIED:20260220T192450Z
UID:2368-1771587000-1771592400@arni-institute.org
SUMMARY:CTN: Mitra Javadzadeh
DESCRIPTION:Title: Inter-area connectivity and the emergence of multi-timescale cortical dynamics \nAbstract: The brain generates behaviors spanning a wide range of timescales\, from rapid sensory responses to the slow integrative processes underlying cognition. How does the anatomical connectivity of the neocortex give rise to such flexible\, multi-timescale dynamics? In this talk\, I will examine how the parcellation of the neocortex into specialized areas\, coupled through reciprocal connections\, structures its dynamical landscape. \nIn Part I\, I will present experimental and modeling work combining simultaneous multi-area recordings in mouse visual cortex with focal optogenetic perturbations and biologically constrained latent circuit models. We show that reciprocal excitatory connections between primary (V1) and higher visual cortex (LM) generate an approximate line attractor in their joint dynamics. These dynamics selectively slow the decay of activity patterns that encode stimulus features consistently across areas\, promoting the gradual emergence of cross-area consensus. \nIn Part II\, I extend these findings within an analytical framework for balanced multi-area networks. We show how the structure and asymmetry of feedforward and feedback connectivity between cortical areas tune their contribution to globally consistent activity patterns. This framework makes model-free predictions on the organization of timescales across the neocortex. Furthermore\, we validate these predictions with new experiments using switchable optoGPCRs to selectively disrupt long-range cortical communication. \nTogether\, these results link anatomical connectivity to collective cortical computation\, providing a theory for how distributed brain areas reconcile information through structured multi-area dynamics.
URL:https://arni-institute.org/event/ctn-mitra-javadzadeh/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260220T160000
DTEND;TZID=America/New_York:20260220T170000
DTSTAMP:20260404T113309Z
CREATED:20260219T150147Z
LAST-MODIFIED:20260219T150154Z
UID:2391-1771603200-1771606800@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Continuation from prior meetings \nZoom Link- Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-7/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260227T113000
DTEND;TZID=America/New_York:20260227T130000
DTSTAMP:20260404T113309Z
CREATED:20260217T161738Z
LAST-MODIFIED:20260224T165808Z
UID:2373-1772191800-1772197200@arni-institute.org
SUMMARY:CTN: Denise Cai
DESCRIPTION:Denise Cai \nTitle: Dynamic neural ensembles support memory stability and flexibility across the lifetime \nAbstract: Creating stable memories is critical for survival. An animal relies on past learning to navigate its environment\, avoid dangerous situations\, and find needed resources. Because the environment is dynamic\, stable memories must be updated with new information to enable responses to changing threats (a specific danger) and rewards (such as food and water). The brain circuits involved in memory and learning require both stability and flexibility. We found that traumatic experiences can alter past memories and produce long-lasting changes in how future memories are encoded. This has important implications for how the brain stably stores and flexibly updates memories across the lifetime.
URL:https://arni-institute.org/event/ctn-denise-cai/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260302T150000
DTEND;TZID=America/New_York:20260302T160000
DTSTAMP:20260404T113309Z
CREATED:20260224T154023Z
LAST-MODIFIED:20260224T154023Z
UID:2399-1772463600-1772467200@arni-institute.org
SUMMARY:Speaker: Jorge Menendez – ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Date and time: Monday\, March 2\, from 3–4 PM.\nMeeting Link: Upon request @arni@columbia.edu\nSpeakers: Jorge Menendez\, Research Scientist at CTRL-Labs\, and Trung Le\, postdoc in Prof. Chethan Pandarinath’s group. \nTitle: A generic non-invasive neuromotor interface for human-computer interaction\nSince the advent of computing\, humans have sought computer input technologies that are expressive\, intuitive and universal. While diverse modalities have been developed\, including keyboards\, mice and touchscreens\, they require interaction with a device that can be limiting\, especially in on-the-go scenarios. Gesture-based systems use cameras or inertial sensors to avoid an intermediary device\, but tend to perform well only for unobscured movements. By contrast\, brain–computer or neuromotor interfaces that directly interface with the body’s electrical signalling have been imagined to solve the interface problem\, but high-bandwidth communication has been demonstrated only using invasive interfaces with bespoke decoders designed for single individuals. Here\, we describe the development of a generic non-invasive neuromotor interface that enables computer input decoded from surface electromyography (sEMG). We developed a highly sensitive\, easily donned sEMG wristband and a scalable infrastructure for collecting training data from thousands of consenting participants. Together\, these data enabled us to develop generic sEMG decoding models that generalize across people. Test users demonstrate a closed-loop median performance of gesture decoding of 0.66 target acquisitions per second in a continuous navigation task\, 0.88 gesture detections per second in a discrete-gesture task and handwriting at 20.9 words per minute. We demonstrate that the decoding performance of handwriting models can be further improved by 16% by personalizing sEMG decoding models. 
To our knowledge\, this is the first high-bandwidth neuromotor interface with performant out-of-the-box generalization across people. \nTitle: SPINT: Spatial Permutation-Invariant Neural Transformer for Consistent Intracortical Motor Decoding\nIntracortical Brain-Computer Interfaces (iBCI) decode behavior from neural population activity to restore motor functions and communication abilities in individuals with motor impairments. A central challenge for long-term iBCI deployment is the nonstationarity of neural recordings\, where the composition and tuning profiles of the recorded populations are unstable across recording sessions. Existing approaches attempt to address this issue by explicit alignment techniques; however\, they rely on fixed neural identities and require test-time labels or parameter updates\, limiting their generalization across sessions and imposing additional computational burden during deployment. In this work\, we address the problem of cross-session nonstationarity in long-term iBCI systems and introduce SPINT – a Spatial Permutation-Invariant Neural Transformer framework for behavioral decoding that operates directly on unordered sets of neural units. Central to our approach is a novel context-dependent positional embedding scheme that dynamically infers unit-specific identities\, enabling flexible generalization across recording sessions. SPINT supports inference on variable-size populations and allows few-shot\, gradient-free adaptation using a small amount of unlabeled data from the test session. We evaluate SPINT on three multi-session datasets from the FALCON Benchmark\, covering continuous motor decoding tasks in human and non-human primates. SPINT demonstrates robust cross-session generalization\, outperforming existing zero-shot and few-shot unsupervised baselines while eliminating the need for test-time alignment and fine-tuning. 
Our work contributes an initial step toward a robust and scalable neural decoding framework for long-term iBCI applications.
URL:https://arni-institute.org/event/speaker-jorge-menendez-arni-frontier-models-for-neuroscience-and-behavior-working-group/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260304T090000
DTEND;TZID=America/New_York:20260304T180000
DTSTAMP:20260404T113309Z
CREATED:20260224T170430Z
LAST-MODIFIED:20260224T170430Z
UID:2401-1772614800-1772647200@arni-institute.org
SUMMARY:Columbia University AI Summit - Reimagining Teaching and Learning in the Age of AI: An AI and Education Forum
DESCRIPTION:Date: Wednesday\, March 4\, 2026\nTime: 9:00 AM – 6:30 PM\nLocation: Faculty Room\, Low Memorial Library\nAddress: 535 W. 116th Street\, New York\, NY 10027 – Visitor Information \nOverview\nHow does a university learn and adapt as AI becomes woven into teaching\, learning\, and intellectual life? This program invites both celebration of innovative experimentation and collective reflection on the challenging questions ahead.\nReimagining Teaching and Learning in the Age of AI brings together educational and school leadership\, faculty\, students\, and invited experts to listen\, examine\, and explore how AI is influencing teaching and learning at Columbia. The program will explore what is already unfolding in today’s classrooms\, how the University is building responsible foundations for AI in education\, and the shared questions that will shape higher education. \nThe goal of the forum is to create space for dialogue and reflection across the Columbia community through a range of sessions\, including student and faculty panels\, interactive demonstrations\, a national keynote\, and a student-led debate. Rather than offering definitive answers\, we seek to deepen understanding of how the University can evolve while preserving its core educational mission in an AI-driven landscape. \nRegister and more information: HERE
URL:https://arni-institute.org/event/columbia-university-ai-summit-reimagining-teaching-and-learning-in-the-age-of-ai-an-ai-and-education-forum/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260305T150000
DTEND;TZID=America/New_York:20260305T160000
DTSTAMP:20260404T113309Z
CREATED:20260303T201102Z
LAST-MODIFIED:20260303T201102Z
UID:2407-1772722800-1772726400@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group Meeting
DESCRIPTION:Next session March 5th
URL:https://arni-institute.org/event/arni-continual-learning-working-group-meeting-2/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260306T113000
DTEND;TZID=America/New_York:20260306T130000
DTSTAMP:20260404T113309Z
CREATED:20260211T211748Z
LAST-MODIFIED:20260303T195851Z
UID:2354-1772796600-1772802000@arni-institute.org
SUMMARY:CTN: Andreas Tolias
DESCRIPTION:Title: Foundation models of the brain \nAbstract: ‘You … your memories and ambitions\, your sense of personal identity and free will\, are in fact no more than the behavior of a vast assembly of nerve cells …’ Crick’s words capture the profound challenge of decrypting the neural code. This challenge has long been hindered by our limited ability to record activity from large neuronal populations under the complex\, variable conditions in which brains evolve\, and by our limited capacity to model the intricate relationships between stimuli\, behaviors\, and neural activity. Recent breakthroughs are starting to overcome these barriers. Cutting-edge technologies now enable large-scale recordings\, while AI can construct predictive brain models that link stimuli\, neural activity\, and behavior. These digital twins open the door to limitless in silico experiments\, testing theories that are otherwise impossible at scale in living brains. I will discuss our work in creating these digital twins and uncovering neural representation mechanisms\, which we validate with closed-loop experiments. \nZoom link: Request @arni@columbia.edu
URL:https://arni-institute.org/event/ctn-andreas-tolias/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260312T150000
DTEND;TZID=America/New_York:20260312T163000
DTSTAMP:20260404T113309Z
CREATED:20260217T161352Z
LAST-MODIFIED:20260323T184008Z
UID:2370-1773327600-1773333000@arni-institute.org
SUMMARY:Speaker: Xuexin Wei ARNI WG Multi-resource-cost optimization of neural network models
DESCRIPTION:Title: Constraints of efficient neural computation \nAbstract: Neural systems adapt to the statistical structure of the environment to support behavior. While it is generally recognized that such adaptation is subject to various biological constraints (such as noise\, metabolism\, wiring cost)\, how these constraints determine the optimal neural computation remains unclear. For the first part of this talk\, I will discuss theories of efficient coding based on consideration of metabolic cost and neural noise. For the second part\, I will present ongoing work on how the geometry of the stimulus manifold shapes the structure of neural code. In particular\, using the processing of heading direction as an example\, I will show that the asymmetry of the stimulus manifold naturally accounts for key properties of heading direction encoding in macaque MST.
URL:https://arni-institute.org/event/speaker-xuexin-wei-arni-wg-multi-resource-cost-optimization-of-neural-network-models/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260320T113000
DTEND;TZID=America/New_York:20260320T130000
DTSTAMP:20260404T113309Z
CREATED:20260217T161957Z
LAST-MODIFIED:20260217T161957Z
UID:2376-1774006200-1774011600@arni-institute.org
SUMMARY:CTN: Farzaneh Najafi
DESCRIPTION:
URL:https://arni-institute.org/event/ctn-farzaneh-najafi/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260323T150000
DTEND;TZID=America/New_York:20260323T160000
DTSTAMP:20260404T113309Z
CREATED:20260303T195824Z
LAST-MODIFIED:20260303T195824Z
UID:2404-1774278000-1774281600@arni-institute.org
SUMMARY:Speaker: Katherine Xu - Language and Vision Working Group
DESCRIPTION:Title: Are Vision-Language Models Checking or Looking?\n\nAbstract:\nToday’s AI vision systems are trained on vast amounts of data\, yet it remains unclear whether they simply retrieve memorized answers or actively reason. We conjecture that hallucinations and limited creativity in these models stem from an over-reliance on superficial “checking” rather than active “looking.” Checking retrieves the most probable memorized association\, which often fails when novel inputs mismatch stored patterns. In contrast\, looking involves reasoning on the fly by iteratively sampling information\, revising interpretations\, and integrating evidence across modalities. First\, I will share our recent work on Vibe Spaces for creatively connecting visual concepts. Second\, I will propose visual humor as a lens to probe these cross-modal reasoning deficits. I will conclude with early findings from my ongoing research to open a discussion on potential collaborative directions for our working group.\n\nZoom: upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/speaker-katherine-xu-language-and-vision-working-group/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260324T150000
DTEND;TZID=America/New_York:20260324T170000
DTSTAMP:20260404T113309Z
CREATED:20260224T152401Z
LAST-MODIFIED:20260224T152401Z
UID:2397-1774364400-1774371600@arni-institute.org
SUMMARY:Speaker: Vijay Balasubramanian ARNI WG Multi-resource-cost optimization of neural network models
DESCRIPTION:Title and Abstract: TBD
URL:https://arni-institute.org/event/speaker-vijay-balasubramanian-arni-wg-multi-resource-cost-optimization-of-neural-network-models/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260327T110000
DTEND;TZID=America/New_York:20260327T120000
DTSTAMP:20260404T113309Z
CREATED:20260217T161534Z
LAST-MODIFIED:20260324T150421Z
UID:2372-1774609200-1774612800@arni-institute.org
SUMMARY:Lecture Series in AI: Richard Zemel
DESCRIPTION:General website \nTitle: Integrating Past and Present in Continual Learning \nAbstract: Continual learning aims to bridge the gap between typical human and machine-learning environments. The continual setting does not have separate training and testing phases\; instead\, models are evaluated online while learning novel concepts and tasks. The most capable current AI systems struggle to learn new knowledge sequentially without forgetting the old. Challenging research questions include how to rapidly assess a learner system’s abilities and how to most efficiently train it to improve on a sequence of tasks. I will describe recent progress on these questions across various research groups in ARNI\, our NSF AI Institute for Artificial and Natural Intelligence. Finally\, we will consider open issues and challenges in continual learning. \nBio: Richard Zemel is the Trianthe Dakolias Professor of Engineering and Applied Science in the Computer Science Department at Columbia University. \nHe is the Director of the NSF AI Institute for Artificial and Natural Intelligence (ARNI)\, and was the co-founder and inaugural Research Director of the Vector Institute for Artificial Intelligence. His awards include an AI Lifetime Achievement Award (CAIA) and a Pioneer of AI Award (NVIDIA). His research contributions include foundational work on systems that learn useful representations of data with little or no supervision\; graph-based machine learning\; and algorithms for fair and robust machine learning.
URL:https://arni-institute.org/event/lecture-series-in-ai-richard-zemel/
LOCATION:Davis Auditorium\, 530 W 120th St\, New York\, NY 10027\, New York\, NY\, 10027
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260327T160000
DTEND;TZID=America/New_York:20260327T170000
DTSTAMP:20260404T113309Z
CREATED:20260324T150404Z
LAST-MODIFIED:20260324T150404Z
UID:2411-1774627200-1774630800@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Continuation of prior meetings. \nZoom: Upon request @arni@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-8/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260330T150000
DTEND;TZID=America/New_York:20260330T160000
DTSTAMP:20260404T113309Z
CREATED:20260331T161001Z
LAST-MODIFIED:20260331T161001Z
UID:2422-1774882800-1774886400@arni-institute.org
SUMMARY:Speaker: Josue Ortega Caro – ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Time: Monday\, March 30\, 3 PM ET\n\nTitle: Large-scale models for spatiotemporal data\nSpeaker: Josue Ortega Caro https://josueortc.github.io/\nAbstract: Spatiotemporal and multimodal datasets contain structured variability distributed across space\, time\, and measurement modality\, motivating modeling approaches that can learn representations directly from large-scale data. Inspired by video foundation models\, we study how the masked autoencoder training objective can learn shared structure across heterogeneous observations while preserving modality-specific information\, and how training these models requires multiple engineering methods for scaling. Furthermore\, we show that self-attention supports the emergence of interpretable structure by decomposing attention maps based on the variability across samples. These results suggest that large-scale self-supervised learning provides a unified approach for modeling high-dimensional dynamical systems while enabling interpretation of the learned representations.
URL:https://arni-institute.org/event/speaker-josue-ortega-caro-arni-frontier-models-for-neuroscience-and-behavior-working-group/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260409T150000
DTEND;TZID=America/New_York:20260409T160000
DTSTAMP:20260404T113309Z
CREATED:20260327T173721Z
LAST-MODIFIED:20260327T173721Z
UID:2418-1775746800-1775750400@arni-institute.org
SUMMARY:Speaker: Mengye Ren - ARNI Continual Learning Working Group Meeting
DESCRIPTION:Mengye Ren
URL:https://arni-institute.org/event/speaker-mengye-ren-arni-continual-learning-working-group-meeting/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260410T160000
DTEND;TZID=America/New_York:20260410T170000
DTSTAMP:20260404T113309Z
CREATED:20260330T140055Z
LAST-MODIFIED:20260330T140055Z
UID:2421-1775836800-1775840400@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Continuation of prior meetings. \nZoom: Upon request @arni@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-9/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260414T140000
DTEND;TZID=America/New_York:20260414T150000
DTSTAMP:20260404T113309Z
CREATED:20260402T193254Z
LAST-MODIFIED:20260402T193254Z
UID:2428-1776175200-1776178800@arni-institute.org
SUMMARY:CTN: Jack Lindsey (Anthropic)
DESCRIPTION:Title: The inner lives of language models \nAbstract: In recent years\, LLMs have evolved from bad text completion engines\, to decent chatbots\, to digital genies that work miracles on your computer (while making the occasional catastrophic error). The increasing sophistication of AI models’ behavior has been accompanied by a commensurate enrichment of their internal representations and computations. In this talk\, I’ll give an overview of what’s known about LLM cognition\, and the ways in which it emulates components of human psychology: emotional reactions\, strategic manipulation\, and forms of introspection. I’ll also cover aspects of LLM behavior that are fundamentally un-human-like\, owing to features of their architecture and training process\, and how these give rise to odd failure modes\, for instance a weakly anchored sense of self. Finally\, I’ll discuss the urgency of addressing pathologies\, both human-like and alien\, of LLM psychology\, and some ideas for doing so. \nThe talk is in-person. If you do not have card access to the Jerome L. Greene Science Center building\, you can email Arianna Pepin <ap4287@columbia.edu> to be added to the guest list for the seminar.
URL:https://arni-institute.org/event/ctn-jack-lindsey-anthropic/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260427T150000
DTEND;TZID=America/New_York:20260427T160000
DTSTAMP:20260404T113309Z
CREATED:20260402T192942Z
LAST-MODIFIED:20260402T192942Z
UID:2425-1777302000-1777305600@arni-institute.org
SUMMARY:Speaker: TBD – ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:
URL:https://arni-institute.org/event/speaker-tbd-arni-frontier-models-for-neuroscience-and-behavior-working-group/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260427T150000
DTEND;TZID=America/New_York:20260427T160000
DTSTAMP:20260404T113309Z
CREATED:20260402T193102Z
LAST-MODIFIED:20260402T193102Z
UID:2426-1777302000-1777305600@arni-institute.org
SUMMARY:Speaker: Ziwei (Sara) Gong - ARNI Language and Vision Working Group
DESCRIPTION:Title: Decoding Human Emotions: From Psychological Theories to Multimodal NLP Models\nAbstract: Understanding and modeling human emotions is essential for natural language processing (NLP) applications\, from conversational AI to mental health assessment. This talk explores the intersection of emotion theory\, dataset development\, and multimodal machine learning\, highlighting key challenges and innovations in emotion recognition. We discuss the alignment of psychological emotion frameworks with computational models\, strategies for improving multimodal emotion recognition\, and advances in self-supervised learning for low-resource languages. Additionally\, we examine how multimodal signals enhance model performance and interpretability.
URL:https://arni-institute.org/event/speaker-ziwei-sara-gong-arni-language-and-vision-working-group/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260512T130000
DTEND;TZID=America/New_York:20260512T190000
DTSTAMP:20260404T113309Z
CREATED:20260327T171344Z
LAST-MODIFIED:20260327T171344Z
UID:2415-1778590800-1778612400@arni-institute.org
SUMMARY:Memory\, Neuroscience and AI: Zuckerman Institute’s Local Circuits Symposium
DESCRIPTION:Register Here! \nHow are memories formed\, organized\, and used to guide behavior? And what can artificial intelligence teach us about how the brain remembers? \nJoin faculty and early-career researchers from across Columbia for the Local Circuits symposium\, exploring the science of memory across biological and artificial systems. Talks will span systems and cognitive neuroscience\, machine learning\, and theoretical modeling\, examining how brain circuits encode and retrieve memories and how AI is helping researchers probe these processes in new ways. \nPart of the Zuckerman Institute’s Local Circuits series\, this symposium brings together researchers from across the university to spark collaboration around mind\, brain\, and behavior. \nAll Columbia ID holders are welcome. Registration is required. \nPresented by the Alan Kanzer Center for Cognition and Reasoning \nOpening Remarks:\nAngela V. Olinto\, Provost of the University; Professor of Astronomy and of Physics\, Columbia University \nSpeakers include:\nChris Baldassano\, PhD\, Associate Professor of Psychology\, Columbia University\nChristine Denny\, PhD\, Associate Professor of Clinical Neurobiology (in Psychiatry)\, Columbia University Irving Medical Center\nStefano Fusi\, PhD\, Professor of Neuroscience\, Principal Investigator in the Zuckerman Institute\, Columbia University\nScott Small\, MD\, Boris and Rose Katz Professor of Neurology\, Director of the Alzheimer’s Disease Research Center\, Columbia University Irving Medical Center\nKim Stachenfeld\, PhD\, Senior Research Scientist at Google DeepMind in NYC and Adjunct Assistant Professor at the Center for Theoretical Neuroscience\, Columbia University\nRichard Zemel\, PhD\, Trianthe Dakolias Professor of Engineering and Applied Science; Professor of Computer Science; Director of the NSF AI Institute for Artificial and Natural Intelligence (ARNI)\, Columbia University \nModerator:\nDaphna Shohamy\, PhD\, Kavli Professor of Brain Science; 
Director of the Zuckerman Institute; Co-director of the Kavli Institute for Brain Science\, Columbia University
URL:https://arni-institute.org/event/memory-neuroscience-and-ai-zuckerman-institutes-local-circuits-symposium/
LOCATION:Zuckerman Institute – Kavli Auditorium\, 9th Fl\, 3227 Broadway\, NY
ORGANIZER;CN="Zuckerman Institute":MAILTO:events@zi.columbia.edu
END:VEVENT
END:VCALENDAR