BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARNI
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20270314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20271107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260407T130000
DTEND;TZID=America/New_York:20260407T150000
DTSTAMP:20260418T131817Z
CREATED:20260406T160419Z
LAST-MODIFIED:20260406T191708Z
UID:2433-1775566800-1775574000@arni-institute.org
SUMMARY:Speaker: Hadi Vafaii - ARNI WG Multi-resource-cost optimization of neural network models
DESCRIPTION:Location: ZI L3-079\nTime: 1:00pm \nTitle: Metabolic cost of information processing in Poisson variational autoencoders \nAbstract: Computation in biological systems is fundamentally energy-constrained\, yet standard theories of computation treat energy as freely available. Here\, we argue that variational free energy minimization under a Poisson assumption offers a principled path toward an energy-aware theory of computation. Our key observation is that the Kullback-Leibler (KL) divergence term in the Poisson free energy objective becomes proportional to the prior firing rates of model neurons\, yielding an emergent metabolic cost term that penalizes high baseline activity. This structure couples an abstract information-theoretic quantity — the coding rate — to a concrete biophysical variable — the firing rate — which enables a trade-off between coding fidelity and energy expenditure. Such a coupling arises naturally in the Poisson variational autoencoder (P-VAE\; a brain-inspired generative model that encodes inputs as discrete spike counts and recovers a spiking form of sparse coding as a special case) but is absent from standard Gaussian VAEs. To demonstrate that this metabolic cost structure is unique to the Poisson formulation\, we compare the P-VAE against GReLU-VAE\, a Gaussian VAE with ReLU rectification applied to latent samples\, which controls for the non-negativity constraint. Across a systematic sweep of the KL term weighting coefficient β and latent dimensionality\, we find that increasing β monotonically increases sparsity and reduces average spiking activity in the P-VAE. In contrast\, GReLU-VAE representations remain unchanged\, confirming that the effect is specific to Poisson statistics rather than a byproduct of non-negative representations. These results establish Poisson variational inference as a promising foundation for a resource-constrained theory of computation.
URL:https://arni-institute.org/event/speaker-hadi-vafaii-arni-wg-multi-resource-cost-optimization-of-neural-network-models/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260409T150000
DTEND;TZID=America/New_York:20260409T160000
DTSTAMP:20260418T131817Z
CREATED:20260327T173721Z
LAST-MODIFIED:20260408T191305Z
UID:2418-1775746800-1775750400@arni-institute.org
SUMMARY:Speaker: Mengye Ren - ARNI Continual Learning Working Group Meeting
DESCRIPTION:Mengye Ren \nMengye will also be giving a talk on continual learning at the Zemel group meeting an hour prior (at 2pm) that working group attendees are welcome to join if interested. Here’s the abstract of his talk:\n\nToday’s AI models primarily acquire knowledge through offline\, i.i.d. learning. While in-context learning offers some capacity for online adaptation\, enabling models to continue learning at deployment or even from scratch through experiential streams remains a crucial question. In this talk\, I will introduce approaches toward always-learning machines\, beginning with video streams. We find that event segmentation—clustering event concepts in lifelong video—enables effective visual representation learning and grouping from scratch. We also show that pretrained VLMs can introspect over past memory and form event clusters through attention\, building hierarchical episodic memory for video question answering. Lastly\, I will discuss my recent theory linking continual learning and world modeling to self-consciousness.
URL:https://arni-institute.org/event/speaker-mengye-ren-arni-continual-learning-working-group-meeting/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260410T160000
DTEND;TZID=America/New_York:20260410T170000
DTSTAMP:20260418T131817Z
CREATED:20260330T140055Z
LAST-MODIFIED:20260330T140055Z
UID:2421-1775836800-1775840400@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Continuation of prior meetings. \nZoom: Upon request at arni@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-9/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260414T140000
DTEND;TZID=America/New_York:20260414T150000
DTSTAMP:20260418T131817Z
CREATED:20260402T193254Z
LAST-MODIFIED:20260402T193254Z
UID:2428-1776175200-1776178800@arni-institute.org
SUMMARY:CTN: Jack Lindsey (Anthropic)
DESCRIPTION:Title: The inner lives of language models \nAbstract: In recent years\, LLMs have evolved from bad text completion engines\, to decent chatbots\, to digital genies that work miracles on your computer (while making the occasional catastrophic error). The increasing sophistication of AI models’ behavior has been accompanied by a commensurate enrichment of their internal representations and computations. In this talk\, I’ll give an overview of what’s known about LLM cognition\, and the ways in which it emulates components of human psychology: emotional reactions\, strategic manipulation\, and forms of introspection. I’ll also cover aspects of LLM behavior that are fundamentally un-human-like\, owing to features of their architecture and training process\, and how these give rise to odd failure modes—for instance\, a weakly anchored sense of self. Finally\, I’ll discuss the urgency of addressing pathologies\, both human-like and alien\, of LLM psychology\, and some ideas for doing so. \nThe talk is in-person. If you do not have card access to the Jerome L. Greene Science Center building\, you can email Arianna Pepin <ap4287@columbia.edu> to be added to the guest list for the seminar.
URL:https://arni-institute.org/event/ctn-jack-lindsey-anthropic/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260424T150000
DTEND;TZID=America/New_York:20260424T163000
DTSTAMP:20260418T131817Z
CREATED:20260406T154309Z
LAST-MODIFIED:20260406T154309Z
UID:2430-1777042800-1777048200@arni-institute.org
SUMMARY:ARNI Distinguished Seminar Series: Ellie Pavlick (Brown University)
DESCRIPTION:Ellie Pavlick \n(Assistant Professor of Computer Science and Linguistics\, Brown University and Director\, NSF Institute on Interaction for AI Assistants (ARIA)) \nLocation: ZI Kavli Auditorium 9th Floor\nTime: 3:00pm \nTitle: (How) Does AI Think? \nAbstract: The increasingly human-like behavior of AI has led to a fascination with ascribing it human-like internal properties — notions like thinking\, understanding\, and reasoning. In this talk\, I will take a step back and discuss a range of results from the past several years of interpretability which present an increasingly consistent view of AI’s internal processing. Specifically\, I will discuss how AI represents a surprising amount of internal structure\, and yet remains primarily idiosyncratic and context-sensitive. I will use the opportunity to talk about the philosophical nature of human-AI comparison: when such analogies are fruitful for scientific and technological progress\, and when they mislead. \nZoom: Upon request at arni@columbia.edu
URL:https://arni-institute.org/event/arni-distinguished-seminar-series-ellie-pavlick-brown-university/
LOCATION:Zuckerman Institute – Kavli Auditorium 9th Fl\, 3227 Broadway\, NY
ORGANIZER;CN="ARNI":MAILTO:arni@columbia.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260427T150000
DTEND;TZID=America/New_York:20260427T160000
DTSTAMP:20260418T131817Z
CREATED:20260402T193102Z
LAST-MODIFIED:20260402T193102Z
UID:2426-1777302000-1777305600@arni-institute.org
SUMMARY:Speaker: Ziwei (Sara) Gong - ARNI Language and Vision Working Group
DESCRIPTION:Title: Decoding Human Emotions: From Psychological Theories to Multimodal NLP Models\nAbstract: Understanding and modeling human emotions is essential for natural language processing (NLP) applications\, from conversational AI to mental health assessment. This talk explores the intersection of emotion theory\, dataset development\, and multimodal machine learning\, highlighting key challenges and innovations in emotion recognition. We discuss the alignment of psychological emotion frameworks with computational models\, strategies for improving multimodal emotion recognition\, and advances in self-supervised learning for low-resource languages. Additionally\, we examine how multimodal signals enhance model performance and interpretability.
URL:https://arni-institute.org/event/speaker-ziwei-sara-gong-arni-language-and-vision-working-group/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260511T150000
DTEND;TZID=America/New_York:20260511T160000
DTSTAMP:20260418T131817Z
CREATED:20260402T192942Z
LAST-MODIFIED:20260414T131131Z
UID:2425-1778511600-1778515200@arni-institute.org
SUMMARY:Speaker: Hubert Banville\, Meta – ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Hubert Banville\, Meta\n\nTitle: A foundation model of vision\, audition\, and language for in-silico neuroscience\n\nAbstract: Cognitive neuroscience is fragmented into specialized models\, each tailored to specific experimental paradigms\, thereby preventing a unified model of cognition in the human brain. Here\, we introduce TRIBE v2\, a tri-modal (video\, audio and language) foundation model capable of predicting human brain activity in a variety of naturalistic and experimental conditions. Leveraging a unified dataset of over 1\,000 hours of fMRI across 720 subjects\, we demonstrate that our model accurately predicts high-resolution brain responses for novel stimuli\, tasks and subjects\, surpassing traditional linear encoding models and delivering several-fold improvements in accuracy. Critically\, TRIBE v2 enables in silico experimentation: tested on seminal visual and neuro-linguistic paradigms\, it recovers a variety of results established by decades of empirical research. Finally\, by extracting interpretable latent features\, TRIBE v2 reveals the fine-grained topography of multisensory integration. These results establish artificial intelligence as a unifying framework for exploring the functional organization of the human brain.\n\nWe will be hosting Hubert Banville from Meta\, who will discuss their latest TRIBE fMRI foundation model.
URL:https://arni-institute.org/event/speaker-tbd-arni-frontier-models-for-neuroscience-and-behavior-working-group/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260512T130000
DTEND;TZID=America/New_York:20260512T190000
DTSTAMP:20260418T131817Z
CREATED:20260327T171344Z
LAST-MODIFIED:20260327T171344Z
UID:2415-1778590800-1778612400@arni-institute.org
SUMMARY:Memory\, Neuroscience and AI: Zuckerman Institute’s Local Circuits Symposium
DESCRIPTION:Register Here! \nHow are memories formed\, organized\, and used to guide behavior? And what can artificial intelligence teach us about how the brain remembers? \nJoin faculty and early-career researchers from across Columbia for the Local Circuits symposium\, exploring the science of memory across biological and artificial systems. Talks will span systems and cognitive neuroscience\, machine learning\, and theoretical modeling\, examining how brain circuits encode and retrieve memories and how AI is helping researchers probe these processes in new ways. \nPart of the Zuckerman Institute’s Local Circuits series\, this symposium brings together researchers from across the university to spark collaboration around mind\, brain\, and behavior. \nAll Columbia ID holders are welcome. Registration is required. \nPresented by the Alan Kanzer Center for Cognition and Reasoning \nOpening Remarks:\nAngela V. Olinto\, Provost of the University\; Professor of Astronomy and of Physics\, Columbia University \nSpeakers include:\nChris Baldassano\, PhD\, Associate Professor of Psychology\, Columbia University\nChristine Denny\, PhD\, Associate Professor of Clinical Neurobiology (in Psychiatry)\, Columbia University Irving Medical Center\nStefano Fusi\, PhD\, Professor of Neuroscience\, Principal Investigator in the Zuckerman Institute\, Columbia University\nScott Small\, MD\, Boris and Rose Katz Professor of Neurology\, Director of the Alzheimer’s Disease Research Center\, Columbia University Irving Medical Center\nKim Stachenfeld\, PhD\, Senior Research Scientist at Google DeepMind in NYC and Adjunct Assistant Professor at the Center for Theoretical Neuroscience\, Columbia University\nRichard Zemel\, PhD\, Trianthe Dakolias Professor of Engineering and Applied Science\; Professor of Computer Science\; Director of the NSF AI Institute for Artificial and Natural Intelligence (ARNI)\, Columbia University \nModerator:\nDaphna Shohamy\, PhD\, Kavli Professor of Brain Science\; Director of the Zuckerman Institute\; Co-director of the Kavli Institute for Brain Science\, Columbia University
URL:https://arni-institute.org/event/memory-neuroscience-and-ai-zuckerman-institutes-local-circuits-symposium/
LOCATION:Zuckerman Institute – Kavli Auditorium 9th Fl\, 3227 Broadway\, NY
ORGANIZER;CN="Zuckerman Institute":MAILTO:events@zi.columbia.edu
END:VEVENT
END:VCALENDAR