BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARNI
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20230101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250117T113000
DTEND;TZID=America/New_York:20250117T130000
DTSTAMP:20260403T154345Z
CREATED:20250113T163553Z
LAST-MODIFIED:20250114T161426Z
UID:1338-1737113400-1737118800@arni-institute.org
SUMMARY:CTN: Adam Cohen
DESCRIPTION:Title: Mapping bioelectrical signals\, from dendrites to circuits\n\n\nAbstract:\nNeuronal dendrites are excitable\, but what are these excitations for?  Are dendritic excitations involved in integration?  Or in mediating back-propagation?  What are their footprints\, and what patterns of spiking and synaptic inputs can activate them?  We mapped bioelectrical signals throughout dendritic arbors of pyramidal cells in behaving mice and developed simple models relating dendritic biophysics to computation.  I will also describe all-optical circuit mapping in behaving mice\, and experiments recording voltage simultaneously from hundreds of genetically defined neurons during behavior.  These new data sets open possibilities for modeling how cellular intrinsic properties and local circuits process information.
URL:https://arni-institute.org/event/ctn-adam-cohen/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250113T113000
DTEND;TZID=America/New_York:20250113T130000
DTSTAMP:20260403T154345Z
CREATED:20250108T204804Z
LAST-MODIFIED:20250108T204828Z
UID:1325-1736767800-1736773200@arni-institute.org
SUMMARY:CTN: Mehdi Azabou\, ARNI Postdoctoral Research Scientist
DESCRIPTION:Title: Building foundation models for neuroscience\n\nAbstract: Current methodologies for recording brain activity often provide narrow views of the brain’s function. This fragmentation of datasets has hampered the development of robust and comprehensive computational models that generalize across diverse conditions\, tasks\, and individuals. Our work is motivated by the need for a large-scale foundation model in neuroscience–one that can go beyond the limitations of single-dataset approaches and offer a fuller\, more comprehensive picture of brain function. We propose a novel\, scalable and unified approach for training on diverse neural datasets. We test our model across two large collections of data: 1. on nonhuman primates performing diverse motor tasks\, spanning over 158 different sessions from over 27\,373 neural units\, and 2. the entirety of the Allen Institute’s Brain Observatory dataset\, containing responses from over 100\,000 neurons in 6 areas of the brains of mice\, observed with two-photon calcium imaging\, recorded while the mice observed different types of visual stimuli.
URL:https://arni-institute.org/event/ctn-mehdi-azabou-arni-postdoctorate-research-scientists/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250110T113000
DTEND;TZID=America/New_York:20250110T130000
DTSTAMP:20260403T154345Z
CREATED:20250108T164302Z
LAST-MODIFIED:20250108T164302Z
UID:1316-1736508600-1736514000@arni-institute.org
SUMMARY:CTN: Mazviita Chirimuuta
DESCRIPTION:Title: Neuromorphic Computing and the Significance of Medium Dependence\n\n \nAbstract:\nThe increasingly prohibitive cost of energy demanded by large artificial neural networks (ANNs) is giving new impetus to research and development on neuromorphic computing. Importantly\, there is an open question over how brain-like the hardware will have to be in order for an artificial intelligence to match the brain in its combination of robustness\, adaptability\, and energy efficiency. If biological cognition is heavily dependent on the specific properties of the material that instantiates it (i.e. living cells)\, then neuromorphic computing will have to merge with synthetic biology in order to achieve its ultimate goal of brain-like performance. If it is not\, neuromorphic computing holds out the promise of some gains in efficiency but there is no pressure for hardware to become increasingly neuro-mimetic in order to match the functionality of the nervous system. In this talk I introduce the concept of practical medium dependence/independence in order to explore the likelihood of these two scenarios. I present the argument that practically medium independent approaches to information processing\, such as digital computing\, are inherently less efficient than ones dependent on the specifics of implementing media\, and for that reason will not have evolved. This result has implications for how we rate the near-term possibility of human-like artificial general intelligence\, and offers a new way to understand how cognition is rooted\, more generally\, in biological processes.
URL:https://arni-institute.org/event/ctn-mazviita-chirimuuta/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241209T113000
DTEND;TZID=America/New_York:20241209T130000
DTSTAMP:20260403T154345Z
CREATED:20241202T205723Z
LAST-MODIFIED:20241202T205723Z
UID:1214-1733743800-1733749200@arni-institute.org
SUMMARY:CTN Lab: Ashok Litwin-Kumar
DESCRIPTION:Title: Searching for symmetries in connectome data\n\nAbstract: I will talk about work with Haozhe Shan on identifying structure in connectome data that suggests a cell type encodes one or a handful of variables\, like heading direction or retinotopy. We are framing the problem as learning a graph embedding\, but I will also mention other things we have considered which\, at least for me\, were educational. The project is at an early stage\, so we would welcome suggestions and ideas.
URL:https://arni-institute.org/event/ctn-lab-ashok-litwin-kumar/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241206T140000
DTEND;TZID=America/New_York:20241206T150000
DTSTAMP:20260403T154345Z
CREATED:20241111T160235Z
LAST-MODIFIED:20241204T160339Z
UID:1159-1733493600-1733497200@arni-institute.org
SUMMARY:Continual Learning Working Group: Lea Duncker
DESCRIPTION:Title: Task-dependent low-dimensional population dynamics for robustness and learning \nAbstract: Biological systems face dynamic environments that require flexibly deploying learned skills and continual learning of new tasks. It is not well understood how these systems balance the tension between flexibility for learning and robustness for memory of previous behaviors. Neural activity underlying single\, highly controlled experimental tasks has repeatedly been observed to exhibit low-dimensional structure. However\, it is unclear how this organization arises and is maintained throughout learning\, and how it might differ when networks are exposed to multiple tasks. In this talk\, I will present work on a continual learning rule designed to minimize interference between sequentially learned tasks in recurrent networks. The learning rule preserves network dynamics within activity-defined low-dimensional subspaces used for previously learned tasks. It encourages recurrent dynamics associated with interfering tasks to explore orthogonal subspaces. Employing a set of tasks used in neuroscience\, I will show that this approach can successfully eliminate catastrophic interference\, while allowing for reuse of similar low-dimensional dynamics across similar tasks. This possibility for shared computation allows for faster learning during sequential training. Finally\, I will highlight limitations of this approach in fully exploiting task-similarity for optimal re-use of previously learned solutions\, and outline new work we are starting in my group now to address this. \nZoom Link: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/continual-learning-working-group-lea-duncker/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241206T110000
DTEND;TZID=America/New_York:20241206T130000
DTSTAMP:20260403T154345Z
CREATED:20241202T175520Z
LAST-MODIFIED:20241202T175520Z
UID:1211-1733482800-1733490000@arni-institute.org
SUMMARY:CTN Seminar: Andrew Leifer
DESCRIPTION:Title: TBD \nAbstract: TBD
URL:https://arni-institute.org/event/ctn-seminar-andrew-leifer/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241206T110000
DTEND;TZID=America/New_York:20241206T120000
DTSTAMP:20260403T154345Z
CREATED:20241210T193448Z
LAST-MODIFIED:20241210T193448Z
UID:1245-1733482800-1733486400@arni-institute.org
SUMMARY:Lecture in AI: Danqi Chen
DESCRIPTION:Title: Training Language Models in Academia: Research Questions and Opportunities \nAbstract: Large language models have emerged as transformative tools in artificial intelligence\, demonstrating unprecedented capabilities in understanding and generating human language. While these models have achieved remarkable performance across a wide range of benchmarks and enabled groundbreaking applications\, their development has been predominantly concentrated within large technology companies due to substantial computational and proprietary data requirements. In this talk\, I will present a vision for how academic research can play a critical role in advancing the open language model ecosystem\, particularly by developing smaller yet highly capable models and advancing our fundamental understanding of training practices. Drawing from our research group’s recent projects\, I will examine key research questions and challenges in both pre-training and post-training stages. Our work spans developing small language models (Sheared LLaMA; 1-3B parameters)\, the state-of-the-art <10B model on Chatbot Arena (gemma-2-SimPO)\, and long-context models supporting up to 512K tokens (ProLong). These examples illustrate how academic research can push the boundaries of model efficiency\, capability\, and scalability. I will conclude by exploring future directions and highlighting opportunities to shape the development of more accessible and powerful language models.
URL:https://arni-institute.org/event/lecture-in-ai-danqi-chen/
LOCATION:Davis Auditorium\, 530 W 120th St\, New York\, NY 10027\, New York\, NY\, 10027
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241120T103000
DTEND;TZID=America/New_York:20241120T113000
DTSTAMP:20260403T154345Z
CREATED:20241113T161354Z
LAST-MODIFIED:20241114T183223Z
UID:1162-1732098600-1732102200@arni-institute.org
SUMMARY:CTN Seminar: Alessandro Ingrosso
DESCRIPTION:Title:\nStatistical mechanics of transfer learning in the proportional limit\n\nAbstract:\nTransfer learning (TL) is a well-established machine learning technique to boost the generalization performance on a specific (target) task using information gained from a related (source) task\, and it crucially depends on the ability of a network to learn useful features. I will present a recent work that leverages analytical progress in the proportional regime of deep learning theory (i.e. the limit where the size of the training set P and the size of the hidden layers N are taken to infinity keeping their ratio P/N finite) to develop a novel statistical mechanics formalism for TL in Bayesian neural networks. I’ll show how such single-instance Franz-Parisi formalism can yield an effective theory for TL in one-hidden-layer fully-connected neural networks. Unlike the (lazy-training) infinite-width limit\, where TL is ineffective\, in the proportional limit TL occurs due to a renormalized source-target kernel that quantifies their relatedness and determines whether TL is beneficial for generalization.
URL:https://arni-institute.org/event/ctn-seminar-speaker-alessandro-ingrosso/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241115T113000
DTEND;TZID=America/New_York:20241115T130000
DTSTAMP:20260403T154345Z
CREATED:20241108T014804Z
LAST-MODIFIED:20241108T014804Z
UID:1148-1731670200-1731675600@arni-institute.org
SUMMARY:CTN: Catherine Hartley
DESCRIPTION:Title: TBD \nAbstract: TBD
URL:https://arni-institute.org/event/ctn-catherine-hartley/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241113T120000
DTEND;TZID=America/New_York:20241113T130000
DTSTAMP:20260403T154345Z
CREATED:20241108T015156Z
LAST-MODIFIED:20241111T155937Z
UID:1151-1731499200-1731502800@arni-institute.org
SUMMARY:Continual Learning Working Group: Nikita Rajaneesh
DESCRIPTION:Title: Wandering Within a World \nA discussion on Wandering Within a World: Online Contextualized Few-Shot Learning\, this 2021 paper by our very own Rich Zemel leverages contextual information in a continually changing environment to improve model performance in realistic settings. \nZoom Link: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/continual-learning-working-group-curriculum-review/
LOCATION:CEPSR 6LW4\, Computer Science Department 500 West 120 Street
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241108T113000
DTEND;TZID=America/New_York:20241108T130000
DTSTAMP:20260403T154345Z
CREATED:20241031T201012Z
LAST-MODIFIED:20241106T202546Z
UID:1135-1731065400-1731070800@arni-institute.org
SUMMARY:CTN: Tanya Sharpee
DESCRIPTION:Seminar Time: 11:30am\nDate: 11/8/2024\nLocation: JLG\, L5-084\nHost: Krishan Kumar\n\nTitle: Building mechanistic models of neural computations with simulation-based machine learning
URL:https://arni-institute.org/event/ctn-tanya-sharpee/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241106T123000
DTEND;TZID=America/New_York:20241106T130000
DTSTAMP:20260403T154345Z
CREATED:20241031T201241Z
LAST-MODIFIED:20241106T202632Z
UID:1138-1730896200-1730898000@arni-institute.org
SUMMARY:Continual Learning Working Group: Brainstorming
DESCRIPTION:Zoom Link: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/continual-learning-working-group-brainstorming/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241101T150000
DTEND;TZID=America/New_York:20241101T170000
DTSTAMP:20260403T154345Z
CREATED:20241016T230120Z
LAST-MODIFIED:20241023T190913Z
UID:1104-1730473200-1730480400@arni-institute.org
SUMMARY:ARNI Seminar Series Kick Off: Speaker Jim DiCarlo
DESCRIPTION:Title: Do contemporary\, machine-executable models (aka digital twins) of the primate ventral visual system unlock the ability to non-invasively\, beneficially modulate high level brain states? \nAbstract: \nIn this talk\, I will first briefly review the story of how neuroscience\, cognitive science and computer science (“AI”) converged to create specific\, image-computable\, deep neural network models intended to appropriately abstract\, emulate and explain the mechanisms of primate core visual object identification and categorization behaviors. Based on a large body of primate neurophysiological and behavioral data\, some of these network models are now the most accurate emulators of the primate ventral visual stream — they well-approximate both its internal neural mechanisms and how those mechanisms support the ability of humans and other primates to rapidly and accurately infer object identity\, position\, pose\, etc. from the set of pixels (image) received during typical natural viewing. \nBecause these leading neuroscientific emulator models — aka “digital twins” — are fully observable and machine-executable\, they offer predictive and potential application power that our field’s prior conceptual models did not. I will describe two recent examples from our team. First\, the current leading digital twins predict that the brain’s high level visual neurons (inferior temporal cortex\, IT) should be highly susceptible to “adversarial attacks” in which an agent (the adversary) aims to strongly disrupt the normal neural response (here\, neural firing rate) to any given natural image via small magnitude\, targeted changes to that image. We verified this surprising prediction in monkey IT neurons. Second\, we show how we can turn this result around and extend it: instead of making adversarial “attacks”\, we propose using digital twin models to support non-invasive\, beneficial brain modulation. 
Specifically\, we show that we can use a digital twin to design spatial patterns of light energy that\, when applied to the organism’s retina in the context of ongoing natural visual processing\, results in precise modulation (i.e. rate bias) of the pattern of a population of IT neurons (where any intended modulation pattern is chosen ahead of time by the scientist). Because the IT visual neural populations are known to directly modulate downstream neural circuits involved in mood and anxiety\, we speculate that this could provide a new\, non-invasive application avenue of potential future human clinical benefit. \nZoom Link: https://columbiauniversity.zoom.us/j/97757217278?pwd=3iRcxbHOY4z4giEiEGo1peEC8EIfK1.1
URL:https://arni-institute.org/event/arni-seminar-series-kick-off-speaker-jim-dicarlo/
LOCATION:Zuckerman Institute – L7-119\, 3227 Broadway\, New York\, NY\, 10027\, United States
ORGANIZER;CN="ARNI":MAILTO:arni@columbia.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241101T113000
DTEND;TZID=America/New_York:20241101T130000
DTSTAMP:20260403T154345Z
CREATED:20241031T200759Z
LAST-MODIFIED:20241031T201035Z
UID:1131-1730460600-1730466000@arni-institute.org
SUMMARY:CTN: Jacob Macke
DESCRIPTION:Title: Building mechanistic models of neural computations with simulation-based machine learning\n\nAbstract: Experimental techniques now make it possible to measure the structure and function of neural circuits at an unprecedented scale and resolution. How can we leverage this wealth of data to understand how neural circuits perform computations underlying behaviour? A mechanistic understanding will require models that align with experimental measurements and biophysical mechanisms\, while also being capable of performing behaviorally relevant computations. Building such models has remained a central challenge.\nI will present our work on addressing this challenge: We have developed machine learning methods and differentiable simulators that make it possible to algorithmically identify models that link biophysical mechanisms\, neural data\, and behaviour. I will show how these approaches—in combination with modern connectomic measurements—make it possible to build large-scale mechanistic models of the fruit fly visual system\, and how such a model can make experimentally testable predictions for each neuron in the system.
URL:https://arni-institute.org/event/ctn-jacob/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241031T110000
DTEND;TZID=America/New_York:20241031T130000
DTSTAMP:20260403T154345Z
CREATED:20241025T211919Z
LAST-MODIFIED:20241025T211919Z
UID:1116-1730372400-1730379600@arni-institute.org
SUMMARY:CTN: Naureen Ghani
DESCRIPTION:Title: Mice wiggle a wheel to boost the salience of low visual contrast stimuli \n\nAbstract: From the Welsh tidy mouse to the New York City pizza rat\, movement belies rodent intelligence. We show that head-fixed mice develop an active sensing strategy while performing a visual perceptual decision-making task (The International Brain Laboratory\, 2021). Akin to humans shaking a computer mouse to find the cursor on a screen\, we demonstrate that mice wiggle the wheel that controls the movement of a visual stimulus to boost low contrast salience. Moreover\, mice wiggle the wheel at a temporal frequency (11.9 ± 2.9 Hz) optimal for their visual systems (Umino et al\, 2018). With the “old method of watching and wondering about behavior\,” we reveal that mice exploit that it is easier to see something moving than something stationary by wiggling (Tinbergen\, 1973).
URL:https://arni-institute.org/event/ctn-naureen-ghani/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241025T113000
DTEND;TZID=America/New_York:20241025T130000
DTSTAMP:20260403T154345Z
CREATED:20241022T222511Z
LAST-MODIFIED:20241022T224826Z
UID:1109-1729855800-1729861200@arni-institute.org
SUMMARY:CTN: Tatiana Engel
DESCRIPTION:Title: Unifying neural population dynamics\, manifold geometry\, and circuit structure.\n\nAbstract: Single neurons show complex\, heterogeneous responses during cognitive tasks\, often forming low-dimensional manifolds in the population state space. Consequently\, it is widely accepted that neural computations arise from low-dimensional population dynamics while attributing functional properties to individual neurons is impossible. I will present recent work from our lab that bridges single-neuron heterogeneity to manifold geometry and population dynamics. First\, we developed a flexible modeling approach for simultaneously inferring single-trial population dynamics and tuning functions of individual neurons to the latent population state. Applied to spike data recorded during decision-making\, our model revealed that all neurons encode the same dynamic decision variable\, and heterogeneous firing rates result from diverse tuning of single neurons to this decision variable. Second\, using a firing-rate recurrent network model\, we mathematically prove that responses of single neurons cluster into functional types when population dynamics are confined to a low-dimensional linear subspace\, with the number of distinct response types equal to the linear dimension of the neural manifold. We confirm these predictions in recurrent neural networks trained on cognitive tasks and brain-wide neural recordings from mice during a decision-making behavior. Our findings show that low-dimensional population dynamics can be understood in terms of functional cell types\, and random mixed selectivity emerges only in the limit of high-dimensional dynamics.
URL:https://arni-institute.org/event/ctn-tatiana-engel/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241021T090000
DTEND;TZID=America/New_York:20241022T130000
DTSTAMP:20260403T154345Z
CREATED:20240913T201213Z
LAST-MODIFIED:20241011T202544Z
UID:1052-1729501200-1729602000@arni-institute.org
SUMMARY:ARNI Annual Retreat
DESCRIPTION:General Agenda \nOctober 21st\, Day 1 from 8:45am to 5pm \n\nBreakfast and Lunch Provided\nOpening\n3 Keynote Speakers from ARNI Faculty\nResearch Brainstorming and Discussions\nProject/Student Poster Session\nEducation and Broader Impact Discussions\n\nOctober 22nd\, Day 2 from 9am to 1pm \n\nBreakfast and Lunch Provided\n1 Keynote Speaker\nBrainstorming and Discussion on Collaborations & Knowledge Transfer\nAI Industry Panel (with industry partners)\nClosing
URL:https://arni-institute.org/event/arni-annual-retreat/
LOCATION:Faculty House\, 64 Morningside Dr
ORGANIZER;CN="ARNI":MAILTO:arni@columbia.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20241018T133000
DTEND;TZID=UTC:20241018T150000
DTSTAMP:20260403T154345Z
CREATED:20241004T202033Z
LAST-MODIFIED:20251016T132455Z
UID:1094-1729258200-1729263600@arni-institute.org
SUMMARY:Continual Learning Working Group: Yasaman Mahdaviyeh
DESCRIPTION:Title: Meta Continual Learning Revisited: Implicitly Enhancing Online Hessian Approximation via Variance Reduction \nReading: https://openreview.net/pdf?id=TpD2aG1h0D \nZoom: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/continual-learning-working-group-yasaman-mahdaviyeh/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241016T103000
DTEND;TZID=America/New_York:20241016T120000
DTSTAMP:20260403T154345Z
CREATED:20241011T201622Z
LAST-MODIFIED:20241011T201715Z
UID:1100-1729074600-1729080000@arni-institute.org
SUMMARY:CTN: Benjamin Grewe
DESCRIPTION:Title: Target Learning rather than Backpropagation Explains Learning in the Mammalian Neocortex \nAbstract: Modern computational neuroscience presents two competing hypotheses for hierarchical learning in the neocortex: (1) deep learning-inspired approximations of the backpropagation algorithm\, where neurons adjust synapses to minimize error\, and (2) target learning algorithms\, where neurons reduce the feedback required to achieve a desired activity. In this talk\, I will explore this fundamental question by examining the relationship between synaptic plasticity and the somatic activity of pyramidal neurons. Using a combination of single-neuron modeling\, in vitro experiments\, and deep learning theory\, we predict distinct neuronal dynamics for each hypothesis. We then test these predictions using in vivo data from the mouse visual cortex. Our results reveal that cortical learning aligns more closely with target learning\, underscoring a significant discrepancy between conventional deep learning approaches and the mechanisms underlying cortical hierarchical learning. This work provides new insights into the neural processes that drive learning in the brain and challenges current models inspired by deep learning.
URL:https://arni-institute.org/event/ctn-benjamin-grewe/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20241011T133000
DTEND;TZID=UTC:20241011T150000
DTSTAMP:20260403T154345Z
CREATED:20241004T201714Z
LAST-MODIFIED:20241004T201723Z
UID:1089-1728653400-1728658800@arni-institute.org
SUMMARY:Continual Learning Working Group: Lindsay Smith
DESCRIPTION:Title: A Practitioner’s Guide to Continual Multimodal Pretraining \nReading: https://arxiv.org/pdf/2408.1447 \nZoom: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/continual-learning-working-group-10/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241011T113000
DTEND;TZID=America/New_York:20241011T130000
DTSTAMP:20260403T154345Z
CREATED:20240913T200808Z
LAST-MODIFIED:20241008T172933Z
UID:1049-1728646200-1728651600@arni-institute.org
SUMMARY:CTN: Brenden Lake
DESCRIPTION:Title: Meta-learning for more powerful behavioral modeling \nAbstract: Two modeling paradigms have historically been in tension: Bayesian models provide an elegant way to incorporate prior knowledge\, but they make simplifying and constraining assumptions; on the other hand\, neural networks provide great modeling flexibility\, but they make it difficult to incorporate prior knowledge. Here I describe how to get the best of both approaches through Behaviorally-Informed Meta-Learning (BIML). BIML allows for modeling behavior with flexible Transformers\, even with only minimal data\, by distilling Bayesian priors into neural networks and then further fine-tuning the networks on behavioral data. I’ll show some initial successes using BIML to model human concept learning\, resulting in superior fits by capturing behavioral heuristics and biases that violate simple Bayesian assumptions. At the end\, I would love to discuss how to overcome the challenges of interpreting this new class of models. \nZoom: https://columbiauniversity.zoom.us/j/93740145362?pwd=GgoanUbc3Kc4rWdux2doLOiciiAaO2.1\nmeeting ID: 937 4014 5362\npasscode: ctn
URL:https://arni-institute.org/event/ctn-brenden-lake/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20241004T140000
DTEND;TZID=UTC:20241004T160000
DTSTAMP:20260403T154345Z
CREATED:20240924T223032Z
LAST-MODIFIED:20240924T223032Z
UID:1065-1728050400-1728057600@arni-institute.org
SUMMARY:Continual Learning Working Group: Amogh Inamdar
DESCRIPTION:Title: Taskonomy: Disentangling Task Transfer Learning \nAbstract: TBD  \nLink: http://taskonomy.stanford.edu/taskonomy_CVPR2018.pdf
URL:https://arni-institute.org/event/continual-learning-working-group-amogh-inamdar/
LOCATION:CSB 488
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241003T130000
DTEND;TZID=America/New_York:20241003T150000
DTSTAMP:20260403T154345Z
CREATED:20240912T211938Z
LAST-MODIFIED:20241003T221343Z
UID:1039-1727960400-1727967600@arni-institute.org
SUMMARY:Multi-resource-cost Optimization for Neural Networks Models Working Group (NNMS): Simon Laughlin
DESCRIPTION:Title: Neuronal energy consumption: basic measures and trade-offs\, and their effects on efficiency \nZoom: https://columbiauniversity.zoom.us/j/98299154214?pwd=1J3J0lEpF6XdqHkHy02c7LuD6xUWx2.1
URL:https://arni-institute.org/event/multi-resource-cost-optimization-for-neural-networks-models-working-group-nnms-simon-laughlin/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240925T140000
DTEND;TZID=America/New_York:20240925T163000
DTSTAMP:20260403T154345Z
CREATED:20240930T175148Z
LAST-MODIFIED:20240930T175148Z
UID:1078-1727272800-1727281800@arni-institute.org
SUMMARY:Multi-resource-cost Optimization for Neural Networks Models Working Group (NNMS): Tom Griffiths
DESCRIPTION:Title: Bounded optimality: A cognitive perspective on neural computation with resource limitations
URL:https://arni-institute.org/event/multi-resource-cost-optimization-for-neural-networks-models-working-group-nnms-tom-griffiths/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240920T153000
DTEND;TZID=America/New_York:20240920T170000
DTSTAMP:20260403T154345Z
CREATED:20240905T195458Z
LAST-MODIFIED:20240917T214549Z
UID:1025-1726846200-1726851600@arni-institute.org
SUMMARY:Continual Learning Working Group: Haozhe Shan
DESCRIPTION:Speaker: Haozhe Shan\n\nTitle: A theory of continual learning in deep neural networks: task relations\, network architecture and learning procedure\n\nAbstract: Imagine listening to this talk and afterwards forgetting everything else you’ve ever learned. This absurd scenario would be commonplace if the brain could not perform continual learning (CL) – acquiring new skills and knowledge without dramatically forgetting old ones. Ubiquitous and essential in our daily life\, CL has proven a daunting computational challenge for neural networks (NNs) in machine learning. When is CL especially easy or difficult for neural systems\, and why?\n\nTowards answering these questions\, we developed a statistical mechanics theory of CL dynamics in deep NNs. The theory exactly describes how the network’s input-output mapping evolves as it learns a sequence of tasks\, as a function of the training data\, NN architecture\, and the strength of a penalty applied to between-task weight changes. We first analyzed how task relations affect CL performance\, finding that they can be efficiently described by two metrics: similarity between inputs from two tasks in the NN’s feature space (“input overlap”) and consistency of the input-output rules of different tasks (“rule congruency”). Higher input overlap leads to faster forgetting\, while lower congruency leads to stronger asymptotic forgetting – predictions which we validated with both synthetic tasks and popular benchmark datasets. Surprisingly\, we found that increasing the network depth reshapes the geometry of the network’s feature space to decrease input overlap between tasks and slow forgetting. The reduced cross-task overlap in deeper networks also leads to less anterograde interference during CL but at the same time hinders their ability to accumulate knowledge across tasks. Finally\, our theory closely matches CL dynamics in NNs trained with stochastic gradient descent (SGD). Using noisier\, faster learning during CL is equivalent to weakening the weight-change penalty. Link to preprint: https://arxiv.org/abs/2407.10315.\nBio: Haozhe Shan joined Columbia University as an ARNI Postdoctoral Fellow in August 2024. He recently received a Ph.D. in Neuroscience from Harvard\, advised by Haim Sompolinsky. His research applies quantitative tools from physics\, statistics and other fields to discover computational principles behind neural systems\, both biological and artificial. A recent research interest is the ability of neural systems to continually learn and perform multiple tasks in a flexible manner.\nZoom Link: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/continual-learning-working-group-haozhe-shan/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240920T113000
DTEND;TZID=America/New_York:20240920T130000
DTSTAMP:20260403T154345
CREATED:20240910T180610Z
LAST-MODIFIED:20240917T214456Z
UID:1036-1726831800-1726837200@arni-institute.org
SUMMARY:CTN: Eva Dyer
DESCRIPTION:Title: Large-scale pretraining on neural data allows for transfer across individuals\, tasks and species \nAbstract: As neuroscience datasets grow in size and complexity\, integrating diverse data sources to achieve a comprehensive understanding of brain function presents both an opportunity and a challenge. In this talk\, I will introduce our approach to developing a multi-source foundation model for neuroscience\, utilizing large-scale pretraining on neural data from various tasks\, brain regions\, and species. These models are designed to enable seamless transfer learning across individuals\, tasks\, and species\, thereby enhancing data efficiency and advancing the capabilities of neural decoding technologies. By integrating diverse datasets\, our aim is to uncover the common neural functions that underlie a wide range of tasks and brain regions\, providing a deeper understanding of brain function and informing future brain-machine interface applications. \nZoom:\nhttps://columbiauniversity.zoom.us/j/97505761667?pwd=KkvqBSag7VPFebf8eyqKpqvdVPbaHn.1\npasscode: ctn
URL:https://arni-institute.org/event/ctn-eva-dyer/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240916T080000
DTEND;TZID=America/New_York:20240916T163000
DTSTAMP:20260403T154345
CREATED:20240913T190408Z
LAST-MODIFIED:20240914T042847Z
UID:1044-1726473600-1726504200@arni-institute.org
SUMMARY:ARNI NSF Site Visit
DESCRIPTION:NSF Site Visit – The NSF team will evaluate the progress and achievements of ARNI’s projects to date and provide recommendations to steer future directions and funding for the project. \nIf you are interested in learning more about ARNI overall\, join this Zoom link from 9am to 12pm or 2pm to 4:30pm.
URL:https://arni-institute.org/event/arni-nsf-site-visit/
LOCATION:Innovation Hub\, Tang Family Hall - 2276 12TH AVENUE – FLOOR 02
ORGANIZER;CN="ARNI":MAILTO:arni@columbia.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240913T153000
DTEND;TZID=America/New_York:20240913T170000
DTSTAMP:20260403T154345
CREATED:20240905T195120Z
LAST-MODIFIED:20240914T042826Z
UID:1023-1726241400-1726246800@arni-institute.org
SUMMARY:Continual Learning Working Group: Kick Off
DESCRIPTION:Speaker: Mengye Ren\n\n\nTitle: Lifelong and Human-like Learning in Foundation Models\n\nAbstract: Real-world agents\, including humans\, learn from online\, lifelong experiences. However\, today’s foundation models primarily acquire knowledge through offline\, iid learning\, while relying on in-context learning for most online adaptation. It is crucial to equip foundation models with lifelong and human-like learning abilities to enable more flexible use of AI in real-world applications. In this talk\, I will discuss recent works exploring interesting phenomena in foundation models when learning in online\, structured environments. Notably\, foundation models exhibit anticipatory and semantically-aware memorization and forgetting behaviors. Furthermore\, I will introduce a new method that combines pretraining and meta-learning for learning and consolidating new concepts in large language models. This approach has the potential to lead to future foundation models with incremental consolidation and abstraction capabilities.
URL:https://arni-institute.org/event/continual-learning-working-group-kick-off/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240913T113000
DTEND;TZID=America/New_York:20240913T130000
DTSTAMP:20260403T154345
CREATED:20240910T180255Z
LAST-MODIFIED:20240910T180503Z
UID:1032-1726227000-1726232400@arni-institute.org
SUMMARY:CTN: Stephanie Palmer
DESCRIPTION:Title: How behavioral and evolutionary constraints sculpt early visual processing \nAbstract: Biological systems must selectively encode partial information about the environment\, as dictated by the capacity constraints at work in all living organisms. For example\, we cannot see every feature of the light field that reaches our eyes; temporal resolution is limited by transmission noise and delays\, and spatial resolution is limited by the finite number of photoreceptors and output cells in the retina. Classical efficient coding theory describes how sensory systems can maximize information transmission given such capacity constraints\, but it treats all input features equally. Not all inputs are\, however\, of equal value to the organism. Our work quantifies whether and how the brain selectively encodes stimulus features\, specifically predictive features\, that are most useful for fast and effective movements. We have shown that efficient predictive computation starts at the earliest stages of the visual system\, in the retina. We borrow techniques from statistical physics and information theory to assess how we get terrific\, predictive vision from these imperfect (lagged and noisy) component parts. In broader terms\, we aim to build a more complete theory of efficient encoding in the brain\, and along the way have found some intriguing connections between formal notions of coarse graining in biology and physics.
URL:https://arni-institute.org/event/stephanie-palmer/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240906T113000
DTEND;TZID=America/New_York:20240906T130000
DTSTAMP:20260403T154345
CREATED:20240903T194843Z
LAST-MODIFIED:20240914T042726Z
UID:1019-1725622200-1725627600@arni-institute.org
SUMMARY:CTN: Sebastian Seung
DESCRIPTION:Title: Insights into vision from interpreting a neuronal wiring diagram\nHost: Marcus Triplett\nAbstract: In 2023\, the FlyWire Consortium released the neuronal wiring diagram of an adult fly brain. This contains as a corollary the first complete wiring diagram of a visual system\, which has been used to identify all 200+ cell types that are intrinsic to the Drosophila optic lobe. About half of these cell types were previously unknown\, and fewer than 20% have ever been recorded by a physiologist. I will argue that plausible functions for many cell types can be guessed by interpreting the wiring diagram.
URL:https://arni-institute.org/event/cnt-sebastian-seung/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
END:VCALENDAR