BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARNI
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20230101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241101T150000
DTEND;TZID=America/New_York:20241101T170000
DTSTAMP:20260403T161134Z
CREATED:20241016T230120Z
LAST-MODIFIED:20241023T190913Z
UID:1104-1730473200-1730480400@arni-institute.org
SUMMARY:ARNI Seminar Series Kick Off: Speaker Jim DiCarlo
DESCRIPTION:Title: Do contemporary\, machine-executable models (aka digital twins) of the primate ventral visual system unlock the ability to non-invasively\, beneficially modulate high level brain states? \nAbstract: \nIn this talk\, I will first briefly review the story of how neuroscience\, cognitive science and computer science (“AI”) converged to create specific\, image-computable\, deep neural network models intended to appropriately abstract\, emulate and explain the mechanisms of primate core visual object identification and categorization behaviors. Based on a large body of primate neurophysiological and behavioral data\, some of these network models are now the most accurate emulators of the primate ventral visual stream — they well-approximate both its internal neural mechanisms and how those mechanisms support the ability of humans and other primates to rapidly and accurately infer object identity\, position\, pose\, etc. from the set of pixels (image) received during typical natural viewing. \nBecause these leading neuroscientific emulator models — aka “digital twins” — are fully observable and machine-executable\, they offer predictive and potential application power that our field’s prior conceptual models did not. I will describe two recent examples from our team. First\, the current leading digital twins predict that the brain’s high level visual neurons (inferior temporal cortex\, IT) should be highly susceptible to “adversarial attacks” in which an agent (the adversary) aims to strongly disrupt the normal neural response (here\, neural firing rate) to any given natural image via small magnitude\, targeted changes to that image. We verified this surprising prediction in monkey IT neurons. Second\, we show how we can turn this result around and extend it: instead of making adversarial “attacks”\, we propose using digital twin models to support non-invasive\, beneficial brain modulation. 
 Specifically\, we show that we can use a digital twin to design spatial patterns of light energy that\, when applied to the organism’s retina in the context of ongoing natural visual processing\, result in precise modulation (i.e. rate bias) of the pattern of a population of IT neurons (where any intended modulation pattern is chosen ahead of time by the scientist). Because the IT visual neural populations are known to directly modulate downstream neural circuits involved in mood and anxiety\, we speculate that this could provide a new\, non-invasive application avenue of potential future human clinical benefit. \nZoom Link: https://columbiauniversity.zoom.us/j/97757217278?pwd=3iRcxbHOY4z4giEiEGo1peEC8EIfK1.1
URL:https://arni-institute.org/event/arni-seminar-series-kick-off-speaker-jim-dicarlo/
LOCATION:Zuckerman Institute – L7-119\, 3227 Broadway\, New York\, NY\, 10027\, United States
ORGANIZER;CN="ARNI":MAILTO:arni@columbia.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241101T113000
DTEND;TZID=America/New_York:20241101T130000
DTSTAMP:20260403T161134Z
CREATED:20241031T200759Z
LAST-MODIFIED:20241031T201035Z
UID:1131-1730460600-1730466000@arni-institute.org
SUMMARY:CTN: Jacob Macke
DESCRIPTION:Title: Building mechanistic models of neural computations with simulation-based machine learning\n\nAbstract: Experimental techniques now make it possible to measure the structure and function of neural circuits at an unprecedented scale and resolution. How can we leverage this wealth of data to understand how neural circuits perform computations underlying behaviour? A mechanistic understanding will require models that align with experimental measurements and biophysical mechanisms\, while also being capable of performing behaviorally relevant computations. Building such models has remained a central challenge.\nI will present our work on addressing this challenge: We have developed machine learning methods and differentiable simulators that make it possible to algorithmically identify models that link biophysical mechanisms\, neural data\, and behaviour. I will show how these approaches—in combination with modern connectomic measurements—make it possible to build large-scale mechanistic models of the fruit fly visual system\, and how such a model can make experimentally testable predictions for each neuron in the system.
URL:https://arni-institute.org/event/ctn-jacob/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241031T110000
DTEND;TZID=America/New_York:20241031T130000
DTSTAMP:20260403T161134Z
CREATED:20241025T211919Z
LAST-MODIFIED:20241025T211919Z
UID:1116-1730372400-1730379600@arni-institute.org
SUMMARY:CTN: Naureen Ghani
DESCRIPTION:Title: Mice wiggle a wheel to boost the salience of low visual contrast stimuli \n\nAbstract: From the Welsh tidy mouse to the New York City pizza rat\, movement belies rodent intelligence. We show that head-fixed mice develop an active sensing strategy while performing a visual perceptual decision-making task (The International Brain Laboratory\, 2021). Akin to humans shaking a computer mouse to find the cursor on a screen\, we demonstrate that mice wiggle the wheel that controls the movement of a visual stimulus to boost low contrast salience. Moreover\, mice wiggle the wheel at a temporal frequency (11.9 ± 2.9 Hz) optimal for their visual systems (Umino et al\, 2018). With the “old method of watching and wondering about behavior\,” we reveal that mice exploit that it is easier to see something moving than something stationary by wiggling (Tinbergen\, 1973).
URL:https://arni-institute.org/event/ctn-naureen-ghani/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20241025T113000
DTEND;TZID=UTC:20241025T130000
DTSTAMP:20260403T161134Z
CREATED:20241022T222511Z
LAST-MODIFIED:20241022T224826Z
UID:1109-1729855800-1729861200@arni-institute.org
SUMMARY:CTN: Tatiana Engel
DESCRIPTION:Title: Unifying neural population dynamics\, manifold geometry\, and circuit structure.\n\nAbstract: Single neurons show complex\, heterogeneous responses during cognitive tasks\, often forming low-dimensional manifolds in the population state space. Consequently\, it is widely accepted that neural computations arise from low-dimensional population dynamics while attributing functional properties to individual neurons is impossible. I will present recent work from our lab that bridges single-neuron heterogeneity to manifold geometry and population dynamics. First\, we developed a flexible modeling approach for simultaneously inferring single-trial population dynamics and tuning functions of individual neurons to the latent population state. Applied to spike data recorded during decision-making\, our model revealed that all neurons encode the same dynamic decision variable\, and heterogeneous firing rates result from diverse tuning of single neurons to this decision variable. Second\, using a firing-rate recurrent network model\, we mathematically prove that responses of single neurons cluster into functional types when population dynamics are confined to a low-dimensional linear subspace\, with the number of distinct response types equal to the linear dimension of the neural manifold. We confirm these predictions in recurrent neural networks trained on cognitive tasks and brain-wide neural recordings from mice during a decision-making behavior. Our findings show that low-dimensional population dynamics can be understood in terms of functional cell types\, and random mixed selectivity emerges only in the limit of high-dimensional dynamics.
URL:https://arni-institute.org/event/ctn-tatiana-engel/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241021T090000
DTEND;TZID=America/New_York:20241022T130000
DTSTAMP:20260403T161134Z
CREATED:20240913T201213Z
LAST-MODIFIED:20241011T202544Z
UID:1052-1729501200-1729602000@arni-institute.org
SUMMARY:ARNI Annual Retreat
DESCRIPTION:General Agenda \nOctober 21st\, Day 1 from 8:45am to 5pm \n\nBreakfast and Lunch Provided\nOpening\n3 Keynote Speakers from ARNI Faculty\nResearch Brainstorming and Discussions\nProject/Student Poster Session\nEducation and Broader Impact Discussions\n\nOctober 22nd\, Day 2 from 9am to 1pm \n\nBreakfast and Lunch Provided\n1 Keynote Speaker\nBrainstorming and Discussion on Collaborations & Knowledge Transfer\nAI Industry Panel (with industry partners)\nClosing
URL:https://arni-institute.org/event/arni-annual-retreat/
LOCATION:Faculty House\, 64 Morningside Dr
ORGANIZER;CN="ARNI":MAILTO:arni@columbia.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20241018T133000
DTEND;TZID=UTC:20241018T150000
DTSTAMP:20260403T161134Z
CREATED:20241004T202033Z
LAST-MODIFIED:20251016T132455Z
UID:1094-1729258200-1729263600@arni-institute.org
SUMMARY:Continual Learning Working Group: Yasaman Mahdaviyeh
DESCRIPTION:Title: Meta Continual Learning Revisited: Implicitly Enhancing Online Hessian Approximation via Variance Reduction \nReading: https://openreview.net/pdf?id=TpD2aG1h0D \nZoom: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/continual-learning-working-group-yasaman-mahdaviyeh/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241016T103000
DTEND;TZID=America/New_York:20241016T120000
DTSTAMP:20260403T161134Z
CREATED:20241011T201622Z
LAST-MODIFIED:20241011T201715Z
UID:1100-1729074600-1729080000@arni-institute.org
SUMMARY:CTN: Benjamin Grewe
DESCRIPTION:Title: Target Learning rather than Backpropagation Explains Learning in the Mammalian Neocortex \nAbstract: Modern computational neuroscience presents two competing hypotheses for hierarchical learning in the neocortex: (1) deep learning-inspired approximations of the backpropagation algorithm\, where neurons adjust synapses to minimize error\, and (2) target learning algorithms\, where neurons reduce the feedback required to achieve a desired activity. In this talk\, I will explore this fundamental question by examining the relationship between synaptic plasticity and the somatic activity of pyramidal neurons. Using a combination of single-neuron modeling\, in vitro experiments\, and deep learning theory\, we predict distinct neuronal dynamics for each hypothesis. We then test these predictions using in vivo data from the mouse visual cortex. Our results reveal that cortical learning aligns more closely with target learning\, underscoring a significant discrepancy between conventional deep learning approaches and the mechanisms underlying cortical hierarchical learning. This work provides new insights into the neural processes that drive learning in the brain and challenges current models inspired by deep learning.
URL:https://arni-institute.org/event/ctn-benjamin-grewe/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20241011T133000
DTEND;TZID=UTC:20241011T150000
DTSTAMP:20260403T161134Z
CREATED:20241004T201714Z
LAST-MODIFIED:20241004T201723Z
UID:1089-1728653400-1728658800@arni-institute.org
SUMMARY:Continual Learning Working Group: Lindsay Smith
DESCRIPTION:Title: A Practitioner’s Guide to Continual Multimodal Pretraining \nReading: https://arxiv.org/pdf/2408.1447 \nZoom: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/continual-learning-working-group-10/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241011T113000
DTEND;TZID=America/New_York:20241011T130000
DTSTAMP:20260403T161134Z
CREATED:20240913T200808Z
LAST-MODIFIED:20241008T172933Z
UID:1049-1728646200-1728651600@arni-institute.org
SUMMARY:CTN: Brenden Lake
DESCRIPTION:Title: Meta-learning for more powerful behavioral modeling \nAbstract: Two modeling paradigms have historically been in tension: Bayesian models provide an elegant way to incorporate prior knowledge\, but they make simplifying and constraining assumptions; on the other hand\, neural networks provide great modeling flexibility\, but they make it difficult to incorporate prior knowledge. Here I describe how to get the best of both approaches through Behaviorally-Informed Meta-Learning (BIML). BIML allows for modeling behavior with flexible Transformers\, even with only minimal data\, by distilling Bayesian priors into neural networks and then further fine-tuning the networks on behavioral data. I’ll show some initial successes using BIML to model human concept learning\, resulting in superior fits by capturing behavioral heuristics and biases that violate simple Bayesian assumptions. At the end\, I would love to discuss how to overcome the challenges of interpreting this new class of models. \nZoom: https://columbiauniversity.zoom.us/j/93740145362?pwd=GgoanUbc3Kc4rWdux2doLOiciiAaO2.1\nmeeting ID: 937 4014 5362\npasscode: ctn
URL:https://arni-institute.org/event/ctn-brenden-lake/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20241004T140000
DTEND;TZID=UTC:20241004T160000
DTSTAMP:20260403T161134Z
CREATED:20240924T223032Z
LAST-MODIFIED:20240924T223032Z
UID:1065-1728050400-1728057600@arni-institute.org
SUMMARY:Continual Learning Working Group: Amogh Inamdar
DESCRIPTION:Title: Taskonomy: Disentangling Task Transfer Learning \nAbstract: TBD  \nLink: http://taskonomy.stanford.edu/taskonomy_CVPR2018.pdf
URL:https://arni-institute.org/event/continual-learning-working-group-amogh-inamdar/
LOCATION:CSB 488
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241003T130000
DTEND;TZID=America/New_York:20241003T150000
DTSTAMP:20260403T161134Z
CREATED:20240912T211938Z
LAST-MODIFIED:20241003T221343Z
UID:1039-1727960400-1727967600@arni-institute.org
SUMMARY:Multi-resource-cost Optimization for Neural Networks Models Working Group (NNMS): Simon Laughlin
DESCRIPTION:Title: Neuronal energy consumption: basic measures and trade-offs\, and their effects on efficiency \nZoom: https://columbiauniversity.zoom.us/j/98299154214?pwd=1J3J0lEpF6XdqHkHy02c7LuD6xUWx2.1
URL:https://arni-institute.org/event/multi-resource-cost-optimization-for-neural-networks-models-working-group-nnms-simon-laughlin/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240925T140000
DTEND;TZID=America/New_York:20240925T163000
DTSTAMP:20260403T161134Z
CREATED:20240930T175148Z
LAST-MODIFIED:20240930T175148Z
UID:1078-1727272800-1727281800@arni-institute.org
SUMMARY:Multi-resource-cost Optimization for Neural Networks Models Working Group (NNMS): Tom Griffiths
DESCRIPTION:Title: Bounded optimality: A cognitive perspective on neural computation with resource limitations
URL:https://arni-institute.org/event/multi-resource-cost-optimization-for-neural-networks-models-working-group-nnms-tom-griffiths/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240920T153000
DTEND;TZID=America/New_York:20240920T170000
DTSTAMP:20260403T161134Z
CREATED:20240905T195458Z
LAST-MODIFIED:20240917T214549Z
UID:1025-1726846200-1726851600@arni-institute.org
SUMMARY:Continual Learning Working Group: Haozhe Shan
DESCRIPTION:Speaker: Haozhe Shan \n\nTitle: A theory of continual learning in deep neural networks: task relations\, network architecture and learning procedure\n\nAbstract: Imagine listening to this talk and afterwards forgetting everything else you’ve ever learned. This absurd scenario would be commonplace if the brain could not perform continual learning (CL) – acquiring new skills and knowledge without dramatically forgetting old ones. Ubiquitous and essential in our daily life\, CL has proven a daunting computational challenge for neural networks (NN) in machine learning. When is CL especially easy or difficult for neural systems\, and why?\n\nTowards answering these questions\, we developed a statistical mechanics theory of CL dynamics in deep NNs. The theory exactly describes how the network’s input-output mapping evolves as it learns a sequence of tasks\, as a function of the training data\, NN architecture\, and the strength of a penalty applied to between-task weight changes. We first analyzed how task relations affect CL performance\, finding that they can be efficiently described by two metrics: similarity between inputs from two tasks in the NN’s feature space (“input overlap”) and consistency of input-output rules of different tasks (“rule congruency”). Higher input overlap leads to faster forgetting while lower congruency leads to stronger asymptotic forgetting – predictions which we validated with both synthetic tasks and popular benchmark datasets. Surprisingly\, we found that increasing the network depth reshapes geometry of the network’s feature space to decrease input overlap between tasks and slow forgetting. The reduced cross-task overlap in deeper networks also leads to less anterograde interference during CL but at the same time hinders their ability to accumulate knowledge across tasks. Finally\, our theory can well match CL dynamics in NNs trained with stochastic gradient descent (SGD). 
 Using noisier\, faster learning during CL is equivalent to weakening the weight-change penalty. Link to preprint: https://arxiv.org/abs/2407.10315. \nBio: Haozhe Shan joined Columbia University as an ARNI Postdoctoral Fellow in August 2024. He recently received a Ph.D. in Neuroscience from Harvard\, advised by Haim Sompolinsky. His research applies quantitative tools from physics\, statistics and other fields to discover computational principles behind neural systems\, both biological and artificial. A recent research interest is the ability of neural systems to continually learn and perform multiple tasks in a flexible manner. \nZoom Link: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/continual-learning-working-group-haozhe-shan/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240920T113000
DTEND;TZID=America/New_York:20240920T130000
DTSTAMP:20260403T161134Z
CREATED:20240910T180610Z
LAST-MODIFIED:20240917T214456Z
UID:1036-1726831800-1726837200@arni-institute.org
SUMMARY:CTN: Eva Dyer
DESCRIPTION:Title: Large-scale pretraining on neural data allows for transfer across individuals\, tasks and species \nAbstract: As neuroscience datasets grow in size and complexity\, integrating diverse data sources to achieve a comprehensive understanding of brain function presents both an opportunity and a challenge. In this talk\, I will introduce our approach to developing a multi-source foundation model for neuroscience\, utilizing large-scale pretraining on neural data from various tasks\, brain regions\, and species. These models are designed to enable seamless transfer learning across individuals\, tasks\, and species\, thereby enhancing data efficiency and advancing the capabilities of neural decoding technologies. By integrating diverse datasets\, our aim is to uncover the common neural functions that underlie a wide range of tasks and brain regions\, providing a deeper understanding of brain function and informing future brain-machine interface applications. \nZoom:\nhttps://columbiauniversity.zoom.us/j/97505761667?pwd=KkvqBSag7VPFebf8eyqKpqvdVPbaHn.1\npasscode: ctn
URL:https://arni-institute.org/event/ctn-eva-dyer/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240916T080000
DTEND;TZID=America/New_York:20240916T163000
DTSTAMP:20260403T161134Z
CREATED:20240913T190408Z
LAST-MODIFIED:20240914T042847Z
UID:1044-1726473600-1726504200@arni-institute.org
SUMMARY:ARNI NSF Site Visit
DESCRIPTION:NSF Site Visit – The NSF team will evaluate the progress and achievements of ARNI’s projects to date and provide recommendations to steer future directions and funding for the project. \nIf you are interested in learning more about ARNI overall\, join this Zoom link from 9am to 12pm or 2pm to 4:30pm.
URL:https://arni-institute.org/event/arni-nsf-site-visit/
LOCATION:Innovation Hub\, Tang Family Hall - 2276 12TH AVENUE – FLOOR 02
ORGANIZER;CN="ARNI":MAILTO:arni@columbia.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240913T153000
DTEND;TZID=America/New_York:20240913T170000
DTSTAMP:20260403T161134Z
CREATED:20240905T195120Z
LAST-MODIFIED:20240914T042826Z
UID:1023-1726241400-1726246800@arni-institute.org
SUMMARY:Continual Learning Working Group: Kick Off
DESCRIPTION:Speaker: Mengye Ren\n\nTitle: Lifelong and Human-like Learning in Foundation Models\n\nAbstract: Real-world agents\, including humans\, learn from online\, lifelong experiences. However\, today’s foundation models primarily acquire knowledge through offline\, iid learning\, while relying on in-context learning for most online adaptation. It is crucial to equip foundation models with lifelong and human-like learning abilities to enable more flexible use of AI in real-world applications. In this talk\, I will discuss recent works exploring interesting phenomena in foundation models when learning in online\, structured environments. Notably\, foundation models exhibit anticipatory and semantically-aware memorization and forgetting behaviors. Furthermore\, I will introduce a new method that combines pretraining and meta-learning for learning and consolidating new concepts in large language models. This approach has the potential to lead to future foundation models with incremental consolidation and abstraction capabilities.
URL:https://arni-institute.org/event/continual-learning-working-group-kick-off/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240913T113000
DTEND;TZID=UTC:20240913T130000
DTSTAMP:20260403T161134Z
CREATED:20240910T180255Z
LAST-MODIFIED:20240910T180503Z
UID:1032-1726227000-1726232400@arni-institute.org
SUMMARY:CTN: Stephanie Palmer
DESCRIPTION:Title: How behavioral and evolutionary constraints sculpt early visual processing \nAbstract: Biological systems must selectively encode partial information about the environment\, as dictated by the capacity constraints at work in all living organisms. For example\, we cannot see every feature of the light field that reaches our eyes; temporal resolution is limited by transmission noise and delays\, and spatial resolution is limited by the finite number of photoreceptors and output cells in the retina. Classical efficient coding theory describes how sensory systems can maximize information transmission given such capacity constraints\, but it treats all input features equally. Not all inputs are\, however\, of equal value to the organism. Our work quantifies whether and how the brain selectively encodes stimulus features\, specifically predictive features\, that are most useful for fast and effective movements. We have shown that efficient predictive computation starts at the earliest stages of the visual system\, in the retina. We borrow techniques from statistical physics and information theory to assess how we get terrific\, predictive vision from these imperfect (lagged and noisy) component parts. In broader terms\, we aim to build a more complete theory of efficient encoding in the brain\, and along the way have found some intriguing connections between formal notions of coarse graining in biology and physics.
URL:https://arni-institute.org/event/stephanie-palmer/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240906T113000
DTEND;TZID=America/New_York:20240906T130000
DTSTAMP:20260403T161134Z
CREATED:20240903T194843Z
LAST-MODIFIED:20240914T042726Z
UID:1019-1725622200-1725627600@arni-institute.org
SUMMARY:CTN: Sebastian Seung
DESCRIPTION:Title: Insights into vision from interpreting a neuronal wiring diagram\nHost: Marcus Triplett \nAbstract: In 2023\, the FlyWire Consortium released the neuronal wiring diagram of an adult fly brain. This contains as a corollary the first complete wiring diagram of a visual system\, which has been used to identify all 200+ cell types that are intrinsic to the Drosophila optic lobe. About half of these cell types were previously unknown\, and less than 20% have ever been recorded by a physiologist. I will argue that plausible functions for many cell types can be guessed by interpreting the wiring diagram.
URL:https://arni-institute.org/event/cnt-sebastian-seung/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240816T113000
DTEND;TZID=UTC:20240816T130000
DTSTAMP:20260403T161134Z
CREATED:20240813T191128Z
LAST-MODIFIED:20240813T215322Z
UID:1014-1723807800-1723813200@arni-institute.org
SUMMARY:CTN: Claudia Clopath
DESCRIPTION:Title: Feedback-based motor control can guide plasticity and drive rapid learning \nAbstract: Animals use afferent feedback to rapidly correct ongoing movements in the presence of a perturbation. Repeated exposure to a predictable perturbation leads to behavioural adaptation that counteracts its effects. Primary motor cortex (M1) is intimately involved in both processes\, integrating inputs from various sensorimotor brain regions to update the motor output. Here\, we investigate whether feedback-based motor control and motor adaptation may share a common implementation in M1 circuits. We trained a recurrent neural network to control its own output through an error feedback signal\, which allowed it to recover rapidly from external perturbations. Implementing a biologically plausible plasticity rule based on this same feedback signal also enabled the network to learn to counteract persistent perturbations through a trial-by-trial process\, in a manner that reproduced several key aspects of human adaptation. Moreover\, the resultant network activity changes were also present in neural population recordings from monkey M1. Online movement correction and longer-term motor adaptation may thus share a common implementation in neural circuits.
URL:https://arni-institute.org/event/claudia-clopath/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240730T150000
DTEND;TZID=UTC:20240730T170000
DTSTAMP:20260403T161134Z
CREATED:20240729T213123Z
LAST-MODIFIED:20240729T213418Z
UID:1009-1722351600-1722358800@arni-institute.org
SUMMARY:Continual Learning Working Group Talk
DESCRIPTION:Title: Continual learning\, machine self-reference\, and the problem of problem-awareness \nAbstract: Continual learning (CL) without forgetting has been a long-standing problem in machine learning with neural networks. Here I will bring a new perspective by looking at learning algorithms (LAs) as memory mechanisms with their own decision making problem. I will present a natural solution to CL under this view: instead of handcrafting such LAs\, we metalearn continual in-context LAs using self-referential weight matrices. Experiments confirm that this method effectively achieves CL without forgetting\, outperforming handcrafted algorithms on classic benchmarks. While this is a promising result on its own\, in this talk\, I will go beyond this limited scope of CL. I will use this CL setting as an example to introduce a broader perspective of “problem awareness” in machine learning. I will argue that in many prior CL methods\, systems fail in CL because they do not know what it means to continually learn without forgetting. I will show that the same argument can explain the previous failures of neural networks on other classic challenges—historically pointed out by cognitive scientists in comparison to human intelligence—such as systematic generalization and few-shot learning. I will highlight how similar metalearning methods provide a promising solution to these challenges too. \nBio: The speaker is a post-postdoc at Harvard University\, Center for Brain Science.\nPreviously\, he was a postdoc and lecturer at the Swiss AI Lab IDSIA\, University of Lugano (Switzerland) from 2020 to 2023\, where he taught a popular course on practical deep learning. He received his PhD in Computer Science from RWTH Aachen University (Germany) in 2020\, and undergraduate and Master’s degrees in Applied Mathematics from École Centrale Paris and ENS Cachan (France). He was also a research intern at Google in NYC and Mountain View\, in 2017 and 2018.
 He is broadly interested in the computational principles of learning\, memory\, perception\, self-reference\, and decision making\, as ingredients for building and understanding general-purpose intelligence. The scope of his research interests has expanded from language modeling (PhD) to general sequence and program learning (postdoc)\, and currently to neuroscience and cognitive science (post-postdoc).
URL:https://arni-institute.org/event/continual-learning-machine-self-reference-and-the-problem-of-problem-awareness/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240725T150000
DTEND;TZID=UTC:20240725T170000
DTSTAMP:20260403T161134Z
CREATED:20240723T230443Z
LAST-MODIFIED:20240723T230443Z
UID:1006-1721919600-1721926800@arni-institute.org
SUMMARY:Dr. Richard Lange
DESCRIPTION:Title: “What Bayes can and cannot tell us about the neuroscience of vision” \nNikolaus Kriegeskorte’s Group is hosting Dr. Richard Lange\, Assistant Professor in the Department of Computer Science at Rochester Institute of Technology. He will be giving a talk at the Zuckerman Institute.
URL:https://arni-institute.org/event/dr-richard-lange/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240717T160000
DTEND;TZID=UTC:20240717T200000
DTSTAMP:20260403T161134Z
CREATED:20240712T212855Z
LAST-MODIFIED:20240712T212855Z
UID:998-1721232000-1721246400@arni-institute.org
SUMMARY:Zuckerman Institute Demo Day
DESCRIPTION:
URL:https://arni-institute.org/event/zuckerman-institute-demo-day/
LOCATION:Lightning AI\, 50 West 23 Street 7th FL\, New York\, NY\, 10010\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240621T113000
DTEND;TZID=UTC:20240621T130000
DTSTAMP:20260403T161134
CREATED:20240605T191850Z
LAST-MODIFIED:20240620T180152Z
UID:906-1718969400-1718974800@arni-institute.org
SUMMARY:CTN: Peter Dayan
DESCRIPTION:Title: Risking your Tail: Curiosity\, Danger & Exploration \nAbstract: Risk and reward are critical balancing determinants of adaptive behaviour\, associated respectively with neophobia and neophilia in the case of exploration. There are substantial differences in how individuals engage with novelty – with substantial consequences for what they are able to learn. Here\, we consider how a modern formal treatment of risk (called the conditional value at risk) and pessimistic prior expectations can model some of these differences. Although the effects of risk on isolated decisions are well understood\, additional issues arise in the context of sequences of choices\, something that is inevitable in the case of exploration. This is joint work with Chris Gagne\, Kevin Shen\, Xin Sui and Kevin Lloyd. \nMeeting ID: 958 4779 3410\nPasscode: ctn\nhttps://columbiauniversity.zoom.us/j/95847793410?pwd=VtROykVM4N5ywvAL7t32aYNZsH0Yyr.1
URL:https://arni-institute.org/event/peter-dayan/
LOCATION:Jerome L. Greene Science Center\, 3227 Broadway 9th FL Lecture Hall\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240617T113000
DTEND;TZID=UTC:20240617T130000
DTSTAMP:20260403T161134
CREATED:20240613T195130Z
LAST-MODIFIED:20240613T195130Z
UID:929-1718623800-1718629200@arni-institute.org
SUMMARY:CTN: Stefano Fusi
DESCRIPTION:Title: The Geometry of Abstraction\n\nAbstract: I’ll first discuss the theoretical framework introduced in Bernardi et al. 2020\, Cell\, in which we propose a possible definition of abstract representations. I’ll go into the details of the most up-to-date conceptual framework\, discuss the computational relevance of the representational geometry and the cross-validated measures of representational geometry that we normally use to characterize neural data in artificial and biological networks. Then I’ll apply the analytical tools to the study of human electrophysiological data (see Courellis\, H.S.\, Mixha\, J.\, Cardenas\, A.R.\, Kimmel\, D.\, Reed\, C.M.\, Valiante\, T.A.\, Salzman\, C.D.\, Mamelak\, A.N.\, Fusi\, S. and Rutishauser\, U.\, 2023. Abstract representations emerge in human hippocampal neurons during inference behavior. bioRxiv\, pp.2023-11 for more details).
URL:https://arni-institute.org/event/ctn-stefano-fusi/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240614T113000
DTEND;TZID=UTC:20240614T130000
DTSTAMP:20260403T161134
CREATED:20240522T002311Z
LAST-MODIFIED:20240613T195532Z
UID:874-1718364600-1718370000@arni-institute.org
SUMMARY:CTN: Bob Datta
DESCRIPTION:Title and Abstract: TBD
URL:https://arni-institute.org/event/bob-datta/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240524T113000
DTEND;TZID=UTC:20240524T130000
DTSTAMP:20260403T161134
CREATED:20240514T200412Z
LAST-MODIFIED:20240522T002132Z
UID:868-1716550200-1716555600@arni-institute.org
SUMMARY:CTN: Guillaume Hennequin
DESCRIPTION:Title: A recurrent network model of planning explains hippocampal replay and human behaviour\n\nAbstract: When faced with a novel situation\, humans often spend substantial periods of time contemplating possible futures. For such planning to be rational\, the benefits to behaviour must compensate for the time spent thinking. I will show how we recently captured these features of human behaviour by developing a neural network model where planning itself is controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences from its own policy\, which we call ‘rollouts’. The agent learns to plan when planning is beneficial\, explaining empirical variability in human thinking times. Additionally\, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded during spatial navigation. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions\, where hippocampal replays are triggered by – and adaptively affect – prefrontal dynamics. This is joint work with Kristopher Jensen and Marcelo Mattar.
URL:https://arni-institute.org/event/ctn-guillaume-hennequin/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240520T113000
DTEND;TZID=UTC:20240520T130000
DTSTAMP:20260403T161134
CREATED:20240507T192656Z
LAST-MODIFIED:20240514T200254Z
UID:839-1716204600-1716210000@arni-institute.org
SUMMARY:CTN: Quentin Huys (Seminar Speaker)
DESCRIPTION:Title: Translating computational mechanisms to clinical applications \nComputational psychiatry is a rapidly growing field attempting to translate advances in computational neuroscience and machine learning into improved outcomes for patients suffering from mental illness. In this lecture\, I will provide an overview of recent approaches for translating computational research into an understanding of symptoms\, and mechanisms of treatments. I will start with two studies taking a computational approach to understanding symptoms of depression and anxiety: the selection of thoughts and the derivation of meaning and pleasure. I will then describe a recent series of studies that take a computational approach to understanding the active components of psychotherapy\, and finally finish with an applied example examining mechanisms and predictors of relapse after antidepressant discontinuation. Overall\, I hope to clarify the role computational approaches can play in identifying mechanisms\, and in harnessing these mechanisms for therapeutic purposes.
URL:https://arni-institute.org/event/quentin-huys-seminar-speaker/
LOCATION:To Be Determined
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240517T113000
DTEND;TZID=UTC:20240517T130000
DTSTAMP:20260403T161134
CREATED:20240506T215427Z
LAST-MODIFIED:20240513T233334Z
UID:834-1715945400-1715950800@arni-institute.org
SUMMARY:CTN: Wei Ji Ma
DESCRIPTION:Title: Efficient coding in reward neurons\n\nAbstract: Two of the greatest triumphs of computational neuroscience have been efficient coding accounts of tuning properties of sensory neurons and reinforcement learning accounts of dopaminergic neurons in the midbrain. At first glance\, these theories seem to have no connection\, but I will argue that they do. One can apply efficient coding principles to derive the optimal population of neurons to encode rewards drawn from a probability distribution. Similar to this optimal population\, dopaminergic reward prediction error neurons in the mouse have a broad distribution of thresholds. We can make further predictions: that neurons with higher thresholds have higher gain and that the asymmetry of their responses depends on the threshold. We also derive learning rules that can approximate the efficient code. Finally\, we apply the theory to monkey data. Taken together\, efficient coding might provide a normative underpinning to distributional reinforcement learning.
URL:https://arni-institute.org/event/wei-ji-ma/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240515T120000
DTEND;TZID=UTC:20240515T140000
DTSTAMP:20260403T161134
CREATED:20240429T202900Z
LAST-MODIFIED:20240509T224223Z
UID:819-1715774400-1715781600@arni-institute.org
SUMMARY:Multi-resource-cost Optimization for Neural Network Models Working Group (NNMS)
DESCRIPTION:Title: Scope of the working group\, example project\, and literature \nShort Description: From the lab of Nikolaus Kriegeskorte (Professor of Psychology and of Neuroscience in the Mortimer B. Zuckerman Mind Brain Behavior Institute)\, Eivinas Butkus (grad student) will show an example of a modeling project optimizing energetic demands along with accuracy in a vision task\, and Josh Ying (grad student) will give a sense of the literature \nMore about NNMS:\nNeural network models are typically set up with a fixed architecture that defines the number of nodes and the connectivity\, and are unrolled for a fixed number of timesteps to obtain a computational graph for backpropagation. This amounts to fixing the resources that a physical implementation in a biological brain or dedicated engineered system would require in terms of space (to accommodate nodes and connections)\, time (to execute the steps)\, and energy. The fixed architecture of neural network models allows us to limit the resource requirements and discover what level of performance is possible through optimization. However\, it makes it difficult to explore the tradeoffs between the multiple resources. For example\, would a smaller network that runs for more timesteps give preferable results according to a joint cost of nodes\, connections\, time\, energy\, and error? It would be useful to be able to flexibly trade off resources against each other and against task performance as part of the optimization of a single model\, rather than having to train many models (each with a fixed vector of costs) to explore the space of solutions. We will develop (1) ways to quantify space\, time\, and energy costs of neural network models and (2) differentiable objectives that enable efficient joint minimization of the costs of multiple resources. 
Such methods could help us understand biological neural mechanisms that emerge from particular profiles of resource costs and behavioral affordances\, and also help engineer more efficient AI for resource-limited devices.\n \nZoom Link: https://columbiauniversity.zoom.us/j/97052575063?pwd=SllDVFd4VlA2TnN4RDV3VVJ3b2lldz09
URL:https://arni-institute.org/event/multi-resource-cost-optimization-for-neural-networks-models-working-group-nnms/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240510T113000
DTEND;TZID=UTC:20240510T130000
DTSTAMP:20260403T161134
CREATED:20240502T215817Z
LAST-MODIFIED:20240507T192754Z
UID:828-1715340600-1715346000@arni-institute.org
SUMMARY:CTN: Adam Hantman
DESCRIPTION:Title: Neural basis for skilled movements \nAbstract: Generating behavior is an incredible achievement of the nervous system\, considering the range of possible actions and the complexity of musculoskeletal arrangements. Motor control involves understanding the surrounding environment\, selecting appropriate plans\, converting those plans into motor commands\, and adaptively reacting to feedback. This seminar will review efforts of the Hantman lab to dissect the neural circuits for skilled movements\, and will also feature new work examining the robustness and resilience of these motor systems.
URL:https://arni-institute.org/event/adam-hantman/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
END:VCALENDAR