BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARNI
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20230101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240906T113000
DTEND;TZID=America/New_York:20240906T130000
DTSTAMP:20260430T173506Z
CREATED:20240903T194843Z
LAST-MODIFIED:20240914T042726Z
UID:1019-1725622200-1725627600@arni-institute.org
SUMMARY:CTN: Sebastian Seung
DESCRIPTION:Title:  Insights into vision from interpreting a neuronal wiring diagram\nHost: Marcus Triplett \nAbstract:  In 2023\, the FlyWire Consortium released the neuronal wiring diagram of an adult fly brain. This contains as a corollary the first complete wiring diagram of a visual system\, which has been used to identify all 200+ cell types that are intrinsic to the Drosophila optic lobe. About half of these cell types were previously unknown\, and less than 20% have ever been recorded by a physiologist. I will argue that plausible functions for many cell types can be guessed by interpreting the wiring diagram.
URL:https://arni-institute.org/event/cnt-sebastian-seung/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240913T113000
DTEND;TZID=UTC:20240913T130000
DTSTAMP:20260430T173506Z
CREATED:20240910T180255Z
LAST-MODIFIED:20240910T180503Z
UID:1032-1726227000-1726232400@arni-institute.org
SUMMARY:CTN: Stephanie Palmer
DESCRIPTION:Title: How behavioral and evolutionary constraints sculpt early visual processing \nAbstract: Biological systems must selectively encode partial information about the environment\, as dictated by the capacity constraints at work in all living organisms. For example\, we cannot see every feature of the light field that reaches our eyes; temporal resolution is limited by transmission noise and delays\, and spatial resolution is limited by the finite number of photoreceptors and output cells in the retina. Classical efficient coding theory describes how sensory systems can maximize information transmission given such capacity constraints\, but it treats all input features equally. Not all inputs are\, however\, of equal value to the organism. Our work quantifies whether and how the brain selectively encodes stimulus features\, specifically predictive features\, that are most useful for fast and effective movements. We have shown that efficient predictive computation starts at the earliest stages of the visual system\, in the retina. We borrow techniques from statistical physics and information theory to assess how we get terrific\, predictive vision from these imperfect (lagged and noisy) component parts. In broader terms\, we aim to build a more complete theory of efficient encoding in the brain\, and along the way have found some intriguing connections between formal notions of coarse graining in biology and physics.
URL:https://arni-institute.org/event/stephanie-palmer/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240913T153000
DTEND;TZID=America/New_York:20240913T170000
DTSTAMP:20260430T173506Z
CREATED:20240905T195120Z
LAST-MODIFIED:20240914T042826Z
UID:1023-1726241400-1726246800@arni-institute.org
SUMMARY:Continual Learning Working Group: Kick Off
DESCRIPTION:Speaker: Mengye Ren\n\n\nTitle: Lifelong and Human-like Learning in Foundation Models\n\nAbstract: Real-world agents\, including humans\, learn from online\, lifelong experiences. However\, today’s foundation models primarily acquire knowledge through offline\, iid learning\, while relying on in-context learning for most online adaptation. It is crucial to equip foundation models with lifelong and human-like learning abilities to enable more flexible use of AI in real-world applications. In this talk\, I will discuss recent works exploring interesting phenomena in foundation models when learning in online\, structured environments. Notably\, foundation models exhibit anticipatory and semantically-aware memorization and forgetting behaviors. Furthermore\, I will introduce a new method that combines pretraining and meta-learning for learning and consolidating new concepts in large language models. This approach has the potential to lead to future foundation models with incremental consolidation and abstraction capabilities.
URL:https://arni-institute.org/event/continual-learning-working-group-kick-off/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240916T080000
DTEND;TZID=America/New_York:20240916T163000
DTSTAMP:20260430T173506Z
CREATED:20240913T190408Z
LAST-MODIFIED:20240914T042847Z
UID:1044-1726473600-1726504200@arni-institute.org
SUMMARY:ARNI NSF Site Visit
DESCRIPTION:NSF Site Visit – The NSF team will evaluate the progress and achievements of ARNI’s projects to date and provide recommendations to steer future directions and funding for the project. \nIf you are interested in learning more about ARNI overall\, join this Zoom link from 9am to 12pm or 2pm to 4:30pm.
URL:https://arni-institute.org/event/arni-nsf-site-visit/
LOCATION:Innovation Hub\, Tang Family Hall\, 2276 12th Avenue\, Floor 2
ORGANIZER;CN="ARNI":MAILTO:arni@columbia.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240920T113000
DTEND;TZID=America/New_York:20240920T130000
DTSTAMP:20260430T173506Z
CREATED:20240910T180610Z
LAST-MODIFIED:20240917T214456Z
UID:1036-1726831800-1726837200@arni-institute.org
SUMMARY:CTN: Eva Dyer
DESCRIPTION:Title: Large-scale pretraining on neural data allows for transfer across individuals\, tasks and species \nAbstract: As neuroscience datasets grow in size and complexity\, integrating diverse data sources to achieve a comprehensive understanding of brain function presents both an opportunity and a challenge. In this talk\, I will introduce our approach to developing a multi-source foundation model for neuroscience\, utilizing large-scale pretraining on neural data from various tasks\, brain regions\, and species. These models are designed to enable seamless transfer learning across individuals\, tasks\, and species\, thereby enhancing data efficiency and advancing the capabilities of neural decoding technologies. By integrating diverse datasets\, our aim is to uncover the common neural functions that underlie a wide range of tasks and brain regions\, providing a deeper understanding of brain function and informing future brain-machine interface applications. \nZoom:\nhttps://columbiauniversity.zoom.us/j/97505761667?pwd=KkvqBSag7VPFebf8eyqKpqvdVPbaHn.1\npasscode: ctn
URL:https://arni-institute.org/event/ctn-eva-dyer/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240920T153000
DTEND;TZID=America/New_York:20240920T170000
DTSTAMP:20260430T173506Z
CREATED:20240905T195458Z
LAST-MODIFIED:20240917T214549Z
UID:1025-1726846200-1726851600@arni-institute.org
SUMMARY:Continual Learning Working Group: Haozhe Shan
DESCRIPTION:Speaker: Haozhe Shan \n\nTitle: A theory of continual learning in deep neural networks: task relations\, network architecture and learning procedure\n\nAbstract: Imagine listening to this talk and afterwards forgetting everything else you’ve ever learned. This absurd scenario would be commonplace if the brain could not perform continual learning (CL) – acquiring new skills and knowledge without dramatically forgetting old ones. Ubiquitous and essential in our daily life\, CL has proven a daunting computational challenge for neural networks (NN) in machine learning. When is CL especially easy or difficult for neural systems\, and why?\n\nTowards answering these questions\, we developed a statistical mechanics theory of CL dynamics in deep NNs. The theory exactly describes how the network’s input-output mapping evolves as it learns a sequence of tasks\, as a function of the training data\, NN architecture\, and the strength of a penalty applied to between-task weight changes. We first analyzed how task relations affect CL performance\, finding that they can be efficiently described by two metrics: similarity between inputs from two tasks in the NN’s feature space (“input overlap”) and consistency of input-output rules of different tasks (“rule congruency”). Higher input overlap leads to faster forgetting while lower congruency leads to stronger asymptotic forgetting – predictions which we validated with both synthetic tasks and popular benchmark datasets. Surprisingly\, we found that increasing the network depth reshapes geometry of the network’s feature space to decrease input overlap between tasks and slow forgetting. The reduced cross-task overlap in deeper networks also leads to less anterograde interference during CL but at the same time hinders their ability to accumulate knowledge across tasks. Finally\, our theory can well match CL dynamics in NNs trained with stochastic gradient descent (SGD). Using noisier\, faster learning during CL is equivalent to weakening the weight-change penalty. Link to preprint: https://arxiv.org/abs/2407.10315. \nBio: Haozhe Shan joined Columbia University as an ARNI Postdoctoral Fellow in August 2024. He recently received a Ph.D. in Neuroscience from Harvard\, advised by Haim Sompolinsky. His research applies quantitative tools from physics\, statistics and other fields to discover computational principles behind neural systems\, both biological and artificial. A recent research interest is the ability of neural systems to continually learn and perform multiple tasks in a flexible manner. \nZoom Link: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/continual-learning-working-group-haozhe-shan/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240925T140000
DTEND;TZID=America/New_York:20240925T163000
DTSTAMP:20260430T173506Z
CREATED:20240930T175148Z
LAST-MODIFIED:20240930T175148Z
UID:1078-1727272800-1727281800@arni-institute.org
SUMMARY:Multi-resource-cost Optimization for Neural Networks Models Working Group (NNMS): Tom Griffiths
DESCRIPTION:Title: Bounded optimality: A cognitive perspective on neural computation with resource limitations
URL:https://arni-institute.org/event/multi-resource-cost-optimization-for-neural-networks-models-working-group-nnms-tom-griffiths/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
END:VCALENDAR