BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240920T113000
DTEND;TZID=America/New_York:20240920T130000
DTSTAMP:20260504T211113Z
CREATED:20240910T180610Z
LAST-MODIFIED:20240917T214456Z
UID:1036-1726831800-1726837200@arni-institute.org
SUMMARY:CTN: Eva Dyer
DESCRIPTION:Title: Large-scale pretraining on neural data allows for transfer across individuals\, tasks and species \nAbstract: As neuroscience datasets grow in size and complexity\, integrating diverse data sources to achieve a comprehensive understanding of brain function presents both an opportunity and a challenge. In this talk\, I will introduce our approach to developing a multi-source foundation model for neuroscience\, utilizing large-scale pretraining on neural data from various tasks\, brain regions\, and species. These models are designed to enable seamless transfer learning across individuals\, tasks\, and species\, thereby enhancing data efficiency and advancing the capabilities of neural decoding technologies. By integrating diverse datasets\, we aim to uncover the common neural functions that underlie a wide range of tasks and brain regions\, providing a deeper understanding of brain function and informing future brain-machine interface applications. \nZoom:\nhttps://columbiauniversity.zoom.us/j/97505761667?pwd=KkvqBSag7VPFebf8eyqKpqvdVPbaHn.1\npasscode: ctn
URL:https://arni-institute.org/event/ctn-eva-dyer/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240920T153000
DTEND;TZID=America/New_York:20240920T170000
DTSTAMP:20260504T211113Z
CREATED:20240905T195458Z
LAST-MODIFIED:20240917T214549Z
UID:1025-1726846200-1726851600@arni-institute.org
SUMMARY:Continual Learning Working Group: Haozhe Shan
DESCRIPTION:Speaker: Haozhe Shan \n\nTitle: A theory of continual learning in deep neural networks: task relations\, network architecture and learning procedure\n\nAbstract: Imagine listening to this talk and afterwards forgetting everything else you’ve ever learned. This absurd scenario would be commonplace if the brain could not perform continual learning (CL) – acquiring new skills and knowledge without dramatically forgetting old ones. Ubiquitous and essential in our daily life\, CL has proven a daunting computational challenge for neural networks (NN) in machine learning. When is CL especially easy or difficult for neural systems\, and why?\n\nTowards answering these questions\, we developed a statistical mechanics theory of CL dynamics in deep NNs. The theory exactly describes how the network’s input-output mapping evolves as it learns a sequence of tasks\, as a function of the training data\, NN architecture\, and the strength of a penalty applied to between-task weight changes. We first analyzed how task relations affect CL performance\, finding that they can be efficiently described by two metrics: similarity between inputs from two tasks in the NN’s feature space (“input overlap”) and consistency of input-output rules of different tasks (“rule congruency”). Higher input overlap leads to faster forgetting\, while lower congruency leads to stronger asymptotic forgetting – predictions which we validated with both synthetic tasks and popular benchmark datasets. Surprisingly\, we found that increasing the network depth reshapes the geometry of the network’s feature space to decrease input overlap between tasks and slow forgetting. The reduced cross-task overlap in deeper networks also leads to less anterograde interference during CL but at the same time hinders their ability to accumulate knowledge across tasks. Finally\, our theory closely matches CL dynamics in NNs trained with stochastic gradient descent (SGD). Using noisier\, faster learning during CL is equivalent to weakening the weight-change penalty. Link to preprint: https://arxiv.org/abs/2407.10315. \nBio: Haozhe Shan joined Columbia University as an ARNI Postdoctoral Fellow in August 2024. He recently received a Ph.D. in Neuroscience from Harvard\, advised by Haim Sompolinsky. His research applies quantitative tools from physics\, statistics and other fields to discover computational principles behind neural systems\, both biological and artificial. A recent research interest is the ability of neural systems to continually learn and perform multiple tasks in a flexible manner. \nZoom Link: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/continual-learning-working-group-haozhe-shan/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
END:VCALENDAR