BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARNI
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20220101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20240315T113000
DTEND;TZID=UTC:20240315T113000
DTSTAMP:20260405T160623Z
CREATED:20240314T195510Z
LAST-MODIFIED:20240314T195946Z
UID:627-1710502200-1710502200@arni-institute.org
SUMMARY:Rui Ponte Costa
DESCRIPTION:Title: Brain-wide credit assignment: cortical and subcortical perspectives \nAbstract: The brain assigns credit to trillions of synapses remarkably well. How the brain achieves this feat is one of the great mysteries in neuroscience. Recently\, we have introduced Bursting cortico-cortical networks\, a computational model of hierarchical credit assignment that captures a large number of biological features while approximating deep learning algorithms (Greedy et al. NeurIPS 2022). I will show that\, in contrast to previous work\, this model (i) does not require a multi-phase learning process\, (ii) is consistent with experimental observations across multiple levels\, and (iii) provides efficient credit assignment across the cortical hierarchy. \nHowever\, these models often assume that behavioural feedback is readily available. How the brain learns efficiently despite the sparse nature of feedback remains unclear. Recently we have proposed that a subcortical region\, the cerebellum\, predicts behavioural feedback\, thereby decoupling learning in cortical networks from future feedback. We have introduced two views by which the cerebellum may help the cortex: (i) by driving cortical plasticity (Boven et al. Nature Comms 2023) or (ii) by driving cortical dynamics (Pemberton et al. bioRxiv). Together\, these two views suggest that cortico-cerebellar loops are a critical part of task acquisition\, switching\, and consolidation in the brain.
URL:https://arni-institute.org/event/rui-ponte-costa/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240307T133000
DTEND;TZID=UTC:20240307T144000
DTSTAMP:20260405T160623Z
CREATED:20240315T195437Z
LAST-MODIFIED:20240315T195437Z
UID:654-1709818200-1709822400@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: Paper Topic: https://arxiv.org/abs/1906.01076\nZoom: https://columbiauniversity.zoom.us/j/94783759415?pwd=cTlDTDdCVk9vdEV0QzRKL0hKQW1Kdz09
URL:https://arni-institute.org/event/continual-learning-working-group-2/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240229T133000
DTEND;TZID=UTC:20240229T144000
DTSTAMP:20260405T160623Z
CREATED:20240315T195658Z
LAST-MODIFIED:20240315T195658Z
UID:657-1709213400-1709217600@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: Paper Topic: https://arxiv.org/abs/2302.00487 \nZoom: https://columbiauniversity.zoom.us/j/97515072030?pwd=VGJONXR6bW9LVTN3VlZZSXdRZnNIdz09
URL:https://arni-institute.org/event/continual-learning-working-group-3/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240222T133000
DTEND;TZID=UTC:20240222T144000
DTSTAMP:20260405T160623Z
CREATED:20240315T195822Z
LAST-MODIFIED:20240315T195822Z
UID:659-1708608600-1708612800@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: Paper Topic: https://arxiv.org/abs/2302.00487\nZoom: https://columbiauniversity.zoom.us/j/3658091817?pwd=WHFJVzAwbDdQcFMzc2FreVplKzVMUT09
URL:https://arni-institute.org/event/continual-learning-working-group-4/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240220T130000
DTEND;TZID=UTC:20240220T141500
DTSTAMP:20260405T160623Z
CREATED:20240315T200440Z
LAST-MODIFIED:20240315T200440Z
UID:661-1708434000-1708438500@arni-institute.org
SUMMARY:Generative AI\, Free Speech & Public Discourse
DESCRIPTION:ARNI co-PI Kathy McKeown and ARNI faculty Carl Vondrick participate in\nPanel 1: Empirical and Technological Questions: Current Landscape\, Challenges\, and Opportunities\nLink: https://www.engineering.columbia.edu/symposium-generative-ai-free-speech-public-discourse\nArticle: https://www.engineering.columbia.edu/news/navigating-generative-ai-and-its-impact-future-public-discourse?utm_source=newsletter&utm_medium=email&utm_campaign=highlights030124
URL:https://arni-institute.org/event/generative-ai-freespeech-public-discourse/
LOCATION:Forum Auditorium\, 601 W 125th St\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240216T150000
DTEND;TZID=UTC:20240216T170000
DTSTAMP:20260405T160623Z
CREATED:20240315T200745Z
LAST-MODIFIED:20240315T200908Z
UID:665-1708095600-1708102800@arni-institute.org
SUMMARY:Animal Behavior Video Analysis Working Group
DESCRIPTION:Title: Mapping the landscape of social behavior using high-resolution 3D tracking of freely interacting animals \nPresenter: Ugne Klibaite\, PhD\nHarvard University\, Department of Organismic & Evolutionary Biology (PI\, Bence P. Ölveczky) \nAbstract: Social interaction is a fundamental component of animal behavior. However\, we lack tools to describe it with quantitative rigor\, limiting our understanding of its principles and of the neuropsychiatric disorders\, like autism\, that perturb it. To address these limitations\, my collaborators and I have developed a technique for high-resolution 3D tracking of freely interacting animals and their body-wide social touch patterns\, solving the challenging subject occlusion and part assignment problems using 3D geometric reasoning\, graph neural networks\, and semi-supervised learning. Using this technology\, I have collected and annotated over 34 million 3D postures in interacting rats\, featuring five new monogenic autism models lacking reports of social behavioral phenotypes. I will introduce a novel multi-scale approach which I have used to identify a rich landscape of stereotyped interactions\, synchrony\, and body contact across strains. This deep phenotyping approach revealed a spectrum of changes in rat autism models and in response to amphetamine\, and this framework has the potential to facilitate quantitative studies of social behaviors and their neurobiological underpinnings. \nJoin Zoom Meeting:\nhttps://columbiauniversity.zoom.us/j/94848687512?pwd=d0d2L20wSUdZWGZ4dytuZ1YyaEt3QT09 \nMeeting ID: 948 4868 7512\nPasscode: 446335
URL:https://arni-institute.org/event/animal-behavior-video-analysis-working-group/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240119T150000
DTEND;TZID=UTC:20240119T170000
DTSTAMP:20260405T160623Z
CREATED:20240315T201042Z
LAST-MODIFIED:20240315T201042Z
UID:668-1705676400-1705683600@arni-institute.org
SUMMARY:Animal Behavior Video Analysis Working Group
DESCRIPTION:Title: Multimodal Learning from Pixels to People \nPresenter: Carl Vondrick \nAbstract: People experience the world through modalities of sight\, sound\, words\, touch\, and more. By leveraging their natural relationships and developing multimodal learning methods\, my research creates artificial perception systems with diverse skills\, including spatial\, physical\, logical\, and cognitive abilities\, for flexibly analyzing visual data. This multimodal approach provides versatile representations for tasks like 3D reconstruction\, visual question answering\, and object recognition\, while offering inherent explainability and excellent zero-shot generalization across tasks. By closely integrating diverse modalities\, we can overcome key challenges in machine learning and enable new capabilities for computer vision\, especially for the many upcoming applications where trust is required. \nJoin Zoom Meeting:\nhttps://columbiauniversity.zoom.us/j/96127949475?pwd=TWxLa3A3a3lBRjdqbDBWMkRycHFMZz09 \nMeeting ID: 961 2794 9475\nPasscode: 446335
URL:https://arni-institute.org/event/animal-behavior-video-analysis-working-group-2/
LOCATION:CSB 480\, Mudd Building\, 500 W 120th Street
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20231201T150000
DTEND;TZID=UTC:20231201T170000
DTSTAMP:20260405T160623Z
CREATED:20240315T201133Z
LAST-MODIFIED:20240315T201133Z
UID:671-1701442800-1701450000@arni-institute.org
SUMMARY:Animal Behavior Video Analysis Working Group
DESCRIPTION:Title: Precise quantification of natural behavior with computer vision \nAbstract: To understand the neural control of movement\, cognition\, and social interaction\, we need to precisely quantify motor behaviors. Deep learning tools now make it possible to extract meaningful behavioral signals from raw videos at high spatiotemporal resolution. These technologies are gaining increasing adoption in systems neuroscience and are transforming the field in many ways. We will provide an overview of the field\, present the limitations of some of the standard approaches\, and present some of our own work on pose tracking (keypoint detection) and perhaps behavioral segmentation (discovering discrete behavioral motifs). We look forward to exploring fresh perspectives on this important problem. \nJoin Zoom Meeting:\nhttps://columbiauniversity.zoom.us/j/95557736296?pwd=V2tTNEVOellZMENGUDF5RXVwcUUyQT09 \nMeeting ID: 955 5773 6296\nPasscode: 446335
URL:https://arni-institute.org/event/animal-behavior-video-analysis-working-group-3/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
END:VCALENDAR