BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251105T113000
DTEND;TZID=America/New_York:20251105T123000
DTSTAMP:20260430T204042Z
CREATED:20251105T152843Z
LAST-MODIFIED:20251105T152843Z
UID:2028-1762342200-1762345800@arni-institute.org
SUMMARY:CTN: Yael Niv
DESCRIPTION:Seminar Time: 11:30am\nDate: Wed 11/5/25\nSeminar Location: JLG\, L5-084\nHost: Weijia Zhang\n\n\n\nTitle: Latent causes\, prediction errors\, and the organization of memory\n\nAbstract: No two events are alike. But still\, we learn\, which means that we implicitly decide what events are similar enough that experience with one can inform us about what to do in another. We have suggested that this relies on parsing of incoming information into “clusters” according to inferred hidden (latent) causes. Moreover\, we have suggested that unexpected information (that is\, a prediction error) is key to this separation into clusters. In this talk\, I will demonstrate these ideas through behavioral experiments showing evidence for clustering and illustrate the effects of prediction errors on the organization of memory. I will then tie the different findings together into a hypothesis about how information about events is organized in our brain.
URL:https://arni-institute.org/event/ctn-yael-niv/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251105T140000
DTEND;TZID=America/New_York:20251105T150000
DTSTAMP:20260430T204042Z
CREATED:20251022T211425Z
LAST-MODIFIED:20251028T143328Z
UID:2019-1762351200-1762354800@arni-institute.org
SUMMARY:Speaker: Bryan Li - ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Bio\nBryan Li is completing his PhD in NeuroAI at the University of Edinburgh\, under the supervision of Arno Onken and Nathalie Rochefort. His main PhD project focuses on building deep learning-based encoding models of the visual cortex that accurately predict neural activity in response to arbitrary visual stimuli. Recently\, he joined Dario Farina’s lab at Imperial College London as an Encode Fellow\, working on neuromotor interfacing and decoding.\n\nTitle (https://www.biorxiv.org/content/10.1101/2025.09.16.676524v2)\nMovie-trained transformer reveals novel response properties to dynamic stimuli in mouse visual cortex\n\nAbstract\nUnderstanding how the brain encodes complex\, dynamic visual stimuli remains a fundamental challenge in neuroscience. Here\, we introduce ViV1T\, a transformer-based model trained on natural movies to predict neuronal responses in mouse primary visual cortex (V1). ViV1T outperformed state-of-the-art models in predicting responses to both natural and artificial dynamic stimuli\, while requiring fewer parameters and reducing runtime. Despite being trained exclusively on natural movies\, ViV1T accurately captured core V1 properties\, including orientation and direction selectivity as well as contextual modulation\, even without explicit feedback mechanisms. ViV1T also revealed novel functional features. The model predicted a wider range of contextual responses when using natural and model-generated surround stimuli compared to traditional gratings\, with novel model-generated dynamic stimuli eliciting maximal V1 responses. ViV1T also predicted that dynamic surrounds elicited stronger contextual modulation than static surrounds. Finally\, the model identified a subpopulation of neurons that exhibit contrast-dependent surround modulation\, switching their response to surround stimuli from inhibition to excitation when contrast decreases. These predictions were validated through semi-closed-loop in vivo recordings. Overall\, ViV1T establishes a powerful\, data-driven framework for understanding how brain sensory areas process dynamic visual information across space and time.\n\nZoom link: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-frontier-models-for-neuroscience-and-behavior-working-group-2/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
END:VCALENDAR