BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251105T113000
DTEND;TZID=America/New_York:20251105T123000
DTSTAMP:20260505T102332Z
CREATED:20251105T152843Z
LAST-MODIFIED:20251105T152843Z
UID:2028-1762342200-1762345800@arni-institute.org
SUMMARY:CTN: Yael Niv
DESCRIPTION:Seminar Time: 11:30am\nDate: Fri 11/7/25\nSeminar Location: JLG\, L5-084\nHost: Weijia Zhang\n\nTitle: Latent causes\, prediction errors\, and the organization of memory\n\nAbstract: No two events are alike. But still\, we learn\, which means that we implicitly decide what events are similar enough that experience with one can inform us about what to do in another. We have suggested that this relies on parsing of incoming information into “clusters” according to inferred hidden (latent) causes. Moreover\, we have suggested that unexpected information (that is\, a prediction error) is key to this separation into clusters. In this talk\, I will demonstrate these ideas through behavioral experiments showing evidence for clustering and illustrate the effects of prediction errors on the organization of memory. I will then tie the different findings together into a hypothesis about how information about events is organized in our brain.
URL:https://arni-institute.org/event/ctn-yael-niv/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251105T140000
DTEND;TZID=America/New_York:20251105T150000
DTSTAMP:20260505T102332Z
CREATED:20251022T211425Z
LAST-MODIFIED:20251028T143328Z
UID:2019-1762351200-1762354800@arni-institute.org
SUMMARY:Speaker: Bryan Li - ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Bio\nBryan Li is completing his PhD in NeuroAI at the University of Edinburgh\, under the supervision of Arno Onken and Nathalie Rochefort. His main PhD project focuses on building deep learning-based encoding models of the visual cortex that accurately predict neural activity in response to arbitrary visual stimuli. Recently\, he joined Dario Farina’s lab at Imperial College London as an Encode Fellow\, working on neuromotor interfacing and decoding.\n\nTitle (https://www.biorxiv.org/content/10.1101/2025.09.16.676524v2)\nMovie-trained transformer reveals novel response properties to dynamic stimuli in mouse visual cortex\n\nAbstract\nUnderstanding how the brain encodes complex\, dynamic visual stimuli remains a fundamental challenge in neuroscience. Here\, we introduce ViV1T\, a transformer-based model trained on natural movies to predict neuronal responses in mouse primary visual cortex (V1). ViV1T outperformed state-of-the-art models in predicting responses to both natural and artificial dynamic stimuli\, while requiring fewer parameters and reducing runtime. Despite being trained exclusively on natural movies\, ViV1T accurately captured core V1 properties\, including orientation and direction selectivity as well as contextual modulation\, despite lacking explicit feedback mechanisms. ViV1T also revealed novel functional features. The model predicted a wider range of contextual responses when using natural and model-generated surround stimuli compared to traditional gratings\, with novel model-generated dynamic stimuli eliciting maximal V1 responses. ViV1T also predicted that dynamic surrounds elicited stronger contextual modulation than static surrounds. Finally\, the model identified a subpopulation of neurons that exhibit contrast-dependent surround modulation\, switching their response to surround stimuli from inhibition to excitation when contrast decreases. These predictions were validated through semi-closed-loop in vivo recordings. Overall\, ViV1T establishes a powerful\, data-driven framework for understanding how brain sensory areas process dynamic visual information across space and time.\n\nZoom link: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-frontier-models-for-neuroscience-and-behavior-working-group-2/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251107T160000
DTEND;TZID=America/New_York:20251107T170000
DTSTAMP:20260505T102332Z
CREATED:20251103T193100Z
LAST-MODIFIED:20251103T193100Z
UID:2024-1762531200-1762534800@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Continuation of prior meetings' discussion of benchmarks and competition proposals.\nZoom Link: upon request @ ARNI@columbia.edu
URL:https://arni-institute.org/event/arni-biological-learning-working-group-5/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251112T160000
DTEND;TZID=America/New_York:20251112T170000
DTSTAMP:20260505T102332Z
CREATED:20251112T151131Z
LAST-MODIFIED:20251112T151131Z
UID:2032-1762963200-1762966800@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group
DESCRIPTION:Next Meeting Info\n\nDate: Wednesday\, Nov 12\nTime: 4pm-5pm\nRoom: CEPSR 620\nZoom: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/arni-continual-learning-working-group-3/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251113T143000
DTEND;TZID=America/New_York:20251113T153000
DTSTAMP:20260505T102332Z
CREATED:20251103T193526Z
LAST-MODIFIED:20251103T193526Z
UID:2025-1763044200-1763047800@arni-institute.org
SUMMARY:Carl Vondrick Hosts Talk with Aaron Hertzmann (Adobe)
DESCRIPTION:Aaron Hertzmann\nWhy Do Pictures Work? Explanations From Real-World Vision\nSpeaker: Aaron Hertzmann (Adobe)\nHost: Carl Vondrick\nDate: Thursday\, November 13\, 2025\nTime: 2:30 PM\nLocation: CSB 453\n\nAbstract: I outline possible answers to the long-standing question of why pictures work: why can people look at a painting or photograph\, and see a depicted subject\, rather than just marks on a page or lights on a display? Observers with no prior experience with pictures can understand some kinds of pictures\, indicating that picture understanding is not solely a product of experience or culture. I argue that picture perception can be explained as a product of several properties of real-world vision. First\, the fact that humans can understand certain real-world phenomena—refraction\, reflection\, cast shadows—as simultaneously surface phenomena but also images of an underlying cause explains why we can see pictures as depictions and not just markings. Second\, the fact that viewers can understand real-world scenes with unfamiliar combinations of objects explains our ability to understand many different styles of depiction. For example\, we can understand black-and-white photos of people because\, in real-world vision\, we could recognize a familiar person who had been painted gray. Third\, our robustness to visual defects and other difficult viewing conditions explains our ability to understand styles of pictorial textures\, like paint strokes. Extensions of these basic ideas can explain depiction in many different visual styles\, including photographic tone reproduction\, line drawings\, silhouettes\, cartoons\, painterly styles\, and more. The proposed models of picture understanding could significantly inform future analysis of perceptual mechanisms\, picture aesthetics\, and the nature of different styles of depiction.\n\nZoom Link: Upon request @ arni@columbia.edu
URL:https://arni-institute.org/event/carl-vondick-hosts-talk-with-aaron-hertzmann-adobe/
LOCATION:CSB 453\, Mudd Building\, 500 W 120th Street
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251117T080000
DTEND;TZID=America/New_York:20251118T130000
DTSTAMP:20260505T102332Z
CREATED:20251112T151814Z
LAST-MODIFIED:20251112T151821Z
UID:2033-1763366400-1763470800@arni-institute.org
SUMMARY:ARNI Annual Retreat 2025
DESCRIPTION:Join us to celebrate the many accomplishments of year two as we carry our momentum into year three.\nWe anticipate engaging discussions in the working groups and panels as we explore future directions for ARNI.\nWe also want to highlight the participation of Bing Brunton\, Jim DiCarlo\, and Thomas Reardon from our External Advisory Board.\nBy registration only!
URL:https://arni-institute.org/event/arni-annual-retreat-2025/
ORGANIZER;CN="ARNI":MAILTO:arni@columbia.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251121T113000
DTEND;TZID=America/New_York:20251121T123000
DTSTAMP:20260505T102332Z
CREATED:20251105T152923Z
LAST-MODIFIED:20251118T184741Z
UID:2030-1763724600-1763728200@arni-institute.org
SUMMARY:CTN: Karel Svoboda
DESCRIPTION:Seminar Time: 11:30am\nDate: 11/21/25\nSeminar Location: JLG\, L5-084\nHost: Ji Xia\n\nTitle: Illuminating synaptic learning\n\nAbstract: How do synapses in the middle of the brain know how to adjust their weight to advance a behavioral goal? This is referred to as the synaptic ‘credit assignment problem’. A large variety of synaptic learning rules have been proposed\, mainly in the context of artificial neural networks. The most powerful learning rules (e.g. back-propagation of error) are thought to be biologically implausible\, whereas the widely studied biological learning rules (Hebbian) are insufficient for goal-directed learning. I will describe ongoing work\, both experimental and theoretical\, focused on understanding learning at the level of circuits and synapses in the motor cortex.
URL:https://arni-institute.org/event/ctn-karel-svoboda/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
END:VCALENDAR