BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARNI
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241101T113000
DTEND;TZID=America/New_York:20241101T130000
DTSTAMP:20260428T082545Z
CREATED:20241031T200759Z
LAST-MODIFIED:20241031T201035Z
UID:1131-1730460600-1730466000@arni-institute.org
SUMMARY:CTN: Jacob Macke
DESCRIPTION:Title: Building mechanistic models of neural computations with simulation-based machine learning\n\nAbstract: Experimental techniques now make it possible to measure the structure and function of neural circuits at an unprecedented scale and resolution. How can we leverage this wealth of data to understand how neural circuits perform computations underlying behaviour? A mechanistic understanding will require models that align with experimental measurements and biophysical mechanisms\, while also being capable of performing behaviorally relevant computations. Building such models has remained a central challenge.\nI will present our work on addressing this challenge: We have developed machine learning methods and differentiable simulators that make it possible to algorithmically identify models that link biophysical mechanisms\, neural data\, and behaviour. I will show how these approaches—in combination with modern connectomic measurements—make it possible to build large-scale mechanistic models of the fruit fly visual system\, and how such a model can make experimentally testable predictions for each neuron in the system.
URL:https://arni-institute.org/event/ctn-jacob/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241101T150000
DTEND;TZID=America/New_York:20241101T170000
DTSTAMP:20260428T082545Z
CREATED:20241016T230120Z
LAST-MODIFIED:20241023T190913Z
UID:1104-1730473200-1730480400@arni-institute.org
SUMMARY:ARNI Seminar Series Kick Off: Speaker Jim DiCarlo
DESCRIPTION:Title: Do contemporary\, machine-executable models (aka digital twins) of the primate ventral visual system unlock the ability to non-invasively\, beneficially modulate high level brain states? \nAbstract: \nIn this talk\, I will first briefly review the story of how neuroscience\, cognitive science and computer science (“AI”) converged to create specific\, image-computable\, deep neural network models intended to appropriately abstract\, emulate and explain the mechanisms of primate core visual object identification and categorization behaviors. Based on a large body of primate neurophysiological and behavioral data\, some of these network models are now the most accurate emulators of the primate ventral visual stream — they well-approximate both its internal neural mechanisms and how those mechanisms support the ability of humans and other primates to rapidly and accurately infer object identity\, position\, pose\, etc. from the set of pixels (image) received during typical natural viewing. \nBecause these leading neuroscientific emulator models — aka “digital twins” — are fully observable and machine-executable\, they offer predictive and potential application power that our field’s prior conceptual models did not. I will describe two recent examples from our team. First\, the current leading digital twins predict that the brain’s high level visual neurons (inferior temporal cortex\, IT) should be highly susceptible to “adversarial attacks” in which an agent (the adversary) aims to strongly disrupt the normal neural response (here\, neural firing rate) to any given natural image via small magnitude\, targeted changes to that image. We verified this surprising prediction in monkey IT neurons. Second\, we show how we can turn this result around and extend it: instead of making adversarial “attacks”\, we propose using digital twin models to support non-invasive\, beneficial brain modulation. \nSpecifically\, we show that we can use a digital twin to design spatial patterns of light energy that\, when applied to the organism’s retina in the context of ongoing natural visual processing\, result in precise modulation (i.e. rate bias) of the pattern of a population of IT neurons (where any intended modulation pattern is chosen ahead of time by the scientist). Because the IT visual neural populations are known to directly modulate downstream neural circuits involved in mood and anxiety\, we speculate that this could provide a new\, non-invasive application avenue of potential future human clinical benefit. \nZoom Link: https://columbiauniversity.zoom.us/j/97757217278?pwd=3iRcxbHOY4z4giEiEGo1peEC8EIfK1.1
URL:https://arni-institute.org/event/arni-seminar-series-kick-off-speaker-jim-dicarlo/
LOCATION:Zuckerman Institute – L7-119\, 3227 Broadway\, New York\, NY\, 10027\, United States
ORGANIZER;CN="ARNI":MAILTO:arni@columbia.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241106T123000
DTEND;TZID=America/New_York:20241106T130000
DTSTAMP:20260428T082545Z
CREATED:20241031T201241Z
LAST-MODIFIED:20241106T202632Z
UID:1138-1730896200-1730898000@arni-institute.org
SUMMARY:Continual Learning Working Group: Brainstorming
DESCRIPTION:Zoom Link: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/continual-learning-working-group-brainstorming/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241108T113000
DTEND;TZID=America/New_York:20241108T130000
DTSTAMP:20260428T082545Z
CREATED:20241031T201012Z
LAST-MODIFIED:20241106T202546Z
UID:1135-1731065400-1731070800@arni-institute.org
SUMMARY:CTN: Tanya Sharpee
DESCRIPTION:Seminar Time: 11:30am\nDate: 11/8/2024\nLocation: JLG\, L5-084 \nHost: Krishan Kumar\n\nTitle: Building mechanistic models of neural computations with simulation-based machine learning
URL:https://arni-institute.org/event/ctn-tanya-sharpee/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241113T120000
DTEND;TZID=America/New_York:20241113T130000
DTSTAMP:20260428T082545Z
CREATED:20241108T015156Z
LAST-MODIFIED:20241111T155937Z
UID:1151-1731499200-1731502800@arni-institute.org
SUMMARY:Continual Learning Working Group: Nikita Rajaneesh
DESCRIPTION:Title: Wandering Within a World \nA discussion of Wandering Within a World: Online Contextualized Few-Shot Learning\, a 2021 paper by our very own Rich Zemel that leverages contextual information in a continually changing environment to improve model performance in realistic settings. \nZoom Link: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/continual-learning-working-group-curriculum-review/
LOCATION:CEPSR 6LW4\, Computer Science Department 500 West 120 Street
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241115T113000
DTEND;TZID=America/New_York:20241115T130000
DTSTAMP:20260428T082545Z
CREATED:20241108T014804Z
LAST-MODIFIED:20241108T014804Z
UID:1148-1731670200-1731675600@arni-institute.org
SUMMARY:CTN: Catherine Hartley
DESCRIPTION:Title: TBD \nAbstract: TBD
URL:https://arni-institute.org/event/ctn-catherine-hartley/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241120T103000
DTEND;TZID=America/New_York:20241120T113000
DTSTAMP:20260428T082545Z
CREATED:20241113T161354Z
LAST-MODIFIED:20241114T183223Z
UID:1162-1732098600-1732102200@arni-institute.org
SUMMARY:CTN: Seminar Speaker Alessandro Ingrosso
DESCRIPTION:Title:\nStatistical mechanics of transfer learning in the proportional limit\n\nAbstract:\nTransfer learning (TL) is a well-established machine learning technique to boost the generalization performance on a specific (target) task using information gained from a related (source) task\, and it crucially depends on the ability of a network to learn useful features. I will present a recent work that leverages analytical progress in the proportional regime of deep learning theory (i.e. the limit where the size of the training set P and the size of the hidden layers N are taken to infinity keeping their ratio P/N finite) to develop a novel statistical mechanics formalism for TL in Bayesian neural networks. I’ll show how such single-instance Franz-Parisi formalism can yield an effective theory for TL in one-hidden-layer fully-connected neural networks. Unlike the (lazy-training) infinite-width limit\, where TL is ineffective\, in the proportional limit TL occurs due to a renormalized source-target kernel that quantifies their relatedness and determines whether TL is beneficial for generalization.
URL:https://arni-institute.org/event/ctn-seminar-speaker-alessandro-ingrosso/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
END:VCALENDAR