BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20270314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20271107T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260123T113000
DTEND;TZID=America/New_York:20260123T130000
DTSTAMP:20260430T085242Z
CREATED:20260113T192529Z
LAST-MODIFIED:20260121T163725Z
UID:2193-1769167800-1769173200@arni-institute.org
SUMMARY:CTN: Scott Linderman
DESCRIPTION:Title: When and How to Parallelize Seemingly Sequential Models\n \nAbstract: Transformers have become the de facto model for sequential data in large part because they are well adapted to modern hardware: at training time\, the loss can be evaluated in parallel over the sequence length on GPUs and TPUs. By contrast\, evaluating nonlinear recurrent neural networks (RNNs) appears to be an inherently sequential problem. However\, recent advances like DEER (arXiv:2309.12252) and DeepPCR (arXiv:2309.16318) have shown that evaluating a nonlinear recursion can be recast as solving a parallelizable optimization problem\, and this approach can sometimes yield dramatic speed-ups in wall-clock time. Still\, the factors that govern the difficulty of these optimization problems remain unclear\, limiting broader adoption of the technique. I will present a recent line of work from my lab that further develops these methods in both theory and practice. We establish a precise relationship between the dynamics of a nonlinear system and the conditioning of its corresponding optimization formulation. We show that the predictability of a system\, defined as the degree to which small perturbations in state influence future behavior\, impacts the number of optimization steps required for evaluation. In predictable systems\, the state trajectory can be computed in O(log2T) time\, where T is the sequence length\, a major improvement over the conventional sequential approach. In contrast\, chaotic or unpredictable systems exhibit poor conditioning\, with the consequence that parallel evaluation converges too slowly to be useful. We validate our claims through extensive experiments\, with a particular emphasis on parallelizing nonlinear RNNs and Markov chain Monte Carlo (MCMC) algorithms for Bayesian statistics. I will provide practical guidance on when nonlinear dynamical systems can be efficiently parallelized and highlight predictability as a key design principle for parallelizable models.
URL:https://arni-institute.org/event/ctn-scott-linderman/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260123T120000
DTEND;TZID=America/New_York:20260123T130000
DTSTAMP:20260430T085242Z
CREATED:20260113T193600Z
LAST-MODIFIED:20260113T193600Z
UID:2194-1769169600-1769173200@arni-institute.org
SUMMARY:Language and Vision Working Group
DESCRIPTION:Initial Meeting! \nAbout: \nThe ARNI Language & Vision Working Group aims to bring together researchers across neuroscience\, cognitive science\, computer science\, and AI to collaboratively advance our understanding of how humans and machines construct multimodal experiences. Its goal is to create a space for discussing ongoing language- and vision-focused projects\, identifying natural points of overlap\, and transforming them into larger\, interdisciplinary initiatives. Grounded in the idea that language and vision form a dynamic\, symbiotic system rather than isolated modules\, the group seeks to explore how this integration is represented in the brain and in the machine. Strengthening collaboration between these domains is essential for building the next generation of AI systems that learn from continual\, multimodal input\, reflect human cognitive principles\, and ultimately support real-world human needs. \nQuestions: Contact Anna Krason (akrason@gc.cuny.edu) \nZoom: available upon request via arni@columbia.edu
URL:https://arni-institute.org/event/language-and-vision-working-group/
END:VEVENT
END:VCALENDAR