BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARNI
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20230101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20240717T160000
DTEND;TZID=UTC:20240717T200000
DTSTAMP:20260501T002810Z
CREATED:20240712T212855Z
LAST-MODIFIED:20240712T212855Z
UID:998-1721232000-1721246400@arni-institute.org
SUMMARY:Zuckerman Institute Demo Day
DESCRIPTION:
URL:https://arni-institute.org/event/zuckerman-institute-demo-day/
LOCATION:Lightning AI\, 50 West 23 Street 7th FL\, New York\, NY\, 10010\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240725T150000
DTEND;TZID=UTC:20240725T170000
DTSTAMP:20260501T002810Z
CREATED:20240723T230443Z
LAST-MODIFIED:20240723T230443Z
UID:1006-1721919600-1721926800@arni-institute.org
SUMMARY:Dr. Richard Lange
DESCRIPTION:Title: “What Bayes can and cannot tell us about the neuroscience of vision” \nNikolaus Kriegeskorte’s group is hosting Dr. Richard Lange\, Assistant Professor in the Department of Computer Science at Rochester Institute of Technology. He will be giving a talk at the Zuckerman Institute.
URL:https://arni-institute.org/event/dr-richard-lange/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240730T150000
DTEND;TZID=UTC:20240730T170000
DTSTAMP:20260501T002810Z
CREATED:20240729T213123Z
LAST-MODIFIED:20240729T213418Z
UID:1009-1722351600-1722358800@arni-institute.org
SUMMARY:Continual Learning Working Group Talk
DESCRIPTION:Title: Continual learning\, machine self-reference\, and the problem of problem-awareness \nAbstract: Continual learning (CL) without forgetting has been a long-standing problem in machine learning with neural networks. Here I will bring a new perspective by looking at learning algorithms (LAs) as memory mechanisms with their own decision-making problem. I will present a natural solution to CL under this view: instead of handcrafting such LAs\, we metalearn continual in-context LAs using self-referential weight matrices. Experiments confirm that this method effectively achieves CL without forgetting\, outperforming handcrafted algorithms on classic benchmarks. While this is a promising result on its own\, in this talk I will go beyond this limited scope of CL. I will use this CL setting as an example to introduce a broader perspective of “problem awareness” in machine learning. I will argue that many prior CL methods fail at CL because their systems do not know what it means to continually learn without forgetting. I will show that the same argument can explain the previous failures of neural networks on other classic challenges historically pointed out by cognitive scientists in comparison with human intelligence\, such as systematic generalization and few-shot learning. I will highlight how similar metalearning methods provide a promising solution to these challenges too. \nBio: The speaker is a post-postdoc at the Center for Brain Science\, Harvard University.\nPreviously\, he was a postdoc and lecturer at the Swiss AI Lab IDSIA\, University of Lugano (Switzerland) from 2020 to 2023\, where he taught a popular course on practical deep learning. He received his PhD in Computer Science from RWTH Aachen University (Germany) in 2020\, and undergraduate and Master’s degrees in Applied Mathematics from École Centrale Paris and ENS Cachan (France). He was also a research intern at Google in NYC and Mountain View in 2017 and 2018.\nHe is broadly interested in the computational principles of learning\, memory\, perception\, self-reference\, and decision making as ingredients for building and understanding general-purpose intelligence. The scope of his research interests has expanded from language modeling (PhD) to general sequence and program learning (postdoc)\, and currently to neuroscience and cognitive science (post-postdoc).
URL:https://arni-institute.org/event/continual-learning-machine-self-reference-and-the-problem-of-problem-awareness/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
END:VCALENDAR