BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250408T150000
DTEND;TZID=America/New_York:20250408T160000
DTSTAMP:20260424T120022Z
CREATED:20250404T132848Z
LAST-MODIFIED:20250404T132848Z
UID:1602-1744124400-1744128000@arni-institute.org
SUMMARY:ARNI Emerging Researchers Talk Series #1: Rahul Ramesh
DESCRIPTION:Title: Principles of Learning from Multiple Tasks \n\nAbstract: \n\nDeep networks are increasingly trained on data from multiple tasks with the goal of sharing synergistic information across related tasks. A language model\, for example\, is trained on 10 trillion tokens on tasks ranging from programming\, finance\, trivia to translation and a vision model is trained on over a billion images for tasks like object recognition\, depth prediction and semantic segmentation. With this motivation\, in this talk\, I will present the principles behind how to optimally train on multiple tasks and attempt to answer why we are able to learn on these tasks. In the first part of the talk we develop a theory that shows that dissimilar tasks fight for model capacity when trained together. We use this insight to design Model Zoo — a learner that splits its capacity to train many small models on related subsets of tasks — which is state-of-the-art for task-incremental continual learning. In the second half of this talk\, we show that typical tasks are highly redundant functions of the input\, i.e.\, the subspaces that vary the most and ones that vary the least are both highly predictive of typical tasks. This result suggests that there are many subspaces that can be used to solve typical tasks\, which allows us to learn a shared representation for these tasks. We believe that organisms choose to solve redundant tasks because they are the only ones that agents with bounded resources can readily learn. \n\nSpeaker Bio:\nRahul Ramesh is a 6th year PhD student at the University of Pennsylvania in the Department of Computer and Information Science and is advised by Pratik Chaudhari. He previously received his B.Tech from the Indian Institute of Technology Madras in Computer Science and Engineering. 
 Rahul is interested in using perspectives from statistical learning theory\, information theory and neuroscience to study self-supervised and multitask learning.\n\n\n\nZoom Link: https://columbiauniversity.zoom.us/j/91436346202?pwd=Fa0ohRBhckitrJqVF5gWrUPo5774U2.1
URL:https://arni-institute.org/event/arni-emerging-researchers-talk-series-1-rahul-ramesh/
END:VEVENT
END:VCALENDAR