  • Lecture Series in AI: Richard Zemel

    Davis Auditorium, 530 W 120th St, New York, NY 10027

    Title: Integrating Past and Present in Continual Learning
    Abstract: Continual learning aims to bridge the gap between typical human and machine-learning environments. The continual setting does not have separate training and testing phases; instead, models are evaluated online while learning novel concepts and tasks. The most capable current AI systems struggle to…

  • Speaker: Josue Ortega Caro – ARNI Frontier Models for Neuroscience and Behavior Working Group

    Virtual

    Time: March 30th, 3pm EST
    Title: Large-scale models for spatiotemporal data
    Speaker: Josue Ortega Caro https://josueortc.github.io/
    Abstract: Spatiotemporal and multimodal datasets contain structured variability distributed across space, time, and measurement modality, motivating modeling approaches that can learn representations directly from large-scale data. Inspired by video foundation models, we study how the masked autoencoder training objective can…
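    The masked autoencoder objective mentioned in the abstract can be sketched in a few lines. This is a minimal illustrative version only, not the speaker's actual model: the function name, array shapes, and mask ratio are assumptions, and the "reconstruction" is taken as given rather than produced by a trained encoder-decoder.

    ```python
    import numpy as np

    def masked_autoencoder_loss(patches, reconstruction, mask_ratio=0.75, seed=0):
        """MAE-style objective: score the reconstruction only on masked patches.

        patches, reconstruction: (num_patches, patch_dim) arrays.
        A random subset of patch positions is hidden from the (hypothetical)
        encoder; the loss is the mean squared error over those masked positions
        only, so the model is rewarded for inferring missing content.
        """
        rng = np.random.default_rng(seed)
        num_patches = patches.shape[0]
        num_masked = int(mask_ratio * num_patches)
        mask = np.zeros(num_patches, dtype=bool)
        mask[rng.choice(num_patches, size=num_masked, replace=False)] = True
        loss = np.mean((reconstruction[mask] - patches[mask]) ** 2)
        return loss, mask
    ```

    Restricting the error to masked positions (rather than all patches) is the key design choice: it forces the model to predict unseen content from context, which is what makes the objective useful for learning representations from unlabeled spatiotemporal data.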

  • Speaker: Hadi Vafaii – ARNI Working Group on Multi-Resource-Cost Optimization of Neural Network Models

    Zuckerman Institute - L3-079 3227 Broadway, New York, NY, United States

    Location: ZI L3-079
    Time: 1:00pm
    Title: Metabolic cost of information processing in Poisson variational autoencoders
    Abstract: Computation in biological systems is fundamentally energy-constrained, yet standard theories of computation treat energy as freely available. Here, we argue that variational free energy minimization under a Poisson assumption offers a principled path toward an energy-aware theory of computation. Our…
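    The free-energy idea in the abstract can be illustrated with the closed-form KL divergence between two Poisson distributions. This is a generic sketch of the ingredients, not the speaker's model: the function names and the use of mean squared error for the reconstruction term are assumptions made for the example.

    ```python
    import numpy as np

    def poisson_kl(rate_post, rate_prior):
        """Elementwise KL(Pois(rate_post) || Pois(rate_prior)).

        For Poisson distributions this has the closed form
        lam * log(lam / r) - lam + r. It can be read as a metabolic cost:
        posterior firing rates that deviate from a cheap prior rate are
        penalized, linking inference quality to energy use.
        """
        return rate_post * np.log(rate_post / rate_prior) - rate_post + rate_prior

    def free_energy(x, x_recon, rate_post, rate_prior):
        """Variational free energy = reconstruction error + rate (KL) cost."""
        recon = np.mean((x - x_recon) ** 2)
        rate_cost = np.sum(poisson_kl(rate_post, rate_prior))
        return recon + rate_cost
    ```

    Minimizing this quantity trades off reconstruction accuracy against the cost of maintaining high firing rates, which is the sense in which free-energy minimization becomes "energy-aware."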

  • Speaker: Mengye Ren – ARNI Continual Learning Working Group Meeting

    CEPSR 620 Schapiro, 530 W. 120th St

    Mengye Ren will also be giving a talk on continual learning at the Zemel group meeting an hour prior (at 2pm), which working group attendees are welcome to join. Here's the abstract of his talk: Today's AI models primarily acquire knowledge through offline, i.i.d. learning. While in-context learning offers some capacity for online…

  • CTN: Jack Lindsey (Anthropic)

    Zuckerman Institute - L5-084 3227 Broadway, New York, NY, United States

    Title: The inner lives of language models
    Abstract: In recent years, LLMs have evolved from bad text-completion engines to decent chatbots to digital genies that work miracles on your computer (while making the occasional catastrophic error). The increasing sophistication of AI models’ behavior has been accompanied by a commensurate enrichment of their internal representations…

  • ARNI Distinguished Seminar Series: Ellie Pavlick (Brown University)

    Zuckerman Institute - Kavli Auditorium 9th Fl, 3227 Broadway, New York, NY

    Ellie Pavlick (Assistant Professor of Computer Science and Linguistics, Brown University, and Director, NSF Institute on Interaction for AI Assistants (ARIA))
    Location: ZI Kavli Auditorium, 9th Floor
    Time: 3:00pm
    Title: (How) Does AI Think?
    Abstract: The increasingly human-like behavior of AI has led to a fascination with ascribing it human-like internal properties -- notions like…

  • Speaker: Ziwei (Sara) Gong – ARNI Language and Vision Working Group

    Virtual

    Title: Decoding Human Emotions: From Psychological Theories to Multimodal NLP Models
    Abstract: Understanding and modeling human emotions is essential for natural language processing (NLP) applications, from conversational AI to mental health assessment…