  • CTN: Andreas Tolias

    Zuckerman Institute - L5-084 3227 Broadway, New York, NY, United States

Title: Foundation models of the brain Abstract: ‘You … your memories and ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells …’ Crick’s words capture the profound challenge of decrypting the neural code. This challenge has long been hindered by our limited…

  • Speaker: Xuexin Wei – ARNI WG: Multi-resource-cost optimization of neural network models

    Zuckerman Institute - L5-084 3227 Broadway, New York, NY, United States

    Title: Constraints of efficient neural computation Abstract: Neural systems adapt to the statistical structure of the environment to support behavior. While it is generally recognized that such adaptation is subject to various biological constraints (such as noise, metabolism, wiring cost), how these constraints determine the optimal neural computation remains unclear. For the first part of…

  • CTN: Farzaneh Najafi

    Zuckerman Institute - L5-084 3227 Broadway, New York, NY, United States
  • Speaker: Katherine Xu – Language and Vision Working Group

    Virtual

    Title: Are Vision-Language Models Checking or Looking? Abstract: Today’s AI vision systems are trained on vast amounts of data, yet it remains unclear whether they simply retrieve memorized answers or actively reason. We conjecture that hallucinations and limited creativity in these models stem from an over-reliance on superficial "checking" rather than active "looking." Checking retrieves the…

  • Lecture Series in AI: Richard Zemel

    Davis Auditorium, 530 W 120th St, New York, NY 10027

    Title: Integrating Past and Present in Continual Learning Abstract: Continual learning aims to bridge the gap between typical human and machine-learning environments. The continual setting does not have separate training and testing phases, and instead models are evaluated online while learning novel concepts and tasks. The most capable current AI systems struggle to…

  • Speaker: Josue Ortega Caro – ARNI Frontier Models for Neuroscience and Behavior Working Group

    Virtual

    Time: March 30th, 3pm EST Title: Large-scale models for spatiotemporal data Speaker: Josue Ortega Caro https://josueortc.github.io/ Abstract: Spatiotemporal and multimodal datasets contain structured variability distributed across space, time, and measurement modality, motivating modeling approaches that can learn representations directly from large-scale data. Inspired by video foundation models, we study how the masked autoencoder training objective can…
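    As a rough illustration of the masked-autoencoder training objective the abstract refers to (not the speaker's actual model): patches of the input are hidden at random, and the reconstruction loss is computed only on the masked patches. All shapes, the masking ratio, and the zero-predictor stand-in below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def masked_mse_loss(x, reconstruction, mask):
        """MAE-style objective: mean squared error over masked patches only."""
        return float(np.mean((x[mask] - reconstruction[mask]) ** 2))

    # Toy batch: 8 clips, each flattened into 64 patch embeddings of dim 16.
    x = rng.normal(size=(8, 64, 16))

    # Hide ~75% of patches per clip, a common masked-autoencoder ratio.
    mask = rng.random(size=(8, 64)) < 0.75

    # A real model would encode the visible patches and decode the hidden ones;
    # here a zero predictor stands in for the decoder output.
    reconstruction = np.zeros_like(x)

    loss = masked_mse_loss(x, reconstruction, mask)
    ```

    A perfect decoder would drive this loss to zero; the gap from zero is the training signal, and only the masked positions contribute to it.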

  • Speaker: Hadi Vafaii – ARNI WG: Multi-resource-cost optimization of neural network models

    Zuckerman Institute - L3-079 3227 Broadway, New York, NY, United States

    Time: 1:00pm Title: Metabolic cost of information processing in Poisson variational autoencoders Abstract: Computation in biological systems is fundamentally energy-constrained, yet standard theories of computation treat energy as freely available. Here, we argue that variational free energy minimization under a Poisson assumption offers a principled path toward an energy-aware theory of computation. Our…
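    A minimal numerical sketch of the kind of energy-aware objective the abstract describes, not the authors' actual model: a Poisson negative log-likelihood of spike counts plus a linear firing-rate penalty standing in for metabolic (per-spike) cost. The function names and the cost coefficient are illustrative assumptions.

    ```python
    import numpy as np
    from math import lgamma

    def poisson_nll(counts, rates):
        """Negative log-likelihood of spike counts under independent Poisson rates."""
        log_fact = np.vectorize(lgamma)(counts + 1.0)  # log(counts!)
        return float(np.sum(rates - counts * np.log(rates) + log_fact))

    def energy_aware_objective(counts, rates, cost_per_spike=0.1):
        """Poisson NLL plus a linear rate penalty as a metabolic-cost proxy."""
        return poisson_nll(counts, rates) + cost_per_spike * float(np.sum(rates))

    counts = np.array([2.0, 0.0, 5.0])   # observed spike counts
    rates = np.array([2.0, 0.5, 4.0])    # model firing rates (spikes/bin)

    objective = energy_aware_objective(counts, rates)
    ```

    With `cost_per_spike > 0`, the objective trades reconstruction fidelity against total firing, so the optimum shifts toward lower rates than the pure maximum-likelihood solution.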

  • Speaker: Mengye Ren – ARNI Continual Learning Working Group Meeting

    CEPSR (Schapiro) Room 620, 530 W. 120th St

    Mengye will also be giving a talk on continual learning at the Zemel group meeting an hour prior (at 2pm); working group attendees are welcome to join if interested. Here's the abstract of his talk: Today's AI models primarily acquire knowledge through offline, i.i.d. learning. While in-context learning offers some capacity for online…

  • CTN: Jack Lindsey (Anthropic)

    Zuckerman Institute - L5-084 3227 Broadway, New York, NY, United States

    Title: The inner lives of language models Abstract: In recent years, LLMs have evolved from bad text-completion engines to decent chatbots to digital genies that work miracles on your computer (while making the occasional catastrophic error). The increasing sophistication of AI models’ behavior has been accompanied by a commensurate enrichment of their internal representations…