CTN: Guillaume Hennequin
Zuckerman Institute - L5-084, 3227 Broadway, New York, NY, United States
Title: A recurrent network model of planning explains hippocampal replay and human behaviour
Abstract: When faced with a novel situation, humans often spend substantial periods of time contemplating possible futures. For such planning to be rational, the benefits to behaviour must compensate for the time spent thinking. I will show how we recently captured these features…
CTN: Bob Datta (Canceled)
Zuckerman Institute - L5-084, 3227 Broadway, New York, NY, United States
Title and Abstract: TBD
CTN: Stefano Fusi
Zuckerman Institute - L5-084, 3227 Broadway, New York, NY, United States
Title: The Geometry of Abstraction
Abstract: I'll first discuss the theoretical framework introduced in Bernardi et al. (2020, Cell), in which we propose a possible definition of abstract representations. I'll go into the details of the most up-to-date conceptual framework, discuss the computational relevance of the representational geometry and the cross-validated measures of representational geometry that we…
CTN: Peter Dayan
Jerome L. Greene Science Center, 3227 Broadway, 9th FL Lecture Hall, New York, NY, United States
Title: Risking your Tail: Curiosity, Danger & Exploration
Abstract: Risk and reward are critical balancing determinants of adaptive behaviour, associated respectively with neophobia and neophilia in the case of exploration. Individuals differ considerably in how they engage with novelty, with substantial consequences for what they are able to learn. Here, we consider…
Zuckerman Institute Demo Day
Lightning AI, 50 West 23 Street, 7th FL, New York, NY, United States
Dr. Richard Lange
Zuckerman Institute - L3-079, 3227 Broadway, New York, NY, United States
Title: "What Bayes can and cannot tell us about the neuroscience of vision"
Nikolaus Kriegeskorte's group is hosting Dr. Richard Lange, Assistant Professor in the Department of Computer Science at the Rochester Institute of Technology. He will give a talk at the Zuckerman Institute.
Continual Learning Working Group Talk
CEPSR 620 Schapiro, 530 W. 120th St
Title: Continual learning, machine self-reference, and the problem of problem-awareness
Abstract: Continual learning (CL) without forgetting has been a long-standing problem in machine learning with neural networks. Here I will bring a new perspective by looking at learning algorithms (LAs) as memory mechanisms with their own decision-making problem. I will present a natural solution to CL…
CTN: Claudia Clopath
Zuckerman Institute - L5-084, 3227 Broadway, New York, NY, United States
Title: Feedback-based motor control can guide plasticity and drive rapid learning
Abstract: Animals use afferent feedback to rapidly correct ongoing movements in the presence of a perturbation. Repeated exposure to a predictable perturbation leads to behavioural adaptation that counteracts its effects. Primary motor cortex (M1) is intimately involved in both processes, integrating inputs from various sensorimotor brain…
CTN: Sebastian Seung
Zuckerman Institute - L5-084, 3227 Broadway, New York, NY, United States
Title: Insights into vision from interpreting a neuronal wiring diagram
Host: Marcus Triplett
Abstract: In 2023, the FlyWire Consortium released the neuronal wiring diagram of an adult fly brain. This contains as a corollary the first complete wiring diagram of a visual system, which has been used to identify all 200+ cell types that are intrinsic to the…
CTN: Stephanie Palmer
Zuckerman Institute - L5-084, 3227 Broadway, New York, NY, United States
Title: How behavioral and evolutionary constraints sculpt early visual processing
Abstract: Biological systems must selectively encode partial information about the environment, as dictated by the capacity constraints at work in all living organisms. For example, we cannot see every feature of the light field that reaches our eyes; temporal resolution is limited by transmission noise and delays,…
Continual Learning Working Group: Kick Off
CEPSR 620 Schapiro, 530 W. 120th St
Speaker: Mengye Ren
Title: Lifelong and Human-like Learning in Foundation Models
Abstract: Real-world agents, including humans, learn from online, lifelong experiences. However, today's foundation models primarily acquire knowledge through offline, i.i.d. learning, while relying on in-context learning for most online adaptation. It is crucial to equip foundation models with lifelong and human-like learning abilities to enable more flexible…
ARNI NSF Site Visit
Innovation Hub, Tang Family Hall, 2276 12th Avenue, Floor 02
NSF Site Visit: The NSF team will evaluate the progress and achievements of ARNI's projects to date and provide recommendations to steer future directions and funding for the project. If you are interested in learning more about ARNI overall, join this Zoom link from 9am to 12pm or 2pm to 4:30pm.