BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250110T113000
DTEND;TZID=America/New_York:20250110T130000
DTSTAMP:20260426T181623Z
CREATED:20250108T164302Z
LAST-MODIFIED:20250108T164302Z
UID:1316-1736508600-1736514000@arni-institute.org
SUMMARY:CTN: Mazviita Chirimuuta
DESCRIPTION:Title: Neuromorphic Computing and the Significance of Medium Dependence\n\nAbstract:\nThe increasingly prohibitive cost of energy demanded by large artificial neural networks (ANNs) is giving new impetus to research and development on neuromorphic computing. Importantly\, there is an open question over how brain-like the hardware will have to be in order for an artificial intelligence to match the brain in its combination of robustness\, adaptability\, and energy efficiency. If biological cognition is heavily dependent on the specific properties of the material that instantiates it (i.e. living cells)\, then neuromorphic computing will have to merge with synthetic biology in order to achieve its ultimate goal of brain-like performance. If it is not so dependent\, neuromorphic computing holds out the promise of some gains in efficiency\, but there is no pressure for hardware to become increasingly neuro-mimetic in order to match the functionality of the nervous system. In this talk I introduce the concept of practical medium dependence/independence in order to explore the likelihood of these two scenarios. I present the argument that practically medium independent approaches to information processing\, such as digital computing\, are inherently less efficient than ones dependent on the specifics of implementing media\, and for that reason will not have evolved. This result has implications for how we rate the near-term possibility of human-like artificial general intelligence\, and offers a new way to understand how cognition is rooted\, more generally\, in biological processes.
URL:https://arni-institute.org/event/ctn-mazviita-chirimuuta/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250113T113000
DTEND;TZID=America/New_York:20250113T130000
DTSTAMP:20260426T181623Z
CREATED:20250108T204804Z
LAST-MODIFIED:20250108T204828Z
UID:1325-1736767800-1736773200@arni-institute.org
SUMMARY:CTN: Mehdi Azabou\, ARNI Postdoctoral Research Scientist
DESCRIPTION:Title: Building foundation models for neuroscience\n\nAbstract: Current methodologies for recording brain activity often provide narrow views of the brain’s function. This fragmentation of datasets has hampered the development of robust and comprehensive computational models that generalize across diverse conditions\, tasks\, and individuals. Our work is motivated by the need for a large-scale foundation model in neuroscience–one that can go beyond the limitations of single-dataset approaches and offer a fuller\, more comprehensive picture of brain function. We propose a novel\, scalable and unified approach for training on diverse neural datasets. We test our model across two large collections of data: (1) recordings from nonhuman primates performing diverse motor tasks\, spanning over 158 different sessions and over 27\,373 neural units\, and (2) the entirety of the Allen Institute’s Brain Observatory dataset\, containing responses from over 100\,000 neurons in 6 areas of the brains of mice\, recorded with two-photon calcium imaging while the mice viewed different types of visual stimuli.
URL:https://arni-institute.org/event/ctn-mehdi-azabou-arni-postdoctorate-research-scientists/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250117T113000
DTEND;TZID=America/New_York:20250117T130000
DTSTAMP:20260426T181623Z
CREATED:20250113T163553Z
LAST-MODIFIED:20250114T161426Z
UID:1338-1737113400-1737118800@arni-institute.org
SUMMARY:CTN: Adam Cohen
DESCRIPTION:Title: Mapping bioelectrical signals\, from dendrites to circuits\n\nAbstract:\nNeuronal dendrites are excitable\, but what are these excitations for? Are dendritic excitations involved in integration? Or in mediating back-propagation? What are their footprints\, and what patterns of spiking and synaptic inputs can activate them? We mapped bioelectrical signals throughout dendritic arbors of pyramidal cells in behaving mice and developed simple models relating dendritic biophysics to computation. I will also describe all-optical circuit mapping in behaving mice\, and experiments recording voltage simultaneously from hundreds of genetically defined neurons during behavior. These new data sets open possibilities for modeling how cellular intrinsic properties and local circuits process information.
URL:https://arni-institute.org/event/ctn-adam-cohen/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250124T113000
DTEND;TZID=America/New_York:20250124T130000
DTSTAMP:20260426T181623Z
CREATED:20250113T164851Z
LAST-MODIFIED:20250122T185351Z
UID:1341-1737718200-1737723600@arni-institute.org
SUMMARY:CTN: Jonathan Pillow
DESCRIPTION:Title: Disentangling the Roles of Distinct Cell Classes with Cell-Type Dynamical Systems\n\nAbstract:\nLatent dynamical systems have been widely used to characterize the dynamics of neural population activity in the brain. However\, these models typically ignore the fact that the brain contains multiple cell types\, which limits their ability to capture the functional roles of distinct cell classes or predict the effects of cell-specific perturbations. To overcome these limitations\, we introduce the “cell-type dynamical systems” (CTDS) model\, which extends latent linear dynamical systems to contain distinct latent variables for each cell class\, with appropriate sign constraints on the interactions between them. In this talk\, I will describe the CTDS model and show that fitting in the noiseless case can be reduced to non-negative matrix factorization. I will then show an application of a multi-region CTDS model to simultaneous recordings from rat frontal orienting fields (FOF) and anterior dorsal striatum (ADS) during an auditory decision-making task. Remarkably\, the model — fit only to unperturbed neural activity — predicts the time-dependent effects of different optogenetic perturbations on behavior\, specifically in FOF\, ADS\, and FOF-to-ADS axon terminals. I will close by discussing future directions and other applications for biologically-constrained dynamical models of neural activity and behavior.
URL:https://arni-institute.org/event/ctn-jonathan-pillow/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250127T113000
DTEND;TZID=America/New_York:20250127T113000
DTSTAMP:20260426T181623Z
CREATED:20250127T143630Z
LAST-MODIFIED:20250127T143903Z
UID:1453-1737977400-1737977400@arni-institute.org
SUMMARY:CTN: Monday Lab Kim Stachenfeld
DESCRIPTION:Title: Discovering Symbolic Cognitive Models from Human and Animal Behavior with CogFunSearch\n\nAbstract: A key goal of cognitive science is to discover mathematical models that describe how the brain implements cognitive processes. These models often take the form of short computer programs\, and constructing them typically requires a great deal of human effort and ingenuity. In this meeting\, I’ll share current results from our recent efforts to apply FunSearch [Romera-Paredes et al 2024] to the problem of discovering programs that reproduce the behavior of humans or other animals performing simple tasks. FunSearch is a recently developed tool that uses Large Language Models (LLMs) in an evolutionary algorithm to discover programs optimized for some objective. For our investigation\, we consider datasets from three species performing a classic reward-learning task that has been the focus of a great deal of modeling effort. Our approach reliably discovers models that outperform state-of-the-art cognitive models for each dataset. The discovered programs can readily be interpreted as computational cognitive models\, instantiating human-interpretable hypotheses about the learning and decision-making algorithms used by the brain. This is work that we’re wrapping up at DeepMind for ICML and prepping for journal submission\, so it’s a great time to get questions\, comments\, feedback\, criticisms\, and suggestions for new opportunities! We are also hoping to apply the approach to new tasks/datasets soon\, and I’d love to get ideas.
URL:https://arni-institute.org/event/ctn-monday-lab-kim-stachenfeld/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250129T140000
DTEND;TZID=America/New_York:20250129T150000
DTSTAMP:20260426T181623Z
CREATED:20250122T185251Z
LAST-MODIFIED:20250128T200112Z
UID:1411-1738159200-1738162800@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Title: Brain-like learning with exponentiated gradients and Learning to live with Dale’s principle: ANNs with separate excitatory and inhibitory units\nMeeting Summary: Our focus will be on answering the following question\, which may guide the next few meetings: To what degree are different learning algorithms entangled with a particular neural architecture? Can we find neural architectures that interact better with certain learning algorithms?\nMeeting link: http://meet.google.com/stu-ozga-syi\nMore about the Biological Learning Working Group: The biological learning WG is interested in better understanding how biological neural networks perform credit assignment (i.e. how they determine which synapses should change to get better at a task). The success of credit assignment algorithms in AI\, such as backpropagation-of-error\, has revealed that the traditional Hebbian plasticity rules used in computational neuroscience were not nearly as powerful as is possible for learning in distributed networks. This has spurred a new field of research in neuroscience that seeks to uncover the mechanisms used for credit assignment in the brain\, as many researchers expect they are quite powerful\, similar to those used in AI. The goal of this WG is to explore this new field of research and consider new potential directions for explaining credit assignment in the brain. Additionally\, this could inspire new mechanisms for credit assignment in AI that are more efficient from an energy perspective than backpropagation-of-error.
URL:https://arni-institute.org/event/biological-learning-working-group/
END:VEVENT
END:VCALENDAR