BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARNI
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250408T150000
DTEND;TZID=America/New_York:20250408T160000
DTSTAMP:20260403T154333Z
CREATED:20250404T132848Z
LAST-MODIFIED:20250404T132848Z
UID:1602-1744124400-1744128000@arni-institute.org
SUMMARY:ARNI Emerging Researchers Talk Series #1: Rahul Ramesh
DESCRIPTION:Title: Principles of Learning from Multiple Tasks \n\nAbstract: \n\nDeep networks are increasingly trained on data from multiple tasks with the goal of sharing synergistic information across related tasks. A language model\, for example\, is trained on 10 trillion tokens on tasks ranging from programming\, finance\, and trivia to translation\, and a vision model is trained on over a billion images for tasks like object recognition\, depth prediction\, and semantic segmentation. With this motivation\, in this talk I will present the principles behind how to optimally train on multiple tasks and attempt to answer why we are able to learn these tasks. In the first part of the talk\, we develop a theory showing that dissimilar tasks fight for model capacity when trained together. We use this insight to design Model Zoo — a learner that splits its capacity to train many small models on related subsets of tasks — which is state-of-the-art for task-incremental continual learning. In the second half of the talk\, we show that typical tasks are highly redundant functions of the input\, i.e.\, the subspaces that vary the most and the ones that vary the least are both highly predictive of typical tasks. This result suggests that there are many subspaces that can be used to solve typical tasks\, which allows us to learn a shared representation for these tasks. We believe that organisms choose to solve redundant tasks because they are the only ones that agents with bounded resources can readily learn. \n\nSpeaker Bio:\nRahul Ramesh is a 6th-year PhD student at the University of Pennsylvania in the Department of Computer and Information Science\, advised by Pratik Chaudhari. He previously received his B.Tech in Computer Science and Engineering from the Indian Institute of Technology Madras. Rahul is interested in using perspectives from statistical learning theory\, information theory\, and neuroscience to study self-supervised and multitask learning.\n\nZoom Link: https://columbiauniversity.zoom.us/j/91436346202?pwd=Fa0ohRBhckitrJqVF5gWrUPo5774U2.1
URL:https://arni-institute.org/event/arni-emerging-researchers-talk-series-1-rahul-ramesh/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250405T090000
DTEND;TZID=America/New_York:20250405T170000
DTSTAMP:20260403T154333Z
CREATED:20250312T151528Z
LAST-MODIFIED:20250312T172239Z
UID:1567-1743843600-1743872400@arni-institute.org
SUMMARY:Girls' Science Day
DESCRIPTION:ARNI is committed to promoting science education among New York City’s youth. This year\, ARNI is supporting Girls’ Science Day on April 5\, 2025. \nLocation: TBD \nMission\nGirls’ Science Day at Columbia University seeks to champion the advancement of women and underrepresented groups in the fields of science\, technology\, engineering\, and math (STEM). By offering a full day of hands-on experiments\, we aim to provide middle school girls (5th–8th grade) with an engaging introduction to science and spark their curiosity and confidence so they can envision themselves as the next generation of STEM explorers. \nPurpose\nGirls’ Science Day is designed to offer participants immersive\, hands-on experiments led by Columbia students. It serves as a lively\, fun\, and accessible entry point into STEM\, providing opportunities for active learning and reflection. \nGoals\n1. Empower Young Scientists: Provide a welcoming space and foster curiosity and excitement about science among middle school girls.\n2. Provide Mentorship: Connect participants with enthusiastic Columbia volunteers — undergraduates\, graduate students\, and postdocs — who can share personal journeys and inspiration.\n3. Strengthen Community Ties: Keep building our local STEM network through close collaboration with parents\, teachers\, and NYC tri-state area schools.
URL:https://arni-institute.org/event/girls-science-day/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250404T150000
DTEND;TZID=America/New_York:20250404T170000
DTSTAMP:20260403T154333Z
CREATED:20250321T134811Z
LAST-MODIFIED:20250421T155950Z
UID:1581-1743778800-1743786000@arni-institute.org
SUMMARY:ARNI Distinguished Seminar Series: Eftychios A. Pnevmatikakis\, Research Scientist\, Reality Labs at Meta
DESCRIPTION:Research Scientist\, Reality Labs at Meta \nTitle: TBD \nLocation: TBD \nAbstract: TBD
URL:https://arni-institute.org/event/arni-distinguished-seminar-series-eftychios-a-pnevmatikakis/
ORGANIZER;CN="ARNI":MAILTO:arni@columbia.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250402T130000
DTEND;TZID=America/New_York:20250402T140000
DTSTAMP:20260403T154333Z
CREATED:20250324T151641Z
LAST-MODIFIED:20250324T151654Z
UID:1585-1743598800-1743602400@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group Project
DESCRIPTION:Continuation of prior working group meetings. \nZoom link: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/arni-continual-learning-working-group-project/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250321T110000
DTEND;TZID=America/New_York:20250321T130000
DTSTAMP:20260403T154333Z
CREATED:20250303T214301Z
LAST-MODIFIED:20250303T214301Z
UID:1542-1742554800-1742562000@arni-institute.org
SUMMARY:CTN: Anna Levina
DESCRIPTION:Title: TBD \nAbstract: TBD
URL:https://arni-institute.org/event/ctn-anna-levina-and/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250319T113000
DTEND;TZID=America/New_York:20250319T130000
DTSTAMP:20260403T154333Z
CREATED:20250319T141153Z
LAST-MODIFIED:20250319T141153Z
UID:1578-1742383800-1742389200@arni-institute.org
SUMMARY:CTN: Soledad Gonzalo Cogno
DESCRIPTION:Soledad Gonzalo Cogno \nSeminar Time: 11:30am \nDate: Wed 3/19/25 \nLocation: JLG\, L5-084 \nTitle: Ultraslow patterns of neural population activity in the entorhinal-hippocampal circuit \nNote: Everything I will present in this talk is preliminary – feedback and ideas will be very much appreciated! \nAbstract: The medial entorhinal cortex hosts many of the brain’s circuit elements for spatial navigation and episodic memory\, operations that require neural activity to be organized across long durations of experience. We have previously found that entorhinal cells can organize their activity into ultraslow oscillations (frequency < 0.1 Hz) that manifest as periodic sequences of activity in the neural population (Gonzalo Cogno et al.\, 2024). These ultraslow periodic sequences were recorded while mice ran at free pace on a rotating wheel in darkness\, with no change in running direction and no scheduled rewards. It remains unknown\, however\, whether the sequences also occur during more naturalistic behaviours\, for example while mice run in an open field arena\, or during sleep. In this presentation I will show that in free foraging conditions\, MEC neuronal activity can organize into sequences. However\, the sequential activity is now characterized by resettings and interruptions. By developing a computational model\, we investigate the conditions under which the sequences reset. In addition\, we found that during slow-wave sleep neural activity is also organized into ultraslow oscillations\, but not into sequences. The oscillations also manifest in the hippocampus\, and are highly synchronized with those in the MEC. These results suggest the presence of internal dynamics that unfold at ultraslow time scales and that are modulated by sensory information and cognitive demands. \nBecause oscillations and sequences are not the only ways in which neural activity can organize at ultraslow time scales\, we next sought to determine whether other slowly changing patterns of activity are present in the MEC. If those exist\, it remains an open question whether\, and how\, they are transformed in the hippocampal-entorhinal circuit. We found that when animals ran at free pace on a rotating wheel in darkness\, the activity in the MEC\, lateral entorhinal cortex (LEC)\, and hippocampus slowly drifted over session time\, enabling a readout of episodic time. However\, the drift in the MEC and the hippocampus\, but not in the LEC\, significantly decreased when animals ran in an open field arena. These results suggest that the slow drift of hippocampal and MEC activity is attenuated by spatial landmarks when these are present. \nAll in all\, our results point to the existence of ultraslow dynamics in the entorhinal-hippocampal circuit that may facilitate the encoding of experience at behavioral time scales.
URL:https://arni-institute.org/event/ctn-soledad-gonzalo-cogno/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250314T113000
DTEND;TZID=America/New_York:20250314T130000
DTSTAMP:20260403T154333Z
CREATED:20250303T214002Z
LAST-MODIFIED:20250312T142728Z
UID:1539-1741951800-1741957200@arni-institute.org
SUMMARY:CTN: Christian Machens
DESCRIPTION:Title: Computing with spikes: A geometric approach\n\nAbstract: How can recurrent spiking networks perform computations in a biologically realistic regime? I will outline the progress we have made in answering this question. Our approach follows two principles. First\, we don’t average over spikes\, but focus on the contribution of each individual spike. Second\, we study the decision to spike in a low-dimensional space of latent population modes (or readouts\, components\, factors\, you name it) rather than in the original neural space. Neural thresholds then become convex boundaries in latent space\, and the latent dynamics is either attracted (I population) or repelled (E population) by these boundaries. The combination of E and I populations results in balanced\, inhibition-stabilized networks which are capable of producing (arbitrary) dynamical systems or input-output mappings. Moreover\, there are profound differences between computation in these spiking networks compared to classical rate networks. I will illustrate all of these insights through geometrical pictures and movies and thereby demonstrate that we are far from having exhausted analytical and geometric methods in understanding recurrent spiking neural networks [joint work with William Podlaski].
URL:https://arni-institute.org/event/ctn-christian-machens/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250311T160000
DTEND;TZID=America/New_York:20250311T170000
DTSTAMP:20260403T154333Z
CREATED:20250307T145746Z
LAST-MODIFIED:20250307T145746Z
UID:1560-1741708800-1741712400@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group
DESCRIPTION:Continuation of Year 3 proposal meeting! \nZoom: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/arni-continual-learning-working-group/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250307T113000
DTEND;TZID=America/New_York:20250307T130000
DTSTAMP:20260403T154333Z
CREATED:20250303T213800Z
LAST-MODIFIED:20250305T185758Z
UID:1536-1741347000-1741352400@arni-institute.org
SUMMARY:CTN: Tim Buschman
DESCRIPTION:Title: The geometry of cognitive flexibility \n\nAbstract: Humans and animals are remarkably good at multi-tasking: we quickly learn many different tasks and flexibly switch between them. Theoretical work suggests such cognitive flexibility requires representing the current task and then using this task representation to selectively engage in task-relevant computations. In this talk\, I will discuss recent research from my lab aimed at understanding the neural mechanisms underlying cognitive flexibility. I will discuss how tasks are represented in the brain and how new task representations can be learned. I will also discuss how the brain flexibly re-uses neural representations of sensory inputs and motor actions across different tasks. This allows the brain to compositionally construct complex tasks from simpler sub-tasks by routing task-relevant information between subspaces of neural activity.
URL:https://arni-institute.org/event/ctn-tim-buschman/
LOCATION:Zuckerman Institute – L7-119\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250304T090000
DTEND;TZID=America/New_York:20250304T170000
DTSTAMP:20260403T154333Z
CREATED:20250303T214649Z
LAST-MODIFIED:20250303T214859Z
UID:1547-1741078800-1741107600@arni-institute.org
SUMMARY:Columbia AI Summit
DESCRIPTION:Columbia University is bringing its community together for an exhilarating\, day-long exploration of artificial intelligence and its transformative impact across disciplines. Across the Morningside\, Manhattanville\, and Medical Center campuses\, specialized workshops will dive deep into AI’s role in fields ranging from healthcare to the humanities. The event will feature a must-see keynote by Sami Haddadin\, Director of the Munich Institute of Robotics and Machine Intelligence and Vice President for Research at MBZUAI. \nLink: https://ai.columbia.edu/ai-summit#!#text-1655
URL:https://arni-institute.org/event/columbia-ai-summit/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250228T153000
DTEND;TZID=America/New_York:20250228T170000
DTSTAMP:20260403T154333Z
CREATED:20250128T194129Z
LAST-MODIFIED:20250221T200454Z
UID:1474-1740756600-1740762000@arni-institute.org
SUMMARY:ARNI Distinguished Seminar Series: Marlene Behrmann
DESCRIPTION:About Dr. Marlene Behrmann:\nMarlene Behrmann joined the Department of Ophthalmology at the University of Pittsburgh School of Medicine\, where she holds the John and Clelia Sheppard Chair\, in 2022. She also holds the position of Emeritus Professor at Carnegie Mellon University. Dr. Behrmann’s research is concerned with the psychological and neural bases of visual processing\, with specific attention to the mechanisms by which the signals from the eye are transformed into meaningful percepts by the brain. She adopts an interdisciplinary approach combining computational\, neuropsychological\, and neuroimaging studies with adults and children in health and disease. Examples of her recent studies include investigations of the cortical visual system in paediatric patients following hemispherectomy\, identifying mechanisms of plasticity\, and elucidating the potential for cortical reorganization; she has also studied visual cortical function in individuals with inherited retinal dystrophy. Dr. Behrmann was elected a member of the Society of Experimental Psychologists in 2008\, and was inducted into the National Academy of Sciences in 2015 and the American Academy of Arts and Sciences in 2019. Dr. Behrmann has received many awards\, including the Presidential Early Career Award for Engineering and Science\, the APA Distinguished Scientific Award for Early Career Contributions\, and the Fred Kavli Distinguished Career Contributions in Cognitive Neuroscience Award from the Cognitive Neuroscience Society. \nTitle: The development\, hemispheric organization\, and plasticity of high-level vision \nAbstract: \nAdults recognize complex visual inputs\, such as faces and words\, with remarkable speed\, accuracy\, and ease\, but a full understanding of these abilities is still lacking. Much prior research has favoured a binary separation of faces and words\, with the right hemisphere specialized for the representation of faces and the left hemisphere specialized for the representation of words. Close scrutiny of the data\, however\, suggests a more graded and distributed hemispheric organization\, as well as differing hemispheric profiles across individuals. Combining detailed behavioral data with structural and functional imaging data reveals how the distribution of function both within and between the two cerebral hemispheres emerges over the course of development\, and a computational account of this mature organization is offered and tested. Provocatively\, this mature profile is more malleable than previously thought\, and cross-sectional and longitudinal data acquired from individuals with hemispherectomy reveal how a single hemisphere can subserve both visual classes. Together\, the findings support a view of cortical visual organization (and perhaps the organization of other functions too) as plastic and dynamic\, both within and between hemispheres. \nLocation: Zuckerman Institute\, Kavli Auditorium\, 9th Floor (for access to Zuckerman Institute\, please email Lena Mei at lm3440@columbia.edu 24 hours prior to the event) \nZoom link: https://columbiauniversity.zoom.us/j/96156119664?pwd=PCGPe1UbEzzbIvGnbAdVa8wX5wH9J0.1
URL:https://arni-institute.org/event/arni-distinguished-seminar-series-marlene-behrmann/
LOCATION:Zuckerman Institute – Kavli Auditorium 9th Fl\, 3227 Broadway\, NY
ORGANIZER;CN="ARNI":MAILTO:arni@columbia.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250227T130000
DTEND;TZID=America/New_York:20250227T140000
DTSTAMP:20260403T154333Z
CREATED:20250221T155402Z
LAST-MODIFIED:20250221T165304Z
UID:1511-1740661200-1740664800@arni-institute.org
SUMMARY:ARNI Continual Learning Project
DESCRIPTION:Follow-up to discussion in Meeting 1 \nZoom Link: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/arni-continual-learning-project/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250226T140000
DTEND;TZID=America/New_York:20250226T150000
DTSTAMP:20260403T154333Z
CREATED:20250218T211553Z
LAST-MODIFIED:20250218T211606Z
UID:1501-1740578400-1740582000@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Ken Miller will be talking about E/I networks and balanced networks and some computational/functional implications. There are two papers I’d suggest reading:\nOn balanced amplification: https://www.sciencedirect.com/science/article/pii/S0896627309001287 \nReview of loosely and tightly balanced networks: https://www.sciencedirect.com/science/article/pii/S0896627321005754 \n\nMeeting Link: meet.google.com/nnq-csiy-yah
URL:https://arni-institute.org/event/arni-biological-learning-working-group/
LOCATION:Virtual
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250225T150000
DTEND;TZID=America/New_York:20250225T170000
DTSTAMP:20260403T154333Z
CREATED:20250217T144605Z
LAST-MODIFIED:20250224T195355Z
UID:1490-1740495600-1740502800@arni-institute.org
SUMMARY:ARNI WG Multi-resource-cost optimization of neural network models: Paul Schrater
DESCRIPTION:Title: Control when confidence is costly \nAbstract:\nWe develop a version of stochastic control that accounts for computational costs of inference. Past studies identified efficient coding without control\, or efficient control that neglects the cost of synthesizing information. Here we combine these concepts into a framework where agents rationally approximate inference for efficient control. Specifically\, we study Linear Quadratic Gaussian (LQG) control with an added internal cost on the relative precision of the posterior probability over the world state. This creates a trade-off: an agent can obtain more utility overall by sacrificing some task performance\, if doing so saves enough bits during inference. We discover that the rational strategy that solves the joint inference and control problem goes through phase transitions depending on the task demands\, switching from a costly but optimal inference to a family of suboptimal inferences related by rotation transformations\, each of which misestimates the stability of the world. In all cases\, the agent moves more to think less. This work provides a foundation for a new type of rational computation that could be used by both brains and machines for efficient but computationally constrained control. \nZoom Link: https://columbiauniversity.zoom.us/j/98244449046?pwd=ZagtGamVQgwy8XrPdXdlzJRbgrXtVj.1
URL:https://arni-institute.org/event/arni-wg-multi-resource-cost-optimization-of-neural-network-models-paul-schrater/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250218T160000
DTEND;TZID=America/New_York:20250218T164500
DTSTAMP:20260403T154333Z
CREATED:20250217T150602Z
LAST-MODIFIED:20250217T150623Z
UID:1493-1739894400-1739897100@arni-institute.org
SUMMARY:ARNI Continual Learning Working Group Spring Opening Meeting
DESCRIPTION:From: Tom Zollo\n\nIn Y2\, the aim is to use this working group as a launchpad for a larger ARNI continual learning project (which we hope will spawn multiple subprojects and papers). We hope for this group to tackle issues that are relevant to both modern practitioners and the ARNI mission of connecting artificial and natural intelligence.\n\nAs a potential topic for this project\, we might consider the problem of long- and short-term memory in LLMs. There has been recent interest from industry labs\, e.g.\, Google (paper link) and Meta (paper link)\, in fitting an LLM with a long-term neural memory module to complement the short-term memory given by the context window. Several threads relevant to ARNI could extend from this research direction. For instance\, we might consider cognitively inspired benchmarks for LLM memory systems for lifelong learning\, e.g.\, based on human-like tasks that might be difficult for autoregressive models. We could also explore methodological work in LLM memory mechanisms based on our understanding of natural intelligence. We are particularly interested in learning about relevant studies in neuroscience and cognitive science that could help constrain and inspire the methodological approaches. Beyond these\, one could imagine many other related directions of interest to ARNI.\n\nZoom: https://columbiauniversity.zoom.us/j/99160043324?pwd=1BvBZBeyB3b8da74wuLsgPCabCVudL.1
URL:https://arni-institute.org/event/arni-continual-learning-working-group-spring-opening-meeting/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250210T113000
DTEND;TZID=America/New_York:20250210T130000
DTSTAMP:20260403T154333Z
CREATED:20250127T152702Z
LAST-MODIFIED:20250127T152702Z
UID:1463-1739187000-1739192400@arni-institute.org
SUMMARY:CTN Monday Lab: Liam Paninski
DESCRIPTION:Title and Abstract: TBD
URL:https://arni-institute.org/event/ctn-monda-lab-liam-paninski/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250207T113000
DTEND;TZID=America/New_York:20250207T130000
DTSTAMP:20260403T154333Z
CREATED:20250127T152415Z
LAST-MODIFIED:20250127T152743Z
UID:1460-1738927800-1738933200@arni-institute.org
SUMMARY:CTN: Eva Naumann
DESCRIPTION:Title and Abstract: TBD
URL:https://arni-institute.org/event/ctn-eva-naumann/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250205T113000
DTEND;TZID=America/New_York:20250205T130000
DTSTAMP:20260403T154333Z
CREATED:20250127T152236Z
LAST-MODIFIED:20250127T152236Z
UID:1456-1738755000-1738760400@arni-institute.org
SUMMARY:CTN: Hidenori Tanaka
DESCRIPTION:Hidenori Tanaka \nTitle and Abstract: TBD
URL:https://arni-institute.org/event/ctn-hidenori-tanaka/
LOCATION:Zuckerman Institute – Kavli Auditorium 9th Fl\, 3227 Broadway\, NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250129T140000
DTEND;TZID=America/New_York:20250129T150000
DTSTAMP:20260403T154333Z
CREATED:20250122T185251Z
LAST-MODIFIED:20250128T200112Z
UID:1411-1738159200-1738162800@arni-institute.org
SUMMARY:ARNI Biological Learning Working Group
DESCRIPTION:Title: Brain-like learning with exponentiated gradients and Learning to live with Dale’s principle: ANNs with separate excitatory and inhibitory units \nMeeting Summary: Our focus will be on answering the following question\, which may be a focus for the next few meetings: To what degree are different learning algorithms entangled with a particular neural architecture? Can we find neural architectures that interact better with certain learning algorithms? \nMeeting link: http://meet.google.com/stu-ozga-syi \nMore about the Biological Learning Working Group: The biological learning WG is interested in better understanding how biological neural networks perform credit assignment (i.e.\, how they determine which synapses should change to get better at a task). The success of credit assignment algorithms in AI\, such as backpropagation-of-error\, has revealed that the traditional Hebbian plasticity rules used in computational neuroscience were not nearly as powerful as is possible for learning in distributed networks. This has spurred a new field of research in neuroscience that seeks to uncover the mechanisms used for credit assignment in the brain\, as many researchers expect they are quite powerful\, similar to those used in AI. The goal of this WG is to explore this new field of research and consider new potential directions for explaining credit assignment in the brain. Additionally\, this could inspire new mechanisms for credit assignment in AI that are more efficient from an energy perspective than backpropagation-of-error.
URL:https://arni-institute.org/event/biological-learning-working-group/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250127T113000
DTEND;TZID=America/New_York:20250127T113000
DTSTAMP:20260403T154333Z
CREATED:20250127T143630Z
LAST-MODIFIED:20250127T143903Z
UID:1453-1737977400-1737977400@arni-institute.org
SUMMARY:CTN: Monday Lab Kim Stachenfeld
DESCRIPTION:Title: Discovering Symbolic Cognitive Models from Human and Animal Behavior with CogFunSearch \n\nAbstract: A key goal of cognitive science is to discover mathematical models that describe how the brain implements cognitive processes. These models often take the form of short computer programs\, and constructing them typically requires a great deal of human effort and ingenuity. In this meeting\, I’ll share current results from our recent efforts to apply FunSearch [Romera-Paredes et al 2024] to the problem of discovering  programs that reproduce the behavior of humans or other animals performing simple tasks. FunSearch is a recently developed tool that uses Large Language Models (LLMs) in an evolutionary algorithm to discover programs optimized for some objective. For our investigation\, we consider datasets from three species performing a classic reward-learning task that has been the focus of a great deal of modeling effort. Our approach reliably discovers models that outperform state-of-the-art cognitive models for each dataset. The discovered programs can readily be interpreted as computational cognitive models\, instantiating human-interpretable hypotheses about the learning and decision-making algorithms used by the brain. This is work that we’re wrapping up at DeepMind for ICML and prepping for journal submission\, so it’s a great time to get questions\, comments\, feedback\, criticisms\, and suggestions for new opportunities! We are also hoping to apply the approach to new tasks/datasets soon\, and I’d love to get ideas.
URL:https://arni-institute.org/event/ctn-monday-lab-kim-stachenfeld/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250124T113000
DTEND;TZID=America/New_York:20250124T130000
DTSTAMP:20260403T154333Z
CREATED:20250113T164851Z
LAST-MODIFIED:20250122T185351Z
UID:1341-1737718200-1737723600@arni-institute.org
SUMMARY:CTN: Jonathan Pillow
DESCRIPTION:Title: Disentangling the Roles of Distinct Cell Classes with Cell-Type Dynamical Systems\n \n\nAbstract:\nLatent dynamical systems have been widely used to characterize the dynamics of neural population activity in the brain. However\, these models typically ignore the fact that the brain contains multiple cell types\, which limits their ability to capture the functional roles of distinct cell classes or predict the effects of cell-specific perturbations. To overcome these limitations\, we introduce the “cell-type dynamical systems” (CTDS) model\, which extends latent linear dynamical systems to contain distinct latent variables for each cell class\, with appropriate sign constraints on the interactions between them. In this talk\, I will describe the CTDS model and show that fitting in the noiseless case can be reduced to non-negative matrix factorization.  I will then show an application of a multi-region model CTDS to simultaneous recordings from rat frontal orienting fields (FOF) and anterior dorsal striatum (ADS) during an auditory decision-making task.  Remarkably\, the model — fit only to unperturbed neural activity — predicts the time-dependent effects of different optogenetic perturbations on behavior\, specifically in FOF\, ADS\, and FOF-to-ADS axon terminals. I will close by discussing the future directions and other applications for biologically-constrained dynamical models of neural activity and behavior.
URL:https://arni-institute.org/event/ctn-jonathan-pillow/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250117T113000
DTEND;TZID=America/New_York:20250117T130000
DTSTAMP:20260403T154333
CREATED:20250113T163553Z
LAST-MODIFIED:20250114T161426Z
UID:1338-1737113400-1737118800@arni-institute.org
SUMMARY:CTN: Adam Cohen
DESCRIPTION:Title: Mapping bioelectrical signals\, from dendrites to circuits\n\n\nAbstract:\nNeuronal dendrites are excitable\, but what are these excitations for? Are dendritic excitations involved in integration? Or in mediating back-propagation? What are their footprints\, and what patterns of spiking and synaptic inputs can activate them? We mapped bioelectrical signals throughout dendritic arbors of pyramidal cells in behaving mice and developed simple models relating dendritic biophysics to computation. I will also describe all-optical circuit mapping in behaving mice\, and experiments recording voltage simultaneously from hundreds of genetically defined neurons during behavior. These new data sets open possibilities for modeling how cellular intrinsic properties and local circuits process information.
URL:https://arni-institute.org/event/ctn-adam-cohen/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250113T113000
DTEND;TZID=America/New_York:20250113T130000
DTSTAMP:20260403T154333
CREATED:20250108T204804Z
LAST-MODIFIED:20250108T204828Z
UID:1325-1736767800-1736773200@arni-institute.org
SUMMARY:CTN: Mehdi Azabou\, ARNI Postdoctoral Research Scientist
DESCRIPTION:Title: Building foundation models for neuroscience\n\nAbstract: Current methodologies for recording brain activity often provide narrow views of the brain’s function. This fragmentation of datasets has hampered the development of robust and comprehensive computational models that generalize across diverse conditions\, tasks\, and individuals. Our work is motivated by the need for a large-scale foundation model in neuroscience\, one that can go beyond the limitations of single-dataset approaches and offer a fuller\, more comprehensive picture of brain function. We propose a novel\, scalable\, and unified approach for training on diverse neural datasets. We test our model across two large collections of data: (1) recordings from nonhuman primates performing diverse motor tasks\, spanning 158 sessions and over 27\,373 neural units\, and (2) the entirety of the Allen Institute’s Brain Observatory dataset\, containing two-photon calcium imaging responses from over 100\,000 neurons across 6 brain areas\, recorded while mice observed different types of visual stimuli.
URL:https://arni-institute.org/event/ctn-mehdi-azabou-arni-postdoctorate-research-scientists/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250110T113000
DTEND;TZID=America/New_York:20250110T130000
DTSTAMP:20260403T154333
CREATED:20250108T164302Z
LAST-MODIFIED:20250108T164302Z
UID:1316-1736508600-1736514000@arni-institute.org
SUMMARY:CTN: Mazviita Chirimuuta
DESCRIPTION:Title: Neuromorphic Computing and the Significance of Medium Dependence\n\n \nAbstract:\nThe increasingly prohibitive cost of energy demanded by large artificial neural networks (ANNs) is giving new impetus to research and development on neuromorphic computing. Importantly\, there is an open question over how brain-like the hardware will have to be in order for an artificial intelligence to match the brain in its combination of robustness\, adaptability\, and energy efficiency. If biological cognition is heavily dependent on the specific properties of the material that instantiates it (i.e. living cells)\, then neuromorphic computing will have to merge with synthetic biology in order to achieve its ultimate goal of brain-like performance. If it is not\, neuromorphic computing holds out the promise of some gains in efficiency but there is no pressure for hardware to become increasingly neuro-mimetic in order to match the functionality of the nervous system. In this talk I introduce the concept of practical medium dependence/independence in order to explore the likelihood of these two scenarios. I present the argument that practically medium independent approaches to information processing\, such as digital computing\, are inherently less efficient than ones dependent on the specifics of implementing media\, and for that reason will not have evolved. This result has implications for how we rate the near-term possibility of human-like artificial general intelligence\, and offers a new way to understand how cognition is rooted\, more generally\, in biological processes.
URL:https://arni-institute.org/event/ctn-mazviita-chirimuuta/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241209T113000
DTEND;TZID=America/New_York:20241209T130000
DTSTAMP:20260403T154333
CREATED:20241202T205723Z
LAST-MODIFIED:20241202T205723Z
UID:1214-1733743800-1733749200@arni-institute.org
SUMMARY:CTN Lab: Ashok Litwin-Kumar
DESCRIPTION:Title: Searching for symmetries in connectome data\n\nAbstract: I will talk about work with Haozhe Shan on identifying structure in connectome data that suggests a cell type encodes one or a handful of variables\, like heading direction or retinotopy. We are framing the problem as learning a graph embedding\, but I will also mention other things we have considered which\, at least for me\, were educational. The project is at an early stage\, so we would welcome suggestions and ideas.
URL:https://arni-institute.org/event/ctn-lab-ashok-litwin-kumar/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241206T140000
DTEND;TZID=America/New_York:20241206T150000
DTSTAMP:20260403T154333
CREATED:20241111T160235Z
LAST-MODIFIED:20241204T160339Z
UID:1159-1733493600-1733497200@arni-institute.org
SUMMARY:Continual Learning Working Group: Lea Duncker
DESCRIPTION:Title: Task-dependent low-dimensional population dynamics for robustness and learning \nAbstract: Biological systems face dynamic environments that require flexibly deploying learned skills and continual learning of new tasks. It is not well understood how these systems balance the tension between flexibility for learning and robustness for memory of previous behaviors. Neural activity underlying single\, highly controlled experimental tasks has repeatedly been observed to exhibit low-dimensional structure. However\, it is unclear how this organization arises and is maintained throughout learning\, and how it might differ when networks are exposed to multiple tasks. In this talk\, I will present work on a continual learning rule designed to minimize interference between sequentially learned tasks in recurrent networks. The learning rule preserves network dynamics within activity-defined low-dimensional subspaces used for previously learned tasks. It encourages recurrent dynamics associated with interfering tasks to explore orthogonal subspaces. Employing a set of tasks used in neuroscience\, I will show that this approach can successfully eliminate catastrophic interference\, while allowing for reuse of similar low-dimensional dynamics across similar tasks. This possibility for shared computation allows for faster learning during sequential training. Finally\, I will highlight limitations of this approach in fully exploiting task-similarity for optimal re-use of previously learned solutions\, and outline new work we are starting in my group now to address this. \nZoom Link: https://columbiauniversity.zoom.us/j/97176853843?pwd=VLZdh6yqHBcOQhdf816lkN5ByIpIsF.1
URL:https://arni-institute.org/event/continual-learning-working-group-lea-duncker/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241206T110000
DTEND;TZID=America/New_York:20241206T130000
DTSTAMP:20260403T154333
CREATED:20241202T175520Z
LAST-MODIFIED:20241202T175520Z
UID:1211-1733482800-1733490000@arni-institute.org
SUMMARY:CTN Seminar: Andrew Leifer
DESCRIPTION:Title: TBD \nAbstract: TBD
URL:https://arni-institute.org/event/ctn-seminar-andrew-leifer/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241206T110000
DTEND;TZID=America/New_York:20241206T120000
DTSTAMP:20260403T154333
CREATED:20241210T193448Z
LAST-MODIFIED:20241210T193448Z
UID:1245-1733482800-1733486400@arni-institute.org
SUMMARY:Lecture in AI: Danqi Chen
DESCRIPTION:Title: Training Language Models in Academia: Research Questions and Opportunities \nAbstract: Large language models have emerged as transformative tools in artificial intelligence\, demonstrating unprecedented capabilities in understanding and generating human language. While these models have achieved remarkable performance across a wide range of benchmarks and enabled groundbreaking applications\, their development has been predominantly concentrated within large technology companies due to substantial computational and proprietary data requirements. In this talk\, I will present a vision for how academic research can play a critical role in advancing the open language model ecosystem\, particularly by developing smaller yet highly capable models and advancing our fundamental understanding of training practices. Drawing from our research group’s recent projects\, I will examine key research questions and challenges in both pre-training and post-training stages. Our work spans developing small language models (Sheared LLaMA; 1-3B parameters)\, the state-of-the-art <10B model on Chatbot Arena (gemma-2-SimPO)\, and long-context models supporting up to 512K tokens (ProLong). These examples illustrate how academic research can push the boundaries of model efficiency\, capability\, and scalability. I will conclude by exploring future directions and highlighting opportunities to shape the development of more accessible and powerful language models.
URL:https://arni-institute.org/event/lecture-in-ai-danqi-chen/
LOCATION:Davis Auditorium\, 530 W 120th St\, New York\, NY 10027\, New York\, NY\, 10027
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241120T103000
DTEND;TZID=America/New_York:20241120T113000
DTSTAMP:20260403T154333
CREATED:20241113T161354Z
LAST-MODIFIED:20241114T183223Z
UID:1162-1732098600-1732102200@arni-institute.org
SUMMARY:CTN: Seminar Speaker Alessandro Ingrosso
DESCRIPTION:Title:\nStatistical mechanics of transfer learning in the proportional limit\n\nAbstract:\nTransfer learning (TL) is a well-established machine learning technique to boost the generalization performance on a specific (target) task using information gained from a related (source) task\, and it crucially depends on the ability of a network to learn useful features. I will present a recent work that leverages analytical progress in the proportional regime of deep learning theory (i.e. the limit where the size of the training set P and the size of the hidden layers N are taken to infinity while keeping their ratio P/N finite) to develop a novel statistical mechanics formalism for TL in Bayesian neural networks. I’ll show how such a single-instance Franz-Parisi formalism can yield an effective theory for TL in one-hidden-layer fully connected neural networks. Unlike the (lazy-training) infinite-width limit\, where TL is ineffective\, in the proportional limit TL occurs due to a renormalized source-target kernel that quantifies their relatedness and determines whether TL is beneficial for generalization.
URL:https://arni-institute.org/event/ctn-seminar-speaker-alessandro-ingrosso/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241115T113000
DTEND;TZID=America/New_York:20241115T130000
DTSTAMP:20260403T154333
CREATED:20241108T014804Z
LAST-MODIFIED:20241108T014804Z
UID:1148-1731670200-1731675600@arni-institute.org
SUMMARY:CTN: Catherine Hartley
DESCRIPTION:Title: TBD \nAbstract: TBD
URL:https://arni-institute.org/event/ctn-catherine-hartley/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
END:VCALENDAR