BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20230101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20240502T185700
DTEND;TZID=UTC:20240502T185700
DTSTAMP:20260513T163145Z
CREATED:20240423T172750Z
LAST-MODIFIED:20240502T225819Z
UID:814-1714676220-1714676220@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: Lifelong and Human-like Learning in Foundation Models\nSpeaker: Mengye Ren (New York University)\nAssistant Professor\nDepartment of Computer Science\nCourant Institute of Mathematical Sciences\nCenter for Data Science (joint)\nNew York University\nAbstract: Real-world agents\, including humans\, learn from online\, lifelong experiences. However\, today’s foundation models primarily acquire knowledge through offline\, i.i.d. learning\, while relying on in-context learning for most online adaptation. It is crucial to equip foundation models with lifelong and human-like learning abilities to enable more flexible use of AI in real-world applications. In this talk\, I will discuss recent works exploring interesting phenomena in foundation models when learning in online\, structured environments. Notably\, foundation models exhibit anticipatory and semantically-aware memorization and forgetting behaviors. Furthermore\, I will introduce a new method that combines pretraining and meta-learning for learning and consolidating new concepts in large language models. This approach has the potential to lead to future foundation models with incremental consolidation and abstraction capabilities.
URL:https://arni-institute.org/event/continual-learning-working-group-9/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240510T113000
DTEND;TZID=UTC:20240510T130000
DTSTAMP:20260513T163145Z
CREATED:20240502T215817Z
LAST-MODIFIED:20240507T192754Z
UID:828-1715340600-1715346000@arni-institute.org
SUMMARY:CTN: Adam Hantman
DESCRIPTION:Title: Neural basis for skilled movements \nAbstract: Generating behavior is an incredible achievement of the nervous system\, considering the range of possible actions and the complexity of musculoskeletal arrangements. Motor control involves understanding the surrounding environment\, selecting appropriate plans\, converting those plans into motor commands\, and adaptively reacting to feedback. This seminar will review efforts of the Hantman lab to dissect the neural circuits for skilled movements\, and will also feature new work examining the robustness and resilience of these motor systems.
URL:https://arni-institute.org/event/adam-hantman/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240515T120000
DTEND;TZID=UTC:20240515T140000
DTSTAMP:20260513T163145Z
CREATED:20240429T202900Z
LAST-MODIFIED:20240509T224223Z
UID:819-1715774400-1715781600@arni-institute.org
SUMMARY:Multi-resource-cost Optimization for Neural Network Models Working Group (NNMS)
DESCRIPTION:Title: Scope of the working group\, example project\, and literature\nShort Description: From the lab of Nikolaus Kriegeskorte (Professor of Psychology and of Neuroscience in the Mortimer B. Zuckerman Mind Brain Behavior Institute)\, Eivinas Butkus (grad student) will show an example of a modeling project optimizing energetic demands along with accuracy in a vision task\, and Josh Ying (grad student) will give a sense of the literature\nMore about NNMS:\nNeural network models are typically set up with a fixed architecture that defines the number of nodes and the connectivity\, and are unrolled for a fixed number of timesteps to obtain a computational graph for backpropagation. This amounts to fixing the resources that a physical implementation in a biological brain or dedicated engineered system would require in terms of space (to accommodate nodes and connections)\, time (to execute the steps)\, and energy. The fixed architecture of neural network models allows us to limit the resource requirements and discover what level of performance is possible through optimization. However\, it makes it difficult to explore the tradeoffs between the multiple resources. For example\, would a smaller network that runs for more timesteps give preferable results according to a joint cost of nodes\, connections\, time\, energy\, and error? It would be useful to be able to flexibly trade off resources against each other and against task performance as part of the optimization of a single model\, rather than having to train many models (each with a fixed vector of costs) to explore the space of solutions. We will develop (1) ways to quantify space\, time\, and energy costs of neural network models and (2) differentiable objectives that enable efficient joint minimization of the costs of multiple resources. Such methods could help us understand biological neural mechanisms that emerge from particular profiles of resource costs and behavioral affordances\, and also engineer more efficient AI for resource-limited devices.\n\nZoom Link: https://columbiauniversity.zoom.us/j/97052575063?pwd=SllDVFd4VlA2TnN4RDV3VVJ3b2lldz09
URL:https://arni-institute.org/event/multi-resource-cost-optimization-for-neural-networks-models-working-group-nnms/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240517T113000
DTEND;TZID=UTC:20240517T130000
DTSTAMP:20260513T163145Z
CREATED:20240506T215427Z
LAST-MODIFIED:20240513T233334Z
UID:834-1715945400-1715950800@arni-institute.org
SUMMARY:CTN: Wei Ji Ma
DESCRIPTION:Title: Efficient coding in reward neurons\n\nAbstract: Two of the greatest triumphs of computational neuroscience have been efficient coding accounts of tuning properties of sensory neurons and reinforcement learning accounts of dopaminergic neurons in the midbrain. At first glance\, these theories seem to have no connection\, but I will argue that they do. One can apply efficient coding principles to derive the optimal population of neurons to encode rewards drawn from a probability distribution. Similar to this optimal population\, dopaminergic reward prediction error neurons in the mouse have a broad distribution of thresholds. We can make further predictions: that neurons with higher thresholds have higher gain and that the asymmetry of their responses depends on the threshold. We also derive learning rules that can approximate the efficient code. Finally\, we apply the theory to monkey data. Taken together\, efficient coding might provide a normative underpinning to distributional reinforcement learning.
URL:https://arni-institute.org/event/wei-ji-ma/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240520T113000
DTEND;TZID=UTC:20240520T130000
DTSTAMP:20260513T163145Z
CREATED:20240507T192656Z
LAST-MODIFIED:20240514T200254Z
UID:839-1716204600-1716210000@arni-institute.org
SUMMARY:CTN: Quentin Huys (Seminar Speaker)
DESCRIPTION:Title: Translating computational mechanisms to clinical applications\nAbstract: Computational psychiatry is a rapidly growing field attempting to translate advances in computational neuroscience and machine learning into improved outcomes for patients suffering from mental illness. In this lecture\, I will provide an overview of recent approaches for translating computational research into an understanding of symptoms\, and mechanisms of treatments. I will start with two studies taking a computational approach to understanding symptoms of depression and anxiety: the selection of thoughts and the derivation of meaning and pleasure. I will then describe a recent series of studies which take a computational approach to understanding the active components of psychotherapy\, and finally finish with an applied example\, examining mechanisms and predictors of relapse after antidepressant discontinuation. Overall\, I hope to clarify the role computational approaches can play in identifying mechanisms\, and in harnessing these mechanisms for therapeutic purposes.
URL:https://arni-institute.org/event/quentin-huys-seminar-speaker/
LOCATION:To Be Determined
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240524T113000
DTEND;TZID=UTC:20240524T130000
DTSTAMP:20260513T163145Z
CREATED:20240514T200412Z
LAST-MODIFIED:20240522T002132Z
UID:868-1716550200-1716555600@arni-institute.org
SUMMARY:CTN: Guillaume Hennequin
DESCRIPTION:Title: A recurrent network model of planning explains hippocampal replay and human behaviour\n\nAbstract:  When faced with a novel situation\, humans often spend substantial periods of time contemplating possible futures. For such planning to be rational\, the benefits to behaviour must compensate for the time spent thinking. I will show how we recently captured these features of human behaviour by developing a neural network model where planning itself is controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences from its own policy\, which we call ‘rollouts’. The agent learns to plan when planning is beneficial\, explaining empirical variability in human thinking times. Additionally\, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded during spatial navigation. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions\, where hippocampal replays are triggered by – and adaptively affect – prefrontal dynamics. This is joint work with Kristopher Jensen and Marcelo Mattar.
URL:https://arni-institute.org/event/ctn-guillaume-hennequin/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
END:VCALENDAR