BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARNI
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20230101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20240725T150000
DTEND;TZID=UTC:20240725T170000
DTSTAMP:20260403T143346Z
CREATED:20240723T230443Z
LAST-MODIFIED:20240723T230443Z
UID:1006-1721919600-1721926800@arni-institute.org
SUMMARY:Dr. Richard Lange
DESCRIPTION:Title: “What Bayes can and cannot tell us about the neuroscience of vision” \nNikolaus Kriegeskorte’s Group is hosting Dr. Richard Lange\, Assistant Professor in the Department of Computer Science at Rochester Institute of Technology. He will be giving a talk at the Zuckerman Institute.
URL:https://arni-institute.org/event/dr-richard-lange/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240717T160000
DTEND;TZID=UTC:20240717T200000
DTSTAMP:20260403T143346Z
CREATED:20240712T212855Z
LAST-MODIFIED:20240712T212855Z
UID:998-1721232000-1721246400@arni-institute.org
SUMMARY:Zuckerman Institute Demo Day
DESCRIPTION:
URL:https://arni-institute.org/event/zuckerman-institute-demo-day/
LOCATION:Lightning AI\, 50 West 23 Street 7th FL\, New York\, NY\, 10010\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240621T113000
DTEND;TZID=UTC:20240621T130000
DTSTAMP:20260403T143346Z
CREATED:20240605T191850Z
LAST-MODIFIED:20240620T180152Z
UID:906-1718969400-1718974800@arni-institute.org
SUMMARY:CTN: Peter Dayan
DESCRIPTION:Title: Risking your Tail: Curiosity\, Danger & Exploration \nAbstract: Risk and reward are critical balancing determinants of adaptive behaviour\, associated respectively with neophobia and neophilia in the case of exploration. There are considerable differences in how individuals engage with novelty – with substantial consequences for what they are able to learn. Here\, we consider how a modern formal treatment of risk (called the conditional value at risk) and pessimistic prior expectations can model some of these differences. Although the effects of risk on isolated decisions are well understood\, additional issues arise in the context of sequences of choices\, something that is inevitable in the case of exploration. This is joint work with Chris Gagne\, Kevin Shen\, Xin Sui and Kevin Lloyd. \nMeeting ID: 958 4779 3410\nPasscode: ctn\nhttps://columbiauniversity.zoom.us/j/95847793410?pwd=VtROykVM4N5ywvAL7t32aYNZsH0Yyr.1
URL:https://arni-institute.org/event/peter-dayan/
LOCATION:Jerome L. Greene Science Center\, 3227 Broadway 9th FL Lecture Hall\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240617T113000
DTEND;TZID=UTC:20240617T130000
DTSTAMP:20260403T143346Z
CREATED:20240613T195130Z
LAST-MODIFIED:20240613T195130Z
UID:929-1718623800-1718629200@arni-institute.org
SUMMARY:CTN: Stefano Fusi
DESCRIPTION:Title: The Geometry of Abstraction\n\nAbstract: I’ll first discuss the theoretical framework introduced in Bernardi et al. 2020\, Cell\, in which we propose a possible definition of abstract representations. I’ll go into the details of the most up-to-date conceptual framework\, discuss the computational relevance of the representational geometry and the cross-validated measures of representational geometry that we normally use to characterize neural data in artificial and biological networks. Then I’ll apply the analytical tools to the study of human electrophysiological data (see Courellis\, H.S.\, Minxha\, J.\, Cardenas\, A.R.\, Kimmel\, D.\, Reed\, C.M.\, Valiante\, T.A.\, Salzman\, C.D.\, Mamelak\, A.N.\, Fusi\, S. and Rutishauser\, U.\, 2023. Abstract representations emerge in human hippocampal neurons during inference behavior. bioRxiv\, pp.2023-11 for more details).
URL:https://arni-institute.org/event/ctn-stefano-fusi/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240614T113000
DTEND;TZID=UTC:20240614T130000
DTSTAMP:20260403T143346Z
CREATED:20240522T002311Z
LAST-MODIFIED:20240613T195532Z
UID:874-1718364600-1718370000@arni-institute.org
SUMMARY:CTN: Bob Datta
DESCRIPTION:Title and Abstract: TBD
URL:https://arni-institute.org/event/bob-datta/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240524T113000
DTEND;TZID=UTC:20240524T130000
DTSTAMP:20260403T143346Z
CREATED:20240514T200412Z
LAST-MODIFIED:20240522T002132Z
UID:868-1716550200-1716555600@arni-institute.org
SUMMARY:CTN: Guillaume Hennequin
DESCRIPTION:Title: A recurrent network model of planning explains hippocampal replay and human behaviour\n\nAbstract:  When faced with a novel situation\, humans often spend substantial periods of time contemplating possible futures. For such planning to be rational\, the benefits to behaviour must compensate for the time spent thinking. I will show how we recently captured these features of human behaviour by developing a neural network model where planning itself is controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences from its own policy\, which we call ‘rollouts’. The agent learns to plan when planning is beneficial\, explaining empirical variability in human thinking times. Additionally\, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded during spatial navigation. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions\, where hippocampal replays are triggered by – and adaptively affect – prefrontal dynamics. This is joint work with Kristopher Jensen and Marcelo Mattar.
URL:https://arni-institute.org/event/ctn-guillaume-hennequin/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240520T113000
DTEND;TZID=UTC:20240520T130000
DTSTAMP:20260403T143346Z
CREATED:20240507T192656Z
LAST-MODIFIED:20240514T200254Z
UID:839-1716204600-1716210000@arni-institute.org
SUMMARY:CTN: Quentin Huys (Seminar Speaker)
DESCRIPTION:Title: Translating computational mechanisms to clinical applications \nComputational psychiatry is a rapidly growing field attempting to translate advances in computational neuroscience and machine learning into improved outcomes for patients suffering from mental illness. In this lecture\, I will provide an overview of recent approaches for translating computational research into an understanding of symptoms and of the mechanisms of treatments. I will start with two studies taking a computational approach to understanding symptoms of depression and anxiety: the selection of thoughts and the derivation of meaning and pleasure. I will then describe a recent series of studies which take a computational approach to understanding the active components of psychotherapy\, and finally finish with an applied example\, examining mechanisms and predictors of relapse after antidepressant discontinuation. Overall\, I hope to clarify the role computational approaches can play in identifying mechanisms\, and in harnessing these mechanisms for therapeutic purposes.
URL:https://arni-institute.org/event/quentin-huys-seminar-speaker/
LOCATION:To Be Determined
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240517T113000
DTEND;TZID=UTC:20240517T130000
DTSTAMP:20260403T143346Z
CREATED:20240506T215427Z
LAST-MODIFIED:20240513T233334Z
UID:834-1715945400-1715950800@arni-institute.org
SUMMARY:CTN: Wei Ji Ma
DESCRIPTION:Title: Efficient coding in reward neurons\n\nAbstract: Two of the greatest triumphs of computational neuroscience have been efficient coding accounts of tuning properties of sensory neurons and reinforcement learning accounts of dopaminergic neurons in the midbrain. At first glance\, these theories seem to have no connection\, but I will argue that they do. One can apply efficient coding principles to derive the optimal population of neurons to encode rewards drawn from a probability distribution. Similar to this optimal population\, dopaminergic reward prediction error neurons in the mouse have a broad distribution of thresholds. We can make further predictions: that neurons with higher thresholds have higher gain and that the asymmetry of their responses depends on the threshold. We also derive learning rules that can approximate the efficient code. Finally\, we apply the theory to monkey data. Taken together\, efficient coding might provide a normative underpinning to distributional reinforcement learning.
URL:https://arni-institute.org/event/wei-ji-ma/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240515T120000
DTEND;TZID=UTC:20240515T140000
DTSTAMP:20260403T143346Z
CREATED:20240429T202900Z
LAST-MODIFIED:20240509T224223Z
UID:819-1715774400-1715781600@arni-institute.org
SUMMARY:Multi-resource-cost Optimization for Neural Networks Models Working Group (NNMS)
DESCRIPTION:Title: Scope of the working group\, example project\, and literature \nShort Description: From the lab of Nikolaus Kriegeskorte (Professor of Psychology and of Neuroscience in the Mortimer B. Zuckerman Mind Brain Behavior Institute)\, Eivinas Butkus (grad student) will show an example of a modeling project optimizing energetic demands along with accuracy in a vision task\, and Josh Ying (grad student) will give a sense of the literature. \nMore about NNMS:\nNeural network models are typically set up with a fixed architecture that defines the number of nodes and the connectivity\, and are unrolled for a fixed number of timesteps to obtain a computational graph for backpropagation. This amounts to fixing the resources that a physical implementation in a biological brain or dedicated engineered system would require in terms of space (to accommodate nodes and connections)\, time (to execute the steps)\, and energy. The fixed architecture of neural network models allows us to limit the resource requirements and discover what level of performance is possible through optimization. However\, it makes it difficult to explore the tradeoffs between the multiple resources. For example\, would a smaller network that runs for more timesteps give preferable results according to a joint cost of nodes\, connections\, time\, energy\, and error? It would be useful to be able to flexibly trade off resources against each other and against task performance as part of the optimization of a single model\, rather than having to train many models (each with a fixed vector of costs) to explore the space of solutions. We will develop (1) ways to quantify space\, time\, and energy costs of neural network models and (2) differentiable objectives that enable efficient joint minimization of the costs of multiple resources.
 Such methods could help us understand biological neural mechanisms that emerge from particular profiles of resource costs and behavioral affordances\, and engineer more efficient AI for resource-limited devices.\n \nZoom Link: https://columbiauniversity.zoom.us/j/97052575063?pwd=SllDVFd4VlA2TnN4RDV3VVJ3b2lldz09
URL:https://arni-institute.org/event/multi-resource-cost-optimization-for-neural-networks-models-working-group-nnms/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240510T113000
DTEND;TZID=UTC:20240510T130000
DTSTAMP:20260403T143346Z
CREATED:20240502T215817Z
LAST-MODIFIED:20240507T192754Z
UID:828-1715340600-1715346000@arni-institute.org
SUMMARY:CTN: Adam Hantman
DESCRIPTION:Title: Neural basis for skilled movements \nAbstract: Generating behavior is an incredible achievement of the nervous system\, considering the range of possible actions and the complexity of musculoskeletal arrangements. Motor control involves understanding the surrounding environment\, selecting appropriate plans\, converting those plans into motor commands\, and adaptively reacting to feedback. This seminar will review efforts of the Hantman lab to dissect the neural circuits for skilled movements\, and will also feature new work examining the robustness and resilience of these motor systems.
URL:https://arni-institute.org/event/adam-hantman/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240502T185700
DTEND;TZID=UTC:20240502T185700
DTSTAMP:20260403T143346Z
CREATED:20240423T172750Z
LAST-MODIFIED:20240502T225819Z
UID:814-1714676220-1714676220@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: Lifelong and Human-like Learning in Foundation Models \nSpeaker: Mengye Ren (New York University)\nAssistant Professor\nDepartment of Computer Science\nCourant Institute of Mathematical Sciences\nCenter for Data Science (joint)\nNew York University \nAbstract: Real-world agents\, including humans\, learn from online\, lifelong experiences. However\, today’s foundation models primarily acquire knowledge through offline\, iid learning\, while relying on in-context learning for most online adaptation. It is crucial to equip foundation models with lifelong and human-like learning abilities to enable more flexible use of AI in real-world applications. In this talk\, I will discuss recent works exploring interesting phenomena in foundation models when learning in online\, structured environments. Notably\, foundation models exhibit anticipatory and semantically-aware memorization and forgetting behaviors. Furthermore\, I will introduce a new method that combines pretraining and meta-learning for learning and consolidating new concepts in large language models. This approach has the potential to lead to future foundation models with incremental consolidation and abstraction capabilities.
URL:https://arni-institute.org/event/continual-learning-working-group-9/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240429T113000
DTEND;TZID=UTC:20240429T130000
DTSTAMP:20260403T143346Z
CREATED:20240416T222403Z
LAST-MODIFIED:20240423T191956Z
UID:806-1714390200-1714395600@arni-institute.org
SUMMARY:Lenka Zdeborova (Seminar Speaker)
DESCRIPTION:Title: Phase transition in learning with neural networks \nAbstract: Statistical physics has studied exactly solvable models of neural networks for more than four decades. In this talk\, we will put this line of work in the perspective of recent questions stemming from deep learning. We will describe several types of phase transition that appear in the high-dimensional limit as a function of the amount of data. Discontinuous phase transitions are linked to adjacent algorithmic hardness. This so-called hard phase influences the behaviour of gradient-descent-like algorithms. We show a case where the hardness is mitigated by overparametrization\, proposing that the benefits of overparametrization may be linked to the usage of a specific type of algorithm. We will also discuss recent progress in identifying phase transitions and their consequences in networks with attention layers and in sampling with generative diffusion-based networks.
URL:https://arni-institute.org/event/lenka-zdeborova-seminar-speaker/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240426T113000
DTEND;TZID=UTC:20240426T130000
DTSTAMP:20260403T143346Z
CREATED:20240416T222623Z
LAST-MODIFIED:20240423T171216Z
UID:809-1714131000-1714136400@arni-institute.org
SUMMARY:Roberta Raileanu
DESCRIPTION:Title: Teaching Large Language Models to Reason with Reinforcement Learning \nAbstract: In this talk\, I will discuss how we can use Reinforcement Learning (RL) to improve reasoning in Large Language Models (LLMs)\, as well as when\, where\, and how to refine LLM reasoning. First\, we study how different RL-like algorithms can improve LLM reasoning. We investigate both sparse and dense rewards provided to the LLM both heuristically and via a learned reward model. However\, even with RL fine-tuning\, LLM reasoning remains imperfect. Prior work found that LLMs can further improve their reasoning via online refinements. However\, in our new work we show that LLMs struggle to identify when and where to refine their reasoning without access to external feedback. Outcome-based Reward Models (ORMs)\, trained to predict the correctness of the final answer\, can indicate when to refine. Process-Based Reward Models (PRMs)\, trained to predict the correctness of intermediate steps\, can indicate where to refine. But PRMs are expensive to train\, requiring extensive human annotations. We introduce Stepwise ORMs (SORMs)\, which are trained only on synthetic data to approximate the expected future reward of the optimal policy\, or V*. Our experiments show that SORMs can more accurately detect incorrect reasoning steps compared to ORMs\, thus improving downstream accuracy on reasoning tasks. For the question of how to refine LLM reasoning\, we find that global and local refinements have complementary benefits\, so combining both of them achieves the best results. With this strategy we can improve the accuracy of a LLaMA-2 13B model (already fine-tuned with RL) on GSM8K from 53% to 65% when greedily sampled.
URL:https://arni-institute.org/event/roberta-raileanu/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240425T133000
DTEND;TZID=UTC:20240425T144000
DTSTAMP:20260403T143346Z
CREATED:20240416T185339Z
LAST-MODIFIED:20240416T185339Z
UID:804-1714051800-1714056000@arni-institute.org
SUMMARY:Continual Learning Working Group - Creative Group Brainstorming Session
DESCRIPTION:
URL:https://arni-institute.org/event/continual-learning-working-group-creative-group-brainstorming-session/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240419T113000
DTEND;TZID=UTC:20240419T130000
DTSTAMP:20260403T143346Z
CREATED:20240410T234910Z
LAST-MODIFIED:20240416T184148Z
UID:792-1713526200-1713531600@arni-institute.org
SUMMARY:Shihab Shamma
DESCRIPTION:Title: The auditory cortex: A sensorimotor fulcrum for speech and music perception \nAbstract: The auditory cortex sits at the center of all auditory-motor tasks and percepts\, from listening to our voice as we speak\, to the music that we play\, and to the complex sound mixtures that we seek to perceive. The auditory cortex orchestrates all these demands by segregating the sound sources and attending to a few\, and then directing them to be semantically decoded in the language or music areas of the brain. It also sends collateral signals to motor areas where the sounds could be produced and controlled (e.g.\, the vocal tract or hands). All these regions in turn reflect back to the auditory cortex their expectations and predictions of the activations due to the incoming sound streams. In this talk I shall review computational models of several such phenomena\, and discuss the experimental findings that test their underlying assumptions in humans and ferrets while they segregate speech mixtures\, imagine music\, or listen to songs.
URL:https://arni-institute.org/event/shihab-shamma/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240418T133000
DTEND;TZID=UTC:20240418T144000
DTSTAMP:20260403T143346Z
CREATED:20240403T195744Z
LAST-MODIFIED:20240403T195911Z
UID:767-1713447000-1713451200@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: Saket Navlakha\, Associate Professor at Cold Spring Harbor Labs (Available via Zoom) \nSaket Navlakha\, Associate Professor at Cold Spring Harbor Labs\, will present his work\, “Reducing Catastrophic Forgetting With Associative Learning: A Lesson From Fruit Flies”. In this work\, the authors identified a two-layer neural circuit in the fruit fly olfactory system that performs continual associative learning between odors and their associated valences. In the first layer\, inputs (odors) are encoded using sparse\, high-dimensional representations\, which reduces memory interference by activating nonoverlapping populations of neurons for different odors. In the second layer\, only the synapses between odor-activated neurons and the odor’s associated output neuron are modified during learning; the rest of the weights are frozen to prevent unrelated memories from being overwritten. The takeaway is that fruit flies evolved an efficient continual associative learning algorithm\, and circuit mechanisms from neuroscience can be translated to improve machine computation. \nZoom: https://columbiauniversity.zoom.us/j/94783759415?pwd=cTlDTDdCVk9vdEV0QzRKL0hKQW1Kdz09
URL:https://arni-institute.org/event/continual-learning-working-group-8/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240415T060000
DTEND;TZID=UTC:20240415T200000
DTSTAMP:20260403T143346Z
CREATED:20240411T000121Z
LAST-MODIFIED:20240416T223659Z
UID:795-1713160800-1713211200@arni-institute.org
SUMMARY:Breakthrough Technologies
DESCRIPTION:Queens\, NY – The New York Hall of Science (NYSCI)\, the AI Institute for Artificial and Natural Intelligence (ARNI)\, and the Fu Foundation School of Engineering and Applied Science at Columbia University will feature an engaging panel discussion exploring recent developments in quantum computing and AI. The goal of the discussion is to provide an exciting glimpse of how these new technologies will enhance our future. (Invitation Only) \nThe lively panel will include: \n\nDario Gil\, SVP and Director of Research\, IBM \n\nDr. Gil leads innovation efforts at IBM\, directing research strategies in areas including AI\, cloud\, quantum computing\, and exploratory science. \n\nXaq Pitkow\, Associate Director\, ARNI \n\nDr. Pitkow is a computational neuroscientist who develops mathematical theories of the brain and general principles of intelligent systems. \n\nJeannette Wing\, EVPR and Professor of Computer Science\, Columbia University\n\nDr. Wing’s research contributes to trustworthy AI\, security and privacy\, specification and verification\, concurrent and distributed systems\, programming languages\, and software engineering.
URL:https://arni-institute.org/event/breakthrough-technologies/
LOCATION:NY
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240412T150000
DTEND;TZID=UTC:20240412T170000
DTSTAMP:20260403T143346Z
CREATED:20240319T000436Z
LAST-MODIFIED:20240409T195851Z
UID:673-1712934000-1712941200@arni-institute.org
SUMMARY:Animal Behavior Video Analysis Working Group
DESCRIPTION:Title: Whole-body simulation of realistic fruit fly locomotion with deep reinforcement learning \nAbstract: The body of an animal determines how the nervous system produces behavior. Therefore\, detailed modeling of the neural control of sensorimotor behavior requires a detailed model of the body. Here we contribute an anatomically-detailed biomechanical whole-body model of the fruit fly Drosophila melanogaster in the MuJoCo physics engine. Our model is general-purpose\, enabling the simulation of diverse fly behaviors\, both on land and in the air. We demonstrate the generality of our model by simulating realistic locomotion\, both flight and walking. To support these behaviors\, we have extended MuJoCo with phenomenological models of fluid forces and adhesion forces. Through data-driven end-to-end reinforcement learning\, we demonstrate that these advances enable the training of neural network controllers capable of realistic locomotion along complex trajectories based on high-level steering control signals. With a visually guided flight task\, we demonstrate a neural controller that can use the vision sensors of the body model to control and steer flight. Our project is an open-source platform for modeling neural control of sensorimotor behavior in an embodied context. \nJoin Zoom Meeting:\nhttps://columbiauniversity.zoom.us/j/98060956155?pwd=eVJDY0JOdWV4U1R4emt3dnNPbElWdz09 \nMeeting ID: 980 6095 6155\nPasscode: 263132
URL:https://arni-institute.org/event/animal-behavior-video-analysis-working-group-4/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240412T113000
DTEND;TZID=UTC:20240412T130000
DTSTAMP:20260403T143346Z
CREATED:20240410T180731Z
LAST-MODIFIED:20240410T180759Z
UID:787-1712921400-1712926800@arni-institute.org
SUMMARY:Adam Charles
DESCRIPTION:Title: Micron brain data at scale: computational challenges in imaging and analysis. \nAbstract: Uncovering the principles of neural computation requires 1) new methods to observe micron-level targets at scale and 2) interpretable models of high-dimensional time-series. In this talk I will cover recent advances in leveraging advanced data models based on latent sparsity and low-dimensionality to tackle key challenges in both domains. First I will discuss ongoing work in multi-photon data analysis. This work seeks to expand our capabilities to extract scientifically rich information from large-scale data of sub-micron targets that represent how circuits compute and how those computations adapt over time. Specifically\, I will discuss recent machine learning image enhancement for tracking synaptic strength in-vivo at scale\, and a morphology-independent image segmentation algorithm for identifying geometrically complex fluorescing objects (e.g.\, dendritic and wide-field imaging). Next I will discuss the analysis challenges of inferring meaningful representations of brain-wide activity provided by imaging advances. Specifically\, brain-wide data represents many parallel and distributed computations. I will discuss recent work building on the intuition of the “neural data manifold”\, and present a decomposed linear dynamical systems (dLDS) model that can capture the nonlinear and non-stationary properties of the neural trajectories along this manifold. dLDS learns a concise model of such dynamics by breaking up the system into several independent\, overlapping systems that are each interpretable as linear systems. I will demonstrate how this model finds meaningful trajectories both in synthetic data and in “whole-brain” C. elegans imaging.
URL:https://arni-institute.org/event/adam-charles/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240411T133000
DTEND;TZID=UTC:20240411T144000
DTSTAMP:20260403T143346Z
CREATED:20240401T223314Z
LAST-MODIFIED:20240410T190727Z
UID:755-1712842200-1712846400@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: \nPaper Topic: https://direct.mit.edu/neco/article/35/11/1797/117579/Reducing-Catastrophic-Forgetting-With-Associative \nZoom: https://columbiauniversity.zoom.us/j/94783759415?pwd=cTlDTDdCVk9vdEV0QzRKL0hKQW1Kdz09
URL:https://arni-institute.org/event/continual-learning-working-group-7/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240409T114000
DTEND;TZID=UTC:20240409T130000
DTSTAMP:20260403T143346Z
CREATED:20240408T194253Z
LAST-MODIFIED:20240408T194253Z
UID:777-1712662800-1712667600@arni-institute.org
SUMMARY:Automating Analysis in Biology Using AI\, From Data to Discovery
DESCRIPTION:Speaker: Markus Marks (Caltech)\n\nTitle: Automating Analysis in Biology Using AI\, From Data to Discovery\n\nTime and Place: Davis Auditorium\, 11:40am\, Tuesday April 9\n\nAbstract: Thanks to improved sensors and decreasing data acquisition and storage costs\, biologists are increasingly able to collect more and higher quality data. How can we harness the expanding capabilities of GPUs at lower costs and fast-improving AI algorithms to effectively handle the rapid influx of data and extract scientific insights with manageable human effort? My work focuses on integrating machine learning into biology and medicine with three core goals: reducing human effort in data annotation\, mitigating human bias in annotations\, and uncovering concealed patterns within biomedical data through data-driven approaches.\n\nThis talk will focus on tackling these challenges\, removing human effort and bias step-by-step. I will elucidate this approach with recent work on behavioral and cellular data analysis\, starting with the application of machine learning to quantify animal behavior automatically in neuroscience experiments. I will then present our recent efforts to develop foundational models for scientific applications\, showcased by a cellular segmentation model that generalizes across a wide range of cell types. Furthermore\, I will show how we can move beyond human-generated labels and discover features directly from the data using self-supervision and experimental observations. Finally\, I will outline how these technologies can be combined to accelerate analysis and facilitate discovery for scientific experiments.\n\nBio: Markus is a postdoc at Caltech working in the computer vision group with Pietro Perona. He received his Ph.D. at the Institute for Neuroinformatics at ETH Zurich. Currently\, Markus focuses on developing machine learning algorithms to enhance scientific discovery in biology and medicine\, collaborating closely with domain experts.
 Markus organized the interdisciplinary MABe workshop in 2023 with Jennifer Sun from Cornell and the Kennedy lab at Northwestern\, aiming to bring together people and perspectives from different fields working on interacting agents.
URL:https://arni-institute.org/event/automating-analysis-in-biology-using-ai-from-data-to-discovery/
LOCATION:Davis Auditorium\, 530 W 120th St\, New York\, NY 10027\, New York\, NY\, 10027
ORGANIZER;CN="Colloquium":MAILTO:https://lists.cs.columbia.edu/mailman/listinfo/colloquium
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240405T113000
DTEND;TZID=UTC:20240405T130000
DTSTAMP:20260403T143346Z
CREATED:20240402T000158Z
LAST-MODIFIED:20240404T004652Z
UID:760-1712316600-1712322000@arni-institute.org
SUMMARY:Misha Tsodyks
DESCRIPTION:Title: Putative synaptic theory of temporal order encoding in working memory\n(Joint work with Gianluigi Mongillo) \nAbstract: Overwhelming evidence indicates that working memory automatically encodes incoming stimuli in the correct presentation order. How this is achieved in the brain is however not well understood. We addressed this issue in the framework of our previously proposed synaptic theory\, according to which stimuli are encoded in working memory by selective short-term facilitation of corresponding recurrent synaptic connections. We further suggest that if synapses exhibit longer-term forms of facilitation\, e.g. synaptic augmentation\, encodings acquire a ‘primacy gradient’\, i.e. stimuli presented earlier are more strongly encoded than later presented ones. We propose a simple way in which the order information can be retrieved. The new model also sheds new light on the important issue of working memory capacity. We suggest that one should distinguish between retrieval capacity\, which is limited to very few items\, and representational capacity\, which can be significantly larger.
URL:https://arni-institute.org/event/misha-tsodyks/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240404T133000
DTEND;TZID=UTC:20240404T144000
DTSTAMP:20260403T143346
CREATED:20240326T190451Z
LAST-MODIFIED:20240401T223135Z
UID:740-1712237400-1712241600@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: \nPaper Topic: https://arxiv.org/abs/2309.10105 \nZoom: https://columbiauniversity.zoom.us/j/94783759415?pwd=cTlDTDdCVk9vdEV0QzRKL0hKQW1Kdz09
URL:https://arni-institute.org/event/continual-learning-working-group-6/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240404T080000
DTEND;TZID=UTC:20240404T170000
DTSTAMP:20260403T143346
CREATED:20240315T190701Z
LAST-MODIFIED:20240315T190701Z
UID:649-1712217600-1712250000@arni-institute.org
SUMMARY:Data Science Day 2024
DESCRIPTION:“The Data Science Institute’s flagship annual event connects innovators in industry and government to Columbia researchers who are propelling advances across every sector with data science.”\nIf you are interested in the event\, please register on their event page.
URL:https://arni-institute.org/event/data-science-day-2024/
LOCATION:Alfred Lerner Hall\, 2920 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240328T133000
DTEND;TZID=UTC:20240328T144000
DTSTAMP:20260403T143346
CREATED:20240322T003508Z
LAST-MODIFIED:20240326T190536Z
UID:722-1711632600-1711636800@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: \nPaper Topic: https://arxiv.org/abs/2302.03241 \nZoom: https://columbiauniversity.zoom.us/j/94783759415?pwd=cTlDTDdCVk9vdEV0QzRKL0hKQW1Kdz09
URL:https://arni-institute.org/event/continual-learning-working-group-5/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240322T150000
DTEND;TZID=UTC:20240322T170000
DTSTAMP:20260403T143346
CREATED:20240319T234436Z
LAST-MODIFIED:20240320T000555Z
UID:699-1711119600-1711126800@arni-institute.org
SUMMARY:Animal Behavior Video Analysis Working Group
DESCRIPTION:Title: Brain Decodes Deep Nets\nPresenter: Jianbo Shi\, PhD\nGRASP Laboratory\nComputer and Information Science\nUniversity of Pennsylvania\n \nAbstract: We developed a surprising usage of brain encoding: using a brain fMRI prediction model to draw a picture of how a deep net processes information onto a brain. Our tool provides a detailed analysis of large pre-trained vision models by mapping them onto the brain\, thus exposing their hidden layers and channels. Our results show how different training methods matter: they lead to remarkable differences in hierarchical organization and scaling behavior. It also provides insight into finetuning: how large pre-trained models change when adapting to new datasets. \n \nJoin Zoom Meeting:\nhttps://columbiauniversity.zoom.us/j/93542681364?pwd=eFlZSkhGY0JHZGlHSk8zSVRYdHRSZz09 \nMeeting ID: 935 4268 1364\nPasscode: 645004
URL:https://arni-institute.org/event/animal-behavior-video-analysis-working-group-5/
LOCATION:CSB 453\, Mudd Building\, 500 W 120th Street
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240322T000000
DTEND;TZID=UTC:20240322T000000
DTSTAMP:20260403T143346
CREATED:20240315T190133Z
LAST-MODIFIED:20240319T225406Z
UID:643-1711065600-1711065600@arni-institute.org
SUMMARY:Jennifer Groh
DESCRIPTION:Title: Multiplexing multiple signals in neural codes: new statistical tools and evidence \nAbstract: How the brain represents multiple objects is mysterious. Sensory neurons are broadly tuned\, producing overlap in the populations of neurons potentially activated by each object in the scene. This overlap raises questions about how distinct information is retained about each item. I will present a novel theory of neural representation\, positing that neural signals may interleave representations of individual items across time. Evidence for this theory has come from new statistical tools that overcome the limitations inherent to standard time-and-trial-pooled assessments of activity. This theory has implications for diverse domains of neuroscience\, including attention\, figure-ground segregation\, and grounded cognition.
URL:https://arni-institute.org/event/jennifer-groh/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240321T133000
DTEND;TZID=America/New_York:20240321T144000
DTSTAMP:20260403T143346
CREATED:20240314T201207Z
LAST-MODIFIED:20240322T003125Z
UID:637-1711027800-1711032000@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: \nPaper Topic: https://arxiv.org/abs/2102.01951 \nZoom: https://columbiauniversity.zoom.us/j/94783759415?pwd=cTlDTDdCVk9vdEV0QzRKL0hKQW1Kdz09
URL:https://arni-institute.org/event/continual-learning-working-group/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240315T113000
DTEND;TZID=UTC:20240315T113000
DTSTAMP:20260403T143346
CREATED:20240314T195510Z
LAST-MODIFIED:20240314T195946Z
UID:627-1710502200-1710502200@arni-institute.org
SUMMARY:Rui Ponte Costa
DESCRIPTION:Title: Brain-wide credit assignment: cortical and subcortical perspectives \nAbstract: The brain assigns credit to trillions of synapses remarkably well. How the brain achieves this feat is one of the great mysteries in neuroscience. Recently\, we have introduced Bursting cortico-cortical networks\, a computational model of hierarchical credit assignment that captures a large number of biological features while approximating deep learning algorithms (Greedy et al. NeurIPS 2022). I will show that\, in contrast to previous work\, this model (i) does not require a multi-phase learning process\, (ii) is consistent with experimental observations across multiple levels and (iii) provides efficient credit assignment across the cortical hierarchy. \nHowever\, these models often assume that behavioural feedback is readily available. How the brain learns efficiently despite the sparse nature of feedback remains unclear. Recently we have proposed that a subcortical region\, the cerebellum\, predicts behavioural feedback\, thereby decoupling learning in cortical networks from future feedback. We have introduced two views by which the cerebellum may help the cortex: (i) by driving cortical plasticity (Boven et al. Nature Comms 2023) or (ii) by driving cortical dynamics (Pemberton et al. bioRxiv). Together these two views suggest that cortico-cerebellar loops are a critical part of task acquisition\, switching\, and consolidation in the brain.
URL:https://arni-institute.org/event/rui-ponte-costa/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240307T133000
DTEND;TZID=UTC:20240307T144000
DTSTAMP:20260403T143346
CREATED:20240315T195437Z
LAST-MODIFIED:20240315T195437Z
UID:654-1709818200-1709822400@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: Paper Topic: https://arxiv.org/abs/1906.01076\nZoom: https://columbiauniversity.zoom.us/j/94783759415?pwd=cTlDTDdCVk9vdEV0QzRKL0hKQW1Kdz09
URL:https://arni-institute.org/event/continual-learning-working-group-2/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
END:VCALENDAR