BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARNI
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20220101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20240412T150000
DTEND;TZID=UTC:20240412T170000
DTSTAMP:20260404T051301Z
CREATED:20240319T000436Z
LAST-MODIFIED:20240409T195851Z
UID:673-1712934000-1712941200@arni-institute.org
SUMMARY:Animal Behavior Video Analysis Working Group
DESCRIPTION:Title: Whole-body simulation of realistic fruit fly locomotion with deep reinforcement learning \nAbstract: The body of an animal determines how the nervous system produces behavior. Therefore\, detailed modeling of the neural control of sensorimotor behavior requires a detailed model of the body. Here we contribute an anatomically-detailed biomechanical whole-body model of the fruit fly Drosophila melanogaster in the MuJoCo physics engine. Our model is general-purpose\, enabling the simulation of diverse fly behaviors\, both on land and in the air. We demonstrate the generality of our model by simulating realistic locomotion\, both flight and walking. To support these behaviors\, we have extended MuJoCo with phenomenological models of fluid forces and adhesion forces. Through data-driven end-to-end reinforcement learning\, we demonstrate that these advances enable the training of neural network controllers capable of realistic locomotion along complex trajectories based on high-level steering control signals. With a visually guided flight task\, we demonstrate a neural controller that can use the vision sensors of the body model to control and steer flight. Our project is an open-source platform for modeling neural control of sensorimotor behavior in an embodied context. \nJoin Zoom Meeting:\nhttps://columbiauniversity.zoom.us/j/98060956155?pwd=eVJDY0JOdWV4U1R4emt3dnNPbElWdz09 \nMeeting ID: 980 6095 6155\nPasscode: 263132
URL:https://arni-institute.org/event/animal-behavior-video-analysis-working-group-4/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240412T113000
DTEND;TZID=UTC:20240412T130000
DTSTAMP:20260404T051301Z
CREATED:20240410T180731Z
LAST-MODIFIED:20240410T180759Z
UID:787-1712921400-1712926800@arni-institute.org
SUMMARY:Adam Charles
DESCRIPTION:Title: Micron brain data at scale: computational challenges in imaging and analysis. \nAbstract: Uncovering the principles of neural computation requires 1) new methods to observe micron-level targets at scale and 2) interpretable models of high-dimensional time-series. In this talk I will cover recent advances in leveraging advanced data models based on latent sparsity and low-dimensionality to tackle key challenges in both domains. First I will discuss ongoing work in multi-photon data analysis. This work seeks to expand our capabilities to extract scientifically rich information from large-scale data of sub-micron targets that represent how circuits compute and how those computations adapt over time. Specifically\, I will discuss recent machine learning image enhancement for tracking synaptic strength in-vivo at scale\, and a morphology-independent image segmentation algorithm for identifying geometrically complex fluorescing objects (e.g.\, dendritic and wide-field imaging). Next I will discuss the analysis challenges of inferring meaningful representations of brain-wide activity provided by imaging advances. Specifically\, brain-wide data represents many parallel and distributed computations. I will discuss recent work building on the intuition of the “neural data manifold”\, and present a decomposed linear dynamical systems (dLDS) model that can capture the nonlinear and non-stationary properties of the neural trajectories along this manifold. dLDS learns a concise model of such dynamics by breaking up the system into several independent\, overlapping systems that are each interpretable as linear systems. I will demonstrate how this model finds meaningful trajectories both in synthetic data and in “whole-brain” C. elegans imaging.
URL:https://arni-institute.org/event/adam-charles/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240411T133000
DTEND;TZID=UTC:20240411T144000
DTSTAMP:20260404T051301Z
CREATED:20240401T223314Z
LAST-MODIFIED:20240410T190727Z
UID:755-1712842200-1712846400@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: \nPaper Topic: https://direct.mit.edu/neco/article/35/11/1797/117579/Reducing-Catastrophic-Forgetting-With-Associative \nZoom: https://columbiauniversity.zoom.us/j/94783759415?pwd=cTlDTDdCVk9vdEV0QzRKL0hKQW1Kdz09
URL:https://arni-institute.org/event/continual-learning-working-group-7/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240409T114000
DTEND;TZID=UTC:20240409T130000
DTSTAMP:20260404T051301Z
CREATED:20240408T194253Z
LAST-MODIFIED:20240408T194253Z
UID:777-1712662800-1712667600@arni-institute.org
SUMMARY:Automating Analysis in Biology Using AI\, From Data to Discovery
DESCRIPTION:Speaker: Markus Marks (Caltech)\n\nTitle: Automating Analysis in Biology Using AI\, From Data to Discovery\n\nTime and Place: Davis Auditorium\, 11:40am\, Tuesday April 9\n\nAbstract: Thanks to improved sensors and decreasing data acquisition and storage costs\, biologists are increasingly able to collect more and higher quality data. How can we harness the expanding capabilities of GPUs at lower costs and fast-improving AI algorithms to effectively handle the rapid influx of data and extract scientific insights with manageable human effort? My work focuses on integrating machine learning into biology and medicine with three core goals: reducing human effort in data annotation\, mitigating human bias in annotations\, and uncovering concealed patterns within biomedical data through data-driven approaches.\n\nThis talk will focus on tackling these challenges\, removing human effort and bias step-by-step. I will elucidate this approach with recent work on behavioral and cellular data analysis\, starting with the application of machine learning to quantify animal behavior automatically in neuroscience experiments. I will then present our recent efforts to develop foundational models for scientific applications\, showcased by a cellular segmentation model that generalizes across a wide range of cell types. Furthermore\, I will show how we can move beyond human-generated labels and discover features directly from the data using self-supervision and experimental observations. Finally\, I will outline how these technologies can be combined to accelerate analysis and facilitate discovery for scientific experiments.\n\nBio: Markus is a postdoc at Caltech working in the computer vision group with Pietro Perona. He received his Ph.D. at the Institute for Neuroinformatics at ETH Zurich. Currently\, Markus focuses on developing machine learning algorithms to enhance scientific discovery in biology and medicine\, collaborating closely with domain experts.\nMarkus organized the interdisciplinary MABe workshop in 2023 with Jennifer Sun from Cornell and the Kennedy lab at Northwestern\, aiming to bring together people and perspectives from different fields working on interacting agents.
URL:https://arni-institute.org/event/automating-analysis-in-biology-using-ai-from-data-to-discovery/
LOCATION:Davis Auditorium\, 530 W 120th St\, New York\, NY 10027\, New York\, NY\, 10027
ORGANIZER;CN="Colloquium":https://lists.cs.columbia.edu/mailman/listinfo/colloquium
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240405T113000
DTEND;TZID=UTC:20240405T130000
DTSTAMP:20260404T051301Z
CREATED:20240402T000158Z
LAST-MODIFIED:20240404T004652Z
UID:760-1712316600-1712322000@arni-institute.org
SUMMARY:Misha Tsodyks
DESCRIPTION:Title: Putative synaptic theory of temporal order encoding in working memory\n(Joint work with Gianluigi Mongillo) \nAbstract: Overwhelming evidence indicates that working memory automatically encodes incoming stimuli in the correct presentation order. How this is achieved in the brain is\, however\, not well understood. We addressed this issue in the framework of our previously proposed synaptic theory\, according to which stimuli are encoded in working memory by selective short-term facilitation of corresponding recurrent synaptic connections. We further suggest that if synapses exhibit longer-term forms of facilitation\, e.g. synaptic augmentation\, encodings acquire a ‘primacy gradient’\, i.e. stimuli presented earlier are more strongly encoded than later presented ones. We propose a simple way in which the order information can be retrieved. The model also sheds new light on the important issue of working memory capacity. We suggest that one should distinguish between retrieval capacity\, which is limited to very few items\, and representational capacity\, which can be significantly larger.
URL:https://arni-institute.org/event/misha-tsodyks/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240404T133000
DTEND;TZID=UTC:20240404T144000
DTSTAMP:20260404T051301Z
CREATED:20240326T190451Z
LAST-MODIFIED:20240401T223135Z
UID:740-1712237400-1712241600@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: \nPaper Topic: https://arxiv.org/abs/2309.10105 \nZoom: https://columbiauniversity.zoom.us/j/94783759415?pwd=cTlDTDdCVk9vdEV0QzRKL0hKQW1Kdz09
URL:https://arni-institute.org/event/continual-learning-working-group-6/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240404T080000
DTEND;TZID=UTC:20240404T170000
DTSTAMP:20260404T051301Z
CREATED:20240315T190701Z
LAST-MODIFIED:20240315T190701Z
UID:649-1712217600-1712250000@arni-institute.org
SUMMARY:Data Science Day 2024
DESCRIPTION:“The Data Science Institute’s flagship annual event connects innovators in industry and government to Columbia researchers who are propelling advances across every sector with data science.”\nIf you are interested in the event\, please register on their event page.
URL:https://arni-institute.org/event/data-science-day-2024/
LOCATION:Alfred Lerner Hall\, 2920 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240328T133000
DTEND;TZID=UTC:20240328T144000
DTSTAMP:20260404T051301Z
CREATED:20240322T003508Z
LAST-MODIFIED:20240326T190536Z
UID:722-1711632600-1711636800@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: \nPaper Topic: https://arxiv.org/abs/2302.03241 \nZoom: https://columbiauniversity.zoom.us/j/94783759415?pwd=cTlDTDdCVk9vdEV0QzRKL0hKQW1Kdz09
URL:https://arni-institute.org/event/continual-learning-working-group-5/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240322T150000
DTEND;TZID=UTC:20240322T170000
DTSTAMP:20260404T051301Z
CREATED:20240319T234436Z
LAST-MODIFIED:20240320T000555Z
UID:699-1711119600-1711126800@arni-institute.org
SUMMARY:Animal Behavior Video Analysis Working Group
DESCRIPTION:Title: Brain Decodes Deep Nets\nPresenter: Jianbo Shi\, PhD\nGRASP Laboratory\nComputer and Information Science\nUniversity of Pennsylvania\n\nAbstract: We developed a surprising usage of brain encoding: using a brain fMRI prediction model to draw a picture of how a deep net processes information onto a brain. Our tool provides a detailed analysis of large pre-trained vision models by mapping them onto the brain\, thus exposing their hidden layers and channels. Our results show how different training methods matter: they lead to remarkable differences in hierarchical organization and scaling behavior. It also provides insight into finetuning: how large pre-trained models change when adapting to new datasets.\n\nJoin Zoom Meeting:\nhttps://columbiauniversity.zoom.us/j/93542681364?pwd=eFlZSkhGY0JHZGlHSk8zSVRYdHRSZz09\nMeeting ID: 935 4268 1364\nPasscode: 645004
URL:https://arni-institute.org/event/animal-behavior-video-analysis-working-group-5/
LOCATION:CSB 453\, Mudd Building\, 500 W 120th Street
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240322T000000
DTEND;TZID=UTC:20240322T000000
DTSTAMP:20260404T051301Z
CREATED:20240315T190133Z
LAST-MODIFIED:20240319T225406Z
UID:643-1711065600-1711065600@arni-institute.org
SUMMARY:Jennifer Groh
DESCRIPTION:Title: Multiplexing multiple signals in neural codes: new statistical tools and evidence \nAbstract: How the brain represents multiple objects is mysterious. Sensory neurons are broadly tuned\, producing overlap in the populations of neurons potentially activated by each object in the scene. This overlap raises questions about how distinct information is retained about each item. I will present a novel theory of neural representation\, positing that neural signals may interleave representations of individual items across time. Evidence for this theory has come from new statistical tools that overcome the limitations inherent to standard time-and-trial-pooled assessments of activity. This theory has implications for diverse domains of neuroscience\, including attention\, figure-ground segregation\, and grounded cognition.
URL:https://arni-institute.org/event/jennifer-groh/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240321T133000
DTEND;TZID=America/New_York:20240321T144000
DTSTAMP:20260404T051301Z
CREATED:20240314T201207Z
LAST-MODIFIED:20240322T003125Z
UID:637-1711027800-1711032000@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: \nPaper Topic: https://arxiv.org/abs/2102.01951 \nZoom: https://columbiauniversity.zoom.us/j/94783759415?pwd=cTlDTDdCVk9vdEV0QzRKL0hKQW1Kdz09
URL:https://arni-institute.org/event/continual-learning-working-group/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240315T113000
DTEND;TZID=UTC:20240315T113000
DTSTAMP:20260404T051301Z
CREATED:20240314T195510Z
LAST-MODIFIED:20240314T195946Z
UID:627-1710502200-1710502200@arni-institute.org
SUMMARY:Rui Ponte Costa
DESCRIPTION:Title: Brain-wide credit assignment: cortical and subcortical perspectives \nAbstract: The brain assigns credit to trillions of synapses remarkably well. How the brain achieves this feat is one of the great mysteries in neuroscience. Recently\, we have introduced Bursting cortico-cortical networks\, a computational model of hierarchical credit assignment that captures a large number of biological features while approximating deep learning algorithms (Greedy et al. NeurIPS 2022). I will show that in contrast to previous work this model (i) does not require a multi-phase learning process\, (ii) is consistent with experimental observations across multiple levels and (iii) provides efficient credit assignment across the cortical hierarchy. \nHowever\, these models often assume that behavioural feedback is readily available. How the brain learns efficiently despite the sparse nature of feedback remains unclear. Recently we have proposed that a subcortical region\, the cerebellum\, predicts behavioural feedback\, thereby unlocking learning in cortical networks from future feedback. We have introduced two views by which the cerebellum may help the cortex: (i) by driving cortical plasticity (Boven et al. Nature Comms 2023) or (ii) by driving cortical dynamics (Pemberton et al. bioRxiv). Together these two views suggest that cortico-cerebellar loops are a critical part of task acquisition\, switching\, and consolidation in the brain.
URL:https://arni-institute.org/event/rui-ponte-costa/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240307T133000
DTEND;TZID=UTC:20240307T144000
DTSTAMP:20260404T051301Z
CREATED:20240315T195437Z
LAST-MODIFIED:20240315T195437Z
UID:654-1709818200-1709822400@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: Paper Topic: https://arxiv.org/abs/1906.01076\nZoom: https://columbiauniversity.zoom.us/j/94783759415?pwd=cTlDTDdCVk9vdEV0QzRKL0hKQW1Kdz09
URL:https://arni-institute.org/event/continual-learning-working-group-2/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240229T133000
DTEND;TZID=UTC:20240229T144000
DTSTAMP:20260404T051301Z
CREATED:20240315T195658Z
LAST-MODIFIED:20240315T195658Z
UID:657-1709213400-1709217600@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: Paper Topic: https://arxiv.org/abs/2302.00487 \nZoom: https://columbiauniversity.zoom.us/j/97515072030?pwd=VGJONXR6bW9LVTN3VlZZSXdRZnNIdz09
URL:https://arni-institute.org/event/continual-learning-working-group-3/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240222T133000
DTEND;TZID=UTC:20240222T144000
DTSTAMP:20260404T051301Z
CREATED:20240315T195822Z
LAST-MODIFIED:20240315T195822Z
UID:659-1708608600-1708612800@arni-institute.org
SUMMARY:Continual Learning Working Group
DESCRIPTION:Weekly Meeting Group Discussion: Paper Topic: https://arxiv.org/abs/2302.00487\nZoom: https://columbiauniversity.zoom.us/j/3658091817?pwd=WHFJVzAwbDdQcFMzc2FreVplKzVMUT09
URL:https://arni-institute.org/event/continual-learning-working-group-4/
LOCATION:CEPSR 620\, Schapiro 530 W. 120th St
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240220T130000
DTEND;TZID=UTC:20240220T141500
DTSTAMP:20260404T051301Z
CREATED:20240315T200440Z
LAST-MODIFIED:20240315T200440Z
UID:661-1708434000-1708438500@arni-institute.org
SUMMARY:Generative AI Free Speech & Public Discourse
DESCRIPTION:ARNI co-PI Kathy McKeown and ARNI faculty Carl Vondrick participate in\nPanel 1: Empirical and Technological Questions: Current Landscape\, Challenges\, and Opportunities\nLink: https://www.engineering.columbia.edu/symposium-generative-ai-free-speech-public-discourse\nArticle: https://www.engineering.columbia.edu/news/navigating-generative-ai-and-its-impact-future-public-discourse?utm_source=newsletter&utm_medium=email&utm_campaign=highlights030124
URL:https://arni-institute.org/event/generative-ai-freespeech-public-discourse/
LOCATION:Forum Auditorium\, 601 W 125th St\, New York\, NY\, 10027\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240216T150000
DTEND;TZID=UTC:20240216T170000
DTSTAMP:20260404T051301Z
CREATED:20240315T200745Z
LAST-MODIFIED:20240315T200908Z
UID:665-1708095600-1708102800@arni-institute.org
SUMMARY:Animal Behavior Video Analysis Working Group
DESCRIPTION:Title: Mapping the landscape of social behavior using high-resolution 3D tracking of freely interacting animals \nPresenter: Ugne Klibaite\, PhD\nHarvard University\, Department of Organismic & Evolutionary Biology (PI\, Bence P. Ölveczky) \nAbstract: Social interaction is a fundamental component of animal behavior. However\, we lack tools to describe it with quantitative rigor\, limiting our understanding of its principles and the neuropsychiatric disorders\, like autism\, that perturb it. To address these limitations\, my collaborators and I have developed a technique for high-resolution 3D tracking of freely interacting animals and their body-wide social touch patterns\, solving the challenging subject occlusion and part assignment problems using 3D geometric reasoning\, graph neural networks\, and semi-supervised learning. Using this technology\, I have collected and annotated over 34 million 3D postures in interacting rats\, featuring five new monogenic autism models lacking reports of social behavioral phenotypes. I will introduce a novel multi-scale approach which I have used to identify a rich landscape of stereotyped interactions\, synchrony\, and body contact across strains. This deep phenotyping approach revealed a spectrum of changes in rat autism models and in response to amphetamine\, and this framework has the potential to facilitate quantitative studies of social behaviors and their neurobiological underpinnings. \nJoin Zoom Meeting:\nhttps://columbiauniversity.zoom.us/j/94848687512?pwd=d0d2L20wSUdZWGZ4dytuZ1YyaEt3QT09 \nMeeting ID: 948 4868 7512\nPasscode: 446335
URL:https://arni-institute.org/event/animal-behavior-video-analysis-working-group/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240119T150000
DTEND;TZID=UTC:20240119T170000
DTSTAMP:20260404T051301Z
CREATED:20240315T201042Z
LAST-MODIFIED:20240315T201042Z
UID:668-1705676400-1705683600@arni-institute.org
SUMMARY:Animal Behavior Video Analysis Working Group
DESCRIPTION:Title: Multimodal Learning from Pixels to People \nPresenter: Carl Vondrick \nAbstract: People experience the world through modalities of sight\, sound\, words\, touch\, and more. By leveraging their natural relationships and developing multimodal learning methods\, my research creates artificial perception systems with diverse skills\, including spatial\, physical\, logical\, and cognitive abilities\, for flexibly analyzing visual data. This multimodal approach provides versatile representations for tasks like 3D reconstruction\, visual question answering\, and object recognition\, while offering inherent explainability and excellent zero-shot generalization across tasks. By closely integrating diverse modalities\, we can overcome key challenges in machine learning and enable new capabilities for computer vision\, especially for the many upcoming applications where trust is required. \nJoin Zoom Meeting:\nhttps://columbiauniversity.zoom.us/j/96127949475?pwd=TWxLa3A3a3lBRjdqbDBWMkRycHFMZz09 \nMeeting ID: 961 2794 9475\nPasscode: 446335
URL:https://arni-institute.org/event/animal-behavior-video-analysis-working-group-2/
LOCATION:CSB 480\, Mudd Building\, 500 W 120th Street
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20231201T150000
DTEND;TZID=UTC:20231201T170000
DTSTAMP:20260404T051301Z
CREATED:20240315T201133Z
LAST-MODIFIED:20240315T201133Z
UID:671-1701442800-1701450000@arni-institute.org
SUMMARY:Animal Behavior Video Analysis Working Group
DESCRIPTION:Title: Precise quantification of natural behavior with computer vision \nAbstract: To understand the neural control of movement\, cognition\, and social interaction\, we need to precisely quantify motor behaviors. Deep learning tools now make it possible to extract meaningful behavioral signals from raw videos in high spatiotemporal resolution. These technologies are gaining increasing adoption in systems neuroscience and are transforming the field in many ways. We will provide an overview of the field\, present the limitations of some of the standard approaches\, and present some of our own work on pose tracking (keypoint detection) and perhaps behavioral segmentation (discovering discrete behavioral motifs). We look forward to exploring fresh perspectives on this important problem. \nJoin Zoom Meeting:\nhttps://columbiauniversity.zoom.us/j/95557736296?pwd=V2tTNEVOellZMENGUDF5RXVwcUUyQT09 \nMeeting ID: 955 5773 6296\nPasscode: 446335
URL:https://arni-institute.org/event/animal-behavior-video-analysis-working-group-3/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
END:VCALENDAR