BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARNI
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250807T143000
DTEND;TZID=America/New_York:20250807T163000
DTSTAMP:20260504T114844Z
CREATED:20250703T154507Z
LAST-MODIFIED:20250729T153708Z
UID:1845-1754577000-1754584200@arni-institute.org
SUMMARY:Speaker: Kwabena Boahen – ARNI WG Multi-resource-cost optimization of neural network models
DESCRIPTION:Title: From 2D Chips to 3D Brains \nAbstract: \nArtificial intelligence (AI) realizes a synaptocentric conception of the learning brain with dot-products and advances by performing twice as many multiplications every two months. But the semiconductor industry tiles twice as many multipliers on a chip only every two years. Moreover\, the returns from tiling these multipliers ever more densely now diminish\, because signals must travel relatively farther and farther\, expending energy and exhausting heat that scales quadratically. As a result\, communication is now much more expensive than computation. Much more so than in biological brains\, where energy-use scales linearly rather than quadratically with neuron count. That allows an 86-billion-neuron human brain to use as little power as a single lightbulb (25W) rather than as much as the entire US (3TW). Hence\, rescaling a chip’s energy-use from quadratic to linear is critical to scale AI sustainably from a trillion (10^12) parameters (mouse scale) today to a quadrillion (10^15) parameters (human scale) in the next five years. But this would require communication cost to be reduced radically. Towards that end\, I will present a recent re-conception of the brain’s fundamental unit of computation that sparsifies signals by moving away from synaptocentric learning with dot-products to dendrocentric learning with sequence detectors. \nZoom: Request at ARNI@columbia.edu
URL:https://arni-institute.org/event/speaker-arni-wg-multi-resource-cost-optimization-of-neural-network-models/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250827T140000
DTEND;TZID=America/New_York:20250827T160000
DTSTAMP:20260504T114844Z
CREATED:20250826T144924Z
LAST-MODIFIED:20250826T144924Z
UID:1952-1756303200-1756310400@arni-institute.org
SUMMARY:Speakers: Vinam Arora and Ji Xia – ARNI Frontier Models for Neuroscience and Behavior Working Group
DESCRIPTION:Titles and Abstracts: \n1st Speaker: Vinam Arora\, UPenn\nTitle: Know Thyself by Knowing Others: Learning Neuron Identity from Population Context\nAbstract: Identifying the functional identity of individual neurons is essential for interpreting circuit dynamics\, yet remains a major challenge in large-scale in vivo recordings where anatomical and molecular labels are often unavailable. Here we introduce NuCLR\, a self-supervised framework that learns context-aware representations of neuron identity by modeling each neuron’s role within the broader population. NuCLR employs a spatiotemporal transformer that captures both within-neuron dynamics and across-neuron interactions\, and is trained with a sample-wise contrastive objective that encourages stable\, discriminative embeddings across time. Across multiple open-access datasets\, NuCLR outperforms prior methods in both cell type and brain region classification. It enables zero-shot generalization to entirely new populations—without retraining or access to stimulus labels—offering a scalable approach for real-time\, functional decoding of neuron identity across diverse experimental settings. \n2nd Speaker: Ji Xia\, Columbia\nTitle: Inpainting the Neural Picture: Inferring Unrecorded Brain Area Dynamics from Multi-Animal Datasets\nAbstract: Understanding how the brain drives memory-guided movements requires recording neural activity from the motor cortex and interconnected subcortical areas. Neuropixels probes now allow simultaneous recordings from subsets of these areas\, but no single session captures all areas of interest\, and different neurons are sampled from each area across sessions. This poses a key challenge: how to integrate neural data across sessions to reconstruct the complete multi-area picture. 
 We address this with a transformer-based autoencoder that aligns neural activity into a shared latent space across sessions and animals\, separately for each brain area\, including those not recorded in a given session. This approach enables single-trial analysis of multi-area neural dynamics from all areas of interest. I am now working on improving this method\, and will discuss both its present challenges and promising directions for future work. \nZoom: Upon request at arni@columbia.edu.
URL:https://arni-institute.org/event/speakers-vinam-arora-and-ji-xia-arni-frontier-models-for-neuroscience-and-behavior-working-group/
LOCATION:Zuckerman Institute – L3-079\, 3227 Broadway\, New York\, NY\, 10027\, United States
END:VEVENT
END:VCALENDAR