Adam Charles
April 12 @ 11:30 am - 1:00 pm
Abstract: Uncovering the principles of neural computation requires 1) new methods to observe micron-level targets at scale and 2) interpretable models of high-dimensional time-series. In this talk I will cover recent advances in leveraging advanced data models based on latent sparsity and low-dimensionality to tackle key challenges in both domains. First, I will discuss ongoing work in multi-photon data analysis. This work seeks to expand our capabilities to extract scientifically rich information from large-scale data of sub-micron targets that reflect how circuits compute and how those computations adapt over time. Specifically, I will present recent machine learning-based image enhancement for tracking synaptic strength in vivo at scale, and a morphology-independent image segmentation algorithm for identifying geometrically complex fluorescing objects (e.g., in dendritic and wide-field imaging). Next, I will discuss the analysis challenge of inferring meaningful representations from the brain-wide activity that these imaging advances provide. Brain-wide data reflects many parallel and distributed computations. I will present recent work building on the intuition of the “neural data manifold” and introduce a decomposed linear dynamical systems (dLDS) model that captures the nonlinear and non-stationary properties of neural trajectories along this manifold. dLDS learns a concise model of such dynamics by breaking the system into several independent, overlapping sub-systems, each interpretable as a linear system. I will demonstrate how this model finds meaningful trajectories both in synthetic data and in “whole-brain” C. elegans imaging.
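To give a feel for the dLDS idea described above, the following is a minimal sketch (not the speaker's implementation) of a state evolving under a sparse, time-varying mixture of a few fixed linear operators. The specific operators, dimensions, and switching schedule are illustrative assumptions; in the actual model both the operators and the sparse coefficients are learned from data.

```python
import numpy as np

# Illustrative sketch of the dLDS intuition: the state x_t evolves under a
# time-varying combination of a few fixed linear operators A_i, with sparse
# mixing coefficients c_t selecting which sub-systems are active.
rng = np.random.default_rng(0)

dim, T = 3, 200
theta = 0.05

# Two hypothetical operators: a slow rotation and a gentle decay.
A = np.stack([
    np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]]),
    0.98 * np.eye(dim),
])

x = np.zeros((T, dim))
x[0] = rng.standard_normal(dim)

for t in range(T - 1):
    # Sparse, slowly switching coefficients: one operator dominates in the
    # first half of the trajectory, a different mixture in the second half.
    c = np.array([1.0, 0.0]) if t < T // 2 else np.array([0.2, 0.8])
    A_t = np.tensordot(c, A, axes=1)   # effective dynamics matrix at time t
    x[t + 1] = A_t @ x[t]

# In dLDS proper, the operators A_i and the sparse coefficients c_t would be
# learned jointly from recorded activity rather than fixed as above.
```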