Evaluating sparsely structured selectivity in neural codes
PI: Stefano Fusi
Co-PIs: Xaq Pitkow, CMU; Andreas Tolias, Stanford
Abstract
Sparsity has long been associated with good neural representations and is thought to play a key role in learning and generalization by helping disentangle causal latent variables. There are many different types of sparsity, with different consequences: the most common are sparse activity (sparse coding) and sparse connectivity (e.g., anatomical wiring, probabilistic graphical models). Here we address a third type, sparse selectivity: the tuning of individual neurons to a small subset of latent variables. Our two aims address whether such selectivity is present, and what value it adds.
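
To make the distinction concrete, the minimal sketch below (in Python with NumPy) quantifies each kind of sparsity on a toy linear tuning model. This is an illustration under our own assumptions, not the project's analysis pipeline: the tuning matrix T, the threshold eps, and all variable names are hypothetical.

    import numpy as np

    # Minimal sketch: three notions of sparsity on a toy linear model.
    # T[i, j] is hypothetical neuron i's tuning strength to latent j;
    # entries with magnitude below `eps` count as zero. All quantities
    # here are illustrative assumptions, not measured data.
    rng = np.random.default_rng(0)
    n_neurons, n_latents = 100, 8
    T = rng.normal(size=(n_neurons, n_latents))
    T[rng.random(T.shape) > 0.3] = 0.0   # zero out ~70% of tunings
    eps = 1e-6

    # Sparse activity (sparse coding): fraction of neurons silent in
    # the response r = T @ z to one random latent state z.
    z = rng.normal(size=n_latents)
    r = T @ z
    activity_sparsity = np.mean(np.abs(r) < eps)

    # Sparse connectivity: fraction of zero entries in a weight matrix
    # (here T stands in for synaptic weights in a circuit model).
    connectivity_sparsity = np.mean(np.abs(T) < eps)

    # Sparse selectivity: each neuron is tuned to few latent variables,
    # i.e. few nonzero entries per row of T.
    latents_per_neuron = np.sum(np.abs(T) > eps, axis=1)
    selectivity_sparsity = 1.0 - latents_per_neuron.mean() / n_latents

    print(f"activity sparsity:     {activity_sparsity:.2f}")
    print(f"connectivity sparsity: {connectivity_sparsity:.2f}")
    print(f"selectivity sparsity:  {selectivity_sparsity:.2f}")

In this toy example most neurons respond to a random latent state (activity is largely dense) even though each neuron is tuned to only a few latents (selectivity is sparse), underscoring that the three measures can dissociate.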
Publications
In progress
Resources
In progress
