Principles of flexible and robust decision making using multiple sensory modalities
PI: Pratik Chaudhari
Co-PI: Joshua Gold and Vijay Balasubramanian
Abstract
The ability of artificial neural networks to avoid overfitting is reflected in a characteristic structure called “sloppiness”. The signature of sloppiness is a Fisher Information Matrix (FIM) whose eigenvalues are distributed uniformly on a logarithmic scale. This distribution indicates a large degree of redundancy in the learned parameters: one combination of parameters is tightly constrained by the data, another can vary twice as much without affecting predictions, and so on. This project aims to expand and formalize our understanding of how sloppiness affects generalization in different artificial neural networks.
In addition, we aim to apply these ideas to understand how humans and non-human primates learn, and adapt to changes in, tasks that require inference from multi-modal sensory inputs in complex, dynamic environments, and how these abilities are affected by upstream and downstream computation.
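As a concrete illustration of the notion of sloppiness described above (this is a minimal sketch, not the project's code), the example below estimates the Fisher Information Matrix of a small logistic-regression model on synthetic, correlated data and prints its eigenvalues on a log scale; the data, model, and hyperparameters are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, correlated two-class data (all sizes and constants are illustrative).
n, d = 500, 10
A = rng.normal(size=(d, d))
X = rng.normal(size=(n, d)) @ A        # correlated features give a broad spectrum
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

# Fit a logistic-regression model by plain gradient descent on the logistic loss.
w = np.zeros(d)
for _ in range(2000):
    z = np.clip(X @ w, -30, 30)        # clip logits for numerical stability
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.1 * (X.T @ (p - y)) / n

# Fisher Information Matrix for a Bernoulli output:
#   F = (1/n) * sum_i p_i (1 - p_i) x_i x_i^T
p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
F = (X * (p * (1 - p))[:, None]).T @ X / n

# In a sloppy model the eigenvalues of F spread over several decades,
# i.e., they look roughly uniform on a logarithmic scale.
eigvals = np.linalg.eigvalsh(F)[::-1]
print("log10 FIM eigenvalues:", np.round(np.log10(eigvals + 1e-12), 2))
```

For deep networks the FIM is too large to form explicitly, so in practice one would estimate its spectrum with matrix-free methods (e.g., FIM-vector products and a Lanczos-type solver); the closed-form expression used here is specific to this toy model.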
Publications
De Silva, A., Ramesh, R., Yang, R., Yu, S., Vogelstein, J.T. and Chaudhari, P., "Prospective Learning: Learning for a Dynamic Future." NeurIPS 2024. https://arxiv.org/abs/2411.00109
Ramesh, R., Bisulco, A., DiTullio, R.W., Wei, L., Balasubramanian, V., Daniilidis, K. and Chaudhari, P., 2024. "Many Perception Tasks are Highly Redundant Functions of their Input Data". arXiv preprint https://arxiv.org/abs/2407.13841
Rooke, S., Wang, Z., Di Tullio, R.W. and Balasubramanian, V., "Trading Place for Space: Increasing Location Resolution Reduces Contextual Capacity in Hippocampal Codes." NeurIPS 2024. https://www.biorxiv.org/content/10.1101/2024.10.29.620785v1
Wang, Z., Di Tullio, R.W., Rooke, S. and Balasubramanian, V., "Time makes space: Emergence of place fields in networks encoding temporally continuous sensory experiences." NeurIPS 2024. https://arxiv.org/abs/2408.05798
Resources
Prospective Learning: Principled Extrapolation to the Future: code at https://github.com/neurodata/prolearn and tutorial notebook at https://github.com/neurodata/prolearn/blob/main/tutorials/tutorial.ipynb
Time makes space: Emergence of place fields in networks encoding temporally continuous sensory experiences: code at https://github.com/zhaozewang/place_cells_episodic_rnn