Modular computations in AI and neuroscience: Principles and applications
PI: Ken Miller
Co-PIs: Larry Abbott, Columbia; Tahereh Toosi, Columbia
Abstract
Modularity is a fundamental organizing principle in both biological and artificial intelligence systems. In the brain, it manifests as functionally specialized neural circuits, perhaps most prominently in the visual system's division into dorsal ("where/how") and ventral ("what") pathways, specialized for spatial/action-oriented processing and object recognition, respectively [1]. This anatomical and functional segregation, conserved across species, suggests an evolutionary advantage to modular organization (Fig 1.A) [2]. In artificial intelligence, similar modularity principles have been proposed through various architectural innovations, particularly Mixture-of-Experts (MoE) approaches that selectively activate specialized sub-networks, or "experts," based on input characteristics [3,4]. Large-scale neural networks implementing these principles have consistently demonstrated substantial performance and efficiency gains through selective computation, activating only the modules relevant to each input. This functional specialization parallels the brain's efficient allocation of neural resources across dedicated processing streams. Despite these parallel developments, we lack a fundamental understanding of why such modularity emerges in the brain and of how to design effective modular systems. This gap is a significant obstacle to both fields: in neuroscience, identifying the computational principles that drive modular organization could yield fundamental insights into neural information processing; in AI, principled approaches to modular design could lead to more efficient, robust, and adaptable systems. The central questions motivating this proposal are: I) What computational principles drive the emergence of modular organization in neural systems? II) Do similar principles apply across biological and artificial systems? III) Can we leverage these principles to develop more effective AI architectures and a better understanding of brain function?
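The selective-computation idea behind MoE architectures can be made concrete with a minimal sketch. This is not the design proposed here, and all names and dimensions below are illustrative assumptions: a linear gate scores a fixed set of toy linear "experts," and only the top-k experts are evaluated for a given input, which is the source of the efficiency gains from selective computation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, n_experts, top_k = 8, 4, 4, 2  # illustrative sizes

# Hypothetical toy parameters: one linear gate, one linear map per expert.
W_gate = rng.normal(size=(d_in, n_experts))
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def moe_forward(x):
    """Route input x to its top-k experts and mix their outputs."""
    scores = softmax(x @ W_gate)            # gate's distribution over experts
    chosen = np.argsort(scores)[-top_k:]    # indices of the top-k experts
    weights = scores[chosen] / scores[chosen].sum()  # renormalize over chosen
    # Only the selected experts are evaluated; the rest are skipped entirely.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

y = moe_forward(rng.normal(size=d_in))
print(y.shape)  # (4,)
```

In large-scale systems the experts are full sub-networks and the gate is trained jointly with them, but the routing logic is the same: per-input expert selection keeps the computation sparse while total capacity grows with the number of experts.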
This research directly addresses ARNI’s core themes of Neural Mechanisms of Intelligence and Language and Vision, while incorporating aspects of Continual Learning through the investigation of how modular architectures enhance adaptation to new information. Bridging neuroscience and AI, we will uncover the computational principles driving modularity across domains, advancing our understanding of natural intelligence while enabling more adaptive artificial systems.
Publications
In progress
Resources
In progress
