Dynamic formation of invariant object representations
PI: Alan Stocker
Co-PIs: Xaq Pitkow, CMU; Josh Gold, UPenn
Abstract
Our proposed work is twofold. On the experimental side, we will use object tracking to assess the emergence of invariant object representations. We will characterize how these representations depend on parametric variations of a dynamic environmental context, i.e., the dynamics and structure of distractors and noise. We have begun testing human participants' ability to dynamically adapt their model (i.e., representation) of an object (a Gabor patch) defined by its motion dynamics in the presence of noise. Our preliminary results show that people can dynamically adapt their tracking behavior to the timescale of the object's motion (Fig. 1A,B). We plan to extend this approach to 2D tracking of objects with spatiotemporal features within a dynamic, structured environment (e.g., one containing distractor objects). We will use continuous psychophysics (Bonnen et al., 2015) in combination with change-detection tasks to probe and characterize whether and how object representations change in response to dynamic changes in the spatiotemporal structure of the environment. On the theory and modeling side, we will use structured recurrent neural networks to perform the computations required by this task. A natural normative framework for the task is a Hidden Markov Model (HMM) with continuous states, which can be used to infer and track the task-relevant latent states as well as the environmental dynamics. In the case of linear dynamics with Gaussian additive noise, optimal inference in the HMM is performed by a Kalman filter, itself a particular linear dynamical system driven by sensory input. We will explore how constrained neural architectures for linear recurrent networks can learn and approximate these optimal computations. In particular, we will allow individual neural units to possess adaptive characteristics, minimally including their own latent temporal dynamics. These adaptive characteristics can be implemented as a linear dynamical system with additional internal latent states for each neuron, leading to an effective recurrence matrix with a special block structure (Fig. 1C,D). We will contrast the performance characteristics of such a constrained recurrent system with that of an unconstrained linear recurrent system. Later, we will generalize these results to nonlinear dynamical systems, using an extended Kalman filter to track a changing context, including process dynamics and latent process variability. These two threads of work will be conducted in close coordination, with the experiments and theory informing each other (we hold weekly meetings to discuss progress and plans). As more data become available, we will compare the behaviors produced by a family of such networks to the human data.
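To make the normative framework concrete, the following is a minimal sketch of the linear-Gaussian case: a Kalman filter tracking a latent 1D object position from noisy observations. All parameters (the dynamics coefficient a, process variance q, and observation variance r) are illustrative placeholders, not values from the proposed experiments.

```python
import numpy as np

def kalman_filter_1d(observations, a=0.95, q=0.1, r=1.0):
    """Track a latent 1D object position under linear-Gaussian dynamics.

    Assumed generative model (all parameters illustrative):
        x_t = a * x_{t-1} + process noise, Var = q
        y_t = x_t         + sensory noise, Var = r
    """
    mu, var = 0.0, 1.0                     # prior belief over the latent position
    estimates = []
    for y in observations:
        # Predict: propagate the belief through the dynamics
        mu_pred = a * mu
        var_pred = a**2 * var + q
        # Update: weigh the prediction against the new observation
        k = var_pred / (var_pred + r)      # Kalman gain
        mu = mu_pred + k * (y - mu_pred)
        var = (1 - k) * var_pred
        estimates.append(mu)
    return np.array(estimates)

# Simulate a slowly drifting target observed in noise, then filter it
rng = np.random.default_rng(0)
x, xs, ys = 0.0, [], []
for _ in range(500):
    x = 0.95 * x + rng.normal(0, np.sqrt(0.1))
    xs.append(x)
    ys.append(x + rng.normal(0, 1.0))
est = kalman_filter_1d(ys)
```

The filtered estimate has lower mean squared error than the raw observations, which is the sense in which the Kalman filter is the optimal tracker here; the same predict-update recursion, written as a single linear map on the belief state, is what the recurrent networks described above must approximate.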
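The block-structured recurrence matrix can also be sketched concretely. Below, each of n visible units carries one private internal latent state (e.g., an adaptation variable) that low-pass filters its own activity and feeds back negatively; stacking visible and internal states yields an augmented linear system whose effective recurrence matrix has the block form described above. The gains (a_int, g_in, g_out) and the random choice of W are our own illustrative assumptions.

```python
import numpy as np

def block_recurrence(W, a_int=0.9, g_in=0.1, g_out=-0.5):
    """Effective recurrence matrix for units with one private internal state.

    Each of the n units has a latent variable z_i that low-pass filters its
    own rate (z_i <- a_int * z_i + g_in * r_i) and feeds back with gain
    g_out (adaptation). Stacking the state s = [r; z] gives the block matrix
        A = [[ W,        g_out * I ],
             [ g_in * I, a_int * I ]]
    where the identity blocks encode that each internal state couples only
    to its own neuron. All gains here are illustrative placeholders.
    """
    n = W.shape[0]
    I = np.eye(n)
    top = np.hstack([W, g_out * I])
    bottom = np.hstack([g_in * I, a_int * I])
    return np.vstack([top, bottom])

rng = np.random.default_rng(1)
n = 8
W = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)   # weak random recurrence
A = block_recurrence(W)

# Simulate the augmented linear system driven by a step input to unit 0;
# external input reaches only the visible units, not the internal states.
s = np.zeros(2 * n)
u = np.zeros(2 * n)
u[0] = 1.0
for _ in range(100):
    s = A @ s + u
r_vis, z_int = s[:n], s[n:]            # visible rates, private internal states
```

Only the visible block W is unconstrained; the remaining blocks are diagonal, which is what distinguishes this constrained architecture from a fully unconstrained linear recurrent network of 2n units.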
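Finally, the extended Kalman filter (EKF) idea for tracking a changing context can be sketched as joint inference over the object position and an unknown dynamics parameter. Augmenting the state with the timescale parameter a makes the dynamics bilinear, so each step linearizes around the current estimate. This is a sketch under assumed noise levels; the variable names and variances are illustrative, not taken from the proposal.

```python
import numpy as np

def ekf_track_timescale(ys, q_x=0.1, q_a=1e-4, r=0.5):
    """EKF that jointly tracks a position x and a slowly drifting dynamics
    parameter a in the assumed model
        x_t = a_t * x_{t-1} + noise (Var q_x),   y_t = x_t + noise (Var r).
    The augmented state is s = [x, a]; f(x, a) = (a*x, a) is bilinear, so we
    linearize around the current estimate (the standard EKF step).
    """
    s = np.array([0.0, 0.5])               # initial guess: x = 0, a = 0.5
    P = np.eye(2)                          # posterior covariance
    Q = np.diag([q_x, q_a])
    H = np.array([[1.0, 0.0]])             # we observe x only
    a_hist = []
    for y in ys:
        x, a = s
        # Predict, using the Jacobian of f(x, a) = (a*x, a)
        F = np.array([[a, x], [0.0, 1.0]])
        s = np.array([a * x, a])
        P = F @ P @ F.T + Q
        # Update against the new observation
        S = H @ P @ H.T + r                # innovation variance
        K = P @ H.T / S                    # Kalman gain
        s = s + (K * (y - s[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
        a_hist.append(s[1])
    return np.array(a_hist)

# Ground truth: a fixed timescale a = 0.95, observed in noise
rng = np.random.default_rng(2)
x, ys = 0.0, []
for _ in range(2000):
    x = 0.95 * x + rng.normal(0, np.sqrt(0.1))
    ys.append(x + rng.normal(0, np.sqrt(0.5)))
a_est = ekf_track_timescale(ys)
```

Starting from a deliberately wrong guess (a = 0.5), the estimate of a moves toward the true timescale as evidence accumulates, which is the sense in which the EKF tracks the environmental context, not just the object state.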
Publications
In Progress
Resources
In Progress
