Towards Generalized Inference of Single-Trial Neural Population Dynamics

Prof. Chethan Pandarinath

Please register at: https://tinyurl.com/Neuroeng-Med-Nov
A recording of this presentation is available to UC Davis faculty, students, and researchers.

Chethan Pandarinath, PhD

Assistant Professor, Biomedical Engineering, Georgia Institute of Technology & Emory University

Neurosurgery, Emory University

Abstract

Large-scale recordings of neural activity are becoming ubiquitous, providing new opportunities to study network-level dynamics in diverse brain areas and during increasingly complex, natural behaviors. However, the sheer volume of data and its dynamical complexity are critical barriers to uncovering and interpreting these dynamics.

Deep learning methods are a particularly promising approach due to their ability to uncover meaningful relationships from large, complex, and noisy datasets. One such method, latent factor analysis via dynamical systems (LFADS), uses recurrent neural networks to infer latent dynamics from high-dimensional neural spiking data. When applied to motor cortical (M1) activity during stereotyped behaviors, LFADS substantially improved the ability to uncover dynamics and their relation to subjects’ behaviors on a moment-by-moment, millisecond timescale.
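
For readers unfamiliar with this class of model, the following is a minimal, hypothetical sketch of the core idea (written in PyTorch; it is not the actual LFADS codebase): a recurrent encoder compresses binned spike counts into an initial condition for a generator RNN, whose states are read out as low-dimensional latent factors and then as Poisson firing rates. All dimensions, layer choices, and names are illustrative assumptions.

```python
# Minimal sketch of an LFADS-style sequential autoencoder (simplified, hypothetical).
# A bidirectional GRU encoder summarizes binned spike counts into an initial condition
# for a generator GRU, whose hidden states are mapped to low-dimensional latent factors
# and then to per-neuron Poisson firing rates.
import torch
import torch.nn as nn

class LFADSSketch(nn.Module):
    def __init__(self, n_neurons, enc_dim=64, gen_dim=64, factor_dim=8):
        super().__init__()
        self.encoder = nn.GRU(n_neurons, enc_dim, batch_first=True, bidirectional=True)
        self.to_ic = nn.Linear(2 * enc_dim, gen_dim)      # initial condition of the generator
        self.generator = nn.GRUCell(1, gen_dim)           # autonomous dynamics (zero dummy input)
        self.to_factors = nn.Linear(gen_dim, factor_dim)  # low-dimensional latent factors
        self.to_rates = nn.Linear(factor_dim, n_neurons)  # log firing rate per neuron

    def forward(self, spikes):                            # spikes: (batch, time, neurons)
        _, h = self.encoder(spikes)                       # h: (2, batch, enc_dim)
        g = torch.tanh(self.to_ic(torch.cat([h[0], h[1]], dim=-1)))
        dummy = spikes.new_zeros(spikes.shape[0], 1)
        factors, rates = [], []
        for _ in range(spikes.shape[1]):                  # unroll the generator over time
            g = self.generator(dummy, g)
            f = self.to_factors(g)
            factors.append(f)
            rates.append(torch.exp(self.to_rates(f)))     # Poisson rates must be positive
        return torch.stack(factors, 1), torch.stack(rates, 1)

# Training would minimize the Poisson negative log-likelihood of the observed spikes, e.g.:
# loss = torch.nn.PoissonNLLLoss(log_input=False)(rates, spikes)
```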

However, applying LFADS to less-structured behaviors, or in brain areas that are not predominantly driven by intrinsic dynamics, is far more challenging. This is because LFADS, like many deep learning methods, requires careful hand-tuning of complex model hyperparameters (HPs) to achieve good performance. Here we demonstrate AutoLFADS, a large-scale, automated model-tuning framework that can characterize dynamics in diverse brain areas without regard to behavior. AutoLFADS uses distributed computing to train dozens of models simultaneously while evolutionary algorithms optimally tune HPs in a completely unsupervised way. AutoLFADS required 10-fold less data to uncover dynamics from macaque M1/PMd, with better generalization to unseen behavioral conditions than previous LFADS models. We then applied AutoLFADS to data from the somatosensory and dorsomedial frontal cortices, areas with very different dynamics from M1/PMd. AutoLFADS produced precise estimates of population dynamics without any prior knowledge of the areas’ dynamics, tasks, or subjects’ behaviors, outperforming any individually trained LFADS model obtained through random HP searches.
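
The hyperparameter search at the heart of this approach follows a population-based, exploit-and-explore scheme. The loop below is a simplified, hypothetical illustration of that idea only; the train_step and validation_loss functions are toy stand-ins rather than the real LFADS objective, and the actual framework distributes the work across many workers.

```python
# Toy sketch of a population-based, evolutionary hyperparameter search:
# train a population of models in parallel, periodically copy the weights of top
# performers into the worst performers (exploit) and perturb their HPs (explore).
import random
import copy

def train_step(weights, hps):
    # Stand-in for a few epochs of model training under the current HPs.
    return [w - hps["lr"] * w for w in weights]

def validation_loss(weights):
    # Stand-in for held-out reconstruction likelihood on spiking data.
    return sum(w * w for w in weights)

population = [{"weights": [random.uniform(-1, 1) for _ in range(4)],
               "hps": {"lr": 10 ** random.uniform(-3, -1),
                       "dropout": random.uniform(0.0, 0.5)}}
              for _ in range(8)]

for generation in range(20):
    for member in population:                      # in practice: parallel workers
        member["weights"] = train_step(member["weights"], member["hps"])
        member["loss"] = validation_loss(member["weights"])
    population.sort(key=lambda m: m["loss"])       # best (lowest loss) first
    n = len(population) // 4
    for loser in population[-n:]:
        winner = random.choice(population[:n])
        loser["weights"] = copy.deepcopy(winner["weights"])          # exploit
        loser["hps"] = {k: v * random.choice([0.8, 1.2])             # explore
                        for k, v in winner["hps"].items()}

best = min(population, key=lambda m: m["loss"])
print("best HPs:", best["hps"], "val loss:", round(best["loss"], 4))
```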

Finally, we present a cloud software package and comprehensive tutorials that enable new users to apply the method without needing dedicated computing resources.

Bio

Dr. Pandarinath joined the Coulter Department of Biomedical Engineering at Georgia Tech and Emory University in December 2016, after completing his PhD in Electrical Engineering at Cornell University and a postdoc in Electrical Engineering & Neurosurgery at Stanford University.

His work has spanned systems neuroscience and brain-machine interfaces across the visual and motor systems. Dr. Pandarinath’s lab centers on understanding how the brain represents information and intention, and on using this knowledge to develop high-performance, robust, and practical assistive devices for people with disabilities and neurological disorders. The lab takes a dynamical systems approach to characterizing the activity of large populations of neurons, combined with rigorous systems engineering (signal processing, machine learning, control theory, real-time system design), to advance the performance of brain-machine interfaces and neuromodulatory devices.

Dr. Pandarinath was the recipient of the Stanford Dean’s Fellowship and the Craig H. Neilsen Foundation Postdoctoral Fellowship in spinal cord injury research, and was a finalist for the 2015 Sammy Kuo Award in Neuroscience from the Stanford School of Medicine. His work has been funded by the Sloan Foundation (Research Fellow in Neuroscience), NSF, DARPA, and NIH.

Faculty host: Jochen Ditterich, PhD