Attracting Dynamics of Frontal Cortex Ensembles during Memory-Guided Decision-Making
A common theoretical view is that attractor-like properties of neuronal dynamics underlie cognitive processing. However, although often proposed theoretically, direct experimental support for the convergence of neural activity to stable population patterns as a signature of attracting states has so far been sparse, especially in higher cortical areas. Combining state space reconstruction theorems and statistical learning techniques, we were able to resolve details of anterior cingulate cortex (ACC) multiple single-unit activity (MSUA) ensemble dynamics during a higher cognitive task that were not previously accessible. The approach worked by constructing high-dimensional state spaces from delays of the original single-unit firing rate variables and the interactions among them, which were then statistically analyzed using kernel methods. We observed cognitive-epoch-specific neural ensemble states in ACC which were stable across many trials (in the sense of being predictive) and depended on behavioral performance. More interestingly, attracting properties of these cognitively defined ensemble states became apparent in high-dimensional expansions of the MSUA spaces due to a proper unfolding of the neural activity flow, with properties common across different animals. These results therefore suggest that ACC networks may process different subcomponents of higher cognitive tasks by transiting among different attracting states.
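The delay-coordinate construction the abstract describes (building a state space from time-lagged copies of the firing-rate variables) can be sketched as follows; the function name, parameters, and the simulated trace are illustrative assumptions, not the authors' actual code or data:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Build a delay-coordinate state space from a 1-D signal.

    Each row is a state vector [x(t), x(t+tau), ..., x(t+(dim-1)*tau)],
    the standard delay-embedding construction behind state space
    reconstruction theorems (Takens' theorem).
    """
    n = len(x) - (dim - 1) * tau  # number of complete state vectors
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Hypothetical example: embed a noisy simulated firing-rate trace.
rng = np.random.default_rng(0)
rate = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
states = delay_embed(rate, dim=3, tau=5)
print(states.shape)  # (490, 3)
```

In the paper's setting, each embedded trajectory would then be analyzed with kernel methods to test for convergence toward epoch-specific states; the sketch above covers only the embedding step.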
Deep representation learning: Fundamentals, Perspectives, Applications, and Open Challenges
Machine Learning algorithms have had a profound impact on the field of computer science over the past few decades. These algorithms' performance is greatly influenced by the representations derived from the data during the learning process. The representations learned in a successful learning process should be concise, discrete, meaningful, and applicable across a variety of tasks. A recent effort has been directed toward developing Deep Learning models, which have proven particularly effective at capturing high-dimensional, non-linear, and multi-modal characteristics. In this work, we discuss the principles and developments in learning representations and converting them into desirable applications. In addition, for each framework or model, the key issues and open challenges, as well as the advantages, are examined.
25th Annual Computational Neuroscience Meeting: CNS-2016
Abstracts of the 25th Annual Computational Neuroscience Meeting: CNS-2016. Seogwipo City, Jeju-do, South Korea. 2–7 July 2016