
    From neuronal populations to behavior: a computational journey

    Cognitive behaviors originate in the responses of neuronal populations. We have a reasonable understanding of how the activity of a single neuron relates to a specific behavior. However, it is still unclear how more complex behaviors are inferred from the responses of neuronal populations. This is a particularly timely problem because multi-neuronal recording techniques have recently become widely available, simultaneously spurring advances in the analysis of neuronal population data. These developments are, however, constrained by the challenges of combining theoretical and experimental approaches, because each approach has its own unique set of constraints. A solution to this problem is to design computational models that are either derived from or inspired by cortical computations.

    Predicting oculomotor behaviour from correlated populations of posterior parietal neurons

    Oculomotor function critically depends on how signals representing saccade direction and eye position are combined across neurons in the lateral intraparietal (LIP) area of the posterior parietal cortex. Here we show that populations of parietal neurons exhibit correlated variability, and that taking these interneuronal correlations into account yields oculomotor predictions that are both more accurate and less uncertain. The structure of LIP population responses is therefore essential for the reliable read-out of oculomotor behaviour.
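
    As an illustration of why such correlations can matter for read-out, the sketch below decodes a binary saccade direction with a Gaussian classifier that either models the interneuronal covariance or discards it. All data and tuning parameters here are synthetic stand-ins, not the recordings or the model of the study above.

    # Minimal sketch: decoding saccade direction from correlated population
    # activity, comparing a read-out that models interneuronal correlations
    # (full covariance) with one that ignores them (diagonal covariance).
    # Synthetic data; every parameter is illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons, n_trials = 20, 500
    base = rng.uniform(5, 20, n_neurons)
    means = {0: base, 1: base + rng.normal(0, 0.6, n_neurons)}
    A = rng.normal(0, 1, (n_neurons, n_neurons))
    cov = A @ A.T / n_neurons + np.eye(n_neurons)   # correlated variability

    def simulate(d, n):
        return rng.multivariate_normal(means[d], cov, size=n)

    train = {d: simulate(d, n_trials) for d in (0, 1)}
    test_X = np.vstack([simulate(d, n_trials) for d in (0, 1)])
    test_y = np.repeat([0, 1], n_trials)

    def loglik(X, mu, sigma):
        # Gaussian log-likelihood up to a constant, per trial (row of X)
        diff = X - mu
        _, logdet = np.linalg.slogdet(sigma)
        quad = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(sigma), diff)
        return -0.5 * (quad + logdet)

    for label, shape in [('full covariance', lambda S: S),
                         ('diagonal only', lambda S: np.diag(np.diag(S)))]:
        ll = np.column_stack([
            loglik(test_X, train[d].mean(0), shape(np.cov(train[d].T)))
            for d in (0, 1)])
        print(label, 'accuracy:', (ll.argmax(1) == test_y).mean())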

    Brain–machine interface for eye movements

    A number of studies in tetraplegic humans and healthy nonhuman primates (NHPs) have shown that neuronal activity from reach-related cortical areas can be used to predict reach intentions via brain–machine interfaces (BMIs), and can therefore assist tetraplegic patients by controlling external devices (e.g., robotic limbs and computer cursors). However, to our knowledge, no studies have applied BMIs to eye movement areas to decode intended eye movements. In this study, we recorded the activity of populations of neurons in the lateral intraparietal area (LIP), a cortical node in the NHP saccade system. Eye movement plans were predicted in real time using Bayesian inference from small ensembles of LIP neurons, without the animal making an eye movement. Learning, defined as an increase in prediction accuracy, occurred at the level of neuronal ensembles, particularly for difficult predictions. Population learning had two components: an update of the BMI's parameters based on its history, and a change in the responses of individual neurons. These results provide strong evidence that the responses of neuronal ensembles can be shaped with respect to a cost function, here the prediction accuracy of the BMI. Furthermore, eye movement plans could be decoded without the animals making any actual eye movements and could be used to control the position of a cursor on a computer screen. These findings show that BMIs for eye movements are promising aids for paralyzed patients.
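
    A minimal sketch of this style of Bayesian read-out, assuming independent Poisson spiking and hypothetical cosine tuning; both assumptions and all numbers are illustrative simplifications, not details taken from the study.

    # Sketch: Bayesian inference of an intended saccade target from the
    # spike counts of a small ensemble, under an independent-Poisson model
    # with made-up cosine tuning. Illustrative only.
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(1)
    n_neurons = 8
    targets = np.deg2rad(np.arange(0, 360, 45))       # 8 candidate targets
    preferred = rng.uniform(0, 2 * np.pi, n_neurons)  # preferred directions

    def rates(theta):
        # Half-wave-rectified cosine tuning, in spikes per planning epoch
        return 5 + 10 * np.clip(np.cos(theta - preferred), 0, None)

    def decode(counts):
        # Posterior over targets under a flat prior and Poisson likelihood
        logpost = np.array([poisson.logpmf(counts, rates(t)).sum()
                            for t in targets])
        post = np.exp(logpost - logpost.max())
        return post / post.sum()

    counts = rng.poisson(rates(targets[3]))   # one trial of planned activity
    posterior = decode(counts)
    print('decoded target:', np.rad2deg(targets[posterior.argmax()]), 'deg')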

    Inferring eye position from populations of lateral intraparietal neurons

    Understanding how the brain computes eye position is essential to unraveling high-level visual functions such as eye movement planning, coordinate transformations and the stability of spatial awareness. The lateral intraparietal area (LIP) is central to this process. However, despite decades of research, its contribution to the eye position signal remains controversial. LIP neurons have recently been reported to represent eye position inaccurately during saccadic eye movements, and too slowly to support a role in high-level visual functions. We addressed this issue by predicting eye position and saccade direction from the responses of populations of LIP neurons. We found that both signals were accurately predicted before, during and after a saccade. Moreover, the dynamics of these signals were fast enough to support their contribution to visual functions. These findings provide a principled understanding of the coding of information in populations of neurons within an important node of the cortical network for visual-motor behaviors.
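
    One simple way such a prediction can be set up is a least-squares linear read-out of eye position from the population response. The sketch below uses synthetic gain-field-like tuning; the tuning model and every parameter are assumptions made for illustration, not the study's model.

    # Sketch: linear read-out of horizontal eye position from population
    # firing rates with hypothetical linear gain-field tuning plus noise.
    import numpy as np

    rng = np.random.default_rng(2)
    n_neurons, n_trials = 30, 400
    gains = rng.normal(0, 1, n_neurons)        # eye-position sensitivity
    baselines = rng.uniform(5, 15, n_neurons)

    eye_pos = rng.uniform(-20, 20, n_trials)   # position in degrees
    rates = (baselines + np.outer(eye_pos, gains)
             + rng.normal(0, 2, (n_trials, n_neurons)))

    # Fit the decoder on half of the trials, evaluate on the other half
    X = np.column_stack([rates, np.ones(n_trials)])   # add an intercept
    train, test = slice(0, 200), slice(200, None)
    w, *_ = np.linalg.lstsq(X[train], eye_pos[train], rcond=None)
    rmse = np.sqrt(np.mean((X[test] @ w - eye_pos[test]) ** 2))
    print(f'test RMSE: {rmse:.2f} deg')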

    Decoding the activity of neuronal populations in macaque primary visual cortex

    Visual function depends on the accuracy of signals carried by visual cortical neurons. Combining information across neurons should improve this accuracy, because single-neuron activity is variable. We examined the reliability of information inferred from populations of simultaneously recorded neurons in macaque primary visual cortex. We considered a decoding framework that computes the likelihood of visual stimuli from a pattern of population activity by linearly combining neuronal responses, and tested this framework on orientation estimation and discrimination. We derived a simple parametric decoder that assumes neuronal independence, and a more sophisticated empirical decoder that learned the structure of the measured neuronal response distributions, including their correlated variability. The empirical decoder used the structure of these response distributions to outperform its parametric variant, indicating that this structure contains information critical for sensory decoding. These results show how neuronal responses can best be used to inform perceptual decision-making.
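
    The point that a likelihood can be computed by linearly combining responses holds exactly for independent Poisson neurons: with tuning curves f_i(theta), log P(r|theta) = sum_i [r_i log f_i(theta) - f_i(theta)] + const, which is linear in the spike counts r. The sketch below builds such a parametric linear decoder from hypothetical tuning curves; the paper's empirical decoder, which learns the measured response distributions, is not reproduced here.

    # Sketch: likelihood-based orientation decoding where the log-likelihood
    # is a linear function of the spike counts (independent-Poisson decoder
    # with made-up von Mises-like tuning). Illustrative only.
    import numpy as np

    rng = np.random.default_rng(3)
    n_neurons = 32
    thetas = np.deg2rad(np.arange(0, 180, 15))    # candidate orientations
    preferred = rng.uniform(0, np.pi, n_neurons)

    def tuning(theta):
        # Orientation tuning with period pi, peak rate ~17 spikes
        return 2 + 15 * np.exp(2 * (np.cos(2 * (theta - preferred)) - 1))

    # One set of linear weights and one offset per candidate orientation
    W = np.array([np.log(tuning(t)) for t in thetas])
    b = -np.array([tuning(t).sum() for t in thetas])

    r = rng.poisson(tuning(thetas[5]))            # one trial of counts
    loglik = W @ r + b                            # linear in the counts
    print('decoded:', np.rad2deg(thetas[loglik.argmax()]),
          'deg; true:', np.rad2deg(thetas[5]), 'deg')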

    Klassifikation und Merkmalsextraktion in Mensch und Maschine (Classification and Feature Extraction in Man and Machine)

    This dissertation attempts to shed new light on the mechanisms used by human subjects to extract features from visual stimuli and for their subsequent classification. A methodology combining human psychophysics and machine learning is introduced, where feature extractors are modeled using methods from unsupervised machine learning, whereas supervised machine learning is considered for classification. We consider a gender classification task using stimuli drawn from the Max Planck Institute face database. Once a feature extractor is chosen and the corresponding data representation is computed, the resulting feature vector is classified using a separating hyperplane (SH) between the classes. The behavioral responses of humans to one stimulus, in our study the gender estimate and its corresponding reaction time and confidence rating, are compared and correlated to the distance of the feature vector of this stimulus to the SH. It is successfully demonstrated that machine learning can be used as a novel method to "look into the human head" in an algorithmic way. In a first psychophysical classification experiment we note that a high classification error and a low confidence for humans are accompanied by a longer processing of information by the brain. Furthermore, a second classification experiment on the same stimuli, but in a different presentation order, confirms the consistency and the reproducibility of the subjects' responses. Using several classification algorithms from supervised machine learning, we show that separating hyperplanes (SHs) are a plausible model to describe classification of visual stimuli by humans, since stimuli represented by features distant from the SH are classified more accurately, faster and with higher confidence than the ones closer to the SH. A piecewise linear extension as in the K-means classifier seems, however, less adapted to model classification. Furthermore, the comparison of the classification algorithms indicates that the Support Vector Machine (SVM) and the Relevance Vector Machine (RVM), both exemplar-based classifiers, compare best to human classification performance and also exhibit the best man-machine correlations. The mean-of-class prototype learner, its popularity in neuroscience notwithstanding, is the least human-like classifier in all cases examined. These findings are corroborated by the stochastic nature of the human classification between the first and second classification experiments: elements close to the SH are subject to more jitter in the subjects' gender estimation than elements distant from the SH.

    The above classification studies also give a hint at the mechanisms responsible for the computation of the feature vector corresponding to a stimulus, in other words the feature extraction procedure, which is defined by the combination of a data type with a preprocessor. Gabor wavelet filters prove to be the most suited preprocessor when considering the image pixel data type. The biological realism of both Gabor wavelets and the image data confirms the validity of our approach. Alternatively, the information contained in the data type defined by the combination of the texture and the shape maps of each face, these maps bringing each face into spatial correspondence with a reference face, is also shown to be useful when describing the internal face representation of humans. Non-negative Matrix Factorization applied to the texture-and-shape data type is demonstrated to describe well the preprocessing of visual information in humans, and this has three implications. First, humans seem to use a basis of images to encode visual information, which may suggest that models such as kernel maps are less adapted since they do not use a basis to decompose (visual) data. Second, this basis seems to be part-based, in contrast to Principal Component Analysis, which yields a holistic basis. Third, this part-based basis is spatially not too sparse, excluding Independent Component Analysis. Both for the encodings and for the basis, a medium degree of sparseness is shown to be most adapted. Alternative approaches to model classification of visual stimuli by humans are subsequently introduced. In order to get novel insights into the metric of the human internal representation of faces, the above data are analyzed using logistic regression interpolations between the mean subjects' class estimate for a stimulus and the distance of this stimulus to the SH of each classifier. We show that a representation based upon the subjects' gender estimates is most appropriate, while the classification performance is demonstrated to be a poor measure when comparing man and machine. A novel psychophysical experiment is then designed where the hypotheses generated from machine learning are used to generate novel stimuli along a direction (the gender axis) orthogonal to the SH of each classifier. The study of the subjects' responses along these gender axes then allows us to infer the validity of the predictions given by machine learning. The results of this experiment (SVM and RVM are best while the prototype classifier is worst) validate the models given by machine learning and close the "psychophysics-machine learning" loop. We finally show in a psychophysical experiment that it is more difficult to cast concepts from machine learning into a formalism describing the memory mechanisms of humans. However, machine learning is demonstrated to be an appropriate model for feature extraction and classification of visual stimuli in humans, given the particular task we chose.
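
    As a sketch of the part-based versus holistic contrast described above, the snippet below decomposes a stand-in data matrix with Non-negative Matrix Factorization and with Principal Component Analysis, and scores each basis with Hoyer's sparseness index (1 for a single active element, 0 for a uniform vector). Random data replaces the texture-and-shape face representations, so the output only illustrates the measures, not the dissertation's findings.

    # Sketch: NMF (non-negative, typically part-like basis) versus PCA
    # (signed, holistic basis) on stand-in data, with a sparseness score.
    import numpy as np
    from sklearn.decomposition import NMF, PCA

    rng = np.random.default_rng(5)
    faces = rng.uniform(0, 1, (100, 64))   # stand-in for face feature vectors

    nmf = NMF(n_components=10, init='nndsvda', max_iter=500).fit(faces)
    basis_nmf = nmf.components_            # non-negative basis images

    pca = PCA(n_components=10).fit(faces)
    basis_pca = pca.components_            # signed, holistic basis

    def sparseness(v):
        # Hoyer (2004): (sqrt(n) - l1/l2) / (sqrt(n) - 1)
        n = v.size
        return (np.sqrt(n) - np.abs(v).sum() / np.linalg.norm(v)) / (np.sqrt(n) - 1)

    print('mean NMF basis sparseness:',
          np.mean([sparseness(b) for b in basis_nmf]))
    print('mean PCA basis sparseness:',
          np.mean([sparseness(b) for b in basis_pca]))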

    Insights from machine learning applied to human visual classification

    We attempt to understand visual classification in humans using both psychophysical and machine learning techniques. Frontal views of human faces were used for a gender classification task. Human subjects classified the faces, and their gender judgment, reaction time and confidence rating were recorded. Several hyperplane learning algorithms were applied to the same classification task, using the principal components of the texture and flowfield representation of the faces. The classification performance of the learning algorithms was estimated using the face database with the true gender of the faces as labels, and also with the gender estimated by the subjects. We then correlated the human responses with the distance of the stimuli to the separating hyperplane of the learning algorithms. Our results suggest that human classification can be modeled by some hyperplane algorithms in the feature space we used. For classification, the brain needs more processing for stimuli close to that hyperplane than for those further away.
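
    A compact sketch of this analysis pipeline, with synthetic feature vectors and simulated behavioral responses standing in for the face database and the subjects' data; the label-generation rule, the reaction-time model and every parameter are assumptions made for illustration.

    # Sketch: train a linear SVM on stand-in face features, take each
    # stimulus's signed distance to the separating hyperplane, and relate
    # it to simulated gender estimates and reaction times.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n_stimuli, n_features = 200, 20
    X = rng.normal(0, 1, (n_stimuli, n_features))
    gender = (X @ rng.normal(0, 1, n_features) > 0).astype(int)

    svm = SVC(kernel='linear').fit(X, gender)
    dist = svm.decision_function(X)        # signed distance (up to scaling)

    # Simulated behavior: noisier estimates and slower responses near the
    # hyperplane, mimicking the pattern reported above
    estimates = rng.binomial(1, 1 / (1 + np.exp(-2 * dist)))
    rt = 400 + 300 * np.exp(-np.abs(dist)) + rng.normal(0, 20, n_stimuli)

    # Logistic regression between hyperplane distance and gender estimates
    logreg = LogisticRegression().fit(dist.reshape(-1, 1), estimates)
    print('logistic slope:', logreg.coef_.ravel())
    print('corr(|distance|, RT):', np.corrcoef(np.abs(dist), rt)[0, 1])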
