
    Principles of sensorimotor control and learning in complex motor tasks

    The brain coordinates a continuous coupling between perception and action in the presence of uncertainty and incomplete knowledge about the world. This mapping is enabled by control policies, and motor learning can be viewed as the updating of such policies on the basis of improving performance given some task objectives. Despite substantial progress in computational sensorimotor control and empirical approaches to motor adaptation, it remains unclear to date how the brain learns motor control policies while updating its internal model of the world. In light of this challenge, we propose here a computational framework which employs error-based learning and exploits the brain’s inherent link between forward models and feedback control to compute dynamically updated policies. The framework merges optimal feedback control (OFC) policy learning with a steady system identification of task dynamics so as to explain behavior in complex object manipulation tasks. Its formalization encompasses our empirical findings that action is learned and generalised with regard to both a body-based and an object-based frame of reference. Importantly, our approach successfully predicts how the brain makes continuous decisions for the generation of complex trajectories in an experimental paradigm of unfamiliar task conditions. A complementary method proposes an expansion of the motor learning perspective from the level of policy optimisation to the level of policy exploration. It employs computational analysis to reverse engineer and subsequently assess the control process in a whole-body manipulation paradigm. Another contribution of this thesis is to associate motor psychophysics and computational motor control with their underlying neural foundation; a link which calls for further advancement in motor neuroscience and can inform our theoretical insight into sensorimotor processes in a context of physiological constraints.
To this end, we design, build and test an fMRI-compatible haptic object manipulation system to relate closed-loop motor control studies to neurophysiology. The system is clinically adjusted and employed to host a naturalistic object manipulation paradigm with healthy human subjects and Friedreich’s ataxia patients. We present methodology that elicits neuroimaging correlates of sensorimotor control and learning and extracts longitudinal neurobehavioral markers of disease progression (i.e. neurodegeneration). Our findings enhance the understanding of the sensorimotor control and learning mechanisms that underlie complex motor tasks. They furthermore provide a unified methodological platform to bridge the divide between behavior, computation and neural implementation, with promising clinical and technological implications (e.g. diagnostics, robotics, BMI).
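
The optimal feedback control (OFC) component described above can be sketched in miniature with a finite-horizon linear-quadratic regulator, whose time-varying feedback gains come from a backward Riccati recursion. The plant (a point mass with viscous drag), the cost weights, and the horizon below are illustrative assumptions, not the thesis's actual task dynamics:

```python
import numpy as np

def lqr_policy(A, B, Q, R, T):
    """Finite-horizon discrete LQR: backward Riccati recursion yields
    time-varying feedback gains L_t, so the policy is u_t = -L_t x_t."""
    P = Q.copy()
    gains = []
    for _ in range(T):
        # Gain minimizing the one-step cost plus the value function P
        L = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ L)
        gains.append(L)
    return gains[::-1]  # reorder so gains[t] applies at time t

# Point mass with viscous drag, Euler-discretized (hypothetical parameters)
dt, m, b = 0.01, 1.0, 0.1
A = np.array([[1.0, dt], [0.0, 1.0 - b * dt / m]])
B = np.array([[0.0], [dt / m]])
Q = np.diag([1.0, 0.1])   # penalize position error, mildly penalize velocity
R = np.array([[0.001]])   # cheap control effort
gains = lqr_policy(A, B, Q, R, T=100)

# Closed-loop rollout from an initial position error of 1
x = np.array([1.0, 0.0])
for L in gains:
    x = A @ x + B @ -(L @ x)
```

The feedback-gain structure is what links the policy to a forward model: the same system matrices A and B that predict sensory consequences also determine the gains, so re-identifying the task dynamics immediately updates the policy.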

    Decoding Continuous Variables from Neuroimaging Data: Basic and Clinical Applications

    The application of statistical machine learning techniques to neuroimaging data has allowed researchers to decode the cognitive and disease states of participants. The majority of studies using these techniques have focused on pattern classification to decode the type of object a participant is viewing, the type of cognitive task a participant is completing, or the disease state of a participant's brain. However, an emerging body of literature is extending these classification studies to the decoding of values of continuous variables (such as age, cognitive characteristics, or neuropsychological state) using high-dimensional regression methods. This review details the methods used in such analyses and describes recent results. We provide specific examples of studies which have used this approach to answer novel questions about age and cognitive and disease states. We conclude that while there is still much to learn about these methods, they provide useful information about the relationship between neural activity and age, cognitive state, and disease state, which could not have been obtained using traditional univariate analytical methods.
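
A minimal sketch of the high-dimensional regression approach the review describes: decoding a continuous variable (here, age) from many voxels with cross-validated ridge regression. The data are synthetic and every dimension, signal strength, and noise level is an assumption for illustration:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_voxels = 80, 500            # hypothetical dimensions
age = rng.uniform(18, 80, n_subjects)

# Synthetic "brain" data: a small set of voxels carries an age signal,
# the rest is noise — the regime where p >> n regression is needed.
W = np.zeros(n_voxels)
W[:20] = rng.normal(0, 1, 20)
X = np.outer(age, W) + rng.normal(0, 10, (n_subjects, n_voxels))

# Ridge regularization handles p >> n; alpha chosen by internal CV.
model = RidgeCV(alphas=np.logspace(-2, 4, 13))
pred = cross_val_predict(model, X, age, cv=5)
r = np.corrcoef(pred, age)[0, 1]
```

Out-of-fold predictions (rather than in-sample fit) are what make the decoding claim meaningful, mirroring the cross-validation practice the reviewed studies rely on.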

    Annotated Bibliography: Anticipation


    Applications of brain imaging methods in driving behaviour research

    Applications of neuroimaging methods have substantially contributed to the scientific understanding of human factors during driving by providing a deeper insight into the neuro-cognitive aspects of the driver's brain. This has been achieved by conducting simulated (and occasionally, field) driving experiments while collecting driver brain signals of certain types. Here, this body of studies is comprehensively reviewed at both macro and micro scales. Different themes of neuroimaging driving behaviour research are identified and the findings within each theme are synthesised. The surveyed literature has reported on applications of four major brain imaging methods: Functional Magnetic Resonance Imaging (fMRI), Electroencephalography (EEG), Functional Near-Infrared Spectroscopy (fNIRS) and Magnetoencephalography (MEG), with the first two being the most common methods in this domain. While collecting drivers' fMRI signals has been particularly instrumental in studying neural correlates of intoxicated driving (e.g. alcohol or cannabis) or distracted driving, the EEG method has predominantly been utilised in efforts aiming at the development of automatic fatigue/drowsiness detection systems, a topic in which the literature on the neuro-ergonomics of driving has shown a particular spike of interest within the last few years. The survey also reveals that topics such as driver brain activity in semi-automated settings or the brain activity of drivers with brain injuries or chronic neurological conditions have, by contrast, been investigated to a very limited extent. Further, potential topics in relation to driving behaviour are identified that could benefit from the adoption of neuroimaging methods in future studies.
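
The EEG fatigue/drowsiness detection work surveyed here typically rests on spectral features; one commonly used feature is a (theta + alpha)/beta band-power ratio, which rises as slow-wave activity increases. The sketch below computes it with Welch's method; the "alert" and "drowsy" traces are assumed toy sinusoids plus noise, not real EEG:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Mean power spectral density in [lo, hi) Hz, via Welch's method."""
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    mask = (f >= lo) & (f < hi)
    return pxx[mask].mean()

def drowsiness_index(x, fs):
    """(theta + alpha) / beta ratio — one common EEG fatigue feature."""
    theta = band_power(x, fs, 4, 8)
    alpha = band_power(x, fs, 8, 13)
    beta = band_power(x, fs, 13, 30)
    return (theta + alpha) / beta

# Synthetic stand-ins: beta-dominated "alert" vs theta-dominated "drowsy"
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
alert = np.sin(2 * np.pi * 20 * t) + 0.3 * rng.standard_normal(t.size)
drowsy = np.sin(2 * np.pi * 6 * t) + 0.3 * rng.standard_normal(t.size)
```

A real detection system would feed such features, computed over sliding windows and multiple channels, into a classifier; this sketch only shows why the ratio separates the two states.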

    Towards a learning fingerprint: new methods and paradigms for complex motor skill learning in fMRI

    Functional Magnetic Resonance Imaging (fMRI) research in sensorimotor learning focuses on two separate paradigms: (1) task-based (tfMRI), where brain changes are evaluated according to activity elicited by performance of the task, or (2) task-free, i.e., resting-state (rsfMRI), where changes are reflected in spontaneous, internally generated brain activity. While the former paradigm allows careful control and manipulation of the task, the latter allows unrestrained motor learning tasks to take place beyond the limitations of the scanner environment. Machine learning approaches attempting to model these two types of measurements together to explain physiological effects of learning have remained unexplored. Although these paradigms yield results showing considerable overlap between their topographical patterns, they are usually treated separately. Consequently, their relationship, and how or if any behaviorally relevant neural information processing mediates it, remains unclear. To resolve this ambiguity, new methodology was developed guided by questions of sensorimotor learning in motor tasks having dynamics completely specified mathematically. First, basic fMRI methodological considerations were made. Machine learning methods that claimed to predict individual tfMRI task maps from rsfMRI activity were improved. In reviewing previous methodology, most methods were found to underperform against trivial baseline model performances based on massive group averaging. New methods were developed that remedy this problem to a great extent. Benchmark comparisons and model evaluation metrics demonstrating previously unconsidered empirical properties of this predictive mapping were also developed. With these newly formed empirical observations, a relationship between individual prediction scores and behavioral performance measured during the task could be established.
Second, a complex motor learning task performed during an fMRI measurement was designed to relate learning effects observed in both types of measurements from a single longitudinal learning session. Participants measured while performing the task showed that they learn to exploit a property that drives brain activity in certain regions towards a state requiring less active control and error correction. Reconfiguration of functional activity in task-evoked and task-free activity arising from these behavioral learning effects was investigated, applying the methodology developed earlier in an attempt to relate them. Predictions of individual task-evoked responses from rsfMRI provide a relative measure of dependence but remain limited for reasons understood from the methodological study. No rsfMRI reconfiguration due to learning was detected, yet changes over the course of learning in task-evoked activity appear significant. Increasing recruitment of the Default Mode Network (DMN) during the task explains these changes. These results suggest that only minimal cortical reconfiguration suggestive of plasticity effects is needed to find task solutions in a passively stable space.
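
The group-averaging baseline problem noted in this abstract can be made concrete: when individual task maps share a strong group topography, a "predictor" that outputs nothing but the group average already correlates highly with every subject's map, so individual predictions must be benchmarked against it. A toy numpy illustration, with all map sizes and noise levels assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_vox = 20, 1000
group = rng.normal(0, 1, n_vox)                  # topography shared by everyone
indiv = 0.5 * rng.normal(0, 1, (n_subj, n_vox))  # smaller individual deviations
true_maps = group + indiv

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# A "model" that only ever returns the group average still scores highly
# on each subject — the trivial baseline any individual prediction must beat.
group_mean = true_maps.mean(axis=0)
baseline = np.mean([corr(group_mean, m) for m in true_maps])
```

Because the shared component dominates, `baseline` lands well above chance without the model encoding anything individual, which is exactly why subject-specific evaluation metrics are needed.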

    A sensorimotor account of visual attention in natural behaviour

    The real-world sensorimotor paradigm is based on the premise that sufficient ecological complexity is a prerequisite for inducing naturally relevant sensorimotor relations in the experimental context. The aim of this thesis is to embed visual attention research within the real-world sensorimotor paradigm using an innovative mobile gaze-tracking system (EyeSeeCam, Schneider et al., 2009). Common laboratory set-ups in the field of attention research fail to create a natural two-way interaction between observer and situation because they deliver pre-selected stimuli and the human observer is essentially neutral or passive. EyeSeeCam, by contrast, permits an experimental design whereby the observer freely and spontaneously engages in real-world situations. By aligning a video camera in real time to the movements of the eyes, the system directly measures the observer’s perspective in a video recording and thus allows us to study vision in the context of authentic human behaviour, namely as resulting from past actions and as originating future actions. The results of this thesis demonstrate that (1) humans, when freely exploring natural environments, prefer directing their attention to local structural features of the world, (2) the eyes, head and body perform distinct functions throughout this process, and (3) coordinated eye and head movements do not fully stabilize but rather continuously adjust the retinal image, even during periods of quasi-stable “fixation”. These findings validate and extend the common laboratory concept of feature salience within whole-body sensorimotor actions outside the laboratory. Head and body movements roughly orient gaze, potentially driven by early stages of processing. The eyes then fine-tune the direction of gaze, potentially during higher-level stages of visual-spatial behaviour (Studies 1 and 2).
Additional head-centred recordings reveal distinctive spatial biases both in the visual stimulation and in the spatial allocation of gaze generated in a particular real-world situation. These spatial structures may result both from the environment and from the idiosyncrasies of the natural behaviour afforded by the situation. By contrast, when the head-centred videos are re-played as stimuli in the laboratory, gaze directions reveal a bias towards the centre of the screen. This “central bias” is likely a consequence of the laboratory set-up, with its limitation to eye-in-head movements and its restricted screen (Study 3). Temporal analysis of natural visual behaviour reveals frequent synergistic interactions of eye and head that direct rather than stabilize gaze in the quasi-stable eye movement periods following saccades, leading to rich temporal dynamics of real-world retinal input (Study 4) typically not addressed in laboratory studies. Direct comparison to earlier data on the visual system of cats (CatCam), frequently taken as a proxy for human vision, shows that stabilizing eye movements play an even less dominant role in the natural behaviour of cats. This highlights the importance of realistic temporal dynamics of vision for models and experiments (Study 5). The approach and findings presented in this thesis demonstrate the need for and feasibility of real-world research on visual attention. Real-world paradigms permit the identification of relevant features triggered in the natural interplay between internal-physiological and external-situational sensorimotor factors. Realistic spatial and temporal characteristics of eye, head and body interactions are essential qualitative properties of reliable sensorimotor models of attention but are difficult to obtain under laboratory conditions.
Taken together, the data and theory presented in this thesis suggest that visual attention does not represent a pre-processing stage of object recognition but rather is an integral component of embodied action in the real world.

    The neuro-cognitive representation of word meaning resolved in space and time.

    One of the core human abilities is that of interpreting symbols. Prompted with a perceptual stimulus devoid of any intrinsic meaning, such as a written word, our brain can access a complex multidimensional representation, called semantic representation, which corresponds to its meaning. Notwithstanding decades of neuropsychological and neuroimaging work on the cognitive and neural substrate of semantic representations, many questions are left unanswered. The research in this dissertation attempts to unravel one of them: are the neural substrates of different components of concrete word meaning dissociated? In the first part, I review the different theoretical positions and empirical findings on the cognitive and neural correlates of semantic representations. I highlight how recent methodological advances, namely the introduction of multivariate methods for the analysis of distributed patterns of brain activity, broaden the set of hypotheses that can be empirically tested. In particular, they allow the exploration of the representational geometries of different brain areas, which is instrumental to the understanding of where and when the various dimensions of the semantic space are activated in the brain. Crucially, I propose an operational distinction between motor-perceptual dimensions (i.e., those attributes of the objects referred to by the words that are perceived through the senses) and conceptual ones (i.e., the information that is built via a complex integration of multiple perceptual features). In the second part, I present the results of the studies I conducted in order to investigate the automaticity of retrieval, topographical organization, and temporal dynamics of motor-perceptual and conceptual dimensions of word meaning. 
First, I show how the representational spaces retrieved with different behavioral and corpora-based methods (i.e., Semantic Distance Judgment, Semantic Feature Listing, WordNet) appear to be highly correlated and overall consistent within and across subjects. Second, I present the results of four priming experiments suggesting that perceptual dimensions of word meaning (such as implied real-world size and sound) are recovered in an automatic but task-dependent way during reading. Third, thanks to a functional magnetic resonance imaging experiment, I show a representational shift along the ventral visual path: from perceptual features, preferentially encoded in primary visual areas, to conceptual ones, preferentially encoded in mid and anterior temporal areas. This result indicates that complementary dimensions of the semantic space are encoded in a distributed yet partially dissociated way across the cortex. Fourth, by means of a study conducted with magnetoencephalography, I present evidence of an early (around 200 ms after stimulus onset) simultaneous access to both motor-perceptual and conceptual dimensions of the semantic space thanks to different aspects of the signal: inter-trial phase coherence appears to be key for the encoding of perceptual dimensions, while spectral power changes appear to support the encoding of conceptual ones. These observations suggest that the neural substrates of different components of symbol meaning can be dissociated in terms of localization and of the feature of the signal encoding them, while sharing a similar temporal evolution.
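
Comparing representational geometries across spaces, as described above, is typically done with representational similarity analysis (RSA): compute a representational dissimilarity matrix (RDM) per space and rank-correlate their condensed upper triangles. The sketch below uses synthetic "semantic" and "neural" spaces whose relationship is an assumption for illustration:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_words, n_feat = 30, 50
# A hypothetical feature-based semantic space (e.g. from feature listing)
semantic = rng.normal(0, 1, (n_words, n_feat))
# A "neural" space assumed linearly related to it, plus measurement noise
neural = semantic @ rng.normal(0, 1, (n_feat, 200))
neural += 0.5 * rng.normal(0, 1, neural.shape)

# RDMs: condensed pairwise correlation distances between word patterns
rdm_sem = pdist(semantic, metric="correlation")
rdm_neu = pdist(neural, metric="correlation")

# Spearman correlation between geometries — the core RSA statistic
rho, _ = spearmanr(rdm_sem, rdm_neu)
```

Working at the level of RDMs is what lets behavioral, corpus-based, fMRI, and MEG spaces be compared despite having entirely different native dimensionalities.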

    Decoding the consumer’s brain: Neural representations of consumer experience

    Understanding consumer experience – what consumers think about brands, how they feel about services, whether they like certain products – is crucial to marketing practitioners. ‘Neuromarketing’, as the application of neuroscience in marketing research is called, has generated excitement with the promise of understanding consumers’ minds by probing their brains directly. Recent advances in neuroimaging analysis leverage machine learning and pattern classification techniques to uncover patterns from neuroimaging data that can be associated with thoughts and feelings. In this dissertation, I measure brain responses of consumers by functional magnetic resonance imaging (fMRI) in order to ‘decode’ their mind. In three different studies, I have demonstrated how different aspects of consumer experience can be studied with fMRI recordings. First, I study how consumers think about brand image by comparing their brain responses during passive viewing of visual templates (photos depicting various social scenarios) to those during active visualizing of a brand’s image. Second, I use brain responses during viewing of affective pictures to decode emotional responses during watching of movie-trailers. Lastly, I examine whether marketing videos that evoke s
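
The second study's strategy, training a decoder on brain responses to one class of stimuli (affective pictures) and testing whether it transfers to another (movie-trailer viewing), can be sketched as cross-decoding. All dimensions, the shared "emotion axis", and the signal/noise levels below are synthetic assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_vox = 300
w = rng.normal(0, 1, n_vox)  # assumed emotion axis shared across stimulus types

def sample(n, scale=2.0):
    """Simulate n trials: class means at ±scale along w, plus voxel noise."""
    y = rng.integers(0, 2, n)
    X = np.outer(2 * y - 1, w) * scale + rng.normal(0, 8, (n, n_vox))
    return X, y

X_pic, y_pic = sample(100)   # affective-picture trials (training set)
X_mov, y_mov = sample(60)    # movie-trailer trials (transfer test set)

# Train on pictures, test on trailers: above-chance transfer implies a
# stimulus-general neural representation of the emotional response.
clf = LogisticRegression(max_iter=1000).fit(X_pic, y_pic)
transfer_acc = clf.score(X_mov, y_mov)
```

The transfer test, not the within-set accuracy, carries the substantive claim: it only succeeds if the two stimulus classes evoke a shared representation.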

    Generative Embedding for Model-Based Classification of fMRI Data

    Decoding models, such as those underlying multivariate classification algorithms, have been increasingly used to infer cognitive or clinical brain states from measures of brain activity obtained by functional magnetic resonance imaging (fMRI). The practicality of current classifiers, however, is restricted by two major challenges. First, due to the high data dimensionality and low sample size, algorithms struggle to separate informative from uninformative features, resulting in poor generalization performance. Second, popular discriminative methods such as support vector machines (SVMs) rarely afford mechanistic interpretability. In this paper, we address these issues by proposing a novel generative-embedding approach that incorporates neurobiologically interpretable generative models into discriminative classifiers. Our approach extends previous work on trial-by-trial classification for electrophysiological recordings to subject-by-subject classification for fMRI and offers two key advantages over conventional methods: it may provide more accurate predictions by exploiting discriminative information encoded in ‘hidden’ physiological quantities such as synaptic connection strengths; and it affords mechanistic interpretability of clinical classifications. Here, we introduce generative embedding for fMRI using a combination of dynamic causal models (DCMs) and SVMs. We propose a general procedure of DCM-based generative embedding for subject-wise classification, provide a concrete implementation, and suggest good-practice guidelines for unbiased application of generative embedding in the context of fMRI. We illustrate the utility of our approach by a clinical example in which we classify moderately aphasic patients and healthy controls using a DCM of thalamo-temporal regions during speech processing. 
Generative embedding achieves a near-perfect balanced classification accuracy of 98% and significantly outperforms conventional activation-based and correlation-based methods. This example demonstrates how disease states can be detected with very high accuracy and, at the same time, be interpreted mechanistically in terms of abnormalities in connectivity. We envisage that future applications of generative embedding may provide crucial advances in dissecting spectrum disorders into physiologically more well-defined subgroups.
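
The generative-embedding idea, fitting an interpretable generative model per subject and classifying on the fitted parameters rather than on raw activity, can be caricatured with a toy AR(1) model standing in for the DCM. The two groups, their dynamics parameters, and the region count below are invented for illustration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

def ar1_embed(ts):
    """Toy generative embedding: fit an AR(1) coefficient per region and
    use the fitted parameters as the subject's feature vector."""
    return np.array([np.polyfit(r[:-1], r[1:], 1)[0] for r in ts])

def simulate(phi, n_regions=4, n_t=200):
    """Simulate per-region time series with region-specific AR(1) dynamics."""
    ts = np.zeros((n_regions, n_t))
    for r in range(n_regions):
        for t in range(1, n_t):
            ts[r, t] = phi[r] * ts[r, t - 1] + rng.standard_normal()
    return ts

# Two hypothetical groups differing in their regional dynamics
phi_a = np.array([0.2, 0.3, 0.2, 0.3])   # "controls"
phi_b = np.array([0.6, 0.7, 0.6, 0.7])   # "patients"
X = np.array([ar1_embed(simulate(phi_a if i < 20 else phi_b))
              for i in range(40)])
y = np.array([0] * 20 + [1] * 20)

# Classify subjects in the low-dimensional model-parameter space
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
```

The embedding step replaces thousands of voxels with a handful of model parameters, which is both why generalization improves and why the resulting classifier stays mechanistically interpretable: each feature is a named quantity of the generative model.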