780 research outputs found

    The impact of multisensory integration and perceptual load in virtual reality settings on performance, workload and presence

    Real-world experience is typically multimodal. Evidence indicates that the facilitation of multisensory stimulus detection is modulated by perceptual load, the amount of information involved in processing the stimuli. Here, we used a realistic virtual reality environment while concomitantly acquiring electroencephalography (EEG) and galvanic skin response (GSR) to investigate how multisensory signals affect target detection under high and low perceptual load. Different multimodal stimuli (auditory and vibrotactile) were presented, alone or in combination with the visual target. Results showed that only in the high load condition did multisensory stimuli significantly improve performance compared to visual stimulation alone. Multisensory stimulation also decreased the EEG-based workload. In contrast, perceived workload, assessed with the NASA Task Load Index questionnaire, was reduced only by the trimodal condition (i.e., visual, auditory, tactile). This trimodal stimulation was also more effective in enhancing the sense of presence, that is, the feeling of being in the virtual environment, than bimodal or unimodal stimulation. We also show that the GSR components are higher in the high load task than in the low load condition. Finally, the multimodal stimulations (visual-audio-tactile, VAT, and visual-audio, VA) induced a significant decrease in latency and a significant increase in amplitude of the P300 potentials relative to unimodal (visual) and bimodal visual-tactile stimulation, suggesting faster and more effective processing and detection of stimuli when auditory stimulation is included. Overall, these findings provide insights into the relationship between multisensory integration and human behavior and cognition.
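    The P300 comparison above comes down to a simple peak measurement on condition-averaged waveforms. The sketch below shows one common way to extract P300 amplitude and latency; the arrays, channel choice and 250-500 ms search window are illustrative assumptions, not parameters taken from the study.

```python
import numpy as np

def p300_peak(erp, times, window=(0.25, 0.50)):
    """Peak amplitude and latency of an averaged ERP within a post-stimulus
    search window (e.g. 250-500 ms, a typical range for the P300)."""
    mask = (times >= window[0]) & (times <= window[1])
    segment = erp[mask]
    idx = np.argmax(segment)                 # P300 is a positive deflection
    return segment[idx], times[mask][idx]

# Illustrative use with simulated condition-averaged ERPs (e.g. channel Pz).
times = np.linspace(-0.2, 0.8, 501)                          # seconds
erp_visual = 5.0 * np.exp(-((times - 0.40) ** 2) / 0.005)    # unimodal ERP
erp_trimodal = 7.0 * np.exp(-((times - 0.35) ** 2) / 0.005)  # VAT ERP

for name, erp in [("visual", erp_visual), ("trimodal", erp_trimodal)]:
    amp, lat = p300_peak(erp, times)
    print(f"{name}: {amp:.1f} µV at {lat * 1000:.0f} ms")
```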

    Multimodal Affect Recognition: Current Approaches and Challenges

    Many factors render multimodal affect recognition approaches appealing. First, humans employ a multimodal approach in emotion recognition. It is only fitting that machines, which attempt to reproduce elements of human emotional intelligence, employ the same approach. Second, the combination of multiple affective signals not only provides a richer collection of data but also helps alleviate the effects of uncertainty in the raw signals. Lastly, multimodal approaches potentially afford the flexibility to classify emotions even when one or more source signals cannot be acquired. However, the multimodal approach presents challenges pertaining to the fusion of individual signals, the dimensionality of the feature space, and the incompatibility of the collected signals in terms of time resolution and format. In this chapter, we explore the aforementioned challenges while presenting the latest scholarship on the topic. Hence, we first discuss the various modalities used in affect classification. Second, we explore the fusion of modalities. Third, we present publicly accessible multimodal datasets designed to expedite work on the topic by eliminating the laborious task of dataset collection. Fourth, we analyze representative works on the topic. Finally, we summarize the current challenges in the field and provide ideas for future research directions.
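    As a concrete illustration of the fusion challenge discussed above, feature-level ("early") fusion simply concatenates per-modality feature vectors before classification. The sketch below is a generic example with synthetic features and scikit-learn; the feature names, dimensions and classifier are assumptions for illustration, not a specific system from the literature.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples = 200
face = rng.normal(size=(n_samples, 30))      # e.g. facial action-unit features
voice = rng.normal(size=(n_samples, 20))     # e.g. prosodic features
physio = rng.normal(size=(n_samples, 10))    # e.g. GSR / heart-rate features
labels = rng.integers(0, 4, size=n_samples)  # four illustrative emotion classes

# Feature-level ("early") fusion: concatenate per-modality feature vectors.
fused = np.hstack([face, voice, physio])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, fused, labels, cv=5)
print(f"early-fusion cross-validated accuracy: {scores.mean():.2f}")
```

    A decision-level alternative would instead train one classifier per modality and combine their predicted class probabilities, which tolerates a missing modality more gracefully.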

    Predictive cognition in dementia: the case of music

    The clinical complexity and pathological diversity of neurodegenerative diseases impose immense challenges for diagnosis and the design of rational interventions. To address these challenges, there is a need to identify new paradigms and biomarkers that capture shared pathophysiological processes and can be applied across a range of diseases. One core paradigm of brain function is predictive coding: the processes by which the brain establishes predictions and uses them to minimise prediction errors, represented as the difference between predictions and actual sensory inputs. The processes involved in handling unexpected events and responding appropriately are vulnerable in common dementias but difficult to characterise. In my PhD work, I have exploited key properties of music – its universality, ecological relevance and structural regularity – to model and assess predictive cognition in patients representing major syndromes of frontotemporal dementia – non-fluent variant PPA (nfvPPA), semantic-variant PPA (svPPA) and behavioural-variant FTD (bvFTD) – and Alzheimer’s disease, relative to healthy older individuals. In my first experiment, I presented patients with well-known melodies containing no deviants or one of three types of deviant – acoustic (white-noise burst), syntactic (key-violating pitch change) or semantic (key-preserving pitch change). I assessed accuracy in detecting melodic deviants together with simultaneously recorded pupillary responses to these deviants. I used voxel-based morphometry to define neuroanatomical substrates for the behavioural and autonomic processing of these different types of deviant, and identified a posterior temporo-parietal network for the detection of basic acoustic deviants and a more anterior fronto-temporo-striatal network for the detection of syntactic pitch deviants. In my second chapter, I investigated the ability of patients to track the statistical structure of the same musical stimuli, using a computational model of the information dynamics of music to calculate the information content of deviants (unexpectedness) and the entropy of melodies (uncertainty). I related these information-theoretic metrics to deviant-detection performance and to ‘evoked’ and ‘integrative’ pupil reactivity to deviants and melodies respectively, and found neuroanatomical correlates in bilateral dorsal and ventral striatum, hippocampus, superior temporal gyri, right temporal pole and left inferior frontal gyrus. Together, chapters 3 and 4 revealed new hypotheses about the way FTD and AD pathologies disrupt the integration of prediction errors with predictions: a retained ability of AD patients to detect deviants at all levels of the hierarchy, with preserved autonomic sensitivity to the information-theoretic properties of musical stimuli; a generalized impairment of surprise detection and statistical tracking of musical information at both the cognitive and autonomic levels in svPPA patients, reflecting a diminished precision of predictions; the exact mirror profile in nfvPPA patients, with an abnormally high rate of false alarms and up-regulated pupillary reactivity to deviants, interpreted as over-precise or inflexible predictions accompanied by normal cognitive and autonomic probabilistic tracking of information; and impaired behavioural and autonomic reactivity to unexpected events with retained reactivity to environmental uncertainty in bvFTD patients. 
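    For reference, the two information-theoretic quantities used here have standard definitions: the information content (unexpectedness) of a note is the negative log-probability the model assigns to it given the preceding context, and the entropy (uncertainty) is the expected information content over the model's next-note distribution. A minimal sketch with a toy distribution follows; the probabilities are made up, and this is not the statistical model of melody used in the thesis.

```python
import numpy as np

def information_content(p_event):
    """Surprisal of the note that actually occurred: -log2 p(note | context)."""
    return -np.log2(p_event)

def entropy(distribution):
    """Uncertainty of the model's next-note distribution: -sum p * log2(p)."""
    p = np.asarray(distribution, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Toy distribution over five candidate continuations of a melody.
next_note = np.array([0.60, 0.20, 0.10, 0.05, 0.05])

print(f"entropy of the context:  {entropy(next_note):.2f} bits")
print(f"expected note (p=0.60):  {information_content(0.60):.2f} bits")
print(f"deviant note  (p=0.05):  {information_content(0.05):.2f} bits")
```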
Chapters 5 and 6 assessed the status of reward prediction error processing and its updating via actions in bvFTD. I created pleasant and aversive musical stimuli by manipulating chord progressions and used a classic reinforcement-learning paradigm in which participants chose the visual cue with the highest probability of obtaining a musical ‘reward’. bvFTD patients showed reduced sensitivity to the consequences of an action and a lower learning rate for aversive stimuli than for reward. These results correlated with neuroanatomical substrates in ventral and dorsal attention networks, dorsal striatum, parahippocampal gyrus and temporo-parietal junction. Deficits were governed by the level of environmental uncertainty, with normal learning dynamics in a structured and binarized environment but exacerbated deficits in noisier environments. Impaired choice accuracy in noisy environments correlated with measures of ritualistic and compulsive behavioural change, and abnormally reduced learning dynamics correlated with behavioural changes related to empathy and theory of mind. Together, these experiments represent the most comprehensive attempt to date to define the way neurodegenerative pathologies disrupt the perceptual, behavioural and physiological encoding of unexpected events in predictive coding terms.
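    The learning-rate finding above is grounded in a standard reinforcement-learning account in which a cue's value is updated by a reward prediction error scaled by a learning rate. The sketch below illustrates that update rule; the separate learning rates for rewarding and aversive outcomes, the contingencies and all numbers are assumptions for illustration, not the model actually fitted in chapters 5 and 6.

```python
import numpy as np

def update_value(value, outcome, alpha_reward=0.3, alpha_aversive=0.1):
    """Rescorla-Wagner-style update: V <- V + alpha * (outcome - V).
    Separate learning rates for rewarding vs. aversive outcomes are an
    illustrative assumption, mimicking a reduced learning rate for aversion."""
    prediction_error = outcome - value           # reward prediction error
    alpha = alpha_reward if outcome >= 0 else alpha_aversive
    return value + alpha * prediction_error

rng = np.random.default_rng(1)
value = 0.0
for trial in range(10):
    # +1 = pleasant chord progression, -1 = aversive one (80/20 contingency)
    outcome = 1.0 if rng.random() < 0.8 else -1.0
    value = update_value(value, outcome)
    print(f"trial {trial + 1:2d}: outcome {outcome:+.0f}, value {value:+.2f}")
```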

    Neuromarketing: a review of research and implications for marketing

    In this research, we reviewed existing studies that used neuromarketing techniques across various fields of research. The review revealed that most applications of neuromarketing have been in business research. This research provides important results on the use of neuromarketing techniques, their limitations, and their implications for marketing research. We hope that it will provide useful information about neuromarketing techniques and their applications, and help researchers conduct neuromarketing research with insight into state-of-the-art methods.

    Novel methods to evaluate blindsight and develop rehabilitation strategies for patients with cortical blindness

    20 to 57% of victims of a cerebrovascular accident (CVA) develop visual deficits that considerably reduce their quality of life. Among the extreme cases of visual deficits is cortical blindness (CB), which manifests when the primary visual region (V1) is affected. To date, no approach can induce restoration of visual function, and in most cases plasticity is insufficient to allow spontaneous recovery. Therefore, while sight loss is considered permanent, unconscious yet important functions, known as blindsight, could be of use for visual rehabilitation strategies, raising strong interest in cognitive neuroscience. Blindsight is a rare phenomenon that reflects a dissociation between performance and consciousness, mainly investigated in case reports. In the first chapter of this thesis, we address multiple issues in our comprehension of blindsight and conscious perception. As we argue, such understanding might have a significant influence on the clinical rehabilitation of patients suffering from CB. We therefore propose a unique strategy for visual rehabilitation that uses video game principles to target and potentiate neural mechanisms within the global neuronal workspace framework, which is theoretically explained in study 1 and methodologically described in study 5. In other words, we propose that case reports, in conjunction with improved methodological criteria, can identify the neural substrates that support blindsight and unconscious processing. The work in this thesis thus provides three empirical experiments (studies 2, 3, and 4) that use new standards in electrophysiological analysis to describe the cases of patient SJ, who presents blindsight for affective natural complex scenes, and patient ML, who presents blindsight for motion stimuli. In studies 2 and 3, we probed the subcortical and cortical neural substrates supporting SJ’s affective blindsight using MEG and compared these unconscious correlates to his conscious perception. Study 4 characterized the substrates of the automatic detection of changes in the absence of visual awareness, as measured by the visual mismatch negativity (vMMN), in ML and in a neurotypical group. We conclude by proposing the vMMN as a neural biomarker of unconscious processing in normal and altered vision, independent of behavioral assessments. These procedures allowed us to address certain open debates in the blindsight literature and to probe the existence of secondary neural pathways supporting unconscious behavior. In conclusion, this thesis proposes to combine empirical and clinical perspectives, using methodological advances and novel methods to understand and target the neurophysiological substrates underlying blindsight. Importantly, the framework offered by this doctoral dissertation might help future studies build efficient, targeted therapeutic tools and multimodal rehabilitation training.
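    The vMMN mentioned above is conventionally quantified as a difference wave, the averaged response to deviants minus the averaged response to standards, summarized in a posterior time window. A minimal sketch of that computation on simulated, already-epoched data follows; the array shapes and the 150-300 ms window are illustrative assumptions, not the parameters of study 4.

```python
import numpy as np

def vmmn(deviant_epochs, standard_epochs, times, window=(0.15, 0.30)):
    """Difference wave (deviant minus standard) and its mean amplitude in a
    posterior time window often used to quantify the visual MMN."""
    diff = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return diff, diff[mask].mean()

# Simulated single-channel epochs (trials x time points).
rng = np.random.default_rng(2)
times = np.linspace(-0.1, 0.5, 301)
standard = rng.normal(0.0, 1.0, size=(200, times.size))
deviant = (rng.normal(0.0, 1.0, size=(80, times.size))
           - 1.5 * np.exp(-((times - 0.22) ** 2) / 0.002))

difference_wave, mean_amplitude = vmmn(deviant, standard, times)
print(f"mean vMMN amplitude (150-300 ms): {mean_amplitude:.2f} a.u.")
```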

    How musical rhythms entrain the human brain: clarifying the neural mechanisms of sensory-motor entrainment to rhythms

    When listening to music, people across cultures tend to spontaneously perceive and move the body along a periodic pulse-like meter. Increasing evidence suggests that this ability is supported by neural mechanisms that selectively amplify periodicities corresponding to the perceived metric pulses. However, the nature of these neural mechanisms, i.e., the endogenous or exogenous factors that may selectively enhance meter periodicities in brain responses to rhythm, remains largely unknown. This question was investigated in a series of studies in which the electroencephalogram (EEG) of healthy participants was recorded while they listened to musical rhythms. From this EEG, selective contrast at meter periodicities in the elicited neural activity was captured using frequency-tagging, a method allowing direct comparison of this contrast between the sensory input, the EEG response, biologically plausible models of auditory subcortical processing, and behavioral output. The results show that the selective amplification of meter periodicities is shaped by a continuously updated combination of factors, including sound spectral content, long-term training and recent context, irrespective of attentional focus and beyond auditory subcortical nonlinear processing. Together, these observations demonstrate that perception of rhythm involves a number of processes that transform the sensory input via fixed low-level nonlinearities, but also through flexible mappings shaped by prior experience at different timescales. These higher-level neural mechanisms could represent a neurobiological basis for the remarkable flexibility and stability of meter perception relative to the acoustic input, which is commonly observed within and across individuals. Fundamentally, the current results add to the evidence that evolution has endowed the human brain with an extraordinary capacity to organize, transform, and interact with rhythmic signals to achieve adaptive behavior in a complex dynamic environment.
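    Frequency-tagging, as used in these studies, essentially means taking the amplitude spectrum of the EEG and comparing amplitudes at meter-related frequencies against other frequencies present in the stimulus. The sketch below illustrates the idea on a simulated signal; the 1.2 Hz meter rate, 2.4 Hz event rate and narrow-band averaging choice are assumptions for illustration, not the exact pipeline of the thesis.

```python
import numpy as np

fs = 250.0                                    # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                  # 60 s of simulated EEG
rng = np.random.default_rng(3)

# Simulated response with energy at a 2.4 Hz event rate and a 1.2 Hz meter rate.
eeg = (0.5 * np.sin(2 * np.pi * 2.4 * t)
       + 0.8 * np.sin(2 * np.pi * 1.2 * t)
       + rng.normal(0.0, 1.0, t.size))

amplitudes = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def amplitude_at(f, half_bw=0.05):
    """Mean spectral amplitude in a narrow band around frequency f (Hz)."""
    band = (freqs > f - half_bw) & (freqs < f + half_bw)
    return amplitudes[band].mean()

print(f"meter-related 1.2 Hz: {amplitude_at(1.2):.3f}")
print(f"event rate    2.4 Hz: {amplitude_at(2.4):.3f}")
print(f"control       1.7 Hz: {amplitude_at(1.7):.3f}")
```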

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
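    The spatial manipulation described above, shifting each rectangle along an imaginary spoke from fixation, is simply a radial displacement in polar coordinates. A minimal sketch of that geometry follows; positions are expressed in degrees of visual angle and all values are illustrative.

```python
import numpy as np

def shift_along_spoke(x, y, shift_deg):
    """Move a point radially with respect to central fixation (the origin)
    by `shift_deg` degrees of visual angle, keeping its polar angle."""
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    r_new = r + shift_deg
    return r_new * np.cos(theta), r_new * np.sin(theta)

# A rectangle centre at 5 deg eccentricity, shifted outward by 1 deg.
print(shift_along_spoke(3.0, 4.0, +1.0))   # -> approximately (3.6, 4.8)
```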

    Simple acoustic features can explain phoneme-based predictions of cortical responses to speech

    When we listen to speech, we have to make sense of a waveform of sound pressure. Hierarchical models of speech perception assume that, to extract semantic meaning, the signal is transformed into unknown, intermediate neuronal representations. Traditionally, studies of such intermediate representations are guided by linguistically defined concepts, such as phonemes. Here, we argue that in order to arrive at an unbiased understanding of the neuronal responses to speech, we should focus instead on representations obtained directly from the stimulus. We illustrate our view with a data-driven, information-theoretic analysis of a dataset of 24 young, healthy humans who listened to a 1 h narrative while their magnetoencephalogram (MEG) was recorded. We find that two recent results, the improved performance of an encoding model in which annotated linguistic and acoustic features were combined and the decoding of phoneme subgroups from phoneme-locked responses, can be explained by an encoding model that is based entirely on acoustic features. These acoustic features capitalize on acoustic edges and outperform Gabor-filtered spectrograms, which can explicitly describe the spectrotemporal characteristics of individual phonemes. By replicating our results in publicly available electroencephalography (EEG) data, we conclude that models of brain responses based on linguistic features can serve as excellent benchmarks. However, we believe that in order to further our understanding of human cortical responses to speech, we should also explore low-level and parsimonious explanations for apparent high-level phenomena.
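    The encoding models compared here are, at their core, regularized linear mappings from time-lagged stimulus features (such as an envelope or acoustic-edge representation) to the recorded MEG/EEG signal, evaluated by how well they predict held-out data. The sketch below shows such a lagged ridge-regression model on synthetic data; the feature construction, lag range and regularization strength are assumptions for illustration, not the exact models of the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
fs = 100                                     # Hz
n = fs * 120                                 # two minutes of data

envelope = np.abs(rng.normal(size=n))                              # toy acoustic envelope
edges = np.clip(np.diff(envelope, prepend=envelope[0]), 0, None)   # "acoustic edges": positive envelope change

# Time-lagged design matrix: 0-390 ms of stimulus history per sample.
lags = range(0, 40)
X = np.column_stack([np.roll(edges, lag) for lag in lags])         # circular shift is acceptable for a toy example

# Simulated neural response: a linear mixture of lagged edges plus noise.
meg = X @ rng.normal(size=len(lags)) + rng.normal(scale=2.0, size=n)

half = n // 2                                # train / test split in time
model = Ridge(alpha=1.0).fit(X[:half], meg[:half])
pred = model.predict(X[half:])
r = np.corrcoef(pred, meg[half:])[0, 1]
print(f"held-out prediction accuracy: r = {r:.2f}")
```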