
    Tradeoff between User Experience and BCI Classification Accuracy with Frequency Modulated Steady-State Visual Evoked Potentials

    Steady-state visual evoked potentials (SSVEPs) have been widely employed for the control of brain-computer interfaces (BCIs) because they are very robust, lead to high performance, and allow for a high number of commands. However, such flickering stimuli often also cause user discomfort and fatigue, especially when several light sources are used simultaneously. Different variations of SSVEP driving signals have been proposed to increase user comfort. Here, we investigate the suitability of frequency modulation of a high-frequency carrier for SSVEP-BCIs. We compared BCI performance and user experience between frequency-modulated (FM) and traditional sinusoidal (SIN) SSVEPs in an offline classification paradigm with four independently flickering light-emitting diodes that were overtly attended (fixated). While classification performance was slightly reduced with the FM stimuli, user comfort was significantly increased. Comparing the SSVEPs under covert attention to the stimuli (without fixation) was not possible, as no reliable SSVEPs were evoked. Our results reveal that several simultaneously flickering light-emitting diodes can be used to generate FM-SSVEPs with different frequencies, and the resulting occipital electroencephalography (EEG) signals can be classified with high accuracy. While the performance we report could be further improved with adjusted stimuli and algorithms, we argue that the increased comfort is an important result and suggest the use of FM stimuli for future SSVEP-BCI applications.
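As a rough illustration of the stimulus contrast studied here, the sketch below generates a traditional sinusoidal LED driving signal and a frequency-modulated one, in which a high-frequency carrier's instantaneous frequency oscillates at the target SSVEP frequency. All parameter values (target frequency, carrier frequency, modulation index) are illustrative assumptions, not the values used in the study.

```python
import numpy as np

fs = 1000                  # sample rate in Hz (illustrative)
t = np.arange(0.0, 2.0, 1.0 / fs)

f_target = 15.0            # intended SSVEP frequency (assumption)
f_carrier = 120.0          # high-frequency carrier (assumption)
beta = 1.0                 # modulation index (assumption)

# Traditional sinusoidal (SIN) driving signal for the LED intensity,
# scaled into the [0, 1] intensity range
sin_signal = 0.5 + 0.5 * np.sin(2 * np.pi * f_target * t)

# Frequency-modulated (FM) driving signal: the instantaneous frequency
# of the high-frequency carrier oscillates at the target frequency
fm_signal = 0.5 + 0.5 * np.sin(
    2 * np.pi * f_carrier * t + beta * np.sin(2 * np.pi * f_target * t)
)
```

Because the energy at the target frequency sits in the modulation of a fast carrier rather than in a directly visible low-frequency flicker, FM stimuli of this form are plausibly less fatiguing, consistent with the comfort ratings reported above.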

    Online Tracking of the Contents of Conscious Perception Using Real-Time fMRI

    Perception is an active process that interprets and structures the stimulus input based on assumptions about its possible causes. We use real-time functional magnetic resonance imaging (rtfMRI) to investigate a particularly powerful demonstration of dynamic object integration in which the same physical stimulus intermittently elicits categorically different conscious object percepts. In this study, we simulated an outline object moving behind a narrow slit. With such displays, the physically identical stimulus can elicit categorically different percepts that either correspond closely to the physical stimulus (vertically moving line segments) or represent a hypothesis about the underlying cause of the physical stimulus (a horizontally moving object that is partly occluded). In the latter case, the brain must construct an object from the input sequence. Combining rtfMRI with machine learning techniques, we show that it is possible to determine online the momentary state of a subject’s conscious percept from time-resolved BOLD activity. In addition, we found that feedback about the currently decoded percept increased the decoding rates compared to prior fMRI recordings of the same stimulus without feedback presentation. The analysis of the trained classifier revealed a brain network that discriminates contents of conscious perception with antagonistic interactions between early sensory areas that represent physical stimulus properties and higher-tier brain areas. During integrated object percepts, brain activity decreases in early sensory areas and increases in higher-tier areas. We conclude that it is possible to use BOLD responses to reliably track the contents of conscious visual perception with a relatively high temporal resolution. We suggest that our approach can also be used to investigate the neural basis of auditory object formation and discuss the results in the context of predictive coding theory.
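The decoding setup described above can be sketched in miniature: train a linear classifier on labelled volumes from an initial run, then classify each newly acquired volume as it arrives. The data below are synthetic stand-ins (random voxel patterns for two percept states), and the classifier choice is an assumption; the sketch shows the online-decoding logic only, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 200

# Synthetic training run: BOLD samples for two percept states
# (0 = moving line segments, 1 = integrated occluded object)
pattern = rng.normal(size=n_voxels)          # assumed state-1 activation pattern
X_train = rng.normal(size=(120, n_voxels)) + np.outer(
    np.repeat([0, 1], 60), pattern)
y_train = np.repeat([0, 1], 60)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Online" decoding: classify each newly acquired volume on arrival;
# the decoded label could then drive the feedback presentation
new_volume = rng.normal(size=n_voxels) + pattern   # state-1-like sample
decoded_state = int(clf.predict(new_volume[None, :])[0])
```

In the real-time setting, the per-volume prediction step is cheap (one dot product per class), so the temporal resolution of the decoding is limited by the acquisition rate rather than the classifier.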

    Recognizing Frustration of Drivers From Face Video Recordings and Brain Activation Measurements With Functional Near-Infrared Spectroscopy

    Experiencing frustration while driving can harm cognitive processing, result in aggressive behavior, and hence negatively influence driving performance and traffic safety. Being able to automatically detect frustration would allow adaptive driver assistance and automation systems to adequately react to a driver’s frustration and mitigate potential negative consequences. To identify reliable and valid indicators of driver frustration, we conducted two driving simulator experiments. In the first experiment, we aimed to reveal facial expressions that indicate frustration in continuous video recordings of the driver’s face taken while driving highly realistic simulator scenarios in which frustrated or non-frustrated emotional states were experienced. An automated analysis of facial expressions combined with multivariate logistic regression classification revealed that frustrated time intervals can be discriminated from non-frustrated ones with an accuracy of 62.0% (mean over 30 participants). A further analysis of the facial expressions revealed that frustrated drivers tend to activate muscles in the mouth region (chin raiser, lip pucker, lip pressor). In the second experiment, we measured cortical activation with almost whole-head functional near-infrared spectroscopy (fNIRS) while participants experienced frustrating and non-frustrating driving simulator scenarios. Multivariate logistic regression applied to the fNIRS measurements allowed us to discriminate between frustrated and non-frustrated driving intervals with a higher accuracy of 78.1% (mean over 12 participants). Frustrated driving intervals were indicated by increased activation in the inferior frontal, putative premotor, and occipito-temporal cortices. Our results show that facial and cortical markers of frustration can be informative for time-resolved driver state identification in complex realistic driving situations. The markers derived here can potentially be used as an input for future adaptive driver assistance and automation systems that detect driver frustration and adaptively react to mitigate it.
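The classification step described above — multivariate logistic regression on per-interval facial expression features, evaluated out of sample — can be sketched as follows. The data here are synthetic stand-ins for facial action unit intensities (with an injected mouth-region effect for "frustrated" intervals), so the accuracy is illustrative, not the 62.0% reported above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in for per-interval facial action unit intensities
# (e.g. chin raiser, lip pucker, lip pressor among the first features)
n_intervals, n_action_units = 200, 10
X = rng.normal(size=(n_intervals, n_action_units))
y = rng.integers(0, 2, size=n_intervals)      # 1 = frustrated interval
X[y == 1, :3] += 1.0                          # assumed mouth-region effect

# Multivariate logistic regression, evaluated with cross-validation
# so accuracy reflects unseen intervals
clf = LogisticRegression(max_iter=1000)
acc = float(cross_val_score(clf, X, y, cv=5).mean())
```

Inspecting the fitted coefficients (`clf.fit(X, y).coef_`) would then indicate which action units drive the discrimination, analogous to the mouth-region finding reported above.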

    Demonstrating Brain-Level Interactions Between Visuospatial Attentional Demands and Working Memory Load While Driving Using Functional Near-Infrared Spectroscopy

    Driving is a complex task concurrently drawing on multiple cognitive resources. Yet, there is a lack of studies investigating interactions at the brain level among different driving subtasks in dual-tasking. This study investigates how visuospatial attentional demands related to increased driving difficulty interact with different working memory load (WML) levels at the brain level. Using multichannel whole-head high-density functional near-infrared spectroscopy (fNIRS) brain activation measurements, we aimed to predict driving difficulty level, both separately for each WML level and with a combined model. Participants drove for approximately 60 min on a highway with concurrent traffic in a virtual reality driving simulator. For half of the time, the course led through a construction site with reduced lane width, increasing visuospatial attentional demands. Concurrently, participants performed a modified version of the n-back task with five different WML levels (from 0-back up to 4-back), forcing them to continuously update, memorize, and recall the sequence of the previous ‘n’ speed signs and adjust their speed accordingly. Using multivariate logistic ridge regression, we were able to correctly predict driving difficulty in 75.0% of the signal samples (1.955 Hz sampling rate) across 15 participants in an out-of-sample cross-validation of classifiers trained on fNIRS data separately for each WML level. There was a significant effect of the WML level on the driving difficulty prediction accuracies [range 62.2–87.1%; χ²(4) = 19.9, p < 0.001, Kruskal–Wallis H test], with the highest prediction rates at intermediate WML levels. In contrast, training one classifier on fNIRS data across all WML levels severely degraded prediction performance (mean accuracy of 46.8%). Activation changes in the bilateral dorsal frontal (putative BA46), bilateral inferior parietal (putative BA39), and left superior parietal (putative BA7) areas were most predictive of increased driving difficulty. These discriminative patterns diminished at higher WML levels, indicating that visuospatial attentional demands and WML involve interacting underlying brain processes. The changing pattern of driving-difficulty-related brain areas across WML levels could indicate potential changes in multitasking strategy with the level of WML demand, in line with multiple resource theory.
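The contrast between per-level and pooled classifiers can be illustrated with a small simulation. Ridge-style logistic regression here means l2-regularized logistic regression; the fNIRS data are synthetic, and the assumption that the difficulty-related channel pattern shifts with WML level (which is what makes the pooled model degrade) is an illustrative stand-in for the interaction reported above, not the authors' model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_channels, n_per_level = 50, 200

def simulate_level(level):
    """Synthetic fNIRS samples for easy (0) vs hard (1) driving; the
    discriminative channel subset is assumed to shift with WML level."""
    y = rng.integers(0, 2, size=n_per_level)
    X = rng.normal(size=(n_per_level, n_channels))
    X[y == 1, level * 5:(level + 1) * 5] += 1.0
    return X, y

data = [simulate_level(lvl) for lvl in range(5)]   # 0-back ... 4-back

# One l2-regularized ("ridge") logistic classifier per WML level
per_level_acc = float(np.mean([
    cross_val_score(LogisticRegression(C=1.0, max_iter=1000),
                    X, y, cv=5).mean()
    for X, y in data
]))

# A single classifier trained across all WML levels
X_all = np.vstack([X for X, _ in data])
y_all = np.concatenate([y for _, y in data])
pooled_acc = float(cross_val_score(
    LogisticRegression(C=1.0, max_iter=1000), X_all, y_all, cv=5).mean())
```

When the discriminative pattern changes across levels, the pooled linear model has to average over inconsistent patterns and loses accuracy relative to the per-level models, mirroring the 75.0% vs 46.8% contrast reported above.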

    AI-assisted ethics? Considerations of AI simulation for the ethical assessment and design of assistive technologies

    Current ethical debates on the use of artificial intelligence (AI) in healthcare treat AI as a product of technology in three ways: first, by assessing risks and potential benefits of currently developed AI-enabled products with ethical checklists; second, by proposing ex ante lists of ethical values seen as relevant for the design and development of assistive technology; and third, by promoting the use of moral reasoning by AI technology as part of the automation process. The dominance of these three perspectives in the discourse is demonstrated by a brief summary of the literature. Subsequently, we propose a fourth approach to AI, namely, as a methodological tool to assist ethical reflection. We provide a concept of an AI simulation informed by three separate elements: 1) stochastic human behavior models based on behavioral data for simulating realistic settings, 2) qualitative empirical data on value statements regarding internal policy, and 3) visualization components that aid in understanding the impact of changes in these variables. The potential of this approach is to inform an interdisciplinary field about anticipated ethical challenges or ethical trade-offs in concrete settings and, hence, to spark a re-evaluation of design and implementation plans. This may be particularly useful for applications that deal with extremely complex values and behavior or with limitations on the communication resources of affected persons (e.g., in dementia care or in the care of persons with cognitive impairment). Simulation does not replace ethical reflection but does allow for detailed, context-sensitive analysis during the design process and prior to implementation. Finally, we discuss the inherently quantitative methods of analysis afforded by stochastic simulations as well as the potential for ethical discussions, and how simulations with AI can improve traditional forms of thought experiments and future-oriented technology assessment.

    BOLD responses in human V1 to local structure in natural scenes: Implications for theories of visual coding

    In this study we tested predictions of two important theories of visual coding, contrast energy and sparse coding theory, on how population activity level and metabolic demands depend on the spatial structure of the visual input. With carefully calibrated displays we find that in humans neither the V1 blood oxygenation level dependent (BOLD) response nor the initial visually evoked fields in magnetoencephalography (MEG) are sensitive to phase perturbations in photographs of natural scenes. As a control, we quantitatively show that the applied phase perturbations decrease the sparseness (kurtosis) of our stimuli but preserve their root mean square (RMS) contrast. Importantly, we show that the lack of sensitivity of the V1 population response level to phase perturbations is not due to a lack of sensitivity of our methods, because V1 responses were highly sensitive to variations of image RMS contrast. Our results suggest that the transition from a sparse to a distributed neural code in the early visual system, induced by reducing image sparseness, has negligible consequences for population metabolic cost. This result imposes a novel and important empirical constraint on quantitative models of sparse coding: population metabolic rate and population activation level are sensitive to the second-order statistics (RMS contrast) of the input but not to its spatial phase and fourth-order statistics (kurtosis).
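The control analysis described above — phase perturbation preserves RMS contrast (a second-order statistic, fixed by the amplitude spectrum via Parseval's theorem) while reducing kurtosis (a fourth-order statistic) — can be sketched as follows. The "image" is a synthetic heavy-tailed stand-in rather than a calibrated natural scene photograph, and full phase randomization is used rather than the study's graded perturbations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for a sparse natural image: heavy-tailed pixels
img = rng.laplace(size=(64, 64))
img -= img.mean()

def rms_contrast(x):
    """Root mean square contrast of a luminance array."""
    return float(np.sqrt(np.mean((x - x.mean()) ** 2)))

def kurtosis(x):
    """Fourth standardized moment (3 for a Gaussian)."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4))

# Phase perturbation: keep the amplitude spectrum, replace the phase
# spectrum with that of a random real image (this keeps the inverse
# transform real, since the phases stay conjugate-symmetric)
amplitude = np.abs(np.fft.fft2(img))
random_phase = np.angle(np.fft.fft2(rng.normal(size=img.shape)))
scrambled = np.real(np.fft.ifft2(amplitude * np.exp(1j * random_phase)))
```

Because the amplitude spectrum is untouched, the scrambled image has the same RMS contrast as the original, while its pixel distribution moves toward Gaussian (kurtosis near 3), i.e. sparseness is reduced — exactly the dissociation the study exploits.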