
    Assessment of mental workload across cognitive tasks using a passive brain-computer interface based on mean negative theta-band amplitudes

    Brain-computer interfaces (BCIs) can provide real-time, continuous assessments of mental workload in different scenarios, which can subsequently be used to optimize human-computer interaction. However, assessing mental workload is complicated by the task-dependent nature of the underlying neural signals: classifiers trained on data from one task do not generalize well to other tasks, and previous attempts at classifying mental workload across different cognitive tasks have therefore only been partially successful. Here we introduce a novel algorithm to extract frontal theta oscillations from electroencephalographic (EEG) recordings of brain activity and show that it can be used to detect mental workload across different cognitive tasks. We use a published data set that investigated subject-dependent task transfer based on Filter Bank Common Spatial Patterns. Our approach enables a binary classification of mental workload with accuracies of 92.00% and 92.35% for low and high workload, respectively, versus an initial no-workload condition, significantly better than the previous approach. It does not, however, perform above chance level when comparing high versus low workload conditions. Furthermore, when independent component analysis was applied to the data before any additional preprocessing, classification results were more stable and above chance level across all tasks, but did not exceed those of the previous approach. These mixed results illustrate that while the proposed algorithm cannot replace previous general-purpose classification methods, it may outperform state-of-the-art algorithms in specific (workload) comparisons.
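
    As a rough illustration of the kind of pipeline described above, the following Python sketch extracts mean frontal theta-band (4-7 Hz) power per epoch and feeds it to a linear classifier. The channel indices, sampling rate, and choice of LDA are assumptions for illustration, not the authors' exact algorithm.

        # Illustrative sketch: frontal theta band power as a workload feature.
        # Channel indices, sampling rate, and the LDA classifier are assumptions,
        # not the paper's exact pipeline.
        import numpy as np
        from scipy.signal import welch
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        FS = 250             # sampling rate in Hz (assumed)
        FRONTAL = [0, 1, 2]  # indices of frontal channels, e.g. Fz/F3/F4 (assumed)

        def theta_power(epochs):
            """Mean 4-7 Hz power over frontal channels.

            epochs: array of shape (n_epochs, n_channels, n_samples).
            Returns an (n_epochs, 1) feature matrix.
            """
            freqs, psd = welch(epochs[:, FRONTAL, :], fs=FS, nperseg=FS)
            theta = (freqs >= 4) & (freqs <= 7)
            return psd[..., theta].mean(axis=(1, 2)).reshape(-1, 1)

        # Train on one task, test on another (subject-dependent task transfer).
        # X_task_a / X_task_b are epoch arrays, y_* are labels (0 = no load, 1 = load).
        # clf = LinearDiscriminantAnalysis().fit(theta_power(X_task_a), y_task_a)
        # accuracy = clf.score(theta_power(X_task_b), y_task_b)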

    Ozone deposition impact assessments for forest canopies require accurate ozone flux partitioning on diurnal timescales

    Dry deposition is an important sink of tropospheric ozone that affects surface concentrations and impacts crop yields, the land carbon sink, and the terrestrial water cycle. Dry deposition pathways include plant uptake via stomata and non-stomatal removal by soils, leaf surfaces, and chemical reactions. Observational studies indicate that ozone deposition exhibits substantial temporal variability that is not reproduced by atmospheric chemistry models, owing to their simplified representation of vegetation uptake processes. In this study, we explore the importance of stomatal and non-stomatal uptake processes in driving ozone dry deposition variability on diurnal to seasonal timescales. Specifically, we compare two land surface ozone uptake parameterizations, a commonly applied big-leaf parameterization (W89; Wesely, 1989) and a multi-layer model (MLC-CHEM) constrained with observations, against multi-year ozone flux observations at two European measurement sites (Ispra, Italy, and Hyytiälä, Finland). We find that W89 cannot reproduce the diurnal cycle in ozone deposition due to a misrepresentation of stomatal and non-stomatal sinks at our two study sites, while MLC-CHEM accurately reproduces the different sink pathways. Evaluation of non-stomatal uptake further corroborates the previously identified roles of wet-leaf uptake in the morning under humid conditions and soil uptake under warm conditions. The misrepresentation of stomatal versus non-stomatal uptake in W89 results in an overestimation of growing-season cumulative ozone uptake (CUO), a metric used in assessments of vegetation ozone damage, by 18% (Ispra) and 28% (Hyytiälä), while MLC-CHEM reproduces CUO within 7% of the observation-inferred values. Our results indicate the need to accurately describe the partitioning of the ozone atmosphere-biosphere flux over the in-canopy stomatal and non-stomatal loss pathways to provide more confidence in atmospheric chemistry model simulations of surface ozone mixing ratios and deposition fluxes for large-scale vegetation ozone impact assessments.
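
    The cumulative ozone uptake (CUO) metric mentioned above is, in essence, the time integral of the stomatal ozone flux over the growing season. Below is a minimal Python sketch, assuming half-hourly stomatal conductance and ozone concentration series with hypothetical variable names; the actual W89 and MLC-CHEM parameterizations compute the stomatal flux in far more detail.

        # Minimal sketch of growing-season cumulative ozone uptake (CUO).
        # Variable names, units, and the half-hourly timestep are assumptions.
        import numpy as np

        DT = 1800.0  # timestep in seconds (half-hourly, assumed)

        def cuo(g_sto, o3):
            """Cumulative ozone uptake in mmol O3 m-2.

            g_sto: stomatal conductance to ozone, m s-1
            o3:    ambient ozone concentration, nmol m-3
            Both are 1-D time series spanning the growing season.
            """
            flux = g_sto * o3                    # stomatal ozone flux, nmol m-2 s-1
            return np.nansum(flux) * DT * 1e-6   # integrate; nmol -> mmol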

    Tracing Pilots’ Situation Assessment by Neuroadaptive Cognitive Modeling

    This study presents the integration of a passive brain-computer interface (pBCI) and cognitive modeling as a method to trace pilots' perception and processing of auditory alerts and messages during operations. Missed alerts on the flight deck can result in out-of-the-loop problems that can lead to accidents. By tracing pilots' perception of and responses to alerts, cognitive assistance can be provided based on individual needs to ensure that adequate situation awareness is maintained. We present data from 24 participating aircrew in a simulated flight study that included multiple alerts and air traffic control messages in a single-pilot setup. A classifier was trained to identify pilots' neurophysiological reactions to alerts and messages from their electroencephalogram (EEG). A neuroadaptive ACT-R model using this EEG data was compared to a conventional normative model regarding accuracy in representing individual pilots. Results show that the passive BCI can distinguish, based on the recorded EEG, between alerts that are processed by the pilot as task-relevant or irrelevant in the cockpit. Integrating this data, the neuroadaptive model represented individual pilots' responses to alerts and messages with 87% overall accuracy, significantly higher than the 72% accuracy of a normative model that did not consider EEG data. We conclude that neuroadaptive technology allows for implicit measurement and tracing of pilots' perception and processing of alerts on the flight deck. Careful handling of the uncertainties inherent to passive BCI and cognitive modeling shows how the representation of pilot cognitive states can be improved iteratively to provide assistance.
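
    A common pBCI pipeline for this kind of single-trial alert classification uses windowed ERP mean amplitudes with shrinkage-regularized LDA; the Python sketch below illustrates that general approach under assumed window choices and array shapes, not the study's exact parameters.

        # Illustrative sketch: classifying task-relevant vs. irrelevant alerts from
        # single-trial EEG via windowed ERP mean amplitudes and shrinkage LDA.
        # Windows, sampling rate, and labels are assumptions for illustration.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        FS = 250  # sampling rate in Hz (assumed)
        # Post-stimulus windows in seconds, covering typical P3 latencies (assumed).
        WINDOWS = [(0.20, 0.30), (0.30, 0.40), (0.40, 0.55)]

        def erp_features(epochs):
            """Mean amplitude per channel and window.

            epochs: (n_trials, n_channels, n_samples), time-locked to alert onset.
            Returns (n_trials, n_channels * n_windows).
            """
            feats = [epochs[:, :, int(a * FS):int(b * FS)].mean(axis=2)
                     for a, b in WINDOWS]
            return np.concatenate(feats, axis=1)

        clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
        # clf.fit(erp_features(train_epochs), train_labels)  # 0 = irrelevant, 1 = relevant
        # p_relevant = clf.predict_proba(erp_features(test_epochs))[:, 1]
        # This probability could then inform the neuroadaptive ACT-R model's alert trace.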

    Can a “state of the art” chemistry transport model simulate Amazonian tropospheric chemistry?

    We present an evaluation of a nested high-resolution Goddard Earth Observing System (GEOS)-Chem chemistry transport model simulation of tropospheric chemistry over tropical South America. The model has been constrained with two isoprene emission inventories: (1) the canopy-scale Model of Emissions of Gases and Aerosols from Nature (MEGAN) and (2) a leaf-scale algorithm coupled to the Lund-Potsdam-Jena General Ecosystem Simulator (LPJ-GUESS) dynamic vegetation model. The model has also been run using two different chemical mechanisms that contain alternative treatments of isoprene photo-oxidation. Large differences of up to 100 Tg C yr^(−1) exist between the isoprene emissions predicted by each inventory, with MEGAN emissions generally higher. Based on our simulations we estimate that tropical South America (30–85°W, 14°N–25°S) contributes about 15–35% of total global isoprene emissions. We have quantified the model sensitivity to changes in isoprene emissions, chemistry, boundary layer mixing, and soil NO_x emissions using ground-based and airborne observations. We find GEOS-Chem has difficulty reproducing several observed chemical species; typically, hydroxyl concentrations are underestimated, whilst mixing ratios of isoprene and its oxidation products are overestimated. The magnitude of model formaldehyde (HCHO) columns is most sensitive to the choice of chemical mechanism and isoprene emission inventory. We find GEOS-Chem exhibits a significant positive bias (10–100%) when compared with HCHO columns from the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) and Ozone Monitoring Instrument (OMI) for the study year 2006. Simulations that use the more detailed chemical mechanism and/or the lowest isoprene emissions provide the best agreement with the satellite data, since they result in lower HCHO columns.
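
    The reported 10–100% positive bias corresponds to a straightforward model-versus-satellite column comparison. Below is a minimal Python sketch, assuming gridded column arrays with hypothetical names; a full comparison would also apply the satellite averaging kernels and sample the model coincidentally with the overpasses.

        # Minimal sketch: relative bias of simulated HCHO columns against
        # SCIAMACHY/OMI retrievals on a common grid. Array names are assumptions.
        import numpy as np

        def relative_bias(model_cols, sat_cols):
            """Mean relative bias (%) of model vs. satellite vertical columns.

            Both inputs: 2-D arrays (lat, lon) of HCHO columns, molec cm-2,
            with NaN where the satellite has no valid retrieval.
            """
            valid = ~np.isnan(sat_cols)
            return 100.0 * np.mean(
                (model_cols[valid] - sat_cols[valid]) / sat_cols[valid])

        # A positive value (the study reports 10-100 %) indicates the model
        # overestimates HCHO columns relative to the satellite.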

    Good scientific practice in MEEG Research: Progress and Perspectives

    Good Scientific Practice (GSP) refers to both explicit and implicit rules, recommendations, and guidelines that help scientists to produce work that is of the highest quality at any given time, and to efficiently share that work with the community for further scrutiny or utilization. For experimental research using magneto- and electroencephalography (MEEG), GSP includes specific standards and guidelines for technical competence, which are periodically updated and adapted to new findings. However, GSP also needs to be periodically revisited in a broader light. The LiveMEEG 2020 conference fostered a reflection on GSP that included explicitly documented guidelines and technical advances, but also emphasized intangible GSP: a general awareness of personal, organizational, and societal realities and how they can influence MEEG research. This article provides an extensive report on most of the LiveMEEG contributions and new literature, with the additional aim of synthesizing ongoing cultural changes in GSP. It first covers GSP with respect to cognitive biases and logical fallacies, pre-registration as a tool to avoid those and other early pitfalls, and a number of resources to enable collaborative and reproducible research as a general approach to minimize misconceptions. Second, it covers GSP with respect to data acquisition, analysis, reporting, and sharing, including new tools and frameworks to support collaborative work. Finally, GSP is considered in light of the ethical implications of MEEG research and the resulting responsibility that scientists have to engage with societal challenges. Considering, among other things, the benefits of peer review and open access at all stages, the need to coordinate larger international projects, the complexity of MEEG subject matter, and today's prioritization of fairness, privacy, and the environment, we find that current GSP tends to favor collective and cooperative work, for both scientific and societal reasons.

    The next step? An investigation into mobile location-based social networking


    Neuroadaptive Technology: Concepts, Methods, and Validations

    This dissertation presents conceptual, methodological, and experimental advances in the field of neuroadaptive technology. Neuroadaptive technology refers to the category of technology that uses implicit input obtained from brain activity via a passive brain-computer interface in order to adapt itself, e.g., to enable implicit interaction. Implicit input refers to any input obtained by a receiver that was not intended as such by the sender. Neuroadaptive technology thus detects naturally occurring brain activity that was not intended for communication or control, and uses it to enable novel human-computer interaction paradigms. Part I of this dissertation presents different categories of neuroadaptive systems, and introduces cognitive probing, a method in which the technology deliberately elicits a brain response from the user in order to learn from it. Part II introduces two tools to help validate some core methods related to neuroadaptive technology: SEREEGA, with which electroencephalographic data can be simulated, and a classifier visualisation technique revealing which (cortical) areas a brain-computer interface classifier focuses on. Finally, Part III presents two experimental studies illustrating and validating the technology described in Part I using the methods from Part II. In particular, it is demonstrated how neuroadaptive technology can be used to enable implicit cursor control using cognitive probing. Additional experimentation revealed that brain activity elicited by cursor movements can reflect internal, subjective interpretations. These experiments thus highlight both the potential benefits and the potential ethical, legal, and societal concerns of neuroadaptive technology, which are also addressed in Part I.
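
    As a conceptual illustration of cognitive probing for implicit cursor control, the Python sketch below maintains a score per candidate target, probes with tentative cursor moves, and updates the scores from a pBCI classifier output. It is a toy loop under assumed interfaces, not the dissertation's actual implementation (and not SEREEGA, which simulates EEG rather than controlling a cursor).

        # Conceptual sketch of a cognitive-probing loop for implicit cursor control.
        # classify_response is a hypothetical pBCI callable returning the probability
        # that the brain response to a probe was "positive" (the move was desired).
        import random

        def probe_and_learn(targets, classify_response, n_probes=50):
            """Score candidate targets via probes and return the inferred intent."""
            scores = {t: 0.0 for t in targets}
            for _ in range(n_probes):
                move = random.choice(targets)         # probe: tentative cursor move
                p_positive = classify_response(move)  # pBCI output for the response
                scores[move] += p_positive - 0.5      # reward "approved" moves
            return max(scores, key=scores.get)        # inferred intended target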