
    Working Memory in Writing: Empirical Evidence From the Dual-Task Technique

    The dual-task paradigm has recently played a major role in understanding the role of working memory in writing. By reviewing recent findings in this field of research, this article highlights how the dual-task technique has allowed researchers to study the processing and short-term storage functions of working memory involved in writing. With respect to the processing functions of working memory (namely, attentional and executive functions), studies have investigated resource allocation, step-by-step management, and parallel coordination of the writing processes. With respect to short-term storage in working memory, experiments have mainly attempted to test Kellogg's (1996) proposals on the relationship between the writing processes and the slave systems of working memory. It is concluded that the dual-task technique has proved fruitful in understanding the relationship between writing and working memory.

    Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks

    One of the challenges in modeling cognitive events from electroencephalogram (EEG) data is finding representations that are invariant to inter- and intra-subject differences, as well as to the inherent noise associated with such data. Herein, we propose a novel approach for learning such representations from multi-channel EEG time-series, and demonstrate its advantages on a mental-load classification task. First, we transform EEG activities into a sequence of topology-preserving multi-spectral images, as opposed to standard EEG analysis techniques that ignore such spatial information. Next, we train a deep recurrent-convolutional network inspired by state-of-the-art video classification methods to learn robust representations from the sequence of images. The proposed approach is designed to preserve the spatial, spectral, and temporal structure of EEG, which leads to features that are less sensitive to variations and distortions within each dimension. Empirical evaluation on the cognitive load classification task demonstrated significant improvements in classification accuracy over current state-of-the-art approaches in this field. Comment: To be published as a conference paper at ICLR 201
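
    As a rough illustration of the image-construction step described in this abstract, the sketch below (using numpy; the 32×32 grid, nearest-pixel binning instead of interpolation, and the toy electrode coordinates and band powers are all illustrative assumptions, not the paper's actual pipeline) projects 3-D electrode positions onto a 2-D plane and bins per-electrode band powers into a topology-preserving multi-spectral image:

```python
import numpy as np

def azim_proj(xyz):
    """Azimuthal equidistant projection of a 3-D electrode position onto 2-D."""
    x, y, z = xyz
    r = np.sqrt(x**2 + y**2 + z**2)
    elev = np.arcsin(z / r)        # elevation angle above the equatorial plane
    az = np.arctan2(y, x)          # azimuth angle around the head
    rho = np.pi / 2 - elev         # angular distance from the "north pole"
    return rho * np.cos(az), rho * np.sin(az)

def eeg_to_image(band_powers, positions, grid=32):
    """Bin per-electrode band powers (n_electrodes x 3 bands, e.g. theta/alpha/beta)
    into a grid x grid x 3 'multi-spectral' image, preserving scalp topology."""
    pts = np.array([azim_proj(p) for p in positions])        # (n, 2) plane coords
    mins, maxs = pts.min(0), pts.max(0)
    # Normalise projected coordinates into [0, grid-1] pixel indices.
    idx = ((pts - mins) / (maxs - mins + 1e-9) * (grid - 1)).astype(int)
    img = np.zeros((grid, grid, 3))
    for (i, j), bp in zip(idx, band_powers):
        img[j, i] = bp   # nearest-pixel assignment; the paper interpolates instead
    return img

# Toy example: 4 electrodes on a unit sphere, random powers in 3 frequency bands.
rng = np.random.default_rng(0)
pos = [(0, 0, 1), (1, 0, 0.2), (-1, 0, 0.2), (0, 1, 0.2)]
img = eeg_to_image(rng.random((4, 3)), pos)
print(img.shape)  # (32, 32, 3)
```

    A sequence of such images (one per time window) would then feed the recurrent-convolutional network.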

    A Human-Centric Metaverse Enabled by Brain-Computer Interface: A Survey

    The growing interest in the Metaverse has generated momentum for members of academia and industry to innovate toward realizing the Metaverse world. The Metaverse is a unique, continuous, and shared virtual world where humans embody a digital form within an online platform. Through a digital avatar, Metaverse users should have a perceptual presence within the environment and can interact with and control the virtual world around them. Thus, a human-centric design is a crucial element of the Metaverse. Human users are not only the central entity but also the source of multi-sensory data that can be used to enrich the Metaverse ecosystem. In this survey, we study the potential applications of Brain-Computer Interface (BCI) technologies that can enhance the experience of Metaverse users. By communicating directly with the human brain, the most complex organ in the human body, BCI technologies hold the potential for the most intuitive human-machine systems, operating at the speed of thought. Through this neural pathway, BCI technologies can enable various innovative applications for the Metaverse, such as user cognitive state monitoring, digital avatar control, virtual interactions, and imagined speech communication. This survey first outlines the fundamental background of the Metaverse and BCI technologies. We then discuss current challenges of the Metaverse that could potentially be addressed by BCI, such as motion sickness when users experience virtual environments, or negative emotional states of users in immersive virtual applications. After that, we propose and discuss a new research direction called the Human Digital Twin, in which digital twins can create an intelligent and interactive avatar from the user's brain signals. We also present the challenges and potential solutions for synchronizing and communicating between virtual and physical entities in the Metaverse.

    Multi-modal post-editing of machine translation

    As machine translation (MT) quality continues to improve, more and more translators switch from traditional translation from scratch to post-editing (PE) of MT output, which has been shown to save time and reduce errors. Instead of mainly generating text, translators are now asked to correct errors within otherwise helpful translation proposals, where repetitive MT errors make the process tiresome, while hard-to-spot errors make PE a cognitively demanding activity. Our contribution is threefold: first, we explore whether interaction modalities other than mouse and keyboard could better support PE by creating and testing the MMPE translation environment. MMPE allows translators to cross out or hand-write text, drag and drop words for reordering, use spoken commands or hand gestures to manipulate text, or combine any of these input modalities. Second, our interviews revealed that translators see value in automatically receiving additional translation support when high cognitive load (CL) is detected during PE. We therefore developed a sensor framework using a wide range of physiological and behavioral data to estimate perceived CL and tested it in three studies, showing that multi-modal eye, heart, and skin measures can be used to make translation environments cognition-aware. Third, we present two multi-encoder Transformer architectures for automatic post-editing (APE) and discuss how these can adapt MT output to a domain and thereby avoid correcting repetitive MT errors. Funded by the German Research Foundation (DFG), project MMP
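
    A toy sketch of how such a cognition-aware trigger might look, fusing standardised eye, heart, and skin features into one CL score (all feature names, baselines, weights, and the threshold here are hypothetical illustrations, not the dissertation's actual model):

```python
def zscore(value, mean, std):
    """Standardise a raw sensor reading against its per-user baseline."""
    return (value - mean) / std

# Per-user baselines (mean, std) would be collected in a calibration phase.
BASELINES = {
    "pupil_diameter_mm":   (3.0, 0.4),   # eye tracker
    "heart_rate_bpm":      (70.0, 8.0),  # heart-rate sensor
    "skin_conductance_uS": (2.0, 0.5),   # galvanic skin response
}

# Equal weights here; a real system would learn them from labelled PE sessions.
WEIGHTS = {k: 1 / 3 for k in BASELINES}

def estimate_cl(reading):
    """Fuse standardised eye, heart, and skin features into a single CL score."""
    return sum(WEIGHTS[k] * zscore(reading[k], *BASELINES[k]) for k in WEIGHTS)

def needs_support(reading, threshold=1.0):
    """Trigger extra translation support when estimated CL exceeds the threshold."""
    return estimate_cl(reading) > threshold

high = {"pupil_diameter_mm": 3.9, "heart_rate_bpm": 88.0, "skin_conductance_uS": 3.1}
print(needs_support(high))  # True
```

    The per-user baselines are the key design choice: physiological signals vary widely between people, so standardising against a calibration phase is what makes the fused score comparable across translators.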

    From a simple EHR to the market lead: what technologies to add

    Electronic health records (EHRs) can store, capture, and present patient data in an organized way that improves physicians’ workflow and patient care. This makes EHRs key to addressing many of today’s health care challenges. An interdisciplinary review and qualitative study of artificial intelligence, machine learning, natural language processing, and real-time location services in health care was conducted. The results show that in an industry where digitization is key, several recommendations can be made to leverage these technologies in ways that can improve current systems and help EHR vendors become the market leader.

    An Evaluation Of Learning Employing Natural Language Processing And Cognitive Load Assessment

    One of the key goals of pedagogy is to assess learning. Various paradigms exist, and one of these is cognitivism. It essentially sees the human learner as an information processor and the mind as a black box with limited capacity that should be understood and studied. In line with this, one approach is to employ the construct of cognitive load to assess a learner's experience and, in turn, design instruction better aligned to the human mind. However, cognitive load assessment is not an easy activity, especially in a traditional classroom setting. This research proposes a novel method for evaluating learning that employs both subjective cognitive load assessment and natural language processing. It makes use of primary, empirical, and deductive methods. In detail, on one hand, cognitive load assessment is performed using well-known self-reporting instruments borrowed from human factors research, namely the NASA Task Load Index and the Workload Profile. On the other hand, natural language processing techniques, borrowed from artificial intelligence, are employed to calculate the semantic similarity between textual information provided by learners after attending a typical third-level class and the content of the class itself. Subsequently, the relationship between cognitive load assessment and textual similarity is investigated to assess learning.
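
    To make the two measurement steps concrete, here is a minimal sketch of a weighted NASA-TLX score and a bag-of-words semantic similarity (the ratings, weights, and similarity measure are illustrative stand-ins; the study's actual instruments and NLP techniques may differ):

```python
import math
from collections import Counter

def nasa_tlx(ratings, weights):
    """Weighted NASA-TLX workload: ratings 0-100 per dimension, weights from
    the instrument's 15 pairwise comparisons (so the weights sum to 15)."""
    return sum(ratings[d] * weights[d] for d in ratings) / 15

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity as a simple stand-in for the NLP step."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical learner: ratings from the six TLX scales, weights from pairings.
ratings = {"mental": 70, "physical": 10, "temporal": 40,
           "performance": 60, "effort": 65, "frustration": 30}
weights = {"mental": 5, "physical": 0, "temporal": 2,
           "performance": 3, "effort": 4, "frustration": 1}
workload = nasa_tlx(ratings, weights)

# Compare the learner's free-text recall against the class content.
sim = cosine_similarity("working memory supports writing processes",
                        "writing processes draw on working memory resources")
print(round(workload, 1), round(sim, 2))  # 60.0 0.68
```

    The study's analysis then looks at how such workload scores relate to such similarity scores across learners.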