2,844 research outputs found

    EEG-based Spanish Language Proficiency Classification: An EEG Power Spectrum and Cross-Spectrum Analysis

    Get PDF
    Second language proficiency may be predicted with electrophysiological techniques. In a machine learning application, such electrophysiological data could be used by language instructors and students to assess language learning. This study identifies how electroencephalogram (EEG) power spectrum and cross-spectrum data of the brain cortex relate to Spanish second language (L2) proficiency in 20 Spanish language students of varying proficiency levels at the University of New Hampshire. The two metrics for assessing cortical power and processing were event-related desynchronization (ERD) of the alpha (8-12 Hz) frequency band, a measure of relative change in power, and alpha- and beta-band (13-30 Hz) coherence, a relative measure of spectral correlation between two cortical areas. Alpha ERD and alpha and beta coherence were calculated from EEG data collected from participants at ACTFL Spanish L2 proficiency levels Novice, Intermediate, and Advanced while they listened to three audio conditions of varying Spanish language difficulty. Significant differences in both alpha and beta coherence were found between proficiency groups. Higher-proficiency Spanish L2 students exhibited more bilateral alpha and beta coherence dominance in the frontal and central cortices, while lower-proficiency students demonstrated greater unilateral alpha and beta coherence between the posterior cortices and Broca’s and Wernicke’s areas. This suggests that higher-proficiency simultaneous bilinguals utilize the frontoparietal and fronto-occipital networks to achieve language comprehension and focus.
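    The alpha ERD metric above has a compact definition: the relative change in alpha-band power from a resting baseline to the listening task. The following is a minimal sketch assuming a plain FFT periodogram for band power; the study's exact spectral estimator is not specified here.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean power of `signal` within [f_lo, f_hi] Hz via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

def alpha_erd(baseline, task, fs):
    """Event-related desynchronization (%): relative drop in alpha-band
    (8-12 Hz) power from a resting baseline to the task interval."""
    p_base = band_power(baseline, fs, 8.0, 12.0)
    p_task = band_power(task, fs, 8.0, 12.0)
    return 100.0 * (p_base - p_task) / p_base
```

    By this sign convention a positive ERD means alpha power dropped during the task (desynchronization), while a negative value indicates synchronization.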

    Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring

    Get PDF
    How to fuse multi-channel neurophysiological signals for emotion recognition is an emerging research topic in the computational psychophysiology community. Prior feature-engineering approaches require extracting a variety of domain-knowledge-dependent features at a high time cost, and traditional fusion methods cannot fully utilise the correlation information between different channels and frequency components. In this paper, we design a hybrid deep learning model in which a convolutional neural network (CNN) extracts task-related features and mines inter-channel and inter-frequency correlations, while a concatenated recurrent neural network (RNN) integrates contextual information from the frame-cube sequence. Experiments are carried out on a trial-level emotion recognition task using the DEAP benchmark dataset. Experimental results demonstrate that the proposed framework outperforms classical methods on both the valence and arousal emotional dimensions.
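    As a rough illustration of the input representation, the "frame-cube sequence" can be read as a time series of per-frame maps of channel-by-frequency-band power. The sketch below builds such a cube with a plain FFT; the band boundaries and frame length are illustrative assumptions, and the CNN/RNN stages themselves are omitted.

```python
import numpy as np

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def frame_cube(eeg, fs, frame_len):
    """Slice multi-channel EEG (channels x samples) into fixed-length frames
    and reduce each frame to a (channels x bands) power map; stacking the
    maps over time yields a frame-cube sequence for a CNN+RNN model."""
    n_ch, n_samp = eeg.shape
    hop = int(frame_len * fs)
    n_frames = n_samp // hop
    cube = np.empty((n_frames, n_ch, len(BANDS)))
    for f in range(n_frames):
        seg = eeg[:, f * hop:(f + 1) * hop]
        freqs = np.fft.rfftfreq(hop, d=1.0 / fs)
        psd = np.abs(np.fft.rfft(seg, axis=1)) ** 2 / hop
        for b, (lo, hi) in enumerate(BANDS.values()):
            mask = (freqs >= lo) & (freqs < hi)
            cube[f, :, b] = psd[:, mask].mean(axis=1)
    return cube
```

    Each `cube[f]` is a 2-D map whose rows are channels and columns are bands, i.e. the kind of image-like frame a CNN can convolve over before an RNN integrates the sequence.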

    Embedding mobile learning into everyday life settings

    Get PDF
    The increasing ubiquity of smartphones has changed the way we interact with information and acquire new knowledge. The prevalence of personal mobile devices in our everyday lives creates new opportunities for learning that extend beyond the narrow boundaries of the school classroom and provide the foundations for lifelong learning. Learning can now happen whenever and wherever we are; whether on the sofa at home, on the bus during our commute, or on a break at work. However, the flexibility offered by mobile learning also creates challenges of its own. Being able to learn anytime and anywhere does not necessarily result in learning uptake. Without the school environment’s controlled schedule and teacher guidance, the learners must actively initiate learning activities, keep up repetition schedules, and cope with learning in interruption-prone everyday environments. Both interruptions and infrequent repetition can harm the learning process and long-term memory retention. We argue that current mobile learning applications insufficiently support users in coping with these challenges. In this thesis, we explore how we can utilize the ubiquity of mobile devices to ensure frequent engagement with the content, focusing primarily on language learning and supporting users in dealing with learning breaks and interruptions. Following a user-centered design approach, we first analyzed mobile learning behavior in everyday settings. Based on our findings, we proposed concepts and designs, developed research prototypes, and evaluated them in laboratory and field evaluations with a specific focus on user experience. To better understand users’ learning behavior with mobile devices, we first characterized their interaction with mobile learning apps through a detailed survey and a diary study. Both methods confirmed the enormous diversity in usage situations and preferences. 
We observed that learning often happens unplanned, infrequently, in the company of friends or family, or while simultaneously performing secondary tasks such as watching TV or eating. The studies further uncovered a significant prevalence of interruptions in everyday settings that affected users’ learning behavior, often leading to suspension and termination of the learning activities. We derived design implications to support learning in diverse situations, particularly aimed at mitigating the adverse effects of multitasking and interruptions. The proposed strategies should help designers and developers create mobile learning applications that adapt to the opportunities and challenges of learning in everyday mobile settings. We explored four main challenges, emphasizing that (1) we need to consider that Learning in Everyday Settings is Diverse and Interruption-prone, (2) learning performance is affected by Irregular and Infrequent Practice Behavior, (3) we need to move From Static to Personalized Learning, and (4) that Interruptions and Long Learning Breaks can Negatively Affect Performance. To tackle these challenges, we propose to embed learning into everyday smartphone interactions, which could foster frequent engagement with – and implicitly personalize – learning content (according to users’ interests and skills). Further, we investigate how memory cues could be applied to support task resumption after interruptions in mobile learning. To confirm that our idea of embedding learning into everyday interactions can increase exposure, we developed an application integrating learning tasks into the smartphone authentication process. Since unlocking the smartphone is a frequently performed action without any other purpose, our subjects appreciated the idea of utilizing this process to perform quick and simple learning interactions. 
Evidence from a comparative user study showed that embedding learning tasks into the unlocking mechanism led to significantly more interactions with the learning content without impairing the learning quality. We further explored a method for embedding language comprehension assessment into users’ digital reading and listening activities. By applying physiological measurements as implicit input, we reliably detected unknown words during laboratory evaluations. Identifying such knowledge gaps could be used for the provision of in-situ support and to inform the generation of personalized language learning content tailored to users’ interests and proficiency levels. To investigate memory cueing as a concept to support task resumption after interruptions, we complemented a theoretical literature analysis of existing applications with two research probes implementing and evaluating promising design concepts. We showed that displaying memory cues when the user resumes the learning activity after an interruption improves their subjective user experience. A subsequent study presented an outlook on the generalizability of memory cues beyond the narrow use case of language learning. We observed that the helpfulness of memory cues for reflecting on prior learning is highly dependent on the design of the cues, particularly the granularity of the presented information. We consider interactive cues for specific memory reactivation (e.g., through multiple-choice questions) a promising scaffolding concept for connecting individual micro-learning sessions when learning in everyday settings. The tools and applications described in this thesis are a starting point for designing applications that support learning in everyday settings. We broaden the understanding of learning behavior and highlight the impact of interruptions in our busy everyday lives. 
While this thesis focuses mainly on language learning, the concepts and methods have the potential to be generalized to other domains, such as STEM learning. We reflect on the limitations of the presented concepts and outline future research perspectives that utilize the ubiquity of mobile devices to design mobile learning interactions for everyday settings.

    Learning Style Classification via EEG Sub-band Spectral Centroid Frequency Features

    Get PDF
    Kolb’s Experiential Learning Theory postulates that in learning, knowledge is created by the learner’s ability to absorb and transform experience. Many studies have suggested that at rest, the brain emits signatures that can be associated with cognitive and behavioural patterns. Hence, this study attempts to characterise and classify learning styles from EEG using spectral centroid frequency features. Initially, the learning styles of 68 university students were assessed using Kolb’s Learning Style Inventory. Resting EEG was then recorded from the prefrontal cortex. Next, the EEG was pre-processed and filtered into alpha and theta sub-bands, from which the spectral centroid frequencies were computed using the corresponding power spectral densities. The dataset was further enhanced to 160 samples via synthetic EEG. The obtained features were then used as input to a k-nearest neighbour classifier with k-fold cross-validation. Classification via k-nearest neighbour attained five-fold mean training and testing accuracies of 100% and 97.5%, respectively. These results show that the alpha and theta spectral centroid frequencies represent distinct and stable EEG signatures for distinguishing learning styles from the resting brain. DOI: http://dx.doi.org/10.11591/ijece.v4i6.683
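    The spectral centroid frequency of a sub-band is simply the power-weighted mean frequency of the PSD restricted to that band. A minimal sketch follows; the paper's exact PSD estimator and band edges may differ.

```python
import numpy as np

def spectral_centroid(freqs, psd, lo, hi):
    """Power-weighted mean frequency of the PSD within [lo, hi] Hz --
    the 'spectral centroid frequency' used as a per-band EEG feature."""
    mask = (freqs >= lo) & (freqs <= hi)
    f, p = freqs[mask], psd[mask]
    return np.sum(f * p) / np.sum(p)
```

    Computed once for the alpha band and once for the theta band, the two centroids form the short feature vector handed to the k-nearest-neighbour classifier.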

    iMind: An Intelligent Tool to Support Content Comprehension

    Get PDF
    Comprehension difficulty while reading affects individual performance: it can lead to a slow learning process, lower work quality, and inefficient decision-making. This thesis introduces an intelligent tool called “iMind” which uses wearable devices (e.g., smartwatches) to evaluate user comprehension difficulty and engagement levels while reading digital content. Comprehension difficulty can occur when not enough mental resources are available for mental processing; the resource consumed by mental processing is cognitive load (CL). Fluctuations in CL lead to physiological manifestations of the autonomic nervous system (ANS), such as an increase in heart rate, which can be measured by wearables like smartwatches. With low-cost eye trackers, these ANS manifestations can be correlated with content regions. In this sense, iMind uses a smartwatch and an eye tracker to identify comprehension difficulty at the level of content regions (where the user is looking). The tool uses machine learning techniques to classify content regions as difficult or non-difficult based on biometric and non-biometric features, and classified regions with 75% accuracy and an 80% F-score using linear regression (LR). With the classified regions it will be possible, in the future, to create real-time contextual support for the reader, e.g., by translating the sentences that induced comprehension difficulty.
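    To make the classification step concrete, here is a minimal sketch of labelling content regions from fused features with a least-squares linear fit thresholded at 0.5, one common way of using "linear regression" as a classifier. The feature names and toy numbers are hypothetical, not taken from the thesis.

```python
import numpy as np

def fit_linear_classifier(X, y):
    """Least-squares fit of w for y ~ [X, 1].w with y in {0, 1}.
    A stand-in for the linear-regression classifier used to label
    content regions as difficult / non-difficult."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    """Threshold the regression output at 0.5 to get class labels."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w >= 0.5).astype(int)

# Hypothetical per-region features:
#   column 0: mean fixation duration on the region (s)
#   column 1: heart-rate change vs. baseline while fixating it (bpm)
```

    In practice each row would be one content region, with gaze features from the eye tracker and ANS features from the smartwatch fused into a single feature vector.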

    Identification of EEG signal patterns between adults with dyslexia and normal controls

    Get PDF
    Electroencephalography (EEG) is one of the most useful techniques for representing the behaviour of the brain and helps uncover valuable insights through the measurement of its electrical activity. Hence, it plays a vital role in detecting neurological disorders such as epilepsy. Dyslexia is a hidden learning disability with a neurological origin affecting a significant proportion of the world’s population. Studies show unique brain structures and behaviours in individuals with dyslexia, and these variations have become more evident with the use of techniques such as EEG, Functional Magnetic Resonance Imaging (fMRI), Magnetoencephalography (MEG), and Positron Emission Tomography (PET). In this thesis, we are particularly interested in the use of EEG to explore the unique brain activities of adults with dyslexia. We attempt to discover EEG signal patterns that distinguish adults with dyslexia from normal controls while they perform tasks that are more challenging for individuals with dyslexia: real-word reading, nonsense-word reading, passage reading, Rapid Automatized Naming (RAN), writing, typing, browsing the web, table interpretation, and typing of random numbers. Each participant was instructed to perform these tasks while seated in front of a computer screen with the EEG headset on his or her head. The EEG signals captured during these tasks were examined using a machine learning classification framework comprising signal preprocessing, frequency sub-band decomposition, feature extraction, classification, and verification. Cubic Support Vector Machine (CSVM) classifiers were developed for separate brain regions for each task in order to determine the optimal brain regions and EEG sensors producing the most distinctive EEG signal patterns between the two groups. 
The research revealed that adults with dyslexia generated unique EEG signal patterns compared to normal controls while performing the specified tasks. One of the key discoveries was that the nonsense-word classifiers produced higher Validation Accuracies (VA) than the real-word classifiers, confirming that the difficulties in phonological decoding seen in individuals with dyslexia are reflected in EEG signal patterns; this effect was detected in the left parieto-occipital region. It was also uncovered that all three reading tasks shared the same optimal brain region, and RAN, which is known to be related to reading, showed optimal performance in an overlapping region, demonstrating that the association between reading and RAN is likely reflected in the EEG signal patterns. Finally, we discovered brain regions that produced distinctive EEG signal patterns between the two groups that have not been reported before for writing, typing, web browsing, table interpretation, and typing of random numbers.
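    The per-region comparison in this pipeline can be sketched as a loop over sensor groups, scoring each group's features on a validation split. The region-to-sensor mapping below is a hypothetical placeholder (real montages differ), and a nearest-class-mean rule stands in for the cubic SVM.

```python
import numpy as np

REGIONS = {  # hypothetical sensor grouping; real EEG montages differ
    "frontal": [0, 1, 2],
    "parieto_occipital": [3, 4, 5],
}

def nearest_mean_accuracy(X_tr, y_tr, X_va, y_va):
    """Validation accuracy of a nearest-class-mean rule (a lightweight
    stand-in for the cubic SVM used in the thesis)."""
    m0 = X_tr[y_tr == 0].mean(axis=0)
    m1 = X_tr[y_tr == 1].mean(axis=0)
    pred = (np.linalg.norm(X_va - m1, axis=1)
            < np.linalg.norm(X_va - m0, axis=1)).astype(int)
    return (pred == y_va).mean()

def best_region(features, y_tr, y_va):
    """Score each sensor region on the validation split and return the
    region with the highest VA, plus all scores."""
    scores = {name: nearest_mean_accuracy(features["train"][:, idx], y_tr,
                                          features["val"][:, idx], y_va)
              for name, idx in REGIONS.items()}
    return max(scores, key=scores.get), scores
```

    Run per task, this kind of comparison is what identifies, e.g., the left parieto-occipital region as optimal for the reading tasks.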

    Drawing, Handwriting Processing Analysis: New Advances and Challenges

    No full text
    Drawing and handwriting are communication skills that have been fundamental to geopolitical, ideological, and technological evolution throughout history. Drawing and handwriting remain useful for defining innovative applications in numerous fields. In this regard, researchers have to solve new problems, such as how drawing and handwriting can become an efficient way to command various connected objects, or how to validate graphomotor skills as evident and objective sources of data for studying human beings, their capabilities, and their limits from birth to decline.

    Accent intelligibility across native and non-native accent pairings: investigating links with electrophysiological measures of word recognition

    Get PDF
    The intelligibility of accented speech in noise depends on the interaction between the accents of the talker and the listener. However, it is not yet clear how this influence arises. Accent familiarity is commonly proposed as a major contributor to accent intelligibility, but recent evidence suggests that the similarity between talker and listener accents may also account for accent intelligibility across talker-listener pairings. In addition, differences in accent intelligibility are often found only in the presence of other adverse conditions, so it is not clear whether the talker-listener pairing also influences speech processing in quiet conditions. This research had two main aims: to further investigate the relationship between accent similarity and intelligibility, and to use online EEG methods to explore possible talker-listener pairing related differences in speech perception in quiet conditions. English and Spanish listeners listened to Standard Southern British English (SSBE), Glaswegian English (GE), and Spanish-accented English (SpE) in a speech-in-noise recognition task, and also completed an event-related potential (ERP) task to elicit the PMN and N400 responses. Accent similarity was measured using the ACCDIST metric. Results showed the same (or extremely similar) patterns in accent intelligibility and accent similarity for both listener groups, giving further support to the hypothesis that accent similarity contributes to the intelligibility of an accent within a talker-listener pairing. The ERP data also suggest that speech processing in quiet is influenced by the talker-listener pairing: the PMN, which relates to phonological processing, seems particularly dependent on a match between talker and listener accents, whereas the more semantic N400 showed some flexibility in processing accented speech.
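    ACCDIST-style similarity compares accents by the relative spacing of their vowels rather than by raw acoustic values: each speaker is reduced to a table of inter-vowel distances, and two accents are scored by correlating their tables. The following is a minimal sketch under that reading; the published metric includes details, such as speaker normalisation, that are omitted here, and the formant values in the test are illustrative.

```python
import numpy as np
from itertools import combinations

def distance_table(vowels):
    """Pairwise Euclidean distances between a speaker's vowel
    representations (e.g. mean formant vectors), in a fixed vowel order.
    The table captures relative vowel spacing, not absolute values."""
    keys = sorted(vowels)
    return np.array([np.linalg.norm(np.asarray(vowels[a]) - np.asarray(vowels[b]))
                     for a, b in combinations(keys, 2)])

def accent_similarity(v1, v2):
    """Pearson correlation between two speakers' distance tables:
    an ACCDIST-style accent-similarity score."""
    d1, d2 = distance_table(v1), distance_table(v2)
    return np.corrcoef(d1, d2)[0, 1]
```

    Because only relative spacing matters, a uniformly scaled vowel space (e.g. a speaker with a shorter vocal tract) still scores as maximally similar.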

    Rapid neural processing of grammatical tone in second language learners

    Get PDF
    The present dissertation investigates how beginner learners process grammatical tone in a second language and whether their processing is influenced by phonological transfer. Paper I focuses on the acquisition of Swedish grammatical tone by beginner learners from a non-tonal language, German. Results show that non-tonal beginner learners do not process the grammatical regularities of the tones but rather treat them akin to piano tones. A rightwards-going spread of activity in response to pitch difference in Swedish tones possibly indicates a process of tone sensitisation. Papers II to IV investigate how artificial grammatical tone, taught in a word-picture association paradigm, is acquired by German and Swedish learners. The results of paper II show that interspersed mismatches between grammatical tone and picture referents evoke an N400 only for the Swedish learners. Both learner groups produce N400 responses to picture mismatches related to grammatically meaningful vowel changes. While mismatch detection quickly reaches high accuracy rates, tone mismatches are least accurately and most slowly detected in both learner groups. For processing of the grammatical L2 words outside of mismatch contexts, the results of paper III reveal early, preconscious and late, conscious processing in the Swedish learner group within 20 minutes of acquisition (word recognition component, ELAN, LAN, P600). German learners only produce late responses: a P600 within 20 minutes and a LAN after sleep consolidation. The surprisingly rapid emergence of early grammatical ERP components (ELAN, LAN) is attributed to less resource-heavy processing outside of violation contexts. Results of paper IV, finally, indicate that memory trace formation, as visible in the word recognition component at ~50 ms, is only possible at the highest level of formal and functional similarity, that is, for words with falling tone in Swedish participants. 
Together, the findings emphasise the importance of phonological transfer in the initial stages of second language acquisition and suggest that the earlier the processing occurs, the greater the impact of phonological transfer.

    EEG Analysis Method to Detect Unspoken Answers to Questions Using MSNNs

    Get PDF
    Brain–computer interfaces (BCIs) facilitate communication between the human brain and computational systems, and additionally offer mechanisms for environmental control to enhance human life. The current study focused on the application of BCIs to communication support, especially the detection of unspoken answers to questions. Utilizing a multistage neural network (MSNN) with convolutional and pooling layers, the proposed method comprises a threefold approach: electroencephalogram (EEG) measurement, EEG feature extraction, and answer classification. The EEG signals of the participants are captured as they mentally respond with “yes” or “no” to the posed questions. Feature extraction is achieved through an MSNN composed of three distinct convolutional neural network models: the first discriminates between EEG signals with and without discernible noise artifacts, whereas the subsequent two extract features from EEG signals with or without such artifacts. A support vector machine is then employed to classify the answers. The proposed method was validated in experiments using authentic EEG data, achieving mean sensitivity and precision of 99.6%, with a standard deviation of 0.2%. These findings demonstrate that high, stable accuracy can be attained in a BCI by preliminarily segregating the EEG signals based on the presence or absence of artifact noise, and they underscore the prospective advantage of separating noise-contaminated EEG signals for enhanced BCI performance.
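    The staging logic, flagging each trial for artifact noise and then routing it to the matching feature extractor before a shared classifier, can be sketched independently of the networks themselves. The amplitude-threshold noise flag below is a crude stand-in for the paper's first CNN, and the threshold value is an assumption.

```python
import numpy as np

def is_noisy(trial, thresh=100.0):
    """Stage-1 stand-in: flag trials whose peak amplitude suggests an
    artifact (the paper uses a CNN for this split; the threshold here
    is an illustrative assumption)."""
    return np.max(np.abs(trial)) > thresh

def route_and_classify(trials, extract_clean, extract_noisy, classify):
    """MSNN-style staging: branch each trial to the feature extractor
    matched to its artifact flag, then classify (an SVM in the paper)."""
    return [classify(extract_noisy(t) if is_noisy(t) else extract_clean(t))
            for t in trials]
```

    The benefit reported in the paper comes from this separation: each downstream extractor only ever sees trials of one noise regime, so neither has to model both.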