A Methodology for Adaptive Competence Assessment and Learning Path Creation in ISAC
This paper presents a technique for realising adaptive competence assessment and the creation of adaptive learning paths in the ISAC system. ISAC is an intelligent tutoring system which supports the learner in solving problems in applied mathematics; it is able to monitor and support the learner in each calculation step. However, it does not support building user and competence profiles, nor sequencing problems and learning objects according to personal needs. Therefore, a technique has been developed and integrated with ISAC which assesses the competence profile of learners and creates learning paths adaptively based on the assessed competences. Development has been done in a modular way, which also provides further features such as goal setting and visual feedback on skill gaps and progress.
Automated Generation of User Guidance by Combining Computation and Deduction
Herewith, a fairly old concept is published for the first time under the name
"Lucas Interpretation". It has been implemented in a prototype that has proved
useful in educational practice and has gained academic relevance with an
emerging generation of educational mathematics assistants (EMAs) based on
Computer Theorem Proving (CTP).
Automated Theorem Proving (ATP), i.e. deduction, is the most reliable
technology for checking user input. However, ATP is inherently weak at
automatically generating solutions for arbitrary problems in applied
mathematics. This weakness is crucial for EMAs: when ATP flags user input as
incorrect and the learner gets stuck, the system should be able to suggest
possible next steps.
The key idea of Lucas Interpretation is to compute the steps of a calculation
by following a program written in a novel CTP-based programming language; that
is, computation provides the next steps. User guidance is generated by
combining deduction and computation: the latter is performed by a specific
language interpreter, which works like a debugger and hands control over to the
learner at breakpoints, i.e. at the tactics generating the steps of the
calculation. The interpreter also builds up logical contexts that provide ATP
with the data required for checking user input, thus combining computation and
deduction. The paper describes the concepts underlying Lucas Interpretation so
that open questions can be adequately addressed and prerequisites for further
work are provided. Comment: In Proceedings THedu'11, arXiv:1202.453
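The debugger-like control loop described above can be sketched in a few lines. This is a minimal illustration only: the toy tactics, the numeric state, and the function names are all assumptions for the sketch, not the ISAC system's actual language or API.

```python
# Minimal sketch of the Lucas-Interpretation control loop: an interpreter
# walks a program of tactics, pauses at each step (a breakpoint), and either
# accepts the learner's proposed next step or, when checking rejects it,
# suggests the computed one. Tactics and state are illustrative, not ISAC's.

def lucas_interpret(program, start, learner_steps=None):
    """Run the tactic program from `start`; at each breakpoint compare the
    learner's proposed value (if any) against the computed next step, the
    role that ATP-based checking plays in the real system."""
    state, trace = start, [start]
    learner_steps = learner_steps or {}
    for i, tactic in enumerate(program):
        computed = tactic(state)
        proposed = learner_steps.get(i)
        if proposed is not None and proposed != computed:
            trace.append(("hint", computed))  # input rejected: suggest step
        else:
            trace.append(computed)
        state = computed
    return trace

# Toy calculation: solve 2*x + 4 = 10 by transforming the right-hand side.
program = [
    lambda rhs: rhs - 4,    # subtract 4 on both sides: 2*x = 6
    lambda rhs: rhs // 2,   # divide both sides by 2:   x = 3
]
trace = lucas_interpret(program, 10)            # [10, 6, 3]
guided = lucas_interpret(program, 10, {0: 5})   # wrong step 0 yields a hint
```

The point of the sketch is the hand-over of control: computation supplies each next step, and the learner's input is checked against it at every breakpoint.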
Predicting mental imagery based BCI performance from personality, cognitive profile and neurophysiological patterns
Mental-Imagery based Brain-Computer Interfaces (MI-BCIs) allow their users to send commands
to a computer using their brain activity alone (typically measured by ElectroEncephaloGraphy,
EEG), which is processed while they perform specific mental tasks. While very
promising, MI-BCIs remain barely used outside laboratories because of the difficulty
users encounter in controlling them. Indeed, although some users obtain good control
performance after training, a substantial proportion remains unable to reliably control an
MI-BCI. This large variability in user performance has led the community to look for predictors of
MI-BCI control ability. However, these predictors have so far been explored only for motor-imagery
based BCIs, and mostly for a single training session per subject. In this study, 18 participants
were instructed to learn to control an EEG-based MI-BCI by performing 3 MI-tasks, 2
of which were non-motor tasks, across 6 training sessions, on 6 different days. Relationships
between the participants’ BCI control performances and their personality, cognitive
profile and neurophysiological markers were explored. While no relevant relationships with
neurophysiological markers were found, strong correlations between MI-BCI performances
and mental-rotation scores (reflecting spatial abilities) were revealed. Also, a predictive
model of MI-BCI performance based on psychometric questionnaire scores was proposed.
A leave-one-subject-out cross-validation process revealed the stability and reliability of this
model: it enabled the prediction of participants' performance with a mean error of less than 3
points. This study determined how users' profiles impact their MI-BCI control ability and
thus clears the way for designing novel MI-BCI training protocols adapted to the profile of
each user.
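The leave-one-subject-out procedure described above can be sketched as follows. The data are synthetic and the linear least-squares model is an illustrative stand-in; the study's actual features (psychometric questionnaire scores) and model are not reproduced here.

```python
# Hedged sketch of leave-one-subject-out cross-validation for a model
# predicting a performance score from per-subject features. Each subject
# is held out once, the model is fit on the remaining subjects, and the
# held-out subject's score is predicted.
import numpy as np

def loso_mean_abs_error(X, y):
    """Return the mean absolute prediction error over held-out subjects."""
    n = len(y)
    errors = []
    for i in range(n):
        mask = np.arange(n) != i
        A = np.c_[X[mask], np.ones(mask.sum())]   # design matrix + intercept
        w, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = np.r_[X[i], 1.0] @ w               # predict held-out subject
        errors.append(abs(pred - y[i]))
    return float(np.mean(errors))

rng = np.random.default_rng(0)
X = rng.normal(size=(18, 3))              # 18 subjects, 3 synthetic scores
y = X @ np.array([2.0, -1.0, 0.5]) + 70   # noiseless linear ground truth
err = loso_mean_abs_error(X, y)           # near zero on noiseless data
```

With real, noisy questionnaire data the held-out error quantifies how well the model generalizes to unseen users, which is the sense in which the abstract reports a mean error below 3 points.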
A Generalized Framework for Quantifying the Dynamics of EEG Event-Related Desynchronization
Brains were built by evolution to react swiftly to environmental challenges. Thus, sensory stimuli must be processed ad hoc, i.e., independent, to a large extent, from the momentary brain state incidentally prevailing during stimulus occurrence. Accordingly, computational neuroscience strives to model the robust processing of stimuli in the presence of dynamical cortical states. A pivotal feature of ongoing brain activity is the regional predominance of EEG eigenrhythms, such as the occipital alpha or the pericentral mu rhythm, both peaking spectrally at 10 Hz. Here, we establish a novel generalized concept to measure event-related desynchronization (ERD), which allows one to model neural oscillatory dynamics also in the presence of dynamical cortical states. Specifically, we demonstrate that a somatosensory stimulus causes a stereotypic sequence of first an ERD and then an ensuing amplitude overshoot (event-related synchronization), which at a dynamical cortical state becomes evident only if the natural relaxation dynamics of unperturbed EEG rhythms is utilized as reference dynamics. Moreover, this computational approach also encompasses the more general notion of a “conditional ERD,” through which candidate explanatory variables can be scrutinized with regard to their possible impact on a particular oscillatory dynamics under study. Thus, the generalized ERD represents a powerful novel analysis tool for extending our understanding of inter-trial variability of evoked responses and therefore the robust processing of environmental stimuli.
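For reference, the classical ERD measure that the paper generalizes is the percentage band-power change in an event interval relative to a pre-stimulus reference interval. The sketch below computes this classical quantity from band-pass-filtered trials; the paper's state-dependent reference dynamics are not modeled here, and all parameters are illustrative.

```python
# Classical ERD% (band-power change relative to a reference interval),
# computed from band-pass-filtered single trials. A negative value means
# desynchronization; a positive value means synchronization (ERS).
import numpy as np

def erd_percent(trials, ref_slice, event_slice):
    """trials: (n_trials, n_samples) band-pass-filtered EEG."""
    power = np.mean(trials ** 2, axis=0)     # trial-averaged instantaneous power
    a_ref = power[ref_slice].mean()          # reference-interval power
    a_event = power[event_slice].mean()      # event-interval power
    return 100.0 * (a_event - a_ref) / a_ref

# Toy example: a 10 Hz oscillation whose amplitude halves after "stimulus".
fs = 250
t = np.arange(500) / fs
amp = np.where(t < 1.0, 1.0, 0.5)
trials = np.array([amp * np.sin(2 * np.pi * 10 * t)] * 20)
erd = erd_percent(trials, slice(0, 250), slice(250, 500))  # about -75 %
```

Halving the amplitude quarters the power, so the classical measure reports roughly -75 %; the generalized ERD of the paper replaces the fixed pre-stimulus reference with the relaxation dynamics of the unperturbed rhythm.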
Target-directed motor imagery of the lower limb enhances event-related desynchronization
Event-related desynchronization/synchronization (ERD/S) is an electroencephalogram (EEG) feature widely used as a control signal for Brain-Computer Interfaces (BCIs). Nevertheless, the underlying neural mechanisms and functions of ERD/S are largely unknown, so investigating them is crucial to improve the reliability of ERD/S-based BCIs. This study aimed to identify Motor Imagery (MI) conditions that enhance ERD/S. We investigated the following three questions: 1) whether target-directed MI affects ERD/S, 2) whether MI with sound imagery affects ERD/S, and 3) whether ERD/S depends on the body part involved in MI. Nine participants took part in experiments with four MI conditions; they were asked to imagine right foot dorsiflexion (F), right foot dorsiflexion with the sound of a bass drum when the sole touched the floor (FS), right leg extension (L), and right leg extension directed toward a soccer ball (LT). Statistical comparison revealed significant differences between conditions L and LT in beta-band ERD and between conditions F and L in beta-band ERS. These results suggest that mental rehearsal of target-directed lower-limb movement without real sensory stimuli can enhance beta-band ERD; furthermore, MI of foot dorsiflexion induces significantly larger beta-band ERS than that of leg extension. These findings could be exploited for the training of BCIs such as powered prosthetics for disabled persons and neurorehabilitation systems for stroke patients.
Brain-Computer Interface Based on Generation of Visual Images
This paper examines the task of recognizing EEG patterns that correspond to performing three mental tasks: relaxation and imagining two types of pictures, faces and houses. The experiments were performed using two EEG headsets: BrainProducts ActiCap and Emotiv EPOC. The Emotiv headset is becoming widely used in consumer BCI applications, allowing for large-scale EEG experiments in the future. Since classification accuracy significantly exceeded the level of random classification during the first three days of the experiment with the EPOC headset, a control experiment was performed on the fourth day using the ActiCap. The control experiment showed that utilization of high-quality research equipment can enhance classification accuracy (up to 68% in some subjects) and that the accuracy is independent of the presence of EEG artifacts related to blinking and eye movement. This study also shows that a computationally inexpensive Bayesian classifier based on covariance matrix analysis yields classification accuracy similar to that of the more sophisticated Multi-class Common Spatial Patterns (MCSP) classifier on this problem.
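A covariance-based Bayesian classifier of the kind the abstract mentions can be sketched as follows: each class is modeled as a multivariate Gaussian, and a trial is assigned to the class with the highest log-likelihood. The paper's exact estimator, features, and regularization are not specified here; the ridge term and the synthetic data are assumptions of the sketch.

```python
# Hedged sketch of a Gaussian (Bayesian) classifier driven by per-class
# mean and covariance estimates, with maximum-log-likelihood assignment.
import numpy as np

class GaussianClassifier:
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.params = {}
        for c in self.classes:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            # Small ridge keeps the covariance invertible (an assumption).
            cov = np.cov(Xc.T) + 1e-6 * np.eye(X.shape[1])
            self.params[c] = (mu, np.linalg.inv(cov),
                              np.linalg.slogdet(cov)[1])
        return self

    def predict(self, X):
        def loglik(x, c):
            mu, icov, logdet = self.params[c]
            d = x - mu
            return -0.5 * (d @ icov @ d + logdet)  # Gaussian log-likelihood
        return np.array([max(self.classes, key=lambda c: loglik(x, c))
                         for x in X])

rng = np.random.default_rng(1)
X = np.r_[rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))]
y = np.r_[np.zeros(50), np.ones(50)]
acc = (GaussianClassifier().fit(X, y).predict(X) == y).mean()
```

The appeal noted in the abstract is computational: fitting reduces to estimating one mean vector and one covariance matrix per class, with no iterative optimization.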
Evidence for Human Fronto-Central Gamma Activity during Long-Term Memory Encoding of Word Sequences
Although human gamma activity (30–80 Hz) associated with visual processing is often reported, it is not clear to what extent gamma activity can be reliably detected non-invasively from frontal areas during complex cognitive tasks such as long-term memory (LTM) formation. We conducted a memory experiment composed of 35 blocks, each having three parts: LTM encoding, working memory (WM) maintenance and LTM retrieval. In the LTM encoding and WM maintenance parts, participants had to respectively encode or maintain the order of three sequentially presented words. During LTM retrieval, subjects had to reproduce these sequences. Using magnetoencephalography (MEG), we identified significant differences in gamma and beta activity. Robust gamma activity (55–65 Hz) in left BA6 (supplementary motor area (SMA)/pre-SMA) was stronger during LTM rehearsal than during WM maintenance. The gamma activity was sustained throughout the 3.4 s rehearsal period, during which a fixation cross was presented. Importantly, the difference in gamma-band activity correlated with memory performance over subjects. Furthermore, we observed a weak gamma power difference in left BA6 during the first half of the LTM rehearsal interval that was larger for successfully than for unsuccessfully reproduced word triplets. In the beta band, we found a power decrease in left anterior regions during LTM rehearsal compared to WM maintenance. This suppression of beta power also correlated with memory performance over subjects. Our findings show that an extended network of brain areas, characterized by oscillatory activity in different frequency bands, supports the encoding of word sequences in LTM. Gamma-band activity in BA6 possibly reflects memory processes associated with language and timing, and the suppression of beta activity at left frontal sensors is likely to reflect the release of inhibition directly associated with the engagement of language functions.
Exploring spatial-frequency-sequential relationships for motor imagery classification with recurrent neural network
Abstract Background Conventional methods for motor imagery brain-computer interfaces (MI-BCIs) suffer from limited numbers of samples and simplified features, and thus produce poor performance with spatial-frequency features and shallow classifiers. Methods Alternatively, this paper applies a deep recurrent neural network (RNN) with a sliding window cropping strategy (SWCS) to signal classification in MI-BCIs. Spatial-frequency features are first extracted by the filter bank common spatial pattern (FB-CSP) algorithm, and these features are cropped by the SWCS into time slices. By extracting spatial-frequency-sequential relationships, the cropped time slices are then fed into the RNN for classification. To overcome memory limitations, the commonly used gated recurrent unit (GRU) and long short-term memory (LSTM) unit are applied to the RNN architecture, and experimental results are used to determine which unit is more suitable for processing EEG signals. Results Experimental results on common BCI benchmark datasets show that the spatial-frequency-sequential relationships outperform all other competing spatial-frequency methods. In particular, the proposed GRU-RNN architecture achieves the lowest misclassification rates on all BCI benchmark datasets. Conclusion By introducing spatial-frequency-sequential relationships with cropped time-slice samples, the proposed method offers a novel way to construct high-accuracy, robust MI-BCIs from limited trials of EEG signals.
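The cropping step can be illustrated in isolation: a trial's feature time series is cut into overlapping slices, each of which serves as an extra training sample for the RNN. Window and step sizes below are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of a sliding-window cropping strategy (SWCS): turn one
# trial of shape (n_samples, n_features) into overlapping time slices.
import numpy as np

def sliding_window_crop(features, window, step):
    """Return overlapping slices, shape (n_slices, window, n_features)."""
    n = features.shape[0]
    starts = range(0, n - window + 1, step)
    return np.stack([features[s:s + window] for s in starts])

trial = np.arange(20.0).reshape(10, 2)   # 10 time samples, 2 features
slices = sliding_window_crop(trial, window=4, step=2)
# slices.shape == (4, 4, 2); each slice is one RNN input sample
```

Cropping multiplies the effective number of training samples from a limited set of trials, which is the motivation the abstract gives for combining SWCS with the RNN.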
Observational Learning of New Movement Sequences Is Reflected in Fronto-Parietal Coherence
Humans are unique in their ability for observational learning, i.e. the transmission of acquired knowledge and behavioral repertoire through observation of others' actions. In the present study, we used electrophysiological measures to investigate brain mechanisms of observational learning. The analysis investigated a possible functional coupling between occipital (alpha) and motor (mu) rhythms operating in the 10 Hz frequency range for translating “seeing” into “doing”. Subjects observed movement sequences consisting of six consecutive left- or right-hand button presses directed at one of two target buttons for subsequent imitation. Each movement sequence was presented four times, separated by short pause intervals for sequence rehearsal. During a control task, subjects observed the same movement sequences without a requirement for subsequent reproduction. Although both alpha and mu rhythms desynchronized during the imitation task relative to the control task, modulations in alpha and mu power were found to be largely independent of each other over time, arguing against a functional coupling of alpha and mu generators during observational learning. This independence was furthermore reflected in the absence of coherence between occipital and motor electrodes overlying the alpha and mu generators. Instead, coherence analysis revealed a pair of symmetric fronto-parietal networks, one over the left and one over the right hemisphere, showing stronger coherence during observation of movements than during pauses. Individual differences in fronto-parietal coherence were furthermore found to predict imitation accuracy. The properties of these networks, i.e. their fronto-parietal distribution, their ipsilateral organization and their sensitivity to the observation of movements, match closely the known properties of the mirror neuron system (MNS) as studied in the macaque brain.
These results indicate a functional dissociation between higher-order areas for observational learning (i.e. parts of the MNS as reflected in 10 Hz coherence measures) and peripheral structures (i.e. the lateral occipital gyrus for alpha; the central sulcus for mu) that provide low-level support for observation and motor imagery of action sequences.
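The coherence measure used in such analyses is magnitude-squared coherence between two channels, commonly estimated with Welch's method. The sketch below uses synthetic signals sharing a common 10 Hz drive; the channel labels, noise level, and segment length are all assumptions for illustration, not the study's recording setup.

```python
# Hedged sketch of magnitude-squared coherence between two "electrodes"
# that share a common 10 Hz component plus independent noise.
import numpy as np
from scipy.signal import coherence

fs = 250
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)
shared = np.sin(2 * np.pi * 10 * t)                # common 10 Hz drive
frontal = shared + 0.5 * rng.normal(size=t.size)   # "frontal" channel
parietal = shared + 0.5 * rng.normal(size=t.size)  # "parietal" channel

f, cxy = coherence(frontal, parietal, fs=fs, nperseg=512)
c10 = cxy[np.argmin(np.abs(f - 10.0))]  # high: shared 10 Hz component
c30 = cxy[np.argmin(np.abs(f - 30.0))]  # low: only independent noise
```

Coherence near 1 at 10 Hz with low values elsewhere is the signature of a shared oscillatory drive, which is how the fronto-parietal networks in the study are identified.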
Time-Frequency Analysis of Chemosensory Event-Related Potentials to Characterize the Cortical Representation of Odors in Humans
BACKGROUND: The recording of olfactory and trigeminal chemosensory event-related potentials (ERPs) has been proposed as an objective and non-invasive technique to study the cortical processing of odors in humans. Until now, the responses have been characterized mainly using across-trial averaging in the time domain. Unfortunately, chemosensory ERPs, in particular olfactory ERPs, exhibit a relatively low signal-to-noise ratio. Hence, although the technique is increasingly used in basic research as well as in clinical practice to evaluate people suffering from olfactory disorders, its current clinical relevance remains very limited. Here, we used a time-frequency analysis based on the wavelet transform to reveal EEG responses that are not strictly phase-locked to the onset of the chemosensory stimulus. We hypothesized that this approach would significantly enhance the signal-to-noise ratio of the EEG responses to chemosensory stimulation because, compared to conventional time-domain averaging, (1) it is less sensitive to temporal jitter and (2) it can reveal non-phase-locked EEG responses such as event-related synchronization and desynchronization. METHODOLOGY/PRINCIPAL FINDINGS: EEG responses to selective trigeminal and olfactory stimulation were recorded in 11 normosmic subjects. A Morlet wavelet was used to characterize the elicited responses in the time-frequency domain. We found that this approach markedly improved the signal-to-noise ratio of the obtained EEG responses, in particular following olfactory stimulation. Furthermore, the approach allowed us to characterize non-phase-locked components that could not be identified using conventional time-domain averaging. CONCLUSION/SIGNIFICANCE: By providing a more robust and complete view of how odors are represented in the human brain, our approach could constitute the basis for a robust tool to study olfaction, both for basic researchers and for clinicians.
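The Morlet-wavelet analysis described above can be sketched as follows: convolve the signal with complex Morlet wavelets and take the squared magnitude, which captures oscillatory power even when the response jitters in time across trials. The parameters (7 cycles, the frequency grid, the test signal) are illustrative assumptions, not the study's settings.

```python
# Hedged sketch of Morlet-wavelet time-frequency power estimation via
# direct convolution with complex Morlet wavelets.
import numpy as np

def morlet_power(signal, fs, freqs, n_cycles=7):
    """Return time-frequency power, shape (len(freqs), len(signal))."""
    power = np.empty((len(freqs), signal.size))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)       # Gaussian width in seconds
        t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
        wavelet = (np.exp(2j * np.pi * f * t)
                   * np.exp(-t ** 2 / (2 * sigma ** 2)))
        wavelet /= np.abs(wavelet).sum()         # crude normalization
        analytic = np.convolve(signal, wavelet, mode="same")
        power[i] = np.abs(analytic) ** 2         # phase-free power estimate
    return power

fs = 200
t = np.arange(0, 4, 1 / fs)                      # 4 s test signal
sig = np.sin(2 * np.pi * 8 * t)                  # pure 8 Hz oscillation
p = morlet_power(sig, fs, freqs=[4.0, 8.0, 16.0])
# power should peak in the 8 Hz row
```

Because the squared magnitude discards phase, trial-averaged wavelet power survives the temporal jitter that cancels non-phase-locked responses in conventional time-domain averaging, which is the advantage the abstract describes.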