32 research outputs found

    Towards EEG-based BCI driven by emotions for addressing BCI-Illiteracy: a meta-analytic review

    Many critical aspects affect the correct operation of a Brain Computer Interface (BCI). The term 'BCI illiteracy' describes the inability of a user to operate a given BCI paradigm. At present, no universal solution exists, and the search for innovative protocols to drive a BCI is mandatory. This work presents a meta-analytic review of recent advances in emotion recognition, with the perspective of using emotions as voluntary, stimulus-independent commands for BCIs. 60 papers based on electroencephalography measurements were selected to evaluate which emotions have been recognised most often and which brain regions they activate. It was found that happiness, sadness, anger and calm were the most recognised emotions. Relevant discriminant locations for emotion recognition, and for the particular case of discrete emotion recognition, were identified in the temporal, frontal and parietal areas. The meta-analysis was mainly performed on stimulus-elicited emotions, due to the limited amount of literature about self-induced emotions. The obtained results represent a good starting point for the development of BCIs driven by emotions and make it possible to: (1) ascertain that emotions are measurable and recognisable from one another; (2) select a subset of the most recognisable emotions and the corresponding active brain regions

    Special Issue: Emergency Medicine and Community Medicine in Tokushima Prefecture: Current Status and Prospects: Foreword

    We introduce GFrames, a novel local reference frame (LRF) construction for 3D meshes and point clouds. GFrames are based on the computation of the intrinsic gradient of a scalar field defined on top of the input shape. The resulting tangent vector field defines a repeatable tangent direction of the local frame at each point; importantly, it directly inherits the properties and invariance classes of the underlying scalar function, making it remarkably robust under strong sampling artifacts, vertex noise, as well as non-rigid deformations. Existing local descriptors can directly benefit from our repeatable frames, as we showcase in a selection of 3D vision and shape analysis applications where we demonstrate state-of-the-art performance in a variety of challenging settings
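    The core of the construction above is the intrinsic gradient of a scalar field defined on the shape. As a minimal illustration (not the authors' implementation), the sketch below computes the gradient of a piecewise-linear scalar field over a single mesh triangle; the resulting vector lies in the triangle plane and could seed a tangent direction of a local frame. The function name is illustrative.

```python
import numpy as np

def face_gradient(p0, p1, p2, f0, f1, f2):
    """Gradient of the piecewise-linear interpolant of a scalar field
    over one triangle (p0, p1, p2) with vertex values (f0, f1, f2)."""
    n = np.cross(p1 - p0, p2 - p0)          # face normal (unnormalised)
    area = 0.5 * np.linalg.norm(n)
    n = n / np.linalg.norm(n)
    # Each vertex contributes its value times the opposite edge,
    # rotated into the triangle plane by n x e.
    grad = (f0 * np.cross(n, p2 - p1) +
            f1 * np.cross(n, p0 - p2) +
            f2 * np.cross(n, p1 - p0)) / (2.0 * area)
    return grad  # lies in the triangle plane (tangent to the surface)

# For f(x, y, z) = x on a triangle in the xy-plane, the intrinsic
# gradient is the in-plane direction (1, 0, 0).
p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([1.0, 0.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0])
g = face_gradient(p0, p1, p2, 0.0, 1.0, 0.0)
print(g)  # → [1. 0. 0.]
```

    Per-vertex tangent directions would then be obtained by averaging such face gradients around each vertex, which is where the invariance of the underlying scalar function carries over to the frame.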

    Optimizing Nozzle Travel Time in Proton Therapy

    Proton therapy is a cancer therapy that is more expensive than classical radiotherapy but is considered the gold standard in several situations. Since the number of delivery facilities for this technique is also limited, it is fundamental to increase the number of patients treated over time. The objective of this work is to offer insight into the problem of optimizing the part of the delivery time of a treatment plan that relates to the movements of the system. We denote it as the Nozzle Travel Time Problem (NTTP), in analogy with the Leaf Travel Time Problem (LTTP) in classical radiotherapy. In particular, this work: (i) describes a mathematical model of the delivery system and formalizes the optimization problem of finding the optimal sequence of movements of the system (nozzle and bed) that covers the prescribed irradiation directions; (ii) provides an optimization pipeline that solves the problem for instances with far more irradiation directions than are usually employed in clinical practice; (iii) reports preliminary results on the effects of employing two different resolution strategies within the aforementioned pipeline, which rely on an exact Traveling Salesman Problem (TSP) solver, Concorde, and an efficient Vehicle Routing Problem (VRP) heuristic, VROOM
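    The sequencing problem behind the NTTP can be illustrated with a toy stand-in for the solvers named above (Concorde and VROOM are external tools): a nearest-neighbor heuristic ordering unit irradiation directions, with angular distance as an assumed proxy for nozzle travel time. This is a sketch of the problem shape only, not the paper's pipeline.

```python
import numpy as np

def angular_cost(u, v):
    """Rotation needed to move between two unit directions (radians);
    an assumed proxy for nozzle/bed travel time."""
    return float(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

def nearest_neighbor_tour(directions, start=0):
    """Greedy TSP-style sequencing: always move to the angularly
    closest unvisited irradiation direction."""
    unvisited = set(range(len(directions))) - {start}
    tour = [start]
    while unvisited:
        last = directions[tour[-1]]
        nxt = min(unvisited, key=lambda j: angular_cost(last, directions[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Four unit directions; a valid sequence visits each exactly once.
dirs = [np.array(d, dtype=float) for d in
        [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, 0, 1)]]
tour = nearest_neighbor_tour(dirs)
print(tour)
```

    An exact solver replaces the greedy loop with an optimal tour over the same cost matrix; the model of admissible nozzle and bed movements is what distinguishes the NTTP from a plain TSP.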

    Subject-specific frequency band for motor imagery EEG signal recognition based on common spatial spectral pattern

    Over the last decade, the processing of biomedical signals using machine learning algorithms has gained widespread attention. Amongst these, one of the most important signals is the electroencephalography (EEG) signal, which is used to monitor brain activity. The brain-computer interface (BCI) has also become a hot topic of research, where EEG signals are usually acquired using non-invasive sensors. In this work, we propose a scheme based on the common spatial spectral pattern (CSSP) and optimization of temporal filters for improved motor imagery (MI) EEG signal recognition. CSSP is proposed as it improves the spatial resolution, while the temporal filter is optimized for each subject, since the frequency band containing the most significant information varies across subjects. The proposed scheme is evaluated using two publicly available datasets: BCI competition III dataset IVa and BCI competition IV dataset 1. The proposed scheme obtained promising results and outperformed other state-of-the-art methods. The findings of this work will be beneficial for developing improved BCI systems
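    CSSP extends the common spatial pattern (CSP) method by appending a temporally delayed copy of the signal, which adds spectral selectivity. The core CSP step, sketched below on synthetic two-channel covariance matrices, finds spatial filters that maximize one class's variance relative to the other via whitening plus eigendecomposition; this is a generic illustration, not the paper's full CSSP pipeline.

```python
import numpy as np

def csp_filters(c1, c2):
    """Common spatial pattern filters from two class covariance matrices.
    Returns filters (as columns) sorted by decreasing class-1 variance ratio."""
    # Whiten the composite covariance c1 + c2.
    evals, evecs = np.linalg.eigh(c1 + c2)
    p = evecs @ np.diag(evals ** -0.5) @ evecs.T      # whitening matrix
    # Diagonalise the whitened class-1 covariance.
    lam, b = np.linalg.eigh(p @ c1 @ p.T)
    order = np.argsort(lam)[::-1]
    return p.T @ b[:, order], lam[order]

# Synthetic example: class 1 strong on channel 0, class 2 on channel 1.
c1 = np.diag([4.0, 1.0])
c2 = np.diag([1.0, 4.0])
w, lam = csp_filters(c1, c2)
w1 = w[:, 0]                         # most class-1-discriminative filter
print(w1 @ c1 @ w1, w1 @ c2 @ w1)   # class-1 variance dominates
```

    In CSSP the same decomposition is applied to the signal concatenated with a delayed version of itself, so each filter implicitly carries a frequency response; the subject-specific temporal filter in this work tunes which band that response emphasizes.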

    Consistent detection of global predicates


    Personalized models for facial emotion recognition through transfer learning

    Emotions represent a key aspect of human life and behavior. In recent years, automatic emotion recognition has become an important component in the fields of affective computing and human-machine interaction. Among the many physiological and kinematic signals that could be used to recognize emotions, acquiring facial expression images is one of the most natural and inexpensive approaches. The creation of a generalized, inter-subject model for emotion recognition from facial expressions is still a challenge, due to anatomical, cultural and environmental differences. On the other hand, using traditional machine learning approaches to create a subject-customized, personal model would require a large dataset of labelled samples. For these reasons, in this work we propose the use of transfer learning to produce subject-specific models for extracting the emotional content of facial images in the valence/arousal dimensions. Transfer learning allows us to reuse the knowledge assimilated from a large multi-subject dataset by a deep convolutional neural network and employ its feature extraction capability in the single-subject scenario. In this way, it is possible to reduce the amount of labelled data necessary to train a personalized model, compared to relying on the subject's data alone. Our results suggest that generalized transferred knowledge, in conjunction with a small amount of personal data, is sufficient to obtain high recognition performance and improvements over both a generalized model and personal models. Good performances were obtained in both dimensions (RMSE = 0.09 and RMSE = 0.10 for valence and arousal, respectively). Overall, the results suggested that both the transferred knowledge and the personal data helped achieve this improvement, even though they alternated in providing the main contribution. Moreover, in this task, we observed that the benefits of transferring knowledge are so remarkable that no specific active or passive sampling techniques are needed for selecting the images to be labelled
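    The transfer-learning recipe described above (frozen generic feature extractor, small personalised head) can be caricatured in a few lines. In the paper the extractor is a deep CNN pretrained on a multi-subject face dataset; here a fixed random projection stands in for it, and a ridge-regression head plays the role of the personalised valence predictor. Everything below is synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained, FROZEN feature extractor (in the paper:
# a deep CNN trained on a large multi-subject dataset).
W_frozen = rng.normal(size=(32, 8))
def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)   # fixed projection + ReLU

# Small "personal" dataset: 20 labelled samples of one hypothetical subject.
x_personal = rng.normal(size=(20, 32))
true_head = rng.normal(size=8)
y_valence = extract_features(x_personal) @ true_head   # synthetic labels

# Personalised head: ridge regression on the frozen features only.
phi = extract_features(x_personal)
alpha = 1e-6
head = np.linalg.solve(phi.T @ phi + alpha * np.eye(8), phi.T @ y_valence)

pred = phi @ head
rmse = np.sqrt(np.mean((pred - y_valence) ** 2))
print(rmse)   # near zero on this synthetic, self-consistent example
```

    The point of the construction is that only the small head is fitted to personal data; the expensive representation is reused unchanged, which is why few labelled samples suffice.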

    Emotion Recognition for Human-Robot Interaction: Recent Advances and Future Perspectives

    A fascinating challenge in the field of human–robot interaction is the possibility to endow robots with emotional intelligence in order to make the interaction more intuitive, genuine, and natural. To achieve this, a critical point is the capability of the robot to infer and interpret human emotions. Emotion recognition has been widely explored in the broader fields of human–machine interaction and affective computing. Here, we report recent advances in emotion recognition, with particular regard to the human–robot interaction context. Our aim is to review the state of the art of currently adopted emotional models, interaction modalities, and classification strategies and offer our point of view on future developments and critical issues. We focus on facial expressions, body poses and kinematics, voice, brain activity, and peripheral physiological responses, also providing a list of available datasets containing data from these modalities

    A Deep Learning Approach for Mood Recognition from Wearable Data

    Emotion and mood recognition plays a key role in human-robot interaction, especially in the context of socially assistive robotics. Mood-aware robots could be useful as companions and social assistants for elders and people affected by depression and other mood disorders. An interesting option for continuously tracking a user's mood is the use of wearable and mobile devices. However, classifying mood from physiological and kinematic data is still a challenge, due to inter-subject differences: on one hand, 'one-fits-all' classification approaches usually achieve lower accuracy than person-specific methods; on the other hand, personalized models generally require a large amount of data from a single subject to be trained and therefore become effective only after long periods of acquisition. In this paper, we propose a deep learning approach for mood recognition from a publicly available dataset that includes gyroscope, accelerometer, and heart-rate data. We propose the use of long short-term memory (LSTM) networks, testing them both as classifiers and as feature extractors in hybrid models. We compared their performance both against and in conjunction with traditional machine learning approaches, namely support vector machines (SVM) and Gaussian mixture models (GMM). We also considered transfer learning strategies to reduce the amount of personal data needed to train the model. Our results show that the use of LSTMs significantly improves classification accuracy with respect to traditional machine learning approaches, especially if employed as feature extractors and combined with an SVM. However, we observed that transfer learning does not achieve significant results in boosting the training of a personalized model
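    The hybrid LSTM-as-feature-extractor idea amounts to running a sequence of wearable samples through the recurrent cell and handing the final hidden state to a classical classifier such as an SVM. A minimal numpy forward pass of a single LSTM layer is sketched below; the weights are random (a trained model would learn them), and the dimensions are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_features(x_seq, Wx, Wh, b, hidden):
    """Run a single-layer LSTM over a sequence and return the final
    hidden state, used as a fixed-length feature vector."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in x_seq:
        z = Wx @ x + Wh @ h + b            # all four gates, stacked
        i = sigmoid(z[0*hidden:1*hidden])  # input gate
        f = sigmoid(z[1*hidden:2*hidden])  # forget gate
        g = np.tanh(z[2*hidden:3*hidden])  # candidate cell state
        o = sigmoid(z[3*hidden:4*hidden])  # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

rng = np.random.default_rng(1)
hidden, n_in, t = 6, 4, 50    # e.g. 4 accelerometer/heart-rate channels, 50 steps
Wx = rng.normal(scale=0.1, size=(4 * hidden, n_in))
Wh = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
features = lstm_features(rng.normal(size=(t, n_in)), Wx, Wh, b, hidden)
print(features.shape)   # (6,) — this vector would be fed to the SVM
```

    The attraction of the hybrid scheme is that the LSTM summarises arbitrarily long windows into a fixed-length vector, while the SVM supplies a robust decision boundary on few training sequences.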

    Classification strategies for a single-trial binary Brain Computer Interface based on remembering unpleasant odors

    A Brain Computer Interface (BCI) is a useful instrument to support human communication. In recent years, BCI systems have frequently been implemented using EEG. Regarding the communication paradigm, a very large number of strategies exist and, recently, a paradigm based on remembering unpleasant odors has also been defined. However, the quality of the signals collected with this paradigm is very poor, due to the absence of a real stimulus (the stimulus consists in remembering a disgusting situation). For this reason, a crucial point is the choice of a very efficient classification algorithm to improve the accuracy of the BCI. The present paper describes and compares classification strategies for this type of BCI system. The proposed methods and the experimental setup are described, and experimental measurements are presented and discussed
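    A common baseline among the classification strategies compared for single-trial binary BCIs is Fisher's linear discriminant; the abstract does not name the specific classifiers, so the sketch below shows this baseline on synthetic two-dimensional features (e.g. band powers), not the paper's actual method or data.

```python
import numpy as np

def fit_lda(xa, xb):
    """Fisher linear discriminant for a binary, single-trial classifier.
    Returns the projection w and the midpoint decision threshold."""
    mu_a, mu_b = xa.mean(axis=0), xb.mean(axis=0)
    # Pooled within-class scatter matrix.
    sw = np.cov(xa.T, bias=True) * len(xa) + np.cov(xb.T, bias=True) * len(xb)
    w = np.linalg.solve(sw, mu_a - mu_b)
    threshold = 0.5 * (w @ mu_a + w @ mu_b)
    return w, threshold

# Two well-separated synthetic feature clusters standing in for the
# "remembering" vs "rest" trial classes.
rng = np.random.default_rng(2)
class_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(30, 2))
class_b = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(30, 2))
w, th = fit_lda(class_a, class_b)

pred_a = class_a @ w > th        # True -> class A
pred_b = class_b @ w > th
acc = (pred_a.sum() + (~pred_b).sum()) / 60.0
print(acc)   # near-perfect on this separable toy example
```

    For the low-SNR signals described above, the interesting comparison is precisely how such simple linear baselines fare against heavier classifiers when the "stimulus" is only an imagined memory.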