
    Determinant of Covariance Matrix Model Coupled with AdaBoost Classification Algorithm for EEG Seizure Detection

    Experts usually inspect electroencephalogram (EEG) recordings page by page to identify epileptic seizures, which leads to heavy workloads and is time-consuming. The efficient extraction and effective selection of informative EEG features is therefore crucial in assisting clinicians to diagnose epilepsy accurately. In this paper, a determinant of covariance matrix (Cov–Det) model is proposed for reducing EEG dimensionality. First, EEG signals are segmented into intervals using a sliding-window technique. Then, Cov–Det is applied to each interval. To construct a feature vector, a set of statistical features is extracted from each interval. To eliminate redundant features, the Kolmogorov–Smirnov (KST) and Mann–Whitney U (MWUT) tests are integrated: the extracted features are ranked based on KST and MWUT metrics, and arithmetic operators are adopted to select the most pertinent features for each pair of EEG signal groups. The selected features are then fed into the proposed AdaBoost Back-Propagation neural network (AB_BP_NN) to classify EEG signals into seizure and seizure-free segments. Finally, AB_BP_NN is compared with several classical machine learning techniques; the results demonstrate that the proposed model yields negligible false-positive rates, a simpler design, and robustness in classifying epileptic signals. Two datasets, the Bern–Barcelona and Bonn datasets, are used for performance evaluation. The proposed technique achieved average accuracies of 100% and 98.86% on the Bern–Barcelona and Bonn datasets, respectively, a noteworthy improvement over current state-of-the-art methods.
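    The pipeline above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the signals are synthetic, the window width, step, and number of sub-segments (`n_sub`) are hypothetical choices, and the paper's full statistical feature set is reduced here to the log-determinant alone.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp, mannwhitneyu

    def sliding_windows(signal, width, step):
        """Segment a 1-D signal into (possibly overlapping) intervals."""
        return np.array([signal[i:i + width]
                         for i in range(0, len(signal) - width + 1, step)])

    def cov_det_features(windows, n_sub=4):
        """Per interval: split into n_sub sub-segments, form their covariance
        matrix, and keep log|Cov| as a single dimensionality-reduced feature."""
        feats = []
        for w in windows:
            subs = w[: len(w) - len(w) % n_sub].reshape(n_sub, -1)
            cov = np.cov(subs)
            _sign, logdet = np.linalg.slogdet(cov)
            feats.append(logdet)
        return np.array(feats)

    rng = np.random.default_rng(0)
    normal = sliding_windows(rng.normal(0, 1, 4096), 512, 256)
    seizure = sliding_windows(rng.normal(0, 3, 4096), 512, 256)  # higher-variance stand-in

    f_normal = cov_det_features(normal)
    f_seizure = cov_det_features(seizure)

    # Rank the feature by class separability, as the KST/MWUT selection step does.
    ks_stat, ks_p = ks_2samp(f_normal, f_seizure)
    mwu_stat, mwu_p = mannwhitneyu(f_normal, f_seizure)
    print(f"KS p={ks_p:.3g}, MWU p={mwu_p:.3g}")
    ```

    Features with small p-values separate the two classes well and would be retained for the classifier; the AdaBoost-BP network itself is omitted here.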

    Design of a Multi-biometric Platform, based on physical traits and physiological measures: Face, Iris, Ear, ECG and EEG

    Security and safety have been among the main concerns of both governments and private companies in recent years, driving growing interest and investment in biometric recognition and video surveillance, especially after the events of September 2001. Outlay assessments of the U.S. government for the years 2001-2005 estimate that homeland security spending climbed from $56.0 billion in 2001 to almost $100 billion in 2005. In this lapse of time, new pattern recognition techniques have been developed and, even more importantly, new biometric traits have been investigated and refined; besides the well-known physical and behavioral characteristics, physiological measures have also been studied, providing more features to enhance the discrimination of individuals. This dissertation proposes the design of a multimodal biometric platform, FAIRY, based on the following biometric traits: ear, face, iris, EEG and ECG signals. The thesis presents the modular architecture of the platform, together with the results obtained in solving the recognition problems related to the different biometrics and their possible fusion. Finally, the pattern recognition issues concerning the area of video surveillance are discussed.
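    The abstract mentions fusing the different biometrics without specifying how. One common option is score-level weighted sum-rule fusion after min-max normalization, sketched below; all scores, weights, and function names are hypothetical illustrations, not the FAIRY platform's actual scheme.

    ```python
    import numpy as np

    def minmax_normalize(scores):
        """Map raw matcher scores to [0, 1] so traits become comparable."""
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / (s.max() - s.min())

    def fuse(score_sets, weights):
        """Weighted sum-rule fusion of per-trait match scores against a gallery."""
        w = np.asarray(weights, dtype=float)
        return sum(wi * minmax_normalize(s)
                   for wi, s in zip(w / w.sum(), score_sets))

    # Hypothetical match scores of one probe against a 4-identity gallery.
    face = [0.91, 0.40, 0.35, 0.50]
    iris = [0.80, 0.85, 0.10, 0.20]   # iris is ambiguous between identities 1 and 2
    ecg  = [0.70, 0.30, 0.60, 0.10]

    fused = fuse([face, iris, ecg], weights=[0.4, 0.4, 0.2])
    print("accepted identity:", int(np.argmax(fused)))
    ```

    Even when one matcher is ambiguous, the weighted combination of the remaining traits can still resolve the identity, which is the usual motivation for multimodal fusion.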

    Ubiquitous Technologies for Emotion Recognition

    Emotions play a very important role in how we think and behave. As such, the emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions may change is thus of much relevance to understanding human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions, continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions.

    Improved Activity Recognition Combining Inertial Motion Sensors and Electroencephalogram Signals

    Human activity recognition and neural activity analysis are the basis for human computational neuroethology research, which deals with the simultaneous analysis of behavioral ethogram descriptions and neural activity measurements. Wireless electroencephalography (EEG) and wireless inertial measurement units (IMU) allow experimental data recording with improved ecological validity, where the subjects can carry out natural activities while data recording is minimally invasive. Specifically, we aim to show that EEG and IMU data fusion allows improved human activity recognition in a natural setting. We have defined an experimental protocol composed of natural sitting, standing and walking activities, and we have recruited subjects at two sites: in-house (N = 4) and out-house (N = 12) populations with different demographics. Experimental protocol data capture was carried out with validated commercial systems. Classifier model training and validation were carried out with the scikit-learn open-source machine learning Python package. EEG features consist of the amplitudes of the standard EEG frequency bands. Inertial features were the instantaneous positions of the tracked body points after moving-average smoothing to remove noise. We carry out three validation processes: (a) a 10-fold cross-validation process per experimental protocol repetition, (b) the inference of the ethograms, and (c) transfer learning from each experimental protocol repetition to the remaining repetitions. The in-house accuracy results were lower and much more variable than the out-house session results. In general, random forest was the best-performing classifier model. The best cross-validation results, ethogram accuracy, and transfer learning were achieved with the fusion of EEG and IMU data. Transfer learning behaved poorly compared to classification on the same protocol repetition, but its accuracy was still greater than 0.75 on average for the out-house data sessions. 
Transfer learning accuracy among repetitions of the same subject was above 0.88 on average. Ethogram prediction accuracy was above 0.96 on average. Therefore, we conclude that wireless EEG and IMUs allow for the definition of natural experimental designs with high ecological validity toward human computational neuroethology research. The fusion of EEG and IMU signals improves activity and ethogram recognition.

This work has been partially supported by FEDER funds through MINECO Project TIN2017-85827-P. Special thanks to Naiara Vidal from IMH, who conducted the recruitment process in the framework of the Langileok project funded by the Elkartek program. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 777720
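The feature-level fusion and 10-fold cross-validation described above can be sketched with scikit-learn as follows. The EEG band amplitudes and smoothed IMU positions are replaced here by synthetic stand-ins, and the feature dimensions, class structure, and forest size are assumptions, not the study's actual configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 300
# Stand-ins for the real features: EEG band amplitudes (delta..gamma) and
# smoothed instantaneous positions of IMU-tracked body points.
y = rng.integers(0, 3, n)                      # 0 = sit, 1 = stand, 2 = walk
eeg = rng.normal(0, 1, (n, 5)) + y[:, None] * 0.3
imu = rng.normal(0, 1, (n, 9)) + y[:, None] * 0.5

def cv_accuracy(X, y):
    """Mean accuracy of a random forest over 10-fold cross-validation."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X, y, cv=10).mean()

acc_eeg = cv_accuracy(eeg, y)
acc_imu = cv_accuracy(imu, y)
acc_fused = cv_accuracy(np.hstack([eeg, imu]), y)  # early (feature-level) fusion
print(f"EEG {acc_eeg:.2f}  IMU {acc_imu:.2f}  fused {acc_fused:.2f}")
```

Concatenating the modality features before training is the simplest fusion strategy; on the synthetic data it typically matches or exceeds either modality alone, mirroring the paper's finding that EEG+IMU fusion gives the best results.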

    State of the Art of Audio- and Video-Based Solutions for AAL

    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. 
Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals as well as to assess their vital parameters. Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can derive from processing audio signals. Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate setting where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. 
It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely lifelogging and self-monitoring, remote monitoring of vital signs, emotional state recognition, food intake monitoring, activity and behaviour recognition, activity and personal assistance, gesture recognition, fall detection and prevention, mobility assessment and frailty recognition, and cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake in real-world settings of AAL technologies. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.