86 research outputs found

    Making sense of pervasive signals: a machine learning approach

    This study focused on challenges arising from noisy and complex pervasive data. We proposed new Bayesian nonparametric models to infer co-patterns from multi-channel data collected from pervasive devices. By making sense of pervasive data, the study contributes to the development of Machine Learning and Data Mining in the Big Data era.

    Deep-seeded Clustering for Unsupervised Valence-Arousal Emotion Recognition from Physiological Signals

    Emotions play a significant role in the cognitive processes of the human brain, such as decision making, learning and perception. The use of physiological signals, combined with emerging machine learning methods, has been shown to lead to more objective, reliable and accurate emotion recognition. Supervised learning methods have dominated the attention of the research community, but the difficulty of collecting the needed labels makes emotion recognition hard in large-scale semi-controlled or uncontrolled experiments. Unsupervised methods are increasingly being explored; however, sub-optimal signal feature selection and label identification limit their accuracy and applicability. This article proposes an unsupervised deep clustering framework for emotion recognition from physiological and psychological data. Tests on the open benchmark dataset WESAD show that deep k-means and deep c-means distinguish the four quadrants of Russell's circumplex model of affect with an overall accuracy of 87%. Seeding the clusters with the subjects' subjective assessments helps to circumvent the need for labels. Comment: 7 pages, 1 figure, 2 tables
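The cluster-seeding idea in the abstract above can be sketched with ordinary k-means whose initial centroids are computed from subject self-assessments instead of random starts. This is a minimal illustration of the seeding principle, not the paper's deep clustering pipeline; the data and names are hypothetical.

```python
import numpy as np

def seeded_kmeans(X, seed_labels, n_iter=50):
    """k-means whose initial centroids are the means of the groups given by
    seed_labels (e.g. subjects' self-assessed affect quadrants)."""
    X = np.asarray(X, dtype=float)
    seed_labels = np.asarray(seed_labels)
    ks = np.unique(seed_labels)
    # seed step: one centroid per self-assessed group
    centroids = np.stack([X[seed_labels == k].mean(axis=0) for k in ks])
    for _ in range(n_iter):
        # assign each sample to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # recompute centroids; keep the old one if a cluster empties
        new_centroids = np.stack([
            X[assign == i].mean(axis=0) if np.any(assign == i) else centroids[i]
            for i in range(len(ks))
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return assign, centroids

# toy feature vectors standing in for physiological-signal features
X = [[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]]
assign, centroids = seeded_kmeans(X, seed_labels=[0, 0, 1, 1])
```

Because the seeds already come from the subjects' own assessments, the final clusters can be read off directly as affect quadrants without a separate label-matching step.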

    Advanced Biometrics with Deep Learning

    Biometrics, such as fingerprint, iris, face, hand print, hand vein, speech and gait recognition, have become commonplace nowadays as a means of identity management in various applications. Biometric systems follow a typical pipeline composed of separate preprocessing, feature extraction and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to the conventional data-agnostic, handcrafted preprocessing and feature extraction used in biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction and recognition, based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into 4 categories according to biometric modality: namely, face biometrics, medical electronic signals (EEG and ECG), voice print, and others

    An approach based on tunicate swarm algorithm to solve partitional clustering problem

    The tunicate swarm algorithm (TSA) is a newly proposed population-based swarm optimizer for solving global optimization problems. TSA uses the best solution in the population to improve the intensification and diversification of the tunicates, increasing the chance that search agents find better positions. The aim of clustering algorithms is to distribute data instances into groups according to the similarity and dissimilarity of their features; with a proper clustering algorithm, the dataset is separated into groups whose mutual similarity is expected to be minimal. In this work, an approach based on TSA is first proposed for solving the partitional clustering problem. The TSA is then applied to ten different clustering problems taken from the UCI Machine Learning Repository, and its clustering performance is compared with that of three well-known clustering algorithms: fuzzy c-means, k-means and k-medoids. The experimental results and comparisons show that the TSA-based approach is a highly competitive and robust optimizer for partitional clustering problems
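The idea of treating partitional clustering as a global optimization problem can be sketched as follows: each candidate solution encodes k centroids, the fitness is the sum of squared distances of points to their nearest centroid, and a population of agents moves toward the best solution found so far. The update rule below is a generic best-guided step standing in for a swarm optimizer; it is NOT the actual TSA update equations, and all names are illustrative.

```python
import random

def sse(centroids, data):
    # partitional clustering objective: total squared distance of each
    # point to its nearest centroid (lower is better)
    return sum(min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids)
               for p in data)

def swarm_cluster(data, k, agents=20, iters=200, seed=0):
    """Best-guided population search over sets of k centroids -- a generic
    sketch of applying a swarm optimizer (such as TSA) to clustering."""
    rng = random.Random(seed)
    dim = len(data[0])
    lo = [min(p[d] for p in data) for d in range(dim)]
    hi = [max(p[d] for p in data) for d in range(dim)]
    def rand_sol():
        return [[rng.uniform(lo[d], hi[d]) for d in range(dim)]
                for _ in range(k)]
    pop = [rand_sol() for _ in range(agents)]
    best = min(pop, key=lambda s: sse(s, data))
    for _ in range(iters):
        for i, sol in enumerate(pop):
            step = rng.random()
            # move each centroid toward the best solution, with a small
            # random perturbation scaled to the data range
            cand = [[c[d] + step * (b[d] - c[d])
                     + rng.gauss(0, 0.05 * (hi[d] - lo[d] + 1e-12))
                     for d in range(dim)]
                    for c, b in zip(sol, best)]
            if sse(cand, data) < sse(sol, data):   # accept only improvements
                pop[i] = cand
                if sse(cand, data) < sse(best, data):
                    best = cand
    return best

data = [[0.0, 0.0], [0.5, 0.0], [10.0, 10.0], [10.5, 10.0]]
best = swarm_cluster(data, k=2)
```

Because only improving moves are accepted, the objective value of the returned solution can never be worse than that of the best random initial solution; the real TSA adds its own jet-propulsion and swarm-behaviour terms to escape local optima.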

    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    This reprint provides a collection of papers illustrating the state-of-the-art of smart processing of data coming from wearable, implantable or portable sensors. Each paper presents the design, databases used, methodological background, obtained results, and their interpretation for biomedical applications. Revealing examples are brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, thorough analysis of compressive sensing of ECG signals, development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine

    Biometrics

    Biometrics uses methods for the unique recognition of humans based upon one or more intrinsic physical or behavioral traits. In computer science, particularly, biometrics is used as a form of identity access management and access control. It is also used to identify individuals in groups that are under surveillance. The book consists of 13 chapters, each focusing on a certain aspect of the problem. The chapters are divided into three sections: physical biometrics, behavioral biometrics and medical biometrics. The key objective of the book is to provide a comprehensive reference and text on human authentication and identity verification from physiological, behavioural and other points of view. It aims to publish new insights into current innovations in computer systems and technology for biometrics development and its applications. The book was reviewed by the editor Dr. Jucheng Yang and by many of the guest editors, such as Dr. Girija Chetty, Dr. Norman Poh, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, who also made a significant contribution to the book

    Acute myocardial infarction patient data to assess healthcare utilization and treatments.

    The goal of this study is to use a data mining framework to assess the three main treatments for acute myocardial infarction: thrombolytic therapy, percutaneous coronary intervention (percutaneous angioplasty), and coronary artery bypass surgery. The need for a data mining framework arises from the use of real-world data rather than the highly clean and homogeneous data found in most clinical trials and epidemiological studies. The assessment is based on determining a profile of patients undergoing an episode of acute myocardial infarction, determining resource utilization by treatment, and creating a model that predicts each treatment's resource utilization and cost. Text mining is used to find a subset of input attributes that characterize subjects who undergo the different treatments for acute myocardial infarction, as well as distinct resource utilization profiles. Classical statistical methods are used to evaluate the results of text clustering. The features selected by supervised learning are used to build predictive models for resource utilization and are compared with features selected by traditional statistical methods for a predictive model with the same outcome. Sequence analysis is used to determine the sequence of treatment of acute myocardial infarction. The resulting sequence is used to construct a probability tree that defines the basis for a cost-effectiveness analysis comparing acute myocardial infarction treatments. To determine effectiveness, survival analysis methodology is implemented to assess the occurrence of death during hospitalization, the likelihood of a repeated episode of acute myocardial infarction, and the length of time between the reoccurrence of an episode of acute myocardial infarction or the occurrence of death. The complexity of this study stems mainly from the data source used: administrative data from insurance claims. This data source was not originally designed for the study of health outcomes or health resource utilization. However, by transforming record tables from many-to-many relations to one-to-one relations, they became useful in tracking the evolution of disease and disease outcomes. Also, by transforming tables from a wide format to a long format, the records became analyzable by many data mining algorithms. Moreover, this study contributed to the fields of applied mathematics and public health by implementing a sequence analysis on consecutive procedures to determine the sequence of events that describes the evolution of a hospitalization for acute myocardial infarction. The same data transformation and algorithm can be used in the study of rare diseases whose evolution is not well understood
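The wide-to-long reshaping described above can be illustrated in plain Python: each wide claims row carries one column per procedure slot, and the long form emits one row per recorded procedure. This is a minimal sketch of the transformation; the field names ("patient", "proc1", ...) and procedure codes are hypothetical, and a real claims pipeline would use a database or pandas.

```python
def wide_to_long(records, id_col, value_cols):
    """Reshape wide rows (one column per procedure slot) into long rows
    (one row per recorded procedure), dropping empty slots."""
    long_rows = []
    for rec in records:
        for col in value_cols:
            value = rec.get(col)
            if value is not None:
                long_rows.append({id_col: rec[id_col],
                                  "slot": col,
                                  "procedure": value})
    return long_rows

# hypothetical wide-format claims rows
claims = [
    {"patient": 1, "proc1": "thrombolysis", "proc2": None},
    {"patient": 2, "proc1": "PCI", "proc2": "CABG"},
]
rows = wide_to_long(claims, "patient", ["proc1", "proc2"])
```

In the long form, each patient's procedures appear as an ordered sequence of rows, which is exactly the shape that sequence-analysis and most data mining algorithms expect.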

    Proceedings of ICMMB2014


    Agrupamiento dinámico de complejos QRS en tiempo real (Real-time dynamic clustering of QRS complexes)

    This thesis falls within the field of automatic analysis of electrocardiographic (ECG) signals. It presents an adaptive method for clustering heartbeats in real time, whose goal is to separate the beats present in a multichannel electrocardiographic recording according to their rhythm and their activation/propagation pattern in the cardiac tissue, representing them as a dynamic set of clusters. An implementation was developed that made it possible to verify compliance with the timing constraints required for real-time execution, and to carry out a validation on the reference databases "MIT-BIH Arrhythmia Database" and "AHA ECG Database" following the recommendations of the international standards. No prior work addressing the goal of dynamic clustering was found in the literature, so the results obtained were compared with those published for static clustering methods, showing equal or superior performance
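The adaptive real-time clustering described above can be sketched as an online procedure: each incoming beat (as a feature vector) joins the nearest existing cluster if it is close enough, otherwise it starts a new cluster, so the set of clusters grows dynamically. The fixed distance threshold and running-mean centroid update below are illustrative simplifications, not the thesis's actual method.

```python
import math

class OnlineBeatClusterer:
    """Minimal sketch of dynamic beat clustering: one pass, no stored beats,
    so each sample is processed in time proportional to the cluster count."""
    def __init__(self, threshold):
        self.threshold = threshold  # max distance to join an existing cluster
        self.centroids = []         # one representative per cluster
        self.counts = []            # beats assigned to each cluster

    def add(self, beat):
        """Assign one incoming beat; return its cluster index."""
        if self.centroids:
            dists = [math.dist(beat, c) for c in self.centroids]
            i = min(range(len(dists)), key=dists.__getitem__)
            if dists[i] <= self.threshold:
                n = self.counts[i]
                # running-mean update keeps the centroid adaptive
                self.centroids[i] = [(c * n + x) / (n + 1)
                                     for c, x in zip(self.centroids[i], beat)]
                self.counts[i] = n + 1
                return i
        # no cluster is close enough: open a new one
        self.centroids.append(list(beat))
        self.counts.append(1)
        return len(self.centroids) - 1

clusterer = OnlineBeatClusterer(threshold=1.0)
ids = [clusterer.add(b) for b in [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]]
```

Because each beat is handled in a single bounded-time step, this structure is compatible with the hard real-time constraints the thesis validates against.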