
    Guest Editorial Cardiovascular Health Informatics: Risk Screening and Intervention

    Despite enormous efforts to prevent cardiovascular disease (CVD), it remains the leading cause of death in most countries worldwide. Around two-thirds of these deaths are due to acute events, which frequently occur suddenly and are often fatal before medical care can be given. New strategies for CVD screening and early intervention, in addition to the conventional methods, are therefore needed in order to provide personalized and pervasive healthcare. In this special issue, selected emerging technologies in health informatics for screening and intervening in CVD are reported. These papers include reviews or original contributions on 1) new potential genetic biomarkers for screening CVD outcomes and high-throughput techniques for mining genomic data; 2) new imaging techniques for obtaining faster and higher-resolution images of cardiovascular imaging biomarkers such as the cardiac chambers and atherosclerotic plaques in coronary arteries, as well as possible automatic segmentation, identification, or fusion algorithms; 3) new physiological biomarkers and novel wearable and home healthcare technologies for monitoring them in daily life; 4) new personalized prediction models of plaque formation and progression or of CVD outcomes; and 5) quantifiable indices, and wearable systems to measure them, for early intervention in CVD through lifestyle changes. It is hoped that the technologies and systems covered in this special issue can improve CVD management and treatment at the point of need, offering a better quality of life to the patient.

    Automatic identification of physical activity intensity and modality from the fusion of accelerometry and heart rate data

    Background: Physical activity (PA) is essential to prevent and to treat a variety of chronic diseases. The automated detection and quantification of PA over time empowers lifestyle interventions, facilitating reliable exercise tracking and data-driven counseling. Methods: We propose and compare various combinations of machine learning (ML) schemes for the automatic classification of PA from multi-modal data captured simultaneously by a biaxial accelerometer and a heart rate (HR) monitor. Intensity levels (low/moderate/vigorous) were recognized and, for vigorous exercise, its modality (sustained aerobic/resistance/mixed). In total, 178.63 h of data on PA intensity (65.55% low/18.96% moderate/15.49% vigorous) and 17.00 h on modality were collected in two experiments: one in free-living conditions, another in a fitness center under controlled protocols. The structure used for automatic classification comprised: a) definition of 42 time-domain signal features, b) dimensionality reduction, c) data clustering, and d) temporal filtering to exploit time redundancy by means of a Hidden Markov Model (HMM). Four dimensionality reduction techniques and four clustering algorithms were studied. To cope with class imbalance in the dataset, a custom performance metric was defined that aggregates recognition accuracy, precision, and recall. Results: The best scheme, which comprised a projection through Linear Discriminant Analysis (LDA) and k-means clustering, was evaluated in leave-one-subject-out cross-validation, notably outperforming the standard industry procedures for PA intensity classification: a score of 84.65% versus up to 63.60%. Errors tended to be brief and to appear around transients. Conclusions: The application of ML techniques for pattern identification and temporal filtering made it possible to merge accelerometry and HR data in a robust manner, and achieved markedly better recognition performance than the standard methods for PA intensity estimation.
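    To make the pipeline above concrete, the following is a minimal sketch (not the authors' code) of the best-performing scheme: an LDA projection, k-means clustering with majority-vote cluster labeling, a simple mode filter standing in for the HMM temporal filtering, and leave-one-subject-out evaluation with an illustrative composite metric. The 42-feature matrix, subject identifiers, and the exact form of the custom metric are assumptions.

```python
# Sketch of the reported scheme, with simplified stand-ins where noted.
import numpy as np
from scipy.stats import mode
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import LeaveOneGroupOut

def composite_score(y_true, y_pred):
    """Aggregate accuracy, precision, and recall (the paper defines its own
    imbalance-aware metric; this unweighted mean is only a placeholder)."""
    return np.mean([
        accuracy_score(y_true, y_pred),
        precision_score(y_true, y_pred, average="macro", zero_division=0),
        recall_score(y_true, y_pred, average="macro", zero_division=0),
    ])

def smooth(labels, width=5):
    """Sliding-window mode filter, standing in for the HMM temporal filter."""
    padded = np.pad(labels, width // 2, mode="edge")
    return np.array([mode(padded[i:i + width], keepdims=False).mode
                     for i in range(len(labels))])

def evaluate(X, y, groups, n_classes=3):
    """X: (n_windows, 42) time-domain features; y: intensity labels; groups: subject ids."""
    scores = []
    for train, test in LeaveOneGroupOut().split(X, y, groups):
        lda = LinearDiscriminantAnalysis(n_components=n_classes - 1).fit(X[train], y[train])
        Z_train, Z_test = lda.transform(X[train]), lda.transform(X[test])
        km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(Z_train)
        # Map each cluster to the majority training label it contains.
        cluster_to_label = {c: mode(y[train][km.labels_ == c], keepdims=False).mode
                            for c in range(n_classes)}
        y_pred = np.array([cluster_to_label[c] for c in km.predict(Z_test)])
        scores.append(composite_score(y[test], smooth(y_pred)))
    return np.mean(scores)
```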

    Passive RFID Module with LSTM Recurrent Neural Network Activity Classification Algorithm for Ambient Assisted Living

    Human activity recognition from sensor data is a critical research topic for remote health monitoring and ambient assisted living (AAL). In AAL, sensors are integrated into everyday objects to support users' capabilities through digital environments that are sensitive, responsive, and adaptive to human activities. Emerging technological paradigms that support AAL within the home or community setting offer people the prospect of more individually focused care and an improved quality of living. In the present work, an ambient human activity classification framework is proposed that exploits the received signal strength indicator (RSSI) of passive RFID tags to obtain detailed activity profiles. Key indices of position, orientation, mobility, and degree of activity, which are critical for guiding reliable clinical management decisions, are captured from four volunteers to simulate the research objective. A two-layer, fully connected long short-term memory recurrent neural network (LSTM RNN) is employed. The LSTM RNN extracts features from the RSSI sensor data and classifies the sampled activities using a softmax layer. The performance of the LSTM model is evaluated for different data sizes, and the hyper-parameters of the RNN are tuned to optimal states, resulting in an accuracy of 98.18%. The proposed framework is well suited to smart health and smart homes, offering a pervasive sensing environment for the elderly and for persons with disability or chronic illness.
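    A minimal sketch of the kind of model described above: a two-layer LSTM over fixed-length windows of RSSI readings with a softmax output layer, written here in Keras purely for illustration. The number of tags, window length, and layer sizes are assumed placeholders, not values from the paper.

```python
# Illustrative two-layer LSTM classifier over RSSI windows (not the authors' code).
import tensorflow as tf

n_tags, window_len, n_classes = 4, 50, 4  # assumed: 4 tags, 50 samples/window, 4 activities

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window_len, n_tags)),        # one RSSI value per tag per time step
    tf.keras.layers.LSTM(64, return_sequences=True),          # first LSTM layer
    tf.keras.layers.LSTM(64),                                 # second LSTM layer
    tf.keras.layers.Dense(n_classes, activation="softmax"),   # activity class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=..., batch_size=...)  # X_train: (n_windows, window_len, n_tags)
```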

    Computer Vision Algorithms for Mobile Camera Applications

    Wearable and mobile sensors have found widespread use in recent years due to their ever-decreasing cost, ease of deployment and use, and ability to provide continuous monitoring, as opposed to sensors installed at fixed locations. Since many smart phones are now equipped with a variety of sensors, including accelerometer, gyroscope, magnetometer, microphone, and camera, it has become more feasible to develop algorithms for activity monitoring, guidance and navigation of unmanned vehicles, autonomous driving, and driver assistance by using data from one or more of these sensors. In this thesis, we focus on multiple mobile camera applications and present lightweight algorithms suitable for embedded mobile platforms. The mobile camera scenarios presented in the thesis are: (i) activity detection and step counting from wearable cameras, (ii) door detection for indoor navigation of unmanned vehicles, and (iii) traffic sign detection from vehicle-mounted cameras. First, we present a fall detection and activity classification system developed for the embedded smart camera platform CITRIC. In our system, the camera platform is worn by the subject, as opposed to static sensors installed at fixed locations in certain rooms; monitoring is therefore not limited to confined areas and extends to wherever the subject may travel, indoors and outdoors. Next, we present a real-time smart phone-based fall detection system, wherein we implement camera- and accelerometer-based fall detection on a Samsung Galaxy S™ 4. We fuse these two sensor modalities to obtain a more robust fall detection system. Then, we introduce a fall detection algorithm with autonomous thresholding using relative entropy, a member of the class of Ali-Silvey distance measures. As another wearable camera application, we present a footstep counting algorithm using a smart phone camera. This algorithm provides more accurate step counts than using only accelerometer data from smart phones and smart watches at various body locations. As a second mobile camera scenario, we study autonomous indoor navigation of unmanned vehicles. A novel approach is proposed to autonomously detect and verify doorway openings using the Google Project Tango™ platform. The third mobile camera scenario involves vehicle-mounted cameras. More specifically, we focus on traffic sign detection from lower-resolution and noisy videos captured by vehicle-mounted cameras. We present a new method for accurate traffic sign detection, incorporating Aggregate Channel Features and Chain Code Histograms, with the goal of providing much faster training and testing than deep neural network approaches, with comparable or better performance, and without requiring specialized processors. The proposed computer vision algorithms provide promising results for various useful applications despite the limited energy and processing capabilities of mobile devices.
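    As one illustration of the shape-based features mentioned for traffic sign detection, the sketch below computes a Chain Code Histogram from a candidate region's contour using OpenCV. The function name, the use of OpenCV, and the normalization are assumptions for illustration rather than the thesis implementation.

```python
# Hedged sketch: Chain Code Histogram (CCH) descriptor for a binary candidate mask.
import cv2
import numpy as np

# 8-connected Freeman directions, indexed 0..7 for (dx, dy) steps along a contour.
DIRECTIONS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
              (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code_histogram(binary_mask):
    """Return a normalized 8-bin histogram of contour directions for the largest blob."""
    contours, _ = cv2.findContours(binary_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.zeros(8)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2)
    hist = np.zeros(8)
    # Walk the closed contour and accumulate the direction of each unit step.
    for (x0, y0), (x1, y1) in zip(contour, np.roll(contour, -1, axis=0)):
        step = (int(np.sign(x1 - x0)), int(np.sign(y1 - y0)))
        if step in DIRECTIONS:
            hist[DIRECTIONS[step]] += 1
    return hist / max(hist.sum(), 1)
```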