
    CaloriNet: From silhouettes to calorie estimation in private environments

    We propose a novel deep fusion architecture, CaloriNet, for the online estimation of energy expenditure for free-living monitoring in private environments, where RGB data is discarded and replaced by silhouettes. Our fused convolutional neural network architecture is trainable end-to-end to estimate calorie expenditure, using temporal foreground silhouettes alongside accelerometer data. The network is trained and cross-validated on a publicly available dataset, SPHERE_RGBD + Inertial_calorie. Results show state-of-the-art minimum error on the estimation of energy expenditure (calories per minute), outperforming alternative standard and single-modal techniques. Comment: 11 pages, 7 figures
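    The fusion idea above can be sketched as an input pipeline: buffer a temporal window of silhouette features next to a window of accelerometer samples, then concatenate both into a single vector for the regressor. The class name, window sizes, and feature shapes below are illustrative assumptions, not details from the paper.

```python
from collections import deque

class FusionBuffer:
    """Minimal sketch of the input side of a fused silhouette + accelerometer
    regressor: two sliding temporal windows whose contents are flattened and
    concatenated (late fusion) before being fed to a network."""

    def __init__(self, sil_window=5, acc_window=10):
        self.sil = deque(maxlen=sil_window)   # recent silhouette feature frames
        self.acc = deque(maxlen=acc_window)   # recent accelerometer samples

    def push(self, silhouette_feats, acc_sample):
        self.sil.append(silhouette_feats)
        self.acc.append(acc_sample)

    def fused_features(self):
        # Flatten both temporal windows into one feature vector.
        flat = [v for frame in self.sil for v in frame]
        flat += [v for sample in self.acc for v in sample]
        return flat

buf = FusionBuffer(sil_window=2, acc_window=3)
for t in range(3):
    buf.push([0.1 * t, 0.2 * t], [1.0, 0.0, -1.0])
print(len(buf.fused_features()))  # 2 frames × 2 feats + 3 samples × 3 axes = 13
```

    In practice each modality would first pass through its own convolutional branch; the sketch only shows the buffering and concatenation step.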

    Cross-Modal Health State Estimation

    Individuals create and consume more diverse data about themselves today than at any time in history. Sources of this data include wearable devices, images, social media, geospatial information and more. A tremendous opportunity rests within cross-modal data analysis that leverages existing domain knowledge methods to understand and guide human health. Especially in chronic diseases, current medical practice uses a combination of sparse hospital-based biological metrics (blood tests, expensive imaging, etc.) to understand the evolving health status of an individual. Future health systems must integrate data created at the individual level to better understand health status perpetually, especially in a cybernetic framework. In this work we fuse multiple user-created and open source data streams along with established biomedical domain knowledge to give two types of quantitative state estimates of cardiovascular health. First, we use wearable devices to calculate cardiorespiratory fitness (CRF), a known quantitative leading predictor of heart disease which is not routinely collected in clinical settings. Second, we estimate inherent genetic traits, living environmental risks, circadian rhythm, and biological metrics from a diverse dataset. Our experimental results on 24 subjects demonstrate how multi-modal data can provide personalized health insight. Understanding the dynamic nature of health status will pave the way for better health-based recommendation engines, better clinical decision making and positive lifestyle changes. Comment: Accepted to ACM Multimedia 2018 Conference - Brave New Ideas, Seoul, Korea, ACM ISBN 978-1-4503-5665-7/18/1
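    One concrete way to derive a CRF estimate from wearable heart-rate data is the heart-rate-ratio method of Uth et al., VO2max ≈ 15.3 × HRmax / HRrest (mL/kg/min). This is one possible proxy computable from wearables, shown here only to make the idea tangible; the abstract does not specify which CRF formula the authors use.

```python
def estimate_vo2max(hr_max, hr_rest):
    """Heart-rate-ratio estimate of VO2max in mL/kg/min (Uth et al.):
    VO2max ~= 15.3 * HRmax / HRrest. A simple CRF proxy that needs only
    resting and maximal heart rate, both obtainable from a wearable."""
    if hr_rest <= 0:
        raise ValueError("resting heart rate must be positive")
    return 15.3 * hr_max / hr_rest

# Example: HRmax 190 bpm, HRrest 60 bpm gives a VO2max in the high 40s.
vo2 = estimate_vo2max(190, 60)
```

    Higher HRmax relative to HRrest implies greater cardiorespiratory fitness under this model.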

    Real-time human ambulation, activity, and physiological monitoring: taxonomy of issues, techniques, applications, challenges and limitations

    Automated methods of real-time, unobtrusive, human ambulation, activity, and wellness monitoring and data analysis using various algorithmic techniques have been subjects of intense research. The general aim is to devise effective means of addressing the demands of assisted living, rehabilitation, and clinical observation and assessment through sensor-based monitoring. These research studies have resulted in a large body of literature. This paper presents a holistic articulation of the research studies and offers comprehensive insights along four main axes: distribution of existing studies; monitoring device framework and sensor types; data collection, processing and analysis; and applications, limitations and challenges. The aim is to present a systematic and comprehensive survey of the literature in the area in order to identify research gaps and prioritize future research directions.

    Deep learning-based energy expenditure estimation in assisted and non-assisted gait using inertial, EMG, and heart rate wearable sensors

    Energy expenditure is a key rehabilitation outcome and is starting to be used in robotics-based rehabilitation through human-in-the-loop control to tailor robot assistance towards reducing patients’ energy effort. However, it is usually assessed by indirect calorimetry, which entails a certain degree of invasiveness and provides delayed data, making it unsuitable for controlling robotic devices. This work proposes a deep learning-based tool for steady-state energy expenditure estimation based on more ergonomic sensors than indirect calorimetry. The study innovates by estimating the energy expenditure in assisted and non-assisted conditions and at slow gait speeds similar to those of impaired subjects. This work explores and benchmarks the long short-term memory (LSTM) and convolutional neural network (CNN) as deep learning regressors. As inputs, we fused inertial data, electromyography, and heart rate signals measured by on-body sensors from eight healthy volunteers walking with and without assistance from an ankle-foot exoskeleton at 0.22, 0.33, and 0.44 m/s. LSTM and CNN were compared against indirect calorimetry using a leave-one-subject-out cross-validation technique. Results showed the suitability of this tool, especially the CNN, which achieved a root-mean-squared error of 0.36 W/kg and a high correlation (ρ > 0.85) between target and estimate (R̄² = 0.79). The CNN was able to discriminate the energy expenditure between assisted and non-assisted gait, and between basal and walking energy expenditure, across the three slow gait speeds. The CNN regressor driven by kinematic and physiological data was shown to be a more ergonomic technique for estimating energy expenditure, contributing to clinical assessment in slow and robotic-assisted gait and to future research concerning human-in-the-loop control. This work has been supported in part by the FEDER Funds through the COMPETE 2020 - Programa Operacional Competitividade e Internacionalização (POCI) and P2020 with the Reference Project SmartOs Grant POCI-01-0247-FEDER-039868, and by FCT national funds, under the national support to R&D units grant, through the reference projects UIDB/04436/2020 and UIDP/04436/2020, under the FCT scholarship with reference 2020.05708.BD, and under the Stimulus of Scientific Employment with the grant 2020.03393.CEECIND.
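    The leave-one-subject-out protocol used above generalizes across people rather than across time: each of the eight subjects is held out once while the model trains on the remaining seven. A minimal sketch of the split logic (subject IDs below are illustrative):

```python
def leave_one_subject_out(subject_ids):
    """Yield (held_out, train_idx, test_idx) so that every subject's samples
    are excluded from training exactly once. Pure-Python sketch of the
    cross-validation scheme; real pipelines would use a library cross-validator."""
    for held_out in sorted(set(subject_ids)):
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test

# One subject ID per sample (here: 5 samples from 3 subjects).
ids = ["s1", "s1", "s2", "s3", "s3"]
splits = list(leave_one_subject_out(ids))
print(splits[0])  # ('s1', [2, 3, 4], [0, 1])
```

    Because no sample from the held-out subject ever appears in training, the reported RMSE reflects performance on an unseen person.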

    Energy expenditure estimation using visual and inertial sensors

    © The Institution of Engineering and Technology 2017. Deriving a person's energy expenditure accurately forms the foundation for tracking physical activity levels across many health and lifestyle monitoring tasks. In this study, the authors present a method for estimating calorific expenditure from combined visual and accelerometer sensors by way of an RGB-Depth camera and a wearable inertial sensor. The proposed individual-independent framework fuses information from both modalities, which leads to improved estimates beyond the accuracy of single-modality and manual metabolic equivalent of task (MET) lookup-table based methods. For evaluation, the authors introduce a new dataset called SPHERE_RGBD + Inertial_calorie, for which visual and inertial data are simultaneously obtained with indirect calorimetry ground truth measurements based on gas exchange. Experiments show that the fusion of visual and inertial data reduces the estimation error by 8% and 18% compared with the use of visual-only and inertial-only sensing, respectively, and by 33% compared with a MET-based approach. The authors conclude from their results that the proposed approach is suitable for home monitoring in a controlled environment.
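    The MET lookup-table baseline mentioned above is simple to state: kcal/min = MET × 3.5 × body mass (kg) / 200, with the MET value read from a table per activity. The MET values below are illustrative, not the compendium values the authors used.

```python
# Illustrative MET values; a real system would use a published compendium.
MET_TABLE = {"sitting": 1.3, "walking": 3.5, "running": 8.0}

def met_calories_per_minute(activity, weight_kg):
    """Standard MET lookup-table estimate:
    kcal/min = MET * 3.5 * weight_kg / 200.
    This is the kind of manual baseline the fused visual-inertial
    method above outperforms by 33%."""
    return MET_TABLE[activity] * 3.5 * weight_kg / 200

rate = met_calories_per_minute("walking", 70)  # ~4.3 kcal/min for a 70 kg walker
```

    Its weakness is visible in the formula: the estimate depends only on a coarse activity label and body mass, ignoring how vigorously the activity is actually performed, which is exactly what visual and inertial sensing can capture.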

    Towards a more efficient human-exoskeleton assistance

    There is evidence that the energy expended by humans can be reduced by wearing lower limb exoskeletons with user-oriented assistance strategies, such as human-in-the-loop (HITL) controllers. HITL algorithms can be implemented in exoskeletons for the automatic and online optimization of controller parameters, such as the torque profile, depending on the energy expenditure (EE) measured in real time. This way, it is possible to minimize the EE and tailor the exoskeleton assistance to each specific user. However, measuring EE is not trivial. It is most commonly estimated by indirect calorimetry, but this method requires expensive equipment, takes too long, and is infeasible for everyday use in the real world. Therefore, this study explores machine and deep learning regression models (RMs) as EE estimators in different motor activities based on data acquired by wearable sensors and anthropometric features. Several inputs were tested, but the best performance was achieved by the heart rate, the 3-axis acceleration of the chest, wrist, thigh, and ankle, and the body mass index. Results are reported on a public dataset after preprocessing of the data. The best-performing RM was an exponential Gaussian process regressor (GPR), which obtained root-mean-squared errors of 0.56 W/kg, 0.45 W/kg, and 0.60 W/kg for the standing, sitting, and walking activities, respectively. The GPR model outperformed a support vector machine, a boosted decision tree, a bagged decision tree, and a convolutional neural network. COMPETE 2020 - Programa Operacional Competitividade e Internacionalização (POCI) and P2020 with the Reference Project SmartOs Grant POCI-01-0247-FEDER-039868
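    The HITL idea above is an online optimization loop: perturb a controller parameter, observe the resulting EE estimate, and keep changes that lower it. The toy hill-climbing loop below is only a sketch of that idea; real HITL controllers use richer optimizers (e.g. CMA-ES or Bayesian optimization), and the quadratic "metabolic cost" stand-in is an assumption for demonstration.

```python
def hitl_optimize(measure_ee, torque, step=0.1, iters=20):
    """Toy human-in-the-loop loop: perturb one torque parameter up and down
    by `step` and keep any change that lowers the measured energy
    expenditure. Illustrates the optimization structure only."""
    best_ee = measure_ee(torque)
    for _ in range(iters):
        for delta in (step, -step):
            candidate = torque + delta
            ee = measure_ee(candidate)
            if ee < best_ee:
                best_ee, torque = ee, candidate
    return torque, best_ee

# Stand-in "metabolic cost" with a minimum at torque = 0.6 (hypothetical).
torque, ee = hitl_optimize(lambda t: (t - 0.6) ** 2 + 3.0, torque=0.0)
```

    In a real system `measure_ee` would be the wearable-sensor EE estimator the study develops, which is precisely why a fast, non-invasive estimate matters: indirect calorimetry is too slow to sit inside this loop.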

    Unsupervised Heart-rate Estimation in Wearables With Liquid States and A Probabilistic Readout

    Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine intelligent approach for heart-rate estimation from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike trains and using these to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on Fuzzy c-Means clustering of spike responses from a subset of neurons (Liquid states), selected using particle swarm optimization. Our approach differs from existing works by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy yet a significantly low energy footprint, leading to an extended battery life of wearable devices. We validated our approach with CARLsim, a GPU-accelerated spiking neural network simulator modeling Izhikevich spiking neurons with Spike Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects are considered from in-house clinical trials and public ECG databases. Results show high accuracy and low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated in future wearable devices. Comment: 51 pages, 12 figures, 6 tables, 95 references. Under submission at Elsevier Neural Network
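    For contrast with the spiking approach above, the underlying task can be illustrated with a naive conventional baseline: detect R-peaks in the ECG by thresholding local maxima with a refractory period, then convert the mean beat interval to beats per minute. The threshold and refractory values below are illustrative assumptions, and this is not the paper's method.

```python
def heart_rate_bpm(ecg, fs, threshold=0.5, refractory_s=0.25):
    """Naive R-peak detector: a sample counts as a beat if it exceeds
    `threshold`, is a local maximum, and lies at least `refractory_s`
    seconds after the previous beat. Returns mean heart rate in bpm
    (0.0 if fewer than two beats are found)."""
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        if (ecg[i] > threshold and ecg[i] >= ecg[i - 1]
                and ecg[i] >= ecg[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    if len(peaks) < 2:
        return 0.0
    intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Synthetic trace: one spike per second at fs = 100 Hz, i.e. 60 bpm.
fs = 100
ecg = [1.0 if i % fs == 0 else 0.0 for i in range(400)]
print(heart_rate_bpm(ecg, fs))  # 60.0
```

    Such threshold detectors are brittle on noisy, irregular signals, which motivates learned, personalized approaches like the one the paper proposes.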