    Estimating Energy Cost of Physical Activities from Video Using 3D-CNN Networks

    This research proposes a machine learning model that estimates the energy cost of physical activities from video input. Wearable sensors are commonly used for this purpose, but they have limitations in practicality and accuracy. A deep learning model with a three-dimensional convolutional neural network (3D-CNN) architecture was used to process the video data and predict the energy cost in terms of metabolic equivalents (METs). The proposed model was evaluated on a dataset of physical activity videos and achieved an average accuracy of 71% on the energy category prediction task and a root mean squared error (RMSE) of 1.14 on the energy cost prediction task. The findings suggest that this approach has potential for practical applications in physical activity surveillance, health interventions, and at-home activity monitoring.
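    The abstract does not specify the network's layers, so the following is only a minimal sketch of the general idea: a 3D-CNN that maps a short video clip to a scalar MET estimate. All layer sizes, clip dimensions, and class names are illustrative assumptions, not the authors' model.

```python
# Minimal sketch: 3D-CNN regressing METs from a video clip.
# Architecture details are assumptions, not the paper's model.
import torch
import torch.nn as nn

class METRegressor3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Input: (batch, 3 RGB channels, 16 frames, 112, 112)
            nn.Conv3d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # global spatiotemporal pooling
        )
        self.regressor = nn.Linear(64, 1)  # scalar MET prediction

    def forward(self, clip):
        x = self.features(clip).flatten(1)  # (batch, 64)
        return self.regressor(x)            # (batch, 1)

model = METRegressor3DCNN()
clip = torch.randn(2, 3, 16, 112, 112)  # two dummy clips
mets = model(clip)                       # predicted METs, shape (2, 1)
```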

    Energy expenditure estimation using visual and inertial sensors

    © The Institution of Engineering and Technology 2017. Deriving a person's energy expenditure accurately forms the foundation for tracking physical activity levels across many health and lifestyle monitoring tasks. In this study, the authors present a method for estimating calorific expenditure from combined visual and accelerometer sensing, using an RGB-Depth camera and a wearable inertial sensor. The proposed individual-independent framework fuses information from both modalities, yielding estimates beyond the accuracy of single-modality and manual metabolic equivalent of task (MET) lookup-table methods. For evaluation, the authors introduce a new dataset, SPHERE_RGBD + Inertial_calorie, in which visual and inertial data were obtained simultaneously with indirect calorimetry ground-truth measurements based on gas exchange. Experiments show that fusing visual and inertial data reduces the estimation error by 8% and 18% compared with using visual or inertial sensing alone, respectively, and by 33% compared with a MET-based approach. The authors conclude from their results that the proposed approach is suitable for home monitoring in a controlled environment.
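    The abstract describes fusing the two modalities but not the fusion mechanism; a simple late-fusion sketch conveys the idea. The feature dimensions, the concatenation scheme, and all names here are assumptions for illustration, not the paper's pipeline.

```python
# Minimal sketch: late fusion of visual and inertial features for
# calorie regression. Dimensions and fusion scheme are assumptions.
import torch
import torch.nn as nn

class FusionCalorieRegressor(nn.Module):
    def __init__(self, visual_dim=128, inertial_dim=32):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(visual_dim + inertial_dim, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, 1),  # calorific expenditure estimate
        )

    def forward(self, visual_feat, inertial_feat):
        # Concatenate per-sample features from the two modalities.
        fused = torch.cat([visual_feat, inertial_feat], dim=1)
        return self.head(fused)

model = FusionCalorieRegressor()
v = torch.randn(4, 128)   # e.g. pooled RGB-D features (assumed)
i = torch.randn(4, 32)    # e.g. accelerometer statistics (assumed)
print(model(v, i).shape)  # torch.Size([4, 1])
```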

    Jointly Learning Energy Expenditures and Activities using Egocentric Multimodal Signals

    Physiological signals such as heart rate can provide valuable information about an individual's state and activity. However, existing work in computer vision has not yet explored leveraging these signals to enhance egocentric video understanding. In this work, we propose a model that reasons over multimodal data to jointly predict activities and energy expenditures. We use heart rate signals as privileged self-supervision to derive energy expenditure during training. A multitask objective jointly optimizes the two tasks. Additionally, we introduce a dataset containing 31 hours of egocentric video augmented with heart rate and acceleration signals. This study can lead to new applications such as a visual calorie counter.
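    The multitask setup can be sketched as a shared encoder feeding two heads, with a joint loss over activity classification and energy-expenditure regression (the regression target derived from heart rate at training time). The loss weight, dimensions, and class names below are assumptions, not the paper's implementation.

```python
# Minimal sketch: two task heads on a shared feature vector with a
# weighted multitask loss. All sizes and the 0.5 weight are assumptions.
import torch
import torch.nn as nn

class MultitaskHead(nn.Module):
    def __init__(self, feat_dim=256, num_activities=10):
        super().__init__()
        self.activity_head = nn.Linear(feat_dim, num_activities)
        self.energy_head = nn.Linear(feat_dim, 1)

    def forward(self, feats):
        return self.activity_head(feats), self.energy_head(feats)

head = MultitaskHead()
feats = torch.randn(8, 256)              # shared encoder output (assumed)
activity_logits, energy_pred = head(feats)

activity_target = torch.randint(0, 10, (8,))
energy_target = torch.randn(8, 1)        # derived from heart rate in training

# Joint objective: classification plus weighted regression term.
loss = nn.functional.cross_entropy(activity_logits, activity_target) \
       + 0.5 * nn.functional.mse_loss(energy_pred, energy_target)
loss.backward()
```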

    Calorific expenditure estimation using deep convolutional network features
