76 research outputs found

    Calorific expenditure estimation using deep convolutional network features


    CaloriNet: From silhouettes to calorie estimation in private environments

    We propose a novel deep fusion architecture, CaloriNet, for the online estimation of energy expenditure for free-living monitoring in private environments, where RGB data is discarded and replaced by silhouettes. Our fused convolutional neural network architecture is trainable end-to-end to estimate calorie expenditure using temporal foreground silhouettes alongside accelerometer data. The network is trained and cross-validated on a publicly available dataset, SPHERE_RGBD + Inertial_calorie. Results show state-of-the-art minimum error on the estimation of energy expenditure (calories per minute), outperforming alternative, standard and single-modal techniques.
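
    A minimal sketch (in PyTorch, assumed here since the abstract names no framework) of the kind of two-stream fusion described above: one branch over a temporal stack of foreground silhouettes, one over accelerometer features, concatenated and regressed to calories per minute. All layer sizes, window lengths and the fusion point are illustrative assumptions, not the published CaloriNet architecture.

    import torch
    import torch.nn as nn

    class SilhouetteAccelFusion(nn.Module):
        def __init__(self, silhouette_frames=16, accel_features=64):
            super().__init__()
            # Silhouette branch: the temporal stack of binary masks enters as input channels.
            self.silhouette_branch = nn.Sequential(
                nn.Conv2d(silhouette_frames, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),   # -> (B, 64, 1, 1)
                nn.Flatten(),              # -> (B, 64)
            )
            # Accelerometer branch: a small MLP over windowed inertial features.
            self.accel_branch = nn.Sequential(
                nn.Linear(accel_features, 64),
                nn.ReLU(),
            )
            # Fusion head: concatenate both embeddings and regress calories per minute.
            self.head = nn.Sequential(
                nn.Linear(64 + 64, 64),
                nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, silhouettes, accel):
            fused = torch.cat([self.silhouette_branch(silhouettes),
                               self.accel_branch(accel)], dim=1)
            return self.head(fused)

    # Example: batch of 4 samples, 16 stacked 64x64 silhouettes plus 64 accelerometer features.
    model = SilhouetteAccelFusion()
    calories_per_minute = model(torch.rand(4, 16, 64, 64), torch.rand(4, 64))
    print(calories_per_minute.shape)  # torch.Size([4, 1])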

    Energy expenditure estimation using visual and inertial sensors

    © The Institution of Engineering and Technology 2017. Deriving a person's energy expenditure accurately forms the foundation for tracking physical activity levels across many health and lifestyle monitoring tasks. In this study, the authors present a method for estimating calorific expenditure from combined visual and accelerometer sensors by way of an RGB-Depth camera and a wearable inertial sensor. The proposed individual-independent framework fuses information from both modalities, which leads to improved estimates beyond the accuracy of single-modality and manual metabolic equivalent of task (MET) lookup-table-based methods. For evaluation, the authors introduce a new dataset called SPHERE_RGBD + Inertial_calorie, for which visual and inertial data are simultaneously obtained with indirect calorimetry ground truth measurements based on gas exchange. Experiments show that the fusion of visual and inertial data reduces the estimation error by 8% and 18% compared with the use of visual-only and inertial-sensor-only data, respectively, and by 33% compared with a MET-based approach. The authors conclude from their results that the proposed approach is suitable for home monitoring in a controlled environment.
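
    For context on the MET baseline the abstract compares against, here is a minimal sketch of a lookup-table estimate using the standard conversion kcal/min = MET × 3.5 × body weight (kg) / 200. The activity-to-MET mapping below is a small illustrative subset, not the activity set or MET values used in the study.

    # Hypothetical MET lookup table; values are typical compendium-style figures,
    # shown only to illustrate the baseline, not taken from the paper.
    MET_TABLE = {
        "sitting": 1.3,
        "standing": 1.8,
        "walking": 3.5,
        "vacuuming": 3.3,
    }

    def met_calories_per_minute(activity: str, weight_kg: float) -> float:
        """Standard conversion: kcal/min = MET * 3.5 * weight_kg / 200."""
        return MET_TABLE[activity] * 3.5 * weight_kg / 200.0

    # Example: a 70 kg person walking is estimated at roughly 4.3 kcal/min.
    print(round(met_calories_per_minute("walking", 70.0), 2))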

    Research progress on wearable devices for daily human health management

    As the public's demand for portable access to personal health information continues to expand, wearable devices are not only widely used in clinical practice but are also gradually being applied to the daily health management of ordinary families owing to their intelligence, miniaturization, and portability. This paper searches the literature on wearable devices through the PubMed and CNKI databases, classifies the devices according to the functions they realize, briefly describes the algorithms and specific analysis methods used in their applications, and offers an outlook on development trends in the field of daily human health management.

    Segmentation and Recognition of Eating Gestures from Wrist Motion Using Deep Learning

    This research considers training a deep learning neural network for segmenting and classifying eating-related gestures from recordings of subjects eating unscripted meals in a cafeteria environment. It is inspired by the recent trend of success in deep learning for solving a wide variety of machine learning tasks such as image annotation, classification and segmentation. Image segmentation is a particularly important inspiration, and this work proposes a novel deep learning classifier for segmenting time-series data based on the work done in [25] and [30]. While deep learning has established itself as the state-of-the-art approach in image segmentation, particularly in works such as [2], [25] and [31], very little work has been done on segmenting time-series data using deep learning models. Wrist-mounted IMU sensors such as accelerometers and gyroscopes can record activity from a subject in a free-living environment while being encapsulated in a watch-like device, and are thus inconspicuous. Such a device can be used to monitor eating-related activities as well, and is thought to be useful for monitoring energy intake for healthy individuals as well as those afflicted with conditions such as being overweight or obese. The data set used for this research study is known as the Clemson Cafeteria Dataset, available publicly at [14]. It contains data for 276 people eating a meal at the Harcombe Dining Hall at Clemson University, which is a large cafeteria environment. The data includes wrist motion measurements (accelerometer x, y, z; gyroscope yaw, pitch, roll) recorded while the subjects each ate an unscripted meal. Each meal consisted of 1-4 courses, of which 488 were used as part of this research. The ground truth labelings of gestures were created by a set of 18 trained human raters, and consist of labels such as 'bite', used to indicate when the subject starts to put food in their mouth and later moves the hand away for more 'bites' or other activities. Other labels include 'drink' for liquid intake, 'rest' for stationary hands, and 'utensiling' for actions such as cutting food into bite-size pieces, stirring a liquid or dipping food in sauce, among other things. All other activities are labeled as 'other' by the human raters. Previous work in our group focused on recognizing these gesture types from manually segmented data using hidden Markov models [24], [27]. This thesis builds on that work by considering a deep learning classifier for automatically segmenting and recognizing gestures. The neural network classifier proposed as part of this research performs satisfactorily at recognizing intake gestures, with 79.6% of 'bite' and 80.7% of 'drink' gestures being recognized correctly on average per meal. Overall, 77.7% of all gestures were recognized correctly on average per meal, indicating that a deep learning classifier can successfully be used to simultaneously segment and identify eating gestures from wrist motion measured through IMU sensors.
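
    A minimal sketch (in PyTorch, chosen here only for illustration) of a fully convolutional network that assigns a gesture class to every time step of the 6-channel wrist-motion stream (accelerometer x, y, z; gyroscope yaw, pitch, roll), i.e. segments and classifies in one pass. The kernel sizes, channel counts and window length are illustrative assumptions, not the architecture used in the thesis.

    import torch
    import torch.nn as nn

    GESTURES = ["bite", "drink", "rest", "utensiling", "other"]

    class GestureSegmenter(nn.Module):
        def __init__(self, in_channels=6, num_classes=len(GESTURES)):
            super().__init__()
            # 1D convolutions with padding preserve the temporal length, so the
            # output is a per-timestep class score, i.e. a segmentation of the signal.
            self.net = nn.Sequential(
                nn.Conv1d(in_channels, 32, kernel_size=9, padding=4),
                nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=9, padding=4),
                nn.ReLU(),
                nn.Conv1d(64, num_classes, kernel_size=1),
            )

        def forward(self, x):          # x: (batch, 6, time)
            return self.net(x)         # (batch, num_classes, time)

    # Example: one window of 450 wrist-motion samples (an assumed window size).
    model = GestureSegmenter()
    scores = model(torch.rand(1, 6, 450))
    per_timestep_labels = scores.argmax(dim=1)   # gesture index at each time step
    print(per_timestep_labels.shape)             # torch.Size([1, 450])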

    Predicting the Future

    Due to the increased capabilities of microprocessors and the advent of graphics processing units (GPUs) in recent decades, the use of machine learning methodologies has become popular in many fields of science and technology. This fact, together with the availability of large amounts of information, has meant that machine learning and Big Data have an important presence in the field of Energy. This Special Issue, entitled "Predicting the Future – Big Data and Machine Learning", is focused on applications of machine learning methodologies in the field of energy. Topics include but are not limited to the following: big data architectures of power supply systems, energy-saving and efficiency models, environmental effects of energy consumption, prediction of occupational health and safety outcomes in the energy industry, price forecast prediction of raw materials, and energy management of smart buildings.