9 research outputs found

    From AAL to ambient assisted rehabilitation: a research pilot protocol based on smart objects and biofeedback

    Abstract: The progressive miniaturization of electronic devices and their exponential increase in processing, storage, and transmission capabilities are key factors of the current digital transformation, also sustaining the great development of Ambient Assisted Living (AAL) and the Internet of Things. Although most investigations in recent years have focused on remote monitoring and diagnostics, rehabilitation too could benefit from the widespread integrated use of these devices. Smart Objects in particular may enable new quantitative approaches. In this paper, we present a proof-of-concept and some preliminary results of an innovative pediatric rehabilitation protocol based on Smart Objects and biofeedback, which we administered to a sample of children with unilateral cerebral palsy. The novelty of the approach mainly consists in placing the sensing device inside a common toy (a ball, in our protocol) and using the information measured by the device to administer multimedia-enriched exercises, which are more engaging than the usual rehabilitation activities used in clinical settings. We also introduce two performance indexes, which could be helpful for a quantitative, continuous evaluation of movements during the exercises. Although the number of children involved and sessions performed is not sufficient to assess any change in the subjects' abilities, nor to derive solid statistical inferences, the novel approach proved very engaging and enjoyable for all the children participating in the study. Moreover, given the almost non-existent literature on the use of Smart Objects in pediatric rehabilitation, the few qualitative/quantitative results reported here may promote the scientific and clinical discussion regarding AAL solutions from a "Computer Assisted Rehabilitation" perspective, towards what can be defined as "Pediatric Rehabilitation 2.0".

    Detecting Eating Episodes with an Ear-mounted Sensor

    In this paper, we propose Auracle, a wearable earpiece that can automatically recognize eating behavior. More specifically, in free-living conditions, we can recognize when and for how long a person is eating. Using an off-the-shelf contact microphone placed behind the ear, Auracle captures the sound of a person chewing as it passes through the bone and tissue of the head. This audio data is then processed by a custom analog/digital circuit board. To ensure reliable (yet comfortable) contact between microphone and skin, all hardware components are incorporated into a 3D-printed behind-the-head framework. We collected field data with 14 participants for 32 hours in free-living conditions and additional eating data with 10 participants for 2 hours in a laboratory setting. We achieved accuracy exceeding 92.8% and an F1 score exceeding 77.5% for eating detection. Moreover, Auracle successfully detected 20-24 eating episodes (depending on the metric) out of 26 in free-living conditions. We demonstrate that our custom device can sense, process, and classify audio data in real time. Additionally, we estimate Auracle can last 28.1 hours on a 110 mAh battery while communicating its observations of eating behavior to a smartphone over Bluetooth.
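    The abstract does not detail Auracle's detection pipeline; as a rough illustration of the general idea, the sketch below flags a window of audio as "eating" when enough short frames exceed an energy threshold. The frame lengths, thresholds, and synthetic signal are hypothetical placeholders, not values from the paper.

```python
import math
import random

def frame_energies(audio, sr, frame_s=0.025, hop_s=0.010):
    """Split a mono signal into short frames and return per-frame RMS energy."""
    frame, hop = int(sr * frame_s), int(sr * hop_s)
    energies = []
    for start in range(0, len(audio) - frame + 1, hop):
        seg = audio[start:start + frame]
        energies.append(math.sqrt(sum(x * x for x in seg) / frame))
    return energies

def detect_chewing(audio, sr, energy_thresh=0.05, min_fraction=0.3):
    """Label a window as 'eating' if enough frames exceed an energy threshold.

    Thresholds here are illustrative placeholders, not values from Auracle."""
    e = frame_energies(audio, sr)
    loud = sum(1 for x in e if x > energy_thresh)
    return loud / len(e) >= min_fraction

random.seed(0)
sr = 8000
quiet = [random.gauss(0, 0.01) for _ in range(sr)]   # one second of sensor noise
bursts = list(quiet)
for i in range(0, sr, 400):                          # periodic chew-like impulses
    bursts[i] += 1.0
print(detect_chewing(quiet, sr), detect_chewing(bursts, sr))  # False True
```

A real system would replace the energy heuristic with a trained classifier over richer acoustic features, but the windowed feature-then-decision structure is the same.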

    Re-thinking functional food development through a holistic approach

    Although interest in functional foods has dramatically increased, several factors jeopardize their effective development. A univocally recognized definition and a dedicated regulation for this emerging food category are lacking, and a gap exists between the technological and the nutritional viewpoints. The actors involved speak different languages, thus impeding progress towards an integrated approach to functional food development. A holistic approach to rationalize functional food development is proposed here: the "Functional Food Development Cycle". First, regulation and definitions were reviewed. The technological approaches for functional food design were then described, followed by those for efficacy evaluation. Merging the technological and the evaluation viewpoints, by identifying the best compromise between quality and functionality, is pivotal to developing effective functional foods. Finally, delivering functional food to the market requires dedicated communication strategies. These in turn can provide information about consumer needs, thus representing an input for regulatory bodies to drive the development of functional food, feeding it into an iterative and virtuous holistic cycle.

    Segmentation and Recognition of Eating Gestures from Wrist Motion Using Deep Learning

    This research considers training a deep learning neural network for segmenting and classifying eating-related gestures from recordings of subjects eating unscripted meals in a cafeteria environment. It is inspired by the recent trend of success in deep learning for solving a wide variety of machine learning tasks such as image annotation, classification and segmentation. Image segmentation is a particularly important inspiration, and this work proposes a novel deep learning classifier for segmenting time-series data based on the work done in [25] and [30]. While deep learning has established itself as the state-of-the-art approach in image segmentation, particularly in works such as [2], [25] and [31], very little work has been done on segmenting time-series data using deep learning models. Wrist-mounted IMU sensors such as accelerometers and gyroscopes can record activity from a subject in a free-living environment while being encapsulated in a watch-like device, and are thus inconspicuous. Such a device can be used to monitor eating-related activities as well, and is thought to be useful for monitoring energy intake for healthy individuals as well as those afflicted with conditions such as being overweight or obese. The data set used for this research study is the Clemson Cafeteria Dataset, available publicly at [14]. It contains data for 276 people eating a meal at the Harcombe Dining Hall at Clemson University, which is a large cafeteria environment. The data includes wrist motion measurements (accelerometer x, y, z; gyroscope yaw, pitch, roll) recorded as each subject ate an unscripted meal. Each meal consisted of 1-4 courses, of which 488 were used as part of this research.
The ground truth labelings of gestures were created by a set of 18 trained human raters, and consist of labels such as 'bite', used to indicate when the subject starts to put food in their mouth and later moves the hand away for more 'bites' or other activities. Other labels include 'drink' for liquid intake, 'rest' for stationary hands, and 'utensiling' for actions such as cutting food into bite-size pieces, stirring a liquid or dipping food in sauce, among other things. All other activities are labeled 'other' by the human raters. Previous work in our group focused on recognizing these gesture types from manually segmented data using hidden Markov models [24], [27]. This thesis builds on that work by considering a deep learning classifier for automatically segmenting and recognizing gestures. The neural network classifier proposed as part of this research performs well at recognizing intake gestures, with 79.6% of 'bite' and 80.7% of 'drink' gestures recognized correctly on average per meal. Overall, 77.7% of all gestures were recognized correctly on average per meal, indicating that a deep learning classifier can successfully be used to simultaneously segment and identify eating gestures from wrist motion measured through IMU sensors.
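    The thesis's exact preprocessing is not given in this abstract; the sketch below shows one common way such a segmentation problem can be framed, by slicing a labeled 6-axis IMU stream into fixed-length windows for a classifier. The window size, stride, and majority-vote labeling are illustrative assumptions, not the thesis's scheme.

```python
def make_windows(samples, labels, win=128, stride=32):
    """Slice a 6-axis IMU stream into fixed-length windows for a classifier.

    samples: list of (ax, ay, az, yaw, pitch, roll) tuples
    labels:  per-sample gesture labels ('bite', 'drink', 'rest', 'utensiling', 'other')
    Each window gets the majority label of its samples (an illustrative choice).
    """
    windows = []
    for start in range(0, len(samples) - win + 1, stride):
        seg_labels = labels[start:start + win]
        majority = max(set(seg_labels), key=seg_labels.count)
        windows.append((samples[start:start + win], majority))
    return windows

# Tiny synthetic stream: 200 'rest' samples followed by 100 'bite' samples
stream = [(0.0,) * 6] * 300
labels = ['rest'] * 200 + ['bite'] * 100
wins = make_windows(stream, labels)
print(len(wins), [lab for _, lab in wins])
```

Overlapping windows like these would then be fed to the network, and the per-window predictions stitched back into a segmentation of the continuous stream.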

    DETECTION OF HEALTH-RELATED BEHAVIOURS USING HEAD-MOUNTED DEVICES

    The detection of health-related behaviors is the basis of many mobile-sensing applications for healthcare and can trigger other inquiries or interventions. Wearable sensors have been widely used for mobile sensing due to their ever-decreasing cost, ease of deployment, and ability to provide continuous monitoring. In this dissertation, we develop a generalizable approach to sensing eating-related behavior. First, we developed Auracle, a wearable earpiece that can automatically detect eating episodes. Using an off-the-shelf contact microphone placed behind the ear, Auracle captures the sound of a person chewing as it passes through the head. This audio data is then processed by a custom circuit board. We collected data with 14 participants for 32 hours in free-living conditions and achieved accuracy exceeding 92.8% and an F1 score exceeding 77.5% for eating detection with 1-minute resolution. Second, we adapted Auracle for measuring children's eating behavior and improved the accuracy and robustness of the eating-activity detection algorithms. We used this improved prototype in a laboratory study with a sample of 10 children for 60 total sessions and collected 22.3 hours of data in both meal and snack scenarios. Overall, we achieved 95.5% accuracy and a 95.7% F1 score for eating detection with 1-minute resolution. Third, we developed a computer-vision approach for eating detection in free-living scenarios. Using a miniature head-mounted camera, we collected data with 10 participants for about 55 hours. The camera was fixed under the brim of a cap, pointing at the mouth of the wearer and continuously recording video (but not audio) throughout their normal daily activity. We evaluated performance for eating detection using four different Convolutional Neural Network (CNN) models. The best model achieved 90.9% accuracy and a 78.7% F1 score for eating detection with 1-minute resolution.
Finally, we validated the feasibility of deploying the 3D CNN model on wearable or mobile platforms when considering computation, memory, and power constraints.

    Exploring Design Opportunities for Promoting Healthy Eating at Work


    From Cellular to Holistic: Development of Algorithms to Study Human Health and Diseases

    The development of theoretical computational methods and their application has become widespread in the world today. In this dissertation, I present my work on the creation of models to detect and describe complex biological and health-related problems. The first major part of my work centers on the creation and enhancement of methods to calculate protein structure and dynamics. To this end, substantial enhancements have been made to the software package REDCRAFT to better facilitate its use in protein structure calculation. The enhancements have led to an overall increase in its ability to characterize proteins under difficult conditions such as high noise and low data density. Secondly, a database that allows for easy and comprehensive mining of protein structures has been created and deployed. We show preliminary results for its application to protein structure calculation. This database, among other applications, can be used to create input sets for computational models for the prediction of protein structure. Lastly, I present my work on the creation of a theoretical model to describe discrete-state protein dynamics. The results of this work can be used to describe many real-world dynamic systems. The second major part of my work centers on the application of machine learning techniques to create a system for the automated detection of smoking using accelerometer data from smartwatches. The first aspect of this work presented is binary detection of smoking puffs. This model was then expanded to perform full cigarette-session detection. Next, the model was reformulated to quantify smoking behavior (such as puff duration and the time between puffs). Lastly, a rotation matrix was derived to resolve orientation ambiguities due to the position of the watch on the wrist.
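    The dissertation's rotation matrix is not reproduced in this abstract; as a minimal sketch of the idea, the code below rotates accelerometer samples about one axis to map readings from an arbitrarily worn watch into a canonical frame. The axis choice and angle here are assumptions for illustration, not the derived matrix.

```python
import math

def rotate_about_z(sample, theta):
    """Rotate an (x, y, z) accelerometer sample by theta radians about the z-axis.

    A rotation like this can map readings from a watch worn at an arbitrary
    angle on the wrist into a canonical frame; the angle would in practice be
    estimated from the data (e.g., from the mean gravity direction).
    """
    x, y, z = sample
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

# A gravity vector measured along +x maps to +y after a 90-degree rotation
x, y, z = rotate_about_z((9.81, 0.0, 0.0), math.pi / 2)
print(round(x, 6), round(y, 6), round(z, 6))  # prints: 0.0 9.81 0.0
```

Applying the same fixed rotation to every sample removes the watch-placement ambiguity before features are computed, so the downstream classifier sees data in a consistent orientation.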

    Advancement in Dietary Assessment and Self-Monitoring Using Technology

    Although methods to assess or self-monitor intake may seem similar, the intended function of each is quite distinct. For the assessment of dietary intake, methods aim to measure food and nutrient intake and/or to derive dietary patterns for determining diet-disease relationships, population surveillance, or the effectiveness of interventions. In comparison, dietary self-monitoring primarily aims to create awareness of and reinforce individual eating behaviours, in addition to tracking foods consumed. Advancements in the capabilities of technologies, such as smartphones and wearable devices, have enhanced the collection, analysis and interpretation of dietary intake data in both contexts. This Special Issue invites submissions on the use of novel technology-based approaches for the assessment of food and/or nutrient intake and for self-monitoring eating behaviours. Submissions may document any part of the development and evaluation of the technology-based approaches. Examples may include:
    - web adaptation of existing dietary assessment or self-monitoring tools (e.g., food frequency questionnaires, screeners)
    - image-based or image-assisted methods
    - mobile/smartphone applications for capturing intake for assessment or self-monitoring
    - wearable cameras to record dietary intake or eating behaviours
    - body sensors to measure eating behaviours and/or dietary intake
    - use of technology-based methods to complement aspects of traditional dietary assessment or self-monitoring, such as portion size estimation