
    Fluid Intake Monitoring Systems for the Elderly: A Review of the Literature

    Fluid intake monitoring is an essential component in preventing dehydration and overhydration, especially for the senior population. Numerous critical health problems are associated with poor or excessive drinking, such as swelling of the brain and heart failure. Real-time systems for monitoring fluid intake will not only measure the exact amount consumed by users, but could also motivate people to maintain a healthy lifestyle by providing feedback that encourages them to hydrate regularly throughout the day. This paper reviews the most recent solutions for automatic fluid intake monitoring, both commercial and in the literature. The available technologies are divided into four categories: wearables, surfaces with embedded sensors, vision- and environmental-based solutions, and smart containers. A detailed performance evaluation was carried out considering detection accuracy, usability, and availability. It was observed that the most promising results came from studies that fused data from multiple technologies rather than relying on an individual technology. The areas that need further research and the challenges for each category are discussed in detail.

    A novel wearable biofeedback system to prevent trip-related falls

    Real-time gait monitoring of older adults and gait-impaired individuals, combined with real-time biofeedback, has the potential to help reduce trip-related falls. A low Minimum Toe Clearance (MTC) is considered a predictor of tripping risk; thus, increasing the MTC can be a key component in minimizing the likelihood of tripping. This paper discusses a proof-of-concept wearable system that estimates the MTC in real-time using two Time-of-Flight (ToF) sensors and provides auditory biofeedback to alert users to a low MTC during everyday walking activities. Ten healthy female adults were asked to perform two experiments: 1) walk at a predetermined speed to evaluate the proposed real-time MTC detection algorithm, and 2) walk under four conditions: baseline, biofeedback with no distraction, biofeedback with distraction 1 (talking on the phone), and biofeedback with distraction 2 (playing a simple mobile game). The average MTC values were significantly greater during all feedback conditions than at baseline, indicating that the proposed system could successfully warn users to increase their MTC in real time.
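    The alerting logic described above can be sketched as a simple per-stride check: find the minimum toe clearance within a swing phase and trigger feedback when it falls below a threshold. The 15 mm threshold and the sample values below are illustrative assumptions, not figures from the paper.

```python
# Sketch of a threshold-based low-MTC alert, assuming a per-stride
# toe-clearance trace sampled from a ToF sensor (values in mm).
# The 15 mm alert threshold is illustrative, not from the paper.

def detect_low_mtc(clearance_mm, threshold_mm=15.0):
    """Return (mtc, alert) for one swing phase: the minimum toe
    clearance and whether it falls below the alert threshold."""
    if not clearance_mm:
        raise ValueError("empty swing-phase trace")
    mtc = min(clearance_mm)
    return mtc, mtc < threshold_mm

# Example: a swing phase dipping to 12 mm triggers an alert.
swing = [40.0, 25.0, 12.0, 18.0, 35.0]
mtc, alert = detect_low_mtc(swing)
```

    In a real-time system this check would run once per detected swing phase, with the auditory cue driven by the `alert` flag.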

    Design of a Novel Wearable System for Foot Clearance Estimation

    Trip-related falls are one of the major causes of injury among seniors in Canada and can be attributed to an inadequate Minimum Toe Clearance (MTC). Currently, motion capture systems are the gold standard for measuring MTC; however, they are expensive and have a restricted operating area. In this paper, a novel wearable system is proposed that can accurately estimate different foot clearance parameters using only two Time-of-Flight (ToF) sensors located at the toe and heel of the shoe. A small-scale preliminary study was conducted to investigate the feasibility of foot clearance estimation using the proposed wearable system. We recruited ten young, healthy females to walk at three self-selected speeds (normal, slow, and fast) while wearing the system. Our data analysis showed average correlation coefficients of 0.94, 0.94, and 0.92 for the normal, slow, and fast speeds, respectively, when comparing the ToF signals with motion capture. The ANOVA analysis further confirmed these results, revealing no statistically significant differences between the ToF signals and motion capture data for most of the gait parameters after applying the newly proposed foot angle and offset compensation. In addition, the proposed system can measure the MTC with an average Mean Error (ME) of −0.08 ± 3.69 mm, −0.12 ± 4.25 mm, and −0.10 ± 6.57 mm for normal, slow, and fast walking speeds, respectively. The proposed affordable wearable system has the potential to perform real-time MTC estimation and contribute to future work focused on minimizing tripping risks.
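    The Mean Error metric reported above is the signed difference between the ToF estimate and the motion-capture reference, summarized as mean ± standard deviation. A minimal sketch, with made-up sample values:

```python
# Minimal sketch of the Mean Error (ME) metric: signed difference
# between ToF-estimated and motion-capture MTC values, reported as
# mean +/- standard deviation. Sample values below are hypothetical.

from statistics import mean, stdev

def mean_error(estimates, ground_truth):
    errors = [e - g for e, g in zip(estimates, ground_truth)]
    return mean(errors), stdev(errors)

tof   = [21.3, 18.9, 24.1, 20.2]   # hypothetical ToF MTC estimates (mm)
mocap = [21.5, 18.5, 24.0, 20.6]   # hypothetical mocap references (mm)
me, sd = mean_error(tof, mocap)
```

    A signed (rather than absolute) error is what allows the near-zero means reported above: over- and under-estimates cancel, while the standard deviation captures the spread.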

    Influential Factors in Remote Monitoring of Heart Failure Patients: A Review of the Literature and Direction for Future Research

    With new advances in technology, remote monitoring of heart failure (HF) patients has become increasingly prevalent and has the potential to greatly enhance the outcome of care. Many studies have focused on implementing systems for the management of HF by analyzing physiological signals for the early detection of HF decompensation. This paper reviews recent literature exploring significant physiological variables, compares their reliability in predicting HF-related events, and examines the findings according to the monitored variables used, such as body weight, bio-impedance, blood pressure, heart rate, and respiration rate. The reviewed studies identified correlations between the monitored variables and the number of alarms, HF-related events, and/or readmission rates. It was observed that the most promising results came from studies that used a combination of multiple parameters, compared to using an individual variable. The main challenges discussed include inaccurate data collection leading to contradictory outcomes from different studies, compliance with daily monitoring, and consideration of additional factors such as physical activity and diet. The findings demonstrate the need for a shared remote monitoring platform, which could significantly reduce false alarms and help collect reliable data from patients for clinical use, especially for the prevention of cardiac events.
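    The single-variable alarm rules the reviewed studies evaluate can be illustrated with body weight, one of the monitored variables listed above. The "more than 2 kg in 3 days" threshold is a common clinical rule of thumb used here as an assumption, not a finding of this review.

```python
# Illustrative sketch of a single-variable decompensation alarm:
# flag when body weight rises by more than a set amount within a
# rolling window of daily measurements. The 2 kg / 3 days threshold
# is a common rule of thumb, assumed here for illustration.

def weight_alarm(daily_weights_kg, window_days=3, threshold_kg=2.0):
    """Return True if weight rose more than threshold_kg across any
    window_days consecutive daily measurements."""
    for i in range(len(daily_weights_kg) - window_days + 1):
        window = daily_weights_kg[i:i + window_days]
        if window[-1] - window[0] > threshold_kg:
            return True
    return False
```

    Rules like this generate the alarms whose counts the reviewed studies correlate with HF-related events; combining several such variables is what the review found most promising.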

    Automated Fluid Intake Detection Using RGB Videos

    Dehydration is a common, serious issue among older adults. It is important to drink fluid to prevent dehydration and the complications that come with it. As many older adults forget to drink regularly, there is a need for an automated approach that tracks intake throughout the day with limited user interaction. The current literature has used vision-based approaches with deep learning models to detect drink events; however, most use static frames (2D networks) in a lab-based setting where participants perform only eating and drinking activities. This study proposes a 3D convolutional neural network using video segments to detect drinking events. In this preliminary study, we collected data from 9 participants in a simulated home environment performing daily activities as well as eating and drinking from various containers to create a robust environment and dataset. Using state-of-the-art deep learning models, we trained our CNN on both static images and video segments to compare the results. The 3D model attained higher performance (compared to the 2D CNN) with F1 scores of 93.7% and 84.2% using 10-fold and leave-one-subject-out cross-validations, respectively.
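    The leave-one-subject-out (LOSO) protocol mentioned above holds out every sample from one participant per fold, which is why its scores are typically lower than 10-fold results. A minimal sketch with illustrative participant IDs:

```python
# Sketch of leave-one-subject-out (LOSO) cross-validation: each fold
# holds out all samples from one participant. IDs are illustrative.

def loso_splits(subject_ids):
    """Yield (train_idx, test_idx) pairs, one fold per subject."""
    subjects = sorted(set(subject_ids))
    for held_out in subjects:
        test_idx = [i for i, s in enumerate(subject_ids) if s == held_out]
        train_idx = [i for i, s in enumerate(subject_ids) if s != held_out]
        yield train_idx, test_idx

labels_by_sample = ["p1", "p1", "p2", "p3", "p2"]
folds = list(loso_splits(labels_by_sample))
```

    Because the test subject is never seen during training, LOSO estimates how the model generalizes to new users, whereas 10-fold splits can leak a subject's samples into both sets.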

    A Vision-Based Approach for Sidewalk and Walkway Trip Hazards Assessment

    Tripping hazards on the sidewalk cause many falls annually, and the inspection and repair of these hazards cost cities millions of dollars. Currently, there is no efficient and cost-effective method to monitor sidewalks and identify possible tripping hazards. In this paper, a new portable device is proposed that uses an Intel RealSense D415 RGB-D camera to monitor sidewalks, detect hazards, and extract relevant features of those hazards. This paper first analyzes the environmental factors contributing to the device’s error and compares different regression techniques to calibrate the camera. The Gaussian Process Regression models yielded the most accurate predictions, with Mean Absolute Errors (MAEs) below 0.09 mm. In the second phase, a novel segmentation algorithm is proposed that combines edge detection and region-growing techniques to detect true tripping hazards. Different examples are provided to visualize the output of the proposed method.
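    The region-growing idea can be sketched on a toy height map: grow connected regions from cells whose height above the nominal sidewalk plane exceeds a trip-hazard threshold. The 4-connected flood fill and the 6 mm threshold are illustrative choices, not the paper's algorithm.

```python
# Toy sketch of region growing for hazard detection on a height map
# (mm above the nominal sidewalk plane). The 4-connected fill and the
# 6 mm threshold are illustrative assumptions, not the paper's method.

def hazard_regions(height_mm, threshold=6.0):
    rows, cols = len(height_mm), len(height_mm[0])
    seen, regions = set(), []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or height_mm[r][c] <= threshold:
                continue
            stack, region = [(r, c)], []
            seen.add((r, c))
            while stack:               # iterative flood fill
                y, x = stack.pop()
                region.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and (ny, nx) not in seen
                            and height_mm[ny][nx] > threshold):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
            regions.append(region)
    return regions

grid = [[0, 0, 8, 9],
        [0, 0, 0, 7],
        [5, 0, 0, 0]]
regions = hazard_regions(grid)
```

    Each returned region groups adjacent above-threshold cells into one candidate hazard, from which features such as area or maximum height could then be extracted.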

    Design and Validation of Vision-Based Exercise Biofeedback for Tele-Rehabilitation

    Tele-rehabilitation has the potential to considerably change the way patients are monitored from their homes during the care process, by providing equitable access without the need to travel to rehab centers or shoulder the high cost of personal in-home services. Developing a tele-rehab platform capable of automating exercise guidance is likely to have a significant impact on rehabilitation outcomes. In this paper, a new vision-based biofeedback system is designed and validated to identify the quality of performed exercises. This new system will help patients refine their movements to get the most out of their plan of care. An open dataset was used, consisting of data from 30 participants performing nine different exercises. Each exercise was labeled as “Correctly” or “Incorrectly” executed by five clinicians. We used a pre-trained 3D Convolutional Neural Network (3D-CNN) to design our biofeedback system. The proposed system achieved average accuracy values of 90.57% ± 9.17% and 83.78% ± 7.63% using 10-Fold and Leave-One-Subject-Out (LOSO) cross validation, respectively. In addition, we obtained average F1-scores of 71.78% ± 5.68% using 10-Fold and 60.64% ± 21.3% using LOSO validation. The proposed 3D-CNN was able to classify the rehabilitation videos and provide feedback on the quality of exercises to help users modify their movement patterns.

    Joint angle estimation during shoulder abduction exercise using contactless technology

    Abstract Background Tele-rehabilitation, also known as tele-rehab, uses communication technologies to provide rehabilitation services from a distance. The COVID-19 pandemic has highlighted the importance of tele-rehab, as in-person visits declined and the demand for remote healthcare rose. Tele-rehab offers enhanced accessibility, convenience, cost-effectiveness, flexibility, care quality, continuity, and communication. However, current systems are often unable to perform a comprehensive movement analysis. To address this, we propose and validate a novel approach using depth technology and skeleton tracking algorithms. Methods Our data involved 14 participants (8 females, 6 males) performing shoulder abduction exercises. We collected depth videos from a LiDAR camera and motion data from a Motion Capture (Mocap) system as our ground truth. The data were collected at distances of 2 m, 2.5 m, and 3.5 m from the LiDAR sensor for both arms. Our approach integrates LiDAR with the Cubemos and Mediapipe skeleton tracking frameworks, enabling the assessment of 3D joint angles. We validated the system by comparing the estimated joint angles against Mocap outputs. Personalized calibration was applied using various regression models to enhance the accuracy of the joint angle calculations. Results The Cubemos skeleton tracking system outperformed Mediapipe in joint angle estimation, with higher accuracy and fewer errors. The proposed system showed a strong correlation with Mocap results, although some deviations were present due to noise. Precision decreased as the distance from the camera increased. Calibration significantly improved performance. Linear regression models consistently outperformed nonlinear models, especially at shorter distances. Conclusion This study showcases the potential of a marker-less system to proficiently track body joints and upper-limb angles.
Signals from the proposed system and the Mocap system exhibited robust correlation, with Mean Absolute Errors (MAEs) consistently below 10°. LiDAR’s depth feature enabled accurate computation of in-depth angles beyond the reach of traditional RGB cameras. Altogether, this emphasizes the depth-based system’s potential for precise joint tracking and angle calculation in tele-rehab applications.
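    A 3D joint angle of the kind estimated above can be computed from tracked keypoints as the angle between two limb vectors meeting at a joint. The choice of shoulder→elbow versus shoulder→hip vectors for abduction, and the coordinates below, are illustrative assumptions.

```python
# Hedged sketch of a 3D joint-angle computation: the abduction angle
# at the shoulder, taken as the angle between the shoulder->elbow
# vector and the shoulder->hip (trunk) vector. Coordinates are
# illustrative, in metres.

from math import acos, degrees, sqrt

def joint_angle(a, vertex, b):
    """Angle (degrees) at `vertex` between points a and b in 3D."""
    u = [a[i] - vertex[i] for i in range(3)]
    v = [b[i] - vertex[i] for i in range(3)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    norm = sqrt(sum(ui * ui for ui in u)) * sqrt(sum(vi * vi for vi in v))
    return degrees(acos(dot / norm))

shoulder, elbow, hip = (0.0, 1.4, 0.0), (0.3, 1.4, 0.0), (0.0, 0.9, 0.0)
abduction = joint_angle(elbow, shoulder, hip)  # horizontal arm: ~90 degrees
```

    Because the keypoints carry a depth coordinate, the same formula covers movements toward or away from the camera, which a 2D RGB pipeline cannot resolve.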

    A novel approach to tele-rehabilitation: Implementing a biofeedback system using machine learning algorithms

    Tele-rehabilitation (Tele-rehab) is changing the landscape of virtual care by redefining assessment and breaking accessibility barriers as a convenient substitute for conventional rehabilitation. The COVID-19 pandemic resulted in a rapid uptake of virtual care. Researchers and health professionals have started developing new tele-rehab platforms, e.g., in the form of video conferencing. Albeit useful, these platforms still require the clinicians’ time and energy. Integrating a biofeedback system that can reliably distinguish “Correctly Executed” from “Incorrectly Executed” exercises into tele-rehab platforms can help patients perform rehab exercises correctly, avoid injuries, and enhance recovery. To address this gap, this paper proposes an automated system that uses machine learning to classify correct and incorrect executions of 9 rehabilitation gestures. The model is trained on 24 angle signals extracted from different body sections. The angle signals are obtained in 3D space, and 10 features are extracted from each signal. Six different classifiers, including Random Forest, Multi-Layer Perceptron Artificial Neural Networks, Naïve Bayes, Support Vector Machine, K-Nearest Neighbors, and Logistic Regression, are used and evaluated with 10-Fold and Leave One Subject Out (LOSO) cross validations. The best classifiers achieved an average accuracy of 89.86% ± 3.38% and F1-Score of 72.84% ± 11.98% for 10-Fold, and an average accuracy of 88.21% ± 3.90% and F1-Score of 68.16% ± 13.28% for LOSO. The proposed system has great potential to be integrated into tele-rehab platforms to help patients perform their exercises reliably.
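    The per-signal feature extraction described above can be sketched as a fixed set of summary statistics computed from one joint-angle signal. The abstract does not list the 10 features used, so the statistics below are an assumed illustrative subset.

```python
# Illustrative per-signal feature extraction: summary statistics of
# one joint-angle signal. The paper's actual 10 features are not
# listed in the abstract, so this smaller set is an assumption.

from statistics import mean, stdev, median

def angle_features(signal):
    """Return a fixed-length feature vector for one angle signal."""
    return [
        mean(signal),
        stdev(signal),
        median(signal),
        min(signal),
        max(signal),
        max(signal) - min(signal),  # range of motion
    ]

feats = angle_features([0, 10, 20, 10])
```

    Concatenating such vectors across all 24 angle signals yields the fixed-length input that classical classifiers like Random Forest or SVM expect.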

    A Comprehensive Analysis on Wearable Acceleration Sensors in Human Activity Recognition

    Sensor-based motion recognition integrates the emerging area of wearable sensors with novel machine learning techniques to make sense of low-level sensor data and provide rich contextual information in real-life applications. Although the Human Activity Recognition (HAR) problem has drawn the attention of researchers, it is still a subject of much debate due to the diverse nature of human activities and their tracking methods. Finding the best predictive model for this problem while considering different sources of heterogeneity is very difficult to analyze theoretically, which stresses the need for an experimental study. Therefore, in this paper, we first create the most complete dataset, focusing on accelerometer sensors, with various sources of heterogeneity. We then conduct an extensive analysis of feature representations and classification techniques (the most comprehensive comparison yet, with 293 classifiers) for activity recognition. Principal component analysis is applied to reduce the feature vector dimension while keeping essential information. The average classification accuracy across eight sensor positions is 96.44% ± 1.62% with 10-fold evaluation, whereas an accuracy of 79.92% ± 9.68% is reached in the subject-independent evaluation. This study presents significant evidence that we can build predictive models for the HAR problem under more realistic conditions and still achieve highly accurate results.
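    Pipelines like the one above typically begin by splitting the raw accelerometer stream into fixed-width overlapping windows before extracting features. The window width and 50% overlap below are common choices in HAR work, assumed for illustration rather than taken from the paper.

```python
# Minimal sketch of the windowing step that typically precedes feature
# extraction in accelerometer-based HAR. Window width and 50% overlap
# are common conventions, assumed here rather than taken from the paper.

def sliding_windows(samples, width, step):
    """Split a 1-D sample stream into fixed-width overlapping windows,
    dropping any trailing partial window."""
    return [samples[i:i + width]
            for i in range(0, len(samples) - width + 1, step)]

stream = list(range(10))
windows = sliding_windows(stream, width=4, step=2)  # 50% overlap
```

    Each window then becomes one training example: a feature vector (possibly PCA-reduced, as above) with the activity label of that time span.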