4 research outputs found

    Gait Analysis from Wearable Devices using Image and Signal Processing

    We present the results of analyzing gait motion in first-person video taken from a commercially available wearable camera embedded in a pair of glasses. The video is analyzed with three different computer vision methods to extract motion vectors from different gait sequences from four individuals for comparison against a manually annotated ground truth dataset. Using a combination of signal processing and computer vision techniques, gait features are extracted to identify the walking pace of the individual wearing the camera and are validated using the ground truth dataset. We perform an additional data collection with both the camera and a body-worn accelerometer to understand the correlation between our vision-based data and a more traditional set of accelerometer data. Our results indicate that the extraction of activity from video in a controlled setting shows strong promise for use in activity monitoring applications, such as in the eldercare environment, as well as for monitoring chronic healthcare conditions.
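The walking-pace step described in this abstract can be sketched as a frequency-domain analysis of a periodic gait signal derived from the motion vectors. This is a minimal illustration, not the paper's implementation: the function name, the synthetic signal, and the 1–3 steps/s cadence band are all assumptions.

```python
import numpy as np

def estimate_cadence(signal, fs):
    """Estimate steps per second from a periodic gait signal.

    signal: 1-D array of per-frame motion magnitude
    fs: sampling rate in Hz (camera frame rate)
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                          # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))         # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # restrict the search to a plausible walking cadence (1-3 steps/s)
    band = (freqs >= 1.0) & (freqs <= 3.0)
    return freqs[band][np.argmax(spectrum[band])]

# synthetic gait signal: 2 steps/s, sampled at 30 fps, with light noise
np.random.seed(0)
fs = 30.0
t = np.arange(0, 10, 1.0 / fs)
sig = np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.randn(len(t))
print(round(estimate_cadence(sig, fs), 1))  # ≈ 2.0
```

The dominant spectral peak inside the cadence band gives the step frequency; a validation against ground truth, as in the paper, would compare this estimate to manually annotated step times.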

    Biomechanical Evaluation of an Optical System for Quantitative Human Motion Analysis

    An eight-camera Optitrack motion capture system was evaluated by performing static, linear dynamic, and angular dynamic calibrations using marker distances associated with upper and lower extremity gait and wheelchair models. Data were analyzed to determine accuracy and resolution within a defined capture volume using a standard Cartesian reference system. Two additional cameras along with AMASS and Visual3D (C-Motion, Inc., Germantown, MD) biomechanical modeling software were used to determine joint kinematics at the pelvis, hip, knee, and ankle of ten control subjects (mean age 21.5 ± 1.65 years). The same data were processed through Nexus (Vicon Motion Systems, Oxford, England) modeling software. The joint angle data were statistically compared between the two systems using a variance components model, which determined the variability between maximum, minimum, and range values. Static accuracy ranged from 99.31% to 99.90%. Static resolution ranged from 0.04 ± 0.15 mm to 0.63 ± 0.15 mm at the 0.05 level of significance. The dynamic accuracy ranged from 94.82% to 99.77%, and dynamic resolution ranged from 0.09 ± 0.26 mm to 0.61 ± 0.31 mm at the 0.05 level of significance. These values are comparable to those reported for a standard Vicon 524 (Vicon Motion Systems, Oxford, England) motion analysis system. Gait cycle maximum, minimum, and range values showed no significant difference when comparing Visual3D and Nexus at the pelvis, hip, and knee. Significant differences were seen at the tibia (rotation) and foot due to foot model variations between the two systems. The results support application of the lower cost Optitrack cameras and Visual3D software for 3D kinematic assessment of lower extremity motion during gait. Additional potential applications supported by these findings include other lower extremity models, assisted ambulation, and wheelchair mobility.
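The accuracy percentages reported above can be read as measured-versus-known marker distances. A minimal sketch, assuming the common convention of 100% minus the absolute relative error (the abstract does not state its exact formula, so this convention and the example distances are assumptions):

```python
def static_accuracy(measured_mm, true_mm):
    """Percent accuracy of a measured marker distance against the known
    calibration distance (assumed convention: 100% minus the absolute
    relative error)."""
    return 100.0 * (1.0 - abs(measured_mm - true_mm) / true_mm)

# e.g. a 500 mm calibration distance reconstructed as 499.1 mm
print(round(static_accuracy(499.1, 500.0), 2))  # 99.82
```

Under this convention, the reported static range of 99.31% to 99.90% corresponds to sub-millimeter-scale errors over typical calibration distances.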

    Computer Vision Algorithms for Mobile Camera Applications

    Wearable and mobile sensors have found widespread use in recent years due to their ever-decreasing cost, ease of deployment and use, and ability to provide continuous monitoring, as opposed to sensors installed at fixed locations. Since many smart phones are now equipped with a variety of sensors, including accelerometer, gyroscope, magnetometer, microphone, and camera, it has become more feasible to develop algorithms for activity monitoring, guidance and navigation of unmanned vehicles, autonomous driving, and driver assistance by using data from one or more of these sensors. In this thesis, we focus on multiple mobile camera applications, and present lightweight algorithms suitable for embedded mobile platforms. The mobile camera scenarios presented in the thesis are: (i) activity detection and step counting from wearable cameras, (ii) door detection for indoor navigation of unmanned vehicles, and (iii) traffic sign detection from vehicle-mounted cameras. First, we present a fall detection and activity classification system developed for the embedded smart camera platform CITRIC. In our system, the camera platform is worn by the subject, as opposed to static sensors installed at fixed locations in certain rooms; therefore, monitoring is not limited to confined areas, and extends to wherever the subject may travel, indoors and outdoors. Next, we present a real-time smart phone-based fall detection system, wherein we implement camera- and accelerometer-based fall detection on a Samsung Galaxy S™ 4. We fuse these two sensor modalities to obtain a more robust fall detection system. Then, we introduce a fall detection algorithm with autonomous thresholding using relative entropy within the class of Ali-Silvey distance measures. As another wearable camera application, we present a footstep counting algorithm using a smart phone camera.
This algorithm provides a more accurate step count compared to using only accelerometer data from smart phones and smart watches at various body locations. As a second mobile camera scenario, we study autonomous indoor navigation of unmanned vehicles. A novel approach is proposed to autonomously detect and verify doorway openings by using the Google Project Tango™ platform. The third mobile camera scenario involves vehicle-mounted cameras. More specifically, we focus on traffic sign detection from lower-resolution and noisy videos captured from vehicle-mounted cameras. We present a new method for accurate traffic sign detection, incorporating Aggregate Channel Features and Chain Code Histograms, with the goal of providing much faster training and testing, and comparable or better performance, with respect to deep neural network approaches, without requiring specialized processors. The proposed computer vision algorithms provide promising results for various useful applications despite the limited energy and processing capabilities of mobile devices.
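The relative-entropy thresholding idea mentioned in this abstract can be sketched as comparing a histogram of current motion features against a reference "normal activity" histogram and flagging a fall when the divergence is large. This is an illustrative sketch only: the histograms, the threshold value, and the decision rule are assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Relative entropy D(p || q) between two discrete distributions.

    A small epsilon avoids log(0); both inputs are renormalized.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# reference histogram of motion magnitudes during normal walking (assumed)
normal = np.array([40, 30, 20, 8, 2], dtype=float)
# a sudden fall shifts probability mass toward large motion magnitudes
fall = np.array([2, 5, 10, 33, 50], dtype=float)

# hypothetical threshold; the thesis derives its threshold autonomously
threshold = 0.5
print(kl_divergence(fall, normal) > threshold)  # True
```

Relative entropy is one member of the Ali-Silvey family of distance measures cited in the abstract; other members (e.g. Hellinger distance) could be substituted in the same decision structure.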

    Human gait estimation using a wearable camera

    No full text