25 research outputs found

    Recognition of elementary upper limb movements in an activity of daily living using data from wrist mounted accelerometers

    In this paper we present a methodology, as a proof of concept, for recognizing fundamental movements of the human arm (extension, flexion and rotation of the forearm) involved in ‘making-a-cup-of-tea’, a typical activity of daily living (ADL). The movements are initially performed in a controlled environment as part of a training phase, and the data are grouped into three clusters using k-means clustering. Movements performed during the ADL, forming part of the testing phase, are associated with each cluster label using a minimum-distance classifier in a multi-dimensional feature space comprising features selected from a ranked set of 30 features, with Euclidean and Mahalanobis distance as the metrics. Experiments were performed with four healthy subjects, and our results show that the proposed methodology can detect the three movements with an overall average accuracy of 88% across all subjects and arm movement types using the Euclidean distance classifier.
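    The train-then-classify pipeline described above can be sketched roughly as follows: a minimal NumPy illustration with synthetic 2-D features, whereas the paper selects from a ranked set of 30 features and also reports a Mahalanobis-distance variant.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: group training feature vectors into k clusters."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest centroid, then recompute centroids
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return centroids

def classify(x, centroids):
    """Minimum-distance classifier: nearest cluster centroid by Euclidean distance."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# Synthetic training features for three movement types (extension, flexion, rotation)
rng = np.random.default_rng(1)
training = np.vstack([rng.normal(c, 0.1, size=(20, 2))
                      for c in ([0, 0], [2, 0], [0, 2])])
centroids = kmeans(training, k=3)
```

    Test-phase samples are then labelled by calling `classify` with the centroids learned during training.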

    Activity and Health Status Monitoring System

    Physical activity monitoring represents an important tool for supporting and encouraging vulnerable persons in their recovery from surgery or long-term illness by promoting a healthy lifestyle. The paper proposes a smart, low-power activity monitoring platform capable of acquiring data from four inertial sensor modules placed on the human body and temporarily storing them on a mobile phone for real-time display, or on a server for long-term data analysis.

    Recognition of elementary arm movements using orientation of a tri-axial accelerometer located near the wrist

    In this paper we present a method for recognising three fundamental movements of the human arm (reach and retrieve, lift cup to mouth, rotation of the arm) by determining the orientation of a tri-axial accelerometer located near the wrist. Our objective is to detect the occurrence of such movements performed with the impaired arm of a stroke patient during normal daily activities as a means to assess their rehabilitation. The method relies on accurately mapping transitions of predefined, standard orientations of the accelerometer to corresponding elementary arm movements. To evaluate the technique, kinematic data were collected from four healthy subjects and four stroke patients as they performed a number of activities involved in a representative activity of daily living, 'making-a-cup-of-tea'. Our experimental results show that the proposed method can independently recognise all three of the elementary upper limb movements investigated, with accuracies in the range 91–99% for healthy subjects and 70–85% for stroke patients.
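    The core idea — labelling quasi-static accelerometer readings with a predefined orientation and reading movements off orientation transitions — can be sketched as follows. The orientation labels and the transition table here are illustrative, not the paper's actual definitions.

```python
import numpy as np

def orientation(acc):
    """Label a quasi-static tri-axial accelerometer sample by the axis
    dominated by gravity (acceleration ~ gravity when the arm is still)."""
    g = np.asarray(acc, dtype=float)
    g = g / np.linalg.norm(g)
    axis = int(np.argmax(np.abs(g)))
    return ("+" if g[axis] > 0 else "-") + "xyz"[axis]

# Illustrative transition table: (orientation before, orientation after) -> movement
MOVEMENT = {
    ("-x", "+z"): "lift cup to mouth",
    ("+z", "-x"): "reach and retrieve",
    ("+z", "+y"): "rotate the arm",
}

def detect(before, after):
    """Map an orientation transition to an elementary arm movement, if known."""
    return MOVEMENT.get((orientation(before), orientation(after)), "unknown")
```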

    CORDIC Framework for Quaternion-based Joint Angle Computation to Classify Arm Movements

    We present a novel architecture for arm movement classification based on kinematic properties (joint angle and position), computed from MARG sensors using a quaternion-based gradient-descent method and a 2-link model of the upper limb. The design, based on the Coordinate Rotation Digital Computer (CORDIC) framework, was validated on stroke survivors and healthy subjects performing three elementary arm movements (reach and retrieve, lift arm, rotate arm) involved in 'making-a-cup-of-tea', an archetypal daily activity, achieving overall accuracies of 78% and 85% respectively. The design, coded in SystemVerilog, was synthesized using STMicroelectronics 130 nm technology; it occupies a 340K NAND2-equivalent area and consumes 292 nW at 150 Hz, and was functionally verified up to 25 MHz, making it suitable for real-time, high-speed operation. The orientation, arm position and joint angle are computed on the fly, with classification performed at the end of the movement duration.
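    The CORDIC engine at the heart of such a design replaces multiplications with shift-and-add micro-rotations. A software sketch of the rotation mode (floating point here; the actual hardware would use fixed point):

```python
import math

def cordic_rotate(x, y, angle, n=24):
    """Rotate the vector (x, y) by `angle` radians (|angle| < ~1.74 rad)
    using only additions, subtractions and scaling by powers of two."""
    z = angle
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0           # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    # Each micro-rotation stretches the vector by sqrt(1 + 2^-2i); undo the gain
    K = math.prod(math.sqrt(1 + 2.0 ** (-2 * i)) for i in range(n))
    return x / K, y / K
```

    In hardware the per-iteration angles `atan(2^-i)` and the gain `K` are precomputed constants, so each iteration reduces to shifts and adds.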

    Unsupervised Understanding of Location and Illumination Changes in Egocentric Videos

    Wearable cameras stand out as one of the most promising devices for the upcoming years and, as a consequence, the demand for computer algorithms to automatically understand the videos recorded with them is increasing quickly. Automatic understanding of these videos is not an easy task, and their mobile nature implies important challenges to be faced, such as changing light conditions and the unrestricted locations recorded. This paper proposes an unsupervised strategy based on global features and manifold learning to endow wearable cameras with contextual information regarding the light conditions and the location captured. Results show that non-linear manifold methods can capture contextual patterns from global features without requiring large computational resources. The proposed strategy is used, as an application case, as a switching mechanism to improve the hand-detection problem in egocentric videos.
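    A minimal sketch of this kind of pipeline — a global per-frame feature embedded with a non-linear manifold method — using a gray-level histogram as a stand-in global feature and scikit-learn's Isomap. The paper's exact features and manifold method may differ.

```python
import numpy as np
from sklearn.manifold import Isomap

def global_feature(frame, bins=8):
    """Global descriptor of a frame: a normalized gray-level histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

rng = np.random.default_rng(0)
# Synthetic frames from two lighting conditions: dark indoor vs bright outdoor
dark = [rng.integers(0, 100, size=(32, 32)) for _ in range(15)]
bright = [rng.integers(150, 256, size=(32, 32)) for _ in range(15)]
X = np.array([global_feature(f) for f in dark + bright])

# Non-linear manifold embedding; frames with similar lighting land close together
embedding = Isomap(n_neighbors=20, n_components=2).fit_transform(X)
```

    Clustering or thresholding in the embedded space then yields the unsupervised context labels used for switching.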

    Application of OptiTrack motion capture systems in human movement analysis: A systematic literature review

    With the spread of motion analysis, decisions to invest in a new system demand scientific reference applications. The aim of the present systematic review is to reveal the biomechanical scientific applications of OptiTrack motion capture systems and to overview documented usage conditions and purposes. Six major scientific literature databases were used (PubMed, PubMed Central, ScienceDirect, IEEE Xplore, PLOS and Web of Science). To be included, a study had to use an OptiTrack camera system for human or biologically related motion capture. A total of 85 articles were included, 4 of which dealt with the validation of OptiTrack systems and 81 of which utilized the system for biomechanical analyses. The data extracted from the system validation studies included descriptions of the validated and the reference systems, the measured features and the observed errors. The data extracted from the utilizing studies included the OptiTrack application, camera type and frequency, marker size, camera number, data processing software and the motion studied. The review offers a broad collection of biomechanical applications of OptiTrack motion capture systems as scientific references for certain motion studies, and it summarizes findings on the accuracy of the systems. It concludes that the method descriptions of system usage are often underspecified.

    A generalized model for indoor location estimation using environmental sound from human activity recognition

    The indoor location of individuals is a key contextual variable for commercial and assisted location-based services and applications. Commercial centers and medical buildings (e.g., hospitals) require location information about their users or patients to offer the services that are needed at the correct moment. Several approaches have been proposed to tackle this problem. In this paper, we present the development of an indoor location system that relies on a human activity recognition approach, using sound as the information source to infer the indoor location from the contextual information of the activity being performed at that moment. Features extracted from the sound signal feed a random forest algorithm, which generates a model to estimate the location of the user. We evaluate the quality of the resulting model in terms of sensitivity and specificity for each location, and we also perform out-of-bag error estimation. Our experiments were carried out in five representative residential homes, each with four individual indoor rooms. Eleven activities (brewing coffee, cooking eggs, taking a shower, etc.) were performed to provide the contextual information. Experimental results show that an indoor location system (ILS) that uses contextual information from human activities (identified from environmental sound) can achieve an estimation that is 95% correct.
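    The sound-based pipeline — extract features from short audio clips, then train a random forest to predict the room — can be sketched with synthetic audio; RMS energy and zero-crossing rate are toy stand-ins for the paper's feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sound_features(signal):
    """Toy feature vector for an audio clip: RMS energy and zero-crossing rate."""
    rms = float(np.sqrt(np.mean(signal ** 2)))
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal)))) / 2.0)
    return [rms, zcr]

rng = np.random.default_rng(7)
t = np.arange(1024) / 8000.0  # 128 ms at 8 kHz
# Synthetic sounds: a low-frequency hum ("kitchen") vs broadband noise ("shower")
kitchen = [np.sin(2 * np.pi * 60 * t) + 0.05 * rng.standard_normal(1024)
           for _ in range(20)]
shower = [0.3 * rng.standard_normal(1024) for _ in range(20)]

X = [sound_features(s) for s in kitchen + shower]
y = ["kitchen"] * 20 + ["shower"] * 20
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

    A new clip is then assigned to the room whose characteristic sounds its features most resemble, e.g. `clf.predict([sound_features(clip)])`.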