    Recognition of elementary upper limb movements in an activity of daily living using data from wrist mounted accelerometers

    In this paper we present a methodology, as a proof of concept, for recognizing fundamental movements of the human arm (extension, flexion and rotation of the forearm) involved in 'making-a-cup-of-tea', a typical activity of daily living (ADL). The movements are initially performed in a controlled environment as part of a training phase, and the data are grouped into three clusters using k-means clustering. Movements performed during the ADL, forming the testing phase, are associated with each cluster label using a minimum-distance classifier in a multi-dimensional feature space comprising features selected from a ranked set of 30, with Euclidean and Mahalanobis distance as the metrics. Experiments were performed with four healthy subjects, and our results show that the proposed methodology can detect the three movements with an overall average accuracy of 88% across all subjects and arm movement types using the Euclidean distance classifier.
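
    To make the pipeline concrete, here is a minimal Python sketch of the train/test flow described above: k-means on controlled-environment feature windows, then nearest-centroid assignment under Euclidean or Mahalanobis distance. The feature values, dimensions and counts below are placeholders, not the paper's ranked 30-feature set.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train = rng.normal(size=(90, 4))   # placeholder: 90 training windows x 4 selected features
test = rng.normal(size=(10, 4))    # placeholder ADL windows

# Training phase: group the controlled-environment movements into three
# clusters (extension, flexion, rotation of the forearm).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(train)
centroids = km.cluster_centers_

# Testing phase: assign each ADL window to the nearest cluster centroid.
def classify(x, metric="euclidean", vi=None):
    if metric == "euclidean":
        dists = [np.linalg.norm(x - c) for c in centroids]
    else:  # Mahalanobis needs the inverse covariance of the training features
        dists = [mahalanobis(x, c, vi) for c in centroids]
    return int(np.argmin(dists))

vi = np.linalg.inv(np.cov(train, rowvar=False))
labels_euclidean = [classify(x) for x in test]
labels_mahalanobis = [classify(x, "mahalanobis", vi) for x in test]
```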

    Detecting Social Interactions in Working Environments Through Sensing Technologies

    Knowledge about social ties among humans is important for optimizing several aspects of networking in mobile social networks. Generally, ties among people are detected on the basis of their proximity. We discuss here how ties between colleagues in an office can be detected by leveraging a number of sociological markers such as co-activity, proximity, speech activity and similarity of locations visited. We present the results from two data-gathering campaigns located in Italy and Spain. Funding: Ministerio de Economía y Competitividad TIN2013-46801-C4-1-R; Junta de Andalucía TIC-805.
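
    As an illustration only, the sketch below fuses the four markers into a pairwise tie score; the weights, marker scaling and threshold are invented for the example and are not taken from the paper.

```python
# Hypothetical fusion of the four sociological markers into a pairwise tie score.
def tie_score(coactivity, proximity, speech, location_sim,
              weights=(0.25, 0.25, 0.25, 0.25)):
    """Each marker is assumed normalized to [0, 1] (e.g. fraction of observed time)."""
    return sum(w * m for w, m in zip(weights, (coactivity, proximity, speech, location_sim)))

# Declare a tie between two colleagues when the fused score passes a threshold.
THRESHOLD = 0.5  # invented for the example
print(tie_score(0.6, 0.8, 0.4, 0.7) > THRESHOLD)  # True
```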

    Action Recognition in Manufacturing Assembly using Multimodal Sensor Fusion

    Production innovations are occurring faster than ever. Manufacturing workers thus need to frequently learn new methods and skills. In fast-changing, largely uncertain production systems, manufacturers able to comprehend workers' behavior and assess their operational performance in near real time will outperform their peers. Action recognition can serve this purpose. Although human action recognition has been an active field of study in machine learning, limited work has been done on recognizing worker actions in manufacturing tasks that involve complex, intricate operations. Using data captured by one sensor, or a single type of sensor, to recognize those actions lacks reliability. This limitation can be overcome by sensor fusion at the data, feature and decision levels. This paper presents a study that developed a multimodal sensor system and used sensor fusion methods to enhance the reliability of action recognition. One step in assembling a Bukito 3D printer, composed of a sequence of 7 actions, was used to illustrate and assess the proposed method. Two wearable Myo armband sensors captured both inertial measurement unit (IMU) and electromyography (EMG) signals from assembly workers, while Microsoft Kinect, a vision-based sensor, simultaneously tracked their predefined skeleton joints. The collected IMU, EMG and skeleton data were respectively used to train five individual convolutional neural network (CNN) models. Various fusion methods were then implemented to integrate the prediction results of the independent models into a final prediction. Reasons for the better performance achieved through sensor fusion were identified from this study.
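
    The decision-level step can be sketched as follows; the probability vectors are dummies standing in for the five trained per-modality CNNs, and the two fusion rules shown (probability averaging and majority voting) are common choices rather than the paper's exact methods.

```python
import numpy as np

rng = np.random.default_rng(1)
n_actions = 7  # the assembly step is decomposed into a sequence of 7 actions

# Dummy per-class probability vectors from the five per-modality models.
model_probs = [rng.dirichlet(np.ones(n_actions)) for _ in range(5)]

# Soft fusion: average the per-model class probabilities, then take argmax.
pred_soft = int(np.argmax(np.mean(model_probs, axis=0)))

# Hard fusion: majority vote over the per-model argmax decisions.
votes = [int(np.argmax(p)) for p in model_probs]
pred_hard = max(set(votes), key=votes.count)
```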

    Kinect vs. low-cost inertial sensing for gesture recognition

    In this paper, we investigate efficient recognition of human gestures and movements from multimedia and multimodal data, including the Microsoft Kinect and translational and rotational acceleration and velocity from wearable inertial sensors. We first present a system that automatically classifies a large range of activities (17 different gestures) using a random forest decision tree. Our system achieves near real-time recognition by selecting the sensors that contribute most to a particular task. Features extracted from the multimodal sensor data were used to train and evaluate a customized classifier. This novel technique successfully classifies various gestures with up to 91% overall accuracy on a publicly available data set. Secondly, we investigate a wide range of motion capture modalities and compare their gesture recognition accuracy using our proposed approach. We conclude that gesture recognition can be performed effectively with an approach that overcomes many of the limitations associated with the Kinect, potentially paving the way for low-cost gesture recognition in unconstrained environments.
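
    A minimal sketch of this kind of classifier, using scikit-learn's random forest on synthetic stand-ins for the fused Kinect and inertial feature windows (the feature dimensions and labels below are invented):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 24))     # placeholder feature vectors, one per window
y = rng.integers(0, 17, size=500)  # 17 gesture classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)
clf = RandomForestClassifier(n_estimators=100, random_state=2).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```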

    Recognition of elementary arm movements using orientation of a tri-axial accelerometer located near the wrist

    In this paper we present a method for recognising three fundamental movements of the human arm (reach and retrieve, lift cup to mouth, rotation of the arm) by determining the orientation of a tri-axial accelerometer located near the wrist. Our objective is to detect the occurrence of such movements performed with the impaired arm of a stroke patient during normal daily activities as a means to assess their rehabilitation. The method relies on accurately mapping transitions of predefined, standard orientations of the accelerometer to corresponding elementary arm movements. To evaluate the technique, kinematic data was collected from four healthy subjects and four stroke patients as they performed a number of activities involved in a representative activity of daily living, 'making-a-cup-of-tea'. Our experimental results show that the proposed method can independently recognise all three of the elementary upper limb movements investigated with accuracies in the range 91–99% for healthy subjects and 70–85% for stroke patients
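
    The core idea can be illustrated with a short sketch: at rest the accelerometer reads mostly gravity, so the dominant axis and its sign give a coarse orientation label, and an elementary movement shows up as a transition between labels. The label set and transition table below are hypothetical, not the paper's mapping.

```python
import numpy as np

def orientation_label(sample):
    """sample: (ax, ay, az) in g at rest. Returns e.g. '+z' for the dominant axis."""
    sample = np.asarray(sample)
    axis = int(np.argmax(np.abs(sample)))
    sign = "+" if sample[axis] >= 0 else "-"
    return sign + "xyz"[axis]

# Hypothetical mapping from orientation transitions to elementary movements.
TRANSITIONS = {("+x", "+z"): "lift cup to mouth"}

def detect(prev_sample, cur_sample):
    return TRANSITIONS.get((orientation_label(prev_sample), orientation_label(cur_sample)))

print(detect((0.9, 0.1, 0.2), (0.1, 0.0, 0.95)))  # -> lift cup to mouth
```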

    Hum Factors

    Objective: To gather information on (a) the types of wearable sensors, particularly personal activity monitors, currently used by occupational safety and health (OSH) professionals, (b) the potential benefits of using such technologies in the workplace, and (c) the perceived barriers preventing widespread adoption of wearable sensors in industry. Background: Wearable sensors are increasingly promoted as a means to improve employee health and well-being, and there is mounting evidence supporting their use as exposure assessment and personal health tools. Despite this, many workplaces have been hesitant to adopt these technologies. Methods: An electronic survey was emailed to 28,428 registered members of the American Society of Safety Engineers (ASSE) and 1,302 professionals certified by the Board of Certification in Professional Ergonomics (BCPE). Results: A total of 952 valid responses were returned. Over half of respondents were in favor of using wearable sensors to track OSH-related risk factors and relevant exposure metrics at their workplaces. However, concerns regarding employee privacy and confidentiality of collected data, employee compliance, sensor durability, the cost/benefit ratio of using wearables, and good manufacturing practice requirements were described as barriers precluding adoption. Conclusion: Broad adoption of wearable technologies appears to depend largely on the scientific community's ability to successfully address the identified barriers. Application: Investigators may use the information provided to develop research studies that better address OSH practitioner concerns and help technology developers operationalize wearable sensors to improve employee health and well-being. (Supported by grant T42 OH008436, NIOSH/CDC/HHS, United States.)

    A Hybrid Hierarchical Framework for Gym Physical Activity Recognition and Measurement Using Wearable Sensors

    Due to its many beneficial effects on physical and mental health and its strong association with fitness and rehabilitation programs, physical activity (PA) recognition has been considered a key paradigm for Internet of Things (IoT) healthcare. Traditional PA recognition techniques focus on repeated aerobic exercises or stationary PA. As a crucial indicator of human health, PA covers a range of bodily movement from aerobic to anaerobic, all of which may bring health benefits. However, existing PA recognition approaches are mostly designed for specific scenarios and often lack extensibility to other areas, limiting their usefulness. In this paper, we attempt to detect gym physical activities (GPAs) in addition to traditional PA using acceleration data. A two-layer recognition framework is proposed that can classify aerobic, sedentary and free weight activities, and also count repetitions and sets for the free weight exercises. In the first layer, a one-class SVM (OC-SVM) coarsely separates free weight from non-free weight activities. In the second layer, a neural network (NN) recognizes aerobic and sedentary activities, while a hidden Markov model (HMM) provides a further classification within free weight activities. The performance of the framework was tested on 10 healthy subjects (age: 30 ± 5 years; BMI: 25 ± 5.5 kg/m²) and compared with some typical classifiers. The results indicate that the proposed framework performs better at recognizing and measuring GPAs than the other approaches, and it can potentially be extended to support more types of PA recognition in complex applications.
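
    Structurally, the two-layer dispatch might look like the sketch below; the OC-SVM gate uses real scikit-learn, while the NN and HMM stages are stubs standing in for the paper's trained models, and all feature values are placeholders.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
free_weight_train = rng.normal(loc=2.0, size=(200, 6))  # placeholder feature windows

# Layer 1: one-class SVM trained on free weight activity only.
gate = OneClassSVM(nu=0.1, gamma="scale").fit(free_weight_train)

def hmm_label(x):  # stub standing in for the paper's HMM stage
    return "free weight exercise + rep/set counting"

def nn_label(x):   # stub standing in for the paper's NN stage
    return "aerobic or sedentary activity"

def classify_window(x):
    # OC-SVM predicts +1 for inliers (free weight) and -1 for outliers.
    if gate.predict(x.reshape(1, -1))[0] == 1:
        return hmm_label(x)
    return nn_label(x)

print(classify_window(rng.normal(loc=2.0, size=6)))
```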

    Towards automatic activity classification and movement assessment during a sports training session

    Motion analysis technologies have been widely used to monitor the potential for injury and to enhance athlete performance. However, most of these technologies are expensive, can only be used in laboratory environments, and examine only a few trials of each movement action. In this paper, we present a novel ambulatory motion analysis framework using wearable inertial sensors to accurately assess all of an athlete's activities in a real training environment. We first present a system that automatically classifies a large range of training activities using the discrete wavelet transform (DWT) in conjunction with a random forest classifier. The classifier successfully distinguishes the various activities with up to 98% accuracy. Secondly, a computationally efficient gradient descent algorithm is used to estimate the relative orientations of the wearable inertial sensors mounted on the shank, thigh and pelvis of a subject, from which the flexion-extension knee and hip angles are calculated. These angles, along with sacrum impact accelerations, are automatically extracted for each stride during jogging. Finally, normative data are generated and used to determine whether a subject's movement technique differs from the norm, in order to identify potential injury-related factors; for the joint angle data this is achieved using a curve-shift registration technique. It is envisaged that the proposed framework could be used for accurate, automatic sports activity classification and reliable movement technique evaluation in unconstrained environments, for both injury management and performance enhancement.
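
    The classification stage can be sketched as DWT sub-band statistics feeding a random forest; the wavelet choice, window length and summary features below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

def dwt_features(window, wavelet="db4", level=3):
    """Mean absolute value and energy of each DWT sub-band."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.array([stat(c) for c in coeffs
                     for stat in (lambda c: np.mean(np.abs(c)),
                                  lambda c: np.sum(c ** 2))])

rng = np.random.default_rng(4)
windows = rng.normal(size=(300, 128))  # placeholder 128-sample inertial windows
X = np.vstack([dwt_features(w) for w in windows])
y = rng.integers(0, 5, size=300)       # placeholder activity labels
clf = RandomForestClassifier(n_estimators=100, random_state=4).fit(X, y)
```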

    Bite detection and differentiation using templates of wrist motion

    We introduce a new algorithm for detecting bites during an eating activity based on template matching. The algorithm uses a template to model the motion of the wrist over a 6-second window centered on the time when a person takes a bite. We also determine whether different types of bites (for example, food vs. drink, or bites taken with different types of utensils) have different wrist motion templates. The method is applied to 22,383 bites and 5 different types of templates are built. We then describe a method to recognize different types of bites using the set of templates; the obtained accuracy was 46%. Finally, we describe a method to detect bites using the set of templates and compare its accuracy to the original threshold-based algorithm, obtaining a positive predictive value of 75% and a true positive rate of 47% across all bites.
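
    A hedged sketch of the template-matching idea: compare a 6-second wrist-motion window against stored per-type templates and return the nearest one. The sampling rate, distance metric and template contents are assumptions; the paper's template construction and detection thresholds are not reproduced.

```python
import numpy as np

FS = 15       # assumed sampling rate (Hz); a 6 s window is then 90 samples
WIN = 6 * FS

def nearest_template(window, templates):
    """window: (WIN,) wrist-motion signal; templates: dict name -> (WIN,) array."""
    dists = {name: np.linalg.norm(window - t) for name, t in templates.items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(5)
templates = {"fork": rng.normal(size=WIN), "drink": rng.normal(size=WIN)}
print(nearest_template(rng.normal(size=WIN), templates))
```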