An Examination of Wearable Sensors and Video Data Capture for Human Exercise Classification
Wearable sensors such as Inertial Measurement Units (IMUs) are often used to
assess the performance of human exercise. Common approaches use handcrafted
features based on domain expertise or automatically extracted features using
time series analysis. Multiple sensors are required to achieve high
classification accuracy, which is not very practical. These sensors require
calibration and synchronization and may lead to discomfort over longer time
periods. Recent work utilizing computer vision techniques has shown similar
performance using video, without the need for manual feature engineering, and
avoiding some pitfalls such as sensor calibration and placement on the body. In
this paper, we compare the performance of IMUs to a video-based approach for
human exercise classification on two real-world datasets consisting of Military
Press and Rowing exercises. We compare the performance using a single camera
that captures video in the frontal view versus using 5 IMUs placed on different
parts of the body. We observe that an approach based on a single camera can
outperform a single IMU by 10 percentage points on average. Additionally, a
minimum of 3 IMUs are required to outperform a single camera. We observe that
working with the raw data using multivariate time series classifiers
outperforms traditional approaches based on handcrafted or automatically
extracted features. Finally, we show that an ensemble model combining the data
from a single camera with a single IMU outperforms either data modality. Our
work opens up new and more realistic avenues for this application, where a
video captured using a readily available smartphone camera, combined with a
single sensor, can be used for effective human exercise classification.
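The ensemble described above can be illustrated with a minimal late-fusion sketch, assuming each modality's classifier outputs per-class probabilities over the same label set; the function names and the 50/50 weighting are illustrative assumptions, not details from the paper:

```python
# Hedged sketch of late fusion between a camera-based and an IMU-based
# classifier. Both models are assumed to emit per-class probabilities.

def fuse_probabilities(camera_probs, imu_probs, weight=0.5):
    """Weighted average of two per-class probability lists."""
    assert len(camera_probs) == len(imu_probs)
    return [weight * c + (1.0 - weight) * i
            for c, i in zip(camera_probs, imu_probs)]

def predict(camera_probs, imu_probs, labels, weight=0.5):
    """Return the label with the highest fused probability."""
    fused = fuse_probabilities(camera_probs, imu_probs, weight)
    return labels[max(range(len(fused)), key=fused.__getitem__)]
```

In practice the fusion weight would be tuned on a validation set, since the paper reports that the camera alone is already stronger than a single IMU.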
Editorial for Vol. 26 No. 1
This first issue of CIT's Volume 26 (March 2018) brings one opinion paper and five regular papers, the latter from the broad areas of computer networks, image processing, cluster analysis, and information retrieval.
06241 Abstracts Collection -- Human Motion - Understanding, Modeling, Capture and Animation. 13th Workshop
From 11.06.06 to 16.06.06, the Dagstuhl Seminar 06241 "Human Motion - Understanding, Modeling, Capture and Animation. 13th Workshop 'Theoretical Foundations of Computer Vision'" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general.
POSE ESTIMATION AND ACTION RECOGNITION IN SPORTS AND FITNESS
The emergence of large datasets and major improvements in Deep Learning have led to many real-world applications, concentrated so far in the automotive, mobile, stock market, and healthcare sectors. Although Deep Learning has strong foundations across many areas, it has seen few applications in Sports, Fitness, or Injury Rehabilitation, domains that could benefit greatly from it. For example, if you are performing a workout and need to evaluate your form but lack access to an instructor, an Artificial Intelligence agent could provide real-time feedback through your laptop or phone. Our goal in this research study is therefore to lay the foundation for an exercise feedback application by comparing two computer vision approaches: pose estimation and action recognition. The latter is covered in more depth, as we provide an end-to-end approach, while the former is used as a benchmark. The action recognition pipeline covers data collection, labeling, and organization; model training; and integration with real-time data to give the user feedback. Our testing and analysis focus on squats and push-ups. Our best model achieved an accuracy of 79% on a validation set of 391 squatting images from the PennAction dataset for squat exercise action recognition.
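One common ingredient of real-time exercise feedback of this kind, sketched here as an assumption rather than the study's actual pipeline, is temporal smoothing of noisy per-frame action predictions; a sliding-window majority vote is a typical choice:

```python
# Hedged sketch: stabilize per-frame action labels before showing them
# to the user, so single-frame misclassifications do not flicker.
from collections import Counter, deque

def smooth_predictions(frame_labels, window=5):
    """Majority vote over a sliding window of per-frame labels."""
    buf = deque(maxlen=window)
    smoothed = []
    for label in frame_labels:
        buf.append(label)
        # most_common(1) returns the majority label in the window
        smoothed.append(Counter(buf).most_common(1)[0][0])
    return smoothed
```

A window of a few frames trades a small amount of latency for much steadier on-screen feedback.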
GaitPT: Skeletons Are All You Need For Gait Recognition
The analysis of patterns of walking is an important area of research that has
numerous applications in security, healthcare, sports and human-computer
interaction. Lately, walking patterns have been regarded as a unique
fingerprinting method for automatic person identification at a distance. In
this work, we propose a novel gait recognition architecture called Gait Pyramid
Transformer (GaitPT) that leverages pose estimation skeletons to capture unique
walking patterns, without relying on appearance information. GaitPT adopts a
hierarchical transformer architecture that effectively extracts both spatial
and temporal features of movement in an anatomically consistent manner, guided
by the structure of the human skeleton. Our results show that GaitPT achieves
state-of-the-art performance compared to other skeleton-based gait recognition
works, in both controlled and in-the-wild scenarios. GaitPT obtains 82.6%
average accuracy on CASIA-B, surpassing other works by a margin of 6%.
Moreover, it obtains 52.16% Rank-1 accuracy on GREW, outperforming both
skeleton-based and appearance-based approaches.
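A basic preprocessing step for skeleton-based recognition of this kind, sketched here as an assumption (GaitPT's actual normalization is not specified in the abstract), is centering each pose on a root joint so that the features encode body configuration rather than absolute image position:

```python
# Hedged sketch: translate 2D joint coordinates so the root joint
# (e.g., the pelvis, index 0 here by assumption) sits at the origin.

def normalize_skeleton(joints, root_index=0):
    """Center a list of (x, y) joints on the root joint."""
    rx, ry = joints[root_index]
    return [(x - rx, y - ry) for x, y in joints]
```

Applying this per frame makes a sequence of skeletons comparable regardless of where the person walks through the camera's field of view.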
Human behavior understanding for worker-centered intelligent manufacturing
“In a worker-centered intelligent manufacturing system, sensing and understanding the worker’s behavior are the primary tasks, essential for automatic performance evaluation and optimization, intelligent training and assistance, and human-robot collaboration. In this study, a worker-centered training and assistant system featuring self-awareness and active guidance is proposed for intelligent manufacturing. To understand hand behavior, a method is proposed for complex hand gesture recognition using Convolutional Neural Networks (CNN) with multiview augmentation and inference fusion, from depth images captured by a Microsoft Kinect. To sense and understand the worker more comprehensively, a multi-modal approach is proposed for worker activity recognition using Inertial Measurement Unit (IMU) signals from a Myo armband together with videos from a visual camera. To automatically learn the importance of different sensors, a novel attention-based approach is proposed for human activity recognition using multiple IMU sensors worn at different body locations. To deploy the developed algorithms on the factory floor, a real-time assembly operation recognition system is proposed with fog computing and transfer learning. The proposed worker-centered training and assistant system has been validated and has demonstrated feasibility and great potential for application in the manufacturing industry for frontline workers.
Our developed approaches have been evaluated: 1) the multi-view approach outperforms the state of the art on two public benchmark datasets, 2) the multi-modal approach achieves an accuracy of 97% on a worker activity dataset comprising 6 activities and achieves the best performance on a public dataset, 3) the attention-based method outperforms state-of-the-art methods on five publicly available datasets, and 4) the developed transfer learning model achieves a real-time recognition accuracy of 95% on a dataset comprising 10 worker operations”--Abstract, page iv
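The attention-based sensor weighting can be illustrated with a minimal sketch, assuming each IMU yields a fixed-length feature vector and a scalar relevance score; in the real model the scores would be learned end to end, and all names here are hypothetical:

```python
# Hedged sketch: fuse per-sensor feature vectors with softmax attention
# weights, so more informative sensors contribute more to the result.
import math

def softmax(scores):
    """Numerically stable softmax over a list of scalar scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(sensor_features, scores):
    """Weighted sum of per-sensor feature vectors (one score per sensor)."""
    weights = softmax(scores)
    dim = len(sensor_features[0])
    return [sum(w * f[d] for w, f in zip(weights, sensor_features))
            for d in range(dim)]
```

With equal scores this reduces to plain averaging; a trained score network would instead up-weight, say, a wrist-worn IMU during hand-intensive operations.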
Recent Advances in Motion Analysis
The advances in the technology and methodology for human movement capture and analysis over the last decade have been remarkable. Besides established approaches for kinematic, dynamic, and electromyographic (EMG) analysis carried out in the laboratory, more recently developed devices, such as wearables, inertial measurement units, ambient sensors, and cameras or depth sensors, have been adopted on a wide scale. Furthermore, computational intelligence (CI) methods, such as artificial neural networks, have recently emerged as promising tools for the development and application of intelligent systems in motion analysis. Thus, the synergy of classic instrumentation and novel smart devices and techniques has created unique capabilities in the continuous monitoring of motor behaviors in different fields, such as clinics, sports, and ergonomics. However, real-time sensing, signal processing, human activity recognition, and the characterization and interpretation of motion metrics and behaviors from sensor data still represent a challenging problem, not only in laboratories but also at home and in the community. This book addresses open research issues related to the improvement of classic approaches and the development of novel technologies and techniques in the domain of motion analysis in all its various fields of application.