Human Gait Analysis in Neurodegenerative Diseases: a Review
This paper reviews the recent literature on technologies and methodologies for quantitative human gait analysis in the context of neurodegenerative diseases. The use of technological instruments can be of great support in both clinical diagnosis and severity assessment of these pathologies. In this paper, sensors, features and processing methodologies have been reviewed in order to provide a highly consistent work that explores the issues related to gait analysis. First, the phases of the human gait cycle are briefly explained, along with some abnormal gait patterns typical of certain neurodegenerative diseases. The work continues with a survey of the publicly available datasets principally used for comparing results. The paper then reports the most common processing techniques for feature selection and extraction as well as for classification and clustering. Finally, a conclusive discussion on current open problems and future directions is outlined.
Wearable inertial sensors for human movement analysis
Introduction: The present review aims to provide an overview of the most common uses of wearable inertial sensors in the field of clinical human movement analysis. Areas covered: Six main areas of application are analysed: gait analysis, stabilometry, instrumented clinical tests, upper body mobility assessment, daily-life activity monitoring and tremor assessment. Each area is analysed from both a methodological and an applicative point of view. The focus on the methodological approaches is meant to give an idea of the computational complexity behind a variable/parameter/index of interest, so that the reader is aware of the reliability of the approach. The focus on the applications is meant to provide a practical guide advising clinicians on how inertial sensors can help them in their clinical practice. Expert commentary: Less expensive and easier to use than other systems employed in human movement analysis, wearable sensors have evolved to the point that they can be considered ready to become part of routine clinical practice.
Automatic visual detection of human behavior: a review from 2000 to 2014
Due to advances in information technology (e.g., digital video cameras, ubiquitous sensors), the automatic detection of human behaviors from video has become an active research topic. In this paper, we perform a systematic literature review on this topic covering the period from 2000 to 2014 and a selection of 193 papers retrieved from six major scientific publishers. The selected papers were classified into three main subjects: detection techniques, datasets and applications. The detection techniques were divided into four categories (initialization, tracking, pose estimation and recognition). The list of datasets includes eight examples (e.g., Hollywood action). Finally, several application areas were identified, including human detection, abnormal activity detection, action recognition, player modeling and pedestrian detection. Our analysis provides a road map to guide future research on designing automatic visual human behavior detection systems. This work is funded by the Portuguese Foundation for Science and Technology (FCT - Fundacao para a Ciencia e a Tecnologia) under research grant SFRH/BD/84939/2012.
Recognizing human activity using RGBD data
Traditional computer vision algorithms try to understand the world using visible light cameras. However, this type of data source has inherent limitations. First, visible light images are sensitive to illumination changes and background clutter. Second, the 3D structural information of the scene is lost when the 3D world is projected onto 2D images, and recovering it from 2D images is a challenging problem. Range sensors, which capture the 3D characteristics of a scene, have existed for over thirty years; however, earlier range sensors were either too expensive, difficult to use in human environments, slow at acquiring data, or provided poor distance estimates. Recently, easy access to RGBD data at real-time frame rates has led to a revolution in perception and inspired much new research using RGBD data. I propose algorithms to detect persons and understand their activities using RGBD data, and demonstrate that solutions to many computer vision problems may be improved with the added depth channel. The 3D structural information may give rise to algorithms with real-time and view-invariant properties in a faster and easier fashion. When both data sources are available, features extracted from the depth channel may be combined with traditional features computed from the RGB channels to generate more robust systems with enhanced recognition abilities that can deal with more challenging scenarios. As a starting point, the first problem is to find persons of various poses in the scene, whether moving or static. Localizing humans from RGB images is limited by lighting conditions and background clutter; depth images give alternative ways to find the humans in the scene. In the past, detection of humans from range data was usually achieved by tracking, which does not work for indoor person detection.
In this thesis, I propose a model-based approach to detect persons using the structural information embedded in the depth image. I propose a 2D head contour model and a 3D head surface model to look for the head-shoulder part of the person. A segmentation scheme is then proposed to segment the full human body from the background and extract its contour, and I also give a tracking algorithm based on the detection result. I further investigate the recognition of human actions and activities, proposing two features for recognizing human activities. The first feature is drawn from the skeletal joint locations estimated from a depth image. It is a compact, view-invariant representation of the human posture called histograms of 3D joint locations (HOJ3D), and the whole algorithm runs in real time, so this feature may benefit many applications that need a fast estimate of the posture and action of the human subject. The second feature is a spatio-temporal feature for depth video called the Depth Cuboid Similarity Feature (DCSF). Interest points are extracted using an algorithm that effectively suppresses noise and finds salient human motions, and a DCSF is extracted centered on each interest point to form the description of the video contents. This descriptor can be used to recognize activities with no dependence on skeleton information or pre-processing steps such as motion segmentation, tracking, or even image de-noising or hole-filling, making it more flexible and widely applicable to many scenarios. Finally, all the features developed herein are combined to solve a novel problem: first-person human activity recognition using RGBD data. Traditional activity recognition algorithms focus on recognizing activities from a third-person perspective; I propose to recognize activities from a first-person perspective with RGBD data.
This task is very novel and extremely challenging due to the large amount of camera motion caused either by self-exploration or by the response to an interaction. I extract 3D optical flow features as motion descriptors, 3D skeletal joint features as posture descriptors, and spatio-temporal features as local appearance descriptors to describe the first-person videos. To address the ego-motion of the camera, I propose an attention mask that guides the recognition procedure and separates the features in the ego-motion region from those in the independent-motion region. The 3D features are very useful for summarizing the discriminative information of the activities. In addition, combining the 3D features with existing 2D features brings more robust recognition results and makes the algorithm capable of dealing with more challenging cases.
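The HOJ3D representation described above histograms skeletal joint positions around a reference joint so that the posture descriptor becomes compact and largely view-invariant. A minimal NumPy sketch of the idea follows; the bin counts, the choice of reference joint, and the omission of the original work's refinement steps are illustrative assumptions, not the thesis's exact algorithm:

```python
import numpy as np

def hoj3d_feature(joints, ref_idx=0, n_az=12, n_el=6):
    """Sketch of a HOJ3D-style descriptor: histogram of 3D joint
    locations in spherical (azimuth x elevation) bins around a
    reference joint such as the hip center. Bin counts here are
    illustrative, not the original paper's configuration."""
    rel = joints - joints[ref_idx]            # translate to reference joint
    rel = np.delete(rel, ref_idx, axis=0)     # drop the reference itself
    az = np.arctan2(rel[:, 1], rel[:, 0])     # azimuth in [-pi, pi]
    r = np.linalg.norm(rel, axis=1)
    el = np.arcsin(np.clip(rel[:, 2] / np.maximum(r, 1e-9), -1.0, 1.0))
    hist, _, _ = np.histogram2d(
        az, el,
        bins=[n_az, n_el],
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]],
    )
    h = hist.ravel()
    return h / max(h.sum(), 1.0)              # normalize to a distribution

# toy 5-joint skeleton (hypothetical coordinates)
joints = np.array([[0.0, 0.0, 0.0],
                   [0.1, 0.0, 0.5],
                   [-0.1, 0.0, 0.5],
                   [0.2, 0.1, -0.4],
                   [-0.2, -0.1, -0.4]])
feat = hoj3d_feature(joints)
print(feat.shape)  # (72,)
```

Because the histogram is built from joint positions relative to the body's own reference frame, a per-frame descriptor of fixed length is obtained regardless of where the subject stands in the scene.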
Application-Driven AI Paradigm for Human Action Recognition
Human action recognition in computer vision has been widely studied in recent years. However, most algorithms address only specific actions, often at a high computational cost, which is not suitable for practical applications where multiple actions must be identified at low computational cost. To meet various application scenarios, this paper presents a unified human action recognition framework composed of two modules: multi-form human detection and the corresponding action classification. An open-source dataset is constructed to train a multi-form human detection model that distinguishes a human being's whole body, upper body or partial body, and the subsequent action classification model recognizes actions such as falling, sleeping or being on duty. Experimental results show that the unified framework is effective for various application scenarios, and it is expected to be a new application-driven AI paradigm for human action recognition.
Evaluation of a skeleton-based method for human activity recognition on a large-scale RGB-D dataset
Paper accepted for presentation at the 2nd IET International Conference on Technologies for Active and Assisted Living (TechAAL), 24-25 October 2016, IET London: Savoy Place. Low-cost RGB-D sensors have been used extensively in the field of Human Action Recognition. The availability of skeleton joints simplifies the process of feature extraction from depth or RGB frames, and this has fostered the development of activity recognition algorithms that use skeletons as input data. This work evaluates the performance of a skeleton-based algorithm for Human Action Recognition on a large-scale dataset.
The algorithm exploits the bag of key poses method, in which a sequence of skeleton features is represented as a set of key poses. A temporal pyramid is adopted to model the temporal structure of the key poses, represented using histograms. Finally, a multi-class SVM performs the classification task, obtaining promising results on the large-scale NTU RGB+D dataset. The authors would like to acknowledge the contribution of the COST Action IC1303 AAPELE (Architectures, Algorithms and Platforms for Enhanced Living Environments).
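The bag-of-key-poses pipeline above quantizes each skeleton frame against a dictionary of key poses and accumulates histograms over a temporal pyramid before handing the concatenated feature to an SVM. A NumPy-only sketch of the quantization and pyramid stages, assuming the key poses have already been learned (e.g., by clustering) and leaving the SVM to a standard library; dimensions and pyramid depth are illustrative:

```python
import numpy as np

def key_pose_histogram(seq, key_poses):
    """Assign each skeleton frame in `seq` to its nearest key pose and
    return the normalized histogram of assignments (one 'bag' over the
    given temporal segment)."""
    # pairwise distances: (frames, key poses)
    d = np.linalg.norm(seq[:, None, :] - key_poses[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(key_poses)).astype(float)
    return hist / hist.sum()

def temporal_pyramid(seq, key_poses, levels=2):
    """Concatenate key-pose histograms over a pyramid of temporal
    segments: level 0 covers the whole sequence, level 1 its two
    halves, and so on."""
    feats = []
    for lvl in range(levels):
        for part in np.array_split(seq, 2 ** lvl):
            feats.append(key_pose_histogram(part, key_poses))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
key_poses = rng.normal(size=(4, 6))   # 4 hypothetical key poses, 6-D skeleton feature
seq = rng.normal(size=(20, 6))        # 20 skeleton frames of one action sample
feat = temporal_pyramid(seq, key_poses)
print(feat.shape)  # (12,) -> 4 key poses x (1 + 2) pyramid segments
```

The fixed-length pyramid feature can then be fed to any multi-class classifier; the evaluated method uses a multi-class SVM for the final decision.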