
    Automated Markerless Extraction of Walking People Using Deformable Contour Models

    We develop a new automated markerless motion capture system for the analysis of walking people. We employ global evidence-gathering techniques guided by biomechanical analysis to robustly extract articulated motion. This forms a basis for new deformable contour models, using local image cues to capture shape and motion at a more detailed level. We extend the greedy snake formulation to include temporal constraints and occlusion modelling, increasing the capability of this technique when dealing with cluttered and self-occluding extraction targets. The approach is evaluated on a large database of indoor and outdoor video data, demonstrating fast and autonomous motion capture for walking people.
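    A minimal sketch of the kind of greedy-snake update described above, with an added temporal term that discourages contour points from drifting away from their position in the previous frame. The energy weights, search window and temporal penalty here are illustrative assumptions, not the paper's actual formulation.

        import numpy as np

        def greedy_snake_step(contour, prev_contour, grad_mag,
                              alpha=1.0, beta=1.0, gamma=1.2, tau=0.5, win=2):
            """One greedy pass: move each contour point to the lowest-energy
            pixel in its (2*win+1)^2 neighbourhood (hypothetical weights)."""
            n = len(contour)
            mean_spacing = np.mean(np.linalg.norm(np.diff(contour, axis=0), axis=1))
            new_contour = contour.copy()
            for i in range(n):
                prev_pt, next_pt = contour[i - 1], contour[(i + 1) % n]
                best, best_e = contour[i], np.inf
                for dy in range(-win, win + 1):
                    for dx in range(-win, win + 1):
                        cand = contour[i] + np.array([dy, dx])
                        y, x = int(cand[0]), int(cand[1])
                        if not (0 <= y < grad_mag.shape[0] and 0 <= x < grad_mag.shape[1]):
                            continue
                        e_cont = abs(mean_spacing - np.linalg.norm(cand - prev_pt))  # continuity
                        e_curv = np.linalg.norm(prev_pt - 2 * cand + next_pt)        # smoothness
                        e_img = -grad_mag[y, x]                                      # edge attraction
                        e_temp = np.linalg.norm(cand - prev_contour[i])              # temporal coherence
                        e = alpha * e_cont + beta * e_curv + gamma * e_img + tau * e_temp
                        if e < best_e:
                            best_e, best = e, cand
                new_contour[i] = best
            return new_contour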

    On Using Gait in Forensic Biometrics

    Given the continuing advances in gait biometrics, it appears prudent to investigate the translation of these techniques for forensic use. We address the question of the confidence that can be placed in a match between any two such measurements. We use the locations of the ankle, knee and hip to derive a measure of the match between walking subjects in image sequences. The Instantaneous Posture Match algorithm, which combines Haar templates, kinematics and anthropometric knowledge, is used to determine these joint locations. This is demonstrated using real CCTV recorded at Gatwick Airport, laboratory images from the multi-view CASIA-B dataset and an example of real scene-of-crime video. To assess the measurement confidence we study the mean intra- and inter-match scores as a function of database size. These measures converge to constant and separate values, indicating that the match measure derived from individual comparisons is considerably smaller than the average match measure across a population.
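    The following sketch shows one way to turn ankle, knee and hip locations into a simple match score between two walking subjects; the normalisation and distance used here are illustrative assumptions and not the exact Instantaneous Posture Match formulation. Averaging such scores over same-subject pairs (intra-match) and different-subject pairs (inter-match) for growing database sizes gives the convergence behaviour discussed above.

        import numpy as np

        def posture_match_score(seq_a, seq_b):
            """seq_a, seq_b: arrays of shape (frames, 3, 2) holding image
            coordinates of hip, knee and ankle. Lower score = closer match."""
            def normalise(seq):
                centred = seq - seq[:, 0:1, :]                                # hip as origin
                scale = np.linalg.norm(seq[:, 0, :] - seq[:, 2, :], axis=1)   # hip-to-ankle length
                return centred / scale[:, None, None]
            a = normalise(np.asarray(seq_a, float))
            b = normalise(np.asarray(seq_b, float))
            frames = min(len(a), len(b))
            return float(np.mean(np.linalg.norm(a[:frames] - b[:frames], axis=2)))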

    Markerless View Independent Gait Analysis with Self-camera Calibration

    We present a new method for viewpoint-independent markerless gait analysis. The system uses a single camera, does not require camera calibration and works with a wide range of walking directions. These properties make the proposed method particularly suitable for identification by gait, where the advantages of complete unobtrusiveness, remoteness and covertness of the biometric system preclude the availability of camera information and the use of marker-based technology. Tests have been performed on more than 200 video sequences with subjects walking freely along different directions. The results show that markerless gait analysis can be achieved without any knowledge of internal or external camera parameters, and that the obtained data can be used for gait biometric purposes. The performance of the proposed method is particularly encouraging for its application in surveillance scenarios.

    Towards automated visual surveillance using gait for identity recognition and tracking across multiple non-intersecting cameras

    Although personal privacy has become a major concern, surveillance technology is now becoming ubiquitous in modern society, mainly because of the increasing number of crimes and the need to provide secure and safe environments. Recent research has confirmed the possibility of recognizing people by the way they walk, i.e. their gait. The aim of this study is to investigate the use of gait for detecting people as well as identifying them across different cameras. We present a new approach for tracking and identifying people across different non-intersecting, un-calibrated, stationary cameras based on gait analysis. A vision-based markerless extraction method is deployed to derive gait kinematics as well as anthropometric measurements in order to produce a gait signature. The novelty of our approach is motivated by recent research in biometrics and forensic analysis using gait. The experimental results confirm the robustness of our approach in detecting walking people and its ability to extract gait features from different camera viewpoints, achieving an identity recognition rate of 73.6% over 2270 processed video sequences. Furthermore, the results confirm the potential of the proposed method for identity tracking in real surveillance systems, recognizing walking individuals across different views with an average recognition rate of 92.5% for cross-camera matching between two non-overlapping views.
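    As an illustration of the cross-camera matching idea, the sketch below builds a simple gait signature from joint-angle statistics and static body measurements, and matches a probe from one camera against a gallery from another with a nearest-neighbour rule. The feature layout and the function names are assumptions for illustration, not the actual signature used in this work.

        import numpy as np

        def gait_signature(joint_angles, anthropometrics):
            """joint_angles: (frames, n_angles) hip/knee angle trajectories;
            anthropometrics: 1-D array of static measurements (e.g. relative limb lengths)."""
            dynamic = np.concatenate([joint_angles.mean(axis=0), joint_angles.std(axis=0)])
            return np.concatenate([dynamic, np.asarray(anthropometrics, float)])

        def match_across_cameras(gallery, probe):
            """gallery: {subject_id: signature} from camera A; probe: signature from camera B.
            Returns the gallery identity whose signature is closest to the probe."""
            ids = list(gallery)
            dists = [np.linalg.norm(gallery[i] - probe) for i in ids]
            return ids[int(np.argmin(dists))]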

    Covariate Analysis for View-point Independent Gait Recognition

    Many studies have shown that gait can be deployed as a biometric, but few have addressed the effects of view-point and covariate factors on the recognition process. We describe the first analysis that combines covariate analysis with view-point invariant gait recognition, based on a model-based pose estimation approach from a single un-calibrated camera. A set of experiments is carried out to explore how factors including clothing, carrying conditions and view-point affect identification using gait. On a covariate-based probe dataset of over 270 samples, a recognition rate of 73.4% is achieved using a KNN classifier. This confirms that identification using dynamic gait features remains achievable with a good recognition rate even under different covariate conditions. As such, this is an important step in translating gait recognition research from the laboratory to a surveillance environment.
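    A hedged sketch of how a recognition rate under different covariates could be estimated with a k-NN classifier; the feature vectors X and labels y are assumed to come from a gait extraction step that is not shown, and k=1 is an illustrative choice.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def recognition_rate(X_gallery, y_gallery, X_probe, y_probe, k=1):
            """Fraction of probe samples whose nearest gallery neighbours share their label."""
            knn = KNeighborsClassifier(n_neighbors=k)
            knn.fit(X_gallery, y_gallery)
            return float(np.mean(knn.predict(X_probe) == y_probe))

    Calling recognition_rate separately on probe subsets grouped by clothing, carrying condition or view-point would show how each covariate affects identification.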

    Markerless human pose estimation for biomedical applications: a survey

    Markerless Human Pose Estimation (HPE) has proved its potential to support decision making and assessment in many fields of application. HPE is often preferred to traditional marker-based motion capture systems due to the ease of setup, portability and affordable cost of the technology. However, the exploitation of HPE in biomedical applications is still under investigation. This review aims to provide an overview of current biomedical applications of HPE. In this paper, we examine the main features of HPE approaches and discuss whether or not those features are of interest to biomedical applications. We also identify the areas where HPE is already in use and present the peculiarities and trends followed by researchers and practitioners. We include 25 approaches to HPE and more than 40 studies of HPE applied to motor development assessment, neuromuscular rehabilitation, and gait and posture analysis. We conclude that markerless HPE offers great potential for extending diagnosis and rehabilitation outside hospitals and clinics, toward the paradigm of remote medical care.

    Markerless gait analysis vision system for real-time gait monitoring

    In this paper a vision-based, contactless and markerless method for gait evaluation is proposed and validated in different experimental setups against a commercial motion capture system (Vicon) and an inertial gait analysis tool (GaitShoes). While the development goal is its integration into the ASBGo Smart Walker platform, only an inexpensive depth camera is required. The method is shown to produce reasonable results when computing gait metrics in real time, across different experimental setups, walker types, vision hardware and walking scenarios. Performance is evaluated through RMSD values for several gait metrics. The results illustrate that the proposed approach can be a valuable non-invasive, contactless and low-cost alternative to the gait analysis systems used in clinical rehabilitation environments. This work has been supported by the FEDER Funds through COMPETE 2020 (Programa Operacional Competitividade e Internacionalização, POCI) and P2020, with the reference project EML under Grant POCI-01-0247-FEDER-033067 and the reference project under Grant POCI-01-0145-FEDER-006941.
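    The RMSD comparison mentioned above can be sketched as follows, pairing each gait metric value from the vision system with the corresponding value from the reference system; the example numbers are hypothetical.

        import numpy as np

        def rmsd(estimated, reference):
            """Root-mean-square deviation between paired metric values."""
            estimated = np.asarray(estimated, float)
            reference = np.asarray(reference, float)
            return float(np.sqrt(np.mean((estimated - reference) ** 2)))

        # Hypothetical step lengths (m) for the same strides from both systems.
        print(rmsd([0.62, 0.65, 0.60], [0.60, 0.66, 0.61]))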

    Estimation and validation of temporal gait features using a markerless 2D video system

    Background and Objective: Estimation of temporal gait features, such as stance time, swing time and gait cycle time, can be used for clinical evaluation of various patient groups with gait pathologies, such as Parkinson's disease, neuropathy, hemiplegia and diplegia. Most clinical laboratories employ an optoelectronic motion capture system to acquire such features. However, the operation of these systems requires specially trained operators, a controlled environment and the attachment of reflective markers to the patient's body. To allow the estimation of the same features in a daily-life setting, this paper presents a novel vision-based system whose operation does not require the presence of skilled technicians or markers and uses a single 2D camera. Method: The proposed system takes as input a 2D video, computes the silhouettes of the walking person, and then estimates key biomedical gait indicators, such as the initial foot contact with the ground and the toe-off instants, from which several other temporal gait features can be derived. Results: The proposed system is tested on two datasets: (i) a public gait dataset made available by CASIA, which contains 20 users with 4 sequences per user; and (ii) a dataset acquired simultaneously by a marker-based optoelectronic motion capture system and a simple 2D video camera, containing 10 users with 5 sequences per user. For the CASIA gait dataset A the relevant temporal biomedical gait indicators were manually annotated, and the proposed automated video analysis system achieved an accuracy of 99% in their identification. It was able to obtain accurate estimates even on segmented silhouettes where state-of-the-art markerless 2D video-based systems fail. For the second database, the temporal features obtained by the proposed system achieved an average intra-class correlation coefficient of 0.86 when compared to the "gold standard" optoelectronic motion capture system. Conclusions: The proposed markerless 2D video-based system can be used to evaluate patients' gait without requiring complex laboratory settings and without the need to physically attach sensors or markers to the patients. The good accuracy of the results suggests that the proposed system can be used as an alternative to optoelectronic motion capture in non-laboratory environments, which can enable more regular clinical evaluations.
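    To make the derivation of temporal features concrete, the sketch below computes stance, swing and gait cycle times from detected initial-contact (heel-strike) and toe-off frame indices for one foot; the event detection from silhouettes is assumed and not shown, and the example events and frame rate are hypothetical.

        def temporal_gait_features(heel_strikes, toe_offs, fps):
            """heel_strikes, toe_offs: sorted frame indices for one foot."""
            features = []
            for i in range(len(heel_strikes) - 1):
                hs, next_hs = heel_strikes[i], heel_strikes[i + 1]
                to = next((t for t in toe_offs if t > hs), None)  # first toe-off after this contact
                if to is None or to >= next_hs:
                    continue  # skip cycles without a usable toe-off
                stance = (to - hs) / fps        # foot on the ground
                cycle = (next_hs - hs) / fps    # initial contact to next initial contact
                swing = cycle - stance          # foot in the air
                features.append({"stance_s": stance, "swing_s": swing, "cycle_s": cycle})
            return features

        # Hypothetical events at 25 fps:
        print(temporal_gait_features([10, 40, 71], [28, 59], fps=25))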

    Using the Microsoft Kinect to assess human bimanual coordination

    Optical marker-based systems are the gold standard for capturing three-dimensional (3D) human kinematics. However, these systems have various drawbacks, including time-consuming marker placement and soft-tissue movement artefact, and they are prohibitively expensive and non-portable. The Microsoft Kinect is an inexpensive, portable depth camera that can be used to capture 3D human movement kinematics. Numerous investigations have assessed the Kinect's ability to capture postural control and gait, but to date no study has evaluated its capabilities for measuring spatiotemporal coordination. In order to investigate human coordination and coordination stability with the Kinect, a well-studied bimanual coordination paradigm (Kelso, 1984; Kelso, Scholz, & Schöner, 1986) was adapted. Nineteen participants performed ten trials of coordinated hand movements in either in-phase or anti-phase patterns of coordination to the beat of a metronome which was incrementally sped up and slowed down. Continuous relative phase (CRP) and the standard deviation of CRP were used to assess coordination and coordination stability, respectively. Data from the Kinect were compared to a Vicon motion capture system using a mixed-model, repeated-measures analysis of variance and intraclass correlation coefficients (ICC(2,1)). The Kinect significantly underestimated CRP for the anti-phase coordination pattern (p < .0001) and overestimated it for the in-phase pattern (p < .0001). However, a high ICC value (r = .097) was found between the systems. For the standard deviation of CRP, the Kinect exhibited significantly higher variability than the Vicon (p < .0001) but was able to distinguish significant differences between patterns of coordination, with anti-phase variability being higher than in-phase (p < .0001). Additionally, the Kinect was unable to accurately capture the structure of coordination stability for the anti-phase pattern. Finally, agreement between the systems using the ICC was r = .37. In conclusion, the Kinect was unable to accurately capture mean CRP. However, the high ICC between the two systems is promising, and the Kinect was able to distinguish between the coordination stability of in-phase and anti-phase coordination. The structure of variability as movement speed increased was nonetheless dissimilar to the Vicon, particularly for the anti-phase pattern. Some aspects of coordination are captured well by the Kinect while others are not: detecting differences between bimanual coordination patterns and the stability of those patterns can be achieved with the Kinect, but researchers interested in the structure of coordination stability should exercise caution, since poor agreement was found between systems.
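    A sketch of one common way to compute continuous relative phase (CRP) between two hand displacement signals, here via the Hilbert transform; the original study may have used phase-plane angles instead, so treat this as an illustrative variant rather than the study's exact pipeline.

        import numpy as np
        from scipy.signal import hilbert

        def continuous_relative_phase(x_left, x_right):
            """x_left, x_right: 1-D displacement time series of the two hands."""
            def phase(sig):
                sig = np.asarray(sig, float)
                sig = sig - sig.mean()
                return np.unwrap(np.angle(hilbert(sig)))
            return np.degrees(phase(x_left) - phase(x_right))  # ~0 deg in-phase, ~180 deg anti-phase

    Coordination stability per trial can then be summarised as the standard deviation of the returned CRP series.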