
    Vision-based techniques for gait recognition

    Global security concerns have led to a proliferation of video surveillance devices. Intelligent surveillance systems seek to discover possible threats automatically and raise alerts. Being able to identify the surveyed subject helps determine its threat level. The current generation of devices provides digital video data that can be analysed for time-varying features to assist in the identification process. Commonly, people queue up to access a facility and approach a video camera in full frontal view. In this environment a variety of biometrics are available; for example, gait, which includes temporal features such as stride period, can be measured unobtrusively at a distance. The video data will also include facial features, which are short-range biometrics. In this way, biometrics can be combined naturally using a single set of data. In this paper we survey current techniques for gait recognition and modelling, together with the environments in which the research was conducted. We also discuss in detail the issues arising when deriving gait data, such as perspective and occlusion effects, together with the associated computer vision challenges of reliably tracking human movement. After highlighting these issues and challenges of gait processing, we discuss frameworks that combine gait with other biometrics. Finally, we motivate a novel paradigm in biometrics-based human recognition: the use of the fronto-normal view of gait as a far-range biometric combined with biometrics operating at a near distance.
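
    As a toy illustration of the temporal gait feature mentioned above (a stride period measured unobtrusively from video), the sketch below estimates the period from the autocorrelation of per-frame silhouette widths. It assumes binary silhouette masks have already been extracted; the width-based signal and the function names are illustrative choices, not a method taken from the surveyed literature.

    # Illustrative sketch only: estimate a stride period from the oscillation of
    # silhouette width over time, assuming per-frame binary masks are available.
    import numpy as np

    def silhouette_width(mask):
        """Width (in pixels) of the bounding box of a binary silhouette mask."""
        cols = np.where(mask.any(axis=0))[0]
        return 0 if cols.size == 0 else int(cols[-1] - cols[0] + 1)

    def stride_period(masks, fps):
        """Estimate the stride period in seconds from per-frame silhouette widths.

        `masks` is a sequence of 2D boolean arrays (one per frame) and `fps` is
        the video frame rate. The widths oscillate with the gait cycle, so the
        first non-trivial peak of their autocorrelation approximates the period.
        """
        w = np.array([silhouette_width(m) for m in masks], dtype=float)
        w -= w.mean()
        ac = np.correlate(w, w, mode="full")[w.size - 1:]  # one-sided autocorrelation
        for lag in range(1, ac.size - 1):                  # skip lag 0
            if ac[lag] >= ac[lag - 1] and ac[lag] >= ac[lag + 1]:
                return lag / fps
        return None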

    Evaluating perceptual maps of asymmetries for gait symmetry quantification and pathology detection

    Gait is an essential process of human activity and the result of coordinated effort between the neurological, articular and musculoskeletal systems. This explains why gait analysis is increasingly used for the (possibly early) diagnosis and prevention of many different types of diseases (neurological, muscular, orthopedic, etc.). This paper introduces a novel method to quickly visualize the parts of the body associated with a possible (temporally shift-invariant) asymmetry in a patient's gait, for daily clinical use. The goal is to provide a cheap and easy-to-use method to measure gait asymmetry and to display the asymmetric body parts in an intuitive, perceptually relevant manner. The method relies on an affordable consumer depth sensor, the Kinect, which is well suited to quick diagnosis in small medical rooms because it is easy to install and requires no markers, providing a fast, non-invasive assessment. The algorithm we introduce relies on the fact that a healthy walk has (temporally shift-invariant) symmetry properties in the coronal plane.
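
    As a minimal sketch of the coronal-plane symmetry idea described above (and not the authors' algorithm), the code below mirrors each depth frame about the vertical axis and compares it with the frame half a gait cycle later; for a symmetric, healthy gait the residual should be small. It assumes cropped depth frames centred on the subject and an estimate of the half-cycle length are already available.

    # Hedged sketch: per-pixel gait-asymmetry map from a Kinect-style depth sequence.
    import numpy as np

    def asymmetry_map(depth_frames, half_cycle):
        """Mean per-pixel asymmetry over a walking sequence.

        `depth_frames` has shape (T, H, W) in metres, cropped around the subject;
        `half_cycle` is half the gait period expressed in frames. Large values in
        the returned (H, W) map flag body regions that move asymmetrically.
        """
        frames = np.asarray(depth_frames, dtype=float)
        n = frames.shape[0] - half_cycle
        mirrored = frames[:n, :, ::-1]               # left-right flip of each frame
        shifted = frames[half_cycle:half_cycle + n]  # frames half a cycle later
        return np.abs(mirrored - shifted).mean(axis=0)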

    Combination of Accumulated Motion and Color Segmentation for Human Activity Analysis

    The automated analysis of activity in digital multimedia, and especially video, is gaining more and more importance due to the evolution of higher-level video processing systems and the development of relevant applications such as surveillance and sports. This paper presents a novel algorithm for the recognition and classification of human activities, which employs motion and color characteristics in a complementary manner, so as to extract the most information from both sources, and overcome their individual limitations. The proposed method accumulates the flow estimates in a video, and extracts “regions of activity” by processing their higher-order statistics. The shape of these activity areas can be used for the classification of the human activities and events taking place in a video and the subsequent extraction of higher-level semantics. Color segmentation of the active and static areas of each video frame is performed to complement this information. The color layers in the activity and background areas are compared using the earth mover's distance, in order to achieve accurate object segmentation. Thus, unlike much existing work on human activity analysis, the proposed approach is based on general color and motion processing methods, and not on specific models of the human body and its kinematics. The combined use of color and motion information increases the method's robustness to illumination variations and measurement noise. Consequently, the proposed approach can lead to higher-level information about human activities, but its applicability is not limited to specific human actions. We present experiments with various real video sequences, from sports and surveillance domains, to demonstrate the effectiveness of our approach.
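
    A hedged sketch of the ingredients named above, i.e. accumulation of optical-flow estimates, a higher-order statistic over the accumulated magnitudes, and an earth mover's distance between colour signatures, using OpenCV and SciPy. The kurtosis statistic, the threshold and the histogram size are illustrative assumptions rather than the authors' choices.

    # Illustrative only: activity regions from accumulated flow statistics, plus an
    # earth mover's distance between hue histograms of two image patches.
    import cv2
    import numpy as np
    from scipy.stats import kurtosis

    def activity_mask(gray_frames, kurt_thresh=3.0):
        """Boolean per-pixel activity mask from a list of greyscale uint8 frames."""
        mags = []
        for prev, cur in zip(gray_frames[:-1], gray_frames[1:]):
            flow = cv2.calcOpticalFlowFarneback(prev, cur, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mags.append(np.linalg.norm(flow, axis=2))   # per-pixel flow magnitude
        mags = np.stack(mags)                           # shape (T-1, H, W)
        # Pixels with a heavy-tailed magnitude distribution (high kurtosis) are
        # treated as belonging to a region of activity.
        return kurtosis(mags, axis=0) > kurt_thresh

    def colour_emd(patch_a, patch_b, bins=16):
        """Earth mover's distance between hue histograms of two BGR uint8 patches."""
        sigs = []
        for patch in (patch_a, patch_b):
            hue = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)[:, :, 0]
            hist = cv2.calcHist([hue], [0], None, [bins], [0, 180]).flatten()
            hist = hist / max(hist.sum(), 1e-9)
            centres = np.arange(bins, dtype=np.float32) + 0.5
            sigs.append(np.column_stack([hist, centres]).astype(np.float32))
        distance, _, _ = cv2.EMD(sigs[0], sigs[1], cv2.DIST_L2)
        return distance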

    Accuracy assessment of Tri-plane B-mode ultrasound for non-invasive 3D kinematic analysis of knee joints

    BACKGROUND Currently the clinical standard for measuring the motion of the bones in knee joints with sufficient precision involves implanting tantalum beads into the bones. These beads appear as high-intensity features in radiographs and can be used for precise kinematic measurements. This procedure imposes a strong coupling between accuracy and invasiveness. In this paper, a tri-plane B-mode ultrasound (US) based non-invasive approach is proposed for use in kinematic analysis of knee joints in 3D space. METHODS The 3D analysis is performed using image processing procedures on the 2D US slices. The novelty of the proposed procedure and its applicability to the unconstrained 3D kinematic analysis of knee joints is outlined. An error analysis for establishing the method's feasibility is included for different artificial compositions of a knee joint phantom. In-vivo and in-vitro scans are presented to demonstrate that US scans reveal enough anatomical detail, which further supports the experimental setup based on knee bone phantoms. RESULTS The error between the displacements measured by registration of the US image slices and the true displacements of the respective slices, measured using the precision mechanical stages of the experimental apparatus, is evaluated for translation and rotation in two simulated environments. The means and standard deviations of the errors are reported in tabular form; the method provides an average measurement precision of better than 0.1 mm for translations and 0.1 degrees for rotations. CONCLUSION In this paper, we have presented a novel non-invasive approach to measuring the motion of the bones in a knee using tri-plane B-mode ultrasound and image registration. In our study, the image registration method determines the position of bony landmarks relative to a B-mode ultrasound sensor array with sub-pixel accuracy. The advantages of the proposed system over previous techniques are that it is non-invasive, does not require ionizing radiation and can be used conveniently if miniaturized. This work has been supported by the School of Engineering & IT, UNSW Canberra, under a Research Publication Fellowship.
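
    As an illustration of the registration-versus-ground-truth error analysis described above, the sketch below estimates the in-plane translation between two ultrasound slices with sub-pixel phase correlation and compares it against the displacement applied by the precision stage. The registration technique used in the paper may differ; the function, its parameters and the choice of phase correlation are assumptions made for the example.

    # Hedged sketch: sub-pixel translation estimate between two US slices compared
    # with the known displacement applied by a mechanical stage.
    import cv2
    import numpy as np

    def translation_error(slice_before, slice_after, true_shift_mm, mm_per_pixel):
        """Registration error in mm for a pure in-plane translation.

        `slice_before` and `slice_after` are 2D greyscale ultrasound images,
        `true_shift_mm` is the (dx, dy) displacement applied by the stage and
        `mm_per_pixel` is the in-plane image resolution.
        """
        (dx_px, dy_px), _response = cv2.phaseCorrelate(
            np.float32(slice_before), np.float32(slice_after))
        estimated_mm = np.array([dx_px, dy_px]) * mm_per_pixel
        return float(np.linalg.norm(estimated_mm - np.asarray(true_shift_mm, dtype=float)))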

    Pedestrian Models for Autonomous Driving Part I: Low-Level Models, from Sensing to Tracking

    Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and in off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians therefore requires modelling their probable future behaviour as well as detecting and tracking them. This narrative review article is Part I of a pair that together survey the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychology models, from the perspective of an AV designer. This self-contained Part I covers the lower levels of the stack, from sensing, through detection and recognition, up to tracking of pedestrians. Technologies at these levels are found to be mature and available as foundations for use in higher-level systems such as behaviour modelling, prediction and interaction control.
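
    To make the tracking level of this stack concrete, the sketch below shows a constant-velocity Kalman filter smoothing noisy pedestrian centroid detections in image coordinates. Real AV trackers handle multiple objects, occlusion and data association; the time step and noise covariances here are illustrative assumptions, not values from the review.

    # Hedged sketch: single-target constant-velocity Kalman filter over 2D detections.
    import numpy as np

    class CentroidKalmanTracker:
        def __init__(self, first_detection, dt=0.1):
            # State: [x, y, vx, vy]; start at the first detection with zero velocity.
            self.x = np.array([*first_detection, 0.0, 0.0])
            self.P = np.eye(4) * 10.0
            self.F = np.array([[1, 0, dt, 0],
                               [0, 1, 0, dt],
                               [0, 0, 1,  0],
                               [0, 0, 0,  1]], dtype=float)
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], dtype=float)
            self.Q = np.eye(4) * 0.01   # process noise (assumed)
            self.R = np.eye(2) * 1.0    # measurement noise (assumed)

        def step(self, detection=None):
            """Predict one step; update with `detection` = (x, y) if the detector fired."""
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            if detection is not None:
                z = np.asarray(detection, dtype=float)
                y = z - self.H @ self.x                      # innovation
                S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
                K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
                self.x = self.x + K @ y
                self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[:2]                                # filtered position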

    Automatic Video-based Analysis of Human Motion
