36 research outputs found

    Multiple Target, Multiple Type Filtering in the RFS Framework

    A Multiple Target, Multiple Type Filtering (MTMTF) algorithm is developed using Random Finite Set (RFS) theory. First, we extend the standard Probability Hypothesis Density (PHD) filter to multiple types of targets, each with distinct detection properties, obtaining a multiple target, multiple type filter, the N-type PHD filter (N ≥ 2), for handling confusions among target types. In this approach, we assume that there will be confusions between detections, i.e. clutter arises not only from background false positives but also from target confusions. Then, under the assumptions of Gaussianity and linearity, we extend the Gaussian mixture (GM) implementation of the standard PHD filter to the proposed N-type PHD filter, termed the N-type GM-PHD filter. Furthermore, we analyse simulation results for tracking sixteen targets of four different types with a four-type (quad) GM-PHD filter as a typical example, and compare it with four independent GM-PHD filters using the Optimal Subpattern Assignment (OSPA) metric. The comparison shows the improved performance of our strategy, which accounts for target confusions by efficiently discriminating among them.
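The OSPA metric used in the comparison above can be sketched as follows, a minimal brute-force version for small point sets (the function name and the cut-off c = 10, order p = 2 defaults are illustrative assumptions, not the paper's configuration):

```python
from itertools import permutations
import math

def ospa(X, Y, c=10.0, p=2):
    """Optimal Subpattern Assignment (OSPA) distance between two finite
    sets of 2-D points, with cut-off c and order p.  Uses brute-force
    optimal assignment over permutations (fine for small sets)."""
    if len(X) > len(Y):
        X, Y = Y, X                                # ensure |X| <= |Y|
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0                                 # both sets empty
    d_c = lambda a, b: min(c, math.dist(a, b))     # cut-off Euclidean distance
    # cheapest assignment of the smaller set onto the larger one
    best = min(sum(d_c(x, y) ** p for x, y in zip(X, perm))
               for perm in permutations(Y, m))
    # unassigned elements of the larger set pay the cut-off penalty c
    return ((best + (c ** p) * (n - m)) / n) ** (1.0 / p)
```

The cardinality-penalty term is what makes OSPA sensitive to missed or spurious tracks, which is why it is a natural choice for comparing multi-target filters.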

    Augmented particle filtering for efficient visual tracking

    Copyright © 2005 IEEE. Visual tracking is one of the key tasks in computer vision. The particle filter algorithm has been used extensively to tackle this problem because of its flexibility. However, the conventional particle filter uses the system transition as the proposal distribution, frequently resulting in poor priors for the filtering step. The main reason is that it is difficult, if not impossible, to model the target's motion accurately, and such a proposal distribution does not take the current observations into account. Devising a satisfactory proposal distribution for the particle filter is not a trivial task. In this paper we advance a general augmented particle filtering framework for designing the optimal proposal distribution. The essential idea is to augment a second filter's estimate into the proposal distribution design. We then show that several existing improved particle filters can be rationalised within this general framework. Based on this framework we further propose variant algorithms for robust and efficient visual tracking. Experiments indicate that the augmented particle filters are more efficient and robust than the conventional particle filter. Chunhua Shen; M.J. Brooks; A. van den Hengel
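The augmentation idea, drawing part of each proposal from a second filter's estimate rather than from the system transition alone, can be sketched as a mixture proposal. This 1-D sketch, with a hypothetical `augmented_proposal` function and a Gaussian auxiliary component, illustrates the concept rather than the paper's exact construction:

```python
import random

def augmented_proposal(particles, transition, aux_estimate, mix=0.5, aux_std=1.0):
    """Draw new particles from a mixture proposal: with probability `mix`
    sample from the system-transition prior, otherwise sample around an
    auxiliary filter's estimate (e.g. a Kalman filter or detector output).
    Hypothetical 1-D sketch of the augmentation idea."""
    new = []
    for x in particles:
        if random.random() < mix:
            new.append(transition(x))                      # prior proposal
        else:
            new.append(random.gauss(aux_estimate, aux_std))  # augmented part
    return new
```

Because part of the proposal is centred on an observation-driven estimate, particles land closer to the true state when the motion model is poor, which is the efficiency gain the abstract describes.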

    Recognition of Deictic Gestures for Wearable Computing


    Detecting human heads with their orientations

    We propose a two-step method for detecting human heads together with their orientations. In the first step, the method employs an ellipse as the contour model of human-head appearances to deal with a wide variety of appearances, and evaluates the ellipse to detect possible human heads. In the second step, the method focuses on features inside the ellipse, such as the eyes, mouth, or cheeks, to model facial components. It evaluates not only the components themselves but also their geometric configuration, both to eliminate false positives from the first step and to estimate face orientations. Our intensive experiments show that our method can correctly and stably detect human heads with their orientations.
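The two-step method can be sketched as a simple detection cascade; `ellipse_score` and `face_score` below are hypothetical stand-ins for the paper's contour and facial-component evaluations, and the thresholds are illustrative:

```python
def detect_heads(candidates, ellipse_score, face_score, t1=0.5, t2=0.5):
    """Two-step cascade: keep candidates whose elliptical contour score
    passes t1, then verify the inner facial-component configuration with
    face_score, which also returns an estimated orientation.
    Hypothetical interfaces sketching the coarse-to-fine idea."""
    heads = []
    for cand in candidates:
        if ellipse_score(cand) < t1:            # step 1: contour check
            continue
        score, orientation = face_score(cand)   # step 2: inner features
        if score >= t2:
            heads.append((cand, orientation))
    return heads
```

The cascade structure is what lets the cheap contour test prune most candidates before the more expensive component-configuration test runs.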

    Biomechanics of running: An overview on gait cycle

    This review article summarizes the literature on running gait. It describes the characteristics of running gait and the running gait cycle, and explains running anatomy in relation to lower- and upper-body mechanics, the contribution of muscles, and the joints involved in the running gait cycle. The concepts of running kinematics and kinetics describe motion characteristics such as position, velocity, acceleration, and the forces applied during the running cycle. Techniques for running gait analysis, such as motion analysis, force plate analysis, and electromyography, are also discussed.

    Linearized Motion Estimation for Articulated Planes


    Vision-based techniques for gait recognition

    Global security concerns have prompted a proliferation of video surveillance devices. Intelligent surveillance systems seek to discover possible threats automatically and raise alerts, and being able to identify the surveyed object can help determine its threat level. The current generation of devices provides digital video data to be analysed for time-varying features to assist in the identification process. Commonly, people queue up to access a facility and approach a video camera in full frontal view. In this environment a variety of biometrics are available, for example gait, which includes temporal features such as stride period and can be measured unobtrusively at a distance. The video data will also include face features, which are short-range biometrics. In this way, one can combine biometrics naturally using one set of data. In this paper we survey current techniques for gait recognition and modelling, together with the environments in which the research was conducted. We also discuss in detail the issues arising from deriving gait data, such as perspective and occlusion effects, together with the associated computer vision challenges of reliably tracking human movement. After highlighting these issues and challenges of gait processing, we discuss frameworks that combine gait with other biometrics. We then provide motivation for a novel paradigm in biometrics-based human recognition: the use of the fronto-normal view of gait as a far-range biometric combined with biometrics operating at a near distance.

    Estimating Human Pose with Flowing Puppets

    We address the problem of upper-body human pose estimation in uncontrolled monocular video sequences, without manual initialization. Most current methods focus on isolated video frames and often fail to correctly localize arms and hands. Inferring pose over a video sequence is advantageous because the poses of people in adjacent frames vary smoothly owing to the nature of human and camera motion. To exploit this, previous methods have used prior knowledge about distinctive actions, or generic temporal priors combined with static image likelihoods, to track people in motion. Here we take a different approach based on a simple observation: information about how a person moves from frame to frame is present in the optical flow field. We develop an approach for tracking articulated motions that "links" articulated shape models of people in adjacent frames through the dense optical flow. Key to this approach is a 2D shape model of the body that we use to compute how the body moves over time. The resulting "flowing puppets" provide a way of integrating image evidence across frames to improve pose inference. We apply our method to a challenging dataset of TV video sequences and show state-of-the-art performance.
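The flow-based linking step can be sketched as shifting each point of the body model by the dense flow vector at its pixel. The `propagate_pose` function and the `flow[y][x] = (dx, dy)` layout below are illustrative assumptions, a simplified version of the idea rather than the authors' implementation:

```python
def propagate_pose(joints, flow):
    """Shift each 2-D joint by the dense optical-flow vector at its
    (rounded) pixel location, linking the pose model in one frame to a
    prediction in the next.  flow[y][x] holds a (dx, dy) pair."""
    out = []
    for (x, y) in joints:
        xi, yi = int(round(x)), int(round(y))  # nearest flow sample
        dx, dy = flow[yi][xi]
        out.append((x + dx, y + dy))
    return out
```

Predictions produced this way give the next frame's pose inference a strong, observation-driven starting point, which is how the flow field carries information about motion between frames.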