
    Tracking Table Tennis Balls in Real Match Scenes for Umpiring Applications

    Judging the legitimacy of table tennis services presents many challenges where technology can be judiciously applied to enhance decision-making. This paper presents a purpose-built system to automatically detect and track the ball during table tennis services, enabling precise real-time judgment of their legitimacy. The system comprises a suite of algorithms that adaptively exploit spatial and temporal information from real match video sequences, which are generally characterised by high object motion allied with object blurring and occlusion. Experimental results on a diverse set of table tennis test sequences corroborate the system's performance in facilitating consistently accurate and efficient decision-making over the validity of a service.

    A photogrammetric approach for real-time 3D localization and tracking of pedestrians in monocular infrared imagery

    Target tracking within conventional video imagery poses a significant challenge that is increasingly being addressed via complex algorithmic solutions. The complexity of this problem can be fundamentally attributed to the ambiguity between the actual 3D scene position of a tracked object and its observed position in 2D image space. We propose an approach that challenges the current trend towards complex tracking solutions by addressing this fundamental ambiguity head-on. In contrast to prior work in the field, we leverage the key advantages of thermal-band infrared (IR) imagery for pedestrian localization to show that the robust localization and foreground target separation afforded by such imagery facilitate accurate 3D position estimation to within the error bounds of conventional Global Positioning System (GPS) positioning. This work investigates the accuracy of classical photogrammetry, within the context of current target detection and classification techniques, as a means of recovering the true 3D position of pedestrian targets within the scene. Based on photogrammetric estimation of target position, we then illustrate the efficiency of regular Kalman filter based tracking operating on actual 3D pedestrian scene trajectories. We present both a statistical and experimental analysis of the associated errors of this approach, in addition to real-time 3D pedestrian tracking using monocular infrared (IR) imagery from a thermal-band camera. © 2014 Society of Photo-Optical Instrumentation Engineers (SPIE).
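The regular Kalman filter based tracking on 3D scene trajectories described above can be pictured with a minimal constant-velocity sketch. The state layout, time step and noise magnitudes below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

class Kalman3D:
    """Constant-velocity Kalman filter over 3D scene coordinates (a sketch,
    assuming photogrammetry already supplies 3D position measurements)."""

    def __init__(self, dt=1.0, q=0.01, r=0.5):
        n = 6  # state: [x, y, z, vx, vy, vz]
        self.F = np.eye(n)
        self.F[:3, 3:] = dt * np.eye(3)                    # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.Q = q * np.eye(n)                             # process noise (assumed)
        self.R = r * np.eye(3)                             # measurement noise (assumed)
        self.x = np.zeros(n)
        self.P = np.eye(n)

    def step(self, z):
        # Predict with the motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with a photogrammetric 3D position measurement z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]  # filtered 3D position estimate
```

Feeding the filter noisy 3D fixes of a pedestrian walking in a straight line, the estimate settles onto the true trajectory once the velocity components converge.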

    Real-time classification of vehicle types within infra-red imagery

    Real-time classification of vehicles into sub-category types poses a significant challenge within infra-red imagery due to the high levels of intra-class variation in thermal vehicle signatures caused by aspects of design, current operating duration and ambient thermal conditions. Despite these challenges, infra-red sensing offers significant generalized target object detection advantages in terms of all-weather operation and invariance to visual camouflage techniques. This work investigates the accuracy of a number of real-time object classification approaches for this task within the wider context of an existing initial object detection and tracking framework. Specifically, we evaluate the use of traditional feature-driven bag-of-visual-words and histogram-of-oriented-gradients classification approaches against modern convolutional neural network architectures. Furthermore, we use classical photogrammetry, within the context of current target detection and classification techniques, as a means of approximating 3D target position within the scene based on this vehicle type classification. Based on photogrammetric estimation of target position, we then illustrate the use of regular Kalman filter based tracking operating on actual 3D vehicle trajectories. Results are presented using a conventional thermal-band infra-red (IR) sensor arrangement where targets are tracked over a range of evaluation scenarios.
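The classical photogrammetry used above to approximate 3D target position can be sketched with similar triangles, assuming a level camera at known height imaging targets on a flat ground plane. All parameter values (focal length, principal point, camera height) are hypothetical, not calibration data from the paper.

```python
def ground_position(u, v, f=800.0, cx=320.0, cy=240.0, cam_height=4.5):
    """Back-project the image point (u, v) of a target's ground contact onto
    the ground plane, for a level pinhole camera cam_height metres up.
    Returns (lateral offset X, depth Z) in metres. Sketch only: assumes a
    flat scene and zero camera pitch/roll."""
    if v <= cy:
        raise ValueError("point at or above the horizon: no ground intersection")
    depth = f * cam_height / (v - cy)   # similar triangles: h / Z = (v - cy) / f
    lateral = depth * (u - cx) / f      # X / Z = (u - cx) / f
    return lateral, depth
```

For example, with these assumed parameters a contact point 80 pixels below the principal point maps to a depth of 800 × 4.5 / 80 = 45 m; the 3D fixes produced this way are what the Kalman filter then tracks.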

    An evolutionary particle filter with the immune genetic algorithm for intelligent video target tracking

    The particle filter algorithm is widely used for target tracking in video sequences, which is of great importance for intelligent surveillance applications. However, there is still much room for improvement, notably regarding the so-called “sample impoverishment”. This is brought about by re-sampling, which aims to avoid particle degeneracy, and has thus become an inherent shortcoming of the particle filter. In order to solve the problem of sample impoverishment, increase the number of meaningful particles and ensure the diversity of the particle set, an evolutionary particle filter with the immune genetic algorithm (IGA) for target tracking is proposed, adding IGA in front of the re-sampling process to increase particle diversity. Particles are regarded as the antibodies of the immune system, and the state of the target being tracked is regarded as the external invading antigen. Through the crossover and mutation process, the immune system produces a large number of new antibodies (particles), so the new particles can better approximate the true state by exploring new areas. Regulatory mechanisms of antibodies, such as promotion and suppression, ensure the diversity of the particle set. In the proposed algorithm, the particle set optimized by IGA can better express the true state of the target, and the number of meaningful particles can be increased significantly. The effectiveness and robustness of the proposed particle filter are verified by target tracking experiments. Simulation results show that the proposed particle filter outperforms the standard one in particle diversity and efficiency. The proposed algorithm can easily be extended to multiple-object tracking problems with occlusions.
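The core idea, mutating particles before re-sampling so the re-sampled set keeps its diversity, can be sketched on a 1D state. The paper's IGA also includes crossover and antibody promotion/suppression; here a single Gaussian mutation kept only where it raises the likelihood stands in for that machinery, as an illustrative simplification rather than the paper's method.

```python
import numpy as np

def pf_step(particles, observation, rng, motion_std=0.5, obs_std=1.0, mut_std=0.3):
    """One step of a toy 1D particle filter with a mutation stage inserted
    before re-sampling (all noise parameters are assumptions)."""
    # 1. Propagate each particle with a random-walk motion model.
    particles = particles + rng.normal(0, motion_std, particles.shape)

    # 2. Weight particles by a Gaussian observation likelihood.
    def lik(p):
        return np.exp(-0.5 * ((p - observation) / obs_std) ** 2)
    w = lik(particles)

    # 3. Mutation: jitter each particle, keep the jitter where it improves
    #    the likelihood, so fresh high-weight hypotheses enter the set.
    proposal = particles + rng.normal(0, mut_std, particles.shape)
    particles = np.where(lik(proposal) > w, proposal, particles)
    w = lik(particles)
    w = w / w.sum()

    # 4. Systematic re-sampling on the mutated, re-weighted set.
    positions = (np.arange(len(particles)) + rng.uniform()) / len(particles)
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[idx]
```

Run repeatedly against a fixed observation, the particle cloud concentrates around the observed state while the mutation stage keeps injecting distinct hypotheses instead of cloning a few survivors.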

    Visual tracking: detecting and mapping occlusion and camouflage using process-behaviour charts

    Visual tracking aims to identify a target object in each frame of an image sequence. It presents an important scientific problem since the human visual system is capable of tracking moving objects in a wide variety of situations. Artificial visual tracking systems also find practical application in areas such as visual surveillance, robotics, biomedical image analysis, medicine and the media. However, automatic visual tracking algorithms suffer from two common problems: occlusion and camouflage. Occlusion arises when another object, usually with different features, comes between the camera and the target. Camouflage occurs when an object with similar features lies behind the target and makes the target invisible from the camera’s point of view. Either of these disruptive events can cause a tracker to lose its target and fail. This thesis focuses on the detection of occlusion and camouflage in a particle-filter based tracking algorithm. Particle filters are commonly used in tracking. Each particle represents a single hypothesis as to the target’s state, with some probability of being correct. The collection of particles tracking a target in each frame of an image sequence is called a particle set. The configuration of that particle set provides vital information about the state of the tracker. The work detailed in this thesis presents three innovative approaches to detecting occlusion and/or camouflage during tracking by evaluating the fluctuating behaviours of the particle set and detecting anomalies using a graphical statistical tool called a process-behaviour chart. The information produced by the process-behaviour chart is then used to map out the boundary of the interfering object, providing valuable information about the viewed environment.
A method based on the medial axis of a novel representation of particle distribution, termed the Particle History Image, was found to perform best over a set of real and artificial test sequences, detecting 90% of occlusion and 100% of camouflage events. Key advantages of the method over previous work in the area are: (1) it is less sensitive to false data and less likely to fire prematurely; (2) it provides a better representation of particle set behaviour by aggregating particles over a longer time period; and (3) the use of a training set to parameterise the process-behaviour charts means that comparisons are made between measurements gathered over extended time periods, improving reliability.
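The process-behaviour chart idea can be sketched in a few lines: control limits are learned from a training run of some particle-set statistic (for example its spatial spread), and later values falling outside the limits are flagged as anomalies, i.e. candidate occlusion or camouflage events. The choice of statistic and the 3-sigma rule below are generic control-chart conventions, not the thesis's exact parameterisation.

```python
import statistics

def fit_limits(training, k=3.0):
    """Learn lower/upper control limits (mean ± k·sigma) from a training
    sequence of the monitored particle-set statistic."""
    mu = statistics.fmean(training)
    sigma = statistics.stdev(training)
    return mu - k * sigma, mu + k * sigma

def out_of_control(values, limits):
    """Return the indices of values falling outside the control limits,
    i.e. frames where the tracker's behaviour looks anomalous."""
    lo, hi = limits
    return [i for i, v in enumerate(values) if v < lo or v > hi]
```

For instance, if the monitored statistic hovers near 10 during training, a later value of 25 lands well outside the ±3-sigma band and is flagged, while ordinary fluctuation passes unremarked.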

    Model-based Behavioural Tracking and Scale Invariant Features in Omnidirectional Matching

    Two classical but crucial and unsolved problems in Computer Vision are treated in this thesis: tracking and matching. The first part of the thesis deals with tracking, studying two of its main difficulties: object representation model drift and total occlusions. The second part considers the problem of point matching between omnidirectional images and between omnidirectional and planar images. Model drift is a major problem in tracking when the object representation model is updated on-line. In this thesis, we have developed a visual tracking algorithm that simultaneously tracks and builds a model of the tracked object. The model is computed using an incremental PCA algorithm that allows samples to be weighted. Thus, model drift is avoided by weighting samples added to the model according to a measure of confidence in the tracked patch. Furthermore, we have also introduced spatial weights for weighting pixels and increasing tracking accuracy in some regions of the tracked object. Total occlusions are another major problem in visual tracking. Indeed, a total occlusion hides the tracked object completely, making visual information unavailable for tracking. For handling this kind of situation, common in unconstrained scenarios, the Model cOrruption and Total Occlusion Handling (MOTOH) framework is introduced. In this framework, in addition to the model drift avoidance scheme described above, a total occlusion detection procedure is introduced. When a total occlusion is detected, the tracker switches to behavioural-based tracking, where instead of guiding the tracker with visual information, a behavioural model of motion is employed. Finally, a Scale Invariant Feature Transform (SIFT) for omnidirectional images is developed. The proposed algorithm generates two types of local descriptors, Local Spherical Descriptors and Local Planar Descriptors.
The former enable point matching between omnidirectional images; the latter enable the same matching process between omnidirectional and planar images. Furthermore, a planar-to-spherical mapping is introduced and an algorithm for its estimation is given. This mapping allows objects to be extracted from an omnidirectional image given their SIFT descriptors in a planar image.
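The planar-to-spherical mapping underlying such omnidirectional matching can be pictured as back-projecting a planar pixel through a pinhole onto the unit sphere and reading off spherical angles. The focal length and principal point below are illustrative stand-ins, not the estimated mapping from the thesis.

```python
import math

def pixel_to_sphere(u, v, f=500.0, cx=320.0, cy=240.0):
    """Map a planar image point (u, v) to spherical angles (theta, phi),
    with theta the polar angle from the optical axis and phi the azimuth.
    Sketch only: assumes an ideal pinhole with hypothetical calibration."""
    # Ray direction of the pixel in the camera frame.
    x, y, z = (u - cx) / f, (v - cy) / f, 1.0
    # Normalise onto the unit sphere.
    norm = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / norm, y / norm, z / norm
    theta = math.acos(z)        # angle from the optical axis
    phi = math.atan2(y, x)      # azimuth around the axis
    return theta, phi
```

The principal point maps to theta = 0 (the optical axis), and a pixel one focal length to the right of it maps to theta = 45 degrees, which matches the pinhole geometry.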