
    Message Passing and Hierarchical Models for Simultaneous Tracking and Registration


    Advanced signal processing techniques for multi-target tracking

    The multi-target tracking problem essentially involves the recursive joint estimation of the states of an unknown and time-varying number of targets present in a tracking scene, given a series of observations. The problem is made more challenging because the sequence of observations is noisy and can be corrupted by missed detections and false alarms (clutter), and because target-originated observations are indistinguishable from clutter. Furthermore, whether the targets of interest are point or extended (in terms of spatial extent) poses additional technical challenges. The random finite sets approach provides an elegant and rigorous framework for handling the multi-target tracking problem. In a random finite sets formulation, both the multi-target state and the multi-target observation are modelled as finite-set-valued random variables, that is, random variables that are random in both the number of elements and the values of the elements themselves. Compared to other approaches, the random finite sets approach has the desirable characteristic of being free of explicit data association prior to tracking, and a companion calculus, known as finite set statistics, is available for working with random finite sets. In this thesis, advanced signal processing techniques are employed to enhance existing and develop new random finite sets based multi-target tracking algorithms for both point and extended targets, with the aim of improving tracking performance in cluttered environments. To this end, firstly, a new and efficient Kalman-gain aided sequential Monte Carlo probability hypothesis density (KG-SMC-PHD) filter and its cardinalised counterpart (KG-SMC-CPHD) are proposed. These filters employ the Kalman-gain approach during the weight update to correct predicted particle states by minimising the mean square error between the estimated measurement and the actual measurement received at a given time, in order to arrive at a more accurate posterior. The technique identifies and selects the particles belonging to a particular target from a given PHD for state correction during weight computation. The proposed SMC-CPHD filter provides a better estimate of the number of targets; besides the improved tracking accuracy, fewer particles are required, and simulation results confirm the improved tracking performance under several evaluation measures. Secondly, the KG-SMC-(C)PHD filters are particle filter (PF) based and, as with PFs, require resampling to avoid the problem of degeneracy. This thesis proposes a new resampling scheme to address a shortcoming of the systematic resampling method, namely its high tendency to resample very low weight particles, especially when a large number of resampled particles is required, which in turn affects state estimation. Thirdly, the KG-SMC-(C)PHD filters proposed in this thesis perform filtering rather than tracking, that is, they provide only point estimates of target states and do not connect those estimates into target trajectories from one time step to the next. A new post-processing step using game theory, named the GTDA method, is proposed as a solution to this filtering-to-tracking problem. The method is employed in the KG-SMC-(C)PHD filters as a post-processing technique and is evaluated using both simulated data and real data obtained with the NI-USRP software defined radio platform in a passive bistatic radar system. Lastly, a new technique for the joint tracking and labelling of multiple extended targets is proposed. To achieve multiple extended target tracking with this technique, models for the target measurement rate, the kinematic component and the target extension are defined and jointly propagated in time under the generalised labelled multi-Bernoulli (GLMB) filter framework, a random finite sets based filter. In particular, a Poisson mixture variational Bayesian (PMVB) model is developed to estimate the measurement rates of multiple extended targets simultaneously, and the target extension is modelled using B-splines. The proposed method is evaluated with various performance metrics in order to demonstrate its effectiveness in tracking multiple extended targets.
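
    To make the Kalman-gain correction step above concrete, here is a minimal sketch. It assumes a linear Gaussian measurement model with an illustrative measurement matrix H, noise covariance R and a single shared predicted covariance P (none of these names or simplifications are taken from the thesis itself): each predicted particle is shifted toward the received measurement by a standard Kalman-gain update.

```python
import numpy as np

def kalman_gain_correct(particles, P, H, R, z):
    """Shift predicted particles toward the received measurement z.

    particles : (N, nx) predicted particle states
    P         : (nx, nx) assumed shared predicted covariance
    H, R      : linear measurement matrix and noise covariance
    """
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    innovations = z - particles @ H.T          # per-particle residuals
    return particles + innovations @ K.T       # corrected particle states

# toy usage: constant-velocity state [position, velocity], position-only sensor
rng = np.random.default_rng(0)
particles = rng.normal([5.0, 1.0], [1.0, 0.3], size=(500, 2))
P = np.diag([1.0, 0.1])
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
corrected = kalman_gain_correct(particles, P, H, R, z=np.array([5.4]))
```

    In the actual filters the corrected states then feed into the PHD weight update; the sketch shows the correction in isolation.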

    Random finite sets in multi-target tracking - efficient sequential MCMC implementation

    Over the last few decades, multi-target tracking (MTT) has proved to be a challenging and attractive research topic. MTT applications span a wide variety of disciplines, including robotics, radar/sonar surveillance, computer vision and biomedical research. The primary focus of this dissertation is to develop an effective and efficient multi-target tracking algorithm that deals with an unknown and time-varying number of targets. The emerging and promising Random Finite Set (RFS) framework provides a rigorous foundation for optimal Bayes multi-target tracking: in contrast to traditional approaches, the collection of individual targets is treated as a set-valued state. The intent of this dissertation is two-fold: first, to assert that the RFS framework is not only a natural, elegant and rigorous foundation, but also leads to practical, efficient and reliable algorithms for Bayesian multi-target tracking; and second, to provide several novel RFS-based tracking algorithms suitable for the specific Track-Before-Detect (TBD) surveillance application. One main contribution of this dissertation is the rigorous derivation and practical implementation of a novel algorithm well suited to multi-target tracking problems with a given cardinality. The proposed Interacting Population-based MCMC particle filter (IP-MCMC-PF) makes use of several Metropolis-Hastings samplers running in parallel, which interact through genetic variation. Another key contribution concerns the design and implementation of two novel algorithms that handle a varying number of targets: the first exploits reversible jumps, while the second is built upon the concepts of labeled RFSs and multiple cardinality hypotheses. The performance of the proposed algorithms is demonstrated in practical scenarios and shown to significantly outperform a conventional multi-target particle filter in terms of track accuracy and consistency. The final contribution seeks to exploit external information to increase the performance of the surveillance system. In multi-target scenarios, kinematic constraints arising from the interaction of targets with their environment or with other targets can restrict target motion. Such motion constraint information is integrated by using a fixed-lag smoothing procedure, named the Knowledge-Based Fixed-Lag Smoother (KB-Smoother). The proposed IP-MCMC-PF/KB-Smoother combination yields enhanced tracking performance.
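
    The interacting-samplers idea can be sketched roughly as below, with the genetic-variation interaction approximated by an occasional coordinate-wise crossover between two chains; the function names and the specific crossover move are illustrative assumptions, not the dissertation's algorithm.

```python
import numpy as np

def interacting_mh(log_post, x0, n_iters, step=0.5, exchange_prob=0.1, rng=None):
    """Several random-walk Metropolis-Hastings chains run side by side,
    occasionally mixing coordinates between two chains as a crude stand-in
    for the genetic-variation interaction described above."""
    rng = np.random.default_rng() if rng is None else rng
    chains = np.array(x0, dtype=float)                 # (M, d) current states
    M, d = chains.shape
    logp = np.array([log_post(x) for x in chains])
    for _ in range(n_iters):
        # independent random-walk MH update for every chain
        props = chains + step * rng.standard_normal((M, d))
        logp_props = np.array([log_post(x) for x in props])
        accept = np.log(rng.random(M)) < logp_props - logp
        chains[accept], logp[accept] = props[accept], logp_props[accept]
        # interaction: propose a crossover child and test it against chain i
        if M > 1 and rng.random() < exchange_prob:
            i, j = rng.choice(M, size=2, replace=False)
            mask = rng.random(d) < 0.5
            child = np.where(mask, chains[i], chains[j])
            logp_child = log_post(child)
            if np.log(rng.random()) < logp_child - logp[i]:
                chains[i], logp[i] = child, logp_child
    return chains

# usage: 8 interacting chains targeting a standard 2-D Gaussian
samples = interacting_mh(lambda x: -0.5 * np.sum(x**2),
                         np.random.randn(8, 2), n_iters=2000)
```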

    Detection-assisted Object Tracking by Mobile Cameras

    Tracking-by-detection is a class of new tracking approaches that exploits recent developments in object detection algorithms. This type of approach performs object detection on each frame and uses data association algorithms to associate new observations with existing targets. Inspired by the core idea of the tracking-by-detection framework, we propose a new framework called detection-assisted tracking, in which the object detection algorithm assists the tracking algorithm only when such help is necessary; object detection, a very time-consuming task, is therefore performed only when needed. The proposed framework is also able to handle complicated scenarios in which cameras are allowed to move and occlusion or multiple similar objects exist. We also port the core component of the proposed framework, the detector, onto embedded smart cameras. Contrary to traditional scenarios where the smart cameras are assumed to be static, we allow the smart cameras to move around in the scene. Because traditional background subtraction methods are not suitable for mobile platforms, where the background changes constantly, our approach employs a histogram of oriented gradients (HOG) object detector for foreground detection, enabling more robust detection on mobile platforms. Advisers: Senem Velipasalar and Mustafa Cenk Gursoy
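
    As one concrete illustration, a HOG foreground detector can be set up with OpenCV's stock pedestrian model; the video path and the tracker_confident flag below are placeholders for the paper's actual tracker and its confidence test, so this is only a hedged sketch of a detection-assisted loop, not the authors' implementation.

```python
import cv2

# OpenCV's built-in HOG descriptor with its default pedestrian SVM
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame):
    """Return pedestrian bounding boxes (x, y, w, h) for one frame."""
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                           padding=(8, 8), scale=1.05)
    return [tuple(int(v) for v in b) for b in boxes]

# detection-assisted loop: run the (expensive) detector only when the
# lightweight tracker is not confident about its current estimate
cap = cv2.VideoCapture("video.mp4")        # hypothetical input video
tracker_confident = False                  # placeholder for the tracker's own test
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if not tracker_confident:
        detections = detect_people(frame)
        # ...hand detections to the data association / tracking step...
cap.release()
```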

    Information-rich Task Allocation and Motion Planning for Heterogeneous Sensor Platforms

    This paper introduces a novel stratified planning algorithm for teams of heterogeneous mobile sensors that maximizes information collection while minimizing resource costs. The main contribution of this work is the scalable unification of effective algorithms for decentralized informative motion planning and decentralized high-level task allocation. We present the Information-rich Rapidly-exploring Random Tree (IRRT) algorithm, which is amenable to very general and realistic mobile sensor constraint characterizations, and review the Consensus-Based Bundle Algorithm (CBBA), offering several enhancements to the existing algorithms to embed information collection at each phase of the planning process. The proposed framework is validated with simulation results for networks of mobile sensors performing multi-target localization missions. (United States Air Force Office of Scientific Research, Grant FA9550-08-1-0086; United States Air Force Office of Scientific Research Multidisciplinary University Research Initiative, FA9550-08-1-0356)
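
    The informative-planning side can be caricatured with a toy RRT-style extension step in which each new node accumulates a simple information proxy (here, how many hypothesised targets fall inside a nominal sensor footprint); the real IRRT uses information-theoretic measures and full vehicle and sensor constraint models, so everything named below is an illustrative assumption.

```python
import numpy as np

def info_gain(pos, targets, sensor_range=5.0):
    """Toy information proxy: number of hypothesised targets within range."""
    return int(np.sum(np.linalg.norm(targets - pos, axis=1) < sensor_range))

def irrt_step(nodes, info, targets, bounds, step=1.0, rng=None):
    """One RRT-style extension: sample a point, grow from the nearest node,
    and accumulate the information collected along the branch."""
    rng = np.random.default_rng() if rng is None else rng
    sample = rng.uniform(bounds[0], bounds[1])
    i = int(np.argmin(np.linalg.norm(nodes - sample, axis=1)))
    direction = sample - nodes[i]
    new = nodes[i] + step * direction / (np.linalg.norm(direction) + 1e-9)
    nodes = np.vstack([nodes, new])
    info = np.append(info, info[i] + info_gain(new, targets))
    return nodes, info

# usage: grow a small tree in a 10 x 10 workspace and pick the best leaf
targets = np.array([[4.0, 4.0], [8.0, 1.0]])     # hypothesised target positions
nodes, info = np.zeros((1, 2)), np.zeros(1)
for _ in range(200):
    nodes, info = irrt_step(nodes, info, targets,
                            bounds=(np.zeros(2), 10.0 * np.ones(2)))
best_leaf = nodes[int(np.argmax(info))]
```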

    Ensemble Transport Adaptive Importance Sampling

    Markov chain Monte Carlo methods are a powerful and commonly used family of numerical methods for sampling from complex probability distributions. As applications of these methods grow in size and complexity, the need for efficient methods increases. In this paper, we present a particle ensemble algorithm. At each iteration, an importance sampling proposal distribution is formed using an ensemble of particles; a stratified sample is taken from this distribution and weighted under the posterior, and a state-of-the-art ensemble transport resampling method is then used to create an evenly weighted sample ready for the next iteration. We demonstrate that this ensemble transport adaptive importance sampling (ETAIS) method outperforms MCMC methods with equivalent proposal distributions for low-dimensional problems, and in fact shows better-than-linear improvements in convergence rates with respect to the number of ensemble members. We also introduce a new resampling strategy, multinomial transformation (MT), which, while not as accurate as the ensemble transport resampler, is substantially less costly for large ensemble sizes and can be used in conjunction with ETAIS for complex problems. We also show how algorithmic parameters of the mixture proposal can be quickly tuned to optimise performance. In particular, we demonstrate the methodology's superior sampling for multimodal problems, such as those arising in inference for mixture models, and for problems with expensive likelihoods requiring the solution of a differential equation, for which speed-ups of orders of magnitude are demonstrated. Likelihood evaluations of the ensemble can be computed in a distributed manner, suggesting that this methodology is a good candidate for parallel Bayesian computations.
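
    A hedged sketch of one iteration of the adaptive importance sampling scheme described above, with an equal-weight Gaussian mixture proposal centred on the current ensemble and plain multinomial resampling standing in for the ensemble transport resampler; the function names and parameter choices are assumptions for illustration only.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def etais_iteration(ensemble, log_post, proposal_cov, rng=None):
    """One adaptive importance sampling step driven by the current ensemble."""
    rng = np.random.default_rng() if rng is None else rng
    M, d = ensemble.shape
    # stratified-by-component draw: one sample from each mixture component
    samples = ensemble + rng.multivariate_normal(np.zeros(d), proposal_cov, size=M)
    # log of the mixture proposal density q(x) = (1/M) sum_j N(x; ensemble_j, C)
    comp = np.array([multivariate_normal.logpdf(samples, mean=m, cov=proposal_cov)
                     for m in ensemble])                 # (M components, M samples)
    log_q = logsumexp(comp, axis=0) - np.log(M)
    # importance weights under the (unnormalised) posterior
    log_w = np.array([log_post(x) for x in samples]) - log_q
    w = np.exp(log_w - logsumexp(log_w))
    # multinomial resampling (stand-in for the ensemble transport resampler)
    return samples[rng.choice(M, size=M, p=w)]

# usage on a bimodal 2-D target, where a single MCMC chain tends to get stuck
def log_post(x):
    return np.logaddexp(multivariate_normal.logpdf(x, mean=[-3.0, 0.0]),
                        multivariate_normal.logpdf(x, mean=[3.0, 0.0]))

ensemble = np.random.randn(100, 2)
for _ in range(50):
    ensemble = etais_iteration(ensemble, log_post, 0.5 * np.eye(2))
```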