8 research outputs found

    Transferability of Convolutional Neural Networks in Stationary Learning Tasks

    Recent advances in hardware and big data acquisition have accelerated the development of deep learning techniques. For an extended period of time, increasing model complexity has led to performance improvements for various tasks. However, this trend is becoming unsustainable and there is a need for alternative, computationally lighter methods. In this paper, we introduce a novel framework for efficient training of convolutional neural networks (CNNs) for large-scale spatial problems. To accomplish this we investigate the properties of CNNs for tasks where the underlying signals are stationary. We show that a CNN trained on small windows of such signals achieves nearly identical performance on much larger windows without retraining. This claim is supported by our theoretical analysis, which provides a bound on the performance degradation. Additionally, we conduct a thorough experimental analysis on two tasks: multi-target tracking and mobile infrastructure on demand. Our results show that the CNN is able to tackle problems with many hundreds of agents after being trained with fewer than ten. Thus, CNN architectures provide solutions to these problems at previously computationally intractable scales. Comment: 14 pages, 7 figures, for associated code see https://github.com/damowerko/mt
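    The key property behind this kind of transferability is that a fully convolutional network has no fixed-size dense layers, so the same trained weights can be applied to a window of any size. Below is a minimal PyTorch sketch of that behaviour; the architecture and window sizes are illustrative, not the paper's.

```python
# Minimal sketch (not the paper's exact architecture): a fully convolutional
# network can be evaluated on inputs of any spatial size with the same weights.
import torch
import torch.nn as nn

class WindowCNN(nn.Module):
    def __init__(self, in_channels=1, hidden=16, out_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, out_channels, kernel_size=1),  # per-location output
        )

    def forward(self, x):
        return self.net(x)

model = WindowCNN()
small = torch.randn(1, 1, 16, 16)    # small training window
large = torch.randn(1, 1, 256, 256)  # much larger deployment window
print(model(small).shape)  # torch.Size([1, 1, 16, 16])
print(model(large).shape)  # torch.Size([1, 1, 256, 256]) -- same weights, no retraining
```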

    Acoustic Speaker Localization with Strong Reverberation and Adaptive Feature Filtering with a Bayes RFS Framework

    The thesis investigates the challenges of speaker localization in the presence of strong reverberation, multi-speaker tracking, and multi-feature multi-speaker state filtering, using sound recordings from microphones. Novel reverberation-robust speaker localization algorithms are derived from the signal and room acoustics models. A multi-speaker tracking filter and a multi-feature multi-speaker state filter are developed based upon the generalized labeled multi-Bernoulli random finite set framework. Experiments and comparative studies have verified and demonstrated the benefits of the proposed methods.
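    For background, the classical reverberation-robust building block for microphone-pair localization is GCC-PHAT, which whitens the cross-spectrum so that the time-difference-of-arrival peak is driven by phase rather than by the reverberant magnitude. The sketch below shows that generic baseline only, not the specific algorithms derived in the thesis.

```python
# GCC-PHAT: a classical reverberation-robust TDOA estimator between two microphones.
# Generic background, not the thesis's algorithm; all names here are illustrative.
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None, interp=16):
    """Return the estimated time difference of arrival (seconds) of sig relative to ref."""
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15          # PHAT weighting: keep phase, discard magnitude
    cc = np.fft.irfft(R, n=interp * n)
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)

fs = 16000
ref = np.random.randn(fs)
sig = np.roll(ref, 40)              # simulate a 40-sample (2.5 ms) delay
print(gcc_phat(sig, ref, fs))       # ~0.0025 s
```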

    Advanced signal processing techniques for multi-target tracking

    The multi-target tracking problem essentially involves the recursive joint estimation of the states of an unknown and time-varying number of targets present in a tracking scene, given a series of observations. This problem becomes more challenging because the sequence of observations is noisy and can become corrupted due to missed detections and false alarms/clutter. Additionally, the detected observations are indistinguishable from clutter. Furthermore, whether the target(s) of interest are point or extended (in terms of spatial extent) poses even more technical challenges. An approach known as random finite sets provides an elegant and rigorous framework for handling the multi-target tracking problem. With a random finite set formulation, both the multi-target states and multi-target observations are modelled as finite-set-valued random variables, that is, random variables which are random in both the number of elements and the values of the elements themselves. Furthermore, compared to other approaches, the random finite set approach possesses the desirable characteristic of being free of explicit data association prior to tracking. In addition, a framework known as finite set statistics is available for dealing with random finite sets. In this thesis, advanced signal processing techniques are employed to enhance existing and develop new random finite set based multi-target tracking algorithms for both point and extended targets, with the aim of improving tracking performance in cluttered environments. To this end, firstly, a new and efficient Kalman-gain aided sequential Monte Carlo probability hypothesis density (KG-SMC-PHD) filter and a cardinalised particle probability hypothesis density (KG-SMC-CPHD) filter are proposed. These filters employ the Kalman-gain approach during the weight update to correct predicted particle states by minimising the mean square error between the estimated measurement and the actual measurement received at a given time, in order to arrive at a more accurate posterior. This technique identifies and selects those particles belonging to a particular target from a given PHD for state correction during weight computation. The proposed SMC-CPHD filter provides a better estimate of the number of targets. Besides the improved tracking accuracy, fewer particles are required in the proposed approach. Simulation results confirm the improved tracking performance when evaluated with different measures. Secondly, the KG-SMC-(C)PHD filters are particle filter (PF) based and, as with PFs, they require a process known as resampling to avoid the problem of degeneracy. This thesis proposes a new resampling scheme to address a problem with the systematic resampling method, which has a high tendency to resample very low-weight particles, especially when a large number of resampled particles is required, which in turn affects state estimation. Thirdly, the KG-SMC-(C)PHD filters proposed in this thesis perform filtering rather than tracking; that is, they provide only point estimates of target states but do not provide connected estimates of target trajectories from one time step to the next. A new post-processing step using game theory is proposed as a solution to this filtering-tracking problem. This approach was named the GTDA method.
This method was employed in the KG-SMC-(C)PHD filter as a post-processing technique and was evaluated using both simulated and real data obtained with the NI-USRP software-defined radio platform in a passive bistatic radar system. Lastly, a new technique for the joint tracking and labelling of multiple extended targets is proposed. To achieve multiple extended target tracking using this technique, models for the target measurement rate, kinematic component and target extension are defined and jointly propagated in time under the generalised labelled multi-Bernoulli (GLMB) filter framework. The GLMB filter is a random finite set based filter. In particular, a Poisson mixture variational Bayesian (PMVB) model is developed to simultaneously estimate the measurement rates of multiple extended targets, while the target extension is modelled using B-splines. The proposed method was evaluated with various performance metrics in order to demonstrate its effectiveness in tracking multiple extended targets.
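    For reference, the systematic resampling step whose behaviour the proposed scheme sets out to improve can be written in a few lines; the sketch below is the standard textbook version, not the new scheme from the thesis.

```python
# Standard systematic resampling as used in SMC-(C)PHD/particle filters.
# This is the baseline scheme the thesis improves upon, shown for illustration.
import numpy as np

def systematic_resample(weights, rng=np.random.default_rng()):
    """Return indices of resampled particles (len == len(weights)); weights must sum to 1."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n      # one random offset, evenly spaced
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                               # guard against round-off
    return np.searchsorted(cumulative, positions)

weights = np.array([0.1, 0.2, 0.05, 0.4, 0.25])
idx = systematic_resample(weights)
print(idx)   # e.g. [0 1 3 3 4]: high-weight particles are duplicated, low-weight ones may vanish
```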

    Novel methods for multi-target tracking with applications in sensor registration and fusion

    Maintaining surveillance over vast volumes of space is an increasingly important capability for the defence industry. A clearer and more accurate picture of a surveillance region could be obtained through sensor fusion between a network of sensors. However, this accurate picture is dependent on the sensor registration being resolved. Any inaccuracies in sensor location or orientation can manifest themselves in the sensor measurements that are used in the fusion process, and lead to poor target tracking performance. Solutions previously proposed in the literature for the sensor registration problem have been based on a number of assumptions that do not always hold in practice, such as having a synchronous network and having small, static registration errors. This thesis will propose a number of solutions for resolving the sensor registration and sensor fusion problems jointly in an efficient manner. The assumptions made in previous works will be loosened or removed, making the solutions more applicable to problems that we are likely to see in practice. The proposed methods will be applied to both simulated data and a segment of data taken from a live trial in the field.
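    The textbook starting point for joint registration and tracking is to augment the target state with the unknown sensor offset and estimate both with a single filter. The sketch below illustrates that baseline for a linear model with one reference sensor and one biased sensor; the thesis goes well beyond it by relaxing the synchronous-network and small/static-bias assumptions, so everything here (models, noise levels) is an illustrative assumption.

```python
# Toy joint registration/tracking baseline: augment a constant-velocity target
# state with an unknown static offset of a second sensor and run one Kalman filter.
import numpy as np

dt = 1.0
F_t = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity target model
F = np.block([[F_t, np.zeros((2, 1))],           # augmented transition: bias is static
              [np.zeros((1, 2)), np.eye(1)]])
H = np.array([[1.0, 0.0, 0.0],                   # reference sensor: position only
              [1.0, 0.0, 1.0]])                  # second sensor: position plus its registration bias
Q = np.diag([0.01, 0.01, 0.0])
R = np.diag([0.25, 0.25])

x = np.zeros(3)                                  # [position, velocity, bias] estimate
P = np.diag([10.0, 10.0, 5.0])
true_bias, truth = 2.0, np.array([0.0, 1.0])

for k in range(50):
    truth = F_t @ truth
    z = np.array([truth[0] + 0.5 * np.random.randn(),
                  truth[0] + true_bias + 0.5 * np.random.randn()])
    x, P = F @ x, F @ P @ F.T + Q                # predict
    S = H @ P @ H.T + R                          # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P

print(f"estimated bias: {x[2]:.2f} (true {true_bias})")
```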

    Automotive Target Models for Point Cloud Sensors

    One of the major challenges to enable automated driving is the perception of other road users in the host vehicle’s vicinity. Various automotive sensors that provide detailed information about other traffic participants have been developed to handle this challenge. Of particular interest for this work are Light Detection and Ranging (LIDAR) and Radio Detection and Ranging (RADAR) sensors, which generate multiple, spatially distributed, noise-corrupted point measurements on other traffic participants. Based on these point measurements, the traffic participant’s kinematic and shape parameters have to be estimated. The choice of a suitable extent model is paramount to accurately track a target’s position, orientation and other parameters. How well a model performs typically depends on the type of target that has to be tracked, e.g. pedestrians, bikes or cars, as well as on the sensor’s setup and measurement principle itself. This work considers the creation of extended object models and corresponding inference strategies for tracking automotive vehicles based on accumulated point cloud data. We gain insights into the extended object model’s requirements by analysing automotive LIDAR and RADAR sensor data. This analysis aids in the identification of relevant features from the measurements’ spatial distribution and their incorporation into an accurate target model. The analysis lays the foundation for our main contributions. We developed a constrained Spline-based geometric representation and a corresponding inference strategy for the contour of cars in LIDAR data. We further developed a heuristic to account for the integration of the measurement distribution on cars, generated by LIDAR sensors mounted on the roof of the recording vehicle. Last, we developed an extended target model for cars based on automotive RADAR sensors. The model provides an interpretation of a learned Gaussian Mixture Model (GMM) as scatter sources and uses the Probabilistic Multi-Hypothesis Tracker (PMHT) to formulate a closed-form Maximum a Posteriori (MAP) update. All developed approaches are evaluated on real-world data sets.
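    As a toy illustration of the GMM-as-scatter-sources idea (not the learned model or the PMHT-based MAP update from this work), one can fit a mixture to accumulated 2-D RADAR detections in a target-centred frame and read each component as a candidate scatter source; the synthetic data below stands in for real detections.

```python
# Illustrative only: fit a GMM to accumulated 2-D RADAR detections so each
# component can be interpreted as a candidate scatter source on the car.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic detections clustered around two dominant reflectors (assumed positions)
rear_bumper = rng.normal(loc=[-2.0, 0.0], scale=[0.15, 0.4], size=(200, 2))
front_corner = rng.normal(loc=[2.0, 0.9], scale=[0.2, 0.2], size=(120, 2))
points = np.vstack([rear_bumper, front_corner])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(points)
for mean, weight in zip(gmm.means_, gmm.weights_):
    print(f"scatter source at x={mean[0]:+.2f} m, y={mean[1]:+.2f} m, weight={weight:.2f}")
```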

    Multiple-Object Estimation Techniques for Challenging Scenarios

    A series of methods for solving the multi-object estimation problem in the context of sequential Bayesian inference is presented. These methods concentrate on dealing with challenging scenarios of multiple target tracking, involving fundamental problems of nonlinearity and non-Gaussianity of processes, high state dimensionality, a high number of targets, statistical dependence between target states, and degenerate cases of low signal-to-noise ratio, high uncertainty, poorly observable states or uninformative observations. These difficulties pose obstacles to most practical multi-object inference problems, lie at the heart of the shortcomings reported for state-of-the-art methods, and so elicit novel treatments to enable tackling a broader class of real problems. The novel algorithms offered as solutions in this dissertation address such challenges by acting on the root causes of the associated problems. Often this involves essential dilemmas commonly manifested in Statistics and Decision Theory, such as trading off estimation accuracy against algorithm complexity, soft versus hard decisions, generality versus tractability, conciseness versus interpretability, etc. All proposed algorithms constitute stochastic filters, each of which is formulated to address specific aspects of the challenges at hand while offering tools to achieve judicious compromises in the aforementioned dilemmas. Two of the filters address the weight degeneracy observed in sequential Monte Carlo filters, particularly for nonlinear processes. One of these filters is designed for nonlinear non-Gaussian high-dimensional problems, delivering a representative description of the uncertainty in high-dimensional states while mitigating part of the inaccuracies that arise from the curse of dimensionality. This filter is shown to cope well with scenarios of multimodality, high state uncertainty, uninformative observations and a high number of false alarms. A multi-object filter deals with the problem of considering dependencies between target states in a way that is scalable to a large number of targets, by resorting to probabilistic graphical structures. Another multi-object filter treats the problem of reducing the computational complexity of a state-of-the-art cardinalized filter to deal with a large number of targets, without compromising accuracy significantly. Finally, a framework for associating measurements across observation sessions in scenarios of low state observability is proposed, with application to an important Space Surveillance task: cataloguing of space debris in the geosynchronous/geostationary belt. The devised methods treat the considered challenges by raising rather general questions, and provide not only principled solutions but also analyses of the essence of the investigated problems, extrapolating the implemented techniques to a wider spectrum of similar problems in Signal Processing.
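    A standard way to quantify the weight degeneracy mentioned above is the effective sample size (ESS) of the importance weights, which collapses towards 1 as a few particles absorb most of the weight. The sketch below shows only this common diagnostic; it is not one of the dissertation's filters.

```python
# Effective sample size (ESS) of importance weights: the usual diagnostic for
# weight degeneracy in sequential Monte Carlo filters.
import numpy as np

def effective_sample_size(log_weights):
    """ESS computed from unnormalised log-weights."""
    w = np.exp(log_weights - np.max(log_weights))   # stabilise before normalising
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

balanced = np.log(np.full(1000, 1e-3))              # uniform weights -> ESS = N
degenerate = np.array([0.0] + [-50.0] * 999)        # one dominant particle -> ESS ~ 1
print(effective_sample_size(balanced))    # ~1000
print(effective_sample_size(degenerate))  # ~1.0
```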

    Constrained Multi-Sensor Control Using a Multi-Target MSE Bound and a δ-GLMB Filter

    The existing multi-sensor control algorithms for multi-target tracking (MTT) within the random finite set (RFS) framework are all based on a distributed processing architecture, so the rule of generalized covariance intersection (GCI) has to be used to obtain the multi-sensor posterior density. However, there is still no reliable basis for setting the normalized fusion weight of each sensor in GCI. Therefore, to avoid the GCI rule, the paper proposes a new constrained multi-sensor control algorithm based on a centralized processing architecture. A multi-target mean-square error (MSE) bound defined in the paper serves as the cost function, and the multi-sensor control commands are the solutions that minimize this bound. In order to derive the bound by applying the generalized information inequality to RFS observations, the error between the state set and its estimate is measured by the second-order optimal sub-pattern assignment (OSPA) metric, while the multi-target Bayes recursion is performed using a δ-generalized labeled multi-Bernoulli (δ-GLMB) filter. An additional benefit of the method is that the proposed bound provides an online indication of the achievable limit of MTT precision after sensor control. Two suboptimal algorithms, the mixed penalty function (MPF) method and the complex method, are used to reduce the computational cost of solving the constrained optimization problem. Simulation results show that, for constrained multi-sensor control systems with different observation performance, the proposed method significantly outperforms the GCI-based Cauchy-Schwarz divergence method in MTT precision. Besides, when the number of sensors is relatively large, the computation time of the MPF and complex methods is much shorter than that of the exhaustive search method, at the expense of an entirely acceptable loss of tracking accuracy.
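    For reference, the optimal sub-pattern assignment (OSPA) metric mentioned above, which measures the error between a state set and its estimate while penalising cardinality mismatch, can be computed as in the sketch below; the cut-off c, order p and toy sets are illustrative choices, and this is the standard metric rather than anything specific to the paper's bound.

```python
# Standard OSPA distance between two finite sets of state vectors (rows).
# Second order (p=2) matches the "second-order OSPA" used in the paper's bound;
# the cut-off c and the example sets are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def ospa(X, Y, c=10.0, p=2):
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m == 0 or n == 0:
        return c
    if m > n:
        X, Y, m, n = Y, X, n, m                    # ensure |X| <= |Y|
    D = np.minimum(cdist(X, Y), c) ** p            # cut-off distances
    row, col = linear_sum_assignment(D)            # optimal sub-pattern assignment
    cost = D[row, col].sum() + (c ** p) * (n - m)  # localisation + cardinality terms
    return (cost / n) ** (1.0 / p)

truth = np.array([[0.0, 0.0], [5.0, 5.0]])
estimate = np.array([[0.2, -0.1], [5.5, 5.3], [20.0, 20.0]])  # one false track
print(f"OSPA error: {ospa(truth, estimate):.2f}")              # dominated by the cardinality penalty
```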