Triangulation positioning system network
This paper presents ongoing work on localization and positioning through a triangulation procedure for a Fixed Sensors Network (FSN). The FSN has to operate as a single system. As the triangulation problem becomes highly complicated when large numbers of sensors and transmitters are involved, an adequate grid topology is needed in order to tackle the detection complexity. For that reason a network grid topology is presented, and the problematic areas that need further study are analysed. To deal with saturation and False Triangulations (FTRNs), the network system must find adequate methods in every sub-area of the Area Of Interest (AOI). Concepts such as sensor blindness and overall network blindness are also presented. All these concepts affect the network detection rate and its performance, and must be handled so that overall network performance is not degraded. Network performance should be monitored continuously, with the right algorithms and methods. It is also shown that as the number of TRNs and FTRNs increases, the Detection Complexity (DC) increases. It is hoped that further research will establish all the characteristics of a triangulation system network for positioning, so that the system can perform autonomously with a high detection rate.
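As a minimal illustration of the triangulation step such a network performs, the sketch below intersects two bearing lines from fixed sensors with known positions. The function name and the linear-solve formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def triangulate_bearings(p1, theta1, p2, theta2):
    """Intersect two bearing lines from fixed sensors at p1 and p2
    (bearings in radians, measured from the x-axis) and return the
    estimated transmitter position."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the line parameters t1, t2
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1
```

With noisy bearings the two lines still intersect at a unique point, but the fix degrades as the lines become near-parallel, which is one reason the grid topology of the sensors matters.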
Bayesian Processing of Big Data using Log Homotopy Based Particle Flow Filters
Bayesian recursive estimation using large volumes of data is a challenging research topic. The problem becomes particularly complex for high-dimensional non-linear state spaces. Markov chain Monte Carlo (MCMC) based methods have been used successfully to solve such problems. The main issue when employing MCMC is the evaluation of the likelihood function at every iteration, which can become prohibitively expensive to compute. Alternative methods are therefore sought to overcome this difficulty. One such method is adaptive sequential MCMC (ASMCMC), in which confidence sampling is proposed as a way to reduce the computational cost. The main idea is to use concentration inequalities to sub-sample the measurements for which the likelihood terms are evaluated. However, ASMCMC methods require appropriate proposal distributions. In this work, we propose a novel ASMCMC framework in which log-homotopy based particle flow filters form the adaptive proposals. We show that our proposed algorithm significantly enhances performance while maintaining a comparatively low processing overhead.
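The confidence-sampling idea, growing a subsample of likelihood terms until a concentration inequality permits a confident accept/reject decision, can be sketched as follows. This is a simplified illustration, not the authors' ASMCMC code; the batch size, the empirical Bernstein bound, and the use of the observed range of the terms (in practice an a priori bound would be required) are all assumptions.

```python
import numpy as np

def confidence_mh_decision(loglik_diffs, threshold, delta=0.05, batch=50):
    """Approximate an MH accept/reject decision from a growing subsample
    of per-datum log-likelihood differences, stopping once an empirical
    Bernstein bound separates the running mean from the decision
    threshold with probability at least 1 - delta."""
    n = len(loglik_diffs)
    order = np.random.default_rng(0).permutation(n)
    span = loglik_diffs.max() - loglik_diffs.min()   # assumed known range
    t = 0
    while t < n:
        t = min(t + batch, n)
        sub = loglik_diffs[order[:t]]
        mean, std = sub.mean(), sub.std(ddof=1)
        # empirical Bernstein bound on |subsample mean - full mean|
        eps = std * np.sqrt(2 * np.log(3 / delta) / t) \
              + 3 * span * np.log(3 / delta) / t
        if abs(mean - threshold) > eps:
            return mean > threshold, t               # confident early decision
    return loglik_diffs.mean() > threshold, n
```

When the likelihood terms are concentrated, the decision is typically reached after reading only a small fraction of the data, which is the source of the computational saving.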
Joint Registration and Fusion of an Infra-Red Camera and Scanning Radar in a Maritime Context
The number of nodes in sensor networks is continually increasing, and maintaining accurate track estimates inside their common surveillance region is a critical necessity. Modern sensor platforms are likely to carry a range of different sensor modalities, all providing data at differing rates and with varying degrees of uncertainty. These factors complicate the fusion problem, as multiple observation models are required along with a dynamic prediction model. The problem is exacerbated when sensors are not registered correctly with respect to each other, i.e. when they are subject to a static or dynamic bias. In this case, measurements from different sensors may correspond to the same target but fail to correlate with each other once placed in the same Frame of Reference (FoR), which decreases track accuracy. This paper presents a method to jointly estimate the state of multiple targets in a surveillance region and to correctly register a radar and an Infrared Search and Track (IRST) system onto the same FoR to perform sensor fusion. Previous work using this type of parent-offspring process has been successful when calibrating a pair of cameras, but has never been attempted on a heterogeneous sensor network, nor in a maritime environment. This article presents results on both simulated scenarios and a segment of real data, showing a significant increase in track quality compared with using incorrectly calibrated sensors or a single radar alone.
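To make the registration issue concrete: once a radar (range, azimuth) return is mapped into the common Cartesian FoR, an IRST bearing to the same target should agree with it, and a persistent angular residual indicates a registration bias between the sensors. The sketch below is purely illustrative and is not the parent-offspring method of the paper; the function names are assumptions.

```python
import numpy as np

def radar_to_cartesian(sensor_pos, rng, az):
    """Map a radar (range, azimuth) return into the common Cartesian FoR."""
    return np.asarray(sensor_pos, float) + rng * np.array([np.cos(az), np.sin(az)])

def irst_bearing_residual(irst_pos, bearing, target_xy):
    """Angular residual between a measured IRST bearing and the bearing
    implied by a Cartesian target estimate; a persistent non-zero
    residual points to a registration bias between the two sensors."""
    dx, dy = np.asarray(target_xy, float) - np.asarray(irst_pos, float)
    implied = np.arctan2(dy, dx)
    return (bearing - implied + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
```

A joint registration-and-tracking scheme would treat such residuals as observations of the bias rather than as target motion.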
Automatic triangulation positioning system for wide area coverage from a fixed sensors network
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. In a wide area in which many Transmitters (TRs) operate, systems of Fixed Sensors (FS) may be used to detect them and find the TRs' positions. The detection and accurate location of a new TR entering the area can frequently be missed if the system fails to triangulate the relative readings accurately and analyse the changes in the received data. Additionally, there are cases in which a Triangulation Station Network (TSN) detects both the heading and the transmitter's position incorrectly. This thesis presents the design of a Fixed Sensors Network (FSN) system which is able to interact with a user and exploit the relative data of the Sensors (SRs) in real time. The system performs localization with triangulation, and the SRs detect only TR bearing data (range-free). The system design and algorithms are also explained. Efficient algorithms were elaborated and the outcomes of their implementation were evaluated. The system design aims to reduce system errors and increase the accuracy and speed of detection. Through interaction with the user and changes to the relative settings and parameters, it is able to offer the user accurate localization results for TRs in the area, minimizing false readings and False Triangulations (FTRNs). The system also enables the user to apply optimization techniques to increase the detection rate and performance and keep surveillance of the Field of Interest (FoI) at a high level. The optimization methodology applied proves that the FSN system is able to operate with high performance even when saturation phenomena appear. The unique outcome of the research conducted is that this thesis paves the way to enhanced localization via triangulation for a network of fixed sensors with known positions.
The value of this thesis is that the FSN system performs bearing-only detection (range-free) with a certain accuracy while the Area of Interest (AOI) is covered efficiently.
Novel methods for multi-target tracking with applications in sensor registration and fusion
Maintaining surveillance over vast volumes of space is an increasingly important capability for the defence industry. A clearer and more accurate picture of a surveillance region can be obtained through sensor fusion across a network of sensors. However, this accurate picture depends on the sensor registration being resolved. Any inaccuracies in sensor location or orientation can manifest themselves in the sensor measurements used in the fusion process and lead to poor target tracking performance. Solutions previously proposed in the literature for the sensor registration problem have been based on a number of assumptions that do not always hold in practice, such as having a synchronous network and having small, static registration errors. This thesis proposes a number of solutions for resolving the sensor registration and sensor fusion problems jointly in an efficient manner. The assumptions made in previous works are loosened or removed, making the solutions more applicable to problems that are likely to arise in practice. The proposed methods are applied to both simulated data and a segment of data taken from a live trial in the field.
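A common way to estimate registration errors jointly with the target state, sketched here under the small static-bias assumption that the thesis relaxes, is to augment the state vector with the bias and run a standard Kalman measurement update. The function below is an illustrative sketch, not the thesis's method; note that a single biased sensor only observes the sum of state and bias, so separating the two requires prior information or additional registered sensors.

```python
import numpy as np

def bias_augmented_update(m, P, z, H, R):
    """Kalman measurement update with the state augmented by a static
    sensor registration bias b, so that z = H x + b + v.  The mean m and
    covariance P already include the bias components (last len(z)
    entries), which are estimated jointly with the target state."""
    d_z = len(z)
    Ha = np.hstack([H, np.eye(d_z)])     # augmented model: z = [H  I] [x; b] + v
    S = Ha @ P @ Ha.T + R                # innovation covariance
    K = P @ Ha.T @ np.linalg.inv(S)      # Kalman gain for [x; b]
    m_new = m + K @ (z - Ha @ m)
    P_new = (np.eye(len(m)) - K @ Ha) @ P
    return m_new, P_new
```

After the update, the posterior explains the measurement through the combined quantity H x + b; how the correction is shared between state and bias is governed entirely by the prior covariance.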
Nonlinear Filtering based on Log-homotopy Particle Flow: Methodological Clarification and Numerical Evaluation
The state estimation of dynamical systems based on measurements is a ubiquitous problem, relevant in applications such as robotics, industrial manufacturing, computer vision, and target tracking. Recursive Bayesian methodology can be used to estimate the hidden states of a dynamical system. The procedure consists of two steps: a process update, based on solving the equations modelling the state evolution, and a measurement update, in which the prior knowledge about the system is improved based on the measurements. For most real-world systems, both the evolution and the measurement models are nonlinear functions of the system states. Additionally, both models can be perturbed by random noise sources, which may be non-Gaussian in nature. Unlike the linear Gaussian case, no optimal estimation scheme exists for nonlinear/non-Gaussian scenarios. This thesis investigates a particular method for nonlinear and non-Gaussian data assimilation, termed log-homotopy based particle flow. Practical filters based on such flows are known in the literature as Daum-Huang filters (DHF), named after their developers. The key concept behind such filters is the gradual inclusion of measurements, which counters a major drawback of single-step update schemes such as particle filters, namely degeneracy. This refers to a situation where the likelihood function has its probability mass well separated from the prior density, and/or is peaked in comparison. Conventional sampling or grid based techniques do not perform well under such circumstances, and achieving reasonable accuracy can incur a high processing cost. The DHF is a sampling based scheme that provides a unique way to tackle this challenge, thereby lowering the processing cost.
This is achieved by dividing the single measurement update step into multiple sub-steps, such that particles are moved incrementally from their prior locations until they reach their final locations. The motion is controlled by a differential equation, which is solved numerically to yield the updated states. DH filters, although not new in the literature, have not yet been fully explored in detail; they lack the in-depth analysis that other contemporary filters have undergone. In particular, the implementation details of the DHF are very application specific. In this work, we pursue four main objectives. The first is the exploration of the theoretical concepts behind the DHF. Secondly, we build an understanding of the existing implementation framework and highlight its potential shortcomings. As a sub-task, we carry out a detailed study of the important factors that affect the performance of a DHF and suggest possible improvements for each of them. The third objective is to use the improved implementation to derive new filtering algorithms. Finally, we extend the DHF theory and derive new flow equations and filters to cater for more general scenarios. Improvements in the implementation architecture of the standard DHF are one of the key contributions of this thesis. The scope of applicability of the DHF is expanded by combining it with other schemes, such as sequential Markov chain Monte Carlo and the tensor-decomposition based solution of the Fokker-Planck equation, resulting in the development of new nonlinear filtering algorithms. The standard DHF with the improved implementation, together with the newly derived algorithms, is tested in challenging simulated scenarios. Detailed analyses are carried out, together with comparisons against more established filtering schemes, using estimation error and processing time as the key performance measures. We show that our new filtering algorithms exhibit marked performance improvements over the traditional schemes.
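For the linear-Gaussian measurement case, the exact Daum-Huang flow has a known closed form, dx/dλ = A(λ)x + b(λ), which a minimal sketch can integrate with Euler steps over the pseudo-time λ ∈ [0, 1]. The step count and the plain Euler scheme are illustrative choices, not the implementation studied in the thesis.

```python
import numpy as np

def daum_huang_exact_flow(particles, m0, P, H, R, z, n_steps=2000):
    """Migrate particles from the prior N(m0, P) toward the posterior by
    Euler integration of the exact Daum-Huang flow for a linear-Gaussian
    measurement z = H x + v:  dx/dlam = A(lam) x + b(lam)."""
    x = particles.copy()
    d = len(m0)
    dlam = 1.0 / n_steps
    Rinv = np.linalg.inv(R)
    for k in range(n_steps):
        lam = k * dlam
        S = lam * (H @ P @ H.T) + R
        A = -0.5 * P @ H.T @ np.linalg.solve(S, H)
        b = (np.eye(d) + 2 * lam * A) @ (
            (np.eye(d) + lam * A) @ (P @ H.T @ Rinv @ z) + A @ m0)
        x = x + dlam * (x @ A.T + b)   # one Euler sub-step for all particles
    return x
```

Flowing a single "particle" placed at the prior mean reproduces the Kalman posterior mean up to the Euler discretisation error, which is a convenient sanity check on any implementation.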