Expressing Bayesian Fusion as a Product of Distributions: Application to Randomized Hough Transform
Data fusion is a common issue in mobile robotics, computer-assisted
medical diagnosis, and behavioral control of simulated characters, for instance.
However, data sources are often noisy, expert opinions are not known with absolute
precision, and motor commands do not always act on the environment in the same exact manner.
In these cases, classical logic fails to manage the fusion process efficiently.
Confronting different pieces of knowledge in an uncertain environment can therefore be adequately
formalized in the Bayesian framework.
Moreover, Bayesian fusion can be expensive in terms of memory usage and processing
time. This paper aims precisely at expressing any Bayesian fusion process as a
product of probability distributions in order to reduce its complexity. We first study
both direct and inverse fusion schemes. We show that, contrary to direct models,
inverse local models need a specific prior in order to allow the fusion to be computed
as a product. We therefore propose to add a consistency variable to each local
model, and we show that these additional variables allow the global probability
distribution over the fused variable to be computed as a product of the local
distributions. Finally, we take the example of the Randomized Hough Transform.
We rewrite it in the Bayesian framework, considering it as a fusion process
that extracts lines from pairs of dots in a picture. As expected, under the
appropriate assumptions we recover the expression of the Randomized Hough
Transform found in the literature.
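As an illustration of fusion as a product of distributions (not the paper's general scheme with consistency variables, just the simplest Gaussian special case under a flat prior), fusing independent Gaussian estimates of a single variable amounts to multiplying the local densities, which yields a Gaussian with precision-weighted mean:

```python
import numpy as np

def fuse_gaussians(means, variances):
    """Fuse independent Gaussian estimates of one variable.

    With a flat prior, the posterior over the fused variable is
    proportional to the product of the local Gaussian distributions,
    which is again Gaussian with precision-weighted mean and
    summed precision.
    """
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions * np.asarray(means, dtype=float)).sum()
    return fused_mean, fused_var

# Two equally reliable sensors observing the same quantity:
# the fused mean is their average and the variance halves.
mean, var = fuse_gaussians([2.0, 4.0], [1.0, 1.0])  # -> (3.0, 0.5)
```

The product form is what makes the fusion cheap: each new source multiplies in one more local distribution instead of requiring a joint model over all sources.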
Model Data Fusion: developing Bayesian inversion to constrain equilibrium and mode structure
Recently, a new probabilistic "data fusion" framework based on Bayesian
principles has been developed on JET and W7-AS. The Bayesian analysis framework
folds in uncertainties and inter-dependencies in the diagnostic data and signal
forward-models, together with prior knowledge of the state of the plasma, to
yield predictions of internal magnetic structure. A feature of the framework,
known as MINERVA (J. Svensson, A. Werner, Plasma Physics and Controlled Fusion
50, 085022, 2008), is the inference of magnetic flux surfaces without the use
of a force balance model. We discuss results from a new project to develop
Bayesian inversion tools that aim to (1) distinguish between competing
equilibrium theories, which capture different physics, using the MAST spherical
tokamak; and (2) test the predictions of MHD theory, particularly mode
structure, using the H-1 Heliac.
Comment: submitted to Journal of Plasma Fusion Research 10/11/200
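The core idea of folding measurement uncertainty and a signal forward-model into a posterior over a model parameter can be sketched with a toy grid-based Bayesian inversion. Everything here is hypothetical (a linear forward-model standing in for a real diagnostic model, a flat prior); the actual MINERVA framework infers far richer plasma state descriptions:

```python
import numpy as np

def grid_posterior(data, sigma, forward, theta_grid):
    """Grid-based Bayesian inversion with a flat prior: combine a
    Gaussian measurement-noise model with a signal forward-model to
    obtain a posterior over the model parameter theta."""
    log_post = np.array([-0.5 * np.sum((data - forward(t)) ** 2) / sigma**2
                         for t in theta_grid])
    log_post -= log_post.max()          # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()            # normalize to a distribution

# Hypothetical linear forward-model pred = theta * x; noise-free
# synthetic data generated with theta = 2.
x = np.linspace(0.0, 1.0, 20)
theta_grid = np.linspace(0.0, 4.0, 81)
post = grid_posterior(2.0 * x, sigma=0.1, forward=lambda t: t * x,
                      theta_grid=theta_grid)
# The posterior peaks at the generating parameter, theta = 2.
```

Competing forward-models (here, competing equilibrium theories) can then be compared by how much posterior mass each assigns to the observed data.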
Using Bayesian Programming for Multisensor Multi-Target Tracking in Automotive Applications
A prerequisite to the design of future Advanced Driver Assistance Systems for cars is a sensing system providing all the information required for high-level driving assistance tasks. Carsense is a European project whose purpose is to develop such a new sensing system. It will combine different sensors (laser, radar and video) and will rely on the fusion of the information coming from these sensors in order to achieve better accuracy, robustness and increased information content. This paper demonstrates the interest of using
probabilistic reasoning techniques to address this challenging multi-sensor data fusion problem. The approach used is called Bayesian Programming. It is a general approach based on an implementation of Bayesian theory. It was introduced first to design robot control programs, but its scope of application is much broader, and it can be used whenever one has to deal with problems involving uncertain or incomplete knowledge.
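A minimal sketch of the kind of probabilistic multi-sensor reasoning involved (not the Carsense system itself): fusing independent binary detections from several sensors with Bayes' rule, assuming conditional independence given the target state and hypothetical likelihood values for the laser, radar and video channels:

```python
def fuse_detections(prior, likelihoods):
    """Naive-Bayes multi-sensor fusion of a binary target state.

    prior: P(target present) before seeing any sensor.
    likelihoods: one (P(obs | target), P(obs | no target)) pair per
    sensor, assumed conditionally independent given the target state.
    Returns the posterior P(target present | all observations).
    """
    p1, p0 = prior, 1.0 - prior
    for l_target, l_clutter in likelihoods:
        p1 *= l_target       # accumulate evidence for a target
        p0 *= l_clutter      # ... and for clutter / no target
    return p1 / (p1 + p0)    # normalize

# Hypothetical laser/radar/video likelihoods for one detection.
p = fuse_detections(0.5, [(0.9, 0.2), (0.8, 0.3), (0.7, 0.4)])
# Three weakly informative sensors combine into a confident posterior.
```

The point of the example is the structure, not the numbers: each sensor contributes a likelihood factor, and the fusion is again a normalized product.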
Probabilistic Methodology and Techniques for Artefact Conception and Development
The purpose of this paper is to survey the state of the art in probabilistic methodology and techniques for artefact conception and development. It is the 8th deliverable of the BIBA (Bayesian Inspired Brain and Artefacts) project. We first present the incompleteness problem as the central difficulty that both living creatures and artefacts have to face: how can they perceive, infer, decide and act efficiently with incomplete and uncertain knowledge? We then introduce a generic probabilistic formalism called Bayesian Programming. This formalism is then used to review the main probabilistic methodologies
and techniques. This review is organized in three parts: first, the probabilistic models, from Bayesian networks to Kalman filters and from sensor fusion to CAD systems; second, the inference techniques; and finally, the learning, model acquisition and comparison methodologies. We conclude with the perspectives of the BIBA project as they arise from this state of the art.
On the Effect of Inter-observer Variability for a Reliable Estimation of Uncertainty of Medical Image Segmentation
Uncertainty estimation methods are expected to improve the understanding and
quality of computer-assisted methods used in medical applications (e.g.,
neurosurgical interventions, radiotherapy planning), where automated medical
image segmentation is crucial. In supervised machine learning, a common
practice to generate ground truth label data is to merge observer annotations.
However, as many medical image tasks show a high inter-observer variability
resulting from factors such as image quality, different levels of user
expertise and domain knowledge, little is known as to how inter-observer
variability and commonly used fusion methods affect the estimation of
uncertainty of automated image segmentation. In this paper we analyze the
effect of common image label fusion techniques on uncertainty estimation, and
propose to learn the uncertainty among observers. The results highlight the
negative effect that common fusion methods used in deep learning have on
obtaining reliable estimates of segmentation uncertainty. Additionally, we show
that the learned observers' uncertainty can be combined with standard Monte
Carlo dropout Bayesian neural networks to characterize the uncertainty of the
model's parameters.
Comment: Appears in Medical Image Computing and Computer Assisted
Interventions (MICCAI), 201
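The Monte Carlo dropout technique mentioned above can be sketched with a toy NumPy network (not the paper's segmentation model; the weights here are hypothetical): dropout stays active at test time, and the spread across stochastic forward passes serves as a proxy for model uncertainty:

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, p_drop=0.5, T=200, seed=0):
    """Monte Carlo dropout: keep dropout active at test time, run T
    stochastic forward passes through a toy two-layer network, and
    read the spread of the outputs as an uncertainty estimate."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(T):
        h = np.maximum(0.0, W1 @ x)              # hidden layer, ReLU
        keep = rng.random(h.shape) >= p_drop     # fresh random mask each pass
        h = np.where(keep, h / (1.0 - p_drop), 0.0)  # inverted-dropout scaling
        preds.append(W2 @ h)
    preds = np.asarray(preds)
    # Mean over passes is the prediction; std is the uncertainty proxy.
    return preds.mean(axis=0), preds.std(axis=0)

# Hypothetical weights standing in for an already-trained network.
rng = np.random.default_rng(42)
W1, W2 = rng.normal(size=(8, 3)), rng.normal(size=(1, 8))
mean, std = mc_dropout_predict(np.array([0.5, -1.0, 2.0]), W1, W2)
```

In the paper's setting, the same sampling idea is applied per pixel of a segmentation map rather than to a single scalar output.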
Incrementally Learned Mixture Models for GNSS Localization
GNSS localization is an important part of today's autonomous systems,
although it suffers from non-Gaussian errors caused by non-line-of-sight
effects. Recent methods are able to mitigate these effects by including the
corresponding distributions in the sensor fusion algorithm. However, these
approaches require prior knowledge about the sensor's distribution, which is
often not available. We introduce a novel sensor fusion algorithm based on
variational Bayesian inference, that is able to approximate the true
distribution with a Gaussian mixture model and to learn its parametrization
online. The proposed Incremental Variational Mixture algorithm automatically
adapts the number of mixture components to the complexity of the measurement's
error distribution. We compare the proposed algorithm against current
state-of-the-art approaches using a collection of open access real world
datasets and demonstrate its superior localization accuracy.
Comment: 8 pages, 5 figures, published in proceedings of the IEEE Intelligent
Vehicles Symposium (IV) 201
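The underlying idea of representing a non-Gaussian error distribution with a Gaussian mixture can be illustrated with a plain batch EM fit in one dimension. This is only an offline stand-in for the paper's online variational scheme (which additionally adapts the number of components), fitted on synthetic bimodal errors:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50, seed=1):
    """Batch EM for a 1-D Gaussian mixture: an offline illustration of
    how a mixture model captures a multi-modal error distribution such
    as NLOS-contaminated GNSS residuals."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k)                # init means from random samples
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        dens = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2.0 * np.pi * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Synthetic bimodal errors: a nominal mode plus an offset NLOS-like mode.
rng = np.random.default_rng(7)
samples = np.concatenate([rng.normal(0.0, 1.0, 500),
                          rng.normal(5.0, 1.0, 500)])
w, mu, var = em_gmm_1d(samples, k=2)
```

An incremental variant, as in the paper, would instead update these sufficient statistics per measurement and spawn or prune components as the data demand.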