City Data Fusion: Sensor Data Fusion in the Internet of Things
The Internet of Things (IoT) has gained substantial attention recently and plays a
significant role in smart city application deployments. A number of such smart
city applications depend on sensor fusion capabilities in the cloud from
diverse data sources. We introduce the concept of IoT and present in detail ten
different parameters that govern our sensor data fusion evaluation framework.
We then evaluate the current state-of-the art in sensor data fusion against our
sensor data fusion framework. Our main goal is to examine and survey different
sensor data fusion research efforts based on our evaluation framework. The
major open research issues related to sensor data fusion are also presented.Comment: Accepted to be published in International Journal of Distributed
Systems and Technologies (IJDST), 201
From Data Fusion to Knowledge Fusion
The task of {\em data fusion} is to identify the true values of data items
(e.g., the true date of birth for {\em Tom Cruise}) among multiple observed
values drawn from different sources (e.g., Web sites) of varying (and unknown)
reliability. A recent survey\cite{LDL+12} has provided a detailed comparison of
various fusion methods on Deep Web data. In this paper, we study the
applicability and limitations of different fusion techniques on a more
challenging problem: {\em knowledge fusion}. Knowledge fusion identifies true
subject-predicate-object triples extracted by multiple information extractors
from multiple information sources. These extractors perform the tasks of entity
linkage and schema alignment, thus introducing an additional source of noise
that is quite different from that traditionally considered in the data fusion
literature, which only focuses on factual errors in the original sources. We
adapt state-of-the-art data fusion techniques and apply them to a knowledge
base with 1.6B unique knowledge triples extracted by 12 extractors from over 1B
Web pages, which is three orders of magnitude larger than the data sets used in
previous data fusion papers. We show great promise of the data fusion
approaches in solving the knowledge fusion problem, and suggest interesting
research directions through a detailed error analysis of the methods.
Comment: VLDB'201
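The core data-fusion task described above (picking the true value of a data item from conflicting claims by sources of varying reliability) can be illustrated with a minimal source-weighted voting sketch. The function name, accuracy scores, and default weight below are illustrative assumptions, not the method evaluated in the paper:

```python
from collections import defaultdict

def fuse_by_weighted_vote(claims, source_accuracy):
    """Pick one value per data item by weighting each claim by its source's accuracy.

    claims: iterable of (source, data_item, value) triples
    source_accuracy: dict mapping source -> accuracy weight in [0, 1]
    Returns a dict mapping data_item -> highest-scoring value.
    """
    scores = defaultdict(lambda: defaultdict(float))
    for source, item, value in claims:
        # Unknown sources get a neutral default weight (an assumption of this sketch).
        scores[item][value] += source_accuracy.get(source, 0.5)
    return {item: max(vals, key=vals.get) for item, vals in scores.items()}
```

Here a single reliable source outvotes two unreliable ones agreeing on a wrong value, which is the qualitative behaviour that distinguishes accuracy-aware fusion from plain majority voting.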
MRI/TRUS data fusion for brachytherapy
BACKGROUND: Prostate brachytherapy consists in placing radioactive seeds for
tumour destruction under transrectal ultrasound imaging (TRUS) control. It
requires prostate delineation from the images for dose planning. Because
ultrasound imaging is patient- and operator-dependent, we have proposed to fuse
MRI data to TRUS data to make image processing more reliable. The technical
accuracy of this approach has already been evaluated. METHODS: We present work
in progress concerning the evaluation of the approach from the dosimetry
viewpoint. The objective is to determine what impact this system may have on
the treatment of the patient. Dose planning is performed from initial TRUS
prostate contours and evaluated on contours modified by data fusion. RESULTS:
For the eight patients included, we demonstrate that TRUS prostate volume is
most often underestimated and that dose is overestimated in a correlated way.
However, dose constraints are still verified for those eight patients.
CONCLUSIONS: This confirms our initial hypothesis.
Estimating and exploiting the degree of independent information in distributed data fusion
Double counting is a major problem in distributed data fusion systems. To maintain flexibility and scalability, distributed data fusion algorithms should use only local information. However, globally optimal solutions exist only in highly restricted circumstances. Suboptimal algorithms can be applied in a far wider range of cases, but can be very conservative.
In this paper we present preliminary work to develop
distributed data fusion algorithms that can estimate and
exploit the correlations between the estimates stored in
different nodes in a distributed data fusion network.
We show that partial information can be modelled as a
kind of "overweighted" Covariance Intersection algorithm. We motivate the need for an adaptive scheme
by analysing the correlation behaviour of a simple distributed data fusion network and show that it is complicated and counterintuitive. Two simple approaches
to estimate the correlation structure are presented and
their results analysed. We show that significant advantages can be obtained
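The Covariance Intersection update referred to above has a standard closed form that fuses two estimates without knowing their cross-correlation. A minimal sketch, assuming a fixed weight omega rather than the trace- or determinant-minimizing choice, and not reproducing the paper's "overweighted" variant:

```python
import numpy as np

def covariance_intersection(mean_a, cov_a, mean_b, cov_b, omega):
    """Fuse two estimates with unknown cross-correlation via Covariance Intersection.

    omega in [0, 1] weights the two information matrices; the fused covariance
    is consistent (never overconfident) for any correlation between the inputs.
    """
    inv_a = np.linalg.inv(cov_a)
    inv_b = np.linalg.inv(cov_b)
    cov_f = np.linalg.inv(omega * inv_a + (1.0 - omega) * inv_b)
    mean_f = cov_f @ (omega * inv_a @ mean_a + (1.0 - omega) * inv_b @ mean_b)
    return mean_f, cov_f
```

In practice omega is usually chosen by minimizing the trace or determinant of the fused covariance; the fixed weight here only keeps the sketch short.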
Classification accuracy increase using multisensor data fusion
The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.)
but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to the
confusion of materials such as different roofs, pavements, roads, etc. and therefore may provide wrong interpretation and use of classification
products. Employment of hyperspectral data is another solution, but their low spatial resolution (compared to multispectral
data) restricts their usage for many applications. Another improvement can be achieved by fusion of multisensor data, since
this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed
for automatic classification, interpretation, and change detection. In this paper we present an approach for very high resolution
SAR and multispectral data fusion for automatic classification in urban areas. Single polarization TerraSAR-X (SpotLight mode) and
multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised
clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network).
This framework allows a relevant way of multisource data combination following consensus theory. The classification is not influenced
by the limitations of dimensionality, and the calculation complexity primarily depends on the step of dimensionality reduction. Fusion
of single polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types
of urban objects to be classified into predefined classes of interest with increased accuracy. The comparison to classification results
of WorldView-2 multispectral data (8 spectral bands) is provided and the numerical evaluation of the method in comparison to other
established methods illustrates the advantage in the classification accuracy for many classes such as buildings, low vegetation, sport
objects, forest, roads, railroads, etc.
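The final aggregation step of such a consensus scheme, where each sensor contributes a discrete label per pixel and the labels are combined by reliability-weighted voting, can be sketched as follows. The function, the per-source weights, and the plain-voting rule are illustrative assumptions, not the INFOFUSE implementation:

```python
import numpy as np

def consensus_classify(label_maps, weights):
    """Combine per-source discrete label maps by weighted per-pixel voting.

    label_maps: list of 1-D integer arrays, one class label per pixel per source
    weights: per-source reliability weights (same length as label_maps)
    Returns the winning class label for each pixel.
    """
    labels = np.stack(label_maps)            # shape (n_sources, n_pixels)
    n_pixels = labels.shape[1]
    n_classes = int(labels.max()) + 1
    votes = np.zeros((n_classes, n_pixels))
    for src_labels, w in zip(labels, weights):
        # Each source adds its weight to the class it voted for at each pixel.
        votes[src_labels, np.arange(n_pixels)] += w
    return votes.argmax(axis=0)
```

A Bayesian or neural-network aggregation, as in the framework described above, would replace the voting rule with learned class posteriors, but the per-pixel combination structure is the same.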
Common and Distinct Components in Data Fusion
In many areas of science multiple sets of data are collected pertaining to
the same system. Examples are food products which are characterized by
different sets of variables, bio-processes which are on-line sampled with
different instruments, or biological systems of which different genomics
measurements are obtained. Data fusion is concerned with analyzing such sets of
data simultaneously to arrive at a global view of the system under study. One
of the upcoming areas of data fusion is exploring whether the data sets have
something in common or not. This gives insight into common and distinct
variation in each data set, thereby facilitating understanding the
relationships between the data sets. Unfortunately, research on methods to
distinguish common and distinct components is fragmented, both in terminology
as well as in methods: there is no common ground which hampers comparing
methods and understanding their relative merits. This paper provides a unifying
framework for this subfield of data fusion by using rigorous arguments from
linear algebra. The most frequently used methods for distinguishing common and
distinct components are explained in this framework and some practical examples
are given of these methods in the areas of (medical) biology and food science.
Comment: 50 pages, 12 figures
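One standard linear-algebra route to separating common from distinct variation is to compare the column spaces of two data blocks via principal angles: singular values of the product of their orthonormal bases are cosines of those angles, and values near one indicate shared directions. A minimal sketch; the function name and tolerance are assumptions, and this is not any single method from the survey:

```python
import numpy as np

def common_subspace(X1, X2, tol=0.99):
    """Estimate a basis for the common column subspace of two data blocks.

    Orthonormalize each block, then take SVD of the cross-product: singular
    values are cosines of the principal angles between the two subspaces.
    Directions with cosine >= tol are treated as common.
    """
    Q1, _ = np.linalg.qr(X1)
    Q2, _ = np.linalg.qr(X2)
    U, s, _ = np.linalg.svd(Q1.T @ Q2)
    k = int(np.sum(s >= tol))          # number of (nearly) shared directions
    return Q1 @ U[:, :k]               # basis for the common part, in data space
```

The distinct part of each block is then its projection onto the orthogonal complement of this common basis, which is the decomposition the unifying framework above formalizes.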
GeoZui3D: Data Fusion for Interpreting Oceanographic Data
GeoZui3D stands for Geographic Zooming User Interface. It is a new visualization software system designed for interpreting multiple sources of 3D data. The system supports gridded terrain models, triangular meshes, curtain plots, and a number of other display objects. A novel center-of-workspace interaction method unifies a number of aspects of the interface: it creates a simple viewpoint control method, helps link multiple views, and is ideal for stereoscopic viewing. GeoZui3D has a number of features to support real-time input. Through a CORBA interface, external entities can influence the position and state of objects in the display. Extra windows can be attached to moving objects, allowing their position and data to be monitored. We describe the application of this system for heterogeneous data fusion, for multibeam QC, and for ROV/AUV monitoring.