Edge and Line Feature Extraction Based on Covariance Models
Image segmentation based on contour extraction usually involves three stages of image operations: feature extraction, edge detection and edge linking. This paper is devoted to the first stage: a method to design feature extractors used to detect edges from noisy and/or blurred images. The method relies on a model that describes the existence of image discontinuities (e.g. edges) in terms of covariance functions. The feature extractor transforms the input image into a “log-likelihood ratio” image. Such an image is a good starting point for the edge detection stage since it represents a balanced trade-off between signal-to-noise ratio and the ability to resolve detailed structures. For 1-D signals, the performance of the edge detector based on this feature extractor is quantitatively assessed by the so-called “average risk measure”. The results are compared with the performances of 1-D edge detectors known from the literature. Generalizations to 2-D operators are given. Applications to real-world images are presented, showing the capability of the covariance model to build edge and line feature extractors. Finally, it is shown that the covariance model can be coupled to an MRF model of edge configurations so as to arrive at a maximum a posteriori estimate of the edges or lines in the image.
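The two-hypothesis idea behind the log-likelihood ratio signal can be illustrated with a minimal 1-D sketch. This is not the paper's actual covariance model: the function name, the identity-scaled covariances, and the mean-removal step are all simplifying assumptions made here for illustration. Each window is scored under two zero-mean Gaussian hypotheses, one whose covariance encodes an edge and one that encodes a smooth region, and the score peaks where an edge is most likely.

```python
import numpy as np

def llr_edge_response(signal, window, cov_edge, cov_flat):
    """Hypothetical log-likelihood ratio feature extractor for 1-D signals.

    Scores each window under two zero-mean Gaussian models: cov_edge
    (covariance encoding an edge) vs cov_flat (covariance encoding a
    smooth region). High scores mark likely edge locations.
    """
    inv_e, inv_f = np.linalg.inv(cov_edge), np.linalg.inv(cov_flat)
    _, logdet_e = np.linalg.slogdet(cov_edge)
    _, logdet_f = np.linalg.slogdet(cov_flat)
    out = np.full(len(signal), -np.inf)  # borders left unscored
    for i in range(len(signal) - window + 1):
        x = signal[i:i + window].astype(float)
        x = x - x.mean()  # respond to local structure, not mean intensity
        # log N(x; 0, cov_edge) - log N(x; 0, cov_flat)
        llr = 0.5 * (x @ inv_f @ x - x @ inv_e @ x + logdet_f - logdet_e)
        out[i + window // 2] = llr
    return out
```

On a step signal, the response is maximal at the window that straddles the discontinuity, which is exactly the property that makes such a signal a good input to a subsequent edge-detection stage.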
From Data Fusion to Knowledge Fusion
The task of data fusion is to identify the true values of data items
(e.g., the true date of birth for Tom Cruise) among multiple observed
values drawn from different sources (e.g., Web sites) of varying (and unknown)
reliability. A recent survey [LDL+12] has provided a detailed comparison of
various fusion methods on Deep Web data. In this paper, we study the
applicability and limitations of different fusion techniques on a more
challenging problem: knowledge fusion. Knowledge fusion identifies true
subject-predicate-object triples extracted by multiple information extractors
from multiple information sources. These extractors perform the tasks of entity
linkage and schema alignment, thus introducing an additional source of noise
that is quite different from that traditionally considered in the data fusion
literature, which only focuses on factual errors in the original sources. We
adapt state-of-the-art data fusion techniques and apply them to a knowledge
base with 1.6B unique knowledge triples extracted by 12 extractors from over 1B
Web pages, which is three orders of magnitude larger than the data sets used in
previous data fusion papers. We show great promise of the data fusion
approaches in solving the knowledge fusion problem, and suggest interesting
research directions through a detailed error analysis of the methods.Comment: VLDB'201
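The baseline that the adapted fusion techniques improve upon can be sketched as naive voting over extracted triples. This is a hypothetical illustration, not the paper's method: for each (subject, predicate) data item, the object value asserted by the most distinct sources wins, with no modeling of source or extractor reliability.

```python
from collections import defaultdict

def fuse_triples(extractions):
    """Naive-voting baseline for knowledge fusion (illustrative sketch).

    extractions: iterable of (subject, predicate, object, source) tuples
    produced by multiple extractors over multiple sources.
    Returns {(subject, predicate): object}, keeping per data item the
    object supported by the most distinct sources (ties broken arbitrarily).
    """
    votes = defaultdict(lambda: defaultdict(set))
    for s, p, o, src in extractions:
        votes[(s, p)][o].add(src)  # count distinct sources per value
    return {item: max(objs, key=lambda o: len(objs[o]))
            for item, objs in votes.items()}
```

Extraction noise from entity linkage and schema alignment means two "sources" here may really be one extractor's systematic error, which is precisely why plain voting is only a starting point.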
Invariant feature extraction from event based stimuli
We propose a novel architecture, the event-based GASSOM for learning and
extracting invariant representations from event streams originating from
neuromorphic vision sensors. The framework is inspired by feed-forward cortical
models for visual processing. The model, which is based on the concepts of
sparsity and temporal slowness, is able to learn feature extractors that
resemble neurons in the primary visual cortex. Layers of units in the proposed
model can be cascaded to learn feature extractors with different levels of
complexity and selectivity. We explore the applicability of the framework on
real-world tasks by using the learned network for object recognition. The
proposed model achieves higher classification accuracy than other
state-of-the-art event-based processing methods. Our results also demonstrate
the generality and robustness of the method, as the recognizers for different
data sets and different tasks all used the same set of learned feature
detectors, which were trained on data collected independently of the testing
data.
Comment: 6 page
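One of the two learning principles named above, temporal slowness, can be sketched as a simple cost on feature responses. This is an illustrative stand-in, not the GASSOM objective itself: it just penalizes fast variation, so features whose responses change little between consecutive events score as "slow".

```python
import numpy as np

def slowness_cost(responses):
    """Illustrative temporal-slowness cost (not the GASSOM objective).

    responses: (T, N) array of N feature activations over T time steps.
    Returns one cost per feature: the mean squared difference between
    consecutive responses. Lower cost = slower, more temporally stable
    feature, the kind favored alongside sparsity in slowness-based models.
    """
    diffs = np.diff(responses, axis=0)   # (T-1, N) temporal derivatives
    return np.mean(diffs ** 2, axis=0)   # (N,) per-feature slowness cost
```

Minimizing such a cost over a bank of learned filters is one common way that slowness-based models arrive at stable, invariance-promoting feature extractors.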
Fusing Data with Correlations
Many applications rely on Web data and extraction systems to accomplish
knowledge-driven tasks. Web information is not curated, so many sources provide
inaccurate or conflicting information. Moreover, extraction systems introduce
additional noise to the data. We wish to automatically distinguish correct
data from erroneous data to create a cleaner set of integrated data. Previous work
has shown that a naïve voting strategy that trusts data provided by the
majority, or by at least a certain number of sources, may not work well in the
presence of copying between the sources. However, correlation between sources
can be much broader than copying: sources may provide data from complementary
domains (negative correlation), extractors may focus on different types
of information (negative correlation), and extractors may apply common
rules in extraction (positive correlation, without copying). In this
paper we present novel techniques for modeling correlations between sources and
applying them in truth finding.
Comment: SIGMOD'201
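A crude way to see why such correlations matter is to compare the overlap between sources' claim sets. The diagnostic below is a hypothetical sketch, not the paper's correlation model: very high Jaccard overlap between two extractors hints at shared rules (positive correlation), while near-zero overlap can indicate complementary domains (negative correlation); either breaks the independence assumption behind naive voting.

```python
def pairwise_jaccard(claims_by_source):
    """Illustrative correlation diagnostic over source outputs.

    claims_by_source: dict mapping a source/extractor name to the set
    of claims it provides. Returns Jaccard similarity for every pair
    of sources, as a rough signal of positive or negative correlation.
    """
    names = sorted(claims_by_source)
    sims = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            inter = len(claims_by_source[a] & claims_by_source[b])
            union = len(claims_by_source[a] | claims_by_source[b])
            sims[(a, b)] = inter / union if union else 0.0
    return sims
```

A correlation-aware truth finder would use signals like these to discount near-duplicate voters rather than counting each as independent support.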
Quantum to Classical Randomness Extractors
The goal of randomness extraction is to distill (almost) perfect randomness
from a weak source of randomness. When the source yields a classical string X,
many extractor constructions are known. Yet, when considering a physical
randomness source, X is itself ultimately the result of a measurement on an
underlying quantum system. When characterizing the power of a source to supply
randomness it is hence a natural question to ask, how much classical randomness
we can extract from a quantum system. To tackle this question we here take on
the study of quantum-to-classical randomness extractors (QC-extractors). We
provide constructions of QC-extractors based on measurements in a full set of
mutually unbiased bases (MUBs), and certain single qubit measurements. As the
first application, we show that any QC-extractor gives rise to entropic
uncertainty relations with respect to quantum side information. Such relations
were previously only known for two measurements. As the second application, we
resolve the central open question in the noisy-storage model [Wehner et al.,
PRL 100, 220502 (2008)] by linking security to the quantum capacity of the
adversary's storage device.
Comment: 6+31 pages, 2 tables, 1 figure. v2: improved converse parameters,
typos corrected, new discussion. v3: new reference
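For context, the previously known two-measurement case mentioned above is the entropic uncertainty relation with quantum side information of Berta et al. (Nature Physics 6, 659, 2010): for measurements $X$ and $Z$ on system $A$ with side information $B$,

```latex
H(X \mid B) + H(Z \mid B) \;\ge\; \log_2 \frac{1}{c} + H(A \mid B),
\qquad
c = \max_{x,z} \bigl|\langle \psi_x \mid \phi_z \rangle\bigr|^2,
```

where $H(\cdot \mid \cdot)$ is the conditional von Neumann entropy and $|\psi_x\rangle$, $|\phi_z\rangle$ are the eigenvectors of the two measurements; for mutually unbiased bases on a $d$-dimensional system, $c = 1/d$. The QC-extractor construction yields relations of this flavor for larger sets of measurements.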