Knowledge Base Population using Semantic Label Propagation
A crucial aspect of a knowledge base population system that extracts new
facts from text corpora is the generation of training data for its relation
extractors. In this paper, we present a method that maximizes the effectiveness
of newly trained relation extractors at a minimal annotation cost. Manual
labeling can be significantly reduced by Distant Supervision, which is a method
to construct training data automatically by aligning a large text corpus with
an existing knowledge base of known facts. For example, all sentences
mentioning both 'Barack Obama' and 'US' may serve as positive training
instances for the relation born_in(subject,object). However, distant
supervision typically results in a highly noisy training set: many training
sentences do not really express the intended relation. We propose to combine
distant supervision with minimal manual supervision in a technique called
feature labeling, to eliminate noise from the large and noisy initial training
set, resulting in a significant increase of precision. We further improve on
this approach by introducing the Semantic Label Propagation method, which uses
the similarity between low-dimensional representations of candidate training
instances, to extend the training set in order to increase recall while
maintaining high precision. Our proposed strategy for generating training data
is studied and evaluated on an established test collection designed for
knowledge base population tasks. The experimental results show that the
Semantic Label Propagation strategy leads to substantial performance gains when
compared to existing approaches, while requiring an almost negligible manual
annotation effort.
Comment: Submitted to Knowledge Based Systems, special issue on Knowledge Bases for Natural Language Processing
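The distant-supervision step described in the abstract can be sketched in a few lines: every sentence that mentions both entities of a known fact is taken as a (noisy) positive instance. The knowledge base, relation name, and sentences below are toy illustrations, not data from the paper.

```python
# Minimal sketch of distant supervision: align a sentence corpus with a
# knowledge base of known facts. All data below is hypothetical.

def distant_supervision(kb, sentences):
    """kb: dict mapping (subject, object) -> relation name.
    Returns (sentence, subject, object, relation) training instances."""
    instances = []
    for sentence in sentences:
        for (subj, obj), relation in kb.items():
            if subj in sentence and obj in sentence:
                instances.append((sentence, subj, obj, relation))
    return instances

kb = {("Barack Obama", "US"): "born_in"}
sentences = [
    "Barack Obama was born in the US.",               # truly expresses born_in
    "Barack Obama was elected president of the US.",  # noise: co-occurrence only
    "Paris is the capital of France.",                # no KB pair mentioned
]
train = distant_supervision(kb, sentences)
# Both Obama/US sentences get labeled born_in; the second is exactly the
# kind of noisy instance that feature labeling is meant to remove.
```

The second matched sentence shows why the resulting training set is noisy: entity co-occurrence does not guarantee the relation is expressed.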
On a fast bilateral filtering formulation using functional rearrangements
We introduce an exact reformulation of a broad class of neighborhood filters,
including the bilateral filter, in terms of two functional rearrangements: the
decreasing rearrangement and the relative rearrangement.
Independently of the image spatial dimension (one-dimensional signal, image,
volume of images, etc.), we reformulate these filters as integral operators
defined in a one-dimensional space corresponding to the level sets measures.
We prove the equivalence between the usual pixel-based version and the
rearranged version of the filter. When restricted to the discrete setting, our
reformulation of bilateral filters extends previous results for the so-called
fast bilateral filtering. In addition, we prove that the solution of the
discrete setting, interpreted as a piecewise-constant interpolant, converges to
the solution of the continuous setting.
Finally, we numerically illustrate computational aspects concerning quality
approximation and execution time provided by the rearranged formulation.
Comment: 29 pages, Journal of Mathematical Imaging and Vision, 2015. arXiv admin note: substantial text overlap with arXiv:1406.712
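The rearranged formulation itself is beyond a short snippet, but the pixel-based neighborhood filter it reformulates can be sketched directly. Since the abstract notes the construction is independent of spatial dimension, a 1-D signal suffices; the parameter values below are illustrative.

```python
import math

def bilateral_filter_1d(signal, sigma_s=1.0, sigma_r=1.0, radius=2):
    """Direct pixel-based bilateral filter for a 1-D signal.

    Each output sample is a weighted average of its neighbors, with weights
    decaying in both spatial distance (sigma_s) and intensity difference
    (sigma_r), so sharp edges are preserved while flat regions smooth."""
    out = []
    n = len(signal)
    for i in range(n):
        num, den = 0.0, 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
                 * math.exp(-((signal[i] - signal[j]) ** 2) / (2 * sigma_r ** 2)))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

step = [0.0, 0.0, 0.0, 10.0, 10.0, 10.0]
smoothed = bilateral_filter_1d(step)
# The 0 -> 10 edge survives almost untouched: range weights across the
# edge are on the order of exp(-50), so the two plateaus barely mix.
```

This direct form costs a full neighborhood sum per pixel; the point of the rearranged, level-set formulation in the paper is to avoid exactly that per-pixel cost.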
Identifying Mislabeled Training Data
This paper presents a new approach to identifying and eliminating mislabeled
training instances for supervised learning. The goal of this approach is to
improve classification accuracies produced by learning algorithms by improving
the quality of the training data. Our approach uses a set of learning
algorithms to create classifiers that serve as noise filters for the training
data. We evaluate single-algorithm, majority-vote, and consensus filters on
five datasets that are prone to labeling errors. Our experiments show that
filtering significantly improves classification accuracy for noise levels up to
30 percent. An analytical and empirical evaluation of the precision of our
approach shows that consensus filters are conservative at throwing away good
data at the expense of retaining bad data and that majority filters are better
at detecting bad data at the expense of throwing away good data. This suggests
that for situations in which there is a paucity of data, consensus filters are
preferable, whereas majority vote filters are preferable for situations with an
abundance of data.
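The filtering scheme described above can be sketched on a toy problem: hold each instance out via cross-validation, and flag it when classifiers trained without it misclassify it. The three deliberately simple learners and the 1-D data below are illustrative stand-ins for the standard algorithms the paper actually uses.

```python
# Sketch of majority-vote and consensus noise filtering on a toy 1-D task.

def threshold_fit_predict(train_x, train_y, x):
    """Learn the error-minimizing threshold t (predict 1 iff x >= t)."""
    xs = sorted(set(train_x))
    candidates = ([xs[0] - 1.0]
                  + [(a + b) / 2 for a, b in zip(xs, xs[1:])]
                  + [xs[-1] + 1.0])
    best_t = min(candidates,
                 key=lambda t: sum((xv >= t) != yv
                                   for xv, yv in zip(train_x, train_y)))
    return int(x >= best_t)

def nn_predict(train_x, train_y, x):
    """1-nearest-neighbor prediction."""
    i = min(range(len(train_x)), key=lambda k: abs(train_x[k] - x))
    return train_y[i]

def mean_predict(train_x, train_y, x):
    """Predict the class whose training mean is closer."""
    means = {c: sum(xv for xv, yv in zip(train_x, train_y) if yv == c)
                / sum(1 for yv in train_y if yv == c)
             for c in (0, 1)}
    return min((0, 1), key=lambda c: abs(x - means[c]))

def filter_noise(x, y, n_folds=5):
    """Flag instance i when learners trained on the other folds misclassify
    it. Returns (majority_flags, consensus_flags) as index lists."""
    learners = [threshold_fit_predict, nn_predict, mean_predict]
    majority, consensus = [], []
    for i in range(len(x)):
        fold = i % n_folds
        tr = [k for k in range(len(x)) if k % n_folds != fold]
        tx, ty = [x[k] for k in tr], [y[k] for k in tr]
        errors = sum(learn(tx, ty, x[i]) != y[i] for learn in learners)
        if errors > len(learners) / 2:
            majority.append(i)
        if errors == len(learners):
            consensus.append(i)
    return majority, consensus

# Ten points, true label = sign of x, with one label flipped (index 2).
x = [-3.0, -2.6, -2.0, -1.4, -1.0, 1.0, 1.4, 2.0, 2.6, 3.0]
y = [0, 0, 1, 0, 0, 1, 1, 1, 1, 1]  # index 2 is mislabeled
majority, consensus = filter_noise(x, y)
```

On this easy example both filters flag only the flipped instance; the paper's point is that on harder data the consensus filter (all learners must agree) discards less good data, while the majority filter catches more bad data.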
Collaborative Feature Learning from Social Media
Image feature representation plays an essential role in image recognition and
related tasks. The current state-of-the-art feature learning paradigm is
supervised learning from labeled data. However, this paradigm requires
large-scale category labels, which limits its applicability to domains where
labels are hard to obtain. In this paper, we propose a new data-driven feature
learning paradigm which does not rely on category labels. Instead, we learn
from user behavior data collected on social media. Concretely, we use the image
relationship discovered in the latent space from the user behavior data to
guide the image feature learning. We collect a large-scale image and user
behavior dataset from Behance.net. The dataset consists of 1.9 million images
and over 300 million view records from 1.9 million users. We validate our
feature learning paradigm on this dataset and find that the learned feature
significantly outperforms the state-of-the-art image features in learning
better image similarities. We also show that the learned feature performs
competitively on various recognition benchmarks.
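The latent-space machinery trained on the Behance.net data is far beyond a snippet, but the underlying behavioral signal, that images viewed by the same users tend to be related, can be illustrated with a toy co-view similarity. The view log and image names below are made up.

```python
import math

def coview_similarity(views, img_a, img_b):
    """Cosine similarity between two images' viewer sets, computed from a
    list of (user, image) view records. High co-view similarity is the
    label-free behavioral signal used to guide feature learning."""
    viewers = {}
    for user, image in views:
        viewers.setdefault(image, set()).add(user)
    a, b = viewers.get(img_a, set()), viewers.get(img_b, set())
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

# Hypothetical view log: users u1..u4 browsing two styles of images.
views = [("u1", "sketch_1"), ("u1", "sketch_2"),
         ("u2", "sketch_1"), ("u2", "sketch_2"),
         ("u3", "photo_1"),  ("u3", "photo_2"),
         ("u4", "sketch_1"), ("u4", "photo_1")]
print(coview_similarity(views, "sketch_1", "sketch_2"))  # high: shared viewers
print(coview_similarity(views, "sketch_1", "photo_2"))   # low: disjoint viewers
```

Pairs with high co-view similarity can then serve as positive pairs for training an image feature, replacing category labels with freely available behavior data.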