Mass Displacement Networks
Despite the large improvements in performance attained by using deep learning
in computer vision, one can often further improve results with some additional
post-processing that exploits the geometric nature of the underlying task. This
commonly involves displacing the posterior distribution of a CNN in a way that
makes it more appropriate for the task at hand, e.g. better aligned with local
image features, or more compact. In this work we integrate this geometric
post-processing within a deep architecture, introducing a differentiable and
probabilistically sound counterpart to the common geometric voting technique
used for evidence accumulation in vision. We refer to the resulting neural
models as Mass Displacement Networks (MDNs), and apply them to human pose
estimation in two distinct setups: (a) landmark localization, where we collapse
a distribution to a point, allowing for precise localization of body keypoints,
and (b) communication across body parts, where we transfer evidence from one
part to the other, allowing for a globally consistent pose estimate. We
evaluate on large-scale pose estimation benchmarks, such as the MPII Human Pose and
COCO datasets, and report systematic improvements when compared to strong
baselines.
Comment: 12 pages, 4 figures
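To make the landmark-localization setup concrete, the following sketch shows one generic, differentiable way of collapsing a CNN heatmap to a single keypoint: a soft-argmax over the normalized posterior. It only illustrates the idea of displacing probability mass toward a point under assumed NumPy inputs; the function name and temperature parameter are hypothetical, and this is not the MDN operator proposed in the paper.

import numpy as np

def soft_argmax_2d(heatmap, temperature=1.0):
    # Turn the heatmap into a spatial probability distribution and return
    # its expected (x, y) coordinates -- a differentiable stand-in for argmax.
    h, w = heatmap.shape
    logits = heatmap.reshape(-1) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    probs = probs.reshape(h, w)
    ys, xs = np.mgrid[0:h, 0:w]
    return float((probs * xs).sum()), float((probs * ys).sum())

# Toy heatmap with a strong response at (x=12, y=7); the soft-argmax
# collapses it to a point close to that location.
hm = np.random.rand(32, 32) * 0.1
hm[7, 12] = 5.0
print(soft_argmax_2d(hm, temperature=0.5))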
A Bayesian Approach to Discovering Truth from Conflicting Sources for Data Integration
In practical data integration systems, it is common for the data sources
being integrated to provide conflicting information about the same entity.
Consequently, a major challenge for data integration is to derive the most
complete and accurate integrated records from diverse and sometimes conflicting
sources. We term this challenge the truth finding problem. We observe that some
sources are generally more reliable than others, and therefore a good model of
source quality is the key to solving the truth finding problem. In this work,
we propose a probabilistic graphical model that can automatically infer true
records and source quality without any supervision. In contrast to previous
methods, our principled approach leverages a generative process of two types of
errors (false positive and false negative) by modeling two different aspects of
source quality. In so doing, ours is also the first approach designed to merge
multi-valued attribute types. Our method is scalable, due to an efficient
sampling-based inference algorithm that needs very few iterations in practice
and enjoys linear time complexity, with an even faster incremental variant.
Experiments on two real world datasets show that our new method outperforms
existing state-of-the-art approaches to the truth finding problem.
Comment: VLDB 2012
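As a rough, self-contained illustration of the truth finding setting, the sketch below jointly estimates claim beliefs and per-source reliabilities with simple alternating updates. It is a generic weighted-voting heuristic over assumed single-valued attributes, not the paper's latent-truth graphical model or its sampling-based inference; all names and the prior reliability of 0.8 are illustrative.

from collections import defaultdict

def truth_finding(claims, n_iters=20):
    # claims: list of (source, entity, value) triples, possibly conflicting.
    reliability = defaultdict(lambda: 0.8)      # per-source reliability estimate
    by_source = defaultdict(list)
    for s, e, v in claims:
        by_source[s].append((e, v))

    for _ in range(n_iters):
        # Score each (entity, value) by the reliability of its supporters.
        support, total = defaultdict(float), defaultdict(float)
        for s, e, v in claims:
            support[(e, v)] += reliability[s]
        for (e, v), sc in support.items():
            total[e] += sc
        belief = {(e, v): sc / total[e] for (e, v), sc in support.items()}
        # A source's reliability is the average belief in the claims it made.
        for s, evs in by_source.items():
            reliability[s] = sum(belief[(e, v)] for e, v in evs) / len(evs)

    # Keep the highest-belief value for each entity.
    best = {}
    for (e, v), sc in belief.items():
        if sc > best.get(e, (None, -1.0))[1]:
            best[e] = (v, sc)
    return {e: v for e, (v, _) in best.items()}, dict(reliability)

claims = [('s1', 'city', 'Paris'), ('s2', 'city', 'Paris'), ('s3', 'city', 'Lyon')]
truths, rel = truth_finding(claims)
print(truths)   # the majority-backed value wins as s3 loses reliability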
Abduction-Based Explanations for Machine Learning Models
The growing range of applications of Machine Learning (ML) in a multitude of
settings motivates the need to compute small explanations for the predictions
made. Small explanations are generally accepted as easier for human decision
makers to understand. Most earlier work on computing explanations is based on
heuristic approaches, providing no guarantees of quality, in terms of how close
such solutions are to cardinality- or subset-minimal explanations. This paper
develops a constraint-agnostic solution for computing explanations for any ML
model. The proposed solution exploits abductive reasoning, and imposes the
requirement that the ML model can be represented as sets of constraints using
some target constraint reasoning system for which the decision problem can be
answered with some oracle. The experimental results, obtained on well-known
datasets, validate the scalability of the proposed approach as well as the
quality of the computed solutions.
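A minimal sketch of the abductive idea, assuming a placeholder entailment oracle: start from all feature literals fixed by the instance, drop them one at a time, and keep a feature out only if the oracle still reports that the prediction is entailed; what remains is a subset-minimal explanation. The oracle and function names below are hypothetical stand-ins for a constraint reasoning system encoding the ML model, and this is not the paper's exact procedure.

def subset_minimal_explanation(features, entails):
    # features: feature literals fixed by the instance being explained.
    # entails(subset): placeholder oracle, True iff fixing exactly `subset`
    # already forces the model's prediction (e.g. via a SAT/SMT/MILP solver).
    explanation = list(features)
    for f in list(features):
        trial = [g for g in explanation if g != f]
        if entails(trial):          # prediction still entailed without f,
            explanation = trial     # so f is not needed in the explanation
    return explanation

# Toy oracle: the prediction is forced whenever 'a' and 'c' are both fixed.
oracle = lambda subset: {'a', 'c'} <= set(subset)
print(subset_minimal_explanation(['a', 'b', 'c', 'd'], oracle))   # ['a', 'c']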
Optimization in Knowledge-Intensive Crowdsourcing
We present SmartCrowd, a framework for optimizing collaborative
knowledge-intensive crowdsourcing. SmartCrowd distinguishes itself by
accounting for human factors in the process of assigning tasks to workers.
Human factors designate workers' expertise in different skills, their expected
minimum wage, and their availability. In SmartCrowd, we formulate task
assignment as an optimization problem, and rely on pre-indexing workers and
maintaining the indexes adaptively, so that the task assignment
process is optimized both in quality and in computation time. We
present rigorous theoretical analyses of the optimization problem and propose
optimal and approximation algorithms. We finally perform extensive performance
and quality experiments using real and synthetic data to demonstrate that
adaptive indexing in SmartCrowd is necessary to achieve efficient high quality
task assignment.
Comment: 12 pages
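The sketch below illustrates, loosely, how the human factors above (skill, minimum wage, availability) can enter a single task assignment decision. It is a toy greedy heuristic over hypothetical worker records, not SmartCrowd's adaptive indexes or its optimal and approximation algorithms.

def assign_workers(task, workers, budget):
    # Rank available workers with the required skill by skill per unit wage,
    # then greedily hire until the task's skill requirement or budget is hit.
    candidates = [w for w in workers
                  if w['available'] and w['skills'].get(task['skill'], 0) > 0]
    candidates.sort(key=lambda w: w['skills'][task['skill']] / w['min_wage'],
                    reverse=True)
    chosen, skill_sum, cost = [], 0.0, 0.0
    for w in candidates:
        if cost + w['min_wage'] > budget:
            continue
        chosen.append(w['name'])
        skill_sum += w['skills'][task['skill']]
        cost += w['min_wage']
        if skill_sum >= task['skill_needed']:
            break
    return chosen, cost

workers = [
    {'name': 'w1', 'skills': {'nlp': 0.9}, 'min_wage': 5.0, 'available': True},
    {'name': 'w2', 'skills': {'nlp': 0.4}, 'min_wage': 1.0, 'available': True},
    {'name': 'w3', 'skills': {'nlp': 0.7}, 'min_wage': 2.0, 'available': False},
]
print(assign_workers({'skill': 'nlp', 'skill_needed': 1.0}, workers, budget=6.0))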
Physics Inspired Optimization on Semantic Transfer Features: An Alternative Method for Room Layout Estimation
In this paper, we propose an alternative method to estimate room layouts of
cluttered indoor scenes. This method enjoys the benefits of two novel
techniques. The first one is semantic transfer (ST), which is: (1) a
formulation to integrate the relationship between scene clutter and room layout
into convolutional neural networks; (2) an architecture that can be end-to-end
trained; (3) a practical strategy to initialize weights for very deep networks
under unbalanced training data distribution. ST allows us to extract highly
robust features under various circumstances, and in order to address the
computational redundancy hidden in these features, we develop a principled and
efficient inference scheme named physics inspired optimization (PIO). PIO's
basic idea is to formulate some phenomena observed in ST features into
mechanics concepts. Evaluations on public datasets LSUN and Hedau show that the
proposed method is more accurate than state-of-the-art methods.
Comment: To appear in CVPR 2017. Project Page: https://sites.google.com/view/st-pio
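As a loose illustration of casting feature responses as mechanics, the sketch below treats a candidate wall position as a particle attracted toward high responses in a 1-D feature profile (e.g. a semantic-transfer feature marginalized along one image axis). The force model, decay constant and step size are invented for illustration, and this is not the paper's PIO formulation.

import numpy as np

def wall_update(wall_x, profile, step=0.01):
    # Each column exerts a force on the wall proportional to its response,
    # decaying with distance; the wall moves a small step along the net force.
    xs = np.arange(len(profile), dtype=float)
    d = xs - wall_x
    force = np.sum(profile * d * np.exp(-np.abs(d) / 10.0))
    return wall_x + step * force

# Toy response profile peaked at x = 40; a wall starting at x = 25
# converges toward the peak.
profile = np.exp(-0.5 * ((np.arange(80) - 40.0) / 3.0) ** 2)
x = 25.0
for _ in range(200):
    x = wall_update(x, profile)
print(round(x, 1))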
Time-Sensitive Bayesian Information Aggregation for Crowdsourcing Systems
Crowdsourcing systems commonly face the problem of aggregating multiple
judgments provided by potentially unreliable workers. In addition, several
aspects of the design of efficient crowdsourcing processes, such as defining
workers' bonuses, fair prices and time limits of the tasks, involve knowledge
of the likely duration of the task at hand. Bringing this together, in this
work we introduce a new time-sensitive Bayesian aggregation method that
simultaneously estimates a task's duration and obtains reliable aggregations of
crowdsourced judgments. Our method, called BCCTime, builds on the key insight
that the time taken by a worker to perform a task is an important indicator of
the likely quality of the produced judgment. To capture this, BCCTime uses
latent variables to represent the uncertainty about the workers' completion
time, the tasks' duration and the workers' accuracy. To relate the quality of a
judgment to the time a worker spends on a task, our model assumes that each
task is completed within a latent time window within which all workers with a
propensity to genuinely attempt the labeling task (i.e., no spammers) are
expected to submit their judgments. In contrast, workers with a lower
propensity to valid labeling, such as spammers, bots or lazy labelers, are
assumed to perform tasks considerably faster or slower than the time required
by normal workers. Specifically, we use efficient message-passing Bayesian
inference to learn approximate posterior probabilities of (i) the confusion
matrix of each worker, (ii) the propensity to valid labeling of each worker,
(iii) the unbiased duration of each task and (iv) the true label of each task.
Using two real-world public datasets for entity linking tasks, we show that
BCCTime produces up to 11% more accurate classifications and up to 100% more
informative estimates of a task's duration compared to state-of-the-art
methods.
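The following sketch captures, heuristically, the key insight that completion time signals judgment quality: judgments whose times fall far outside a robust window around the median are down-weighted before voting. The weighting scheme and names are invented for illustration; this is not BCCTime's Bayesian model or its message-passing inference.

import statistics

def time_weighted_vote(judgments, spread=2.0):
    # judgments: list of (worker, label, seconds) triples for one task.
    times = [t for _, _, t in judgments]
    center = statistics.median(times)
    scale = statistics.median([abs(t - center) for t in times]) or 1.0
    votes = {}
    for _, label, t in judgments:
        # Weight decays as the completion time leaves the plausible window.
        weight = 1.0 / (1.0 + (abs(t - center) / (spread * scale)) ** 2)
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)

judgments = [('w1', 'A', 42), ('w2', 'A', 55), ('w3', 'B', 3),    # w3: likely spammer
             ('w4', 'A', 47), ('w5', 'B', 400)]                   # w5: implausibly slow
print(time_weighted_vote(judgments))   # 'A'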