Decision making under uncertainty
Almost all important decision problems are inevitably subject to some level of uncertainty, whether about data measurements, model parameters, or predictions of future evolution. The significance of handling uncertainty is further amplified by the large volume of uncertain data automatically generated by modern data gathering and integration systems. Problems of decision making under uncertainty have been the subject of extensive research in computer science, economics, and social science. In this dissertation, I study three major problems in this context, ranking, utility maximization, and matching, all involving uncertain datasets.
First, we consider the problem of ranking and top-k query processing over probabilistic datasets. By illustrating the diverse and conflicting behaviors of the prior proposals, we contend that no single, specific ranking function suffices for probabilistic datasets. Instead, we propose the notion of parameterized ranking functions, which generalize, or can approximate, many of the previously proposed ranking functions. We present novel exact and approximate algorithms for efficiently ranking large datasets according to these functions, even when the datasets exhibit complex correlations or the probability distributions are continuous.
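The parameterized ranking function (PRF) idea can be sketched as a weighting function w over rank positions applied to a tuple's rank probability distribution; choosing w appropriately recovers familiar ranking semantics. A minimal illustration (the distribution values and weighting functions below are hypothetical, not taken from the dissertation):

```python
def prf(rank_dist, w):
    """Parameterized ranking function: score(t) = sum_i w(i) * Pr(t at rank i)."""
    return sum(w(i + 1) * p for i, p in enumerate(rank_dist))

# special cases recovered by choosing the weighting function w:
top_k = lambda k: (lambda i: 1.0 if i <= k else 0.0)  # Pr(tuple is in the top k)
neg_expected_rank = lambda i: -float(i)               # negated expected rank

dist = [0.5, 0.3, 0.2]  # hypothetical Pr(tuple at ranks 1..3)
assert abs(prf(dist, top_k(2)) - 0.8) < 1e-9
```

Ranking all tuples by `prf` with `top_k(k)` orders them by their probability of appearing in the top k, one of the prior proposals the PRF family subsumes.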
The second problem concerns the stochastic versions of a broad class of combinatorial optimization problems. We observe that the expected value is inadequate for capturing different types of risk-averse or risk-prone behaviors; instead, we consider a more general objective, which is to maximize the expected utility of the solution for some given utility function. We present a polynomial-time approximation algorithm with additive error ε for any ε > 0, under certain conditions. Our result generalizes and improves several prior results on stochastic shortest path, stochastic spanning tree, and stochastic knapsack.
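The expected-utility objective can be contrasted with plain expectation in a small Monte Carlo sketch: a threshold utility turns expected utility into the probability of meeting a deadline, a canonical risk-averse criterion. The utility function and edge-length distributions below are illustrative assumptions, not from the dissertation:

```python
import random

def expected_utility(sample_value, utility, n_samples=20000, seed=0):
    """Monte Carlo estimate of E[u(X)], where sample_value(rng) draws one
    realization of the random value of a fixed solution."""
    rng = random.Random(seed)
    return sum(utility(sample_value(rng)) for _ in range(n_samples)) / n_samples

# risk-averse threshold utility: u(x) = 1 if x <= deadline, else 0, so the
# expected utility is the probability the solution meets the deadline
deadline = 10.0
meets_deadline = lambda x: 1.0 if x <= deadline else 0.0

# hypothetical stochastic path whose total length is the sum of two
# independent Uniform(2, 6) edge lengths (an illustrative distribution)
path_length = lambda rng: rng.uniform(2, 6) + rng.uniform(2, 6)

p_on_time = expected_utility(path_length, meets_deadline)
```

Two paths with the same expected length can have very different `p_on_time`, which is exactly the behavior the expected-value objective fails to capture.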
The third is the stochastic matching problem, which finds interesting applications in online dating, kidney exchange, and online ad assignment. In this problem, the existence of each edge is uncertain and can only be determined by probing the edge. The goal is to design a probing strategy that maximizes the expected weight of the resulting matching. We give linear-programming-based constant-factor approximation algorithms for weighted stochastic matching, which answer an open question raised in prior work.
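A toy simulation helps fix the probing model: probed edges that turn out to exist are irrevocably added to the matching (the query-commit setting). The greedy weight-times-probability strategy below is a simple heuristic stand-in, not the dissertation's LP-based algorithm, and the edge list is hypothetical:

```python
import random

def greedy_probe(edges, rng):
    """One run of a greedy probing strategy: probe edges in decreasing
    weight*probability order; an edge found to exist is irrevocably added
    to the matching. edges: list of (u, v, weight, prob)."""
    matched, total = set(), 0.0
    for u, v, w, p in sorted(edges, key=lambda e: e[2] * e[3], reverse=True):
        if u in matched or v in matched:
            continue                 # probing this edge would break the matching
        if rng.random() < p:         # probing reveals whether the edge exists
            matched |= {u, v}
            total += w
    return total

edges = [("a", "b", 3.0, 0.5), ("b", "c", 2.0, 0.9), ("a", "c", 1.0, 1.0)]
rng = random.Random(1)
avg_weight = sum(greedy_probe(edges, rng) for _ in range(10000)) / 10000
```

Averaging many runs estimates the strategy's expected matching weight; the approximation algorithms in the paper bound this expectation against the optimal strategy.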
Scalable Probabilistic Similarity Ranking in Uncertain Databases (Technical Report)
This paper introduces a scalable approach for probabilistic top-k similarity
ranking on uncertain vector data. Each uncertain object is represented by a set
of vector instances that are assumed to be mutually exclusive. The objective is
to rank the uncertain data according to their distance to a reference object.
We propose a framework that incrementally computes, for each object instance and
ranking position, the probability of the object falling at that ranking
position. The resulting rank probability distribution can serve as input for
several state-of-the-art probabilistic ranking models. Existing approaches
compute this probability distribution by applying a dynamic programming
approach of quadratic complexity. In this paper we show, both theoretically and
experimentally, that our framework reduces this to linear-time complexity
while having the same memory requirements, facilitated by incremental accessing
of the uncertain vector instances in increasing order of their distance to the
reference object. Furthermore, we show how the output of our method can be used
to apply probabilistic top-k ranking for the objects, according to different
state-of-the-art definitions. We conduct an experimental evaluation on
synthetic and real data, which demonstrates the efficiency of our approach.
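The incremental idea can be sketched in a simplified setting with one instance per object: visiting objects in increasing distance to the reference, the distribution over the number of closer existing objects is updated by one convolution step per object rather than recomputed from scratch. This is a sketch of the principle, not the paper's full algorithm for multi-instance objects:

```python
def rank_distributions(probs):
    """probs: existence probabilities of objects, pre-sorted by increasing
    distance to the reference object (simplified to one instance per object).
    Yields, for each object, dist where dist[j] = Pr(exactly j closer objects
    exist), i.e. Pr(the object falls at rank j+1, given that it exists)."""
    dist = [1.0]
    for p in probs:
        yield list(dist)
        nxt = [0.0] * (len(dist) + 1)
        for j, q in enumerate(dist):
            nxt[j] += q * (1.0 - p)      # the closer object does not exist
            nxt[j + 1] += q * p          # the closer object exists
        dist = nxt
```

Each update costs time linear in the support of `dist`, and the distributions for successive objects share all earlier work, which is the source of the claimed speedup over recomputing a quadratic dynamic program per object.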
On Discrimination Discovery and Removal in Ranked Data using Causal Graph
Predictive models learned from historical data are widely used to help
companies and organizations make decisions. However, they may unfairly
treat certain groups, raising concerns about fairness and
discrimination. In this paper, we study the fairness-aware ranking problem
which aims to discover discrimination in ranked datasets and reconstruct the
fair ranking. Existing methods in fairness-aware ranking are mainly based on
statistical parity that cannot measure the true discriminatory effect since
discrimination is causal. On the other hand, existing methods in causal-based
anti-discrimination learning focus on classification problems and cannot be
directly applied to handle the ranked data. To address these limitations, we
propose to map the rank position to a continuous score variable that represents
the qualification of the candidates. Then, we build a causal graph that
consists of both the discrete profile attributes and the continuous score. The
path-specific effect technique is extended to the mixed-variable causal graph
to identify both direct and indirect discrimination. The relationship between
the path-specific effects for the ranked data and those for the binary decision
is theoretically analyzed. Finally, algorithms for discovering and removing
discrimination from a ranked dataset are developed. Experiments using the real
dataset show the effectiveness of our approaches.
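As a toy illustration of the rank-to-score step, here is one hypothetical monotone mapping from rank position to a continuous score, together with the statistical-parity-style gap computed on that score scale (the baseline the paper argues is insufficient, since it ignores causal structure). The mapping, names, and group labels are all made up:

```python
def rank_to_score(rank, n):
    """Hypothetical monotone mapping from rank position (1 = best) to a
    continuous qualification score in (0, 1]; the paper's mapping may differ."""
    return (n - rank + 1) / n

ranks = {"alice": 1, "bob": 2, "carol": 3, "dave": 4}
group = {"alice": "A", "bob": "B", "carol": "A", "dave": "B"}
n = len(ranks)
scores = {c: rank_to_score(r, n) for c, r in ranks.items()}

def mean_score(g):
    vals = [scores[c] for c in scores if group[c] == g]
    return sum(vals) / len(vals)

# statistical-parity-style gap on the score scale; the paper instead measures
# path-specific causal effects on a causal graph over attributes and score
gap = mean_score("A") - mean_score("B")
```

A nonzero `gap` here may or may not reflect discrimination; distinguishing direct and indirect causal effects is exactly what the path-specific analysis adds.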
Improving Negative Sampling for Word Representation using Self-embedded Features
Although the word-popularity based negative sampler has shown superb
performance in the skip-gram model, the theoretical motivation behind
oversampling popular (non-observed) words as negative samples is still not well
understood. In this paper, we start from an investigation of the gradient
vanishing issue in the skip-gram model without a proper negative sampler. By
performing an insightful analysis from the stochastic gradient descent (SGD)
learning perspective, we demonstrate that, both theoretically and intuitively,
negative samples with larger inner product scores are more informative than
those with lower scores for the SGD learner in terms of both convergence rate
and accuracy. Understanding this, we propose an alternative sampling algorithm
that dynamically selects informative negative samples during each SGD update.
More importantly, the proposed sampler accounts for multi-dimensional
self-embedded features during the sampling process, which essentially makes it
more effective than the original popularity-based (one-dimensional) sampler.
Empirical experiments further verify our observations, and show that our
fine-grained samplers gain significant improvement over the existing ones
without increasing computational complexity.
Comment: Accepted in WSDM 201
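The core idea of preferring high-inner-product negatives can be sketched as a pool-then-rank step inside each SGD update; the scheme and the toy embeddings below are illustrative assumptions, not the paper's exact sampler:

```python
import random

def informative_negatives(word_vec, out_embeds, vocab, k, pool_size, rng):
    """Draw a random candidate pool, then keep the k candidates with the
    largest inner product against the current word vector -- the most
    informative negatives for the SGD update, per the analysis above."""
    pool = rng.sample(vocab, pool_size)
    score = lambda w: sum(a * b for a, b in zip(word_vec, out_embeds[w]))
    return sorted(pool, key=score, reverse=True)[:k]

# toy output embeddings; "c" has the largest inner product with [1, 0]
out_embeds = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [2.0, 0.0]}
negs = informative_negatives([1.0, 0.0], out_embeds, list(out_embeds),
                             k=1, pool_size=3, rng=random.Random(0))
```

Because the scores are recomputed from the current embeddings, the sampler adapts as training proceeds, unlike a fixed popularity-based distribution.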
Quantifying Aspect Bias in Ordinal Ratings using a Bayesian Approach
User opinions expressed in the form of ratings can influence an individual's
view of an item. However, the true quality of an item is often obfuscated by
user biases, and it is not obvious from the observed ratings how much importance
different users place on different aspects of an item. We propose a
probabilistic modeling of the observed aspect ratings to infer (i) each user's
aspect bias and (ii) latent intrinsic quality of an item. We model multi-aspect
ratings as ordered discrete data and encode the dependency between different
aspects by using a latent Gaussian structure. We handle the
Gaussian-Categorical non-conjugacy using a stick-breaking formulation coupled
with P\'{o}lya-Gamma auxiliary variable augmentation for a simple, fully
Bayesian inference. On two real world datasets, we demonstrate the predictive
ability of our model and its effectiveness in learning explainable user biases
to provide insights towards a more reliable product quality estimation.
Comment: Accepted for publication in IJCAI 201
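The stick-breaking formulation mentioned above maps K-1 real-valued latents (e.g. draws from the latent Gaussian) to K ordinal-category probabilities; it is this logistic construction whose non-conjugacy the Pólya-Gamma augmentation resolves. A minimal sketch:

```python
import math

def stick_breaking_probs(psi):
    """Logistic stick-breaking map from K-1 real-valued latents to K
    category probabilities: category k takes a sigmoid(psi_k) fraction
    of the probability mass not yet assigned to categories 1..k-1."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    probs, remaining = [], 1.0
    for x in psi:
        p = sigmoid(x) * remaining   # break off a fraction of what is left
        probs.append(p)
        remaining -= p
    probs.append(remaining)          # the last category gets the remainder
    return probs
```

Each sigmoid term is a logistic likelihood in one latent, which is exactly the form the Pólya-Gamma auxiliary variables render conditionally Gaussian-conjugate.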
Zero-Shot Learning by Convex Combination of Semantic Embeddings
Several recent publications have proposed methods for mapping images into
continuous semantic embedding spaces. In some cases the embedding space is
trained jointly with the image transformation. In other cases the semantic
embedding space is established by an independent natural language processing
task, and then the image transformation into that space is learned in a second
stage. Proponents of these image embedding systems have stressed their
advantages over the traditional n-way classification framing of image
understanding, particularly in terms of the promise for zero-shot learning --
the ability to correctly annotate images of previously unseen object
categories. In this paper, we propose a simple method for constructing an image
embedding system from any existing n-way image classifier and a semantic word
embedding model, which contains the n class labels in its vocabulary. Our
method maps images into the semantic embedding space via convex combination of
the class label embedding vectors, and requires no additional training. We show
that this simple and direct method confers many of the advantages associated
with more complex image embedding schemes, and indeed outperforms
state-of-the-art methods on the ImageNet zero-shot learning task.
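The method (often called ConSE) can be sketched in a few lines: average the word embeddings of the top-T predicted training classes, weighted by the classifier's renormalized probabilities, then label a test image with the nearest unseen-class embedding. The toy probabilities and embeddings below are made up:

```python
def conse_embedding(class_probs, label_embeds, top_t=10):
    """Embed an image as the convex combination of the word embeddings of
    its top-T predicted training classes, weighted by renormalized
    classifier probabilities. No additional training is required."""
    top = sorted(class_probs.items(), key=lambda kv: kv[1], reverse=True)[:top_t]
    z = sum(p for _, p in top)                       # renormalize over top-T
    dim = len(next(iter(label_embeds.values())))
    out = [0.0] * dim
    for label, p in top:
        for d, e in enumerate(label_embeds[label]):
            out[d] += (p / z) * e
    return out

def zero_shot_predict(img_embed, unseen_embeds):
    """Label the image with the unseen class whose word embedding is
    nearest by cosine similarity."""
    def cos(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return num / den
    return max(unseen_embeds, key=lambda c: cos(img_embed, unseen_embeds[c]))
```

Because the image never has to be mapped by a learned regressor, any off-the-shelf n-way classifier plus any word embedding model yields a zero-shot system.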