Robust Draws in Balanced Knockout Tournaments
Balanced knockout tournaments are ubiquitous in sports competitions and are
also used in decision-making and elections. The traditional computational
question, which asks to compute a draw (an optimal draw) that maximizes the
winning probability for a distinguished player, has received a lot of
attention.
Previous works consider the problem where the pairwise winning probabilities
are known precisely, while we study how robust the winning probability is with
respect to small errors in the pairwise winning probabilities. First, we
present several illuminating examples to establish: (a)~there exist
deterministic tournaments (where the pairwise winning probabilities are~0 or~1)
where one optimal draw is much more robust than the other; and (b)~in general,
there exist tournaments with slightly suboptimal draws that are more robust
than all the optimal draws. The above examples motivate the study of the
computational problem of robust draws that guarantee a specified winning
probability. Second, we present a polynomial-time algorithm for approximating
the robustness of a draw for sufficiently small errors in pairwise winning
probabilities, and show that the stated computational problem is NP-complete.
We also show that two natural cases of deterministic tournaments where the
optimal draw can be computed in polynomial time also admit polynomial-time
algorithms to compute robust optimal draws.
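For intuition, the winning probability of every player under a fixed draw can be computed bottom-up over the bracket in polynomial time. A minimal sketch in plain Python (the function name and structure are my own illustration, not the paper's algorithm):

```python
def win_probabilities(draw, P):
    """Probability that each player wins a balanced knockout tournament.

    draw: players in bracket order; len(draw) must be a power of two.
    P:    P[i][j] = probability that player i beats player j.
    """
    # q[p] = probability that p wins its current subtree of the bracket
    q = {p: 1.0 for p in draw}
    blocks = [[p] for p in draw]            # current-round subtrees, left to right
    while len(blocks) > 1:
        new_q, merged = {}, []
        for a, b in zip(blocks[::2], blocks[1::2]):
            # a player advances if it wins its own subtree and then beats
            # whichever player emerges from the sibling subtree
            for i in a:
                new_q[i] = q[i] * sum(q[j] * P[i][j] for j in b)
            for j in b:
                new_q[j] = q[j] * sum(q[i] * P[j][i] for i in a)
            merged.append(a + b)
        q, blocks = new_q, merged
    return q
```

Robustness can then be probed empirically by perturbing the entries of P and re-running the computation; the point of the examples above is that two draws with the same nominal winning probability can react very differently to such perturbations.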
What do majority-voting politics say about redistributive taxation of consumption and factor income? Not much.
Tax rates on labor income, capital income, and consumption, and the redistributive transfers those taxes finance, differ widely across developed countries. Can majority-voting methods, applied to a calibrated growth model, explain that variation? The answer I find is yes, and then some. In this paper, I examine a simple growth model, calibrated roughly to U.S. data, in which the political decision is over constant paths of taxes on factor income and consumption, used to finance a lump-sum transfer. I first look at outcomes under probabilistic voting, and find that equilibria are extremely sensitive to the specification of uncertainty. I then consider other ways to restrict the range of majority-rule outcomes, looking at the model's implications for the shape of the Pareto set and the uncovered set, and the existence or non-existence of a Condorcet winner. Solving the model on a discrete grid of policy choices, I find that no Condorcet winner exists and that the Pareto and uncovered sets, while small relative to the entire issue space, are large relative to the range of tax policies we see in data for a collection of 20 OECD countries. Taking that data as the issue space, I find that none of the 20 can be ruled out on efficiency grounds, and that 10 of the 20 are in the uncovered set. Those 10 encompass policies as diverse as those of the US, Norway, and Austria. One can construct a Condorcet cycle including all 10 countries' tax vectors.

The key features of the model here, as compared to other models of the endogenous determination of taxes and redistribution, are that the issue space is multidimensional and, at the same time, no one voter type is sufficiently numerous to be decisive. I conclude that the sharp predictions of papers in this literature may not survive an expansion of their issue spaces or the allowance for a slightly less homogeneous electorate.

Keywords: Taxation; Consumption (Economics); Income tax; Fiscal policy
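The pairwise-majority check behind these results is simple to state: an alternative is a Condorcet winner if it beats every other alternative in a head-to-head majority vote. A minimal sketch over a finite policy grid (my own illustration with hypothetical names, not the paper's code):

```python
def condorcet_winner(utils):
    """utils[v][a] = utility voter v assigns to alternative a.

    Returns the index of a Condorcet winner (an alternative that beats every
    other alternative in a pairwise majority vote), or None if none exists.
    """
    m = len(utils[0])

    def beats(a, b):
        # strict majority of voters prefer a to b
        pro = sum(1 for u in utils if u[a] > u[b])
        con = sum(1 for u in utils if u[b] > u[a])
        return pro > con

    for a in range(m):
        if all(beats(a, b) for b in range(m) if b != a):
            return a
    return None
```

With heterogeneous voter types and a multidimensional issue space, the pairwise-majority relation typically cycles, which is exactly why the check above can come back empty and why the paper falls back on the Pareto and uncovered sets.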
Enabling Privacy-preserving Auctions in Big Data
We study how to enable auctions in the big data context to solve many
upcoming data-based decision problems in the near future. We consider the
characteristics of big data including, but not limited to, velocity,
volume, variety, and veracity, and we believe any auction mechanism design in
the future should take the following factors into consideration: 1) generality
(variety); 2) efficiency and scalability (velocity and volume); 3) truthfulness
and verifiability (veracity). In this paper, we propose a privacy-preserving
construction for auction mechanism design in the big data setting, which
prevents adversaries from learning any information beyond what is implied by
the valid output of the auction. More specifically, we consider one of the
most general forms of auction (to deal with variety), greatly improve
efficiency and scalability by approximating the NP-hard problems and avoiding
designs based on garbled circuits (to deal with velocity and volume), and
prevent stakeholders from lying to each other for their own benefit (to deal
with veracity). We achieve these by introducing a
novel privacy-preserving winner determination algorithm and a novel payment
mechanism. Additionally, we further employ a blind signature scheme as a
building block to let bidders verify the authenticity of their payment reported
by the auctioneer. A comparison with peer work shows that we improve the
asymptotic overhead of prior schemes from exponential to linear growth, and
from linear to logarithmic growth, which greatly improves scalability.
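The winner-determination problem being approximated is, in its general combinatorial form, NP-hard. The abstract does not spell out the authors' approximation, but a standard greedy heuristic, shown here purely as a baseline sketch with hypothetical names, conveys why approximation makes the problem tractable before any cryptographic layer is added:

```python
from math import sqrt

def greedy_winners(bids):
    """Greedy approximation for combinatorial-auction winner determination.

    bids: list of (bidder_id, bundle, price), where bundle is a set of items.
    Ranking bids by price / sqrt(|bundle|) and accepting conflict-free bundles
    is the classic polynomial-time approximation to the NP-hard optimal
    allocation; it is generally not revenue-optimal.
    """
    ranked = sorted(bids, key=lambda b: b[2] / sqrt(len(b[1])), reverse=True)
    allocated, winners = set(), []
    for bidder, bundle, price in ranked:
        if allocated.isdisjoint(bundle):    # accept only conflict-free bundles
            allocated |= bundle
            winners.append(bidder)
    return winners
```

This is not the paper's privacy-preserving algorithm; it only illustrates the non-private allocation step that such a construction would need to protect.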
Incorporating Side Information in Probabilistic Matrix Factorization with Gaussian Processes
Probabilistic matrix factorization (PMF) is a powerful method for modeling
data associated with pairwise relationships, finding use in collaborative
filtering, computational biology, and document analysis, among other areas. In
many domains, there is additional information that can assist in prediction.
For example, when modeling movie ratings, we might know when the rating
occurred, where the user lives, or what actors appear in the movie. It is
difficult, however, to incorporate this side information into the PMF model. We
propose a framework for incorporating side information by coupling together
multiple PMF problems via Gaussian process priors. We replace scalar latent
features with functions that vary over the space of side information. The GP
priors on these functions require them to vary smoothly and share information.
We successfully use this new method to predict the scores of professional
basketball games, where side information about the venue and date of the game
is relevant for the outcome.

Comment: 18 pages, 4 figures, Submitted to UAI 201
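As background, plain PMF (without the side-information coupling this paper adds) reduces at the MAP estimate to regularized matrix factorization. A minimal NumPy sketch, with all names and hyperparameters my own rather than taken from the paper:

```python
import numpy as np

def pmf_map(R, mask, rank=1, lam=0.01, lr=0.02, iters=5000, seed=0):
    """MAP estimate for probabilistic matrix factorization:
    minimize ||mask * (R - U @ V.T)||^2 + lam * (||U||^2 + ||V||^2)
    by joint gradient descent on the latent factor matrices U and V.

    mask[i, j] = 1 if rating R[i, j] is observed, 0 otherwise.
    """
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((R.shape[0], rank))
    V = 0.1 * rng.standard_normal((R.shape[1], rank))
    for _ in range(iters):
        E = mask * (R - U @ V.T)            # residual on observed entries only
        U, V = U + lr * (E @ V - lam * U), V + lr * (E.T @ U - lam * V)
    return U, V
```

The paper's extension replaces each scalar latent feature with a function of the side information, tied together by a GP prior; the likelihood term keeps this structure while the prior couples the multiple factorization problems.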
Luck as Risk
The aim of this paper is to explore the hypothesis that luck is a risk-involving phenomenon. I start by explaining why this hypothesis is prima facie plausible in view of the parallels between luck and risk. I then distinguish three ways to spell it out: in probabilistic terms, in modal terms, and in terms of lack of control. Before evaluating the resulting accounts, I explain how the idea that luck involves risk is compatible with the fact that risk concerns unwanted events whereas luck can concern both wanted and unwanted events. I turn to evaluating the modal and probabilistic views and argue, firstly, that they fail to account for the connection between risk and bad luck and, secondly, that they also fail to account for the connection between risk and good luck. Finally, I defend the lack-of-control view. In particular, I argue that it can handle the objections to the probabilistic and modal accounts and that it can explain how degrees of luck and risk covary.