Multi-scale persistent homology
Data has shape, and that shape is important. This is the anthem of Topological Data Analysis (TDA), as often stated by Gunnar Carlsson. In this paper we take a common method of persistence, which grows balls of the same size around all points, and generalize it to balls of different sizes in order to better understand the outlier and coverage problems. We begin with a summary of classical persistence theory and stability. We then generalize the Rips and \v{C}ech complexes as well as the Rips lemma. We discuss three notions of stability in terms of the bottleneck distance. For the outlier problem, we show that it is possible to interpolate between persistence on a set with no noise and a set with noise. For the coverage problem, we present an algorithm which provides a cheap way of covering a compact domain.
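The generalization of balls of different sizes can be illustrated concretely. The following is a minimal sketch, not the paper's construction: it assumes the natural generalized-Rips rule that an edge appears once the two balls overlap, i.e. when d(x, y) ≤ r_x + r_y, with a per-point radius r. The function name `generalized_rips_edges` is hypothetical.

```python
# Hypothetical sketch of a multi-scale Rips 1-skeleton: each point gets its
# own radius, and an edge is included once the two balls overlap, i.e.
# d(x, y) <= r[x] + r[y]. With equal radii this reduces to the classical
# Rips complex at scale 2r.
from itertools import combinations
from math import dist

def generalized_rips_edges(points, radii):
    """Return the edges of a Rips-style complex with per-point ball radii."""
    edges = []
    for (i, p), (j, q) in combinations(enumerate(points), 2):
        if dist(p, q) <= radii[i] + radii[j]:
            edges.append((i, j))
    return edges

pts = [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0)]
# A suspected outlier (index 2) can be given a smaller radius, so it joins
# the complex later than the dense points -- one way to address the outlier
# problem the abstract mentions.
print(generalized_rips_edges(pts, [0.6, 0.6, 0.1]))  # [(0, 1)]
```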
Cake Cutting Algorithms for Piecewise Constant and Piecewise Uniform Valuations
Cake cutting is one of the most fundamental settings in fair division and
mechanism design without money. In this paper, we consider different levels of
three fundamental goals in cake cutting: fairness, Pareto optimality, and
strategyproofness. In particular, we present robust versions of envy-freeness
and proportionality that are not only stronger than their standard
counterparts but also have weaker information requirements. We then focus on
cake cutting with piecewise constant valuations and present three desirable
algorithms: CCEA (Controlled Cake Eating Algorithm), MEA (Market Equilibrium
Algorithm) and CSD (Constrained Serial Dictatorship). CCEA is polynomial-time,
robust envy-free, and non-wasteful. It relies on parametric network flows and
recent generalizations of the probabilistic serial algorithm. For the subdomain
of piecewise uniform valuations, we show that it is also group-strategyproof.
Then, we show that there exists an algorithm (MEA) that is polynomial-time,
envy-free, proportional, and Pareto optimal. MEA is based on computing a
market-based equilibrium via a convex program and relies on the results of
Reijnierse and Potters [24] and Devanur et al. [15]. Moreover, we show that MEA
and CCEA are equivalent to mechanism 1 of Chen et al. [12] for piecewise
uniform valuations. We then present an algorithm CSD and a way to implement it
via randomization that satisfies strategyproofness in expectation, robust
proportionality, and unanimity for piecewise constant valuations. For the case
of two agents, it is robust envy-free, robust proportional, strategyproof, and
polynomial-time. Many of our results extend to more general settings in cake
cutting that allow for variable claims and initial endowments. We also show a
few impossibility results to complement our algorithms.
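The fairness notions used throughout this abstract can be checked directly from their definitions once piecewise constant valuations are in hand. The sketch below is illustrative only (it is not CCEA, MEA, or CSD): it assumes the cake is split into equal-length cells, a valuation is a vector of constant densities over those cells, and an allocation gives each agent a fraction of each cell. All function names are hypothetical.

```python
# Piecewise constant valuations over a cake [0, 1] cut into equal cells.
# An allocation assigns each agent a fraction of every cell; we verify
# proportionality and envy-freeness straight from the definitions.
def value(densities, shares, cell_len):
    """Value an agent with the given densities assigns to the given shares."""
    return sum(d * s * cell_len for d, s in zip(densities, shares))

def is_proportional(valuations, allocation, cell_len):
    """Each agent gets at least 1/n of her value for the whole cake."""
    n = len(valuations)
    return all(
        value(v, a, cell_len) >= value(v, [1.0] * len(v), cell_len) / n
        for v, a in zip(valuations, allocation)
    )

def is_envy_free(valuations, allocation, cell_len):
    """No agent prefers another agent's piece to her own."""
    return all(
        value(v, allocation[i], cell_len) >= value(v, allocation[j], cell_len)
        for i, v in enumerate(valuations)
        for j in range(len(valuations))
    )

# Two agents, two cells of length 0.5; agent 0 only values the left half.
vals = [[2.0, 0.0], [1.0, 1.0]]
alloc = [[1.0, 0.0], [0.0, 1.0]]  # agent 0 takes cell 0, agent 1 takes cell 1
print(is_proportional(vals, alloc, 0.5))  # True
print(is_envy_free(vals, alloc, 0.5))     # True
```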
Bounding Stochastic Dependence, Complete Mixability of Matrices, and Multidimensional Bottleneck Assignment Problems
We call a matrix completely mixable if the entries in its columns can be
permuted so that all row sums are equal. If it is not completely mixable, we
want to determine the smallest maximal and largest minimal row sum attainable.
These values provide a discrete approximation of minimum variance problems
for discrete distributions, a problem motivated by the question of how to
estimate a quantile of an aggregate random variable with unknown dependence
structure given the marginals of the constituent random variables. We relate
this problem to the multidimensional bottleneck assignment problem and show
that there exists a polynomial-time constant-factor approximation algorithm
if the matrix has only a small fixed number of columns. In general, deciding complete mixability is
NP-complete. In particular, the swapping algorithm of Puccetti et
al. is not an exact method unless P = NP. For a
fixed number of columns it remains NP-complete, but there exists a
PTAS. The problem can be solved in pseudopolynomial time for a fixed number of
rows, and even in polynomial time if all columns furthermore contain entries
from the same multiset.
Strong Nash Equilibria in Games with the Lexicographical Improvement Property
We introduce a class of finite strategic games with the property that every
deviation of a coalition of players that is profitable to each of its members
strictly decreases the lexicographical order of a certain function defined on
the set of strategy profiles. We call this property the Lexicographical
Improvement Property (LIP) and show that it implies the existence of a
generalized strong ordinal potential function. We use this characterization to
derive existence, efficiency, and fairness properties of strong Nash equilibria (SNE).
We then study a class of games that generalizes congestion games with
bottleneck objectives that we call bottleneck congestion games. We show that
these games possess the LIP and thus the above-mentioned properties. For
bottleneck congestion games in networks, we identify cases in which the
potential function associated with the LIP leads to polynomial time algorithms
computing a strong Nash equilibrium. Finally, we investigate the LIP for
infinite games. We show that the LIP does not imply the existence of a
generalized strong ordinal potential; thus, the existence of SNE does not
follow. Assuming that the function associated with the LIP is continuous,
however, we prove existence of SNE. As a consequence, we prove that bottleneck
congestion games with infinite strategy spaces and continuous cost functions
possess a strong Nash equilibrium.
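The lexicographic potential at the heart of the LIP can be made concrete for bottleneck congestion games. The following is an illustrative sketch under stated assumptions, not the paper's construction: resource costs are taken as nondecreasing in the load, the potential of a profile is assumed to be the vector of resource costs sorted in non-increasing order, and a profitable deviation should strictly decrease it lexicographically. All names are hypothetical.

```python
# Sketch of a lexicographic potential for a bottleneck congestion game:
# each player's strategy is a set of resources, a player's cost is the
# maximum cost over the resources she uses, and the potential is the
# sorted (non-increasing) vector of all resource costs.
def resource_loads(profile, num_resources):
    loads = [0] * num_resources
    for strategy in profile:          # strategy = set of resource indices
        for r in strategy:
            loads[r] += 1
    return loads

def potential(profile, num_resources, cost):
    # cost(r, load): cost of resource r at the given load (nondecreasing).
    loads = resource_loads(profile, num_resources)
    return sorted((cost(r, loads[r]) for r in range(num_resources)),
                  reverse=True)

cost = lambda r, load: load           # identity cost, for illustration
before = [{0}, {0}]                   # both players congest resource 0
after = [{0}, {1}]                    # player 1 deviates to resource 1
print(potential(before, 2, cost))     # [2, 0]
print(potential(after, 2, cost))      # [1, 1]
# Python compares lists lexicographically, so the deviation -- which cuts
# player 1's bottleneck cost from 2 to 1 -- strictly decreases the potential.
print(potential(after, 2, cost) < potential(before, 2, cost))  # True
```

This lexicographic decrease is what makes improvement dynamics terminate in the finite case; the abstract's point is that the same property can fail to yield a potential when strategy spaces are infinite.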