Structure in Dichotomous Preferences
Many hard computational social choice problems are known to become tractable
when voters' preferences belong to a restricted domain, such as those of
single-peaked or single-crossing preferences. However, to date, all algorithmic
results of this type have been obtained for the setting where each voter's
preference list is a total order of candidates. The goal of this paper is to
extend this line of research to the setting where voters' preferences are
dichotomous, i.e., each voter approves a subset of candidates and disapproves
the remaining candidates. We propose several analogues of the notions of
single-peaked and single-crossing preferences for dichotomous profiles and
investigate the relationships among them. We then demonstrate that for some of
these notions the respective restricted domains admit efficient algorithms for
computationally hard approval-based multi-winner rules.
Comment: A preliminary version appeared in the proceedings of IJCAI 2015, the International Joint Conference on Artificial Intelligence.
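One analogue of single-peakedness considered in this line of work is the "candidate interval" (CI) restriction: some linear order of the candidates makes every voter's approval set contiguous. A minimal sketch of a CI check, assuming the candidate order is already given (finding such an order is a consecutive-ones problem; all names are illustrative):

```python
def is_candidate_interval(profile, order):
    """Check whether every approval set is a contiguous block ("interval")
    of the given candidate order -- one dichotomous analogue of
    single-peakedness."""
    pos = {c: i for i, c in enumerate(order)}
    for approved in profile:
        if not approved:
            continue  # an empty approval set is trivially an interval
        idx = sorted(pos[c] for c in approved)
        # contiguous iff (max - min + 1) equals the number of approved candidates
        if idx[-1] - idx[0] + 1 != len(idx):
            return False
    return True

# three voters approving subsets of candidates a..d
profile = [{"a", "b"}, {"b", "c", "d"}, {"c"}]
print(is_candidate_interval(profile, ["a", "b", "c", "d"]))   # True
print(is_candidate_interval([{"a", "c"}], ["a", "b", "c", "d"]))  # False
```

The check is linear in the profile size once the order is fixed, which is what makes such restricted domains algorithmically attractive.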
Solving Hard Stable Matching Problems Involving Groups of Similar Agents
Many important stable matching problems are known to be NP-hard, even when
strong restrictions are placed on the input. In this paper we seek to identify
structural properties of instances of stable matching problems which will allow
us to design efficient algorithms using elementary techniques. We focus on the
setting in which all agents involved in some matching problem can be
partitioned into k different types, where the type of an agent determines his
or her preferences, and agents have preferences over types (which may be
refined by more detailed preferences within a single type). This situation
would arise in practice if agents form preferences solely based on some small
collection of agents' attributes. We also consider a generalisation in which
each agent may consider some small collection of other agents to be
exceptional, and rank these in a way that is not consistent with their types;
this could happen in practice if agents have prior contact with a small number
of candidates. We show that (for the case without exceptions), several
well-studied NP-hard stable matching problems including Max SMTI (that of
finding the maximum cardinality stable matching in an instance of stable
marriage with ties and incomplete lists) belong to the parameterised complexity
class FPT when parameterised by the number of different types of agents needed
to describe the instance. For Max SMTI this tractability result can be extended
to the setting in which each agent promotes at most one `exceptional' candidate
to the top of his/her list (when preferences within types are not refined), but
the problem remains NP-hard if preference lists can contain two or more
exceptions and the exceptional candidates can be placed anywhere in the
preference lists, even if the number of types is bounded by a constant.
Comment: Results on SMTI appear in the proceedings of WINE 2018; Section 6 contains work in progress.
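The NP-hard problems above generalize the classic stable marriage problem, for which the elementary deferred-acceptance technique already gives an efficient algorithm. A sketch of that tractable base case (complete strict preferences, no ties; agent names are illustrative), not of the paper's FPT algorithms:

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Proposer-optimal deferred acceptance for complete strict preferences.
    Max SMTI, discussed above, is the NP-hard generalization with ties and
    incomplete lists."""
    rank = {w: {m: r for r, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = deque(men_prefs)                  # currently unmatched proposers
    next_choice = {m: 0 for m in men_prefs}  # next position on m's list
    engaged = {}                             # woman -> man
    while free:
        m = free.popleft()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:   # w prefers m to her partner
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                       # w rejects m
    return {m: w for w, m in engaged.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(men, women))  # m1 marries w2, m2 marries w1
```

With k agent types, an instance like this collapses to counting how many agents of each type pair are matched, which is the intuition behind the FPT results.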
Counting Houses of Pareto Optimal Matchings in the House Allocation Problem
Let A and B be two sets. We assume that every element of A has a preference list over all elements of B. We call an injective mapping τ from A to B a matching. A blocking coalition of a matching τ is a subset A' of A such that there exists a matching τ' that differs from τ only on elements of A', and every element of A' improves in τ', compared to τ, according to its preference list. If there exists no blocking coalition, we call the matching τ an exchange stable matching (ESM). An element b of B is reachable if there exists an exchange stable matching using b. We give an asymptotically tight bound on the number of reachable elements. A set B' ⊆ B is reachable (respectively exactly reachable) if there exists an exchange stable matching whose image contains B' as a subset (respectively equals B'). We give bounds on the number of exactly reachable sets. We find that our results hold in the more general setting of multi-matchings, where each element of A is matched with a fixed number of elements of B instead of just one. Further, we give complexity results and algorithms for the corresponding algorithmic questions. Finally, we characterize unavoidable elements, i.e., elements of B that are used by all ESMs. This yields efficient algorithms to determine all unavoidable elements.
Comment: 24 pages, 2 figures, revised
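As a minimal illustration of exchange stability, the sketch below detects the simplest kind of blocking coalition, a size-2 "swap" in which two elements of A each strictly prefer the other's partner (the definition above allows coalitions of any size; all data here are illustrative):

```python
def has_blocking_swap(matching, prefs):
    """Return True if some pair of matched agents would both strictly
    improve by exchanging partners -- a size-2 blocking coalition.
    `matching` maps each element of A to its partner in B; `prefs[a]` is
    a's preference list over B, best first."""
    rank = {a: {b: r for r, b in enumerate(p)} for a, p in prefs.items()}
    agents = list(matching)
    for i, a1 in enumerate(agents):
        for a2 in agents[i + 1:]:
            # both strictly prefer the other's current partner
            if (rank[a1][matching[a2]] < rank[a1][matching[a1]] and
                    rank[a2][matching[a1]] < rank[a2][matching[a2]]):
                return True
    return False

prefs = {"a1": ["h2", "h1"], "a2": ["h1", "h2"]}
print(has_blocking_swap({"a1": "h1", "a2": "h2"}, prefs))  # True: both want to swap
print(has_blocking_swap({"a1": "h2", "a2": "h1"}, prefs))  # False: exchange stable
```

Checking all coalition sizes, counting reachable sets, and finding unavoidable elements are the harder questions the paper addresses.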
Evaluating Wireless Carrier Consolidation Using Semiparametric Demand Estimation
The US mobile phone service industry has dramatically consolidated over the last two decades. One justification for consolidation is that merged firms can provide consumers with larger coverage areas at lower costs. We estimate the willingness to pay for national coverage to evaluate this motivation for past consolidation. As market level quantity data is not publicly available, we devise an econometric procedure that allows us to estimate the willingness to pay using market share ranks collected from a popular online retailer, Amazon. Our semiparametric maximum score estimator controls for consumers' heterogeneous preferences for carriers, handsets and minutes of calling time. We find that national coverage is strongly valued by consumers, providing an efficiency justification for across-market mergers. The methods we propose can estimate demand for other products using data from Amazon or other online retailers where quantities are not observed, but product ranks are observed. Since Amazon data can easily be gathered by researchers, these methods may be useful for the analysis of other product markets where high quality data are not publicly available.
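The rank-based estimation idea can be illustrated with a toy maximum-score criterion: over a small grid of candidate coefficient vectors, choose the one whose implied utility ordering is most concordant with the observed sales ranks. This is only a sketch under simplified assumptions (deterministic utilities, a finite grid, illustrative attribute names), not the paper's actual semiparametric estimator:

```python
def rank_maximum_score(X, ranks, betas):
    """Pick the coefficient vector under which predicted utilities agree
    with the observed rank ordering on as many product pairs as possible."""
    def utility(x, beta):
        return sum(xi * bi for xi, bi in zip(x, beta))

    def score(beta):
        u = [utility(x, beta) for x in X]
        n = len(u)
        # a pair is concordant when the better-ranked product (lower rank
        # number) also has the higher predicted utility
        return sum((ranks[i] < ranks[j]) and (u[i] > u[j])
                   for i in range(n) for j in range(n) if i != j)

    return max(betas, key=score)

# three plans described by two attributes (e.g. coverage, minutes)
X = [(1.0, 2.0), (0.5, 1.0), (2.0, 4.0)]
ranks = [2, 3, 1]                    # the third product is the best seller
grid = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.5)]
best = rank_maximum_score(X, ranks, grid)
```

The key point the sketch shares with the paper's approach is that only the ordering of products, not their quantities, enters the objective.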
Stochastic analysis of web page ranking
Today, the study of the World Wide Web is one of the most challenging subjects. In this work we consider the Web from a probabilistic point of view. We analyze the relations between various characteristics of the Web. In particular, we are interested in the Web properties that affect Web page ranking, which is a measure of the popularity and importance of a page in the Web. We mainly restrict our attention to two widely used ranking measures: the number of references to a page (in-degree), and Google's PageRank. For the majority of self-organizing networks, such as the Web and Wikipedia, the in-degree and the PageRank are observed to follow power laws. In this thesis we present a new methodology for analyzing the probabilistic behavior of the PageRank distribution and the dependence between various power law parameters of the Web. Our approach is based on techniques from the theory of regular variation and extreme value theory. We start Chapter 2 with models for the distributions of the number of incoming (in-degree) and outgoing (out-degree) links of a page. Next, we define the PageRank as a solution of the stochastic equation R =d ∑_{i=1}^{N} A_i R_i + B, where the R_i are distributed as R. This equation is inspired by the original definition of the PageRank. In particular, N models the in-degree of a page, and B stands for the user preference. We use a probabilistic approach to show that the equation has a unique non-trivial solution with fixed finite mean. Our analysis is based on a recurrent stochastic model for the power iteration algorithm commonly used in PageRank computations. Further, we obtain that the PageRank asymptotics after each iteration are determined by the asymptotics of the random variable with the heaviest tail among N and B. If the tails of N and B are equally heavy, then we in fact get the sum of two asymptotic expressions. We predict the tail behavior of the limiting distribution of the PageRank as the limit of the results for the iterations.
To prove the predicted behavior we use different techniques in Chapter 3. There we characterize the tail behavior of the in-degree and PageRank distributions using Laplace-Stieltjes transforms and a Tauberian theorem. We derive the equation for the Laplace-Stieltjes transform that corresponds to the general stochastic equation, and obtain our main result, which establishes the tail behavior of the solution of the stochastic equation. In Chapter 4 we perform a number of experiments on Web and Wikipedia data sets, and on preferential attachment graphs, in order to validate the results obtained in Chapters 2 and 3. The numerical results show good agreement with our stochastic model for the PageRank distribution. Moreover, in Section 4.1 we also address the problem of evaluating power laws in real data sets. We apply several state-of-the-art techniques from the statistical analysis of heavy tails, and we provide empirical evidence of the asymptotic similarity between in-degree and PageRank. Inspired by the minor effect of the out-degree distribution on the asymptotics of the PageRank, in Section 4.4 we introduce a new ranking scheme, called PAR, which combines features of the HITS and PageRank ranking schemes. In Chapter 5 we examine the dependence structure in power law graphs. First, we analytically characterize the tail dependence between the in-degree and the PageRank of one particular page by using the stochastic equation of the PageRank. We formally establish the relative importance of the two main factors for a high ranking: a large in-degree and a high rank of one of the ancestors. Second, we compute the angular measures for in-degrees, out-degrees and PageRank scores in three large data sets. The analysis of extremal dependence leads us to propose a new rank correlation measure which is particularly suitable for power law data.
Finally, in Chapter 6 we apply the new rank correlation measure from Chapter 5 to various rank aggregation problems. From the numerical results we conclude that methods defined by the angular measure can provide good precision for the top nodes in large data sets, though they can fail on small data sets.
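The stochastic fixed-point equation at the heart of the thesis can be explored by simple Monte Carlo iteration. The sketch below uses illustrative choices (geometric in-degree N, constant weights A_i = c, constant teleportation term B = b) rather than the thesis's actual model, and compares the empirical mean against the fixed-point mean b / (1 − c·E[N]):

```python
import random

def simulate_pagerank_recursion(iters=6, samples=5000, c=0.2, b=0.15, p=0.4,
                                seed=1):
    """Monte Carlo sketch of the distributional recursion
        R  =d  sum_{i=1}^{N} A_i R_i + B
    with illustrative assumptions: N ~ Geometric(p) (counting failures,
    so E[N] = (1-p)/p), all A_i equal to the constant c, and B = b.
    Starts from R = B and applies the recursion `iters` times."""
    rng = random.Random(seed)
    pool = [b] * samples                       # samples of the current R
    for _ in range(iters):
        new_pool = []
        for _ in range(samples):
            n = 0                              # draw N ~ Geometric(p)
            while rng.random() > p:
                n += 1
            new_pool.append(c * sum(rng.choice(pool) for _ in range(n)) + b)
        pool = new_pool
    return sum(pool) / samples                 # empirical mean of R

mean_R = simulate_pagerank_recursion()
# theory: E[R] = b / (1 - c * E[N]) = 0.15 / (1 - 0.2 * 1.5) ≈ 0.214
```

With light-tailed N and B this iteration converges quickly; the thesis's interest is precisely in the heavy-tailed case, where the tail of R inherits the heavier of the tails of N and B.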
Designing and Optimizing Matching Markets
Matching market design studies the fundamental problem of how to allocate scarce resources to individuals with varied needs. In recent years, the theoretical study of matching markets such as medical residency, public housing and school choice has greatly informed and improved the design of such markets in practice. Impactful work in matching market design frequently makes use of techniques from computer science, economics and operations research to provide end–to-end solutions that address design questions holistically. In this dissertation, I develop tools for optimization in market design by studying matching mechanisms for school choice, an important societal problem that exemplifies many of the challenges in effective marketplace design.
In the first part of this work I develop frameworks for optimization in school choice that allow us to address operational problems in the assignment process. In the school choice market, where scarce public school seats are assigned to students, a key operational issue is how to reassign seats that are vacated after an initial round of centralized assignment. We propose a class of reassignment mechanisms, the Permuted Lottery Deferred Acceptance (PLDA) mechanisms, which generalize the commonly used Deferred Acceptance school choice mechanism and retain its desirable incentive and efficiency properties. We find that under natural conditions on demand all PLDA mechanisms achieve equivalent allocative welfare, and the PLDA based on reversing the tie-breaking lottery during the reassignment round minimizes reassignment. Empirical investigations on data from NYC high school admissions support our theoretical findings. In this part, we also provide a framework for optimization when using the prominent Top Trading Cycles (TTC) mechanism. We show that the TTC assignment can be described by admission cutoffs, which explain the role of priorities in determining the TTC assignment and can be used to tractably analyze TTC. In a large-scale continuum model we show how to compute these cutoffs directly from the distribution of preferences and priorities, providing a framework for evaluating policy choices. As an application of the model we solve for optimal investment in school quality under choice and find that an egalitarian distribution can be more efficient as it allows students to choose schools based on idiosyncrasies in their preferences.
In the second part of this work, I consider the role of a marketplace as an information provider and explore how mechanisms affect information acquisition by agents in matching markets. I provide a tractable “Pandora's box” model where students hold a prior over their value for each school and can pay an inspection cost to learn their realized value. The model captures how students’ decisions to acquire information depend on priors and market information, and can rationalize a student’s choice to remain partially uninformed. In such a model students need market information in order to optimally acquire information about their personal preferences, and students benefit from waiting for the market to resolve before acquiring information. We extend the definition of stability to this partial information setting and define regret-free stable outcomes, where the matching is stable and each student has acquired the same information as if they had waited for the market to resolve. We show that regret-free stable outcomes have a cutoff characterization, and the set of regret-free stable outcomes is a non-empty lattice. However, there is no mechanism that always produces a regret-free stable matching, as there can be information deadlocks where every student finds it suboptimal to be the first to acquire information. In settings with sufficient information about the distribution of preferences, we provide mechanisms that exploit the cutoff structure to break the deadlock and approximately implement a regret-free stable matching.
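The base mechanism that the PLDA family builds on, student-proposing deferred acceptance with lottery tie-breaking, can be sketched as follows (a PLDA reassignment round would rerun this with a permuted lottery; all names and data are illustrative):

```python
from collections import deque

def deferred_acceptance(student_prefs, capacities, priority, lottery):
    """Student-proposing deferred acceptance with lottery tie-breaking.
    `priority[s][i]` is student i's coarse priority class at school s and
    `lottery[i]` is i's tie-breaking lottery number (lower is better)."""
    free = deque(student_prefs)
    nxt = {i: 0 for i in student_prefs}        # next school i will apply to
    held = {s: [] for s in capacities}         # tentatively admitted students
    while free:
        i = free.popleft()
        if nxt[i] >= len(student_prefs[i]):
            continue                           # i has exhausted their list
        s = student_prefs[i][nxt[i]]
        nxt[i] += 1
        held[s].append(i)
        held[s].sort(key=lambda j: (priority[s][j], lottery[j]))
        if len(held[s]) > capacities[s]:
            free.append(held[s].pop())         # reject the lowest-ranked
    return {i: s for s, studs in held.items() for i in studs}

students = {1: ["A", "B"], 2: ["A", "B"], 3: ["A"]}
caps = {"A": 1, "B": 2}
prio = {"A": {1: 0, 2: 0, 3: 0}, "B": {1: 0, 2: 0, 3: 0}}
lottery = {1: 2, 2: 0, 3: 1}                   # student 2 drew the best number
print(deferred_acceptance(students, caps, prio, lottery))
# student 2 gets A, student 1 gets B; student 3 is unassigned (A is full)
```

Rerunning the round with a reversed lottery, as in the reversal PLDA described above, changes who is displaced when vacated seats are reassigned while preserving the mechanism's incentive properties.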