
    A Unifying Hierarchy of Valuations with Complements and Substitutes

    We introduce a new hierarchy over monotone set functions, that we refer to as $\mathcal{MPH}$ (Maximum over Positive Hypergraphs). Levels of the hierarchy correspond to the degree of complementarity in a given function. The highest level of the hierarchy, $\mathcal{MPH}$-$m$ (where $m$ is the total number of items), captures all monotone functions. The lowest level, $\mathcal{MPH}$-$1$, captures all monotone submodular functions, and more generally, the class of functions known as $\mathcal{XOS}$. Every monotone function that has a positive hypergraph representation of rank $k$ (in the sense defined by Abraham, Babaioff, Dughmi and Roughgarden [EC 2012]) is in $\mathcal{MPH}$-$k$. Every monotone function that has supermodular degree $k$ (in the sense defined by Feige and Izsak [ITCS 2013]) is in $\mathcal{MPH}$-$(k+1)$. In both cases, the converse direction does not hold, even in an approximate sense. We present additional results that demonstrate the expressive power of $\mathcal{MPH}$-$k$. One can obtain good approximation ratios for some natural optimization problems, provided that functions are required to lie in low levels of the $\mathcal{MPH}$ hierarchy. We present two such applications. One shows that the maximum welfare problem can be approximated within a ratio of $k+1$ if all players hold valuation functions in $\mathcal{MPH}$-$k$. The other is an upper bound of $2k$ on the price of anarchy of simultaneous first price auctions. Being in $\mathcal{MPH}$-$k$ can be shown to involve two requirements -- one is monotonicity and the other is a certain requirement that we refer to as $\mathcal{PLE}$ (Positive Lower Envelope). Removing the monotonicity requirement, one obtains the $\mathcal{PLE}$ hierarchy over all non-negative set functions (whether monotone or not), which can be fertile ground for further research.
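
    To make the definition concrete, here is a minimal Python sketch of evaluating an $\mathcal{MPH}$-$k$ valuation given explicitly as a maximum over positive-hypergraph functions whose hyperedges have size at most $k$. The data layout and names are illustrative assumptions, not the paper's notation.

```python
# Sketch: an MPH-k valuation as a maximum over positive-hypergraph (PH-k) functions.
# Each PH-k function maps hyperedges (of size <= k) to non-negative weights; its
# value on a bundle S is the total weight of hyperedges contained in S.
from typing import Dict, FrozenSet, Iterable, List

Hypergraph = Dict[FrozenSet[str], float]  # hyperedge -> non-negative weight


def ph_value(hypergraph: Hypergraph, bundle: FrozenSet[str]) -> float:
    """Positive-hypergraph value: total weight of hyperedges inside the bundle."""
    return sum(w for edge, w in hypergraph.items() if edge <= bundle)


def mph_value(hypergraphs: List[Hypergraph], bundle: Iterable[str]) -> float:
    """MPH valuation: maximum over the given positive-hypergraph functions."""
    s = frozenset(bundle)
    return max(ph_value(h, s) for h in hypergraphs)


# Toy MPH-2 valuation: every hyperedge has rank (size) at most 2.
clauses = [
    {frozenset({"a"}): 1.0, frozenset({"a", "b"}): 2.0},  # complementarity between a and b
    {frozenset({"c"}): 2.5},
]
print(mph_value(clauses, {"a", "b"}))  # 3.0
print(mph_value(clauses, {"c"}))       # 2.5
```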

    Multiwinner Voting with Fairness Constraints

    Multiwinner voting rules are used to select a small representative subset of candidates or items from a larger set, given the preferences of voters. However, if candidates have sensitive attributes such as gender or ethnicity (when selecting a committee), or specified types such as political leaning (when selecting a subset of news items), an algorithm that chooses a subset by optimizing a multiwinner voting rule may be unbalanced in its selection -- it may under- or over-represent a particular gender or political orientation in the examples above. We introduce an algorithmic framework for multiwinner voting problems when there is an additional requirement that the selected subset should be "fair" with respect to a given set of attributes. Our framework provides the flexibility to (1) specify fairness with respect to multiple, non-disjoint attributes (e.g., ethnicity and gender) and (2) specify a score function. We study the computational complexity of this constrained multiwinner voting problem for monotone and submodular score functions, and present several approximation algorithms and matching hardness-of-approximation results for various attribute group structures and types of score functions. We also present simulations suggesting that adding fairness constraints may not affect the scores significantly when compared to the unconstrained case. Comment: The conference version of this paper appears in IJCAI-ECAI 2018.
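
    The sketch below illustrates the constrained problem in the simplest possible way: brute-force search over size-$k$ committees that respect per-group lower and upper bounds, maximizing a given score. It is only meant to show the constraint structure; the paper's contribution is approximation algorithms, not exhaustive search, and all names and the example score are assumptions.

```python
# Brute-force sketch of fairness-constrained multiwinner voting (illustrative only).
from itertools import combinations
from typing import Callable, Dict, FrozenSet, List, Tuple


def fair_committee(
    candidates: List[str],
    k: int,
    score: Callable[[FrozenSet[str]], float],
    groups: Dict[str, Tuple[FrozenSet[str], int, int]],  # name -> (members, lower, upper)
) -> FrozenSet[str]:
    """Return a size-k committee maximizing `score` subject to per-group bounds."""
    best, best_score = None, float("-inf")
    for committee in map(frozenset, combinations(candidates, k)):
        if all(lo <= len(committee & members) <= hi for members, lo, hi in groups.values()):
            s = score(committee)
            if s > best_score:
                best, best_score = committee, s
    if best is None:
        raise ValueError("no committee satisfies the fairness constraints")
    return best


# Toy example: approval-coverage score (monotone submodular) with overlapping groups.
voters = [{"a", "c"}, {"b"}, {"a", "d"}, {"c", "d"}]
cover = lambda S: sum(1 for ballot in voters if ballot & S)
groups = {
    "women": (frozenset({"a", "b"}), 1, 2),
    "party_x": (frozenset({"b", "c"}), 1, 2),
}
print(fair_committee(["a", "b", "c", "d"], 2, cover, groups))
```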

    Fast Local Computation Algorithms

    For input $x$, let $F(x)$ denote the set of outputs that are the "legal" answers for a computational problem $F$. Suppose $x$ and members of $F(x)$ are so large that there is not time to read them in their entirety. We propose a model of local computation algorithms which, for a given input $x$, support queries by a user to values of specified locations $y_i$ in a legal output $y \in F(x)$. When more than one legal output $y$ exists for a given $x$, the local computation algorithm should output in a way that is consistent with at least one such $y$. Local computation algorithms are intended to distill the common features of several concepts that have appeared in various algorithmic subfields, including local distributed computation, local algorithms, locally decodable codes, and local reconstruction. We develop a technique, based on known constructions of small sample spaces of $k$-wise independent random variables and Beck's analysis in his algorithmic approach to the Lovász Local Lemma, which under certain conditions can be applied to construct local computation algorithms that run in polylogarithmic time and space. We apply this technique to maximal independent set computations, scheduling radio network broadcasts, hypergraph coloring and satisfying $k$-SAT formulas. Comment: A preliminary version of this paper appeared in ICS 2011, pp. 223-23
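
    As an illustration of the query model, the routine below answers "is vertex $v$ in the maximal independent set?" by locally simulating greedy MIS under a random ordering derived from a shared seed, so all query answers are consistent with one legal output. This is the standard local-simulation sketch, not the paper's polylogarithmic-time construction; the graph, seed, and names are assumptions.

```python
# Sketch of a local computation algorithm for maximal independent set (MIS):
# a vertex joins the MIS iff no neighbor of strictly smaller random rank joins.
import hashlib
from functools import lru_cache
from typing import Dict, List, Tuple

graph: Dict[int, List[int]] = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
SEED = b"shared-seed"  # hypothetical shared randomness, fixed across all queries


def rank(v: int) -> Tuple[int, int]:
    """Pseudorandom rank of v, identical across queries; ties broken by vertex id."""
    h = int.from_bytes(hashlib.sha256(SEED + str(v).encode()).digest()[:8], "big")
    return (h, v)


@lru_cache(maxsize=None)
def in_mis(v: int) -> bool:
    """Answer one membership query, recursing only on lower-ranked neighbors."""
    return all(not in_mis(u) for u in graph[v] if rank(u) < rank(v))


print({v: in_mis(v) for v in graph})  # consistent with one maximal independent set
```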

    Learning from networked examples

    Many machine learning algorithms are based on the assumption that training examples are drawn independently. However, this assumption no longer holds when learning from a networked sample, because two or more training examples may share some common objects, and hence share the features of those objects. We show that the classic approach of ignoring this problem can potentially have a harmful effect on the accuracy of statistics, and then consider alternatives. One of these is to use only independent examples, discarding other information; however, this is clearly suboptimal. We analyze sample error bounds in this networked setting, providing significantly improved results. An important component of our approach is formed by efficient sample weighting schemes, which lead to novel concentration inequalities.
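
    One plausible weighting scheme in the spirit of the abstract (the paper's actual weights and bounds may differ) assigns each example a weight in $[0,1]$ so that the weights of the examples touching any shared object sum to at most 1, maximizing total weight via a small linear program, and then uses a weighted empirical mean. A hedged sketch, assuming toy data and this fractional-matching-style formulation:

```python
# Hedged sketch of a sample-weighting scheme for networked examples.
import numpy as np
from scipy.optimize import linprog

# Examples described by the objects they touch (hypothetical toy data).
examples = [{"u1", "u2"}, {"u2", "u3"}, {"u3"}, {"u1", "u4"}]
losses = np.array([0.2, 0.5, 0.1, 0.4])  # per-example quantity to average
objects = sorted(set().union(*examples))

# One constraint row per object: weights of examples touching it sum to <= 1.
A = np.array([[1.0 if obj in ex else 0.0 for ex in examples] for obj in objects])
res = linprog(
    c=-np.ones(len(examples)),           # maximize total weight
    A_ub=A, b_ub=np.ones(len(objects)),  # per-object capacity of 1
    bounds=[(0.0, 1.0)] * len(examples),
    method="highs",
)
weights = res.x
weighted_mean = float(weights @ losses / weights.sum())
print(weights, weighted_mean)
```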