
    Average-case Approximation Ratio of Scheduling without Payments

    Apart from the principles and methodologies inherited from Economics and Game Theory, the studies in Algorithmic Mechanism Design typically employ the worst-case analysis and approximation schemes of Theoretical Computer Science. For instance, the approximation ratio, which is the canonical measure of evaluating how well an incentive-compatible mechanism approximately optimizes the objective, is defined in the worst-case sense. It compares the performance of the optimal mechanism against the performance of a truthful mechanism, over all possible inputs. In this paper, we take the average-case analysis approach and tackle one of the primary motivating problems in Algorithmic Mechanism Design -- the scheduling problem [Nisan and Ronen 1999]. One version of this problem, which includes a verification component, was studied by [Koutsoupias 2014]. It was shown that the problem has a tight approximation ratio bound of (n+1)/2 for the single-task setting, where n is the number of machines. We show, however, that when the machines' costs for executing the task are drawn independently and identically from any distribution, the average-case approximation ratio of the mechanism given in [Koutsoupias 2014] is upper bounded by a constant. This positive result asymptotically separates the average-case ratio from the worst-case ratio, and indicates that the optimal mechanism for the problem actually works well on average, although in the worst case the expected cost of the mechanism is Theta(n) times that of the optimal cost.
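
    To make the contrast between the two measures concrete, the following is a minimal formalization consistent with the abstract; it is a sketch, and the paper's exact definition (for instance, an expected ratio versus a ratio of expectations) may differ. With machine costs c = (c_1, ..., c_n) drawn i.i.d. from a distribution D, M(c) the (expected) cost of the truthful mechanism on input c, and OPT(c) the optimal cost:

        \mathrm{ratio}_{\mathrm{worst}}(M) \;=\; \sup_{c}\, \frac{M(c)}{\mathrm{OPT}(c)},
        \qquad
        \mathrm{ratio}_{\mathrm{avg}}(M) \;=\; \mathbb{E}_{c \sim \mathcal{D}^n}\!\left[\frac{M(c)}{\mathrm{OPT}(c)}\right].

    In this notation, the abstract's result reads: ratio_worst(M) = (n+1)/2 in the single-task setting, while ratio_avg(M) = O(1) for any i.i.d. cost distribution.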

    Single Parameter Combinatorial Auctions with Partially Public Valuations

    We consider the problem of designing truthful auctions, when the bidders' valuations have a public and a private component. In particular, we consider combinatorial auctions where the valuation of an agent $i$ for a set $S$ of items can be expressed as $v_i f(S)$, where $v_i$ is a private single parameter of the agent, and the function $f$ is publicly known. Our motivation behind studying this problem is two-fold: (a) Such valuation functions arise naturally in the case of ad-slots in broadcast media such as Television and Radio. For an ad shown in a set $S$ of ad-slots, $f(S)$ is, say, the number of {\em unique} viewers reached by the ad, and $v_i$ is the valuation per-unique-viewer. (b) From a theoretical point of view, this factorization of the valuation function simplifies the bidding language, and renders the combinatorial auction more amenable to better approximation factors. We present a general technique, based on maximal-in-range mechanisms, that converts any $\alpha$-approximation non-truthful algorithm ($\alpha \leq 1$) for this problem into $\Omega(\frac{\alpha}{\log n})$-approximate and $\Omega(\alpha)$-approximate truthful mechanisms which run in polynomial time and quasi-polynomial time, respectively.
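
    As a small illustration of the factored valuation described above, the Python sketch below evaluates $v_i f(S)$ with $f(S)$ counting unique viewers across a bundle of ad-slots. It is a hypothetical example, not code from the paper; the slot and viewer data are invented.

        # Illustrative sketch of the partially public valuation v_i * f(S).
        # Public data (hypothetical): the set of viewers reached by each ad-slot.
        slot_viewers = {
            "slot_a": {"u1", "u2", "u3"},
            "slot_b": {"u2", "u4"},
            "slot_c": {"u5"},
        }

        def f(slots):
            """Publicly known part of the valuation: unique viewers covered by the bundle."""
            reached = set()
            for s in slots:
                reached |= slot_viewers[s]
            return len(reached)

        def valuation(v_i, slots):
            """Agent's value for a bundle: private per-unique-viewer value times f(S)."""
            return v_i * f(slots)

        # An agent whose private per-unique-viewer value is 2.5:
        print(valuation(2.5, {"slot_a", "slot_b"}))  # 4 unique viewers -> 10.0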

    Smoothed and Average-Case Approximation Ratios of Mechanisms: Beyond the Worst-Case Analysis

    The approximation ratio has become one of the dominant measures in mechanism design problems. In light of the smoothed analysis of algorithms, we define the smoothed approximation ratio to compare the performance of the optimal mechanism and a truthful mechanism when the inputs are subject to random perturbations of the worst-case inputs, and we define the average-case approximation ratio to compare the performance of these two mechanisms when the inputs follow a distribution. For the one-sided matching problem, Filos-Ratsikas et al. [2014] show that, amongst all truthful mechanisms, random priority achieves the tight approximation ratio bound of Theta(sqrt{n}). We prove that, despite this worst-case bound, random priority has a constant smoothed approximation ratio. To the best of our knowledge, this is the first work that asymptotically differentiates the smoothed approximation ratio from the worst-case approximation ratio for mechanism design problems. For the average case, we show that the approximation ratio can be improved to 1+e. These results partially explain why random priority has been successfully used in practice, although in the worst case the optimal social welfare is Theta(sqrt{n}) times what random priority achieves. These results also pave the way for further studies of smoothed and average-case analysis of approximate mechanism design problems, beyond the worst-case analysis.
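
    The worst-case versus average-case gap can also be explored empirically. The Python sketch below is an illustration only: it implements the standard random priority rule (agents in a uniformly random order each take their favorite remaining item) and estimates an empirical welfare ratio against the optimal matching; the unit-sum valuation distribution is an assumption made for the demo, not taken from the paper.

        # Monte Carlo sketch: empirical welfare ratio of optimal matching vs. random priority.
        import random
        from itertools import permutations

        def random_profile(n):
            """n agents, n items; each agent's values are normalized to sum to 1 (assumed)."""
            profile = []
            for _ in range(n):
                raw = [random.random() for _ in range(n)]
                total = sum(raw)
                profile.append([x / total for x in raw])
            return profile

        def random_priority_welfare(profile):
            """Agents arrive in a uniformly random order; each takes its favorite remaining item."""
            n = len(profile)
            order = list(range(n))
            random.shuffle(order)
            remaining = set(range(n))
            welfare = 0.0
            for agent in order:
                item = max(remaining, key=lambda j: profile[agent][j])
                welfare += profile[agent][item]
                remaining.remove(item)
            return welfare

        def optimal_welfare(profile):
            """Brute-force optimal assignment (fine for the small n used here)."""
            n = len(profile)
            return max(sum(profile[i][perm[i]] for i in range(n))
                       for perm in permutations(range(n)))

        n, trials = 5, 2000
        ratios = [optimal_welfare(p) / random_priority_welfare(p)
                  for p in (random_profile(n) for _ in range(trials))]
        print(sum(ratios) / trials)  # empirical ratio for this toy distribution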

    Bayesian Incentive Compatibility via Fractional Assignments

    Very recently, Hartline and Lucier studied single-parameter mechanism design problems in the Bayesian setting. They proposed a black-box reduction that converts Bayesian approximation algorithms into Bayesian-incentive-compatible (BIC) mechanisms while preserving the social welfare. It remains a major open question whether one can find a similar reduction in the more important multi-parameter setting. In this paper, we give a positive answer to this question when the prior distribution has finite and small support. We propose a black-box reduction for designing BIC multi-parameter mechanisms. The reduction converts any algorithm into an eps-BIC mechanism with only a marginal loss in social welfare. As a result, for combinatorial auctions with sub-additive agents we obtain an eps-BIC mechanism that achieves a constant approximation.
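
    For reference, one standard way to state the eps-BIC guarantee mentioned above is the following (the paper's exact formulation, e.g. its normalization of eps, may differ): a mechanism M is eps-BIC with respect to a prior D if, for every agent i and every pair of types t_i, t_i',

        \mathbb{E}_{t_{-i} \sim \mathcal{D}_{-i}}\!\left[ u_i\big(M(t_i, t_{-i});\, t_i\big) \right]
        \;\ge\;
        \mathbb{E}_{t_{-i} \sim \mathcal{D}_{-i}}\!\left[ u_i\big(M(t_i', t_{-i});\, t_i\big) \right] - \epsilon,

    i.e., truthful reporting is an approximate best response in expectation over the other agents' types drawn from the prior.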

    Budget Feasible Mechanism Design: From Prior-Free to Bayesian

    Budget feasible mechanism design studies procurement combinatorial auctions where the sellers have private costs to produce items, and the buyer (auctioneer) aims to maximize a social valuation function on subsets of items, under a budget constraint on the total payment. One of the most important questions in the field is "which valuation domains admit truthful budget feasible mechanisms with `small' approximations (compared to the social optimum)?" Singer showed that additive and submodular functions admit such constant approximations. Recently, Dobzinski, Papadimitriou, and Singer gave an O(log^2 n)-approximation mechanism for subadditive functions; they also remarked: "A fundamental question is whether, regardless of computational constraints, a constant-factor budget feasible mechanism exists for subadditive functions." We address this question from two viewpoints: prior-free worst-case analysis and Bayesian analysis. For the prior-free framework, we use an LP that describes the fractional cover of the valuation function; it is also connected to the concept of the approximate core in cooperative game theory. We provide an O(I)-approximation mechanism for subadditive functions, where I is the worst-case integrality gap of this LP. This implies an O(log n) approximation for subadditive valuations, an O(1) approximation for XOS valuations, and an O(1) approximation for any valuations with constant I. XOS valuations are an important class of functions that lie between the submodular and the subadditive classes. We also give a polynomial-time, sub-logarithmic O(log n / log log n)-approximation mechanism for subadditive valuations. For the Bayesian framework, we provide a constant-approximation mechanism for all subadditive functions, using the above prior-free mechanism for XOS valuations as a subroutine. Our mechanism allows correlations in the distribution of private information and is universally truthful.
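
    One common way to write a fractional-cover LP of this kind is the following sketch (the paper's exact formulation may differ in details): for a valuation v and a set S of items,

        v^{\mathrm{frac}}(S) \;=\; \min_{\alpha \ge 0} \sum_{T \subseteq S} \alpha_T\, v(T)
        \quad \text{subject to} \quad \sum_{T \ni j} \alpha_T \ge 1 \quad \forall\, j \in S,

    with I bounding the worst-case ratio v(S)/v^frac(S). For XOS (fractionally subadditive) valuations this ratio is 1, and for general subadditive valuations it is O(log n), which lines up with the O(1) and O(log n) guarantees quoted above.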