
    Stochastic scheduling on unrelated machines

    Two important characteristics encountered in many real-world scheduling problems are heterogeneous machines/processors and a certain degree of uncertainty about the actual sizes of jobs. The first characteristic entails machine-dependent processing times of jobs and is captured by the classical unrelated machine scheduling model. The second characteristic is adequately addressed by stochastic processing times of jobs as they are studied in classical stochastic scheduling models. While there is an extensive but separate literature for the two scheduling models, we study for the first time a combined model that takes both characteristics into account simultaneously. Here, the processing time of job $j$ on machine $i$ is governed by the random variable $P_{ij}$, and its actual realization becomes known only upon job completion. With $w_j$ being the given weight of job $j$, we study the classical objective of minimizing the expected total weighted completion time $E[\sum_j w_j C_j]$, where $C_j$ is the completion time of job $j$. By means of a novel time-indexed linear programming relaxation, we compute in polynomial time a scheduling policy with performance guarantee $(3+\Delta)/2+\epsilon$. Here, $\epsilon>0$ is arbitrarily small, and $\Delta$ is an upper bound on the squared coefficient of variation of the processing times. We show that the dependence of the performance guarantee on $\Delta$ is tight, as we obtain a $\Delta/2$ lower bound for the type of policies that we use. When jobs also have individual release dates $r_{ij}$, our bound is $(2+\Delta)+\epsilon$. Via $\Delta=0$, the currently best known bounds for deterministic scheduling are contained as a special case.
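    To make "time-indexed linear programming relaxation" concrete, here is a minimal sketch of the standard deterministic time-indexed LP for this objective. The paper's novel relaxation handles the stochastic $P_{ij}$, so the formulation below is only the deterministic skeleton that such relaxations extend, with illustrative variables $x_{ijt}$ fractionally indicating that job $j$ starts on machine $i$ at time $t$, deterministic processing times $p_{ij}$, and a time horizon $T$:

        \begin{align*}
        \min\quad & \textstyle\sum_j w_j C_j \\
        \text{s.t.}\quad & \textstyle\sum_{i,t} x_{ijt} = 1 \quad \forall j & \text{(every job starts exactly once)} \\
        & \textstyle\sum_j \sum_{t'=\max(0,\,t-p_{ij}+1)}^{t} x_{ijt'} \le 1 \quad \forall i,t & \text{(one job per machine per slot)} \\
        & C_j = \textstyle\sum_{i,t} x_{ijt}\,(t+p_{ij}) \quad \forall j & \text{(completion time of job $j$)} \\
        & x_{ijt} \ge 0 \quad \forall i,j,\ t \in \{0,\dots,T\}.
        \end{align*}

    Integral solutions correspond exactly to feasible schedules; dropping integrality yields a relaxation (of polynomial size when $T$ is polynomially bounded) whose optimum lower-bounds the cost of any schedule.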

    Distributed PCP Theorems for Hardness of Approximation in P

    We present a new distributed model of probabilistically checkable proofs (PCP). A satisfying assignment $x \in \{0,1\}^n$ to a CNF formula $\varphi$ is shared between two parties, where Alice knows $x_1, \dots, x_{n/2}$, Bob knows $x_{n/2+1}, \dots, x_n$, and both parties know $\varphi$. The goal is to have Alice and Bob jointly write a PCP that $x$ satisfies $\varphi$, while exchanging little or no information. Unfortunately, this model as-is does not allow for nontrivial query complexity. Instead, we focus on a non-deterministic variant, where the players are helped by Merlin, a third party who knows all of $x$. Using our framework, we obtain, for the first time, PCP-like reductions from the Strong Exponential Time Hypothesis (SETH) to approximation problems in P. In particular, under SETH we show that there are no truly-subquadratic approximation algorithms for Bichromatic Maximum Inner Product over $\{0,1\}$-vectors, Bichromatic LCS Closest Pair over permutations, Approximate Regular Expression Matching, and Diameter in Product Metric. All our inapproximability factors are nearly tight. In particular, for the first two problems we obtain nearly-polynomial factors of $2^{(\log n)^{1-o(1)}}$; only $(1+o(1))$-factor lower bounds (under SETH) were known before.
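    For reference, the exact quadratic-time baseline that these hardness results show cannot be meaningfully beaten, even approximately, is straightforward. The hypothetical sketch below just defines Bichromatic Maximum Inner Product over $\{0,1\}$-vectors; the point of the paper is that no truly-subquadratic algorithm can approximate its value within $2^{(\log n)^{1-o(1)}}$ under SETH:

        from itertools import product

        def bichromatic_max_inner_product(A, B):
            """Exact O(|A| * |B| * d) baseline for Bichromatic Maximum Inner Product.

            A, B: lists of 0/1 vectors (tuples) of a common dimension d.
            Returns a pair (a, b) in A x B maximizing <a, b>, and that value.
            """
            best_val, best_pair = -1, None
            for a, b in product(A, B):
                val = sum(ai & bi for ai, bi in zip(a, b))
                if val > best_val:
                    best_val, best_pair = val, (a, b)
            return best_pair, best_val

        # Hypothetical toy instance.
        A = [(1, 0, 1, 1), (0, 1, 0, 0)]
        B = [(1, 1, 1, 0), (0, 0, 1, 1)]
        print(bichromatic_max_inner_product(A, B))  # ((1, 0, 1, 1), (1, 1, 1, 0)), 2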

    06481 Abstracts Collection -- Geometric Networks and Metric Space Embeddings

    The Dagstuhl Seminar 06481 "Geometric Networks and Metric Space Embeddings" was held from November 26 to December 1, 2006, in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. In this paper we describe the seminar topics, compile a list of open questions that were posed during the seminar, list all talks, and collect abstracts of the presentations given during the seminar. Links to extended abstracts or full papers are provided where available.

    07261 Abstracts Collection -- Fair Division

    From June 24 to June 29, 2007, the Dagstuhl Seminar 07261 "Fair Division" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Approximating solution structure of the Weighted Sentence Alignment problem

    We study the complexity of approximating the solution structure of the bijective weighted sentence alignment problem of DeNero and Klein (2008). In particular, we consider the complexity of finding an alignment that has a significant overlap with an optimal alignment. We discuss ways of representing the solution for the general weighted sentence alignment problem as well as for the phrases-to-words alignment problem, and show that, for phrases-to-words alignment, computing a string that agrees with the optimal sentence partition on more than half of the positions (plus an arbitrarily small polynomial fraction) is NP-hard. For the general weighted sentence alignment we obtain such a bound from agreement on a little over 2/3 of the bits. Additionally, we generalize the Hamming-distance approximation of a solution structure to approximation with respect to the edit distance metric, obtaining similar lower bounds.
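    As a concrete illustration of the structure-approximation measure used here, the following hypothetical sketch computes the fraction of positions on which a candidate solution string agrees with an optimal one; the hardness results say that guaranteeing agreement above 1/2 (phrases-to-words) or a little over 2/3 (general case) is already NP-hard:

        def hamming_agreement(candidate, optimal):
            """Fraction of positions where two equal-length solution strings agree.

            The paper's hardness results concern producing a candidate whose
            agreement with some optimal solution string exceeds the stated
            thresholds.
            """
            if len(candidate) != len(optimal):
                raise ValueError("strings must have equal length")
            matches = sum(c == o for c, o in zip(candidate, optimal))
            return matches / len(candidate)

        # Hypothetical toy solution strings.
        print(hamming_agreement("0110100", "0100110"))  # 5/7, about 0.714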

    Independent Set, Induced Matching, and Pricing: Connections and Tight (Subexponential Time) Approximation Hardnesses

    We present a series of almost-settled inapproximability results for three fundamental problems. The first in our series is the subexponential-time inapproximability of the maximum independent set problem, a question studied in the area of parameterized complexity. The second is the hardness of approximating the maximum induced matching problem on bounded-degree bipartite graphs. The last in our series is the tight hardness of approximating the k-hypergraph pricing problem, a fundamental problem arising from the area of algorithmic game theory. In particular, assuming the Exponential Time Hypothesis, our two main results are:
    - For any $r$ larger than some constant, any $r$-approximation algorithm for the maximum independent set problem must run in at least $2^{n^{1-\epsilon}/r^{1+\epsilon}}$ time. This nearly matches the upper bound of $2^{n/r}$ (Cygan et al., 2008). It also improves some hardness results in the domain of parameterized complexity (e.g., Escoffier et al., 2012 and Chitnis et al., 2013).
    - For any $k$ larger than some constant, there is no polynomial-time $\min(k^{1-\epsilon}, n^{1/2-\epsilon})$-approximation algorithm for the k-hypergraph pricing problem, where $n$ is the number of vertices in an input graph. This almost matches the upper bound of $\min(O(k), \tilde{O}(\sqrt{n}))$ (by Balcan and Blum, 2007, and an algorithm in this paper).
    We note the interesting fact that, in contrast to the $n^{1/2-\epsilon}$ hardness for polynomial-time algorithms, the k-hypergraph pricing problem admits an $n^{\delta}$-approximation for any $\delta > 0$ in quasi-polynomial time. This puts the problem in a rare approximability class in which approximability thresholds can be improved significantly by allowing algorithms to run in quasi-polynomial time.
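    The $2^{n/r}$-time $r$-approximation that the first lower bound nearly matches has a well-known folklore form: split the vertices into at most $r$ blocks and brute-force each block. The sketch below, on a hypothetical adjacency-set representation, illustrates this scheme (the cited algorithm of Cygan et al. may differ in details); since an optimal independent set puts at least OPT/$r$ vertices into some block, the best block-optimum is an $r$-approximation.

        from itertools import combinations

        def max_independent_set_bruteforce(vertices, adj):
            """Maximum independent set among `vertices` by exhaustive search,
            trying the largest subsets first: O(2^{|vertices|} * poly) time."""
            for size in range(len(vertices), 0, -1):
                for subset in combinations(vertices, size):
                    chosen = set(subset)
                    if all(adj[v].isdisjoint(chosen - {v}) for v in subset):
                        return chosen
            return set()

        def r_approx_independent_set(adj, r):
            """Folklore r-approximation running in time about r * 2^{n/r}.

            adj: dict mapping each vertex to the set of its neighbors.
            Splits the vertices into at most r blocks, solves each block
            exactly, and returns the best block solution.
            """
            vs = sorted(adj)
            block_size = -(-len(vs) // r)  # ceil(n / r)
            blocks = [vs[i:i + block_size] for i in range(0, len(vs), block_size)]
            return max((max_independent_set_bruteforce(b, adj) for b in blocks),
                       key=len)

        # Hypothetical 4-cycle: OPT = 2; with r = 2 the guarantee is >= 1.
        adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
        print(r_approx_independent_set(adj, r=2))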

    Welfare Maximization and Truthfulness in Mechanism Design with Ordinal Preferences

    We study mechanism design problems in the ordinal setting, wherein the preferences of agents are described by orderings over outcomes, as opposed to specific numerical values associated with them. This setting is relevant when agents can compare outcomes but are not able to evaluate precise utilities for them. Such a situation arises in diverse contexts, including voting and matching markets. Our paper addresses two issues that arise in ordinal mechanism design. First, to design social-welfare-maximizing mechanisms, one needs to be able to quantitatively measure the welfare of an outcome, which is not clear in the ordinal setting. Second, since the impossibility results of Gibbard (1973) and Satterthwaite (1975) force one to move to randomized mechanisms, one needs a more nuanced notion of truthfulness. We propose rank approximation as a metric for measuring the quality of an outcome, which allows us to evaluate mechanisms based on worst-case performance, and lex-truthfulness as a notion of truthfulness for randomized ordinal mechanisms. Lex-truthfulness is stronger than notions studied in the literature, and yet flexible enough to admit a rich class of mechanisms circumventing classical impossibility results. We demonstrate the usefulness of the above notions by devising lex-truthful mechanisms achieving good rank-approximation factors, both in the general ordinal setting as well as in structured settings such as (one-sided) matching markets and their generalizations, matroid and scheduling markets.