
    Theory and Applications of Robust Optimization

    In this paper we survey the primary research, both theoretical and applied, in the area of Robust Optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multi-stage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering. Comment: 50 pages.
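
    To make the flavor of a robust counterpart concrete, here is a minimal sketch (in Python, with made-up data; the box-uncertainty model and the scipy-based solve are illustrative assumptions, not taken from the survey) of a two-variable LP whose constraint coefficients are only known to lie in an interval box. The robust version is again an ordinary LP of the same size, which is exactly the kind of computational attractiveness the survey emphasizes.

        # Robust counterpart of an LP with box (interval) uncertainty in the
        # constraint coefficients.  All numbers are made-up illustration data.
        import numpy as np
        from scipy.optimize import linprog

        c = np.array([2.0, 3.0])        # cost per unit of each decision variable
        a_nom = np.array([1.0, 2.0])    # nominal coefficients of the constraint a^T x >= b
        delta = np.array([0.2, 0.5])    # interval half-widths of the uncertainty
        b = 10.0

        # Nominal problem: ignore the uncertainty entirely.
        nominal = linprog(c, A_ub=[-a_nom], b_ub=[-b], bounds=[(0, None)] * 2)

        # Robust counterpart: with x >= 0, the worst case of a^T x over the box
        # [a_nom - delta, a_nom + delta] is attained at a_nom - delta, so the
        # robust problem is again an ordinary LP of the same size.
        robust = linprog(c, A_ub=[-(a_nom - delta)], b_ub=[-b], bounds=[(0, None)] * 2)

        print("nominal cost:", nominal.fun)  # cheaper, but may violate the true constraint
        print("robust  cost:", robust.fun)   # pays a premium for feasibility under uncertainty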

    Guarantees and Limits of Preprocessing in Constraint Satisfaction and Reasoning

    We present a first theoretical analysis of the power of polynomial-time preprocessing for important combinatorial problems from various areas in AI. We consider problems from Constraint Satisfaction, Global Constraints, Satisfiability, Nonmonotonic and Bayesian Reasoning under structural restrictions. All these problems involve two tasks: (i) identifying the structure in the input as required by the restriction, and (ii) using the identified structure to solve the reasoning task efficiently. We show that for most of the considered problems, task (i) admits polynomial-time preprocessing to a problem kernel whose size is polynomial in a structural problem parameter of the input, in contrast to task (ii), which does not admit such a reduction to a problem kernel of polynomial size, subject to a complexity-theoretic assumption. As a notable exception, we show that the consistency problem for the AtMost-NValue constraint admits a polynomial kernel consisting of a quadratic number of variables and domain values. Our results provide firm worst-case guarantees and theoretical boundaries for the performance of polynomial-time preprocessing algorithms for the considered problems. Comment: arXiv admin note: substantial text overlap with arXiv:1104.2541, arXiv:1104.556
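
    As an illustration of what a problem kernel is (this is the classic Buss kernel for k-Vertex Cover, used here only as a stand-in; it is not one of the CSP or reasoning problems studied in the paper), the sketch below shrinks an instance in polynomial time to an equivalent one whose size depends only on the parameter k.

        # Buss kernelization for k-Vertex Cover: a polynomial-time reduction to an
        # equivalent instance with at most budget^2 edges, illustrating the notion
        # of a "problem kernel" used in the abstract.
        def buss_kernel(edges, k):
            """Return (kernel_edges, budget, forced_vertices), or None if no cover of size <= k exists."""
            edges = {frozenset(e) for e in edges}
            forced = set()
            budget = k
            changed = True
            while changed and budget >= 0:
                changed = False
                degree = {}
                for e in edges:
                    for v in e:
                        degree[v] = degree.get(v, 0) + 1
                for v, d in degree.items():
                    if d > budget:          # v must be in every cover of size <= budget
                        forced.add(v)
                        edges = {e for e in edges if v not in e}
                        budget -= 1
                        changed = True
                        break
            if budget < 0 or len(edges) > budget * budget:
                return None                 # provably no vertex cover of size <= k
            return edges, budget, forced

        # Example: a star with 5 leaves plus one extra edge, parameter k = 2.
        example = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (4, 5)]
        print(buss_kernel(example, 2))      # vertex 0 is forced; a single edge remains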

    Detecting and counting small subgraphs, and evaluating a parameterized Tutte polynomial: lower bounds via toroidal grids and Cayley graph expanders

    Given a graph property Φ, we consider the problem EdgeSub(Φ), where the input is a pair of a graph G and a positive integer k, and the task is to decide whether G contains a k-edge subgraph that satisfies Φ. Specifically, we study the parameterized complexity of EdgeSub(Φ) and of its counting problem #EdgeSub(Φ) with respect to both approximate and exact counting. We obtain a complete picture for minor-closed properties Φ: the decision problem EdgeSub(Φ) always admits an FPT algorithm and the counting problem #EdgeSub(Φ) always admits an FPTRAS. For exact counting, we present an exhaustive and explicit criterion on the property Φ which, if satisfied, yields fixed-parameter tractability and otherwise #W[1]-hardness. Additionally, most of our hardness results come with an almost tight conditional lower bound under the so-called Exponential Time Hypothesis, ruling out algorithms for #EdgeSub(Φ) that run in time f(k)·|G|^(o(k/log k)) for any computable function f. As a main technical result, we gain a complete understanding of the coefficients of toroidal grids and selected Cayley graph expanders in the homomorphism basis of #EdgeSub(Φ). This allows us to establish hardness of exact counting using the Complexity Monotonicity framework due to Curticapean, Dell and Marx (STOC'17). Our methods can also be applied to a parameterized variant T^k_G of the Tutte Polynomial of a graph G, to which many known combinatorial interpretations of values of the (classical) Tutte Polynomial can be extended. As an example, T^k_G(2,1) corresponds to the number of k-forests in the graph G. Our techniques allow us to completely understand the parameterized complexity of computing the evaluation of T^k_G at every pair of rational coordinates (x,y).
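
    To pin down the counting problem, the brute-force sketch below enumerates all k-edge subsets of a graph and counts those satisfying a property Φ; with Φ = "is a forest" the count equals the evaluation T^k_G(2,1) mentioned in the abstract. The exhaustive enumeration is purely illustrative and is not one of the paper's algorithms.

        # Brute-force #EdgeSub(Phi): count k-edge subsets of G satisfying Phi.
        from itertools import combinations

        def is_forest(edge_subset):
            """Union-find cycle check: a set of edges is a forest iff it has no cycle."""
            parent = {}
            def find(v):
                parent.setdefault(v, v)
                while parent[v] != v:
                    v = parent[v]
                return v
            for u, w in edge_subset:
                ru, rw = find(u), find(w)
                if ru == rw:
                    return False            # edge (u, w) would close a cycle
                parent[ru] = rw
            return True

        def count_k_edge_subgraphs(edges, k, phi=is_forest):
            """#EdgeSub(Phi) by exhaustive enumeration over all k-edge subsets."""
            return sum(1 for sub in combinations(edges, k) if phi(sub))

        # The 4-cycle has four 3-edge forests (drop any single edge): T^k_G(2,1) = 4 here.
        print(count_k_edge_subgraphs([(0, 1), (1, 2), (2, 3), (3, 0)], 3))  # -> 4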

    Pure Parsimony Xor Haplotyping

    The haplotype resolution from xor-genotype data has recently been formulated as a new model for genetic studies. Xor-genotype data are a cheaply obtainable type of data that distinguish heterozygous from homozygous sites without identifying the homozygous alleles. In this paper we propose a formulation based on a well-known model used in haplotype inference: pure parsimony. We exhibit exact solutions of the problem by providing polynomial-time algorithms for some restricted cases and a fixed-parameter algorithm for the general case. These results are based on some interesting combinatorial properties of a graph representation of the solutions. Furthermore, we show that the problem has a polynomial-time k-approximation, where k is the maximum number of xor-genotypes containing a given SNP. Finally, we propose a heuristic and present an experimental analysis showing that it scales to large real-world instances taken from the HapMap project.
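
    The sketch below illustrates the objects involved: a xor-genotype records, per SNP, only whether an individual's two haplotypes differ, and pure parsimony asks for a smallest set of haplotypes that can resolve all observed xor-genotypes. The exhaustive search is exponential and purely illustrative; it is not the paper's fixed-parameter or approximation algorithm.

        # Pure parsimony over xor-genotypes, brute force on a tiny instance.
        from itertools import combinations, product

        def xor_genotype(h1, h2):
            """Per-site XOR of two binary haplotypes: 1 = heterozygous, 0 = homozygous."""
            return tuple(a ^ b for a, b in zip(h1, h2))

        def min_haplotypes(xor_genotypes, n_sites):
            """Smallest pool of haplotypes such that every xor-genotype is the XOR of some pair."""
            all_haps = list(product((0, 1), repeat=n_sites))
            for size in range(1, len(all_haps) + 1):
                for pool in combinations(all_haps, size):
                    if all(any(xor_genotype(h1, h2) == g
                               for h1 in pool for h2 in pool)
                           for g in xor_genotypes):
                        return pool
            return None

        # Two xor-genotypes over 3 SNPs; the all-zero genotype is resolved by a
        # homozygous pair (h, h), so two haplotypes suffice here.
        gs = [(1, 0, 1), (0, 0, 0)]
        print(min_haplotypes(gs, 3))        # -> ((0, 0, 0), (1, 0, 1))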

    Power and Channel Allocation for Non-orthogonal Multiple Access in 5G Systems: Tractability and Computation

    5G cellular systems call for a significant increase in network capacity. A promising multi-user access scheme, non-orthogonal multiple access (NOMA) with successive interference cancellation (SIC), is currently under consideration. In NOMA, spectrum efficiency is improved by allowing more than one user to simultaneously access the same frequency-time resource and separating the multi-user signals by SIC at the receiver. These features make resource allocation and optimization in NOMA different from those in the orthogonal multiple access of 4G. In this paper, we provide theoretical insights and algorithmic solutions for jointly optimizing power and channel allocation in NOMA. For utility maximization, we mathematically formulate NOMA resource allocation problems. We characterize and analyze the problems' tractability under a range of constraints and utility functions. For tractable cases, we provide polynomial-time solutions for global optimality. For intractable cases, we prove the NP-hardness and propose an algorithmic framework combining Lagrangian duality and dynamic programming (LDDP) to deliver near-optimal solutions. To gauge the performance of the obtained solutions, we also provide optimality bounds on the global optimum. Numerical results demonstrate that the proposed algorithmic solution can significantly improve system performance, in both throughput and fairness, over orthogonal multiple access as well as over a previous NOMA resource allocation scheme. Comment: IEEE Transactions on Wireless Communications, revision.
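
    A toy two-user downlink example of the SIC mechanism is sketched below: with most of the power assigned to the weak user and the strong user cancelling the weak user's signal, both users can do better than under an equal bandwidth-and-power orthogonal split. The rate formulas are the standard Shannon expressions for superposition coding with SIC; the powers, gains, and noise level are made-up numbers, and this is not the paper's LDDP optimization.

        # Two-user downlink NOMA with SIC versus an equal orthogonal (OMA) split.
        import math

        B = 1.0e6            # channel bandwidth (Hz)
        N0B = 1.0e-9         # total noise power over the band (W)
        P = 1.0              # base-station power budget for this channel (W)
        g_strong, g_weak = 1e-7, 1e-9          # channel gains: near user vs. cell-edge user
        p_strong, p_weak = 0.2 * P, 0.8 * P    # NOMA gives most of the power to the weak user

        # NOMA with SIC: the strong user decodes and cancels the weak user's signal first,
        # so it sees no intra-cell interference; the weak user treats the strong user's
        # (low-power) signal as noise.
        r_strong = B * math.log2(1 + p_strong * g_strong / N0B)
        r_weak = B * math.log2(1 + p_weak * g_weak / (p_strong * g_weak + N0B))

        # OMA baseline: each user gets half the bandwidth and half the power.
        r_strong_oma = 0.5 * B * math.log2(1 + 0.5 * P * g_strong / (0.5 * N0B))
        r_weak_oma = 0.5 * B * math.log2(1 + 0.5 * P * g_weak / (0.5 * N0B))

        print(f"NOMA: strong {r_strong/1e6:.2f}, weak {r_weak/1e6:.2f} Mbit/s")
        print(f"OMA : strong {r_strong_oma/1e6:.2f}, weak {r_weak_oma/1e6:.2f} Mbit/s")
        # With these numbers both users achieve a higher rate under NOMA than under OMA.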

    Data-driven Inverse Optimization with Imperfect Information

    In data-driven inverse optimization an observer aims to learn the preferences of an agent who solves a parametric optimization problem depending on an exogenous signal. Thus, the observer seeks the agent's objective function that best explains a historical sequence of signals and corresponding optimal actions. We focus here on situations where the observer has imperfect information, that is, where the agent's true objective function is not contained in the search space of candidate objectives, where the agent suffers from bounded rationality or implementation errors, or where the observed signal-response pairs are corrupted by measurement noise. We formalize this inverse optimization problem as a distributionally robust program minimizing the worst-case risk that the predicted decision (i.e., the decision implied by a particular candidate objective) differs from the agent's actual response to a random signal. We show that our framework offers rigorous out-of-sample guarantees for different loss functions used to measure prediction errors and that the emerging inverse optimization problems can be exactly reformulated as (or safely approximated by) tractable convex programs when a new suboptimality loss function is used. Extensive numerical tests show that the proposed distributionally robust approach to inverse optimization often attains better out-of-sample performance than state-of-the-art approaches.
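
    A minimal numeric sketch of a suboptimality-type loss is given below: for a candidate linear objective c, the loss of an observed response measures how far that response is from being optimal for c over the feasible set. The simplex feasible set and the data are assumptions chosen for illustration; the sketch does not implement the paper's distributionally robust reformulation.

        # Suboptimality-type loss for a linear candidate objective over a polytope.
        import numpy as np
        from scipy.optimize import linprog

        def suboptimality_loss(c, x_hat, A_ub, b_ub):
            """c^T x_hat - min_{x in X} c^T x, where X = {x >= 0 : A_ub x <= b_ub}."""
            best = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(c))
            return float(c @ x_hat - best.fun)

        # Feasible set: {x >= 0, sum(x) <= 1} in 3 dimensions (an assumed toy X(s)).
        A_ub, b_ub = np.ones((1, 3)), np.array([1.0])
        x_hat = np.array([0.2, 0.7, 0.1])          # observed (possibly noisy) agent response
        for c in (np.array([1.0, -1.0, 0.0]),      # candidate objective explaining x_hat fairly well
                  np.array([-1.0, 1.0, 0.0])):     # candidate objective explaining it poorly
            print(c, "loss =", suboptimality_loss(c, x_hat, A_ub, b_ub))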