
    Sorting by Strip Moves and Strip Swaps

    Genome rearrangement problems in computational biology [19, 29, 27] and zoning algorithms in optical character recognition [14, 4] have been modeled as combinatorial optimization problems related to the familiar problem of sorting, namely transforming arbitrary permutations into the identity permutation. The term permutation is used for an arbitrary arrangement of the integers 1, 2, ..., n, and the term identity permutation for the arrangement of 1, 2, ..., n in increasing order. When a permutation is viewed as a string of integers from 1 through n, any substring of it that is also a substring of the identity permutation is called a strip. The objective in the combinatorial optimization problems arising from these applications is to obtain the identity permutation from an arbitrary permutation using the fewest applications of a particular chosen strip operation. The strip operations investigated thus far in the literature include strip moves, transpositions, reversals, and block interchanges [16, 2, 25, 11, 34]. However, most existing research on sorting by strip operations has focused on obtaining hardness results or designing approximation algorithms, with little work thus far on implementing the proposed approximation algorithms. This research starts by implementing two existing algorithms [5, 34] and, as its main contributions, provides two new algorithms for sorting by strip swaps: 1) a greedy algorithm in which each strip swap reduces the number of strips the most and places the maximum number of strips in their correct positions; 2) an algorithm that uses the strategy of bringing the closest consecutive pairs together, called the closest consecutive pair (CCP) algorithm. The approximation ratios of the implemented algorithms are also estimated experimentally.
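
    A minimal sketch of the greedy strategy the abstract describes: decompose the permutation into strips, then repeatedly apply the strip swap that leaves the fewest strips, breaking ties by the number of elements already in their correct positions. Function names and the exact tie-breaking rule are assumptions for illustration; the paper's algorithm may differ in detail.

    ```python
    def strips(perm):
        """Split a permutation into maximal runs of consecutive integers."""
        runs, start = [], 0
        for i in range(1, len(perm)):
            if perm[i] != perm[i - 1] + 1:
                runs.append(perm[start:i])
                start = i
        runs.append(perm[start:])
        return runs

    def swap_strips(runs, i, j):
        """Exchange strips i and j (i < j) and flatten back to a permutation."""
        swapped = runs[:i] + [runs[j]] + runs[i + 1:j] + [runs[i]] + runs[j + 1:]
        return [x for run in swapped for x in run]

    def greedy_strip_swap_sort(perm):
        """Count the strip swaps the greedy rule uses to sort `perm`."""
        perm, moves = list(perm), 0
        target = sorted(perm)
        while perm != target:
            runs = strips(perm)
            candidates = (swap_strips(runs, i, j)
                          for i in range(len(runs))
                          for j in range(i + 1, len(runs)))
            # Primary criterion: fewest strips remaining; tie-break:
            # fewest elements out of place.
            perm = min(candidates,
                       key=lambda p: (len(strips(p)),
                                      sum(x != k + 1 for k, x in enumerate(p))))
            moves += 1
        return moves

    print(greedy_strip_swap_sort([3, 4, 1, 2]))  # prints 1: one swap sorts it
    ```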

    Dominant Strategies Implementation of the Critical Path Allocation in the Project Planning Problem

    In this paper we analyze the economic problem of allocating tasks on time in order to finish a complex project when information about task durations and precedence relations among tasks is privately owned by the agents who undertake each task. In order to achieve the efficient allocation of tasks (using the well-known critical path method from the Operations Research literature), the planner must design appropriate incentives and compensations for the agents based on the reported information. We show the existence of mechanisms that implement the efficient allocation of tasks on time in dominant strategies. When we further add desirable properties such as individual rationality, an impossibility result emerges.
    Keywords: Critical path, PERT, dominant strategies, implementation, task allocation, strategy-proofness, individual rationality.
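
    For concreteness, here is the critical path method the abstract relies on: a longest-path computation over the task precedence DAG. The four-task project data below are illustrative, not taken from the paper.

    ```python
    from functools import lru_cache

    # Hypothetical task data: durations and precedence relations.
    duration = {"A": 3, "B": 2, "C": 4, "D": 1}
    preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

    @lru_cache(maxsize=None)
    def earliest_finish(task):
        """Earliest time `task` can finish, given all its predecessors."""
        start = max((earliest_finish(p) for p in preds[task]), default=0)
        return start + duration[task]

    def critical_path():
        """Backtrack from the last-finishing task along binding predecessors."""
        path = [max(duration, key=earliest_finish)]
        while preds[path[-1]]:
            path.append(max(preds[path[-1]], key=earliest_finish))
        return list(reversed(path))

    print(max(earliest_finish(t) for t in duration), critical_path())
    # -> 8 ['A', 'C', 'D']
    ```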

    Distributed PCP Theorems for Hardness of Approximation in P

    We present a new distributed model of probabilistically checkable proofs (PCP). A satisfying assignment $x \in \{0,1\}^n$ to a CNF formula $\varphi$ is shared between two parties, where Alice knows $x_1, \dots, x_{n/2}$, Bob knows $x_{n/2+1}, \dots, x_n$, and both parties know $\varphi$. The goal is to have Alice and Bob jointly write a PCP that $x$ satisfies $\varphi$, while exchanging little or no information. Unfortunately, this model as-is does not allow for nontrivial query complexity. Instead, we focus on a non-deterministic variant, where the players are helped by Merlin, a third party who knows all of $x$. Using our framework, we obtain, for the first time, PCP-like reductions from the Strong Exponential Time Hypothesis (SETH) to approximation problems in P. In particular, under SETH we show that there are no truly-subquadratic approximation algorithms for Bichromatic Maximum Inner Product over $\{0,1\}$-vectors, Bichromatic LCS Closest Pair over permutations, Approximate Regular Expression Matching, and Diameter in Product Metric. All our inapproximability factors are nearly tight. In particular, for the first two problems we obtain nearly-polynomial factors of $2^{(\log n)^{1-o(1)}}$; only $(1+o(1))$-factor lower bounds (under SETH) were known before.
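
    To make the lower-bound statement concrete, here is the trivial exact quadratic-time algorithm for Bichromatic Maximum Inner Product over $\{0,1\}$-vectors, the baseline that, under SETH, no truly-subquadratic algorithm can even approximate within the stated factor. The small vector sets are illustrative only.

    ```python
    from itertools import product

    def max_inner_product(A, B):
        """Return the maximum <a, b> over all a in A, b in B (O(n^2 * d) time)."""
        return max(sum(x & y for x, y in zip(a, b)) for a, b in product(A, B))

    A = [(1, 0, 1, 1), (0, 1, 0, 0)]
    B = [(1, 1, 1, 0), (0, 0, 1, 1)]
    print(max_inner_product(A, B))  # -> 2
    ```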

    Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th Rule in Discrete Settings

    While evolutionary algorithms are known to be very successful for a broad range of applications, the algorithm designer is often left with many algorithmic choices, for example the size of the population, the mutation rates, and the crossover rates of the algorithm. These parameters are known to have a crucial influence on the optimization time and thus need to be chosen carefully, a task that often requires substantial effort. Moreover, the optimal parameters can change during the optimization process. It is therefore of great interest to design mechanisms that dynamically choose best-possible parameters. An example of such an update mechanism is the one-fifth success rule for step-size adaptation in evolution strategies. While in continuous domains this principle is also well understood from a mathematical point of view, no comparable theory is available for problems in discrete domains. In this work we show that the one-fifth success rule can be effective also in discrete settings. We consider the $(1+(\lambda,\lambda))$ GA proposed in [Doerr/Doerr/Ebel: From black-box complexity to designing new genetic algorithms, TCS 2015]. We prove that if its population size is chosen according to the one-fifth success rule, then the expected optimization time on OneMax is linear. This is better than what any static population size $\lambda$ can achieve and is asymptotically optimal also among all adaptive parameter choices.
    Comment: This is the full version of a paper that is to appear at GECCO 201
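
    A minimal sketch of the self-adjusting $(1+(\lambda,\lambda))$ GA on OneMax with the one-fifth success rule, in the spirit of the scheme the abstract describes: shrink $\lambda$ after an improving iteration, grow it by a quarter-power step otherwise. The constant F = 1.5 and the handling of equal fitness are assumptions for illustration, not the paper's exact setup.

    ```python
    import random

    def onemax(x):
        return sum(x)

    def one_fifth_ga(n, F=1.5):
        x, lam, evals = [random.randint(0, 1) for _ in range(n)], 1.0, 0
        while onemax(x) < n:
            k = max(1, round(lam))
            # Mutation phase: draw ell ~ Bin(n, lam/n), flip ell random
            # bits in each of k offspring, keep the best mutant.
            ell = sum(random.random() < lam / n for _ in range(n))
            mutants = []
            for _ in range(k):
                y = x[:]
                for i in random.sample(range(n), ell):
                    y[i] ^= 1
                mutants.append(y)
            xp = max(mutants, key=onemax)
            # Crossover phase: take each bit from the mutant with
            # probability 1/lam, keep the best of k crossover offspring.
            y = max(([xi if random.random() > 1 / lam else xpi
                      for xi, xpi in zip(x, xp)] for _ in range(k)),
                    key=onemax)
            evals += 2 * k
            # One-fifth success rule: shrink lambda on success, grow on failure.
            if onemax(y) > onemax(x):
                x, lam = y, max(lam / F, 1.0)
            else:
                if onemax(y) == onemax(x):
                    x = y
                lam = min(lam * F ** 0.25, n)
        return evals

    print(one_fifth_ga(100))  # number of fitness evaluations until optimum
    ```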

    Adiabatic Quantum State Generation and Statistical Zero Knowledge

    The design of new quantum algorithms has proven to be an extremely difficult task. This paper considers a different approach, studying the problem of 'quantum state generation'. This approach provides intriguing links between many different areas: quantum computation, adiabatic evolution, analysis of spectral gaps and ground states of Hamiltonians, rapidly mixing Markov chains, the complexity class statistical zero knowledge, quantum random walks, and more. We first show that many natural candidates for quantum algorithms can be cast as state generation problems. We define a paradigm for state generation, called 'adiabatic state generation', and develop tools for it, including methods for implementing very general Hamiltonians and ways to guarantee non-negligible spectral gaps. We use our tools to prove that adiabatic state generation is equivalent to state generation in the standard quantum computing model, and finally we show how to apply our techniques to generate interesting superpositions related to Markov chains.
    Comment: 35 pages, two figures
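
    A toy numerical illustration (not from the paper) of the adiabatic paradigm the abstract describes: interpolate H(s) = (1-s)H0 + sH1 between an easy Hamiltonian and a target one, and check that the spectral gap stays non-negligible along the path. The two-qubit Hamiltonians below are arbitrary placeholders.

    ```python
    import numpy as np

    # Hypothetical two-qubit example: H0 with an easy ground state,
    # H1 standing in for the Hamiltonian encoding the target state.
    H0 = np.diag([0.0, 1.0, 1.0, 2.0])
    X = np.array([[0, 1], [1, 0]], dtype=float)
    H1 = np.kron(X, X) + np.diag([0.0, 0.5, 0.5, 1.0])

    def spectral_gap(s):
        """Gap between the two lowest eigenvalues of H(s)."""
        evals = np.linalg.eigvalsh((1 - s) * H0 + s * H1)
        return evals[1] - evals[0]

    min_gap = min(spectral_gap(s) for s in np.linspace(0, 1, 101))
    print(f"minimum gap along the path: {min_gap:.4f}")
    ```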

    Knowledge-based machine vision systems for space station automation

    Computer vision techniques with potential for use on the space station and in related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module on the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.
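
    The abstract does not say how scene matching was implemented on the Perceptics hardware; purely as an illustration, here is one classical approach, normalized cross-correlation of a template against a scene image, sketched in NumPy.

    ```python
    import numpy as np

    def match_scene(scene, template):
        """Slide `template` over `scene`; return the best-match position and score."""
        th, tw = template.shape
        t = (template - template.mean()) / (template.std() + 1e-9)
        best, pos = -np.inf, (0, 0)
        for r in range(scene.shape[0] - th + 1):
            for c in range(scene.shape[1] - tw + 1):
                w = scene[r:r + th, c:c + tw]
                w = (w - w.mean()) / (w.std() + 1e-9)
                score = (w * t).mean()  # normalized cross-correlation
                if score > best:
                    best, pos = score, (r, c)
        return pos, best

    scene = np.random.rand(32, 32)
    template = scene[10:18, 5:13].copy()
    print(match_scene(scene, template))  # -> ((10, 5), ~1.0)
    ```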