
    Adaptive Computation of the Swap-Insert Correction Distance

    The Swap-Insert Correction distance from a string $S$ of length $n$ to another string $L$ of length $m \geq n$ over the alphabet $[1..d]$ is the minimum number of insertions, and swaps of pairs of adjacent symbols, converting $S$ into $L$. Contrary to other correction distances, computing it is NP-hard in the size $d$ of the alphabet. We describe an algorithm computing this distance in time within $O(d^2 n m g^{d-1})$, where $n_\alpha$ is the number of occurrences of $\alpha$ in $S$, $m_\alpha$ is the number of occurrences of $\alpha$ in $L$, and $g = \max_{\alpha\in[1..d]} \min\{n_\alpha, m_\alpha - n_\alpha\}$ measures the difficulty of the instance. The difficulty $g$ is bounded from above by various terms, such as the length of the shorter string $S$ and the maximum number of occurrences of a single character in $S$. These results illustrate how, in many cases, the correction distance between two strings can be easier to compute than in the worst case. Comment: 16 pages, no figures, long version of the extended abstract accepted to SPIRE 201
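    The difficulty measure $g$ defined above can be computed directly from the character counts of the two strings. A minimal sketch (illustrative only, not the paper's algorithm; the function name is hypothetical):

```python
from collections import Counter

def difficulty(S: str, L: str) -> int:
    """g = max over characters alpha of min(n_alpha, m_alpha - n_alpha),
    where n_alpha and m_alpha count the occurrences of alpha in S and L.
    Assumes every character occurs in L at least as often as in S, which
    is necessary for S to be convertible into L by insertions and swaps."""
    n, m = Counter(S), Counter(L)
    return max(min(n[a], m[a] - n[a]) for a in m)

print(difficulty("abab", "abab"))    # 0: identical strings are the easiest case
print(difficulty("aabb", "ababab"))  # 1
```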

    Are the artificially generated instances uniform in terms of difficulty?

    In the field of evolutionary computation, it is usual to generate artificial benchmarks of instances that are used as a test-bed to determine the performance of the algorithms at hand. In this context, a recent work on permutation problems analyzed the implications of generating instances uniformly at random (u.a.r.) when building such benchmarks. In particular, the authors analyzed instances as rankings of the solutions of the search space sorted according to their objective function value. Thus, two instances are considered equivalent when their objective functions induce the same ranking over the search space. Based on that analysis, they suggested that, when some restrictions hold, the probability of creating easy rankings is higher than that of creating difficult ones. In this paper, we continue along that line of research by adopting the framework of local search algorithms with the best-improvement criterion. In particular, we empirically analyze, in terms of difficulty, the instances (rankings) created u.a.r. for three popular problems: the Linear Ordering Problem, the Quadratic Assignment Problem and the Flowshop Scheduling Problem. As the neighborhood system is critical for the performance of local search algorithms, three different neighborhood systems have been considered: swap, interchange and insert. The conducted experiments reveal that (1) by sampling the parameters uniformly at random we obtain instances with a non-uniform distribution in terms of difficulty, (2) the distribution of the difficulty strongly depends on the problem-neighborhood pair considered, and (3) given a problem, the distribution of the difficulty seems to depend on the smoothness of the landscape induced by the neighborhood and on its size. Research Groups 2013-2018 (IT-609-13); TIN2016-78365-R (Spanish Ministry of Economy, Industry and Competitiveness)
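    As an illustration of the experimental framework described above (a hypothetical sketch, not the authors' code), the following implements best-improvement local search over a permutation together with the swap, interchange and insert neighborhoods; `objective` stands in for the objective function of any of the three problems:

```python
def best_improvement_local_search(perm, objective, neighbors):
    """Best-improvement hill climbing (minimization): scan the whole
    neighborhood, move to the best improving neighbor, stop at a local
    optimum.  The number of moves needed before stopping is one possible
    proxy for instance difficulty."""
    current, value = list(perm), objective(list(perm))
    while True:
        best, best_val = None, value
        for cand in neighbors(current):
            v = objective(cand)
            if v < best_val:
                best, best_val = cand, v
        if best is None:                 # local optimum reached
            return current, value
        current, value = best, best_val

def swap_neighbors(p):
    """Swap neighborhood: transpositions of adjacent positions."""
    for i in range(len(p) - 1):
        q = p[:]; q[i], q[i + 1] = q[i + 1], q[i]
        yield q

def interchange_neighbors(p):
    """Interchange neighborhood: transpositions of any two positions."""
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            q = p[:]; q[i], q[j] = q[j], q[i]
            yield q

def insert_neighbors(p):
    """Insert neighborhood: remove the element at position i and
    reinsert it at position j."""
    for i in range(len(p)):
        for j in range(len(p)):
            if i != j:
                q = p[:i] + p[i + 1:]
                q.insert(j, p[i])
                yield q
```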

    Noise-Adaptive Compiler Mappings for Noisy Intermediate-Scale Quantum Computers

    A massive gap exists between current quantum computing (QC) prototypes and the size and scale required for many proposed QC algorithms. Current QC implementations are prone to noise and variability, which affect their reliability, and yet, with fewer than 80 quantum bits (qubits) in total, they are too resource-constrained to implement error correction. The term Noisy Intermediate-Scale Quantum (NISQ) refers to these current and near-term systems of 1,000 qubits or fewer. Given NISQ's severe resource constraints, low reliability, and high variability in physical characteristics such as coherence time or error rates, it is of pressing importance to map computations onto them in ways that use resources efficiently and maximize the likelihood of successful runs. This paper proposes and evaluates backend compiler approaches to map and optimize high-level QC programs to execute with high reliability on NISQ systems with diverse hardware characteristics. Our techniques all start from an LLVM intermediate representation of the quantum program (such as would be generated from high-level QC languages like Scaffold) and generate QC executables runnable on the IBM Q public QC machine. We then use this framework to implement and evaluate several optimal and heuristic mapping methods. These methods vary in how they account for the availability of dynamic machine calibration data, the relative importance of various noise parameters, the different possible routing strategies, and the relative importance of compile-time scalability versus runtime success. Using real-system measurements, we show that fine-grained spatial and temporal variations in hardware parameters can be exploited to obtain an average 2.9x (and up to 18x) improvement in program success rate over the industry-standard IBM Qiskit compiler. Comment: To appear in ASPLOS'1
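    The core idea of variation-aware mapping, steering program qubits towards the physical qubits with the best calibration data, can be sketched in a few lines (a toy illustration only: it ignores connectivity and routing, uses made-up error rates, and is not one of the optimal or heuristic mappers evaluated in the paper):

```python
def noise_aware_layout(n_program_qubits, gate_error, readout_error):
    """Rank physical qubits by a simple reliability score built from
    calibration data and assign program qubits to the best-ranked ones."""
    def score(q):
        return gate_error.get(q, 1.0) + readout_error.get(q, 1.0)
    ranked = sorted(gate_error, key=score)
    return {prog: phys for prog, phys in enumerate(ranked[:n_program_qubits])}

# Hypothetical calibration numbers for a 5-qubit device:
gate_err = {0: 0.012, 1: 0.035, 2: 0.009, 3: 0.021, 4: 0.016}
meas_err = {0: 0.020, 1: 0.080, 2: 0.015, 3: 0.030, 4: 0.025}
print(noise_aware_layout(3, gate_err, meas_err))   # {0: 2, 1: 0, 2: 4}
```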

    Design and Analysis of a Task-based Parallelization over a Runtime System of an Explicit Finite-Volume CFD Code with Adaptive Time Stepping

    FLUSEPA (Registered trademark in France No. 134009261) is an advanced simulation tool that supports a wide range of aerodynamic studies. It is the unstructured finite-volume solver developed by the company Airbus Safran Launchers to calculate compressible, multidimensional, unsteady, viscous and reactive flows around bodies in relative motion. The time integration in FLUSEPA is done using an explicit temporal adaptive method. The current production version of the code is based on MPI and OpenMP; this implementation leads to costly synchronizations that must be reduced. To tackle this problem, we present a study of a task-based parallelization of the aerodynamic solver of FLUSEPA using the StarPU runtime system and combining up to three levels of parallelism. We validate our solution by simulating (on a finite-volume mesh with 80 million cells) the propagation of a take-off blast wave for the Ariane 5 launcher. Comment: Accepted manuscript of a paper in the Journal of Computational Science
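    One common form of explicit temporal adaptation (an assumption here; the abstract does not spell out the exact scheme used by FLUSEPA) lets each cell advance with a power-of-two fraction of the global time step, so stiff cells take more, smaller sub-steps while the rest of the mesh keeps large ones. A minimal sketch:

```python
import math

def substeps_per_global_step(dt_global, dt_local_limit):
    """Return the number of equal sub-steps (a power of two) a cell must
    take within one global step so that each sub-step respects the cell's
    local stability limit."""
    k = max(0, math.ceil(math.log2(dt_global / dt_local_limit)))
    return 2 ** k

print(substeps_per_global_step(1.0, 0.3))   # 4 sub-steps of 0.25
print(substeps_per_global_step(1.0, 0.6))   # 2 sub-steps of 0.5
```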

    Measurement-based quantum computation on cluster states

    We give a detailed account of the one-way quantum computer, a scheme of quantum computation that consists entirely of one-qubit measurements on a particular class of entangled states, the cluster states. We prove its universality, describe why its underlying computational model is different from the network model of quantum computation, and relate quantum algorithms to mathematical graphs. Further, we investigate the scaling of the required resources and give a number of examples of circuits of practical interest, such as the circuits for the quantum Fourier transform and the quantum adder. Finally, we describe computation with clusters of finite size.
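    To make the elementary mechanism concrete, here is a small numerical sketch of the basic one-way computing step (illustrative only, not code from the paper; sign conventions for Rz and the byproduct operator vary): entangling the input qubit with a |+> ancilla via CZ and measuring the input at angle theta leaves the ancilla in X^s H Rz(-theta)|psi>, where s is the random measurement outcome.

```python
import numpy as np

def mbqc_step(psi, theta, outcome):
    """One elementary step of measurement-based quantum computation:
    CZ the input qubit onto a |+> ancilla, then project the input onto
    (|0> +/- e^{i*theta}|1>)/sqrt(2).  Returns the (renormalized) state
    left on the ancilla."""
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    state = np.diag([1, 1, 1, -1]) @ np.kron(psi, plus)        # CZ after preparation
    meas = np.array([1.0, (-1) ** outcome * np.exp(1j * theta)]) / np.sqrt(2)
    out = meas.conj() @ state.reshape(2, 2)                    # project qubit 1, keep qubit 2
    return out / np.linalg.norm(out)

# Numerical check against the predicted byproduct-corrected rotation X^s H Rz(-theta):
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Rz = lambda a: np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])
psi, theta = np.array([0.6, 0.8j]), 0.7
for s in (0, 1):
    lhs = mbqc_step(psi, theta, s)
    rhs = np.linalg.matrix_power(X, s) @ H @ Rz(-theta) @ psi
    rhs = rhs / np.linalg.norm(rhs)
    print(np.isclose(abs(np.vdot(lhs, rhs)), 1.0))             # equal up to a global phase
```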

    Evolution of genetic organization in digital organisms

    We examine the evolution of expression patterns and the organization of genetic information in populations of self-replicating digital organisms. Seeding the experiments with a linearly expressed ancestor, we witness the development of complex, parallel secondary expression patterns. Using principles from information theory, we demonstrate an evolutionary pressure towards overlapping expression, which causes variation (and hence further evolution) to drop sharply. Finally, we compare the overlapping sections of dominant genomes to those portions which are singly expressed and observe a significant difference in the entropy of their encoding. Comment: 18 pages with 5 embedded figures. Proc. of the DIMACS workshop on "Evolution as Computation", Jan. 11-12, Princeton, NJ, L. Landweber and E. Winfree, eds. (Springer, 1999)
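    The entropy comparison mentioned at the end can be illustrated with a per-site Shannon entropy computed over a population of equal-length genomes (an illustrative sketch, not the authors' analysis code):

```python
import math
from collections import Counter

def per_site_entropy(genomes):
    """Shannon entropy (bits) of the symbol distribution at each genome
    position across the population; low entropy indicates a strongly
    constrained position."""
    entropies = []
    for site in zip(*genomes):                 # one column per genome position
        counts = Counter(site)
        total = sum(counts.values())
        entropies.append(sum(-(c / total) * math.log2(c / total)
                             for c in counts.values()))
    return entropies

population = ["abcab", "abcab", "abdab", "cbcab"]
print(per_site_entropy(population))            # ~[0.81, 0.0, 0.81, 0.0, 0.0]
```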

    Fault-tolerant quantum computation with cluster states

    The one-way quantum computing model introduced by Raussendorf and Briegel [Phys. Rev. Lett. 86 (22), 5188-5191 (2001)] shows that it is possible to quantum compute using only a fixed entangled resource known as a cluster state, and adaptive single-qubit measurements. This model is the basis for several practical proposals for quantum computation, including a promising proposal for optical quantum computation based on cluster states [M. A. Nielsen, arXiv:quant-ph/0402005, accepted to appear in Phys. Rev. Lett.]. A significant open question is whether such proposals are scalable in the presence of physically realistic noise. In this paper we prove two threshold theorems which show that scalable fault-tolerant quantum computation may be achieved in implementations based on cluster states, provided the noise in the implementations is below some constant threshold value. Our first threshold theorem applies to a class of implementations in which entangling gates are applied deterministically, but with a small amount of noise. We expect this threshold to be applicable in a wide variety of physical systems. Our second threshold theorem is specifically adapted to proposals such as the optical cluster-state proposal, in which non-deterministic entangling gates are used. A critical technical component of our proofs is a pair of powerful theorems that relate the properties of noisy unitary operations restricted to act on a subspace of state space to extensions of those operations acting on the entire state space. Comment: 31 pages, 54 figures