
    Concrete resource analysis of the quantum linear system algorithm used to compute the electromagnetic scattering cross section of a 2D target

    We provide a detailed estimate of the logical resource requirements of the quantum linear system algorithm (QLSA) [Phys. Rev. Lett. 103, 150502 (2009)], including the recently described elaborations [Phys. Rev. Lett. 110, 250504 (2013)]. Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width, circuit depth, the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations, as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. To perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the example problem size N = 332,020,680, beyond which, according to a crude big-O complexity comparison, QLSA is expected to run faster than the best known classical linear-system solver. For this problem size, a desired calculation accuracy of 0.01 requires an approximate circuit width of 340 and a circuit depth of order 10^{25} if oracle costs are excluded, and a circuit width and depth of order 10^8 and 10^{29}, respectively, if oracle costs are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, this paper also demonstrates explicitly how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove conservative as more efficient advanced quantum-computation techniques are developed, they nevertheless provide a valid baseline for research targeting a reduction of the resource requirements, implying that a reduction by many orders of magnitude is necessary for the algorithm to become practical. Comment: 37 pages, 40 figures
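
    The "crude big-O comparison" that fixes the crossover size N can be reproduced with back-of-the-envelope arithmetic. The Python sketch below compares the original HHL scaling O(log(N) s^2 k^2 / eps) against a conjugate-gradient-style classical solver at O(N s k log(1/eps)); the sparsity s, condition number kappa, and unit prefactors are hypothetical stand-ins, not values taken from the paper.

        import math

        # Hedged back-of-the-envelope comparison of QLSA vs. a classical
        # iterative solver.  Exponents follow the original HHL scaling;
        # all constants are set to 1 and s, kappa are hypothetical.

        def qlsa_cost(N, s, kappa, eps):
            return math.log2(N) * s**2 * kappa**2 / eps

        def classical_cost(N, s, kappa, eps):
            return N * s * kappa * math.log(1.0 / eps)

        N = 332_020_680                  # problem size quoted in the abstract
        s, kappa, eps = 7, 1e4, 0.01     # illustrative values only

        print(f"QLSA      ~ {qlsa_cost(N, s, kappa, eps):.2e}")
        print(f"classical ~ {classical_cost(N, s, kappa, eps):.2e}")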

    Hamilton decompositions of regular expanders: a proof of Kelly's conjecture for large tournaments

    A long-standing conjecture of Kelly states that every regular tournament on n vertices can be decomposed into (n-1)/2 edge-disjoint Hamilton cycles. We prove this conjecture for large n. In fact, we prove a far more general result, based on our recent concept of robust expansion and a new method for decomposing graphs. We show that every sufficiently large regular digraph G on n vertices whose degree is linear in n and which is a robust outexpander has a decomposition into edge-disjoint Hamilton cycles. This enables us to obtain numerous further results; e.g. as a special case we confirm a conjecture of Erdős on packing Hamilton cycles in random tournaments. As corollaries to the main result, we also obtain several results on packing Hamilton cycles in undirected graphs, giving e.g. the best known result on a conjecture of Nash-Williams. We also apply our result to solve a problem on the domination ratio of the Asymmetric Travelling Salesman Problem, which was raised e.g. by Glover and Punnen as well as by Alon, Gutin and Krivelevich. Comment: the new version includes a standalone version of the 'robust decomposition lemma' for application in a subsequent paper
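
    Kelly's conjecture is easy to verify by brute force for tiny cases. The Python sketch below checks the rotational tournament on n = 5 vertices (edges i -> i+1 and i -> i+2 mod 5), finding the predicted (n-1)/2 = 2 edge-disjoint Hamilton cycles; this illustrates the statement only, not the proof technique.

        from itertools import combinations, permutations

        # Rotational tournament on 5 vertices: i beats i+1 and i+2 (mod 5).
        n = 5
        edges = {(i, (i + d) % n) for i in range(n) for d in (1, 2)}

        # Enumerate directed Hamilton cycles (anchored at vertex 0) whose
        # edges all belong to the tournament.
        cycles = []
        for perm in permutations(range(1, n)):
            order = (0,) + perm
            cyc = frozenset((order[i], order[(i + 1) % n]) for i in range(n))
            if cyc <= edges:
                cycles.append(cyc)

        # A Hamilton decomposition = (n-1)/2 cycles partitioning the edge set.
        decomps = [c for c in combinations(cycles, (n - 1) // 2)
                   if set().union(*c) == edges]
        print(len(decomps) > 0)   # True for the 5-vertex rotational tournament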

    Counting Hamilton decompositions of oriented graphs

    A Hamilton cycle in a directed graph G is a cycle that passes through every vertex of G. A Hamilton decomposition of G is a partition of its edge set into disjoint Hamilton cycles. In the late 60s Kelly conjectured that every regular tournament has a Hamilton decomposition. This conjecture was recently settled for large tournaments by Kühn and Osthus [15], who proved more generally that every r-regular n-vertex oriented graph G (without antiparallel edges) with r = cn for some fixed c > 3/8 has a Hamilton decomposition, provided n = n(c) is sufficiently large. In this paper we address the natural question of estimating the number of such decompositions of G and show that this number is n^{(1-o(1))cn^2}. In addition, we also obtain a new and much simpler proof for the approximate version of Kelly's conjecture.
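
    In the same spirit as the toy check after the previous abstract, the counting question can be explored by hand: the Python fragment below counts all Hamilton decompositions of the rotational tournament on n = 7 vertices (edges i -> i+d mod 7 for d = 1, 2, 3) into (n-1)/2 = 3 cycles. It illustrates the quantity being estimated, not the n^{(1-o(1))cn^2} asymptotics, which only bite for large n.

        from itertools import combinations, permutations

        # Rotational tournament on 7 vertices: i beats i+1, i+2, i+3 (mod 7).
        n = 7
        edges = {(i, (i + d) % n) for i in range(n) for d in (1, 2, 3)}

        # Hamilton cycles anchored at vertex 0, stored as frozen edge sets.
        cycles = []
        for perm in permutations(range(1, n)):
            order = (0,) + perm
            cyc = frozenset((order[i], order[(i + 1) % n]) for i in range(n))
            if cyc <= edges:
                cycles.append(cyc)

        # Count unordered triples of cycles that partition the 21 edges.
        count = sum(1 for trio in combinations(cycles, (n - 1) // 2)
                    if set().union(*trio) == edges)
        print(f"{len(cycles)} Hamilton cycles, {count} decompositions")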

    Algorithms in Lattice QCD

    The enormous computing resources that large-scale simulations in Lattice QCD require will continue to test the limits of even the largest supercomputers into the foreseeable future. The efficiency of such simulations will therefore concern practitioners of lattice QCD for some time to come. I begin with an introduction to those aspects of lattice QCD essential to the remainder of the thesis, and follow with a description of the Wilson fermion matrix M, an object which is central to my theme. The principal bottleneck in Lattice QCD simulations is the solution of linear systems involving M, and this topic is treated in depth. I compare some of the more popular iterative methods, including Minimal Residual, Conjugate Gradient on the Normal Equation, Bi-Conjugate Gradient, QMR, BiCGSTAB and BiCGSTAB2, and then turn to a study of block algorithms, a special class of iterative solvers for systems with multiple right-hand sides. Included in this study are two block algorithms which had not previously been applied to lattice QCD. The next chapters are concerned with a generalised Hybrid Monte Carlo algorithm (GHMC) for QCD simulations involving dynamical quarks. I focus squarely on the efficient and robust implementation of GHMC, and describe some tricks to improve its performance. A limited set of results from HMC simulations at various parameter values is presented. A treatment of the non-Hermitian Lanczos method and its application to the eigenvalue problem for M rounds off the theme of large-scale matrix computations.
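
    Since the Wilson matrix M is non-Hermitian, some of the solvers named above work by passing to the normal equations. Below is a minimal dense-matrix Python/NumPy sketch of conjugate gradient on the normal equation (M^H M x = M^H b); real lattice codes apply M as a sparse stencil rather than storing it, so this is illustrative only.

        import numpy as np

        def cgne(M, b, tol=1e-10, max_iter=1000):
            """Conjugate gradient on the normal equation M^H M x = M^H b,
            usable when M is non-Hermitian (a toy stand-in for the Wilson
            fermion matrix)."""
            x = np.zeros_like(b)
            r = M.conj().T @ (b - M @ x)      # normal-equation residual
            p = r.copy()
            rs = np.vdot(r, r).real
            for _ in range(max_iter):
                Mp = M @ p
                alpha = rs / np.vdot(Mp, Mp).real
                x += alpha * p
                r -= alpha * (M.conj().T @ Mp)
                rs_new = np.vdot(r, r).real
                if rs_new**0.5 < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        # Toy check on a random complex non-Hermitian system.
        rng = np.random.default_rng(0)
        M = rng.normal(size=(50, 50)) + 1j * rng.normal(size=(50, 50))
        b = rng.normal(size=50) + 1j * rng.normal(size=50)
        print(np.linalg.norm(M @ cgne(M, b) - b))   # should be near zero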

    Bayesian Markov switching tensor regression for time-varying networks

    We propose a new Bayesian Markov switching regression model for multi-dimensional arrays (tensors) of binary time series. We assume a zero-inflated logit dynamics with time-varying parameters and apply it to multi-layer temporal networks. The original contribution is threefold. First, in order to avoid over-fitting we propose a parsimonious parametrization of the model, based on a low-rank decomposition of the tensor of regression coefficients. Second, the parameters of the tensor model are driven by a hidden Markov chain, thus allowing for structural changes. The regimes are identified through prior constraints on the mixing probability of the zero-inflated model. Finally, we jointly model the dynamics of the network and of a set of variables of interest. We follow a Bayesian approach to inference, exploiting the Pólya-Gamma data augmentation scheme for logit models in order to provide an efficient Gibbs sampler for posterior approximation. We show the effectiveness of the sampler on simulated datasets of medium to large size; finally, we apply the methodology to a real dataset of financial networks.
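
    At the heart of the sampler is the Pólya-Gamma trick for logit models (Polson, Scott and Windle, 2013): conditional on PG-distributed auxiliary variables, the logistic likelihood becomes Gaussian in the coefficients. The Python sketch below shows one such Gibbs sweep for a plain Bayesian logistic regression; the paper's zero-inflation, low-rank tensor structure and Markov switching are all omitted, and the third-party polyagamma package is assumed for drawing PG(1, z) variates.

        import numpy as np
        from polyagamma import random_polyagamma  # assumed third-party PG sampler

        def pg_gibbs_step(X, y, beta, B_inv, rng):
            """One Gibbs sweep for y_i ~ Bernoulli(logit^-1(x_i' beta))
            with prior beta ~ N(0, B)."""
            # 1. Augment: omega_i ~ PG(1, x_i' beta).
            omega = random_polyagamma(1.0, X @ beta, random_state=rng)
            # 2. beta | omega, y is Gaussian (Polson-Scott-Windle identity).
            V = np.linalg.inv(X.T @ (omega[:, None] * X) + B_inv)
            m = V @ (X.T @ (y - 0.5))
            return rng.multivariate_normal(m, V)

        # Toy run on simulated data.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 3))
        true_beta = np.array([1.0, -2.0, 0.5])
        y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))
        beta = np.zeros(3)
        for _ in range(500):
            beta = pg_gibbs_step(X, y, beta, np.eye(3), rng)
        print(beta)   # draws should concentrate near true_beta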

    35th Symposium on Theoretical Aspects of Computer Science: STACS 2018, February 28-March 3, 2018, Caen, France
