
    Markov chains and optimality of the Hamiltonian cycle

    We consider the Hamiltonian cycle problem (HCP) embedded in a controlled Markov decision process. In this setting, HCP reduces to an optimization problem over a set of Markov chains corresponding to a given graph. We prove that Hamiltonian cycles are minimizers of the trace of the fundamental matrix over the set of all stochastic transition matrices. In the case of doubly stochastic matrices with symmetric linear perturbation, we show that Hamiltonian cycles minimize a diagonal element of the fundamental matrix for all admissible values of the perturbation parameter. In contrast to previous work on this topic, our arguments are primarily based on probabilistic rather than algebraic methods.
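
    As a small numerical sketch of the quantity involved (an illustration, not the paper's argument): assuming the usual fundamental matrix Z = (I - P + 1π^T)^{-1} of an irreducible stochastic matrix P with stationary distribution π, the code below compares trace(Z) for the Hamiltonian-cycle permutation matrix on five nodes with a generic stochastic matrix on the same node set; the cycle gives the smaller trace.

        import numpy as np

        def fundamental_trace(P):
            """trace of Z = (I - P + 1*pi^T)^{-1} for an irreducible stochastic matrix P."""
            n = P.shape[0]
            # stationary distribution: solve pi(I - P) = 0 together with sum(pi) = 1
            A = np.vstack([np.eye(n) - P.T, np.ones((1, n))])
            b = np.zeros(n + 1); b[-1] = 1.0
            pi, *_ = np.linalg.lstsq(A, b, rcond=None)
            Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
            return float(np.trace(Z))

        n = 5
        ham = np.roll(np.eye(n), 1, axis=1)              # Hamiltonian cycle: deterministic cyclic shift
        walk = (np.ones((n, n)) - np.eye(n)) / (n - 1)   # uniform random walk on the complete graph
        print(fundamental_trace(ham), fundamental_trace(walk))   # 3.0 versus 4.2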

    On the fastest finite Markov processes

    Consider a finite irreducible Markov chain with invariant probability π. Define its inverse communication speed as the expected time to go from x to y, when x and y are sampled independently according to π. In the discrete-time setting and when π is the uniform distribution υ, Litvak and Ejov have shown that the permutation matrices associated to Hamiltonian cycles are the fastest Markov chains. Here we prove (A) that the above optimality holds with respect to all processes compatible with a fixed graph of permitted transitions (assuming that it contains a Hamiltonian cycle), not only the Markov chains, and (B) that this result admits a natural extension in both discrete and continuous time when π is close to υ: the fastest Markov chains/processes are those moving successively through the points of a Hamiltonian cycle, with transition probabilities/jump rates dictated by π. Nevertheless, the claim is no longer true when π is significantly different from υ.
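
    To make the definition concrete (a toy check in the discrete-time, uniform-π setting, not part of the paper): the sketch below computes expected hitting times by solving one linear system per target and averages them over x and y drawn independently from π, for the Hamiltonian-cycle chain and for the random walk on the complete graph; the cycle is indeed the faster of the two.

        import numpy as np

        def mean_hitting_times(P):
            """h[x, y] = expected number of steps to reach y from x under P."""
            n = P.shape[0]
            h = np.zeros((n, n))
            for y in range(n):
                idx = [x for x in range(n) if x != y]
                A = np.eye(n - 1) - P[np.ix_(idx, idx)]
                h[idx, y] = np.linalg.solve(A, np.ones(n - 1))
            return h

        def inverse_speed(P, pi):
            """Expected time to go from x to y, with x and y drawn independently from pi."""
            return float(pi @ mean_hitting_times(P) @ pi)

        n = 6
        pi = np.full(n, 1.0 / n)                          # uniform invariant distribution
        cycle = np.roll(np.eye(n), 1, axis=1)             # Hamiltonian-cycle permutation matrix
        walk = (np.ones((n, n)) - np.eye(n)) / (n - 1)    # random walk on the complete graph
        print(inverse_speed(cycle, pi), inverse_speed(walk, pi))   # 2.5 versus about 4.17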

    Age Optimal Information Gathering and Dissemination on Graphs

    We consider the problem of timely exchange of updates between a central station and a set of ground terminals V, via a mobile agent that traverses across the ground terminals along a mobility graph G = (V, E). We design the trajectory of the mobile agent to minimize peak and average age of information (AoI), two newly proposed metrics for measuring timeliness of information. We consider randomized trajectories, in which the mobile agent travels from terminal i to terminal j with probability P_{i,j}. For the information gathering problem, we show that a randomized trajectory is peak age optimal and factor-8H average age optimal, where H is the mixing time of the randomized trajectory on the mobility graph G. We also show that the average age minimization problem is NP-hard. For the information dissemination problem, we prove that the same randomized trajectory is factor-O(H) peak and average age optimal. Moreover, we propose an age-based trajectory, which utilizes information about the current age at terminals, and show that it is factor-2 average age optimal in a symmetric setting.
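
    As a toy discrete-time simulation of the setup (a deliberately simplified age model and hypothetical helper names, not the paper's exact AoI definitions): the agent hops between terminals according to P_{i,j}, each terminal's age counts the slots since its last visit, and the run records a time-averaged age together with the largest age each terminal reaches just before a visit.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_ages(P, T=200_000):
            """Agent moves between terminals according to P; a terminal's age is the
            number of slots since the agent last visited it. Returns the time-averaged
            mean age and the mean over terminals of the largest pre-visit age."""
            n = P.shape[0]
            age = np.zeros(n)
            peak = np.zeros(n)
            avg = 0.0
            pos = 0
            for _ in range(T):
                age += 1.0
                pos = rng.choice(n, p=P[pos])
                peak[pos] = max(peak[pos], age[pos])   # age just before this visit resets it
                age[pos] = 0.0
                avg += age.mean()
            return avg / T, peak.mean()

        n = 5
        P = (np.ones((n, n)) - np.eye(n)) / (n - 1)    # one randomized trajectory on a complete mobility graph
        print(simulate_ages(P))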

    Finding a marked node on any graph by continuous-time quantum walk

    Spatial search by discrete-time quantum walk can find a marked node on any ergodic, reversible Markov chain P quadratically faster than its classical counterpart, i.e. in a time that scales as the square root of the hitting time of P. However, in the framework of continuous-time quantum walks, it was previously unknown whether such a general speedup is possible. In fact, in this framework, the widely used quantum algorithm by Childs and Goldstone fails to achieve such a speedup. Furthermore, it is not clear how to apply this algorithm to search an arbitrary Markov chain P. In this article, we aim to reconcile the apparent differences between the running times of spatial search algorithms in these two frameworks. We first present a modified version of the Childs and Goldstone algorithm which can search for a marked element on any ergodic, reversible P by performing a quantum walk on its edges. Although this approach improves the algorithmic running time for several instances, it cannot provide a generic quadratic speedup for every P. Secondly, using the framework of interpolated Markov chains, we provide a new spatial search algorithm by continuous-time quantum walk which can find a marked node on any P in a time that scales as the square root of the classical hitting time. In the scenario where multiple nodes are marked, the algorithmic running time scales as the square root of a quantity known as the extended hitting time. Our results establish a novel connection between discrete-time and continuous-time quantum walks and can be used to develop a number of Markov chain-based quantum algorithms. Comment: This version deals only with new algorithms for spatial search by continuous-time quantum walk (CTQW) on ergodic, reversible Markov chains. Please see arXiv:2004.12686 for results on the necessary and sufficient conditions for the optimality of the Childs and Goldstone algorithm for spatial search by CTQW.
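
    As a classical-side sketch of the interpolated-chain construction mentioned above (not the quantum algorithm itself; the absorbing-marked-nodes convention is an assumption): P(s) = (1 - s)P + sP', where P' agrees with P on unmarked rows and makes every marked node absorbing. The second helper computes the classical hitting time of the marked set, the quantity whose square root sets the quantum running time.

        import numpy as np

        def interpolated_chain(P, marked, s):
            """P(s) = (1 - s) * P + s * P', with P' equal to P on unmarked rows
            and every marked node made absorbing."""
            Pp = P.copy()
            for m in marked:
                Pp[m, :] = 0.0
                Pp[m, m] = 1.0
            return (1.0 - s) * P + s * Pp

        def hitting_time(P, marked, pi):
            """Expected steps to reach the marked set, starting from pi conditioned on unmarked nodes."""
            n = P.shape[0]
            unmarked = [x for x in range(n) if x not in marked]
            A = np.eye(len(unmarked)) - P[np.ix_(unmarked, unmarked)]
            h = np.linalg.solve(A, np.ones(len(unmarked)))
            start = pi[unmarked] / pi[unmarked].sum()
            return float(start @ h)

        n = 64
        P = (np.ones((n, n)) - np.eye(n)) / (n - 1)    # random walk on the complete graph
        pi = np.full(n, 1.0 / n)
        P_half = interpolated_chain(P, {0}, s=0.5)     # rows still sum to one
        HT = hitting_time(P, {0}, pi)
        print(np.allclose(P_half.sum(axis=1), 1.0), HT, np.sqrt(HT))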

    Alternative proof and interpretations for a recent state-dependent importance sampling scheme

    Recently, a state-dependent change of measure for simulating overflows in the two-node tandem queue was proposed by Dupuis et al. (Ann. Appl. Probab. 17(4):1306–1346, 2007), together with a proof of its asymptotic optimality. In the present paper, we give an alternative, shorter and simpler proof. As a side result, we obtain interpretations for several of the quantities involved in the change of measure in terms of likelihood ratios.
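
    This is not the state-dependent scheme of Dupuis et al., but a minimal sketch of the likelihood-ratio mechanics such arguments rest on, in the simplest overflow setting: a ±1 random walk with up-probability p < 1/2 is simulated under the swapped measure (up-probability 1 - p), and each overflow indicator is reweighted by the likelihood ratio of its path.

        import numpy as np

        rng = np.random.default_rng(1)

        def overflow_prob_is(p, B, trials=50_000):
            """Estimate P(walk hits B before 0 | start at 1) for a +/-1 random walk with
            up-probability p, by simulating under the swapped measure (up w.p. 1 - p)
            and weighting each path by its likelihood ratio."""
            q = 1.0 - p
            total = 0.0
            for _ in range(trials):
                x, L = 1, 1.0
                while 0 < x < B:
                    if rng.random() < q:    # up-step under the tilted measure
                        x += 1
                        L *= p / q          # likelihood-ratio factor for an up-step
                    else:                   # down-step under the tilted measure
                        x -= 1
                        L *= q / p          # likelihood-ratio factor for a down-step
                if x == B:
                    total += L
            return total / trials

        p, B = 0.3, 15
        r = (1 - p) / p
        exact = (r - 1) / (r**B - 1)        # gambler's-ruin benchmark
        print(overflow_prob_is(p, B), exact)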

    Hamiltonian cycles and subsets of discounted occupational measures

    We study a certain polytope arising from embedding the Hamiltonian cycle problem in a discounted Markov decision process. The Hamiltonian cycle problem can be reduced to finding particular extreme points of a certain polytope associated with the input graph. This polytope is a subset of the space of discounted occupational measures. We characterize the feasible bases of the polytope for a general input graph G, and determine the expected numbers of different types of feasible bases when the underlying graph is random. We use these results to demonstrate that adding certain constraints to reduce the polyhedral domain eliminates a large number of feasible bases that do not correspond to Hamiltonian cycles. Finally, we develop a random walk algorithm on the feasible bases of the reduced polytope and present some numerical results. We conclude with a conjecture on the feasible bases of the reduced polytope. Comment: revised based on referees' comments.
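
    As a hedged sketch of one standard way such discounted occupational-measure constraints are written for a directed graph (the exact polytope and normalisation studied in the paper may differ): arc variables x_{ij} satisfy sum_j x_{ij} - beta * sum_j x_{ji} = 1 at a chosen source node and = 0 elsewhere, and the occupation measure of the deterministic policy tracing a Hamiltonian cycle is feasible for these constraints.

        import numpy as np

        def occupation_constraints(arcs, n, beta, source=0):
            """Rows: sum_j x_{ij} - beta * sum_j x_{ji} = 1{i == source}, one row per node."""
            A = np.zeros((n, len(arcs)))
            for k, (i, j) in enumerate(arcs):
                A[i, k] += 1.0
                A[j, k] -= beta
            b = np.zeros(n)
            b[source] = 1.0
            return A, b

        n, beta = 4, 0.9
        cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]     # a Hamiltonian cycle in the input graph
        arcs = cycle + [(0, 2), (1, 3)]              # plus some non-cycle arcs
        A, b = occupation_constraints(arcs, n, beta)

        # discounted occupation measure of the policy that follows the cycle from node 0
        x = np.zeros(len(arcs))
        for k, arc in enumerate(cycle):
            x[arcs.index(arc)] = beta**k / (1 - beta**n)
        print(np.allclose(A @ x, b))                 # True: the cycle measure is feasible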

    Surveying structural complexity in quantum many-body systems

    Quantum many-body systems exhibit a rich and diverse range of exotic behaviours, owing to their underlying non-classical structure. These systems present a deep structure beyond what can be captured by measures of correlation and entanglement alone. Using tools from complexity science, we characterise such structure. We investigate the structural complexities that can be found within the patterns that manifest from the observational data of these systems. In particular, using two prototypical quantum many-body systems as test cases, the one-dimensional quantum Ising and Bose-Hubbard models, we explore how different information-theoretic measures of complexity are able to identify different features of such patterns. This work furthers the understanding of fully quantum notions of structure and complexity in quantum systems and dynamics. Comment: 9 pages, 5 figures.
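
    The quantities below are generic complexity-science estimates rather than the specific measures used in the paper, but they illustrate what information-theoretic measures of complexity applied to observational data can look like: block entropies of a symbolic measurement record, from which a finite-length entropy rate and excess-entropy estimate follow.

        import numpy as np
        from collections import Counter

        def block_entropy(seq, L):
            """Shannon entropy (bits) of the empirical distribution of length-L blocks."""
            blocks = Counter(tuple(seq[i:i + L]) for i in range(len(seq) - L + 1))
            p = np.array(list(blocks.values()), dtype=float)
            p /= p.sum()
            return float(-(p * np.log2(p)).sum())

        rng = np.random.default_rng(0)
        outcomes = rng.integers(0, 2, size=50_000)   # stand-in for a binary measurement record
        H = [block_entropy(outcomes, L) for L in range(1, 8)]
        h = H[-1] - H[-2]                            # finite-L entropy-rate estimate
        E = H[-1] - 7 * h                            # finite-L excess-entropy estimate
        print(h, E)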