
    Quantum speedup of classical mixing processes

    Most approximation algorithms for #P-complete problems (e.g., evaluating the permanent of a matrix or the volume of a polytope) work by reduction to the problem of approximate sampling from a distribution $\pi$ over a large set $S$. This problem is solved using the {\em Markov chain Monte Carlo} method: a sparse, reversible Markov chain $P$ on $S$ with stationary distribution $\pi$ is run to near equilibrium. The running time of this random walk algorithm, the so-called {\em mixing time} of $P$, is $O(\delta^{-1} \log 1/\pi_*)$ as shown by Aldous, where $\delta$ is the spectral gap of $P$ and $\pi_*$ is the minimum value of $\pi$. A natural question is whether a speedup of this classical method to $O(\sqrt{\delta^{-1}} \log 1/\pi_*)$, the diameter of the graph underlying $P$, is possible using {\em quantum walks}. We provide evidence for this possibility using quantum walks that {\em decohere} under repeated randomized measurements. We show: (a) decoherent quantum walks always mix, just like their classical counterparts; (b) the mixing time is a robust quantity, essentially invariant under any smooth form of decoherence; and (c) the mixing time of the decoherent quantum walk on the periodic lattice $\mathbb{Z}_n^d$ is $O(nd \log d)$, which is indeed $O(\sqrt{\delta^{-1}} \log 1/\pi_*)$ and is asymptotically no worse than the diameter of $\mathbb{Z}_n^d$ (the obvious lower bound), up to at most a logarithmic factor. Comment: 13 pages; v2 revised several parts.
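
    The abstract contrasts the classical mixing time $O(\delta^{-1} \log 1/\pi_*)$ with a conjectured quantum $O(\sqrt{\delta^{-1}} \log 1/\pi_*)$. The sketch below illustrates only the classical baseline, not the paper's decoherent quantum walk: it builds the lazy symmetric walk on the cycle $\mathbb{Z}_n$ (spectral gap $\Theta(1/n^2)$) and counts the steps needed to bring the worst-case total-variation distance to uniform below 1/4. Function names and parameters are my own.

        # Illustration of the classical baseline only: mixing of a lazy random
        # walk on the cycle Z_n, whose spectral gap is Theta(1/n^2), so the
        # classical mixing time grows roughly like n^2.
        import numpy as np

        def lazy_cycle_walk(n):
            """Transition matrix P of the lazy symmetric walk on Z_n."""
            P = np.zeros((n, n))
            for i in range(n):
                P[i, i] = 0.5
                P[i, (i - 1) % n] += 0.25
                P[i, (i + 1) % n] += 0.25
            return P

        def tv_mixing_time(P, eps=0.25):
            """Smallest t with worst-start total-variation distance to uniform <= eps."""
            n = P.shape[0]
            uniform = np.full(n, 1.0 / n)
            dist = np.eye(n)          # row i = distribution after t steps from state i
            t = 0
            while 0.5 * np.abs(dist - uniform).sum(axis=1).max() > eps:
                dist = dist @ P
                t += 1
            return t

        for n in (8, 16, 32):
            P = lazy_cycle_walk(n)
            gap = 1.0 - np.sort(np.linalg.eigvalsh(P))[-2]   # spectral gap delta
            print(f"n={n:3d}  mixing time={tv_mixing_time(P):5d}  1/gap={1/gap:8.1f}")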

    The utility of presentation and 4-hour high sensitivity troponin I to rule-out acute myocardial infarction in the emergency department

    Objectives: International guidance recommends that early serial sampling of high-sensitivity troponin be used to accurately identify acute myocardial infarction (AMI) in chest pain patients. The background evidence for this approach is limited. We evaluated whether presentation and 4-hour high-sensitivity troponin I (hs-cTnI) could be used to accurately rule out AMI. Design and methods: hs-cTnI was measured on presentation and at 4 hours in adult patients attending an emergency department with possible acute coronary syndrome. We determined the sensitivity for AMI of at least one hs-cTnI above the 99th percentile for a healthy population, either alone or in combination with new ischemic ECG changes. Both overall and sex-specific 99th percentiles were assessed. Patients with negative tests were designated low-risk. Results: 63 (17.1%) of 368 patients had AMI. The median (interquartile range) time from symptom onset to first blood sampling was 4.8 h (2.8-8.6). The sensitivity of the presentation and 4-h hs-cTnI using the overall 99th percentile was 92.1% (95% CI 82.4% to 97.4%) and the negative predictive value was 95.4% (92.3% to 97.4%), with 78.3% of patients classified as low-risk. Applying the sex-specific 99th percentile did not change the sensitivity, nor did the addition of the ECG. Conclusion: Using hs-cTnI > 99th percentile thresholds measured on presentation and at 4 hours was not a safe strategy to rule out AMI in this clinical setting, irrespective of whether sex-specific 99th percentiles were used or whether hs-cTnI was combined with ECG results.
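
    For readers checking the arithmetic, the reported figures are standard 2x2 rule-out metrics: sensitivity = TP/(TP+FN), negative predictive value = TN/(TN+FN), and the low-risk proportion is the fraction of patients with a negative test. The snippet below is a hypothetical worked example with invented counts, not the study's actual table.

        # Hypothetical worked example of the rule-out metrics quoted above;
        # the counts are invented, not the study's 2x2 table.
        def rule_out_metrics(tp, fn, tn, fp):
            sensitivity = tp / (tp + fn)                 # AMI cases flagged positive
            npv = tn / (tn + fn)                         # negative tests that are truly non-AMI
            low_risk = (tn + fn) / (tp + fn + tn + fp)   # proportion with a negative test
            return sensitivity, npv, low_risk

        sens, npv, low = rule_out_metrics(tp=58, fn=5, tn=270, fp=35)   # made-up counts
        print(f"sensitivity={sens:.1%}  NPV={npv:.1%}  low-risk={low:.1%}")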

    Non-intersecting Brownian walkers and Yang-Mills theory on the sphere

    We study a system of N non-intersecting Brownian motions on a line segment [0,L] with periodic, absorbing and reflecting boundary conditions. We show that the normalized reunion probabilities of these Brownian motions in the three models can be mapped to the partition function of two-dimensional continuum Yang-Mills theory on a sphere with gauge groups U(N), Sp(2N) and SO(2N), respectively. Consequently, we show that in each of these Brownian motion models, as one varies the system size L, a third-order phase transition occurs at a critical value $L = L_c(N) \sim \sqrt{N}$ in the large N limit. Close to the critical point, the reunion probability, properly centered and scaled, is identical to the Tracy-Widom distribution describing the probability distribution of the largest eigenvalue of a random matrix. For the periodic case we obtain the Tracy-Widom distribution corresponding to GUE random matrices, while for the absorbing and reflecting cases we get the Tracy-Widom distribution corresponding to GOE random matrices. In the absorbing case, the reunion probability is also identified with the maximal height of N non-intersecting Brownian excursions ("watermelons" with a wall), whose distribution in the asymptotic scaling limit is then described by the GOE Tracy-Widom law. In addition, large deviation formulas for the maximum height are computed. Comment: 37 pages, 4 figures, revised and published version. A typo has been corrected in Eq. (10).
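
    As a quick numerical companion to the Tracy-Widom statements (my illustration, not taken from the paper): the centred and scaled largest eigenvalue of an $N \times N$ GUE matrix, $N^{1/6}(\lambda_{\max} - 2\sqrt{N})$, converges to the GUE Tracy-Widom law that the periodic case of the abstract refers to.

        # Monte Carlo sketch: scaled largest GUE eigenvalue vs. the GUE
        # Tracy-Widom law (mean ~ -1.77, std ~ 0.90).  Illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)

        def gue_largest_eigenvalue(N):
            A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
            H = (A + A.conj().T) / 2          # GUE: Hermitian with Gaussian entries
            return np.linalg.eigvalsh(H)[-1]

        N, samples = 200, 300
        scaled = [N ** (1 / 6) * (gue_largest_eigenvalue(N) - 2 * np.sqrt(N))
                  for _ in range(samples)]
        print("sample mean/std:", np.mean(scaled), np.std(scaled))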

    Heart Fatty Acid Binding Protein and cardiac troponin: development of an optimal rule-out strategy for acute myocardial infarction

    Background: Improved ability to rapidly rule out Acute Myocardial Infarction (AMI) in patients presenting with chest pain will promote decongestion of the Emergency Department (ED) and reduce unnecessary hospital admissions. We assessed a new commercial Heart Fatty Acid Binding Protein (H-FABP) assay for additional diagnostic value when combined with cardiac troponin (using a high-sensitivity assay). Methods: H-FABP and high-sensitivity troponins I (hs-cTnI) and T (hs-cTnT) were measured in samples taken on presentation from patients attending the ED with symptoms triggering investigation for possible acute coronary syndrome. The optimal combination of H-FABP with each hs-cTn was defined as that which maximized the proportion of patients with a negative test (low-risk) whilst maintaining at least 99 % sensitivity for AMI. A negative test comprised both H-FABP and hs-cTn below the chosen threshold in the absence of ischaemic changes on the ECG. Results: One thousand seventy-nine patients were recruited, including 248 with AMI. Combining H-FABP with hs-cTnI and a negative ECG maintained at least 99 % sensitivity for AMI whilst classifying 40.9 % of patients as low-risk. The combination of H-FABP < 3.9 ng/mL and hs-cTnT < 7.6 ng/L with a negative ECG maintained the same sensitivity whilst classifying 32.1 % of patients as low-risk. Conclusions: In patients requiring rule-out of AMI, the addition of H-FABP to hs-cTn at presentation (in the absence of new ischaemic ECG findings) may accelerate clinical diagnostic decision making by identifying up to 40 % of such patients as low-risk for AMI on the basis of blood tests performed on presentation. If implemented, this has the potential to significantly accelerate triaging of patients for early discharge from the ED.
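
    The rule-out logic evaluated here is a simple conjunction: a patient is low-risk only if H-FABP and hs-cTn are both below their thresholds and the ECG shows no new ischaemic changes. The sketch below encodes that rule with the hs-cTnT thresholds quoted in the Results (3.9 ng/mL and 7.6 ng/L); the patient records and class names are invented for illustration.

        # Sketch of the rule-out rule described above: low-risk requires both
        # biomarkers below threshold AND a negative ECG.  Patients are invented.
        from dataclasses import dataclass

        @dataclass
        class Patient:
            h_fabp_ng_ml: float      # Heart Fatty Acid Binding Protein, ng/mL
            hs_ctnt_ng_l: float      # high-sensitivity troponin T, ng/L
            ischemic_ecg: bool       # new ischaemic ECG changes present?

        def low_risk(p, h_fabp_cutoff=3.9, hs_ctnt_cutoff=7.6):
            return (p.h_fabp_ng_ml < h_fabp_cutoff
                    and p.hs_ctnt_ng_l < hs_ctnt_cutoff
                    and not p.ischemic_ecg)

        cohort = [Patient(2.1, 4.0, False), Patient(5.2, 3.1, False), Patient(3.0, 9.8, True)]
        print([low_risk(p) for p in cohort])   # -> [True, False, False]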

    Increasing subsequences and the hard-to-soft edge transition in matrix ensembles

    Our interest is in the cumulative probabilities $\Pr(L(t) \le l)$ for the maximum length of increasing subsequences in Poissonized ensembles of random permutations, random fixed-point-free involutions and reversed random fixed-point-free involutions. It is shown that these probabilities are equal to the hard edge gap probability for matrix ensembles with unitary, orthogonal and symplectic symmetry respectively. The gap probabilities can be written as a sum over correlations for certain determinantal point processes. From these expressions a proof can be given that the limiting form of $\Pr(L(t) \le l)$ in the three cases is equal to the soft edge gap probability for matrix ensembles with unitary, orthogonal and symplectic symmetry respectively, thereby reclaiming theorems due to Baik-Deift-Johansson and Baik-Rains. Comment: LaTeX, 19 pages.
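
    A small Monte Carlo sketch (my own, under the standard Poissonization in which the permutation size is drawn from a Poisson distribution with mean $t^2$) estimates the cumulative probabilities $\Pr(L(t) \le l)$ discussed above, computing each longest increasing subsequence length by patience sorting.

        # Monte Carlo estimate of Pr(L(t) <= l) for Poissonized random
        # permutations: draw the size from Poisson(t^2), then compute the
        # longest increasing subsequence length by patience sorting.
        import bisect
        import numpy as np

        rng = np.random.default_rng(1)

        def lis_length(seq):
            """Longest increasing subsequence length via patience sorting."""
            piles = []
            for x in seq:
                i = bisect.bisect_left(piles, x)
                if i == len(piles):
                    piles.append(x)
                else:
                    piles[i] = x
            return len(piles)

        def prob_L_le(t, l, trials=2000):
            hits = 0
            for _ in range(trials):
                n = rng.poisson(t * t)              # Poissonized permutation size
                hits += lis_length(rng.permutation(n)) <= l
            return hits / trials

        t = 5.0
        print({l: prob_L_le(t, l) for l in (8, 10, 12)})   # L(t) concentrates near 2t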

    Growth models, random matrices and Painleve transcendents

    The Hammersley process relates to the statistical properties of the maximum length of all up/right paths connecting random points of a given density in the unit square from (0,0) to (1,1). This process can also be interpreted in terms of the height of the polynuclear growth model, or the length of the longest increasing subsequence in a random permutation. The cumulative distribution of the longest path length can be written in terms of an average over the unitary group. Versions of the Hammersley process in which the points are constrained to have certain symmetries of the square allow similar formulas. The derivation of these formulas is reviewed. Generalizing the original model to have point sources along two boundaries of the square, and appropriately scaling the parameters, gives a model in the KPZ universality class. Following works of Baik and Rains, and of Prähofer and Spohn, we review the calculation of the scaled cumulative distribution, in which a particular Painlevé II transcendent plays a prominent role. Comment: 27 pages, 5 figures.
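
    A direct simulation of the Hammersley picture (my sketch, not from the review): drop a Poisson number of uniform points in the unit square and measure the longest up/right chain, here with a simple $O(n^2)$ dynamic program; the chain length grows like twice the square root of the point density.

        # Hammersley process sketch: Poisson(rho) uniform points in the unit
        # square; longest strictly up/right chain by an O(n^2) dynamic program.
        import numpy as np

        rng = np.random.default_rng(2)

        def longest_up_right_chain(points):
            pts = sorted(points)                 # sort by x (ties have probability 0)
            best = []                            # best[i] = longest chain ending at pts[i]
            for i, (_, yi) in enumerate(pts):
                best.append(1 + max((best[j] for j, (_, yj) in enumerate(pts[:i]) if yj < yi),
                                    default=0))
            return max(best, default=0)

        rho = 400                                # mean number of points
        n = rng.poisson(rho)
        points = list(zip(rng.random(n), rng.random(n)))
        print("chain length:", longest_up_right_chain(points),
              "  2*sqrt(rho) =", 2 * np.sqrt(rho))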

    Almost uniform sampling via quantum walks

    Many classical randomized algorithms (e.g., approximation algorithms for #P-complete problems) utilize the following random walk algorithm for {\em almost uniform sampling} from a state space $S$ of cardinality $N$: run a symmetric ergodic Markov chain $P$ on $S$ for long enough to obtain a random state within $\epsilon$ total variation distance of the uniform distribution over $S$. The running time of this algorithm, the so-called {\em mixing time} of $P$, is $O(\delta^{-1} (\log N + \log \epsilon^{-1}))$, where $\delta$ is the spectral gap of $P$. We present a natural quantum version of this algorithm based on repeated measurements of the {\em quantum walk} $U_t = e^{-iPt}$. We show that it samples almost uniformly from $S$ with logarithmic dependence on $\epsilon^{-1}$, just as the classical walk $P$ does; previously, no such quantum walk algorithm was known. We then outline a framework for analyzing its running time and formulate two plausible conjectures which together would imply that it runs in time $O(\delta^{-1/2} \log N \log \epsilon^{-1})$ when $P$ is the standard transition matrix of a constant-degree graph. We prove each conjecture for a subclass of Cayley graphs. Comment: 13 pages; v2 added NSF grant info; v3 incorporated feedback.
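
    The measured-walk procedure described above can be prototyped numerically. The sketch below (my illustration; the constants and measurement schedule are not the paper's) builds the symmetric walk $P$ on a cycle, evolves a basis state under $U_t = e^{-iPt}$ for a random time, measures in the position basis, and repeats; over many independent runs the empirical distribution of the measured state spreads toward uniform.

        # Measured continuous-time quantum walk on the cycle Z_n (illustrative
        # parameters): evolve a basis state under U_t = exp(-iPt) for a random
        # time, measure in the position basis, repeat, and check near-uniformity.
        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(3)
        n = 16
        P = np.zeros((n, n))
        for i in range(n):
            P[i, (i - 1) % n] = P[i, (i + 1) % n] = 0.5   # symmetric walk on the cycle

        def measured_walk_sample(rounds=10, t_max=float(n)):
            state = 0
            for _ in range(rounds):
                U = expm(-1j * P * rng.uniform(0, t_max))  # U_t = exp(-iPt)
                probs = np.abs(U[:, state]) ** 2           # Born rule after evolving |state>
                probs /= probs.sum()
                state = rng.choice(n, p=probs)             # projective measurement
            return state

        samples = [measured_walk_sample() for _ in range(1000)]
        hist = np.bincount(samples, minlength=n) / len(samples)
        print("max deviation from uniform:", np.abs(hist - 1 / n).max())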

    Longest Increasing Subsequence under Persistent Comparison Errors

    We study the problem of computing a longest increasing subsequence in a sequence $S$ of $n$ distinct elements in the presence of persistent comparison errors. In this model, every comparison between two elements can return the wrong result with some fixed (small) probability $p$, and comparisons cannot be repeated. Computing the longest increasing subsequence exactly is impossible in this model; therefore, the objective is to identify a subsequence that (i) is indeed increasing and (ii) has a length that approximates the length of the longest increasing subsequence. We present asymptotically tight upper and lower bounds on both the approximation factor and the running time. In particular, we present an algorithm that computes an $O(\log n)$-approximation in time $O(n \log n)$, with high probability. This approximation relies on the fact that we can approximately sort $n$ elements in $O(n \log n)$ time such that the maximum dislocation of an element is at most $O(\log n)$. For the lower bounds, we prove that (i) there is a set of sequences such that, on a sequence picked randomly from this set, every algorithm must return an $\Omega(\log n)$-approximation with high probability, and (ii) any $O(\log n)$-approximation algorithm for longest increasing subsequence requires $\Omega(n \log n)$ comparisons, even in the absence of errors.
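
    To make the error model concrete, the toy simulation below (my sketch, not the paper's algorithm) fixes a wrong outcome for each pair with probability $p$ once and for all, then runs a standard $O(n^2)$ LIS dynamic program through that noisy comparator; the subsequence it reports is typically not truly increasing, which is exactly why only an approximation can be guaranteed.

        # Persistent-comparison-error model: each pair's comparison is wrong with
        # probability p, fixed forever.  A standard LIS dynamic program driven by
        # this oracle can report a subsequence that is not actually increasing.
        import random

        random.seed(0)
        n, p = 200, 0.05
        elements = list(range(n))
        random.shuffle(elements)

        flipped = {}                              # persistent error per unordered pair
        def noisy_less(a, b):
            key = (min(a, b), max(a, b))
            if key not in flipped:
                flipped[key] = random.random() < p
            truth = a < b
            return (not truth) if flipped[key] else truth

        def lis_with_oracle(seq, less):
            """O(n^2) LIS using only the supplied comparator; returns the subsequence."""
            best_len = [1] * len(seq)
            prev = [-1] * len(seq)
            for i in range(len(seq)):
                for j in range(i):
                    if less(seq[j], seq[i]) and best_len[j] + 1 > best_len[i]:
                        best_len[i], prev[i] = best_len[j] + 1, j
            i = max(range(len(seq)), key=lambda k: best_len[k])
            out = []
            while i != -1:
                out.append(seq[i])
                i = prev[i]
            return out[::-1]

        noisy = lis_with_oracle(elements, noisy_less)
        truly_increasing = all(a < b for a, b in zip(noisy, noisy[1:]))
        print(f"claimed length {len(noisy)}, truly increasing: {truly_increasing}")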