
    Attainable Values of Reset Thresholds

    An automaton is synchronizing if there exists a word that sends all states of the automaton to a single state. The reset threshold is the length of the shortest such word. We study the set RT_n of reset thresholds attainable by automata with n states. Relying on constructions of digraphs with known local exponents, we show that the intervals [1, (n^2-3n+4)/2] and [(p-1)(q-1), p(q-2)+n-q+1], where 2 ≤ p < q ≤ n and gcd(p,q)=1, belong to RT_n, even if we restrict our attention to strongly connected automata. Moreover, we prove that in this case the smallest value that does not belong to RT_n is at least n^2 - O(n^{1.7625} log n / log log n). This value increases further under certain conjectures about the gaps between consecutive prime numbers. We also show that any value smaller than n(n-1)/2 is attainable by an automaton with a sink state, and any value smaller than n^2 - O(n^{1.5}) is attainable in the general case. Furthermore, we solve the problem of the existence of slowly synchronizing automata over arbitrarily large alphabets by presenting, for every fixed alphabet size, an infinite series of irreducibly synchronizing automata with reset threshold n^2 - O(n).
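
    To make the definitions above concrete, the following short Python sketch (not from the paper) computes the reset threshold of a small DFA by breadth-first search over subsets of states; the function name reset_threshold and the example automaton cerny4 are used here only for illustration. cerny4 is the classical Černý automaton on four states, whose reset threshold is (4-1)^2 = 9.

```python
from collections import deque

def reset_threshold(n, delta):
    """Length of the shortest synchronizing (reset) word of a DFA.

    States are 0..n-1 and delta[a][q] is the state reached from state q on letter a.
    Breadth-first search over subsets of states, starting from the full state set;
    returns None if the automaton is not synchronizing.
    """
    start = frozenset(range(n))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        subset = queue.popleft()
        if len(subset) == 1:                # all states merged: the word so far is a reset word
            return dist[subset]
        for letter in delta:
            image = frozenset(letter[q] for q in subset)
            if image not in dist:
                dist[image] = dist[subset] + 1
                queue.append(image)
    return None                             # no singleton subset is reachable

# Cerny automaton C_4: letter a is the cyclic shift, letter b moves state 0 to 1 and fixes the rest.
cerny4 = [
    [1, 2, 3, 0],   # letter a
    [1, 1, 2, 3],   # letter b
]
print(reset_threshold(4, cerny4))           # prints 9 = (4 - 1)^2
```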

    Measurement based entanglement under conditions of extreme photon loss

    The act of measuring optical emissions from two remote qubits can entangle them. By demanding that a photon from each qubit reaches the detectors, one can ensure that no photon was lost. But the failure rate then rises quadratically with the loss probability. In [1] this resulted in 30 successes per billion attempts. We describe a means to exploit the low-grade entanglement heralded by the detection of a lone photon: a subsequent perfect operation is quickly achieved by consuming this noisy resource. We require only two qubits per node, and can tolerate both path-length variation and loss asymmetry. The impact of photon loss upon the failure rate is then linear; realistic high-loss devices can gain orders of magnitude in performance and thus support QIP.
    Comment: Contains an extension of the protocol that makes it robust against asymmetries in path length and photon loss.
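
    As a rough numerical illustration of the scaling described above (the transmission values below are assumed purely for illustration and are not taken from the paper): if each emitted photon independently reaches the detectors with probability eta, a scheme that demands a detected photon from each qubit succeeds at a rate proportional to eta^2, whereas a scheme heralded by a lone photon succeeds at a rate proportional to eta.

```python
# Toy comparison of heralding rates under photon loss; the eta values are illustrative only.
for eta in (0.1, 0.01, 0.001):
    two_photon = eta ** 2    # both photons must arrive: quadratic in the transmission
    one_photon = eta         # a single detected photon suffices: linear in the transmission
    print(f"eta = {eta:g}: two-photon herald ~ {two_photon:g}, single-photon herald ~ {one_photon:g}")
```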

    Distributed quantum information processing with minimal local resources

    We present a protocol for growing graph states, the resource for one-way quantum computing, when the available entanglement mechanism is highly imperfect. The distillation protocol is frugal in its use of ancilla qubits, requiring only a single ancilla qubit when the noise is dominated by one Pauli error, and two for a general noise model. The protocol works with such scarce local resources by never post-selecting on the measurement outcomes of purification rounds. We find that such a strategy causes the fidelity to follow a biased random walk, and that a target fidelity is likely to be reached more rapidly than for a comparable post-selecting protocol. An analysis is presented of how imperfect local operations limit the attainable fidelity. For example, a single Pauli error rate of 20% can be distilled down to about 10 times the imperfection in local operations.
    Comment: 4 pages of main paper with an additional 1-page appendix, 5 figures. Please contact me with any comments.
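
    As a toy illustration of the random-walk argument (this is not the authors' purification protocol; the fidelity step size and the 70% per-round success probability are invented purely for illustration), the sketch below compares the average number of rounds needed to reach a target fidelity when every round is kept against a strategy that discards the state and restarts whenever a round fails:

```python
import random

def rounds_keep_all(f0, target, step, p_up, rng):
    """Accept every round: the fidelity performs a biased random walk until it reaches the target."""
    f, rounds = f0, 0
    while f < target:
        f += step if rng.random() < p_up else -step
        rounds += 1
    return rounds

def rounds_restart(f0, target, step, p_up, rng):
    """Post-select: a failed round throws the state away and restarts from the initial fidelity."""
    f, rounds = f0, 0
    while f < target:
        f = f + step if rng.random() < p_up else f0
        rounds += 1
    return rounds

rng = random.Random(0)
trials = 5000
keep = sum(rounds_keep_all(0.60, 0.95, 0.05, 0.7, rng) for _ in range(trials)) / trials
restart = sum(rounds_restart(0.60, 0.95, 0.05, 0.7, rng) for _ in range(trials)) / trials
print(f"biased walk: {keep:.1f} rounds on average; restart on failure: {restart:.1f}")
```

    With an upward bias the walk drifts toward the target, so an occasional downward step costs far less than discarding all accumulated progress, which is the intuition behind the comparison above.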

    On random primitive sets, directable NDFAs and the generation of slowly synchronizing DFAs

    We tackle the problem of the randomized generation of slowly synchronizing deterministic automata (DFAs) by generating random primitive sets of matrices. We show that when the randomized procedure is too simple, the exponent of the generated sets is O(n log n) with high probability, and thus the procedure fails to return DFAs with large reset threshold. We extend this result to random nondeterministic automata (NDFAs) by showing, in particular, that a uniformly sampled NDFA has both a 2-directing word and a 3-directing word of length O(n log n) with high probability. We then present a more involved randomized algorithm that manages to generate DFAs with large reset threshold, and we finally leverage this finding to exhibit new families of DFAs with reset threshold of order Ω(n^2/4).
    Comment: 31 pages, 9 figures. arXiv admin note: text overlap with arXiv:1805.0672
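
    For context, the exponent of a set of nonnegative matrices is the length of the shortest product of its elements (repetitions allowed) that is entrywise positive. The Python sketch below is not the paper's generation procedure; it simply computes this exponent by breadth-first search over Boolean matrix products, which is feasible only for small n.

```python
def bool_mul(A, B, n):
    """Boolean product of two n x n 0/1 matrices given as tuples of tuples."""
    return tuple(
        tuple(int(any(A[i][k] and B[k][j] for k in range(n))) for j in range(n))
        for i in range(n)
    )

def set_exponent(generators, n, max_len=100):
    """Shortest length of a product of the generators that is entrywise positive.

    Products are evaluated over the Boolean semiring, so an entrywise positive
    product is exactly the all-ones matrix.  Returns None if no positive product
    of length <= max_len exists (in particular, if the set is not primitive).
    """
    gens = [tuple(tuple(row) for row in M) for M in generators]
    ones = tuple(tuple(1 for _ in range(n)) for _ in range(n))
    level = set(gens)                       # all distinct products of the current length
    for length in range(1, max_len + 1):
        if ones in level:
            return length
        level = {bool_mul(M, G, n) for M in level for G in gens}
    return None

# A small primitive pair on 3 states: P is the cyclic permutation 0 -> 1 -> 2 -> 0,
# and Q is the same cycle with an extra self-loop at state 0.
P = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
Q = [[1, 1, 0], [0, 0, 1], [1, 0, 0]]
print(set_exponent([P, Q], 3))              # prints 4 (for example, Q^4 is entrywise positive)
```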

    Lower Bounds on Avoiding Thresholds

    Measures of metacognition on signal-detection theoretic models

    Analysing metacognition, specifically knowledge of the accuracy of internal perceptual, memorial or other knowledge states, is vital for many strands of psychology, including determining the accuracy of feelings of knowing and discriminating conscious from unconscious cognition. Quantifying metacognitive sensitivity is, however, more challenging than quantifying basic stimulus sensitivity. Under popular signal detection theory (SDT) models for stimulus classification tasks, approaches based on type II receiver operating characteristic (ROC) curves or type II d-prime risk confounding metacognition with response biases in either the type I (classification) or type II (metacognitive) task. A new approach introduces meta-d′: the type I d-prime that would have led to the observed type II data had the subject used all of the type I information. Here we (i) further establish the inconsistency of the type II d-prime and ROC approaches with new explicit analyses of the standard SDT model, and (ii) analyse, for the first time, the behaviour of meta-d′ under non-trivial scenarios, such as when metacognitive judgments utilize enhanced or degraded versions of the type I evidence. Analytically, meta-d′ values typically reflect the underlying model well and are stable under changes in decision criteria; however, in relatively extreme cases meta-d′ can become unstable. We explore the bias and variance of in-sample measurements of meta-d′ and supply MATLAB code for estimation in general cases. Our results support meta-d′ as a useful measure of metacognition, and provide rigorous methodology for its application. Our recommendations are useful for any researchers interested in assessing metacognitive accuracy.
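
    For reference, the type I SDT quantities that meta-d′ builds on can be computed directly from hit and false-alarm rates. The short Python sketch below is a standard textbook computation, not the authors' meta-d′ estimation code (which they supply in MATLAB); meta-d′ itself is the value of d′ that, under the same SDT model, would reproduce the observed type II (confidence) data, and estimating it requires fitting the type II response counts.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Type I sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Type I response bias (criterion): c = -(z(hit rate) + z(false-alarm rate)) / 2."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2

# Example: 80% hits and 30% false alarms on the type I classification task.
print(d_prime(0.8, 0.3))     # ~ 1.37
print(criterion(0.8, 0.3))   # ~ -0.16
```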