    Analysing correlated noise on the surface code using adaptive decoding algorithms

    Laboratory hardware is rapidly progressing towards a state where quantum error-correcting codes can be realised. As such, we must learn how to deal with the complex nature of the noise that may occur in real physical systems. Single-qubit Pauli errors are commonly used to study the behaviour of error-correcting codes, but in general we might expect the environment to introduce correlated errors to a system. Given some knowledge of the structures that errors commonly take, it may be possible to adapt the error-correction procedure to compensate for this noise, but performing full state tomography on a physical system to analyse this structure quickly becomes impossible as the size increases beyond a few qubits. Here we develop and test new methods to analyse a particular class of spatially correlated errors by making use of parametrised families of decoding algorithms. We demonstrate our method numerically using a diffusive noise model. We show that information can be learnt about the parameters of the noise model, and additionally that the logical error rates can be improved. We conclude by discussing how our method could be utilised in a practical setting and propose extensions of our work to study more general error models. Comment: 19 pages, 8 figures, comments welcome; v2 - minor typos corrected, some references added; v3 - accepted to Quantum.
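
    The idea described above can be summarised as: run a parametrised family of decoders on the same syndrome data, and use the parameter value that minimises the logical error rate as a proxy for the structure of the noise. The following is a minimal sketch of that outer loop only; toy_logical_error_rate, its quadratic mismatch penalty, and all numerical values are illustrative stand-ins, not the paper's diffusive noise model or decoders.

    # Minimal sketch: sweep a one-parameter family of decoders, estimate each
    # member's logical error rate by Monte Carlo, and read off the best-performing
    # parameter as an estimate of the underlying correlation strength.
    import numpy as np

    rng = np.random.default_rng(0)
    TRUE_CORRELATION = 0.3  # hypothetical correlation strength of the device noise

    def toy_logical_error_rate(decoder_param):
        # Stand-in for the failure rate of a decoder tuned to decoder_param:
        # a base rate plus a penalty that grows with the mismatch between the
        # decoder's assumed correlation and the true one.  In practice this
        # number would come from running the decoder on real or simulated syndromes.
        base, sensitivity = 0.05, 0.4
        return base + sensitivity * (decoder_param - TRUE_CORRELATION) ** 2

    def estimate_rate(decoder_param, shots=20_000):
        # Monte Carlo estimate: count logical failures over `shots` trials.
        failures = rng.binomial(shots, toy_logical_error_rate(decoder_param))
        return failures / shots

    params = np.linspace(0.0, 1.0, 21)
    rates = [estimate_rate(p) for p in params]
    best = params[int(np.argmin(rates))]
    print(f"best decoder parameter ~ {best:.2f} (true correlation {TRUE_CORRELATION})")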

    Model waveform accuracy standards for gravitational wave data analysis

    Model waveforms are used in gravitational wave data analysis to detect and then to measure the properties of a source by matching the model waveforms to the signal from a detector. This paper derives accuracy standards for model waveforms which are sufficient to ensure that these data analysis applications are capable of extracting the full scientific content of the data, but without demanding excessive accuracy that would place undue burdens on the model waveform simulation community. These accuracy standards are intended primarily for broadband model waveforms produced by numerical simulations, but the standards are quite general and apply equally to such waveforms produced by analytical or hybrid analytical-numerical methods.
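
    The "matching" referred to above is quantified with the standard noise-weighted inner product used throughout gravitational-wave data analysis. As a brief sketch of the quantities involved (the notation here follows common usage and is not necessarily the paper's):

    \langle h_1, h_2 \rangle = 4\,\mathrm{Re}\int_0^\infty \frac{\tilde{h}_1(f)\,\tilde{h}_2^*(f)}{S_n(f)}\,df,
    \qquad \| h \|^2 = \langle h, h \rangle,

    where S_n(f) is the detector noise power spectral density and tildes denote Fourier transforms. Writing the model-waveform error as \delta h = h_{\mathrm{model}} - h_{\mathrm{exact}}, accuracy standards of the kind derived in the paper bound the noise-weighted size \| \delta h \| relative to the signal strength \rho = \| h_{\mathrm{exact}} \| (the optimal signal-to-noise ratio), with the precise tolerances depending on whether the waveform is used for detection or for parameter measurement.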

    Fault-tolerant error correction with the gauge color code

    The constituent parts of a quantum computer are inherently vulnerable to errors. To address this, we have developed quantum error-correcting codes to protect quantum information from noise. However, discovering codes that are capable of a universal set of computational operations with the minimal cost in quantum resources remains an important and ongoing challenge. One proposal of significant recent interest is the gauge color code. Notably, this code may offer a reduced resource cost over other well-studied fault-tolerant architectures using a new method, known as gauge fixing, for performing the non-Clifford logical operations that are essential for universal quantum computation. Here we examine the gauge color code when it is subject to noise. Specifically, we make use of single-shot error correction to develop a simple decoding algorithm for the gauge color code, and we numerically analyse its performance. Remarkably, we find threshold error rates comparable to those of other leading proposals. Our results thus provide encouraging preliminary data of a comparative study between the gauge color code and other promising computational architectures. Comment: v1 - 5+4 pages, 11 figures, comments welcome; v2 - minor revisions, new supplemental including a discussion on correlated errors and details on threshold calculations; v3 - Author accepted manuscript. Accepted on 21/06/16. Deposited on 29/07/16. 9+5 pages, 17 figures, new version includes resource scaling analysis in below threshold regime, see eqn. (4) and methods section.
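
    Thresholds of the kind reported above are typically estimated by a crossing-point analysis: the logical error rate is computed by Monte Carlo at several physical error rates and code distances, and the threshold is read off where the curves for different distances cross. A toy sketch of that analysis follows; the scaling ansatz and all numbers are generic stand-ins, not the paper's single-shot decoder or its data.

    # Toy crossing-point analysis: below threshold the logical error rate falls
    # with code distance, above threshold it grows, so the crossing locates p_th.
    import numpy as np

    P_TH = 0.03  # hypothetical threshold built into the stand-in model

    def toy_logical_rate(p, distance):
        # Generic below-threshold scaling ansatz, p_L ~ A * (p / p_th)^(d/2),
        # capped at 0.5 (a completely scrambled logical qubit).
        return min(0.5, 0.1 * (p / P_TH) ** (distance / 2))

    for p in np.linspace(0.01, 0.05, 9):
        rates = {d: toy_logical_rate(p, d) for d in (5, 7, 9)}
        trend = "suppressed" if rates[9] < rates[5] else "not suppressed"
        summary = ", ".join(f"d={d}: {r:.2e}" for d, r in rates.items())
        print(f"p = {p:.3f}: {summary}  -> errors {trend} with distance")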

    Behavioral biases when viewing multiplexed scenes: scene structure and frames of reference for inspection

    Where people look when viewing a scene has been a much explored avenue of vision research (e.g., see Tatler, 2009). Current understanding of eye guidance suggests that a combination of high and low-level factors influence fixation selection (e.g., Torralba et al., 2006), but that there are also strong biases toward the center of an image (Tatler, 2007). However, situations where we view multiplexed scenes are becoming increasingly common, and it is unclear how visual inspection might be arranged when content lacks normal semantic or spatial structure. Here we use the central bias to examine how gaze behavior is organized in scenes that are presented in their normal format, or disrupted by scrambling the quadrants and separating them by space. In Experiment 1, scrambling the scenes had the strongest influence on gaze allocation. Observers were highly biased by the quadrant center, although physical separation between quadrants did not enhance this bias. However, the center of the display still contributed to fixation selection above chance, and was most influential early in scene viewing. When the top left quadrant was held constant across all conditions in Experiment 2, fixation behavior was significantly influenced by the overall arrangement of the display, with fixations being biased toward the quadrant center when the other three quadrants were scrambled (despite the visual information in this quadrant being identical in all conditions). When scenes are scrambled into four quadrants and semantic contiguity is disrupted, observers no longer appear to view the content as a single scene (despite it consisting of the same visual information overall), but rather anchor visual inspection around the four separate “sub-scenes.” Moreover, the frame of reference that observers use when viewing the multiplex seems to change across viewing time: from an early bias toward the display center to a later bias toward quadrant centers.
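
    The frame-of-reference question above amounts to asking whether fixations cluster around the centre of the whole display or around the centres of the four quadrants. A minimal sketch of that comparison is given below; the display geometry and the fixation data are synthetic stand-ins, not the experiment's eye-tracking data.

    # Compare mean fixation distance to the display centre with the mean
    # distance to the nearest quadrant centre (synthetic data for illustration).
    import numpy as np

    rng = np.random.default_rng(1)
    WIDTH, HEIGHT = 1024, 768
    display_centre = np.array([WIDTH / 2, HEIGHT / 2])
    quadrant_centres = np.array([
        [WIDTH / 4, HEIGHT / 4], [3 * WIDTH / 4, HEIGHT / 4],
        [WIDTH / 4, 3 * HEIGHT / 4], [3 * WIDTH / 4, 3 * HEIGHT / 4],
    ])

    # Synthetic fixations scattered around the quadrant centres.
    fixations = np.vstack([c + rng.normal(scale=60, size=(50, 2)) for c in quadrant_centres])

    dist_display = np.linalg.norm(fixations - display_centre, axis=1)
    dist_quadrant = np.linalg.norm(fixations[:, None, :] - quadrant_centres[None, :, :], axis=2).min(axis=1)

    print(f"mean distance to display centre:  {dist_display.mean():.1f} px")
    print(f"mean distance to quadrant centre: {dist_quadrant.mean():.1f} px")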

    On the factors causing processing difficulty of multiple-scene displays

    Multiplex viewing of static or dynamic scenes is an increasingly common feature of screen media. Most existing multiplex experiments have examined detection across increasing scene numbers, but currently no systematic evaluation of the factors that might produce difficulty in processing multiplexes exists. Across five experiments we provide such an evaluation. Experiment 1 characterises the difficulty of change detection when the number of scenes is increased. Experiment 2 reveals that the increased difficulty across multiple-scene displays is driven by the total amount of visual information, which accounts for differences in change detection times regardless of whether this information is presented across multiple scenes or contained in one scene. Experiment 3 shows that whether the quadrants of a display were drawn from the same or different scenes did not affect change detection performance. Experiment 4 demonstrates that knowing which scene the change will occur in allows participants to perform at monoplex level. Finally, Experiment 5 finds that changes of central interest in multiplexed scenes are detected far more easily than changes of marginal interest, to such an extent that a centrally interesting object removal in nine screens is detected more rapidly than a marginally interesting object removal in four screens. Processing multiple-screen displays therefore seems to depend on the amount of information in the display, and on the importance of that information to the task, rather than simply on the number of scenes. We discuss the theoretical and applied implications of these findings.