
    Randomizer for High Data Rates

    NASA, along with a number of other space agencies, now recognizes that the currently recommended CCSDS randomizer used for telemetry (TM) is too short. When multiple applications of the PN8 maximal length sequence (MLS) are required to fully cover a channel access data unit (CADU), spectral problems appear in the form of elevated spurious discretes (spurs). The randomizer was originally called a bit transition generator (BTG) precisely because its primary value was thought to be ensuring sufficient bit transitions for the bit/symbol synchronizer to acquire and remain locked. We (NASA) have shown that the old BTG concept is a limited view of the real value of the randomizer sequence: the randomizer also aids signal acquisition and minimizes the potential for false decoder lock. Under the guidelines considered here there are multiple maximal length sequences over GF(2) that appear attractive for this application. Although there may be mitigating reasons why another MLS could be selected, one sequence in particular possesses a combination of desired properties that sets it apart from the others.
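
    As a concrete illustration, the sketch below generates a PN8-style maximal length sequence with a Fibonacci LFSR and applies it to a CADU by exclusive-OR. The generator polynomial x^8 + x^7 + x^5 + x^3 + 1 and the all-ones seed follow the CCSDS TM randomizer as we understand it (see CCSDS 131.0-B for the normative definition); the point to note is the 255-bit period, which forces multiple applications of the sequence over a long CADU.

        # Sketch of a PN8 maximal length sequence randomizer (assumed CCSDS TM
        # polynomial x^8 + x^7 + x^5 + x^3 + 1, all-ones seed; consult
        # CCSDS 131.0-B for the normative form).

        def pn8_sequence(nbits):
            """Return nbits of the period-255 maximal length sequence."""
            reg = [1] * 8                       # all-ones seed: a_0 .. a_7
            out = []
            for _ in range(nbits):
                out.append(reg[0])
                # recurrence a_{n+8} = a_{n+7} ^ a_{n+5} ^ a_{n+3} ^ a_n
                fb = reg[7] ^ reg[5] ^ reg[3] ^ reg[0]
                reg = reg[1:] + [fb]
            return out

        def randomize(cadu_bits):
            """XOR the CADU with the sequence, restarting at the CADU boundary.
            For CADUs longer than 255 bits the short sequence simply repeats,
            which is the source of the spectral spurs discussed above."""
            seq = pn8_sequence(len(cadu_bits))
            return [b ^ s for b, s in zip(cadu_bits, seq)]

        # The first two output bytes should be 0xFF, 0x48 if the tap choice
        # above matches the standard sequence.
        assert pn8_sequence(16) == [1,1,1,1,1,1,1,1, 0,1,0,0,1,0,0,0]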

    Compressed sensing quantum process tomography for superconducting quantum gates

    We apply the method of compressed sensing (CS) quantum process tomography (QPT) to characterize quantum gates based on superconducting Xmon and phase qubits. Using experimental data for a two-qubit controlled-Z gate, we obtain an estimate for the process matrix χ with reasonably high fidelity compared to full QPT, but using a significantly reduced set of initial states and measurement configurations. We show that the CS method still works when the amount of used data is so small that standard QPT would have an underdetermined system of equations. We also apply the CS method to the analysis of the three-qubit Toffoli gate with numerically added noise, and similarly show that the method works well for a substantially reduced set of data. For the CS calculations we use two different bases in which the process matrix χ is approximately sparse, and show that the resulting estimates of the process matrices match each other with reasonably high fidelity. For both two-qubit and three-qubit gates, we characterize the quantum process not only by its process matrix and fidelity, but also by the corresponding standard deviation, defined via the variation of the state fidelity over different initial states.
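
    The core compressed-sensing step is generic enough to sketch: recover an approximately sparse unknown from far fewer linear measurements than unknowns via ℓ1 minimization. The toy below (hypothetical sizes, real-valued for simplicity, using the cvxpy package) is only an analogue of the paper's setup, where the unknown is the process matrix χ written in a basis in which it is approximately sparse and the measurements come from the chosen preparation and measurement configurations.

        # Toy l1-recovery sketch (not the paper's actual data or operators).
        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        n, m, k = 128, 40, 5              # unknowns, measurements (m << n), sparsity
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)    # sensing matrix
        y = A @ x_true + 1e-3 * rng.standard_normal(m)  # noisy measurements

        x = cp.Variable(n)
        # Basis-pursuit denoising: the sparsest x consistent with the data,
        # even though m < n leaves the linear system underdetermined.
        prob = cp.Problem(cp.Minimize(cp.norm1(x)),
                          [cp.norm(A @ x - y, 2) <= 1e-2])
        prob.solve()
        print("relative error:",
              np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))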

    Landsat Data Continuity Mission (LDCM) - Optimizing X-Band Usage

    The NASA version of the low-density parity check (LDPC) rate-7/8 code, shortened to the dimensions (8160, 7136), has been implemented as the forward error correction (FEC) scheme for the Landsat Data Continuity Mission (LDCM). This is the first flight application of this code. In order to place a 440 Msps link within the 375 MHz wide X band allocation, we found it necessary to heavily band-pass filter the satellite transmitter output. Despite the significant amplitude and phase distortions that accompanied the spectral truncation, the mission-required BER is maintained at < 10^-12 with less than 2 dB of implementation loss. We used a band-pass filter designed to replicate the link distortions to demonstrate link design viability. The same filter was then used to optimize the adaptive equalizer in the receiver employed at the terminus of the downlink. The excellent results we obtained can be directly attributed to the implementation of the LDPC code and the amplitude and phase compensation provided in the receiver. Similar results were obtained with receivers from several vendors.
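
    To illustrate the receiver-side compensation, here is a minimal complex LMS equalizer of the general kind used to undo amplitude and phase distortion; the channel taps, step size, and training arrangement are all illustrative assumptions, not the flight receiver's design.

        # Toy complex LMS equalizer for a dispersive channel (illustrative).
        import numpy as np

        rng = np.random.default_rng(1)
        N, L, mu, D = 20000, 11, 1e-3, 5
        syms = ((rng.integers(0, 2, N) * 2 - 1)
                + 1j * (rng.integers(0, 2, N) * 2 - 1))    # QPSK symbols
        h = np.array([1.0, 0.3 - 0.2j, 0.1j])              # made-up distortion
        rx = np.convolve(syms, h)[:N]
        rx += 0.02 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

        w = np.zeros(L, dtype=complex); w[D] = 1.0         # centre-spike init
        buf = np.zeros(L, dtype=complex)
        mse = np.empty(N)
        for n in range(N):
            buf = np.roll(buf, 1); buf[0] = rx[n]
            y = np.vdot(w, buf)                            # output y = w^H x
            d = syms[n - D] if n >= D else 0.0             # delayed training ref
            e = d - y
            w += mu * buf * np.conj(e)                     # LMS tap update
            mse[n] = abs(e) ** 2
        print("MSE, first 1k:", mse[:1000].mean(), "last 1k:", mse[-1000:].mean())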

    End-to-end communication test on variable length packet structures utilizing AOS testbed

    This paper describes a communication test that successfully demonstrated the transfer of losslessly compressed images in an end-to-end system. The compressed images were first formatted into variable-length Consultative Committee for Space Data Systems (CCSDS) packets in the Advanced Orbiting System Testbed (AOST). The CCSDS data structures were transferred from the AOST to the Radio Frequency Simulations Operations Center (RFSOC) via a fiber optic link, where the data were then transmitted through the Tracking and Data Relay Satellite System (TDRSS). The received data acquired at the White Sands Complex (WSC) were transferred back to the AOST, where the data were captured and decompressed back to the original images. This paper describes the compression algorithm, the AOST configuration, key flight components, data formats, communication link characteristics, and test results.
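
    For reference, a variable-length CCSDS packet carries a fixed 6-byte primary header whose length field makes the variable payload self-describing. The sketch below packs such a header; the APID and payload are placeholders, and the field layout follows the CCSDS Space Packet definition as we understand it (see CCSDS 133.0-B for the normative form).

        # Sketch: pack a 6-byte CCSDS space packet primary header
        # (values illustrative).
        import struct

        def primary_header(apid, seq_count, data_len, pkt_type=0, sec_hdr=False):
            """data_len = number of bytes in the packet data field (>= 1)."""
            word1 = ((0 << 13) | (pkt_type << 12)
                     | (int(sec_hdr) << 11) | (apid & 0x7FF))
            word2 = (0b11 << 14) | (seq_count & 0x3FFF)  # '11' = unsegmented
            word3 = (data_len - 1) & 0xFFFF              # field holds length - 1
            return struct.pack(">HHH", word1, word2, word3)

        payload = b"losslessly compressed image segment"  # stand-in payload
        packet = primary_header(apid=0x2A, seq_count=0,
                                data_len=len(payload)) + payload
        assert len(packet) == 6 + len(payload)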

    Spectral signatures of many-body localization with interacting photons

    Statistical mechanics is founded on the assumption that a system can reach thermal equilibrium regardless of its starting state. Interactions between particles facilitate thermalization, but can interacting systems always equilibrate, regardless of parameter values? The energy spectrum of a system can answer this question and reveal the nature of the underlying phases. However, most experimental techniques only indirectly probe the many-body energy spectrum. Using a chain of nine superconducting qubits, we implement a novel technique for directly resolving the energy levels of interacting photons. We benchmark this method by capturing the intricate energy spectrum predicted for 2D electrons in a magnetic field, the Hofstadter butterfly. As disorder increases, the spatial extent of the energy eigenstates at the edge of the energy band shrinks, suggesting the formation of a mobility edge. At strong disorder, the energy levels cease to repel one another and their statistics approach a Poisson distribution, the hallmark of the transition from the thermalized to the many-body localized phase. Our work introduces a new many-body spectroscopy technique for studying quantum phases of matter.
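
    The Poisson-versus-level-repulsion diagnostic can be reproduced numerically on a toy model. The sketch below diagonalizes a small disordered chain of interacting hard-core bosons (a stand-in for the photons in the qubit chain; sizes, couplings, and disorder strengths are illustrative choices, not the experimental Hamiltonian) and computes the mean consecutive-gap ratio, which moves from the random-matrix value near 0.53 toward the Poisson value near 0.39 as disorder grows.

        # Level-spacing-ratio sketch for a disordered interacting chain.
        import numpy as np
        from itertools import combinations

        Lx, npart, J, V = 10, 5, 1.0, 1.0
        basis = [frozenset(c) for c in combinations(range(Lx), npart)]
        index = {s: i for i, s in enumerate(basis)}
        rng = np.random.default_rng(0)

        def hamiltonian(W):
            h = rng.uniform(-W, W, Lx)             # random on-site energies
            H = np.zeros((len(basis), len(basis)))
            for i, s in enumerate(basis):
                # diagonal: disorder plus nearest-neighbour interaction
                H[i, i] = sum(h[j] for j in s) + V * sum((j + 1) in s for j in s)
                for j in s:                        # nearest-neighbour hopping
                    if j + 1 < Lx and (j + 1) not in s:
                        k = index[(s - {j}) | {j + 1}]
                        H[i, k] = H[k, i] = J
            return H

        def mean_gap_ratio(E):
            E = np.sort(E)[len(E) // 4 : -len(E) // 4]  # middle of the spectrum
            g = np.diff(E)
            return np.mean(np.minimum(g[1:], g[:-1]) / np.maximum(g[1:], g[:-1]))

        for W in (1.0, 10.0):
            r = np.mean([mean_gap_ratio(np.linalg.eigvalsh(hamiltonian(W)))
                         for _ in range(20)])
            print(f"W={W}: <r> = {r:.3f}")  # ~0.53 (GOE) vs ~0.39 (Poisson)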

    Removing leakage-induced correlated errors in superconducting quantum error correction

    Quantum computing can become scalable through error correction, but logical error rates only decrease with system size when physical errors are sufficiently uncorrelated. During computation, unused high energy levels of the qubits can become excited, creating leakage states that are long-lived and mobile. Particularly for superconducting transmon qubits, this leakage opens a path to errors that are correlated in space and time. Here, we report a reset protocol that returns a qubit to the ground state from all relevant higher-level states. We test its performance with the bit-flip stabilizer code, a simplified version of the surface code for quantum error correction. We investigate the accumulation and dynamics of leakage during error correction. Using this protocol, we find lower rates of logical errors and improved scaling and stability of error suppression with increasing qubit number. This demonstration provides a key step on the path towards scalable quantum computing.
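
    The suppression-with-size logic can be illustrated with a toy Monte Carlo of majority-vote decoding on a repetition (bit-flip) code; independent single-qubit flips (the error probability and distances below are arbitrary choices) give the expected exponential decrease of logical errors with code distance. This omits syndrome extraction, repeated rounds, and leakage itself, so it is only the uncorrelated baseline against which leakage-induced correlations matter.

        # Toy Monte Carlo: logical error rate of a distance-d repetition code
        # under independent bit flips (single round, majority-vote decoding).
        import numpy as np

        rng = np.random.default_rng(2)

        def logical_error_rate(d, p, trials=200_000):
            flips = rng.random((trials, d)) < p    # independent qubit flips
            return np.mean(flips.sum(axis=1) > d // 2)

        for d in (3, 5, 7, 9):
            print(d, logical_error_rate(d, p=0.05))  # falls roughly as p^((d+1)/2)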

    Resolving catastrophic error bursts from cosmic rays in large arrays of superconducting qubits

    Scalable quantum computing can become a reality with error correction, provided coherent qubits can be constructed in large arrays. The key premise is that physical errors can remain both small and sufficiently uncorrelated as devices scale, so that logical error rates can be exponentially suppressed. However, energetic impacts from cosmic rays and latent radioactivity violate both of these assumptions. An impinging particle ionizes the substrate, radiating high-energy phonons that induce a burst of quasiparticles, destroying qubit coherence throughout the device. High-energy radiation has been identified as a source of error in pilot superconducting quantum devices, but without a measurement technique able to resolve a single event in detail, its effect on large-scale algorithms, and on error correction in particular, remains an open question. Elucidating the physics involved requires operating large numbers of qubits at the same rapid timescales as in error correction, exposing the event's evolution in time and its spread in space. Here, we directly observe high-energy rays impacting a large-scale quantum processor. We introduce a rapid space- and time-multiplexed measurement method and identify large bursts of quasiparticles that simultaneously and severely limit the energy coherence of all qubits, causing chip-wide failure. We track the events from their initial localised impact to high error rates across the chip. Our results provide direct insights into the scale and dynamics of these damaging error bursts in large-scale devices, and highlight the necessity of mitigation to enable quantum computing to scale.
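
    Continuing the toy repetition-code Monte Carlo from the previous entry, adding a rare chip-wide burst that scrambles every qubit at once (the burst rate and severity are invented for illustration) shows why such events are so damaging: the independent-error contribution keeps falling with code distance, while the burst contribution sets a floor that no amount of redundancy removes.

        # Toy model: rare chip-wide bursts impose a logical error floor.
        import numpy as np

        rng = np.random.default_rng(3)

        def logical_error_rate(d, p, q=1e-3, p_burst=0.5, trials=1_000_000):
            burst = rng.random(trials) < q         # chip-wide event this shot?
            p_eff = np.where(burst, p_burst, p)[:, None]
            flips = rng.random((trials, d)) < p_eff
            return np.mean(flips.sum(axis=1) > d // 2)

        for d in (3, 5, 7, 9, 11):
            print(d, logical_error_rate(d, p=0.05))  # floor near q/2 at large d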