
    Multi-qubit Randomized Benchmarking Using Few Samples

    Randomized benchmarking (RB) is an efficient and robust method to characterize gate errors in quantum circuits. Averaging over random sequences of gates leads to estimates of gate errors in terms of the average fidelity. These estimates are isolated from the state preparation and measurement errors that plague other methods like channel tomography and direct fidelity estimation. A decisive factor in the feasibility of randomized benchmarking is the number of sampled sequences required to obtain rigorous confidence intervals. Previous bounds were either prohibitively loose or required the number of sampled sequences to scale exponentially with the number of qubits in order to obtain a fixed confidence interval at a fixed error rate. Here we show that, with a small adaptation to the randomized benchmarking procedure, the number of sampled sequences required for a fixed confidence interval is dramatically smaller than could previously be justified. In particular, we show that the number of sampled sequences required is essentially independent of the number of qubits and scales favorably with the average error rate of the system under investigation. We also show that the number of samples required for long sequence lengths can be made substantially smaller than previous rigorous results (even for single qubits) as long as the noise process under investigation is not unitary. Our results bring rigorous randomized benchmarking on systems with many qubits into the realm of experimental feasibility. Comment: v3: Added discussion of the impact of variance heteroskedasticity on the RB fitting procedure. Close to published version.
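    For context on what the sampled sequences feed into, here is a minimal sketch of the textbook zeroth-order RB fit F(m) = A p^m + B and the conversion of the decay parameter p into an average error rate r = (d-1)(1-p)/d. The data and starting values are invented for illustration; this is not the authors' procedure or data.

```python
# Illustrative sketch (hypothetical data): fit the zeroth-order RB decay model
# F(m) = A * p**m + B and convert the decay parameter p to an average error rate.
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    """Average survival probability after a random sequence of m Clifford gates."""
    return A * p**m + B

# Hypothetical sequence lengths and sequence-averaged survival probabilities
# for a two-qubit experiment (d = 4, asymptote near 1/d = 0.25).
lengths = np.array([1, 5, 10, 25, 50, 100, 200])
survival = np.array([0.99, 0.96, 0.93, 0.83, 0.70, 0.53, 0.35])

(A, p, B), _ = curve_fit(rb_decay, lengths, survival, p0=[0.75, 0.99, 0.25])

d = 4  # Hilbert-space dimension for two qubits
avg_error = (d - 1) * (1 - p) / d
print(f"decay parameter p = {p:.4f}, average error rate r = {avg_error:.2e}")
```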

    The last Frasnian Atrypida (Brachiopoda) in southern Belgium

    The last representatives of the order Atrypida on the southern flank of the Dinant Synclinorium (Vaulx-Nismes area) in Belgium belong to Costatrypa, Spinatrypa, Spinatrypina (?Spinatrypina), Spinatrypina (Exatrypa), Iowatrypa, ?Waiotrypa, Desquamatia (Desquamatia) and Desquamatia (?Seratrypa). Among the thirteen described taxa, five are new: Spinatrypa tumuli sp. n., Iowatrypa circuitionis sp. n., ?Waiotrypa pluvia sp. n., Desquamatia (Desquamatia) quieta sp. n. and Desquamatia (?Seratrypa) derelicta sp. n. The supposed lissatrypid 'Glassia drevermanni' Maillieux, 1936 from the late Frasnian Matagne shales is assigned to the Rhynchonellida. On the southern flank of the Dinant Synclinorium and in the Philippeville Massif, the Atrypida become extinct in the Palmatolepis rhenana Zone, significantly below the Frasnian-Famennian (F-F) boundary. Their extinction coincides with the first appearance of the green and black shales of the late Frasnian Matagne Formation, recording a transgressive-hypoxic event. Based on conodont data, this event takes place earlier on the southern flank of the Dinant Synclinorium than in the Philippeville Massif.

    Magneto-Optical Spectrum Analyzer

    We present a method for investigating the gigahertz magnetization dynamics of single magnetic nanoelements. By combining a frequency-domain approach with micro-focused Kerr-effect detection, high sensitivity to magnetization dynamics with submicron spatial resolution is achieved, allowing spectra of single nanostructures to be recorded. Results on the uniform precession in soft magnetic platelets are presented. Comment: 5 pages, 7 figures.
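    As an illustration of how such a frequency-domain spectrum might be reduced to a resonance frequency and linewidth, the sketch below fits a Lorentzian line shape to a hypothetical Kerr-signal spectrum. The data, frequency range, and parameters are invented for illustration and are not taken from the measurement described above.

```python
# Illustrative only: extract a resonance frequency and linewidth from a
# hypothetical Kerr-signal spectrum by fitting a Lorentzian line shape.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, amp, f_res, delta_f, offset):
    """Symmetric Lorentzian with full width at half maximum delta_f."""
    return amp * (delta_f / 2)**2 / ((f - f_res)**2 + (delta_f / 2)**2) + offset

# Hypothetical spectrum: frequency axis in GHz and simulated Kerr amplitude.
freq = np.linspace(1.0, 10.0, 181)
signal = lorentzian(freq, 1.0, 6.2, 0.4, 0.05)
signal += 0.02 * np.random.default_rng(0).normal(size=freq.size)

popt, _ = curve_fit(lorentzian, freq, signal, p0=[1.0, 6.0, 0.5, 0.0])
print(f"resonance at {popt[1]:.2f} GHz, FWHM {popt[2]:.2f} GHz")
```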

    Toxocariasis Presenting as Encephalomyelitis

    We describe a farmer who presented with a clinical picture of a transverse thoracic myelitis. MRI showed inflammatory lesions in the brain and thoracic spinal cord. Toxocariasis was suspected because of eosinophilia in blood and cerebrospinal fluid, and the diagnosis was confirmed immunologically. He was successfully treated with antihelminthics in combination with corticosteroids. Neurotoxocariasis is rare, and its diagnosis can be difficult because of its varied and atypical clinical manifestations. It should be considered in every case of a central neurological syndrome associated with eosinophilia.

    Parametrical optimization of laser surface alloyed NiTi shape memory alloy with Co and Nb by the Taguchi method

    Different high-purity metal powders were successfully alloyed onto a nickel-titanium (NiTi) shape memory alloy (SMA) with a 3 kW carbon dioxide (CO2) laser system. In order to produce an alloyed layer with complete penetration and an acceptable composition profile, the Taguchi approach was used as a statistical technique for optimizing selected laser processing parameters. A systematic study of laser power, scanning velocity, and pre-paste powder thickness was conducted. The signal-to-noise (S/N) ratio for each control factor was calculated to assess the deviation from the average response, and analysis of variance (ANOVA) was carried out to determine the significance of each process variable. The Taguchi method identified laser processing parameters with high statistical confidence and yielded a laser surface alloying procedure capable of achieving a desirable dilution ratio. Energy dispersive spectrometry consistently showed that the weight percentage of Ni was reduced by 45 per cent compared with the untreated NiTi SMA when the Taguchi-determined laser processing parameters were employed, verifying the selected processing parameters as optimal.
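    To make the Taguchi analysis concrete, the sketch below computes smaller-the-better signal-to-noise ratios for a hypothetical L9 orthogonal array over the three control factors named above and ranks the factor levels. The array, responses, and quality characteristic are illustrative assumptions, not the paper's data or exact procedure.

```python
# Illustrative Taguchi-style analysis with hypothetical numbers: compute
# smaller-the-better S/N ratios for an L9 orthogonal array over laser power,
# scanning velocity and powder thickness, then rank each factor level.
import numpy as np

# L9(3^3) orthogonal array: each row is one run, entries are factor levels 0-2.
l9 = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
factors = ["laser power", "scanning velocity", "powder thickness"]

# Hypothetical responses (e.g. deviation of the dilution ratio from its
# target) for two repetitions of each run.
responses = np.array([
    [0.42, 0.45], [0.30, 0.33], [0.25, 0.28],
    [0.35, 0.38], [0.22, 0.24], [0.31, 0.29],
    [0.27, 0.26], [0.36, 0.34], [0.20, 0.23],
])

# Smaller-the-better signal-to-noise ratio: S/N = -10 * log10(mean(y^2)).
sn = -10.0 * np.log10(np.mean(responses**2, axis=1))

for j, name in enumerate(factors):
    level_means = [sn[l9[:, j] == level].mean() for level in range(3)]
    best = int(np.argmax(level_means))
    print(f"{name}: mean S/N per level = {np.round(level_means, 2)}, "
          f"best level = {best}")
```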

    Efficient unitarity randomized benchmarking of few-qubit Clifford gates

    Unitarity randomized benchmarking (URB) is an experimental procedure for estimating the coherence of implemented quantum gates independently of state preparation and measurement errors. This coherence is quantified by the unitarity. A central problem in this experiment is relating the number of data points to rigorous confidence intervals. In this work we provide a bound on the number of data points required for Clifford URB as a function of the confidence and the experimental parameters. This bound has favorable scaling in the regime of near-unitary noise and is asymptotically independent of the length of the gate sequences used. We also show that, in contrast to standard randomized benchmarking, a nontrivial number of data points is always required to overcome the randomness introduced by state preparation and measurement errors, even in the limit of perfect gates. Our bound is sufficiently sharp to benchmark small-dimensional systems.
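    As a reminder of the quantity being estimated, the sketch below fits a purity-type decay E[q(m)] = A u^(m-1) to hypothetical sequence-averaged data to extract the unitarity u. The model, data, and starting values are illustrative assumptions, not the paper's protocol or bound.

```python
# Illustrative sketch with hypothetical data: extract the unitarity u by
# fitting the decay model E[q(m)] = A * u**(m - 1) to sequence-averaged
# purity-like statistics q(m).
import numpy as np
from scipy.optimize import curve_fit

def purity_decay(m, A, u):
    """Averaged purity-like statistic after a random sequence of m gates."""
    return A * u**(m - 1)

lengths = np.array([2, 4, 8, 16, 32, 64])
q_avg = np.array([0.90, 0.85, 0.77, 0.63, 0.42, 0.19])  # hypothetical data

(A, u), _ = curve_fit(purity_decay, lengths, q_avg, p0=[0.9, 0.98])
print(f"estimated unitarity u = {u:.4f}")
```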

    Big Changes in How Students are Tested

    For the past decade, school accountability has relied on tests whose essential format has remained unchanged. Educators are familiar with the yearly testing routine: schools are given curriculum frameworks, teachers use the frameworks to guide instruction, students take one big test at year’s end that relies heavily on multiple-choice bubble items, and then school leaders wait anxiously to find out whether enough of their students scored at or above proficiency to meet state standards. All this will change with the adoption of the Common Core standards. Testing and accountability aren’t going away. Instead, they are developing and expanding in ways that aim to address many of the present shortcomings of state testing routines. Most importantly, these new tests will be computer-based. As such, they will potentially shorten testing time, increase tests’ precision, and provide immediate feedback to students and teachers.

    Noise-mitigated randomized measurements and self-calibrating shadow estimation

    Randomized measurements are increasingly appreciated as powerful tools to estimate properties of quantum systems, e.g., in the characterization of hybrid classical-quantum computation. On many platforms they constitute natively accessible measurements, serving as the building block of prominent schemes like shadow estimation. In the real world, however, the implementation of the random gates at the core of these schemes is susceptible to various sources of noise and imperfections, strongly limiting the applicability of such protocols. To attenuate the impact of this shortcoming, in this work we introduce an error-mitigated method of randomized measurements, giving rise to a robust shadow estimation procedure. On the practical side, we show that error mitigation and shadow estimation can be carried out using the same session of quantum experiments, ensuring that we can address and mitigate the noise affecting the randomized measurements. Mathematically, we develop a picture based on Fourier transforms to connect randomized benchmarking and shadow estimation. We prove rigorous performance guarantees and demonstrate the functioning of the scheme with comprehensive numerics. More conceptually, we show that, if properly used, easily accessible data from randomized benchmarking schemes already provide valuable diagnostic information about the noise dynamics and can assist in quantum learning procedures. Comment: 6+20 pages, 6 figures.
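    To make the building block concrete, the toy sketch below implements plain, noise-free single-qubit classical shadow estimation from random Pauli-basis measurements; the noise-mitigated, self-calibrating scheme described above replaces the ideal inverse channel used here with one informed by randomized-benchmarking-type data. The state, observable, and shot count are our own illustration, not the paper's protocol.

```python
# Toy single-qubit classical-shadow example (not the paper's noise-mitigated
# protocol): random Pauli-basis measurements of a fixed state, inverted with
# the ideal single-qubit factor, rho_hat = 3|v><v| - I.
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [X, Y, Z]

# Hypothetical state to be learned and observable to estimate.
rho = 0.5 * (I2 + 0.6 * X + 0.3 * Z)
obs = Z

snapshots = []
for _ in range(5000):
    basis = paulis[rng.integers(3)]
    # Eigenbasis of the randomly chosen Pauli; columns are eigenvectors.
    _, evecs = np.linalg.eigh(basis)
    probs = np.real(np.array([v.conj() @ rho @ v for v in evecs.T]))
    outcome = rng.choice(2, p=probs / probs.sum())
    v = evecs[:, outcome][:, None]
    # Ideal inverse of the single-qubit measurement channel.
    snapshots.append(3 * (v @ v.conj().T) - I2)

estimate = np.real(np.trace(obs @ np.mean(snapshots, axis=0)))
print(f"shadow estimate of <Z> = {estimate:.3f} (exact value 0.3)")
```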