
    Using Ancient Samples in Projection Analysis.

    Projection analysis is a tool that extracts information from the joint allele frequency spectrum to better understand the relationship between two populations. In projection analysis, a test genome is compared to a set of genomes from a reference population. The projection's shape depends on the historical relationship of the test genome's population to the reference population. Here, we explore in greater depth the effects on the projection when ancient samples are included in the analysis. First, we conduct a series of simulations in which the ancient sample is directly ancestral to a present-day population (one-population model), or the ancient sample is ancestral to a sister population that diverged before the time of sampling (two-population model). We find that there are characteristic differences between the projections for the one-population and two-population models, which indicate that the projection can be used to determine whether a test genome is directly ancestral to a present-day population or not. Second, we compute projections for several published ancient genomes. We compare two Neanderthals and three ancient human genomes to European, Han Chinese and Yoruba reference panels. We use a previously constructed demographic model, insert these five ancient genomes, and assess how well the observed projections are recovered.
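    The comparison described above can be sketched in a few lines: bin sites by their derived-allele frequency in the reference panel, then record how often the test genome carries the derived allele in each bin. This is a minimal illustrative sketch on synthetic data, not the authors' pipeline; all names and frequencies here are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_sites = 5000

    # Synthetic reference-panel derived-allele frequencies per site.
    ref_freq = rng.uniform(0.05, 0.95, n_sites)

    # Test genome: under a simple null of a shared, panmictic history,
    # the test genome carries the derived allele at each site with
    # probability equal to the reference-panel frequency.
    test_allele = rng.random(n_sites) < ref_freq

    # Projection: mean derived-allele carriage of the test genome within
    # each reference-frequency bin. Departures of this curve from the
    # null expectation reflect the historical relationship between the
    # test genome's population and the reference population.
    bins = np.linspace(0.0, 1.0, 11)
    bin_idx = np.digitize(ref_freq, bins) - 1
    projection = np.array([test_allele[bin_idx == b].mean()
                           for b in range(10)])
    ```

    Under the null used here the projection tracks the bin frequency itself; admixture, ancestral sampling, or divergence would bend the curve away from that baseline.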

    Multi-qubit Randomized Benchmarking Using Few Samples

    Randomized benchmarking (RB) is an efficient and robust method to characterize gate errors in quantum circuits. Averaging over random sequences of gates leads to estimates of gate errors in terms of the average fidelity. These estimates are isolated from the state preparation and measurement errors that plague other methods like channel tomography and direct fidelity estimation. A decisive factor in the feasibility of randomized benchmarking is the number of sampled sequences required to obtain rigorous confidence intervals. Previous bounds were either prohibitively loose or required the number of sampled sequences to scale exponentially with the number of qubits in order to obtain a fixed confidence interval at a fixed error rate. Here we show that, with a small adaptation to the randomized benchmarking procedure, the number of sampled sequences required for a fixed confidence interval is dramatically smaller than could previously be justified. In particular, we show that the number of sampled sequences required is essentially independent of the number of qubits and scales favorably with the average error rate of the system under investigation. We also show that the number of samples required for long sequence lengths can be made substantially smaller than previous rigorous results (even for single qubits) as long as the noise process under investigation is not unitary. Our results bring rigorous randomized benchmarking on systems with many qubits into the realm of experimental feasibility.
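    The fidelity averaging described above is usually summarized by the standard RB decay model, F(m) = A·pᵐ + B over sequence length m, with average gate error r = (1 − p)(d − 1)/d for dimension d = 2ⁿ. This is a minimal sketch of that standard fitting model on synthetic, noiseless data, not the paper's adapted procedure or its confidence-interval analysis; the parameter values are illustrative.

    ```python
    import numpy as np

    def rb_error_rate(p, n_qubits):
        """Average gate error implied by the RB decay parameter p."""
        d = 2 ** n_qubits
        return (1 - p) * (d - 1) / d

    # Synthetic single-qubit survival probabilities following
    # F(m) = A * p^m + B with assumed values A = B = 0.5, p = 0.98.
    lengths = np.arange(1, 201, 10)
    A, B, p_true = 0.5, 0.5, 0.98
    fidelities = A * p_true ** lengths + B

    # With the asymptote B known, log(F - B) is linear in m with slope
    # log(p), so an ordinary least-squares fit recovers the decay rate.
    slope, intercept = np.polyfit(lengths, np.log(fidelities - B), 1)
    p_est = np.exp(slope)
    ```

    On real data each point in `fidelities` would itself be an average over the sampled random sequences, and how many sequences that average needs is exactly the question the abstract addresses.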

    Measurement of Nonplanar Dielectric Samples Using an Open Resonator

    Conventional microwave methods for measuring permittivity often utilize samples in flat sheet form. In practice, however, it is sometimes desirable to measure samples having curved surfaces, for example, parts of lenses or small radomes. This paper describes an open resonator technique for achieving this and compares measurements made at 11.6 GHz on samples of polymethyl methacrylate in both curved and flat sheet form.