    Randomized benchmarking in measurement-based quantum computing

    Randomized benchmarking is routinely used as an efficient method for characterizing the performance of sets of elementary logic gates in small quantum devices. In the measurement-based model of quantum computation, logic gates are implemented via single-site measurements on a fixed universal resource state. Here we adapt the randomized benchmarking protocol for a single qubit to a linear cluster state computation, which provides a partial, yet efficient, characterization of the noise associated with the target gate set. Applying randomized benchmarking to measurement-based quantum computation exhibits an interesting interplay between the inherent randomness associated with logic gates in the measurement-based model and the random gate sequences used in benchmarking. We consider two different approaches: the first makes use of the standard single-qubit Clifford group, while the second uses recently introduced (non-Clifford) measurement-based 2-designs, which harness inherent randomness to implement gate sequences.
    Comment: 10 pages, 4 figures, comments welcome; v2 published version.
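
    The first approach above builds on standard single-qubit randomized benchmarking, in which the survival probability is fitted to an exponential decay in the sequence length, F(m) = A p^m + B, and the average error per Clifford is r = (1 - p)/2 for a qubit. A minimal sketch of that standard fit on synthetic data (illustrative values only, not the paper's measurement-based protocol):

```python
# Standard single-qubit randomized-benchmarking fit: F(m) = A*p**m + B.
# Synthetic data and parameter values are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def decay(m, A, p, B):
    return A * p**m + B

m = np.arange(1, 101, 5)                        # sequence lengths
F = decay(m, 0.5, 0.985, 0.5) + rng.normal(0, 0.01, m.size)

(A, p, B), _ = curve_fit(decay, m, F, p0=[0.5, 0.99, 0.5])
r = (1 - p) / 2                                 # avg. error per Clifford (d = 2)
print(f"fitted p = {p:.4f}, error per Clifford r = {r:.2e}")
```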

    An "All Possible Steps" Approach to the Accelerated Use of Gillespie's Algorithm

    Many physical and biological processes are stochastic in nature. Computational models and simulations of such processes are a mathematical and computational challenge. The basic stochastic simulation algorithm was published by D. Gillespie about three decades ago [D. T. Gillespie, J. Phys. Chem. 81, 2340 (1977)]. Since then, intensive work has been done to make the algorithm more efficient in terms of running time. All accelerated versions of the algorithm aim to minimize the running time required to produce a stochastic trajectory in state space. In these simulations, a necessary condition for reliable statistics is averaging over a large number of simulations. In this study I present a new accelerating approach that does not alter the stochastic algorithm but reduces the number of required runs. By analysis of the collected data I demonstrate high precision levels with fewer simulations. Moreover, the suggested approach provides a good estimate of the statistical error, which may serve as a tool for determining the number of required runs.
    Comment: Accepted for publication in the Journal of Chemical Physics. 19 pages, including 2 tables and 4 figures.
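
    For reference, the core loop of Gillespie's direct method, which the proposed approach leaves unchanged, can be sketched as follows; the birth-death system, rate constants, and function name are illustrative assumptions, not taken from the paper:

```python
# Gillespie's direct method for a toy birth-death process:
# 0 -> X at rate k, X -> 0 at rate g*x. Illustrative sketch only.
import numpy as np

def gillespie(k=10.0, g=0.1, x0=0, t_max=100.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_max:
        a = np.array([k, g * x])                     # reaction propensities
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)               # waiting time to next event
        x += 1 if rng.random() < a[0] / a0 else -1   # pick which reaction fires
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie()
print(f"reached t = {times[-1]:.1f} with copy number {states[-1]}")
```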

    Convergence improvement for coupled cluster calculations

    Convergence problems in coupled-cluster iterations are discussed, and a new iteration scheme is proposed. Whereas the Jacobi method inverts only the diagonal part of the large matrix of equation coefficients, we invert a matrix which also includes a relatively small number of off-diagonal coefficients, selected according to the excitation amplitudes undergoing the largest change in the coupled-cluster iteration. A test case shows that the new IPM (inversion of partial matrix) method gives much better convergence than the straightforward Jacobi-type scheme or such well-known convergence aids as the reduced linear equations or direct inversion in the iterative subspace methods.
    Comment: 7 pages, IOPP style.
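
    The idea can be pictured on a toy linear system A x = b: Jacobi inverts only diag(A), while the IPM step additionally solves exactly a small block built from the unknowns that changed most in the previous iteration. This is a loose sketch on a made-up, diagonally dominant system, not the coupled-cluster equations; all names and sizes are assumptions:

```python
# Toy illustration of "inversion of partial matrix": Jacobi everywhere,
# plus an exact solve of the block coupling the fastest-changing unknowns.
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = np.eye(n) * 4 + rng.normal(0, 0.05, (n, n))    # diagonally dominant test matrix
b = rng.normal(size=n)

def ipm_solve(A, b, n_block=5, iters=200):
    x = np.zeros_like(b)
    prev = x.copy()
    for _ in range(iters):
        idx = np.argsort(np.abs(x - prev))[-n_block:]  # largest recent changes
        prev = x.copy()
        x = x + (b - A @ x) / np.diag(A)               # Jacobi update
        # Exact solve of the selected block, off-diagonal couplings included.
        rhs = b[idx] - A[idx] @ x + A[np.ix_(idx, idx)] @ x[idx]
        x[idx] = np.linalg.solve(A[np.ix_(idx, idx)], rhs)
    return x

x = ipm_solve(A, b)
print(f"max error vs direct solve: {np.abs(x - np.linalg.solve(A, b)).max():.2e}")
```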

    Investigation, Testing, and Selection of Slip-ring Lead Wires for Use in High-precision Slip-ring Capsules Final Report

    Evaluation of corrosion-resistant silver alloys for use in lead wires for slip-ring assemblies of the Saturn guidance and control system.

    Treatment of multidrug-resistant tuberculosis in a remote, conflict-affected area of the Democratic Republic of Congo.

    The Democratic Republic of Congo is a high-burden country for multidrug-resistant tuberculosis. Médecins Sans Frontières has supported the Ministry of Health in the conflict-affected region of Shabunda since 1997. In 2006, three patients were diagnosed with drug-resistant TB (DR-TB) and had no options for further treatment. An innovative model was developed to treat these patients despite the remote setting. Key innovations were the devolving of responsibility for treatment to non-TB clinicians remotely supported by a TB specialist, the use of simplified monitoring protocols, and a strong focus on addressing stigma to support adherence. Treatment was successfully completed after a median of 24 months. This pilot programme demonstrates that successful treatment of DR-TB is possible on a small scale in remote settings.

    The ECHELON-2 trial: 5-year results of a randomized, phase III study of brentuximab vedotin with chemotherapy for CD30-positive peripheral T-cell lymphoma

    BACKGROUND: For patients with peripheral T-cell lymphoma (PTCL), outcomes using frontline treatment with cyclophosphamide, doxorubicin, vincristine, and prednisone (CHOP) or CHOP-like therapy are typically poor. The ECHELON-2 study demonstrated that brentuximab vedotin plus cyclophosphamide, doxorubicin, and prednisone (A+CHP) exhibited statistically superior progression-free survival (PFS) per independent central review and improvements in overall survival versus CHOP for the frontline treatment of patients with systemic anaplastic large cell lymphoma or other CD30-positive PTCL. PATIENTS AND METHODS: ECHELON-2 is a double-blind, double-dummy, randomized, placebo-controlled, active-comparator phase III study. We present an exploratory update of the ECHELON-2 study, including an analysis of 5-year PFS per investigator in the intent-to-treat analysis group. RESULTS: A total of 452 patients were randomized (1:1) to six or eight cycles of A+CHP (N = 226) or CHOP (N = 226). At a median follow-up of 47.6 months, 5-year PFS rates were 51.4% [95% confidence interval (CI): 42.8% to 59.4%] with A+CHP versus 43.0% (95% CI: 35.8% to 50.0%) with CHOP (hazard ratio = 0.70; 95% CI: 0.53-0.91), and 5-year overall survival (OS) rates were 70.1% (95% CI: 63.3% to 75.9%) with A+CHP versus 61.0% (95% CI: 54.0% to 67.3%) with CHOP (hazard ratio = 0.72; 95% CI: 0.53-0.99). Both PFS and OS were generally consistent across key subgroups. Peripheral neuropathy was resolved or improved in 72% (84/117) of patients in the A+CHP arm and 78% (97/124) in the CHOP arm. Among patients who relapsed and subsequently received brentuximab vedotin, the objective response rate was 59% with brentuximab vedotin retreatment after A+CHP and 50% with subsequent brentuximab vedotin after CHOP. CONCLUSIONS: In this 5-year update of ECHELON-2, frontline treatment of patients with PTCL with A+CHP continues to provide clinically meaningful improvement in PFS and OS versus CHOP, with a manageable safety profile, including continued resolution or improvement of peripheral neuropathy.

    Fluctuations and oscillations in a simple epidemic model

    We show that the simplest stochastic epidemiological models with spatial correlations exhibit two types of oscillatory behaviour in the endemic phase. In a large parameter range, the oscillations are due to resonant amplification of stochastic fluctuations, a general mechanism first reported for predator-prey dynamics. In a narrow range of parameters that includes many infectious diseases which confer long-lasting immunity, the oscillations persist for infinite populations. This effect is apparent in simulations of the stochastic process in systems of variable size, and can be understood from the phase diagram of the deterministic pair approximation equations. The two mechanisms combined play a central role in explaining the ubiquity of oscillatory behaviour in real data and in simulation results of epidemic and other related models.
    Comment: acknowledgments added; a typo in the discussion that follows Eq. (3) is corrected.
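
    The resonant-amplification mechanism can be illustrated with a crude stochastic SIRS chain (not the paper's pair-approximation model): the deterministic limit spirals into the endemic fixed point, while demographic noise at finite population size sustains oscillations that show up as a peak in the power spectrum of the infective time series. All parameters below are illustrative assumptions:

```python
# Discrete-time stochastic SIRS chain; demographic noise sustains
# oscillations around the endemic equilibrium. Illustrative parameters.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                                    # population size
beta, gamma, rho = 1.0, 0.2, 0.01              # infection, recovery, waning rates
dt, steps = 0.1, 20_000

S, I, trace = 20_000, 1_000, []
for _ in range(steps):
    R = N - S - I
    new_inf = rng.binomial(S, 1 - np.exp(-beta * I / N * dt))
    new_rec = rng.binomial(I, 1 - np.exp(-gamma * dt))
    new_sus = rng.binomial(R, 1 - np.exp(-rho * dt))
    S += new_sus - new_inf
    I += new_inf - new_rec
    trace.append(I)

# Resonant amplification: a dominant nonzero frequency in I(t).
power = np.abs(np.fft.rfft(np.array(trace) - np.mean(trace)))**2
f_peak = np.fft.rfftfreq(steps, dt)[power.argmax()]
print(f"dominant fluctuation frequency ~ {f_peak:.4f} per unit time")
```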

    Consistency of shared reference frames should be reexamined

    In a recent Letter [G. Chiribella et al., Phys. Rev. Lett. 98, 120501 (2007)], four protocols were proposed to secretly transmit a reference frame. Here we point out that in these protocols an eavesdropper can change the transmitted reference frame without being detected, which means the consistency of the shared reference frames should be reexamined. A way to check this consistency is discussed. It is shown that this problem is quite different from that in previous protocols of quantum cryptography.
    Comment: 3 pages, 1 figure, comments are welcome.

    Apples to apples A^2 – II. Cluster selection functions for next-generation surveys

    We present the cluster selection function for three of the largest next-generation stage-IV surveys in the optical and infrared: Euclid-Optimistic, Euclid-Pessimistic and the Large Synoptic Survey Telescope (LSST). To simulate these surveys, we use the realistic mock catalogues introduced in the first paper of this series. We detected galaxy clusters using the Bayesian Cluster Finder in the mock catalogues. We then modelled and calibrated the total cluster stellar mass observable–theoretical mass (M^∗_(CL)–M_h) relation using a power-law model, including a possible redshift evolution term. We find a moderate scatter σ_(M^∗_(CL)|M_h) of 0.124, 0.135 and 0.136 dex for Euclid-Optimistic, Euclid-Pessimistic and LSST, respectively, comparable to other work over more limited ranges of redshift. Moreover, the three data sets are consistent with negligible evolution with redshift, in agreement with observational and simulation results in the literature. We find that Euclid-Optimistic will be able to detect clusters with >80 per cent completeness and purity down to 8 × 10^(13) h^(−1) M_⊙ up to z < 1. At higher redshifts, the same completeness and purity are obtained with the larger mass threshold of 2 × 10^(14) h^(−1) M_⊙ up to z = 2. The Euclid-Pessimistic selection function has a similar shape with a ∼10 per cent higher mass limit. LSST shows a ∼5 per cent higher mass limit than Euclid-Optimistic up to z < 0.7, increasing afterwards and reaching 2 × 10^(14) h^(−1) M_⊙ at z = 1.4. Similar selection functions with only an 80 per cent completeness threshold have also been computed. The complementarity of these results with selection functions for surveys in other bands is discussed.
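
    The calibration step described above amounts to fitting log10 M^∗_(CL) = α + β log10(M_h/M_piv) + γ log10(1 + z) and reading the scatter off the residuals. A minimal least-squares sketch on a synthetic catalogue; the pivot mass, ranges, and true coefficients are assumptions chosen only to mimic the quoted scatter and negligible redshift evolution:

```python
# Least-squares calibration of a power-law mass-observable relation
# with a redshift-evolution term, on a synthetic (assumed) catalogue.
import numpy as np

rng = np.random.default_rng(2)
n, logM_piv = 2000, 14.0
logMh = rng.uniform(13.5, 15.0, n)             # halo masses [log10 h^-1 Msun]
z = rng.uniform(0.1, 2.0, n)
logMstar = (12.0 + 0.9 * (logMh - logM_piv)    # gamma = 0: no evolution
            + rng.normal(0, 0.13, n))          # 0.13 dex intrinsic scatter

X = np.column_stack([np.ones(n), logMh - logM_piv, np.log10(1 + z)])
coef, *_ = np.linalg.lstsq(X, logMstar, rcond=None)
scatter = np.std(logMstar - X @ coef)
print(f"alpha={coef[0]:.2f}, beta={coef[1]:.2f}, gamma={coef[2]:.2f}, "
      f"scatter={scatter:.3f} dex")
```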

    Application of asymptotic expansions of maximum likelihood estimators errors to gravitational waves from binary mergers: the single interferometer case

    In this paper we describe a new methodology to calculate analytically the error of a maximum likelihood estimate (MLE) of physical parameters from gravitational wave signals. All the existing literature focuses on the use of the Cramér-Rao lower bound (CRLB) as a means of approximating the errors for large signal-to-noise ratios. We show here how the variance and the bias of an MLE can instead be expressed in inverse powers of the signal-to-noise ratio, where the first order in the variance expansion is the CRLB. As an application we compute the second order of the variance and bias for MLEs of physical parameters from the inspiral phase of binary mergers and for noises of gravitational wave interferometers. We also compare the improved error estimate with existing numerical estimates. The second order of the variance expansion yields error predictions closer to what is observed in numerical simulations. It also correctly predicts the SNR necessary for the CRLB to approximate the error well, and provides new insight into the relationship between waveform properties, SNR, and estimation errors. For example, matched-filter timing becomes optimal only if the SNR is larger than the kurtosis of the gravitational wave spectrum.
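
    The leading order of the variance expansion is the CRLB, i.e. the inverse Fisher information, which for the arrival time of a known waveform in white noise is set by the energy of the waveform's time derivative. The sketch below computes only this first-order term for a toy Gaussian pulse (the paper's contribution is the higher-order corrections); the pulse shape and noise level are illustrative assumptions:

```python
# First-order (CRLB) timing error for a known pulse in white noise:
# Fisher information F = sum((dh/dt0)^2) / sigma_n^2, sigma_t >= 1/sqrt(F).
import numpy as np

dt = 1e-4
t = np.arange(-0.5, 0.5, dt)
h = np.exp(-t**2 / (2 * 0.01**2))              # toy Gaussian pulse
sigma_n = 0.1                                  # per-sample noise std

dh = np.gradient(h, dt)                        # derivative w.r.t. arrival time
fisher = np.sum(dh**2) / sigma_n**2
snr = np.sqrt(np.sum(h**2)) / sigma_n
print(f"SNR = {snr:.1f}, CRLB sigma_t = {fisher**-0.5:.2e} s")
```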