
    Optical one-way quantum computing with a simulated valence-bond solid

    One-way quantum computation proceeds by sequentially measuring individual spins (qubits) in an entangled many-spin resource state. It remains a challenge, however, to efficiently produce such resource states. Is it possible to reduce the task of generating these states to simply cooling a quantum many-body system to its ground state? Cluster states, the canonical resource for one-way quantum computing, do not naturally occur as ground states of physical systems. This has led to a significant effort to identify alternative resource states that appear as ground states of spin lattices. An appealing candidate is the valence-bond-solid state described by Affleck, Kennedy, Lieb, and Tasaki (AKLT). It is the unique, gapped ground state of a two-body Hamiltonian on a spin-1 chain, and can be used as a resource for one-way quantum computing. Here, we experimentally generate a photonic AKLT state and use it to implement single-qubit quantum logic gates.
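    The elementary gate primitive of one-way computing described above (entangle the input with an ancilla, then measure in a rotated basis) can be sketched numerically. This is a minimal illustration of the textbook measurement step, not the paper's photonic implementation; all names and values are ours.

```python
import numpy as np

# Single-qubit building blocks
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def Rz(theta):
    return np.diag([1, np.exp(1j * theta)]).astype(complex)

def mbqc_step(psi, theta):
    """Entangle |psi> with |+> by CZ, then measure qubit 1 in the basis
    (|0> +/- e^{-i theta}|1>)/sqrt(2). For the '+' outcome, qubit 2 is
    left in H Rz(theta)|psi> (up to a global phase)."""
    state = CZ @ np.kron(psi, plus)
    m0 = (ket0 + np.exp(-1j * theta) * ket1) / np.sqrt(2)  # '+' basis vector
    out = np.kron(m0.conj(), np.eye(2)) @ state            # <m0|_1 (x) I_2
    return out / np.linalg.norm(out)

theta = 0.7
psi = np.array([0.6, 0.8], dtype=complex)
out = mbqc_step(psi, theta)
expected = H @ Rz(theta) @ psi
print(round(abs(np.vdot(expected, out)), 6))  # 1.0: states match up to phase
```

    Chaining such steps with adaptive measurement angles is what builds up arbitrary single-qubit gates on a resource state.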

    Measurement-based quantum computation in a 2D phase of matter

    Recently it has been shown that the non-local correlations needed for measurement-based quantum computation (MBQC) can be revealed in the ground state of the Affleck-Kennedy-Lieb-Tasaki (AKLT) model involving nearest-neighbor spin-3/2 interactions on a honeycomb lattice. This state is not singular: it resides in the disordered phase of the ground states of a large family of Hamiltonians characterized by short-range-correlated valence-bond-solid states. By applying local filtering and adaptive single-particle measurements, we show that most states in the disordered phase can be reduced to a graph of correlated qubits that is a scalable resource for MBQC. At the transition between the disordered and Néel-ordered phases we find a transition from universal to non-universal states, as witnessed by the scaling of percolation in the reduced graph state.

    Density functional calculations of nanoscale conductance

    Density functional calculations of the electronic conductance of single molecules are now common. We examine the methodology from a rigorous point of view, discussing where it can be expected to work and where it should fail. When molecules are weakly coupled to the leads, local and gradient-corrected approximations fail, as the Kohn-Sham levels are misaligned. In the weak-bias regime, XC corrections to the current are missed by the standard methodology. For finite bias, a new methodology for performing calculations can be rigorously derived using an extension of time-dependent current density functional theory from the Schrödinger equation to a master equation.
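    The level-misalignment problem mentioned above feeds directly into the conductance through the standard Landauer picture, G = G0 T(E_F). A minimal sketch with a single Breit-Wigner resonance (illustrative parameters, not from the review) shows how strongly T(E_F) depends on where the level sits relative to the Fermi energy:

```python
import numpy as np

G0 = 7.748091729e-5  # conductance quantum 2e^2/h, in siemens

def breit_wigner_T(E, eps0, gammaL, gammaR):
    """Transmission through a single resonant level coupled to two leads."""
    gamma = gammaL + gammaR
    return gammaL * gammaR / ((E - eps0) ** 2 + (gamma / 2) ** 2)

# Symmetric weak coupling, Gamma = 10 meV, Fermi energy at E = 0 (all values invented)
gL = gR = 0.005  # eV
for eps0 in (0.0, 0.1, 0.5):  # level position relative to E_F, in eV
    T = breit_wigner_T(0.0, eps0, gL, gR)
    print(f"eps0 = {eps0:4.1f} eV  ->  T = {T:.2e},  G = {T * G0:.2e} S")
```

    A fraction of an eV of Kohn-Sham level misalignment moves the computed conductance by orders of magnitude, which is why the weak-coupling regime is so unforgiving.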

    Ozone production chemistry in the presence of urban plumes

    Ozone pollution affects human health, especially in urban areas on hot sunny days. Its basic photochemistry has been known for decades, and yet it is still not possible to correctly predict the high ozone levels that pose the greatest threat. The CalNex_SJV study in Bakersfield, CA, in May/June 2010 provided an opportunity to examine ozone photochemistry in an urban area surrounded by agriculture. The measurement suite included hydroxyl (OH), hydroperoxyl (HO_2), and OH reactivity, which are compared with the output of a photochemical box model. While the agreement is generally within combined uncertainties, measured HO_2 far exceeds modeled HO_2 in NO_x-rich plumes. OH production and loss do not balance as they should in the morning, and the ozone production calculated with measured HO_2 is an order of magnitude greater than that calculated with modeled HO_2 when NO levels are high. Calculated ozone production using measured HO_2 is twice that using modeled HO_2, but this difference has minimal impact on the assessment of NO_x-sensitivity or VOC-sensitivity for midday ozone production. Evidence from this study indicates that this important discrepancy is not due to the HO_2 measurement or to the sampling of transported plumes, but instead to either emissions of unknown organic species that accompany the NO emissions or unknown photochemistry involving nitrogen oxides and hydrogen oxides, possibly the hypothesized reaction OH + NO + O_2 → HO_2 + NO_2.
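    The link between the HO_2 discrepancy and ozone production is simple arithmetic: the dominant instantaneous production term is the rate of HO_2 + NO → OH + NO_2, which is linear in HO_2. A sketch with round, invented numbers (not the campaign's data) makes the propagation explicit:

```python
# Illustrative values only; the rate coefficient is the commonly used ~298 K value.
k_HO2_NO = 8.1e-12       # cm^3 molecule^-1 s^-1
NO = 2.5e10              # molecules cm^-3 (~1 ppbv), hypothetical in-plume level
HO2_measured = 1.0e8     # molecules cm^-3, hypothetical measured value
HO2_modeled = 1.0e7      # tenfold lower, as reported for NOx-rich plumes

def p_ozone(ho2, no):
    """Instantaneous ozone production rate from HO2 + NO -> OH + NO2."""
    return k_HO2_NO * ho2 * no   # molecules cm^-3 s^-1

ratio = p_ozone(HO2_measured, NO) / p_ozone(HO2_modeled, NO)
print(round(ratio, 6))  # 10.0: the HO2 gap carries straight through to P(O3)
```

    Because the production term is linear in HO_2, any measured/modeled HO_2 ratio appears unchanged in the calculated ozone production at fixed NO.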

    Incomplete inverse spectral and nodal problems for differential pencils

    We prove uniqueness theorems for the so-called half-inverse spectral problem (and also for some of its modifications) for second-order differential pencils on a finite interval with Robin boundary conditions. Using the obtained result, we show that for unique determination of the pencil it is sufficient to specify the nodal points only on a part of the interval slightly exceeding its half.
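    For readers outside the field, a second-order differential pencil in one common normalization (an assumption here, since the abstract does not display the operator) is the boundary value problem

```latex
% Second-order differential pencil with Robin boundary conditions
% (a standard formulation; the paper's exact normalization may differ)
\begin{align}
  -y'' + \bigl(q_0(x) + 2\lambda\, q_1(x)\bigr)\, y &= \lambda^2 y,
    \qquad 0 < x < \pi, \\
  y'(0) - h\, y(0) &= 0, \\
  y'(\pi) + H\, y(\pi) &= 0.
\end{align}
```

    The half-inverse problem asks to recover the coefficients on the whole interval given spectral data together with their values on half of it; the nodal version replaces spectral data by zeros of the eigenfunctions.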

    Preparation of distilled and purified continuous variable entangled states

    The distribution of entangled states of light over long distances is a major challenge in the field of quantum information. Optical losses, phase diffusion and mixing with thermal states lead to decoherence and destroy the non-classical states after some finite transmission-line length. Quantum repeater protocols, which combine quantum memory, entanglement distillation and entanglement swapping, have been proposed to overcome this problem. Here we report on the experimental demonstration of entanglement distillation in the continuous-variable regime. Entangled states were first disturbed by random phase fluctuations and then distilled and purified using interference on beam splitters and homodyne detection. Measurements of covariance matrices clearly indicate a regained strength of entanglement and purity of the distilled states. In contrast to previous demonstrations of entanglement distillation in the complementary discrete-variable regime, our scheme achieved the actual preparation of the distilled states, which might therefore be used to improve the quality of downstream applications such as quantum teleportation.
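    Reading entanglement off a covariance matrix, as the experiment does, can be sketched for the textbook two-mode squeezed vacuum. This is a generic illustration with invented parameters, not the paper's measured matrices; the vacuum covariance is normalized to the identity, so the Duan sum for two uncorrelated vacua is 4.

```python
import numpy as np

def tmsv_cov(r):
    """Covariance matrix of a two-mode squeezed vacuum, ordering (x1,p1,x2,p2),
    with the vacuum normalized to the identity."""
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    return np.array([[c, 0,  s,  0],
                     [0, c,  0, -s],
                     [s, 0,  c,  0],
                     [0, -s, 0,  c]])

def duan_sum(cm):
    """Var(x1 - x2) + Var(p1 + p2); below the vacuum value 4.0 signals entanglement."""
    var_xm = cm[0, 0] + cm[2, 2] - 2 * cm[0, 2]
    var_pp = cm[1, 1] + cm[3, 3] + 2 * cm[1, 3]
    return var_xm + var_pp

def loss(cm, eta):
    """Equal transmission eta on both modes, mixing in vacuum noise."""
    return eta * cm + (1 - eta) * np.eye(4)

cm = tmsv_cov(r=1.0)
print(round(duan_sum(cm), 4))             # 0.5413 = 4*exp(-2): entangled
print(round(duan_sum(loss(cm, 0.5)), 4))  # 2.2707: degraded by loss, still below 4
```

    Distillation aims to push the Duan sum of an ensemble back down after such channel degradation; the covariance matrices quoted in the abstract are the experimental version of these numbers.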

    Logistic random effects regression models: a comparison of statistical packages for binary and ordinal outcomes

    Background: Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models.
    Methods: We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized and ordinal, with center and/or trial as random effects, and with age, motor score, pupil reactivity or trial as covariates. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm and the experimental SAS procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using essentially two logistic random effects models, with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted.
    Results: The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study, i.e. when based on a relatively large number of level-1 (patient-level) data compared to the number of level-2 (hospital-level) data. However, on a relatively sparse data set, i.e. when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient.
    Conclusions: On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no clear preference (absent a philosophical preference) for either a frequentist or a Bayesian approach (when based on vague priors). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random-effects variances are difficult to estimate. In the frequentist approaches, the MLE of this variance was often estimated as zero, with a standard error that was either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain.
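    The data-generating model all of these packages fit can be made concrete by simulation. This is a crude moment-based check on simulated data, not any package's estimator, and all numbers are invented: a random center intercept u_j shifts the log-odds of the outcome in center j.

```python
import numpy as np

rng = np.random.default_rng(42)

# Data-generating model: logit P(y=1) = beta0 + u_j, with u_j ~ N(0, sigma_u^2)
beta0, sigma_u = -0.5, 0.8
n_centers, n_per_center = 200, 500

u = rng.normal(0.0, sigma_u, n_centers)
p = 1 / (1 + np.exp(-(beta0 + u)))     # per-center event probability
y = rng.binomial(n_per_center, p)      # number of events in each center

# Empirical per-center log-odds (with a 0.5 continuity correction)
emp_logit = np.log((y + 0.5) / (n_per_center - y + 0.5))
print(round(emp_logit.mean(), 2))       # near beta0 = -0.5
print(round(emp_logit.std(ddof=1), 2))  # near sigma_u = 0.8
```

    With many patients per center the crude estimate works; the hard case the abstract describes is the opposite regime, where centers are small and the variance of u_j must be inferred from very noisy per-center log-odds.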

    High-precision photometry by telescope defocussing - VI. WASP-24, WASP-25 and WASP-26

    The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement nos. 229517 and 268421. This publication was supported by grants NPRP 09-476-1-078 and NPRP X-019-1-006 from the Qatar National Research Fund (a member of Qatar Foundation). TCH acknowledges financial support from the Korea Research Council for Fundamental Science and Technology (KRCF) through the Young Research Scientist Fellowship Programme and is supported by the KASI (Korea Astronomy and Space Science Institute) grant 2012-1-410-02/2013-9-400-00. SG, XW and XF acknowledge support from NSFC under grant no. 10873031. The research is supported by the ASTERISK project (ASTERoseismic Investigations with SONG and Kepler) funded by the European Research Council (grant agreement no. 267864). DR, YD, AE, FF (ARC), OW (FNRS research fellow) and J. Surdej acknowledge support from the Communauté française de Belgique – Actions de recherche concertées – Académie Wallonie-Europe.

    We present time-series photometric observations of 13 transits in the planetary systems WASP-24, WASP-25 and WASP-26. All three systems have orbital obliquity measurements, WASP-24 and WASP-26 have been observed with Spitzer, and WASP-25 was previously comparatively neglected. Our light curves were obtained using the telescope-defocussing method and have scatters of 0.5-1.2 mmag relative to their best-fitting geometric models. We use these data to measure the physical properties and orbital ephemerides of the systems to high precision, finding that our improved measurements are in good agreement with previous studies. High-resolution Lucky Imaging observations of all three targets show no evidence for faint stars close enough to contaminate our photometry. We confirm the eclipsing nature of the star closest to WASP-24 and present the detection of a detached eclipsing binary within 4.25 arcmin of WASP-26.
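    The ephemeris refinement mentioned above is, at bottom, a weighted linear fit of mid-transit times against epoch number, T(E) = T0 + E * P. A sketch on made-up numbers (not the WASP data) shows why a long epoch baseline pins down the period:

```python
import numpy as np

# Hypothetical mid-transit times (days) with per-point uncertainties
true_T0, true_P = 2455000.0, 2.341215          # invented reference time and period
epochs = np.array([0, 5, 11, 30, 102, 355, 601])
sigma = np.full(epochs.size, 2e-4)             # ~17 s timing errors

rng = np.random.default_rng(1)
times = true_T0 + epochs * true_P + rng.normal(0, sigma)

# Weighted least squares for (T0, P)
A = np.vstack([np.ones_like(epochs, dtype=float), epochs]).T
W = 1 / sigma**2
ATA = A.T @ (A * W[:, None])
ATb = A.T @ (W * times)
T0_fit, P_fit = np.linalg.solve(ATA, ATb)

cov = np.linalg.inv(ATA)   # parameter covariance for these weights
print(f"P = {P_fit:.6f} +/- {np.sqrt(cov[1, 1]):.6f} d")
```

    The period uncertainty scales inversely with the spread of the epochs, which is why combining new transits with archival ones tightens the ephemeris so effectively.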

    The Transiting System GJ1214: High-Precision Defocused Transit Observations and a Search for Evidence of Transit Timing Variation

    Aims: We present 11 high-precision photometric transit observations of the transiting super-Earth planet GJ1214b. Combining these data with observations from other authors, we investigate the ephemeris for possible signs of transit timing variations (TTVs) using a Bayesian approach. Methods: The observations were obtained using telescope-defocussing techniques, and achieve high precision, with random errors in the photometry as low as 1 mmag per point. To investigate the possibility of TTVs in the light curve, we calculate the overall probability of a TTV signal using Bayesian methods. Results: The observations are used to determine the photometric parameters and the physical properties of the GJ1214 system. Our results are in good agreement with published values. Individual times of mid-transit are measured with uncertainties as low as 10 s, allowing us to reduce the uncertainty in the orbital period by a factor of two. Conclusions: A Bayesian analysis reveals that it is highly improbable that the observed transit times are explained by TTVs when compared with the simpler alternative of a linear ephemeris.
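    The model comparison at the heart of the conclusion (linear ephemeris versus TTV) can be sketched in a simplified, BIC-based form. Everything here is invented: the TTV period is assumed known so both fits stay linear, and a large sinusoidal signal is deliberately injected so the comparison machinery has something to find (the opposite of the paper's null result).

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic mid-transit times with an injected sinusoidal TTV (illustrative only)
n, P, T0 = 60, 1.58040, 2454966.5   # epochs, period (d), reference time (d)
amp, P_ttv = 0.001, 12.0            # TTV amplitude (d) and period (epochs)
E = np.arange(n)
sigma = 1.2e-4                      # ~10 s timing uncertainty
t = T0 + E * P + amp * np.sin(2 * np.pi * E / P_ttv) + rng.normal(0, sigma, n)

def fit_bic(A, y):
    """Least-squares fit; Gaussian BIC = n*ln(RSS/n) + k*ln(n)."""
    coef, rss, *_ = np.linalg.lstsq(A, y, rcond=None)
    return len(y) * np.log(rss[0] / len(y)) + A.shape[1] * np.log(len(y))

A_lin = np.column_stack([np.ones(n), E])
A_ttv = np.column_stack([A_lin, np.sin(2 * np.pi * E / P_ttv),
                         np.cos(2 * np.pi * E / P_ttv)])

bic_lin, bic_ttv = fit_bic(A_lin, t), fit_bic(A_ttv, t)
print(bic_ttv < bic_lin)  # True: the injected TTV is strongly preferred
```

    In the paper's case the same comparison runs the other way: the extra TTV parameters do not reduce the residuals enough to pay their complexity penalty, so the linear ephemeris wins.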