
    Randomised, double-blind, placebo-controlled trials of non-individualised homeopathic treatment: systematic review and meta-analysis

    Background: A rigorous systematic review and meta-analysis focused on randomised controlled trials (RCTs) of non-individualised homeopathic treatment has not previously been reported. We tested the null hypothesis that the main outcome of treatment using a non-individualised (standardised) homeopathic medicine is indistinguishable from that of placebo. An additional aim was to quantify any condition-specific effects of non-individualised homeopathic treatment. Methods: Literature search strategy, data extraction and statistical analysis all followed the methods described in a pre-published protocol. A trial comprised ‘reliable evidence’ if its risk of bias was low or was unclear in one specified domain of assessment. ‘Effect size’ was reported as standardised mean difference (SMD), with arithmetic transformation for dichotomous data carried out as required; a negative SMD indicated an effect favouring homeopathy. Results: Forty-eight different clinical conditions were represented in 75 eligible RCTs. Forty-nine trials were classed as ‘high risk of bias’ and 23 as ‘uncertain risk of bias’; the remaining three, clinically heterogeneous, trials displayed sufficiently low risk of bias to be designated reliable evidence. Fifty-four trials had extractable data: pooled SMD was –0.33 (95% confidence interval (CI) –0.44, –0.21), which was attenuated to –0.16 (95% CI –0.31, –0.02) after adjustment for publication bias. The three trials with reliable evidence yielded a non-significant pooled SMD: –0.18 (95% CI –0.46, 0.09). There was no single clinical condition for which meta-analysis included reliable evidence. Conclusions: The quality of the body of evidence is low. A meta-analysis of all extractable data leads to rejection of our null hypothesis, but analysis of a small sub-group of reliable evidence does not support that rejection. Reliable evidence is lacking in condition-specific meta-analyses, precluding relevant conclusions. Better designed and more rigorous RCTs are needed to develop an evidence base that can decisively provide reliable effect estimates of non-individualised homeopathic treatment.
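    As a rough illustration of the effect-size machinery mentioned above, the sketch below pools per-trial standardised mean differences with inverse-variance (fixed-effect) weights. The trial numbers are invented, and the review itself used more elaborate models (random effects, adjustment for publication bias), so this only shows the basic pooling step.

```python
# Minimal sketch of inverse-variance pooling of standardised mean differences (SMDs).
# Hypothetical per-trial data; NOT the review's actual analysis or data.
import numpy as np

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Cohen's d with pooled SD; negative values favour homeopathy (as in the review)."""
    sd_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    var_d = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))  # approximate variance
    return d, var_d

# Three hypothetical trials: (mean_t, mean_c, sd_t, sd_c, n_t, n_c)
trials = [(4.1, 4.9, 2.0, 2.1, 40, 38),
          (3.2, 3.5, 1.8, 1.7, 55, 60),
          (6.0, 6.4, 2.5, 2.4, 30, 31)]

effects = [cohens_d(*t) for t in trials]
d_vals = np.array([d for d, _ in effects])
weights = np.array([1.0 / v for _, v in effects])      # inverse-variance weights

pooled = np.sum(weights * d_vals) / np.sum(weights)    # fixed-effect pooled SMD
se_pooled = np.sqrt(1.0 / np.sum(weights))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
print(f"pooled SMD = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```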

    Comparing Existing Pipeline Networks with the Potential Scale of Future U.S. CO2 Pipeline Networks

    Interest is growing regarding the potential size of a future dedicated U.S. carbon dioxide (CO2) pipeline infrastructure if carbon dioxide capture and storage (CCS) technologies are commercially deployed on a large scale within the United States. This paper assesses the potential scale of the CO2 pipeline system needed under two hypothetical climate policies (the WRE450 and WRE550 stabilization scenarios); a comparison is then made to the extant U.S. pipeline infrastructures used to deliver CO2 for enhanced oil recovery and to move natural gas and liquid hydrocarbons from areas of production and importation to markets. The analysis reveals that between 11,000 and 23,000 additional miles of dedicated CO2 pipeline might be needed in the United States before 2050 across these two cases. While either case represents a significant increase over the 3900 miles that comprise the existing national CO2 pipeline infrastructure, it is important to realize that the demand for additional CO2 pipeline capacity will unfold relatively slowly and in a geographically dispersed manner as new dedicated CCS-enabled power plants and industrial facilities are brought online. During the period 2010–2030, this analysis indicates growth in the CO2 pipeline system on the order of a few hundred to less than 1000 miles per year. By comparison, during the period 1950–2000, the U.S. natural gas pipeline distribution system grew at rates that far exceed these growth projections for a future CO2 pipeline network in the U.S. This analysis indicates that the need to increase the size of the existing dedicated CO2 pipeline system should not be seen as a major obstacle to the commercial deployment of CCS technologies in the United States. While there could be issues associated with siting specific segments of a larger national CO2 pipeline infrastructure, the sheer scale of the required infrastructure should not be seen as representing a significant impediment to U.S. deployment of CCS technologies. Document type: Report
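    As a back-of-the-envelope check of the build-out rates quoted above, the short calculation below spreads the projected 11,000–23,000 additional miles over a 2010–2050 deployment window; the window is an assumption for illustration, not the paper's model.

```python
# Quick sanity check of the annual build-out rates quoted in the abstract.
# Inputs taken from the text; the 2010-2050 window is an assumed deployment period.
low_miles, high_miles = 11_000, 23_000
years = 2050 - 2010

print(f"average build-out: {low_miles / years:.0f} to {high_miles / years:.0f} miles per year")
# -> roughly 275 to 575 miles per year, consistent with the "few hundred to
#    less than 1000 miles per year" growth described for 2010-2030.
```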

    Convex recovery of a structured signal from independent random linear measurements

    This chapter develops a theoretical analysis of the convex programming method for recovering a structured signal from independent random linear measurements. This technique delivers bounds for the sampling complexity that are similar to recent results for standard Gaussian measurements, but the argument applies to a much wider class of measurement ensembles. To demonstrate the power of this approach, the paper presents a short analysis of phase retrieval by trace-norm minimization. The key technical tool is a framework, due to Mendelson and coauthors, for bounding a nonnegative empirical process. Comment: 18 pages, 1 figure. To appear in "Sampling Theory, a Renaissance." v2: minor corrections. v3: updated citations and increased emphasis on Mendelson's contribution.
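    The general recovery technique analysed above can be illustrated with a small convex program: the sketch below recovers a sparse vector from independent Gaussian measurements by l1 minimisation (basis pursuit) using the cvxpy package. The dimensions, sparsity level, and number of measurements are arbitrary choices for illustration, not the chapter's setup.

```python
# Illustrative sketch (not the chapter's analysis): convex recovery of a sparse
# signal from independent random Gaussian linear measurements via basis pursuit.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, s = 200, 8                      # ambient dimension and sparsity of the true signal
m = 60                             # number of random linear measurements (assumed)

x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

A = rng.standard_normal((m, n))    # independent Gaussian measurement ensemble
y = A @ x_true                     # noiseless measurements

x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm(x, 1)), [A @ x == y])
problem.solve()

print("recovery error:", np.linalg.norm(x.value - x_true))
```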

    Solar Wind Turbulence and the Role of Ion Instabilities

    International audience

    Search for direct production of charginos and neutralinos in events with three leptons and missing transverse momentum in √s = 7 TeV pp collisions with the ATLAS detector

    A search for the direct production of charginos and neutralinos in final states with three electrons or muons and missing transverse momentum is presented. The analysis is based on 4.7 fb−1 of proton–proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with Standard Model expectations in three signal regions that are either depleted or enriched in Z-boson decays. Upper limits at 95% confidence level are set in R-parity conserving phenomenological minimal supersymmetric models and in simplified models, significantly extending previous results.
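    As a hedged illustration of what an upper limit at 95% confidence level means in a single counting experiment, the sketch below computes a one-bin Poisson upper limit on a signal yield. This is not the ATLAS statistical procedure (which uses profile-likelihood/CLs fits across several signal regions); all numbers are invented.

```python
# One-bin Poisson counting-experiment upper limit (CLs+b style), for illustration only.
# NOT the ATLAS procedure; observed count and background expectation are hypothetical.
from scipy.stats import poisson
from scipy.optimize import brentq

n_obs = 7          # hypothetical observed events in a signal region
b_exp = 6.2        # hypothetical expected Standard Model background

# 95% CL upper limit on the signal yield s: find s with P(N <= n_obs | b + s) = 0.05
def excluded_fraction(s):
    return poisson.cdf(n_obs, b_exp + s) - 0.05

s_up = brentq(excluded_fraction, 0.0, 50.0)
print(f"95% CL upper limit on signal events: {s_up:.1f}")
```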

    Jet size dependence of single jet suppression in lead-lead collisions at sqrt(s(NN)) = 2.76 TeV with the ATLAS detector at the LHC

    Measurements of inclusive jet suppression in heavy ion collisions at the LHC provide direct sensitivity to the physics of jet quenching. In a sample of lead-lead collisions at sqrt(s_NN) = 2.76 TeV corresponding to an integrated luminosity of approximately 7 inverse microbarns, ATLAS has measured jets with a calorimeter over the pseudorapidity interval |eta| < 2.1 and over the transverse momentum range 38 < pT < 210 GeV. Jets were reconstructed using the anti-kt algorithm with distance-parameter values (setting the nominal jet radius) of R = 0.2, 0.3, 0.4 and 0.5. The centrality dependence of the jet yield is characterized by the jet "central-to-peripheral ratio," Rcp. Jet production is found to be suppressed by approximately a factor of two in the 10% most central collisions relative to peripheral collisions. Rcp varies smoothly with centrality as characterized by the number of participating nucleons. The observed suppression is only weakly dependent on jet radius and transverse momentum. These results provide the first direct measurement of inclusive jet suppression in heavy ion collisions and complement previous measurements of dijet transverse energy imbalance at the LHC. Comment: 15 pages plus author list (30 pages total), 8 figures, 2 tables, submitted to Physics Letters B. All figures including auxiliary figures are available at http://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/HION-2011-02
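    The central-to-peripheral ratio Rcp mentioned above has a standard definition: the per-event jet yield in a central class, scaled by the number of binary nucleon-nucleon collisions (Ncoll), divided by the same quantity for a peripheral class. The minimal sketch below applies that definition to placeholder inputs; the values are not ATLAS data, and Rcp near 0.5 corresponds to the factor-of-two suppression quoted.

```python
# Sketch of R_cp using its standard definition; all inputs are hypothetical placeholders.
def r_cp(yield_central, n_evt_central, ncoll_central,
         yield_peripheral, n_evt_peripheral, ncoll_peripheral):
    """R_cp = (central yield per event / N_coll) / (peripheral yield per event / N_coll)."""
    central = yield_central / (n_evt_central * ncoll_central)
    peripheral = yield_peripheral / (n_evt_peripheral * ncoll_peripheral)
    return central / peripheral

# Hypothetical inputs: prints 0.5, i.e. a factor-of-two suppression in central events.
print(r_cp(yield_central=12000, n_evt_central=1e6, ncoll_central=1500,
           yield_peripheral=1600, n_evt_peripheral=1e6, ncoll_peripheral=100))
```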

    The Convex Geometry of Linear Inverse Problems

    In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However, in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered are those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases such as sparse vectors and low-rank matrices, as well as several others including sums of a few permutation matrices, low-rank tensors, orthogonal matrices, and atomic measures. The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. The quality of these approximations affects the number of measurements required for recovery. Thus this work extends the catalog of simple models that can be recovered from limited linear information via tractable convex programming.
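    As a concrete instance of the atomic-norm framework described above: for low-rank matrices the atomic norm is the nuclear norm, so recovery from partial information becomes a semidefinite-representable convex program. The sketch below performs matrix completion with cvxpy; the matrix size, rank, and sampling fraction are arbitrary assumptions for illustration, not an implementation of the paper's general theory.

```python
# Illustrative sketch: nuclear-norm (atomic-norm) minimisation for low-rank matrix
# completion from a random subset of entries. Sizes and sampling rate are assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, r = 20, 2                                       # matrix size and true rank
L, R = rng.standard_normal((n, r)), rng.standard_normal((r, n))
M = L @ R                                          # rank-r ground-truth matrix

mask = (rng.random((n, n)) < 0.5).astype(float)    # observe roughly half of the entries

X = cp.Variable((n, n))
constraints = [cp.multiply(mask, X) == mask * M]   # agree with M on observed entries
problem = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
problem.solve()

print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```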