
    Association is not causation: treatment effects cannot be estimated from observational data in heart failure

    Aims: Treatment ‘effects’ are often inferred from non-randomized and observational studies. These studies have inherent biases and limitations, which may make therapeutic inferences based on their results unreliable. We compared the conflicting findings of these studies to those of prospective randomized controlled trials (RCTs) in relation to pharmacological treatments for heart failure (HF). Methods and results: We searched Medline and Embase to identify studies of the association between non-randomized drug therapy and all-cause mortality in patients with HF until 31 December 2017. The treatments of interest were: angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, beta-blockers, mineralocorticoid receptor antagonists (MRAs), statins, and digoxin. We compared the findings of these observational studies with those of relevant RCTs. We identified 92 publications, reporting 94 non-randomized studies, describing 158 estimates of the ‘effect’ of the six treatments of interest on all-cause mortality, i.e. some studies examined more than one treatment and/or HF phenotype. These six treatments had been tested in 25 RCTs. For example, two pivotal RCTs showed that MRAs reduced mortality in patients with HF with reduced ejection fraction. However, only one of 12 non-randomized studies found that MRAs were of benefit, with 10 finding a neutral effect, and one a harmful effect. Conclusion: This comprehensive comparison of studies of non-randomized data with the findings of RCTs in HF shows that it is not possible to make reliable therapeutic inferences from observational associations. While trials undoubtedly leave gaps in evidence and enrol selected participants, they clearly remain the best guide to the treatment of patients.
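
    The core problem highlighted above is confounding by indication: sicker patients are both more likely to receive a given therapy and more likely to die, so a crude observational comparison can reverse the sign of a genuinely beneficial treatment effect. The following is a minimal simulation sketch of that mechanism; all variable names and effect sizes are hypothetical illustrations, not data or analyses from the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder: disease severity (standardized; higher = sicker).
severity = rng.normal(size=n)

# Confounding by indication: sicker patients are more likely to receive the drug.
p_treated = 1.0 / (1.0 + np.exp(-2.0 * severity))
treated = rng.random(n) < p_treated

# True causal effect built into the simulation: the drug is protective
# (log-odds of death reduced by 0.5).
logit_death = -1.0 + 1.5 * severity - 0.5 * treated
p_death = 1.0 / (1.0 + np.exp(-logit_death))
died = rng.random(n) < p_death

# Naive observational comparison, ignoring severity.
print(f"Crude mortality, treated:   {died[treated].mean():.3f}")
print(f"Crude mortality, untreated: {died[~treated].mean():.3f}")

    In this toy setup the treated group shows markedly higher crude mortality despite the protective effect built into the data-generating process, which is exactly the kind of reversal that randomization is designed to prevent.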

    Protecting the regenerative environment: selecting the optimal delivery vehicle for cartilage repair—a narrative review

    Focal cartilage defects are common in young and older adults, cause significant morbidity and constitute a major risk factor for developing osteoarthritis (OA). OA is the most common musculoskeletal (MSK) disease worldwide, resulting in pain, stiffness and loss of function, and is currently irreversible. Research into optimal regenerative approaches and methods for focal cartilage defects and/or OA holds to the ideal of resolving both diseases. The two fundamentals required for cartilage regenerative treatment are 1) the biological element contributing to the regeneration (e.g., direct application of stem cells, or of an exogenous secretome), and 2) the vehicle by which the biological element is suspended and delivered. The vehicle supports the regenerative process by providing a protective environment, a structure that allows cell adherence and migration, and a source of growth and regenerative factors that can activate and sustain regeneration. Models of cartilage disease include the osteochondral defect (OCD), which usually involves one focal lesion, and OA, which involves more diffuse articular cartilage loss. Given the differing nature of these models, the optimal regenerative strategy to treat different cartilage diseases may not be universal. This could potentially limit the translatability of a successful approach from one condition to the other. An analogy would be the repair of a pothole (OCD) versus repaving the entire road (OA). In this narrative review, we explore the existing literature evaluating cartilage regeneration approaches for OCD and OA in animal and then in human studies, together with the vehicles used for each of these two conditions. We then highlight strengths and challenges faced by the different approaches presented and discuss what might constitute the optimal cartilage regenerative delivery vehicle for clinical cartilage regeneration.

    Nonlocality as a Benchmark for Universal Quantum Computation in Ising Anyon Topological Quantum Computers

    An obstacle affecting any proposal for a topological quantum computer based on Ising anyons is that quasiparticle braiding can only implement a finite (non-universal) set of quantum operations. The computational power of this restricted set of operations (often called stabilizer operations) has been studied in quantum information theory, and it is known that no quantum-computational advantage can be obtained without the help of an additional non-stabilizer operation. Similarly, a bipartite two-qubit system based on Ising anyons cannot exhibit non-locality (in the sense of violating a Bell inequality) when only topologically protected stabilizer operations are performed. To produce correlations that cannot be described by a local hidden variable model again requires the use of a non-stabilizer operation. Using geometric techniques, we relate the sets of operations that enable universal quantum computing (UQC) with those that enable violation of a Bell inequality. Motivated by the fact that non-stabilizer operations are expected to be highly imperfect, our aim is to provide a benchmark for identifying UQC-enabling operations that is both experimentally practical and conceptually simple. We show that any (noisy) single-qubit non-stabilizer operation that, together with perfect stabilizer operations, enables violation of the simplest two-qubit Bell inequality can also be used to enable UQC. This benchmarking requires finding the expectation values of two distinct Pauli measurements on each qubit of a bipartite system.
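
    For reference, the "simplest two-qubit Bell inequality" is standardly the CHSH (Clauser–Horne–Shimony–Holt) inequality; a minimal statement of the benchmark quantity, with illustrative measurement labels rather than the paper's specific settings, is:

\[
  S \;=\; \langle A_1 B_1 \rangle + \langle A_1 B_2 \rangle + \langle A_2 B_1 \rangle - \langle A_2 B_2 \rangle,
  \qquad
  |S| \le 2 \ \text{(local hidden variable models)},
  \qquad
  |S| \le 2\sqrt{2} \ \text{(quantum mechanics)},
\]

    where $A_1, A_2$ and $B_1, B_2$ are $\pm 1$-valued Pauli measurements, two per qubit, which matches the two distinct Pauli expectation values per qubit mentioned in the abstract.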

    Don't cut to the chase: hunting experiences for zoo animals and visitors

    This workshop explores different ways to use technology to facilitate hunting behaviour enrichment for zoo-housed animals and parallel gaming experiences for zoo visitors.

    Noise Thresholds for Higher Dimensional Systems using the Discrete Wigner Function

    For a quantum computer acting on d-dimensional systems, we analyze the computational power of circuits wherein stabilizer operations are perfect and we allow access to imperfect non-stabilizer states or operations. If the noise rate affecting the non-stabilizer resource is sufficiently high, then these states and operations can become simulable in the sense of the Gottesman-Knill theorem, reducing the overall power of the circuit to no better than classical. In this paper we find the depolarizing noise rate at which this happens, and consequently the most robust non-stabilizer states and non-Clifford gates. In doing so, we make use of the discrete Wigner function and derive facets of the so-called qudit Clifford polytope, i.e. the inequalities defining the convex hull of all qudit Clifford gates. Our results for robust states are provably optimal. For robust gates we find a critical noise rate that, as dimension increases, rapidly approaches the theoretical optimum of 100%. Some connections with the question of qudit magic state distillation are discussed.
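
    For orientation, the depolarizing noise model referred to above acts on a d-dimensional state in the standard way (the critical rates themselves are the paper's results and are not reproduced here):

\[
  \mathcal{D}_p(\rho) \;=\; (1 - p)\,\rho \;+\; p\,\frac{\mathbb{1}}{d},
\]

    and the question addressed is the smallest noise rate $p$ at which a given non-stabilizer state or non-Clifford gate, subjected to this channel, can no longer lift perfect stabilizer circuits beyond classical simulability.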

    Unifying Gate Synthesis and Magic State Distillation

    The leading paradigm for performing a computation on quantum memories can be encapsulated as distill-then-synthesize. Initially, one performs several rounds of distillation to create high-fidelity magic states that provide one good T gate, an essential quantum logic gate. Subsequently, gate synthesis intersperses many T gates with Clifford gates to realize a desired circuit. We introduce a unified framework that implements one round of distillation and multiqubit gate synthesis in a single step. Typically, our method uses the same number of T gates as conventional synthesis but with the added benefit of quadratic error suppression. Because of this, one less round of magic state distillation needs to be performed, leading to significant resource savings.
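
    For reference, the T gate and the error-suppression claim can be summarized as follows (standard definitions; the $O(\epsilon^2)$ statement paraphrases the abstract rather than reproducing the paper's analysis):

\[
  T \;=\; \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix},
  \qquad
  |T\rangle \;\propto\; |0\rangle + e^{i\pi/4}|1\rangle,
\]

    so each T gate is implemented by consuming one magic state $|T\rangle$ via gate teleportation, and quadratic error suppression means that if each consumed state carries error $\epsilon$, the synthesized multiqubit gate inherits an error of order $\epsilon^2$ rather than $\epsilon$.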

    Serum free light chains are reduced in endurance trained older adults: Evidence that exercise training may reduce basal inflammation in older adults

    Traditionally, free light chains (FLCs) are used as key serum biomarkers in the diagnosis and monitoring of plasma cell malignancies, but polyclonal FLCs can also be used as an accurate real-time indicator of immune activation and inflammation. The primary aim of the present study was to assess the effects of exercise training status on serum FLCs in older adults; the secondary aim was to examine whether training status moderated serum FLC responses to acute exercise. Kappa and lambda serum FLC levels were measured in 45 healthy older adults (aged ≥ 60 years) who were either sedentary, physically active or endurance trained. FLCs were measured at baseline and in response to an acute bout of submaximal exercise. The endurance trained group had significantly lower levels of kappa and lambda serum FLCs compared with physically active or sedentary older adults; these effects were independent of age, BMI and renal function. There was no significant difference in whole immunoglobulins between groups. Exercise training status had no effect on serum FLC responses to acute exercise, which were marginal. In conclusion, endurance training was associated with lower FLC levels compared with less physically active individuals. These findings suggest that long-term endurance training may be beneficial in reducing basal inflammation in older adults, as well as the elevated FLCs present in inflammatory and autoimmune conditions often associated with ageing. FLCs may serve as a useful biomarker for monitoring the efficacy of exercise intervention studies in healthy and clinical populations.