
    Expression profiling of blood samples from an SU5416 Phase III metastatic colorectal cancer clinical trial: a novel strategy for biomarker identification

    BACKGROUND: Microarray-based gene expression profiling is a powerful approach for the identification of molecular biomarkers of disease, particularly in human cancers. The utility of this approach for measuring responses to therapy is less well established, in part due to challenges in obtaining serial biopsies. Identification of suitable surrogate tissues will help minimize the limitations imposed by those challenges. This study describes an approach used to identify gene expression changes that might serve as surrogate biomarkers of drug activity. METHODS: Expression profiling using microarrays was applied to peripheral blood mononuclear cell (PBMC) samples obtained from patients with advanced colorectal cancer participating in a Phase III clinical trial. The PBMC samples were harvested pre-treatment and at the end of the first 6-week cycle from patients receiving standard of care chemotherapy or standard of care plus SU5416, a vascular endothelial growth factor (VEGF) receptor tyrosine kinase (RTK) inhibitor. Results from matched pairs of PBMC samples from 23 patients were queried for expression changes that consistently correlated with SU5416 administration. RESULTS: Thirteen transcripts met this selection criterion; six were further tested by quantitative RT-PCR analysis of 62 additional samples from this trial and a second SU5416 Phase III trial of similar design. This method confirmed four of these transcripts (CD24, lactoferrin, lipocalin 2, and MMP-9) as potential biomarkers of drug treatment. Discriminant analysis showed that expression profiles of these four transcripts could be used to classify patients by treatment arm in a predictive fashion. CONCLUSIONS: These results establish a foundation for the further exploration of peripheral blood cells as a surrogate system for biomarker analyses in clinical oncology studies.
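    The discriminant analysis step above reduces to fitting a linear classifier on the four confirmed transcripts. A minimal sketch with scikit-learn, using synthetic expression-change data in place of the trial's measurements (the patient counts, effect sizes, and fold changes below are all assumptions, not trial data):

```python
# Minimal sketch: classifying patients by treatment arm from expression
# changes of four transcripts, as in the discriminant analysis described
# above. All data here are synthetic placeholders, not trial data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_arm = 23  # per-arm count is illustrative

# Hypothetical log2 fold changes (post- vs pre-treatment) for
# CD24, lactoferrin, lipocalin 2, and MMP-9 in each patient.
control = rng.normal(0.0, 0.5, size=(n_per_arm, 4))   # standard of care
su5416 = rng.normal(0.8, 0.5, size=(n_per_arm, 4))    # assumed shift

X = np.vstack([control, su5416])
y = np.array([0] * n_per_arm + [1] * n_per_arm)  # 1 = SOC + SU5416

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f} ± {acc.std():.2f}")
```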

    Development and validation of a rabbit model of Pseudomonas aeruginosa non-ventilated pneumonia for preclinical drug development

    Background: New drugs targeting antimicrobial-resistant pathogens, including Pseudomonas aeruginosa, have been challenging to evaluate in clinical trials, particularly for the non-ventilated hospital-acquired pneumonia and ventilator-associated pneumonia indications. Development of new antibacterial drugs is facilitated by preclinical animal models that could predict clinical efficacy in patients with these infections. Methods: We report here an FDA-funded study to develop a rabbit model of non-ventilated pneumonia with Pseudomonas aeruginosa by determining the extent to which the natural history of animal disease reproduced human pathophysiology and by conducting validation studies to evaluate whether humanized dosing regimens of two antibiotics, meropenem and tobramycin, can halt or reverse disease progression. Results: In a rabbit model of non-ventilated pneumonia, endobronchial challenge with live P. aeruginosa strain 6206, but not with UV-killed Pa6206, caused acute respiratory distress syndrome, as evidenced by acute lung inflammation, pulmonary edema, hemorrhage, severe hypoxemia, hyperlactatemia, neutropenia, thrombocytopenia, and hypoglycemia, which preceded respiratory failure and death. Pa6206 increased >100-fold in the lungs and then disseminated from there to infect distal organs, including the spleen and kidneys. At 5 h post-infection, 67% of Pa6206-challenged rabbits had PaO2 <60 mmHg, corresponding to the clinical cut-off at which oxygen therapy would be required. When administered at 5 h post-infection, humanized dosing regimens of tobramycin and meropenem reduced mortality to 17-33%, compared to 100% for saline-treated rabbits (P<0.001 by log-rank tests). For meropenem, which exhibits time-dependent bactericidal activity, rabbits treated with a humanized meropenem dosing regimen of 80 mg/kg q2h for 24 h achieved 100% T>MIC, resulting in a 75% microbiological clearance rate of Pa6206 from the lungs. For tobramycin, which exhibits concentration-dependent killing, rabbits treated with a humanized tobramycin dosing regimen of 8 mg/kg q8h for 24 h achieved a Cmax/MIC of 9.8 ± 1.4 at 60 min post-dose, resulting in a 50% lung microbiological clearance rate. In contrast, rabbits treated with a single tobramycin dose of 2.5 mg/kg had a Cmax/MIC of 7.8 ± 0.8 and an 8% (1/12) microbiological clearance rate, indicating that this rabbit model can detect dose-response effects. Conclusion: The rabbit model may be used to help predict the clinical efficacy of new antibacterial drugs for the treatment of non-ventilated P. aeruginosa pneumonia.
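    The two PK/PD indices invoked above can be made concrete with a one-compartment, IV-bolus model at steady state. A minimal sketch; the volume of distribution, half-lives, and MIC below are illustrative assumptions chosen only so the printed indices land near the reported values, not measured rabbit parameters:

```python
# Sketch of the two PK/PD indices named above: %T>MIC (time-dependent
# killing, meropenem) and Cmax/MIC (concentration-dependent killing,
# tobramycin), under a simple one-compartment IV-bolus model at steady
# state. All PK parameters and the MIC are illustrative assumptions.
import numpy as np

def concentration(t, dose_mg_per_kg, vd_l_per_kg, half_life_h, tau_h):
    """Steady-state concentration (mg/L) over one dosing interval,
    including geometric accumulation of residual drug from prior doses."""
    k = np.log(2) / half_life_h
    c0 = dose_mg_per_kg / vd_l_per_kg
    return c0 * np.exp(-k * t) / (1 - np.exp(-k * tau_h))

mic = 1.0                      # mg/L, assumed MIC of the challenge strain
t = np.linspace(0, 2, 2001)    # one q2h interval for meropenem

c_mero = concentration(t, dose_mg_per_kg=80, vd_l_per_kg=0.3,
                       half_life_h=0.8, tau_h=2.0)
t_above_mic = 100 * np.mean(c_mero > mic)
print(f"meropenem %T>MIC over the interval: {t_above_mic:.0f}%")   # 100%

c_tobra_max = concentration(0.0, dose_mg_per_kg=8, vd_l_per_kg=0.8,
                            half_life_h=2.0, tau_h=8.0)
print(f"tobramycin Cmax/MIC: {c_tobra_max / mic:.1f}")             # ~10.7
```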

    Effects of antiplatelet therapy on stroke risk by brain imaging features of intracerebral haemorrhage and cerebral small vessel diseases: subgroup analyses of the RESTART randomised, open-label trial

    Background: Findings from the RESTART trial suggest that starting antiplatelet therapy might reduce the risk of recurrent symptomatic intracerebral haemorrhage compared with avoiding antiplatelet therapy. Brain imaging features of intracerebral haemorrhage and cerebral small vessel diseases (such as cerebral microbleeds) are associated with greater risks of recurrent intracerebral haemorrhage. We did subgroup analyses of the RESTART trial to explore whether these brain imaging features modify the effects of antiplatelet therapy.
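    A subgroup analysis of this kind typically tests effect modification by adding a treatment × imaging-feature interaction term to a survival model. A minimal sketch with lifelines on synthetic data (column names, effect sizes, and follow-up are all invented; this is not the RESTART analysis code):

```python
# Sketch of an effect-modification (subgroup) analysis: a Cox model with
# a treatment x cerebral-microbleeds interaction term. The data frame is
# synthetic; column names are illustrative, not RESTART variables.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "antiplatelet": rng.integers(0, 2, n),   # 1 = start antiplatelet
    "microbleeds": rng.integers(0, 2, n),    # 1 = microbleeds on imaging
})
# Synthetic event times with an assumed interaction effect baked in.
hazard = 0.05 * np.exp(-0.4 * df.antiplatelet
                       + 0.5 * df.microbleeds
                       + 0.2 * df.antiplatelet * df.microbleeds)
df["time"] = rng.exponential(1 / hazard)
df["event"] = (df["time"] < 5).astype(int)   # administrative censoring
df["time"] = df["time"].clip(upper=5)

df["antiplatelet_x_microbleeds"] = df.antiplatelet * df.microbleeds
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # the interaction row tests effect modification
```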

    Overcoming leakage in scalable quantum error correction

    Leakage of quantum information out of computational states into higher energy states represents a major challenge in the pursuit of quantum error correction (QEC). In a QEC circuit, leakage builds over time and spreads through multi-qubit interactions. This leads to correlated errors that degrade the exponential suppression of logical error with scale, challenging the feasibility of QEC as a path towards fault-tolerant quantum computation. Here, we demonstrate the execution of a distance-3 surface code and distance-21 bit-flip code on a Sycamore quantum processor where leakage is removed from all qubits in each cycle. This shortens the lifetime of leakage and curtails its ability to spread and induce correlated errors. We report a ten-fold reduction in steady-state leakage population on the data qubits encoding the logical state and an average leakage population of less than 1×10⁻³ throughout the entire device. The leakage removal process itself efficiently returns leakage population back to the computational basis, and adding it to a code circuit prevents leakage from inducing correlated error across cycles, restoring a fundamental assumption of QEC. With this demonstration that leakage can be contained, we resolve a key challenge for practical QEC at scale.
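    The benefit of per-cycle removal can be seen in a toy two-state Markov model: if a qubit leaks with probability p per cycle and returns to the computational subspace with probability r, the leaked population settles at L* = p/(p + r), so raising the return rate directly suppresses the steady state. A sketch with assumed rates (not Sycamore parameters):

```python
# Toy two-state Markov model of leakage accumulation, illustrating why
# removing leakage every cycle suppresses the steady-state leaked
# population. All rates below are illustrative assumptions.
def steady_state_leakage(p_leak, p_return):
    # L_{t+1} = (1 - L_t) * p_leak + L_t * (1 - p_return);
    # setting L_{t+1} = L_t gives L* = p_leak / (p_leak + p_return).
    return p_leak / (p_leak + p_return)

p_leak = 2e-4          # assumed leakage probability per cycle
r_natural = 0.02       # assumed return rate without dedicated removal
r_with_reset = 0.20    # assumed return rate with per-cycle removal

print(steady_state_leakage(p_leak, r_natural))     # ~9.9e-3
print(steady_state_leakage(p_leak, r_with_reset))  # ~1.0e-3, ~10x lower
```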

    Suppressing quantum errors by scaling a surface code logical qubit

    Practical quantum computing will require error rates that are well below what is achievable with physical qubits. Quantum error correction offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, where increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low in order for logical performance to improve with increasing code size. Here, we report the measurement of logical qubit performance scaling across multiple code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, both in terms of logical error probability over 25 cycles and logical error per cycle (2.914% ± 0.016% compared to 3.028% ± 0.023%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7×10⁻⁶ logical error per round floor set by a single high-energy event (1.6×10⁻⁷ when excluding this event). We are able to accurately model our experiment, and from this model we can extract error budgets that highlight the biggest challenges for future systems. These results mark the first experimental demonstration where quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation.
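    Two small calculations clarify the numbers above. Under the standard scaling ansatz ε_d ∝ Λ^{-(d+1)/2}, the ratio of the d = 3 and d = 5 per-cycle error rates gives the error-suppression factor Λ; and a cumulative logical error probability can be converted to a per-cycle rate via fidelity decay (a common convention, assumed here; the paper's exact definition may differ):

```python
# The scaling ansatz eps_d ∝ Λ^{-(d+1)/2} implies Λ = eps_d / eps_{d+2}.
# Plugging in the per-cycle rates quoted in the abstract:
eps_d3 = 0.03028   # distance-3 logical error per cycle
eps_d5 = 0.02914   # distance-5 logical error per cycle
lam = eps_d3 / eps_d5
print(f"error-suppression factor Lambda ≈ {lam:.3f}")   # ~1.04, i.e. >1

def per_cycle_error(p_total, n_cycles):
    """Per-cycle logical error rate from the error probability after
    n_cycles, assuming fidelity decays as (1 - 2*eps)**n."""
    return 0.5 * (1 - (1 - 2 * p_total) ** (1 / n_cycles))

# Example: a 40% logical error probability after 25 cycles corresponds
# to roughly 3.1% per cycle under this convention.
print(per_cycle_error(0.40, 25))
```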

    Measurement-induced entanglement and teleportation on a noisy quantum processor

    Measurement has a special role in quantum theory: by collapsing the wavefunction it can enable phenomena such as teleportation and thereby alter the "arrow of time" that constrains unitary evolution. When integrated in many-body dynamics, measurements can lead to emergent patterns of quantum information in space-time that go beyond established paradigms for characterizing phases, either in or out of equilibrium. On present-day NISQ processors, the experimental realization of this physics is challenging due to noise, hardware limitations, and the stochastic nature of quantum measurement. Here we address each of these experimental challenges and investigate measurement-induced quantum information phases on up to 70 superconducting qubits. By leveraging the interchangeability of space and time, we use a duality mapping to avoid mid-circuit measurement and access different manifestations of the underlying phases -- from entanglement scaling to measurement-induced teleportation -- in a unified way. We obtain finite-size signatures of a phase transition with a decoding protocol that correlates the experimental measurement record with classical simulation data. The phases display sharply different sensitivity to noise, which we exploit to turn an inherent hardware limitation into a useful diagnostic. Our work demonstrates an approach to realize measurement-induced physics at scales that are at the limits of current NISQ processors.
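    The underlying physics can be reproduced at toy scale with a direct state-vector simulation of a monitored random circuit: brick-wall layers of Haar-random two-qubit gates, each qubit measured with probability p per layer, with the half-chain entanglement entropy separating the two phases. A sketch under those assumptions (it uses mid-circuit measurement directly rather than the paper's space-time duality, and 10 qubits rather than 70):

```python
# Toy monitored random circuit: random two-qubit gates interleaved with
# probabilistic projective measurements. Small p gives large (volume-law-
# like) half-chain entropy; large p gives small (area-law-like) entropy.
import numpy as np

N = 10  # qubits

def haar_2q(rng):
    """Haar-random 4x4 unitary via QR decomposition with phase fix."""
    z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_2q(psi, u, i):
    """Apply 4x4 unitary u to adjacent qubits (i, i+1)."""
    psi = psi.reshape(2**i, 4, 2**(N - i - 2))
    psi = np.einsum("ab,iaj->ibj", u.T, psi)   # contract the gate index
    return psi.reshape(-1)

def measure(psi, q, rng):
    """Projective Z measurement of qubit q, with renormalization."""
    psi = psi.reshape(2**q, 2, 2**(N - q - 1))
    p0 = np.sum(np.abs(psi[:, 0, :]) ** 2)
    outcome = 0 if rng.random() < p0 else 1
    psi[:, 1 - outcome, :] = 0.0
    psi = psi.reshape(-1)
    return psi / np.linalg.norm(psi)

def half_chain_entropy(psi):
    """Von Neumann entropy of the left half of the chain, in bits."""
    s = np.linalg.svd(psi.reshape(2**(N // 2), 2**(N // 2)),
                      compute_uv=False)
    lam = s**2
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log2(lam))

def run(p, depth=20, shots=20, seed=0):
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(shots):
        psi = np.zeros(2**N, dtype=complex)
        psi[0] = 1.0
        for layer in range(depth):
            for i in range(layer % 2, N - 1, 2):   # brick-wall pattern
                psi = apply_2q(psi, haar_2q(rng), i)
            for q in range(N):
                if rng.random() < p:
                    psi = measure(psi, q, rng)
        out.append(half_chain_entropy(psi))
    return np.mean(out)

print("S(p=0.05) ≈", run(0.05))   # weakly monitored: large entropy
print("S(p=0.50) ≈", run(0.50))   # strongly monitored: small entropy
```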

    Non-Abelian braiding of graph vertices in a superconducting processor

    Indistinguishability of particles is a fundamental principle of quantum mechanics. For all elementary and quasiparticles observed to date - including fermions, bosons, and Abelian anyons - this principle guarantees that the braiding of identical particles leaves the system unchanged. However, in two spatial dimensions, an intriguing possibility exists: braiding of non-Abelian anyons causes rotations in a space of topologically degenerate wavefunctions. Hence, it can change the observables of the system without violating the principle of indistinguishability. Despite the well-developed mathematical description of non-Abelian anyons and numerous theoretical proposals, the experimental observation of their exchange statistics has remained elusive for decades. Controllable many-body quantum states generated on quantum processors offer another path for exploring these fundamental phenomena. While efforts on conventional solid-state platforms typically involve Hamiltonian dynamics of quasiparticles, superconducting quantum processors allow for directly manipulating the many-body wavefunction via unitary gates. Building on predictions that stabilizer codes can host projective non-Abelian Ising anyons, we implement a generalized stabilizer code and unitary protocol to create and braid them. This allows us to experimentally verify the fusion rules of the anyons and braid them to realize their statistics. We then study the prospect of employing the anyons for quantum computation and utilize braiding to create an entangled state of anyons encoding three logical qubits. Our work provides new insights into non-Abelian braiding and - through the future inclusion of error correction to achieve topological protection - could open a path toward fault-tolerant quantum computing.
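    The fusion rules being verified are not spelled out in the abstract; for reference, the standard fusion rules of the Ising anyon theory, which the projective non-Abelian Ising anyons above obey, are:

```latex
% Fusion rules of Ising anyons: vacuum 1, fermion \psi, and the
% non-Abelian anyon \sigma. The two fusion channels of \sigma \times \sigma
% give 2n \sigma-anyons a 2^{n-1}-dimensional fusion space (at fixed total
% charge), which is the space that braiding acts on.
\sigma \times \sigma = 1 + \psi, \qquad
\sigma \times \psi = \sigma, \qquad
\psi \times \psi = 1
```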

    Cardiovascular disease, chronic kidney disease, and diabetes mortality burden of cardiometabolic risk factors from 1980 to 2010: A comparative risk assessment

    Background: High blood pressure, blood glucose, serum cholesterol, and BMI are risk factors for cardiovascular diseases and some of these factors also increase the risk of chronic kidney disease and diabetes. We estimated mortality from cardiovascular diseases, chronic kidney disease, and diabetes that was attributable to these four cardiometabolic risk factors for all countries and regions from 1980 to 2010. Methods: We used data for exposure to risk factors by country, age group, and sex from pooled analyses of population-based health surveys. We obtained relative risks for the effects of risk factors on cause-specific mortality from meta-analyses of large prospective studies. We calculated the population attributable fractions for each risk factor alone, and for the combination of all risk factors, accounting for multicausality and for mediation of the effects of BMI by the other three risks. We calculated attributable deaths by multiplying the cause-specific population attributable fractions by the number of disease-specific deaths. We obtained cause-specific mortality from the Global Burden of Diseases, Injuries, and Risk Factors 2010 Study. We propagated the uncertainties of all the inputs to the final estimates. Findings: In 2010, high blood pressure was the leading risk factor for deaths due to cardiovascular diseases, chronic kidney disease, and diabetes in every region, causing more than 40% of worldwide deaths from these diseases; high BMI and glucose were each responsible for about 15% of deaths, and high cholesterol for more than 10%. After accounting for multicausality, 63% (10·8 million deaths, 95% CI 10·1-11·5) of deaths from these diseases in 2010 were attributable to the combined effect of these four metabolic risk factors, compared with 67% (7·1 million deaths, 6·6-7·6) in 1980. The mortality burden of high BMI and glucose nearly doubled from 1980 to 2010. At the country level, age-standardised death rates from these diseases attributable to the combined effects of these four risk factors surpassed 925 deaths per 100 000 for men in Belarus, Kazakhstan, and Mongolia, but were less than 130 deaths per 100 000 for women and less than 200 for men in some high-income countries including Australia, Canada, France, Japan, the Netherlands, Singapore, South Korea, and Spain. Interpretation: The salient features of the cardiometabolic disease and risk factor epidemic at the beginning of the 21st century are high blood pressure and an increasing effect of obesity and diabetes. The mortality burden of cardiometabolic risk factors has shifted from high-income to low-income and middle-income countries. Lowering cardiometabolic risks through dietary, behavioural, and pharmacological interventions should be a part of the global response to non-communicable diseases. Funding: UK Medical Research Council, US National Institutes of Health.
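    The attribution arithmetic described in the Methods can be sketched in a few lines: a population attributable fraction (PAF) per risk factor, a combined PAF that handles multicausality by multiplying the complements rather than summing, and attributable deaths as PAF times disease-specific deaths. The prevalences, relative risks, and death count below are placeholders, and the study used continuous exposure distributions, so this dichotomous version is a simplification:

```python
# Sketch of the comparative-risk-assessment arithmetic: per-factor PAF,
# a combined PAF accounting for multicausality, and attributable deaths.
# All numbers are illustrative placeholders, not study inputs.
import numpy as np

def paf(prevalence, rr):
    """Levin's PAF for a single dichotomous risk factor."""
    return prevalence * (rr - 1) / (prevalence * (rr - 1) + 1)

risks = {                       # (prevalence, relative risk) -- assumed
    "high blood pressure": (0.30, 2.5),
    "high BMI":            (0.35, 1.6),
    "high glucose":        (0.10, 2.0),
    "high cholesterol":    (0.40, 1.4),
}

pafs = {name: paf(p, rr) for name, (p, rr) in risks.items()}
# Multicausality: risks overlap, so the combined PAF is not the sum but
# 1 minus the product of the complements.
combined = 1 - np.prod([1 - v for v in pafs.values()])

deaths = 17.0e6                 # illustrative disease-specific death count
print({k: round(v, 3) for k, v in pafs.items()})
print(f"combined PAF: {combined:.2f}")
print(f"attributable deaths: {combined * deaths / 1e6:.1f} million")
```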

    Testing a global standard for quantifying species recovery and assessing conservation impact.

    Recognizing the imperative to evaluate species recovery and conservation impact, in 2012 the International Union for Conservation of Nature (IUCN) called for development of a "Green List of Species" (now the IUCN Green Status of Species). A draft Green Status framework for assessing species' progress toward recovery, published in 2018, proposed 2 separate but interlinked components: a standardized method (i.e., measurement against benchmarks of species' viability, functionality, and preimpact distribution) to determine current species recovery status (herein species recovery score) and application of that method to estimate past and potential future impacts of conservation based on 4 metrics (conservation legacy, conservation dependence, conservation gain, and recovery potential). We tested the framework with 181 species representing diverse taxa, life histories, biomes, and IUCN Red List categories (extinction risk). Based on the observed distribution of species' recovery scores, we propose the following species recovery categories: fully recovered, slightly depleted, moderately depleted, largely depleted, critically depleted, extinct in the wild, and indeterminate. Fifty-nine percent of tested species were considered largely or critically depleted. Although there was a negative relationship between extinction risk and species recovery score, variation was considerable. Some species in lower risk categories were assessed as farther from recovery than those at higher risk. This emphasizes that species recovery is conceptually different from extinction risk and reinforces the utility of the IUCN Green Status of Species to more fully understand species conservation status. Although extinction risk did not predict conservation legacy, conservation dependence, or conservation gain, it was positively correlated with recovery potential. Only 1.7% of tested species were categorized as zero across all 4 of these conservation impact metrics, indicating that conservation has played, or will play, a role in improving or maintaining species status for the vast majority of these species. Based on our results, we devised an updated assessment framework that introduces the option of using a dynamic baseline to assess future impacts of conservation over the short term, to avoid the misleading results that were generated in a small number of cases, and redefines short term as 10 years to better align with conservation planning. These changes are reflected in the IUCN Green Status of Species Standard.
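    The species recovery score at the centre of the framework is, as we understand the published standard, computed by scoring each spatial unit of the species' pre-impact range as absent (0), present (1), viable (2), or functional (3) and expressing the total as a share of the maximum. A minimal sketch with an invented species:

```python
# Sketch of the species recovery score: each spatial unit is scored
# 0-3 and the score is the attained total over the maximum possible.
# This follows the Green Status scoring scheme as we understand it;
# the example species and unit states are invented.
STATE = {"absent": 0, "present": 1, "viable": 2, "functional": 3}

def recovery_score(unit_states):
    """Recovery score in percent across the species' spatial units."""
    total = sum(STATE[s] for s in unit_states)
    return 100.0 * total / (3 * len(unit_states))

# Hypothetical species assessed over five spatial units:
units = ["functional", "viable", "present", "present", "absent"]
print(f"recovery score: {recovery_score(units):.0f}%")   # 7/15 -> ~47%
```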

    Safety and efficacy of the ChAdOx1 nCoV-19 vaccine (AZD1222) against SARS-CoV-2: an interim analysis of four randomised controlled trials in Brazil, South Africa, and the UK

    Background: A safe and efficacious vaccine against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), if deployed with high coverage, could contribute to the control of the COVID-19 pandemic. We evaluated the safety and efficacy of the ChAdOx1 nCoV-19 vaccine in a pooled interim analysis of four trials. Methods: This analysis includes data from four ongoing blinded, randomised, controlled trials done across the UK, Brazil, and South Africa. Participants aged 18 years and older were randomly assigned (1:1) to ChAdOx1 nCoV-19 vaccine or control (meningococcal group A, C, W, and Y conjugate vaccine or saline). Participants in the ChAdOx1 nCoV-19 group received two doses containing 5 × 10¹⁰ viral particles (standard dose; SD/SD cohort); a subset in the UK trial received a half dose as their first dose (low dose) and a standard dose as their second dose (LD/SD cohort). The primary efficacy analysis included symptomatic COVID-19 in seronegative participants with a nucleic acid amplification test-positive swab more than 14 days after a second dose of vaccine. Participants were analysed according to treatment received, with data cutoff on Nov 4, 2020. Vaccine efficacy was calculated as 1 - relative risk derived from a robust Poisson regression model adjusted for age. Studies are registered at ISRCTN89951424 and ClinicalTrials.gov, NCT04324606, NCT04400838, and NCT04444674. Findings: Between April 23 and Nov 4, 2020, 23 848 participants were enrolled and 11 636 participants (7548 in the UK, 4088 in Brazil) were included in the interim primary efficacy analysis. In participants who received two standard doses, vaccine efficacy was 62·1% (95% CI 41·0–75·7; 27 [0·6%] of 4440 in the ChAdOx1 nCoV-19 group vs 71 [1·6%] of 4455 in the control group) and in participants who received a low dose followed by a standard dose, efficacy was 90·0% (67·4–97·0; three [0·2%] of 1367 vs 30 [2·2%] of 1374; pinteraction=0·010). Overall vaccine efficacy across both groups was 70·4% (95·8% CI 54·8–80·6; 30 [0·5%] of 5807 vs 101 [1·7%] of 5829). From 21 days after the first dose, there were ten cases hospitalised for COVID-19, all in the control arm; two were classified as severe COVID-19, including one death. There were 74 341 person-months of safety follow-up (median 3·4 months, IQR 1·3–4·8): 175 severe adverse events occurred in 168 participants, 84 events in the ChAdOx1 nCoV-19 group and 91 in the control group. Three events were classified as possibly related to a vaccine: one in the ChAdOx1 nCoV-19 group, one in the control group, and one in a participant who remains masked to group allocation. Interpretation: ChAdOx1 nCoV-19 has an acceptable safety profile and has been found to be efficacious against symptomatic COVID-19 in this interim analysis of ongoing clinical trials. Funding: UK Research and Innovation, National Institutes for Health Research (NIHR), Coalition for Epidemic Preparedness Innovations, Bill & Melinda Gates Foundation, Lemann Foundation, Rede D'Or, Brava and Telles Foundation, NIHR Oxford Biomedical Research Centre, Thames Valley and South Midlands NIHR Clinical Research Network, and AstraZeneca.
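    The efficacy calculation stated in the Methods (vaccine efficacy = 1 - relative risk from a robust Poisson model) can be reproduced approximately from the counts in the abstract. A sketch, unadjusted for age and without person-time offsets, so the crude figure differs slightly from the reported 62·1%:

```python
# Sketch of the stated efficacy calculation: VE = 1 - relative risk,
# first as simple arithmetic on the SD/SD counts from the abstract, then
# via a Poisson regression with robust (sandwich) standard errors. This
# simplified version omits the age adjustment and person-time offsets.
import numpy as np
import statsmodels.api as sm

# SD/SD cohort counts from the abstract
cases_vax, n_vax = 27, 4440
cases_ctl, n_ctl = 71, 4455

rr = (cases_vax / n_vax) / (cases_ctl / n_ctl)
print(f"crude VE = {100 * (1 - rr):.1f}%")   # ~61.8%

# Robust Poisson on individual-level 0/1 outcomes
y = np.concatenate([np.ones(cases_vax), np.zeros(n_vax - cases_vax),
                    np.ones(cases_ctl), np.zeros(n_ctl - cases_ctl)])
treat = np.concatenate([np.ones(n_vax), np.zeros(n_ctl)])
X = sm.add_constant(treat)
res = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print(f"model VE = {100 * (1 - np.exp(res.params[1])):.1f}%")
```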