107 research outputs found

    Testing for hereditary thrombophilia: a retrospective analysis of testing referred to a national laboratory

    Background: Predisposition to venous thrombosis may be assessed through testing for defects and/or deficiencies of a number of hereditary factors. There is potential for confusion about which of these tests are appropriate in which settings. At least one set of recommendations has been published to guide such testing, but it is unclear how widely these have been disseminated.
    Methods: We performed a retrospective analysis of laboratory orders and results at a national referral laboratory to gain insight into physicians' ordering practices, specifically comparing them against the ordering practices recommended by a 2002 College of American Pathologists (CAP) consensus conference on thrombophilia testing. Measurements included absolute and relative ordering volumes and positivity rates from approximately 200,000 thrombophilia tests performed from September 2005 through August 2006 at a national reference laboratory. Quality control data were used to estimate the proportion of samples that may have been affected by anticoagulant therapy. A sample of ordering laboratories was surveyed in order to assess potential measurement bias.
    Results: Total antigen assays for protein C, protein S, and antithrombin were ordered almost as frequently as functional assays for these analytes. The DNA test for factor V Leiden was ordered much more often than the corresponding functional assay. In addition, relative positivity rates coupled with elevations in prothrombin time (PT) in many of these patients suggest that these tests are often ordered in the setting of oral anticoagulant therapy.
    Conclusion: In this real-world setting, testing for inherited thrombophilia is frequently at odds with the recommendations of the CAP consensus conference. There is a need for wider dissemination of concise thrombophilia testing guidelines.
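
    As a rough illustration of the kind of analysis described above (absolute and relative ordering volumes together with positivity rates per test), here is a minimal Python sketch; the test names, records, and counts are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of an ordering-volume / positivity-rate summary of the kind
# described in the abstract. All test names and records below are hypothetical
# placeholders, not data from the study.
from collections import Counter

# Each record: (test_name, result_was_positive)
orders = [
    ("protein_C_antigen", False),
    ("protein_C_functional", True),
    ("factor_V_leiden_DNA", False),
    ("APC_resistance_functional", True),
    # ... the real data set comprised roughly 200,000 such records
]

volumes = Counter(test for test, _ in orders)
positives = Counter(test for test, positive in orders if positive)

for test, n in volumes.most_common():
    print(f"{test}: {n} orders, positivity {positives[test] / n:.1%}")

# Relative ordering volume, e.g. total-antigen vs. functional assays for protein C
ratio = volumes["protein_C_antigen"] / volumes["protein_C_functional"]
print(f"antigen:functional ordering ratio for protein C = {ratio:.2f}")
```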

    Biomaterial Scaffolds as Pre‐metastatic Niche Mimics Systemically Alter the Primary Tumor and Tumor Microenvironment

    Primary tumor (PT) immune cells and pre‐metastatic niche (PMN) sites are critical to metastasis. Recently, synthetic biomaterial scaffolds used as PMN mimics have been shown to capture both immune and metastatic tumor cells. Herein, studies are performed to investigate whether the scaffold‐mediated redirection of immune and tumor cells alters the primary tumor microenvironment (TME). Transcriptomic analysis of PT cells from scaffold‐implanted and mock‐surgery mice identifies differentially regulated pathways relevant to invasion and metastasis progression. These transcriptomic differences are hypothesized to result from scaffold‐mediated modulation of immune cell trafficking and phenotype in the TME. Culturing tumor cells with conditioned media generated from PT immune cells of scaffold‐implanted mice decreases invasion in vitro more than two‐fold relative to mock‐surgery controls and reduces the activity of invasion‐promoting transcription factors. Secretomic characterization of the conditioned media delineates interactions between immune cells in the TME and tumor cells, showing an increase in the pan‐metastasis inhibitor decorin and a concomitant decrease in the invasion‐promoting chemokine (C‐C motif) ligand 2 (CCL2) in scaffold‐implanted mice. Flow cytometric and transcriptomic profiling of PT immune cells identifies phenotypically distinct tumor‐associated macrophages (TAMs) in scaffold‐implanted mice, which may contribute to an invasion‐suppressive TME. Taken together, this study demonstrates that biomaterial scaffolds systemically influence metastatic progression through manipulation of the TME.
    Biomaterial implants that mimic the pre‐metastatic niche are shown to redirect immune and tumor cell populations in vivo. However, the systemic effects of pre‐metastatic niche mimics on metastasis progression have yet to be characterized. In this work, synthetic biomaterial implants were shown to systemically alter the primary tumor and the tumor microenvironment to promote an invasion‐suppressive phenotype.
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/144244/1/adhm201700903-sup-0001-S1.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/144244/2/adhm201700903_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/144244/3/adhm201700903.pd
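
    As a minimal illustration of the fold-change comparison described above (invasion of tumor cells cultured in conditioned media from scaffold-implanted versus mock-surgery mice), a short Python sketch follows; the cell counts and group names are hypothetical placeholders, not data from the study.

```python
# Hypothetical transwell-invasion counts per conditioned-media (CM) group;
# the numbers and group names are placeholders, not data from the study.
import statistics

invaded_cells = {
    "mock_surgery_CM": [412, 398, 430],   # CM from PT immune cells, mock-surgery mice
    "scaffold_CM": [175, 190, 160],       # CM from PT immune cells, scaffold-implanted mice
}

means = {group: statistics.mean(counts) for group, counts in invaded_cells.items()}
fold_change = means["mock_surgery_CM"] / means["scaffold_CM"]
print(f"invasion reduced roughly {fold_change:.1f}-fold in the scaffold-CM condition")
```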

    Resolving catastrophic error bursts from cosmic rays in large arrays of superconducting qubits

    Scalable quantum computing can become a reality with error correction, provided coherent qubits can be constructed in large arrays. The key premise is that physical errors can remain both small and sufficiently uncorrelated as devices scale, so that logical error rates can be exponentially suppressed. However, energetic impacts from cosmic rays and latent radioactivity violate both of these assumptions. An impinging particle ionizes the substrate, radiating high-energy phonons that induce a burst of quasiparticles, destroying qubit coherence throughout the device. High-energy radiation has been identified as a source of error in pilot superconducting quantum devices, but, lacking a measurement technique able to resolve a single event in detail, its effect on large-scale algorithms, and on error correction in particular, remains an open question. Elucidating the physics involved requires operating large numbers of qubits at the same rapid timescales as in error correction, exposing the event's evolution in time and spread in space. Here, we directly observe high-energy rays impacting a large-scale quantum processor. We introduce a rapid space- and time-multiplexed measurement method and identify large bursts of quasiparticles that simultaneously and severely limit the energy coherence of all qubits, causing chip-wide failure. We track the events from their initial localised impact to high error rates across the chip. Our results provide direct insights into the scale and dynamics of these damaging error bursts in large-scale devices, and highlight the necessity of mitigation to enable quantum computing to scale.
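
    To make the "burst" picture above concrete, the toy simulation below shows how a localized impact might look in repeated chip-wide error measurements: the per-cycle error probability jumps across the whole array at the impact time and then relaxes back toward baseline. The grid size, rates, and decay timescale are illustrative assumptions, not values from the paper.

```python
# Toy model of a cosmic-ray-like error burst across a qubit array.
# Grid size, error rates, and timescales are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

n_rows, n_cols = 5, 5      # hypothetical qubit grid
baseline_error = 0.01      # assumed per-cycle error probability
burst_peak = 0.5           # assumed elevated error probability at impact
decay_cycles = 40.0        # assumed relaxation timescale of the burst
impact_cycle, n_cycles = 100, 400

errors = np.zeros((n_cycles, n_rows, n_cols), dtype=bool)
for t in range(n_cycles):
    p = baseline_error
    if t >= impact_cycle:
        # chip-wide elevation that decays exponentially after the impact
        p += burst_peak * np.exp(-(t - impact_cycle) / decay_cycles)
    errors[t] = rng.random((n_rows, n_cols)) < p

# Per-cycle error fraction over the whole chip: the burst appears as a
# simultaneous spike followed by a slow recovery.
chip_error_fraction = errors.mean(axis=(1, 2))
print(chip_error_fraction[impact_cycle - 2 : impact_cycle + 5])
```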

    Readout of a quantum processor with high dynamic range Josephson parametric amplifiers

    We demonstrate a high dynamic range Josephson parametric amplifier (JPA) in which the active nonlinear element is implemented using an array of rf-SQUIDs. The device is matched to the 50 Ω environment with a Klopfenstein-taper impedance transformer and achieves a bandwidth of 250-300 MHz, with input saturation powers up to -95 dBm at 20 dB gain. A 54-qubit Sycamore processor was used to benchmark these devices, providing a calibration for readout power, an estimate of amplifier added noise, and a platform for comparison against standard impedance matched parametric amplifiers with a single dc-SQUID. We find that the high power rf-SQUID array design has no adverse effect on system noise, readout fidelity, or qubit dephasing, and we estimate an upper bound on amplifier added noise at 1.6 times the quantum limit. Lastly, amplifiers with this design show no degradation in readout fidelity due to gain compression, which can occur in multi-tone multiplexed readout with traditional JPAs.
    Comment: 9 pages, 8 figures
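
    For a rough sense of scale, the back-of-the-envelope estimate below converts the quoted input saturation power into a photon flux; the ~6 GHz readout frequency is an assumed, typical value that does not appear in the abstract.

```latex
% Back-of-the-envelope photon flux at the quoted input saturation power.
% The 6 GHz readout frequency is an assumed, typical value.
\[
P_{\mathrm{sat}} = -95~\mathrm{dBm} = 10^{-9.5}~\mathrm{mW} \approx 3.2\times10^{-13}~\mathrm{W},
\qquad
hf \approx (6.63\times10^{-34}~\mathrm{J\,s})(6\times10^{9}~\mathrm{Hz}) \approx 4.0\times10^{-24}~\mathrm{J},
\]
\[
\dot{n} = \frac{P_{\mathrm{sat}}}{hf}
\approx \frac{3.2\times10^{-13}~\mathrm{W}}{4.0\times10^{-24}~\mathrm{J}}
\approx 8\times10^{10}~\mathrm{photons/s},
\]
% i.e. roughly 80 photons per nanosecond before the amplifier compresses,
% consistent with the abstract's point that multi-tone multiplexed readout
% does not degrade fidelity through gain compression.
```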

    Measurement-Induced State Transitions in a Superconducting Qubit: Within the Rotating Wave Approximation

    Superconducting qubits typically use a dispersive readout scheme, where a resonator is coupled to a qubit such that its frequency is qubit-state dependent. Measurement is performed by driving the resonator, where the transmitted resonator field yields information about the resonator frequency and thus the qubit state. Ideally, we could use arbitrarily strong resonator drives to achieve a target signal-to-noise ratio in the shortest possible time. However, experiments have shown that when the average resonator photon number exceeds a certain threshold, the qubit is excited out of its computational subspace, which we refer to as a measurement-induced state transition. These transitions degrade readout fidelity and constitute leakage, which precludes further operation of the qubit in, for example, error correction. Here we study these transitions using a transmon qubit by experimentally measuring their dependence on qubit frequency, average photon number, and qubit state, in the regime where the resonator frequency is lower than the qubit frequency. We observe signatures of resonant transitions between levels in the coupled qubit-resonator system that exhibit noisy behavior when measured repeatedly in time. We provide a semi-classical model of these transitions based on the rotating wave approximation and use it to predict the onset of state transitions in our experiments. Our results suggest the transmon is excited to levels near the top of its cosine potential following a state transition, where the charge dispersion of higher transmon levels explains the observed noisy behavior of state transitions. Moreover, occupation in these higher energy levels poses a major challenge for fast qubit reset.
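
    For context on the dispersive readout scheme described above, the standard lowest-order dispersive Hamiltonian is sketched below in textbook circuit-QED notation; this is background material, not the semi-classical rotating-wave model developed in the paper.

```latex
% Standard dispersive-approximation Hamiltonian for a qubit coupled to a
% readout resonator (textbook circuit-QED form, not the paper's model).
\[
H/\hbar \;\approx\; \omega_r\, a^{\dagger}a
\;+\; \frac{\omega_q}{2}\,\sigma_z
\;+\; \chi\, a^{\dagger}a\,\sigma_z .
\]
% The resonator is pulled to \omega_r \pm \chi depending on the qubit state,
% so the transmitted readout tone carries qubit-state information. The
% semi-classical rotating-wave model in the paper goes beyond this
% lowest-order picture to capture resonant qubit-resonator transitions at
% large photon number.
```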

    Overcoming leakage in scalable quantum error correction

    Leakage of quantum information out of computational states into higher energy states represents a major challenge in the pursuit of quantum error correction (QEC). In a QEC circuit, leakage builds over time and spreads through multi-qubit interactions. This leads to correlated errors that degrade the exponential suppression of logical error with scale, challenging the feasibility of QEC as a path towards fault-tolerant quantum computation. Here, we demonstrate the execution of a distance-3 surface code and distance-21 bit-flip code on a Sycamore quantum processor where leakage is removed from all qubits in each cycle. This shortens the lifetime of leakage and curtails its ability to spread and induce correlated errors. We report a ten-fold reduction in steady-state leakage population on the data qubits encoding the logical state and an average leakage population of less than 1 × 10⁻³ throughout the entire device. The leakage removal process itself efficiently returns leakage population back to the computational basis, and adding it to a code circuit prevents leakage from inducing correlated error across cycles, restoring a fundamental assumption of QEC. With this demonstration that leakage can be contained, we resolve a key challenge for practical QEC at scale.
    Comment: Main text: 7 pages, 5 figures
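
    As a simple way to see why per-cycle leakage removal lowers the steady-state leakage population, a one-line rate-balance sketch follows; this is an illustrative model, not the analysis in the paper.

```latex
% Per-cycle rate balance for leakage (an illustrative sketch, not the paper's
% analysis): leakage is injected with probability \gamma per cycle and
% removed with probability r per cycle.
\[
p_{t+1} = (1-r)\,p_t + \gamma\,(1-p_t)
\quad\Longrightarrow\quad
p_{\infty} = \frac{\gamma}{\gamma + r} \;\approx\; \frac{\gamma}{r}
\quad (\gamma \ll r).
\]
% Raising the per-cycle removal probability r shortens the average lifetime
% of a leaked qubit to about 1/r cycles and suppresses p_\infty, consistent
% with the ten-fold reduction in steady-state leakage reported above.
```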
