
    "Last-place Aversion": Evidence and Redistributive Implications

    Why do low-income individuals often oppose redistribution? We hypothesize that an aversion to being in "last place" undercuts support for redistribution, with low-income individuals punishing those slightly below themselves to keep someone "beneath" them. In laboratory experiments, we find support for "last-place aversion" in the contexts of risk aversion and redistributive preferences. Participants choose gambles with the potential to move them out of last place that they reject when randomly placed in other parts of the distribution. Similarly, in money-transfer games, those randomly placed in second-to-last place are the least likely to costlessly give money to the player one rank below. Last-place aversion predicts that those earning just above the minimum wage will be most likely to oppose minimum-wage increases as they would no longer have a lower-wage group beneath them, a prediction we confirm using survey data.

    Observation Versus Intervention for Low-Grade Intracranial Dural Arteriovenous Fistulas

    BACKGROUND: Low-grade intracranial dural arteriovenous fistulas (dAVF) have a benign natural history in the majority of cases. The benefit from treatment of these lesions is controversial. OBJECTIVE: To compare the outcomes of observation versus intervention for low-grade dAVFs. METHODS: We retrospectively reviewed dAVF patients from institutions participating in the CONsortium for Dural arteriovenous fistula Outcomes Research (CONDOR). Patients with low-grade (Borden type I) dAVFs were included and categorized into intervention or observation cohorts. The intervention and observation cohorts were matched in a 1:1 ratio using propensity scores. Primary outcome was modified Rankin Scale (mRS) at final follow-up. Secondary outcomes were excellent (mRS 0-1) and good (mRS 0-2) outcomes, symptomatic improvement, mortality, and obliteration at final follow-up. RESULTS: The intervention and observation cohorts comprised 230 and 125 patients, respectively. We found no differences in primary or secondary outcomes between the 2 unmatched cohorts at last follow-up (mean duration 36 mo), except obliteration rate was higher in the intervention cohort (78.5% vs 24.1%, P < .001). The matched intervention and observation cohorts each comprised 78 patients. We also found no differences in primary or secondary outcomes between the matched cohorts except obliteration was also more likely in the matched intervention cohort (P < .001). Procedural complication rates in the unmatched and matched intervention cohorts were 15.4% and 19.2%, respectively. CONCLUSION: Intervention for low-grade intracranial dAVFs achieves superior obliteration rates compared to conservative management, but it fails to improve neurological or functional outcomes. Our findings do not support the routine treatment of low-grade dAVFs.
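
The 1:1 propensity-score matching described in the methods can be illustrated with a minimal sketch: greedy nearest-neighbour matching on precomputed scores. The scores, the 0.05 caliper, and the toy cohorts below are all illustrative assumptions, not the CONDOR analysis itself.

```python
import numpy as np

def greedy_match(treated_ps, control_ps, caliper=0.05):
    """Greedy 1:1 nearest-neighbour propensity-score matching.

    Each control is used at most once; candidate pairs whose scores
    differ by more than `caliper` are discarded.
    """
    available = list(range(len(control_ps)))
    pairs = []
    # Match the hardest-to-match (highest-score) treated units first.
    for t in np.argsort(-treated_ps):
        if not available:
            break
        dists = np.abs(control_ps[available] - treated_ps[t])
        j = int(np.argmin(dists))
        if dists[j] <= caliper:
            pairs.append((int(t), available.pop(j)))
    return pairs

# Toy cohorts standing in for intervention / observation scores.
rng = np.random.default_rng(0)
treated = rng.uniform(0.3, 0.9, size=20)
control = rng.uniform(0.1, 0.7, size=30)
pairs = greedy_match(treated, control)
```

In practice the scores are estimated with a logistic model of baseline covariates, and outcomes such as final mRS are then compared only within the matched pairs.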

    Onyx embolization for dural arteriovenous fistulas: a multi-institutional study

    BACKGROUND: Although the liquid embolic agent, Onyx, is often the preferred embolic treatment for cerebral dural arteriovenous fistulas (DAVFs), only a limited number of single-center studies have evaluated its performance. OBJECTIVE: To carry out a multicenter study to determine the predictors of complications, obliteration, and functional outcomes associated with primary Onyx embolization of DAVFs. METHODS: From the Consortium for Dural Arteriovenous Fistula Outcomes Research (CONDOR) database, we identified patients who were treated for DAVF with Onyx-only embolization as the primary treatment between 2000 and 2013. Obliteration rate after initial embolization was determined based on the final angiographic run. Factors predictive of complete obliteration, complications, and functional independence were evaluated with multivariate logistic regression models. RESULTS: A total of 146 patients with DAVFs were primarily embolized with Onyx. Mean follow-up was 29 months (range 0-129 months). Complete obliteration was achieved in 80 (55%) patients after initial embolization. Major cerebral complications occurred in six patients (4.1%). At last follow-up, 84% of patients were functionally independent. Presence of flow symptoms, age over 65, presence of an occipital artery feeder, and preprocedural home anticoagulation use were predictive of non-obliteration. The transverse-sigmoid sinus junction location was associated with fewer complications, whereas the tentorial location was predictive of poor functional outcomes. CONCLUSIONS: In this multicenter study, we report satisfactory performance of Onyx as a primary DAVF embolic agent. The tentorium remains a more challenging location for DAVF embolization, whereas DAVFs located at the transverse-sigmoid sinus junction are associated with fewer complications.
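
The multivariate logistic regression step can be sketched minimally with plain gradient ascent on synthetic data. The predictors, coefficients, and fitting procedure here are illustrative assumptions, not the study's actual model.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=3000):
    """Multivariate logistic regression via gradient ascent on the
    log-likelihood. X should include an intercept column."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

# Synthetic data: intercept plus two hypothetical predictors
# (say, age-over-65 and an anatomical feature), with a binary
# outcome standing in for "complete obliteration".
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
true_w = np.array([-0.5, 1.5, -1.0])
y = (rng.uniform(size=500) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)
w_hat = fit_logistic(X, y)
odds_ratios = np.exp(w_hat[1:])  # per-unit odds ratios for the predictors
```

Reporting `exp(coefficient)` as an odds ratio is the usual way such predictors of obliteration or functional independence are summarized.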

    Measurement-induced entanglement and teleportation on a noisy quantum processor

    Measurement has a special role in quantum theory: by collapsing the wavefunction it can enable phenomena such as teleportation and thereby alter the "arrow of time" that constrains unitary evolution. When integrated in many-body dynamics, measurements can lead to emergent patterns of quantum information in space-time that go beyond established paradigms for characterizing phases, either in or out of equilibrium. On present-day NISQ processors, the experimental realization of this physics is challenging due to noise, hardware limitations, and the stochastic nature of quantum measurement. Here we address each of these experimental challenges and investigate measurement-induced quantum information phases on up to 70 superconducting qubits. By leveraging the interchangeability of space and time, we use a duality mapping to avoid mid-circuit measurement and access different manifestations of the underlying phases -- from entanglement scaling to measurement-induced teleportation -- in a unified way. We obtain finite-size signatures of a phase transition with a decoding protocol that correlates the experimental measurement record with classical simulation data. The phases display sharply different sensitivity to noise, which we exploit to turn an inherent hardware limitation into a useful diagnostic. Our work demonstrates an approach to realize measurement-induced physics at scales that are at the limits of current NISQ processors.

    Insights into the Musa genome: Syntenic relationships to rice and between Musa species

    Background: Musa species (Zingiberaceae, Zingiberales), including bananas and plantains, are collectively the fourth most important crop in developing countries. Knowledge concerning Musa genome structure and the origin of distinct cultivars has greatly increased over the last few years. Until now, however, no large-scale analyses of Musa genomic sequence have been conducted. This study compares genomic sequence in two Musa species with orthologous regions in the rice genome. Results: We produced 1.4 Mb of Musa sequence from 13 BAC clones, annotated and analyzed them along with 4 previously sequenced BACs. The 443 predicted genes revealed that Zingiberales genes share GC content and distribution characteristics with eudicot and Poaceae genomes. Comparison with rice revealed microsynteny regions that have persisted since the divergence of the Commelinid orders Poales and Zingiberales at least 117 Mya. The previously hypothesized large-scale duplication event in the common ancestor of major cereal lineages within the Poaceae was verified. The divergence time distributions for Musa-Zingiber (Zingiberaceae, Zingiberales) orthologs and paralogs provide strong evidence for a large-scale duplication event in the Musa lineage after its divergence from the Zingiberaceae approximately 61 Mya. Comparisons of genomic regions from M. acuminata and M. balbisiana revealed highly conserved genome structure, and indicated that these genomes diverged circa 4.6 Mya. Conclusion: These results point to the utility of comparative analyses between distantly related monocot species such as rice and Musa for improving our understanding of monocot genome evolution. Sequencing the genome of M. acuminata would provide a strong foundation for comparative genomics in the monocots. In addition, a genome sequence would aid genomic and genetic analyses of cultivated Musa polyploid genotypes in research aimed at localizing and cloning genes controlling important agronomic traits for breeding purposes.
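
Ks-based divergence dating of the kind used for ortholog and paralog distributions reduces to T = Ks / (2r) for a synonymous substitution rate r. Both numbers below are illustrative assumptions, not values from the study.

```python
# Ks-based divergence dating: T = Ks / (2 * r), where r is the
# synonymous substitution rate per site per year.  Both values here
# are hypothetical, chosen only to show the arithmetic.
SUBS_RATE = 6.5e-9   # assumed synonymous substitutions / site / year
ks = 0.65            # hypothetical modal Ks for an ortholog pair
t_mya = ks / (2.0 * SUBS_RATE) / 1e6  # divergence time in Myr
```

The factor of 2 reflects that substitutions accumulate independently along both lineages since their split; the estimate is only as good as the assumed rate.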

    Non-Abelian braiding of graph vertices in a superconducting processor

    Indistinguishability of particles is a fundamental principle of quantum mechanics. For all elementary and quasiparticles observed to date - including fermions, bosons, and Abelian anyons - this principle guarantees that the braiding of identical particles leaves the system unchanged. However, in two spatial dimensions, an intriguing possibility exists: braiding of non-Abelian anyons causes rotations in a space of topologically degenerate wavefunctions. Hence, it can change the observables of the system without violating the principle of indistinguishability. Despite the well-developed mathematical description of non-Abelian anyons and numerous theoretical proposals, the experimental observation of their exchange statistics has remained elusive for decades. Controllable many-body quantum states generated on quantum processors offer another path for exploring these fundamental phenomena. While efforts on conventional solid-state platforms typically involve Hamiltonian dynamics of quasiparticles, superconducting quantum processors allow for directly manipulating the many-body wavefunction via unitary gates. Building on predictions that stabilizer codes can host projective non-Abelian Ising anyons, we implement a generalized stabilizer code and unitary protocol to create and braid them. This allows us to experimentally verify the fusion rules of the anyons and braid them to realize their statistics. We then study the prospect of employing the anyons for quantum computation and utilize braiding to create an entangled state of anyons encoding three logical qubits. Our work provides new insights into non-Abelian braiding and - through the future inclusion of error correction to achieve topological protection - could open a path toward fault-tolerant quantum computing.
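
The claim that braiding rotates the degenerate fusion space can be checked directly in a standard two-dimensional representation of the braid group for four Ising anyons. This is textbook algebra under one common phase convention, not the processor's actual gate sequence.

```python
import numpy as np

# Two-dimensional representation of braiding for four Ising anyons
# (one fusion-space qubit), in a common phase convention:
#   sigma_1 = e^{-i*pi/8} * diag(1, i),  sigma_2 = F @ sigma_1 @ F,
# where F is the Hadamard-like Ising F-matrix.
F = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
sigma1 = np.exp(-1j * np.pi / 8) * np.diag([1.0, 1.0j])
sigma2 = F @ sigma1 @ F

# The generators satisfy the braid (Yang-Baxter) relation ...
braid_ok = np.allclose(sigma1 @ sigma2 @ sigma1, sigma2 @ sigma1 @ sigma2)
# ... yet do not commute: exchanging different pairs rotates the
# degenerate fusion space differently, i.e. non-Abelian statistics.
non_abelian = not np.allclose(sigma1 @ sigma2, sigma2 @ sigma1)
```

For fermions or bosons the analogue of each sigma would be a scalar phase, so all exchanges would commute; the non-commutativity here is exactly what makes braiding usable as a quantum gate.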

    Suppressing quantum errors by scaling a surface code logical qubit

    Practical quantum computing will require error rates that are well below what is achievable with physical qubits. Quantum error correction offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, where increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low for logical performance to improve with increasing code size. Here, we report the measurement of logical qubit performance scaling across multiple code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, both in terms of logical error probability over 25 cycles and logical error per cycle (2.914% ± 0.016% compared to 3.028% ± 0.023%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10⁻⁶ logical error per round floor set by a single high-energy event (1.6 × 10⁻⁷ when excluding this event). We are able to accurately model our experiment, and from this model we can extract error budgets that highlight the biggest challenges for future systems. These results mark the first experimental demonstration where quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation.
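
A common way to relate per-cycle and multi-cycle logical error figures is the exponential fidelity-decay model P(t) = ½(1 − (1 − 2ε)^t). The sketch below assumes that model, not the paper's exact fitting procedure.

```python
def cumulative_error(eps_cycle, t):
    """Logical error probability after t cycles, assuming independent
    per-cycle errors: P(t) = 0.5 * (1 - (1 - 2*eps)**t)."""
    return 0.5 * (1.0 - (1.0 - 2.0 * eps_cycle) ** t)

def error_per_cycle(p_total, t):
    """Invert the same model to recover the per-cycle rate."""
    return 0.5 * (1.0 - (1.0 - 2.0 * p_total) ** (1.0 / t))

# Under this model, a per-cycle rate of 2.914% corresponds to roughly
# a 39% logical error probability after 25 cycles.
p25 = cumulative_error(0.02914, 25)
```

The (1 − 2ε) form accounts for the fact that an even number of logical flips cancels, which is why the cumulative probability saturates at ½ rather than 1.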

    SYMBA: An end-to-end VLBI synthetic data generation pipeline: Simulating Event Horizon Telescope observations of M 87

    Context. Realistic synthetic observations of theoretical source models are essential for our understanding of real observational data. Using synthetic data, one can verify the extent to which source parameters can be recovered and evaluate how various data corruption effects can be calibrated. Such studies are most important when proposing observations of new sources, when characterizing the capabilities of new or upgraded instruments, and when verifying model-based theoretical predictions in a direct comparison with observational data. Aims. We present the SYnthetic Measurement creator for long Baseline Arrays (SYMBA), a novel synthetic data generation pipeline for Very Long Baseline Interferometry (VLBI) observations. SYMBA takes into account several realistic atmospheric, instrumental, and calibration effects. Methods. We used SYMBA to create synthetic observations for the Event Horizon Telescope (EHT), a millimetre VLBI array, which has recently captured the first image of a black hole shadow. After testing SYMBA with simple source and corruption models, we studied the importance of including all corruption and calibration effects, as compared to adding thermal noise only. Using synthetic data based on two example general relativistic magnetohydrodynamics (GRMHD) model images of M 87, we performed case studies to assess the image quality that can be obtained with the current and future EHT array for different weather conditions. Results. Our synthetic observations show that the effects of atmospheric and instrumental corruptions on the measured visibilities are significant. Despite these effects, we demonstrate how the overall structure of our GRMHD source models can be recovered robustly with the EHT2017 array after performing calibration steps, which include fringe fitting, a priori amplitude and network calibration, and self-calibration. With the planned addition of new stations to the EHT array in the coming years, images could be reconstructed with higher angular resolution and dynamic range. In our case study, these improvements allowed for a distinction between a thermal and a non-thermal GRMHD model based on salient features in reconstructed images.
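
The thermal-noise-only baseline mentioned above follows the standard radiometer equation for a baseline between two stations, σ = √(SEFD₁·SEFD₂ / (2Δν t)) / η. The station SEFDs, bandwidth, integration time, and efficiency below are illustrative assumptions, not actual EHT values.

```python
import numpy as np

def thermal_sigma(sefd1, sefd2, bandwidth_hz, t_int_s, eta=0.88):
    """Per-visibility thermal noise (Jy) from the radiometer equation."""
    return np.sqrt(sefd1 * sefd2 / (2.0 * bandwidth_hz * t_int_s)) / eta

def add_thermal_noise(vis, sigma, rng):
    """Corrupt complex model visibilities with circular Gaussian noise."""
    noise = rng.normal(0.0, sigma, vis.shape) + 1j * rng.normal(0.0, sigma, vis.shape)
    return vis + noise

# Illustrative numbers only: 5000 Jy and 10000 Jy SEFDs,
# 2 GHz bandwidth, 10 s integrations.
rng = np.random.default_rng(42)
sigma = thermal_sigma(5000.0, 10000.0, 2e9, 10.0)
vis = np.full(1000, 1.0 + 0.0j)  # flat 1 Jy model visibilities
noisy = add_thermal_noise(vis, sigma, rng)
```

A full pipeline layers atmospheric opacity, phase turbulence, and gain errors on top of this floor, which is why calibration steps such as fringe fitting and self-calibration matter so much for the recovered image.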