
    Magic-State Functional Units: Mapping and Scheduling Multi-Level Distillation Circuits for Fault-Tolerant Quantum Architectures

    Quantum computers have recently made great strides and are on a long-term path towards useful fault-tolerant computation. A dominant overhead in fault-tolerant quantum computation is the production of high-fidelity encoded qubits, called magic states, which enable reliable error-corrected computation. We present the first detailed designs of hardware functional units that implement space-time optimized magic-state factories for surface code error-corrected machines. Interactions among distant qubits require surface code braids (physical pathways on chip), which must be routed. Magic-state factories are circuits comprising a complex set of braids that is more difficult to route than the quantum circuits considered in previous work [1]. This paper explores the impact of scheduling techniques, such as gate reordering and qubit renaming, and we propose two novel mapping techniques: braid repulsion and dipole moment braid rotation. We combine these techniques with graph partitioning and community detection algorithms, and further introduce a stitching algorithm for mapping subgraphs onto a physical machine. Our results show a factor of 5.64 reduction in space-time volume compared to the best-known previous designs for magic-state factories. Comment: 13 pages, 10 figures

    Resource Optimized Quantum Architectures for Surface Code Implementations of Magic-State Distillation

    Quantum computers capable of solving classically intractable problems are under construction, and intermediate-scale devices are approaching completion. Current efforts to design large-scale devices require allocating immense resources to error correction, with the majority dedicated to the production of high-fidelity ancillary states known as magic states. Leading techniques focus on dedicating a large, contiguous region of the processor as a single "magic-state distillation factory" responsible for meeting the magic-state demands of applications. In this work we design and analyze a set of optimized factory architectural layouts that divide a single factory into spatially distributed factories located throughout the processor. We find that distributed factory architectures minimize the space-time volume overhead imposed by distillation. Additionally, we find that the number of distributed components in each optimal configuration is sensitive to application characteristics and underlying physical device error rates. More specifically, we find that the rate at which T-gates are demanded by an application has a significant impact on the optimal distillation architecture. We develop an optimization procedure that discovers the optimal number of factory distillation rounds and number of output magic states per factory, as well as an overall system architecture that interacts with the factories. This yields between a 10x and 20x resource reduction compared to commonly accepted single-factory designs. Performance is analyzed across representative application classes such as quantum simulation and quantum chemistry. Comment: 16 pages, 14 figures

    Optimized Surface Code Communication in Superconducting Quantum Computers

    Quantum computing (QC) is at the cusp of a revolution. Machines with 100 quantum bits (qubits) are anticipated to be operational by 2020 [googlemachine, gambetta2015building], and several-hundred-qubit machines are around the corner. Machines of this scale have the capacity to demonstrate quantum supremacy, the tipping point where QC is faster than the fastest classical alternative for a particular problem. Because error correction techniques will be central to QC and will be the most expensive component of quantum computation, choosing the lowest-overhead error correction scheme is critical to overall QC success. This paper evaluates two established quantum error correction codes, planar and double-defect surface codes, using a set of compilation, scheduling, and network simulation tools. In considering scalable methods for optimizing both codes, we do so in the context of a full microarchitectural and compiler analysis. Contrary to previous predictions, we find that the simpler planar codes are sometimes more favorable for implementation on superconducting quantum computers, especially under conditions of high communication congestion. Comment: 14 pages, 9 figures, The 50th Annual IEEE/ACM International Symposium on Microarchitecture

    The Social Impact of Corruption in Colombia

    In recent decades, corruption has come to be regarded as a great political disease. As we state in our problem statement, Colombia has maintained a culture of corruption at every level of society, setting aside the needs of its people. The damage this syndrome causes over time pulls the country deeper into the same whirlpool each day, leading to the loss of the values, principles, and dignity of the Colombian people.

    Perceptions about alcohol harm and alcohol-control strategies among people with high risk of alcohol consumption in Alberta, Canada and Queensland, Australia

    Objectives: To explore alcohol perceptions and their association with hazardous alcohol use in the populations of Alberta, Canada and Queensland, Australia. Methods: Data from 2500 participants of the 2013 Alberta Survey and the 2013 Queensland Social Survey were analyzed. Regression analyses were used to explore the association between alcohol perceptions and hazardous alcohol use. Results: Greater hazardous alcohol use was found among Queenslanders than among Albertans (p<0.001). Overall, people with hazardous alcohol use were less likely to believe that alcohol use contributes to health problems (odds ratio [OR], 0.46; 95% confidence interval [CI], 0.27 to 0.78; p<0.01) and to a higher risk of injuries (OR, 0.54; 95% CI, 0.33 to 0.90; p<0.05). Albertans with hazardous alcohol use were less likely to believe that alcohol contributes to health problems (OR, 0.48; 95% CI, 0.26 to 0.92; p<0.05) and were also less likely to choose a highly effective strategy as the best way for the government to reduce alcohol problems (OR, 0.63; 95% CI, 0.43 to 0.91; p=0.01). Queenslanders with hazardous alcohol use were less likely to believe that alcohol was a major contributor to injury (OR, 0.39; 95% CI, 0.20 to 0.77; p<0.01). Conclusions: Our results suggest that people with hazardous alcohol use tend to underestimate the negative effect of alcohol consumption on health and its contribution to injuries. In addition, Albertans with hazardous alcohol use were less in favor of strategies considered highly effective to reduce alcohol harm, probably because they perceive them as a potential threat to their own alcohol consumption. These findings represent valuable sources of information for local health authorities and policymakers when designing suitable strategies to target alcohol-related problems.
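    The odds ratios and 95% confidence intervals quoted above are standard epidemiological quantities. As a minimal sketch of how such figures are derived (using hypothetical counts, not the survey's actual data), a Wald-type OR and CI can be computed from a 2x2 table:

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and Wald 95% CI from a 2x2 table:
        a = hazardous drinkers holding the belief, b = hazardous drinkers not holding it,
        c = other respondents holding the belief,  d = other respondents not holding it."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # std. error of log(OR)
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi

    # Hypothetical counts chosen only to illustrate the calculation:
    or_, lo, hi = odds_ratio_ci(30, 70, 55, 45)
    ```

    In practice such ORs usually come from a logistic regression that adjusts for covariates, but the 2x2 form shows where the point estimate and interval come from.
    
    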

    Roundtable: Does All Human-Rights Funding Use a Human Rights-based Approach?

    In this session, presenters and attendees will discuss different dimensions of the question: does all human-rights funding use a human rights-based approach? In the U.S., grant strategies and decisions have historically been made by individuals and funders behind closed doors, with little transparency or accountability. Grant seekers, not to mention the public at large, have rarely had insight into how those decisions are made or any influence on the process. What criteria are funders using? To whom are they accountable? And how do they decide what and whom they are going to fund? Replacing traditional hierarchical models of funding, participatory grantmaking applies a human rights-based approach to how funding is determined and who makes the funding decisions. The emphasis is on the practice as well as the impact of the funds. In this participatory panel presentation, moderated by a staff person from the International Human Rights Funders Group (IHRFG), panelists and the audience will explore a range of models for transforming the relationship of social justice actors with funders.

    The Future of the German-Jewish Past: Memory and the Question of Antisemitism

    Germany’s acceptance of its direct responsibility for the Holocaust has strengthened its relationship with Israel and has led to a deep commitment to combat antisemitism and rebuild Jewish life in Germany. As we draw close to a time when there will be no more firsthand experience of the horrors of the Holocaust, there is great concern about what will happen when German responsibility turns into history. Will the present taboo against open antisemitism be lifted as collective memory fades? There are alarming signs of the rise of the far right, which includes blatantly antisemitic elements, already visible in public discourse. The evidence is unmistakable: overt antisemitism is dramatically increasing once more. The Future of the German-Jewish Past deals with the formidable challenges created by these developments. It is conceptualized to offer a variety of perspectives and views on the question of the future of the German-Jewish past. The volume addresses topics such as antisemitism, Holocaust memory, historiography, and political issues relating to the future relationship between Jews, Israel, and Germany. While the central focus of this volume is Germany, the implications go beyond the German-Jewish experience and relate to some of the broader challenges facing modern societies today.

    Mandated data archiving greatly improves access to research data

    The data underlying scientific papers should be accessible to researchers both now and in the future, but how best can we ensure that these data are available? Here we examine the effectiveness of four approaches to data archiving: no stated archiving policy, recommending (but not requiring) archiving, and two versions of mandating data deposition at acceptance. We control for differences between data types by trying to obtain data from papers that use a single, widespread population genetic analysis, STRUCTURE. At one extreme, we found that mandated data archiving policies that require the inclusion of a data availability statement in the manuscript improve the odds of finding the data online almost a thousand-fold compared to having no policy. However, archiving rates at journals with less stringent policies were only very slightly higher than those with no policy at all. We also assessed the effectiveness of asking for data directly from authors and obtained over half of the requested datasets, albeit with a delay of about 8 days and some disagreement with authors. Given the long-term benefits of data accessibility to the academic community, we believe that journal-based mandatory data archiving policies and mandatory data availability statements should be more widely adopted.
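    A thousand-fold improvement in odds is not the same as a thousand-fold improvement in probability. A small sketch with hypothetical recovery rates (chosen only to illustrate the scale of the claim, not the study's actual figures) makes the distinction concrete:

    ```python
    def odds(p):
        """Convert a probability p to odds p / (1 - p)."""
        return p / (1 - p)

    # Hypothetical: data found online for 50% of papers under a mandate,
    # but only 0.1% of papers with no policy.
    p_mandate, p_none = 0.50, 0.001
    odds_ratio = odds(p_mandate) / odds(p_none)  # roughly a thousand-fold odds gain
    ```

    Here the probability rises 500-fold while the odds rise about 1000-fold, which is why odds ratios from policies with very low baseline archiving rates can look dramatic.
    
    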

    Tile size selection for low-power tile-based architectures

    In this paper, we investigate the power implications of tile size selection for tile-based processors. We refer to this investigation as a tile granularity study. This is accomplished by distilling the architectural cost of tiles with different computational widths into a system metric we call the Granularity Indicator (GI). The GI is then compared against the communication exposed when algorithms are partitioned across multiple tiles. Through this comparison, the tile granularity that best fits a given set of algorithms can be determined, reducing the system power for that set of algorithms. When the GI analysis is applied to the Synchroscalar tile architecture [1], we find that Synchroscalar's already low power consumption can be further reduced by 14% when customized for execution of the 802.11a receiver. In addition, the GI can also be used to evaluate tile size when considering multiple applications simultaneously, providing a convenient platform for hardware-software co-design.