
    Braiding fractional quantum Hall quasiholes on a superconducting quantum processor

    Direct experimental detection of anyonic exchange statistics in fractional quantum Hall systems by braiding the excitations and measuring the wave-function phase is an enormous challenge. Here, we use a small, noisy quantum computer to emulate direct braiding within the framework of a simplified model applicable to a thin cylinder geometry and measure the topological phase. Our algorithm first prepares the ground state with two quasiholes. It then applies a unitary operation controlled by an ancilla, corresponding to a sequence of adiabatic evolutions that takes one quasihole around the other. We finally extract the phase of the wave function from measuring the ancilla with a compound error mitigation strategy. Our results open a new avenue for studying braiding statistics in fractional Hall states.
    Comment: 9 pages, 8 figures
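    The ancilla-controlled phase measurement described above is, at its core, a Hadamard test: the ancilla is prepared in a superposition, a controlled unitary entangles it with the system, and the ancilla's X and Y expectation values give the real and imaginary parts of the overlap, whose argument is the measured phase. A minimal NumPy sketch of that idea, using a random state and a random diagonal unitary as stand-ins for the quasihole ground state and the braiding evolution (not the paper's actual circuit, model, or error mitigation):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand-ins: a random 2-qubit "system state" |psi> and a random
# diagonal unitary U playing the role of the adiabatic braiding evolution.
dim = 4
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
U = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, size=dim)))

# State after H on the ancilla and controlled-U:
# (|0>|psi> + |1>U|psi>) / sqrt(2); the ancilla is the first tensor factor.
state = np.concatenate([psi, U @ psi]) / np.sqrt(2)

# Ancilla <X> and <Y> give Re and Im of the overlap <psi|U|psi>.
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
I = np.eye(dim)
exp_x = np.vdot(state, np.kron(X, I) @ state).real
exp_y = np.vdot(state, np.kron(Y, I) @ state).real

# The argument of (<X> + i<Y>) is the phase picked up under U.
phase = np.angle(exp_x + 1j * exp_y)
```

    On hardware the two expectation values come from repeated ancilla measurements in the X and Y bases rather than from the exact statevector used here.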

    Isolated Majorana mode in a quantum computer from a duality twist

    Experimental investigation of the interplay of dualities, generalized symmetries, and topological defects is an important challenge in condensed matter physics and quantum materials. A simple model exhibiting this physics is the transverse-field Ising model, which can host a noninvertible topological defect that performs the Kramers-Wannier duality transformation. When acting on one point in space, this duality defect imposes the duality twisted boundary condition and binds a single Majorana zero mode. This Majorana zero mode is unusual as it lacks localized partners and has an infinite lifetime, even in finite systems. Using Floquet driving of a closed Ising chain with a duality defect, we generate this Majorana zero mode in a digital quantum computer. We detect the mode by measuring its associated persistent autocorrelation function using an efficient sampling protocol and a compound strategy for error mitigation. We also show that the Majorana zero mode resides at the domain wall between two regions related by a Kramers-Wannier duality. Finally, we highlight the robustness of the isolated Majorana zero mode to integrability and symmetry-breaking perturbations. Our findings offer an experimental approach to investigating exotic topological defects in Floquet systems.
    Comment: 6 pages, 5 figures, 2 pages of supplemental material

    Best practices for quantum error mitigation with digital zero-noise extrapolation

    Digital zero-noise extrapolation (dZNE) has emerged as a common approach for quantum error mitigation (QEM) due to its conceptual simplicity, accessibility, and resource efficiency. In practice, however, properly applying dZNE to extend the computational reach of noisy quantum processors is rife with subtleties. Here, based on literature review and original experiments on noisy simulators and real quantum hardware, we define best practices for QEM with dZNE for each step of the workflow, including noise amplification, execution on the quantum device, extrapolation to the zero-noise limit, and composition with other QEM methods. We anticipate that this effort to establish best practices for dZNE will be extended to other QEM methods, leading to more reproducible and rigorous calculations on noisy quantum hardware.
    Comment: 10 pages, 11 figures, submitted to IEEE Quantum Week 2023
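    The extrapolation step of the workflow above can be sketched in a few lines: measure an observable at several amplified noise levels (e.g. via unitary folding), fit a model in the scale factor, and evaluate the fit at zero. The scale factors and expectation values below are invented placeholders, and the linear fit is only the simplest of the extrapolation choices the paper surveys:

```python
import numpy as np

# Hypothetical data: expectation values of some observable measured at
# noise scale factors 1, 3, 5 (odd factors are typical for unitary folding).
scale_factors = np.array([1.0, 3.0, 5.0])
noisy_values = np.array([0.80, 0.55, 0.38])  # placeholder measurements

# Degree-1 polynomial (linear) extrapolation to the zero-noise limit.
coeffs = np.polyfit(scale_factors, noisy_values, deg=1)
zne_estimate = np.polyval(coeffs, 0.0)
print(f"zero-noise estimate: {zne_estimate:.3f}")
```

    In practice one would compare linear, polynomial, and exponential fits and check sensitivity to the chosen scale factors, which is exactly the kind of subtlety the best-practices discussion addresses.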

    Assessing the Ability of Self-Attention Networks to Learn Word Order

    Self-attention networks (SANs) have attracted substantial interest due to their high parallelization and strong performance on a variety of NLP tasks, e.g. machine translation. Because SANs lack the recurrence structure of recurrent neural networks (RNNs), they are presumed to be weak at learning the positional information of words for sequence modeling. However, this speculation has neither been empirically confirmed, nor have explanations been explored for SANs' strong performance on machine translation despite this supposed lack of positional information. To this end, we propose a novel word reordering detection task to quantify how well word order information is learned by SANs and RNNs. Specifically, we randomly move one word to another position and examine whether a trained model can detect both the original and inserted positions. Experimental results reveal that: 1) SANs trained on word reordering detection indeed have difficulty learning positional information, even with position embeddings; and 2) SANs trained on machine translation learn better positional information than their RNN counterparts, with position embeddings playing a critical role. Although a recurrence structure makes models more universally effective at learning word order, the learning objective matters more in downstream tasks such as machine translation.
    Comment: ACL 2019
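    The perturbation underlying the proposed word reordering detection task can be sketched as follows: pick one word at random, remove it, and reinsert it at a random position, recording both positions as the labels a model must predict. How the authors exactly encode the two positions is an assumption here; this is only the data-generation idea:

```python
import random

def make_reorder_example(tokens, rng=random):
    """Move one randomly chosen word to another position.

    Returns the perturbed sentence, the word's original position, and the
    position it was inserted at (an index into the perturbed sentence).
    """
    orig_pos = rng.randrange(len(tokens))
    word = tokens[orig_pos]
    rest = tokens[:orig_pos] + tokens[orig_pos + 1:]
    new_pos = rng.randrange(len(rest) + 1)
    perturbed = rest[:new_pos] + [word] + rest[new_pos:]
    return perturbed, orig_pos, new_pos

random.seed(0)
sent = "the quick brown fox jumps over the lazy dog".split()
perturbed, orig_pos, new_pos = make_reorder_example(sent)
```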

    Direct Intracellular Delivery of Cell Impermeable Probes of Protein Glycosylation Using Nanostraws

    Bioorthogonal chemistry is an effective tool for elucidating metabolic pathways and measuring cellular activity, yet its use is currently limited by the difficulty of getting probes past the cell membrane and into the cytoplasm, especially if more complex probes are desired. Here we present a simple and minimally perturbative technique to deliver functional probes of glycosylation into cells by using a nanostructured “nanostraw” delivery system. Nanostraws provide direct intracellular access to cells through fluid conduits that remain small enough to minimize cell perturbation. First, we demonstrate that our platform can deliver an unmodified azidosugar, N-azidoacetylmannosamine, into cells with similar effectiveness to a chemical modification strategy (peracetylation). We then show that the nanostraw platform enables direct delivery of an azidosugar modified with a charged uridine diphosphate group (UDP) that prevents intracellular penetration, thereby bypassing multiple enzymatic processing steps. By effectively removing the requirement for cell permeability from the probe, the nanostraws expand the toolbox of bioorthogonal probes that can be used to study biological processes on a single, easy-to-use platform.

    Context-Aware Self-Attention Networks

    Self-attention models have shown their flexibility in parallel computation and their effectiveness in modeling both long- and short-term dependencies. However, they calculate the dependencies between representations without considering contextual information, which has proven useful for modeling dependencies among neural representations in various natural language tasks. In this work, we focus on improving self-attention networks by capturing the richness of context. To maintain the simplicity and flexibility of self-attention networks, we propose to contextualize the transformations of the query and key layers, which are used to calculate the relevance between elements. Specifically, we leverage the internal representations that embed both global and deep contexts, thus avoiding reliance on external resources. Experimental results on WMT14 English-German and WMT17 Chinese-English translation tasks demonstrate the effectiveness and universality of the proposed methods. Furthermore, we conducted extensive analyses to quantify how the context vectors participate in the self-attention model.
    Comment: AAAI 2019
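    The idea of contextualizing the query and key transformations can be illustrated with a toy NumPy sketch. Here the global context is taken to be the mean of the layer input, projected and mixed into Q and K with a scalar gate; the parameter names and the simple gating form are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy dimensions and randomly initialized weights (illustrative only).
seq_len, d = 5, 8
H = rng.normal(size=(seq_len, d))            # layer input representations
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
U_q, U_k = (rng.normal(size=(d, d)) for _ in range(2))
lam = 0.5                                     # context gate (a scalar here)

Q, K, V = H @ W_q, H @ W_k, H @ W_v

# Global context: mean of the input representations, projected and mixed
# into the query and key transformations, broadcast over all positions.
c = H.mean(axis=0, keepdims=True)
Q_ctx = (1 - lam) * Q + lam * (c @ U_q)
K_ctx = (1 - lam) * K + lam * (c @ U_k)

# Standard scaled dot-product attention on the contextualized Q and K.
attn = softmax(Q_ctx @ K_ctx.T / np.sqrt(d))
out = attn @ V
```

    In the paper the gate is learned rather than fixed, and deeper-layer context is also considered; the sketch only shows where the context enters the computation.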