388 research outputs found

    Crystallization of the Ca2+-ATPase of Sarcoplasmic Reticulum by Calcium and Lanthanide Ions

    Two-dimensional crystalline arrays of Ca2+-ATPase molecules develop in sarcoplasmic reticulum vesicles exposed to Ca2+ or lanthanide ions. The Ca2+- or lanthanide-induced crystals are presumed to represent the E1 conformation of the Ca2+-ATPase, and their crystal form is clearly different from that of the earlier described E2 crystals induced by Na3VO4 in the presence of ethylene glycol bis(beta-aminoethyl ether)-N,N,N',N'-tetraacetic acid (EGTA) (Taylor, K. A., Dux, L., and Martonosi, A. (1984) J. Mol. Biol. 174, 193-204). Analysis of the crystalline arrays by negative staining or freeze-fracture electron microscopy reveals obliquely oriented rows of particles corresponding to individual Ca2+-ATPase molecules. Computer analysis of the negatively stained, lanthanide-induced crystalline Ca2+-ATPase arrays shows that the molecules are arranged in a P1 lattice. The pear-shaped profiles of Ca2+-ATPase molecules seen in projection in the density maps are similar to those seen in vanadate-induced crystals. The space group and unit cell dimensions of the E1 crystals are consistent with Ca2+-ATPase monomers as structural units, while the vanadate-induced E2 crystals form by lateral aggregation of chains of Ca2+-ATPase dimers. The transition between the E1 and E2 conformations may involve a shift in the monomer-oligomer equilibrium of the Ca2+-ATPase. The formation of E1 crystals by PrCl3 is promoted by an inside-negative membrane potential, presumably through stabilization of the E1 conformation of the enzyme. Cleavage of the Ca2+-ATPase by trypsin into two major fragments (A and B) did not interfere with Ca2+- or Pr3+-induced crystallization.

    Architectural Support for Optimizing Huge Page Selection Within the OS

    © 2023 Copyright held by the owner/author(s). This document is made available under the CC-BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/). It is the Accepted version of a Published Work that appeared in final form in the 56th ACM/IEEE International Symposium on Microarchitecture (MICRO), Toronto, Canada. To access the final edited and published work, see https://doi.org/10.1145/3613424.3614296

    Irregular, memory-intensive applications often incur high translation lookaside buffer (TLB) miss rates that result in significant address translation overheads. Employing huge pages is an effective way to reduce these overheads; however, in real systems the number of available huge pages can be limited when system memory is nearly full and/or fragmented. Thus, huge pages must be used selectively to back application memory. This work demonstrates that choosing the memory regions that incur the most TLB misses for huge page promotion best reduces address translation overheads. We call these regions High reUse TLB-sensitive data (HUBs). Unlike prior work, which relies on expensive per-page software counters to identify promotion regions, we propose new architectural support to identify these regions dynamically at application runtime. We propose a promotion candidate cache (PCC) that identifies HUB candidates based on hardware page table walks after a last-level TLB miss. This small, fixed-size structure tracks huge page-aligned regions (consisting of base pages), ranks them based on observed page table walk frequency, and keeps only the most frequently accessed ones. Evaluated on applications of varying memory intensity, our approach successfully identifies the application pages incurring the highest address translation overheads. With the help of a PCC, the OS needs to promote only 4% of the application footprint to achieve more than 75% of the peak achievable performance, yielding 1.19-1.33× speedups over 4 KB base pages alone. In real systems, where memory is typically fragmented, the PCC outperforms Linux's page promotion policy by 14% (when 50% of total memory is fragmented) and 16% (when 90% of total memory is fragmented), respectively.
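
    The mechanism above lends itself to a short software model. Below is a minimal Python sketch of the promotion candidate cache idea: a small, fixed-size table that counts hardware page table walks per huge-page-aligned region and evicts the least frequently walked entry when full. The 2 MiB region size, the capacity, the eviction policy, and all names here are illustrative assumptions, not the paper's hardware design.

```python
from collections import Counter

HUGE_PAGE_SIZE = 2 * 1024 * 1024  # assumed 2 MiB huge pages (x86-64 style)

class PromotionCandidateCache:
    """Software model of a PCC: a small, fixed-size table that ranks
    huge-page-aligned regions by observed page-table-walk frequency.
    Capacity and eviction policy are illustrative assumptions."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.walk_counts = Counter()  # region base address -> walk count

    def record_walk(self, vaddr):
        """Invoked once per hardware page table walk, i.e., after a
        last-level TLB miss on virtual address vaddr."""
        region = vaddr & ~(HUGE_PAGE_SIZE - 1)  # huge-page-align the address
        if region in self.walk_counts or len(self.walk_counts) < self.capacity:
            self.walk_counts[region] += 1
        else:
            # Table full: evict the least frequently walked region.
            victim = min(self.walk_counts, key=self.walk_counts.get)
            del self.walk_counts[victim]
            self.walk_counts[region] = 1

    def top_candidates(self, n):
        """Regions the OS should consider first for huge page promotion."""
        return [region for region, _ in self.walk_counts.most_common(n)]
```

    In this model, every last-level TLB miss feeds record_walk, and the OS periodically reads top_candidates to pick its next promotion targets, promoting hot regions first rather than scanning per-page software counters.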

    Sarcoplasmic Reticulum


    An analytical error model for quantum computer simulation

    Quantum computers (QCs) must implement quantum error correcting codes (QECCs) to protect their logical qubits from errors, and modeling the effectiveness of QECCs on QCs is an important problem for evaluating QC architectures. Previously developed Monte Carlo (MC) error models may take days or weeks of execution to produce an accurate result due to their random sampling approach. We present an alternative analytical error model that generates, over the course of executing the quantum program, a probability tree of the QC's error states. By calculating the fidelity of the quantum program directly, this error model has the potential for enormous speedups over the MC model when applied to small yet useful problem sizes. We observe a speedup on the order of 1,000× when accuracy is required, and we evaluate the scaling properties of this new analytical error model.
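
    To illustrate the contrast with Monte Carlo sampling, here is a toy Python sketch of the probability-tree idea: propagate an exact probability distribution over error states through each gate and read the fidelity off directly, instead of averaging many random trials. Tracking only an accumulated error count, together with the error rate and pruning threshold used here, is a simplifying assumption; the paper's model tracks the QC's actual error states under a QECC.

```python
from collections import defaultdict

def analytical_fidelity(num_gates, p_error, prune_below=1e-12):
    """Propagate a probability tree over error states through a circuit.
    The 'state' here is simply the number of accumulated errors; a real
    model would distinguish correctable from uncorrectable syndromes."""
    states = {0: 1.0}  # error count -> probability
    for _ in range(num_gates):
        nxt = defaultdict(float)
        for errors, prob in states.items():
            nxt[errors] += prob * (1.0 - p_error)  # gate succeeds
            nxt[errors + 1] += prob * p_error      # gate injects an error
        # Prune negligible branches to keep the tree tractable.
        states = {e: p for e, p in nxt.items() if p >= prune_below}
    return states.get(0, 0.0)  # fidelity: probability of zero errors

# One deterministic pass replaces many thousands of Monte Carlo trials:
print(analytical_fidelity(num_gates=100, p_error=1e-3))
```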

    Architectures for Multinode Superconducting Quantum Computers

    Many proposals to scale quantum technology rely on modular or distributed designs in which individual quantum processors, called nodes, are linked together to form one large multinode quantum computer (MNQC). One scalable method to construct an MNQC is to use superconducting quantum systems with optical interconnects. However, a limiting factor for these machines will be internode gates, which may be two to three orders of magnitude noisier and slower than local operations. Surmounting the limitations of internode gates will require a range of techniques, including improvements in entanglement generation, the use of entanglement distillation, and optimized software and compilers; it remains unclear how improvements to these components interact to affect overall system performance, what performance is required from each, or even how to quantify the performance of each. In this paper, we employ a 'co-design'-inspired approach to quantify overall MNQC performance in terms of hardware models of internode links, entanglement distillation, and local architecture. In the case of superconducting MNQCs with microwave-to-optical links, we uncover a tradeoff between entanglement generation and distillation that threatens to degrade performance. We show how to navigate this tradeoff, lay out how compilers should optimize between local and internode gates, and discuss when noisy quantum links have an advantage over purely classical links. Using these results, we introduce a roadmap for the realization of early MNQCs, which illustrates potential improvements to the hardware and software of MNQCs and outlines criteria for evaluating the landscape, from progress in entanglement generation and quantum memory to dedicated algorithms such as distributed quantum phase estimation. While we focus on superconducting devices with optical interconnects, our approach is general across MNQC implementations.

    Comment: 23 pages, white paper
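
    To make the generation/distillation tradeoff concrete, the Python sketch below applies the standard BBPSSW recurrence for Werner pairs: each distillation round raises pair fidelity but consumes two pairs and succeeds only probabilistically, so it at least halves the usable entanglement rate. The starting fidelity and rate are illustrative assumptions, not measured values from the paper.

```python
def bbpssw_round(F):
    """One BBPSSW distillation round on two Werner pairs of fidelity F.
    Returns (output fidelity, success probability); consumes two pairs."""
    bad = (1.0 - F) / 3.0
    p_succ = F**2 + 2.0 * F * bad + 5.0 * bad**2
    return (F**2 + bad**2) / p_succ, p_succ

def link_quality(F_raw, rate_raw, rounds):
    """Trade raw entangled-pair rate for fidelity across distillation
    rounds. F_raw and rate_raw (pairs/s) are illustrative assumptions."""
    F, rate = F_raw, rate_raw
    for _ in range(rounds):
        F, p_succ = bbpssw_round(F)
        rate = (rate / 2.0) * p_succ  # two inputs per output, may fail
    return F, rate

# Sweep distillation depth to see where extra rounds stop paying off:
for r in range(4):
    F, rate = link_quality(F_raw=0.9, rate_raw=1e4, rounds=r)
    print(f"{r} rounds: fidelity={F:.4f}, rate={rate:.0f} pairs/s")
```

    Past some depth, the fidelity gain per round shrinks while the pair rate keeps halving, which is one face of the generation/distillation tradeoff the paper quantifies.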