300 research outputs found

    Is there a Moore's law for quantum computing?

    According to common wisdom, many technologies progress along some exponential law like the empirical Moore's law, which has held for over half a century for the growth in the number of transistors per chip. As a technology still in the making, with many potential promises, quantum computing is supposed to follow the pack and grow inexorably to maturity. The Holy Grail in that domain is a large quantum computer with thousands of error-corrected logical qubits, each made of thousands, if not more, physical qubits. These would enable molecular simulations as well as the factoring of 2048-bit RSA keys, among other use cases drawn from the book of classically intractable computing problems. How far are we from this? Less than 15 years, according to many predictions. We will see in this paper that Moore's empirical law cannot easily be translated into an equivalent for quantum computing. Qubits have various figures of merit that will not progress magically thanks to some new manufacturing capability. However, some equivalents of Moore's law may be at play inside and outside the quantum realm, for example with quantum computing's enabling technologies, cryogenics and control electronics. Algorithms, software tools and engineering also play a key role as enablers of quantum computing progress. While much of quantum computing's future depends on qubit fidelities, these are progressing rather slowly, particularly at scale. We will finally see that other figures of merit will come into play and potentially change the landscape, such as the quality of computed results and the energetics of quantum computing. Although scientific and technological in nature, this inventory has broad business implications for investment, education and cybersecurity related decision-making processes. Comment: 32 pages, 24 figures
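
    As a rough illustration of the exponential scaling the paper questions, the sketch below projects a quantity forward under an assumed fixed doubling period; the starting count, the two-year doubling period and the Python helper are hypothetical placeholders for illustration, not figures or code from the paper.

        # Moore's-law-style projection: a quantity that doubles every fixed period
        # grows exponentially. All numbers below are illustrative assumptions.

        def exponential_projection(start_count: float, doubling_period_years: float, years: float) -> float:
            """Project a count forward assuming it doubles every doubling_period_years."""
            return start_count * 2 ** (years / doubling_period_years)

        if __name__ == "__main__":
            # Example: 1,000 physical qubits doubling every 2 years for 15 years
            # would give roughly 181,000 qubits under a naive extrapolation.
            print(round(exponential_projection(1_000, 2.0, 15)))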

    A gift from Pandora's box: The software crisis.

    A Physical Unclonable Function Based on Inter-Metal Layer Resistance Variations and an Evaluation of its Temperature and Voltage Stability

    Keying material for encryption is stored as digital bitstrings in non-volatile memory (NVM) on FPGAs and ASICs in current technologies. However, secrets stored this way are not secure against a determined adversary, who can use probing attacks to steal the secret. Physical Unclonable Functions (PUFs) have emerged as an alternative. PUFs leverage random manufacturing variations as the source of entropy for generating random bitstrings, and incorporate an on-chip infrastructure for measuring and digitizing the corresponding variations in key electrical parameters, such as delay or voltage. PUFs are designed to reproduce a bitstring on demand and therefore eliminate the need for on-chip storage. In this dissertation, I propose a PUF that measures resistance variations in the inter-metal layers that define the power grid of the chip, and evaluate its temperature and voltage stability. First, I introduce two implementations of a power grid-based PUF (PG-PUF). Then, I analyze the quality of bitstrings generated, without considering environmental variations, from PG-PUFs that leverage resistance variations in: 1) the power grid metal wires in 60 copies of a 90 nm chip and 2) the power grid metal wires of 58 copies of a 65 nm chip. Next, I carry out a series of experiments on a set of 63 chips in IBM's 90 nm technology at 9 temperature-voltage (TV) corners, i.e., all combinations of 3 temperatures (-40°C, 25°C and 85°C) and 3 voltages (nominal and +/-10% of the nominal supply voltage). The randomness, uniqueness and stability characteristics of bitstrings generated from PG-PUFs are evaluated. The stability of the PG-PUF and an on-chip voltage-to-digital converter (VDC) are also evaluated at the 9 temperature-voltage corners. I introduce several techniques that have not been previously described, including a mechanism to eliminate voltage trends or 'bias' in the power grid voltage measurements, as well as a voltage threshold, Triple-Module-Redundancy (TMR) and majority voting scheme to identify and exclude unstable bits.
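
    As a rough illustration of the majority-voting idea mentioned at the end of this abstract, the sketch below majority-votes each bit position across repeated readouts of a PUF response and flags positions that do not agree across all readouts as unstable; the readout values, the agreement threshold and the helper function are illustrative assumptions, not the dissertation's actual scheme.

        # Minimal sketch: majority voting over repeated PUF readouts to separate
        # stable bit positions from unstable ones. Threshold and data are assumed.

        from collections import Counter
        from typing import List, Tuple

        def majority_vote(readouts: List[List[int]], stability_threshold: float = 1.0) -> Tuple[List[int], List[bool]]:
            """Return the majority bit per position and a per-position stability flag."""
            n_reads = len(readouts)
            bits, stable = [], []
            for position in zip(*readouts):                    # iterate over bit positions
                value, count = Counter(position).most_common(1)[0]
                bits.append(value)
                stable.append(count / n_reads >= stability_threshold)
            return bits, stable

        # Three readouts of a 4-bit response; bit 2 flips once and is flagged unstable.
        reads = [[1, 0, 1, 1],
                 [1, 0, 0, 1],
                 [1, 0, 1, 1]]
        bits, stable = majority_vote(reads)
        print(bits)    # [1, 0, 1, 1]
        print(stable)  # [True, True, False, True]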

    The ingenuity of common workmen: and the invention of the computer

    Since World War II, state support for scientific research has been assumed crucial to technological and economic progress, and governments accordingly spent tremendous sums to that end. Nothing epitomizes the alleged fruits of that involvement better than the electronic digital computer. The first such computer has been widely reputed to be the ENIAC, financed by the U.S. Army for the war but finished afterwards. Vastly improved computers followed, initially paid for in good share by the Federal Government of the United States, but with the private sector then dominating, both in development and use, and computers are of major significance. Despite the supposed success of publicly supported science, the evidence is that computers would have evolved much the same without it, but at less expense. Indeed, the foundations of modern computer theory and technology were articulated before World War II, both as a tool of applied mathematics and for information processing, and the computer was itself on the cusp of reality. Contrary to popular understanding, the ENIAC actually represented a movement backwards and a dead end. Rather, modern computation derived more directly from, for example, the prewar work of John Vincent Atanasoff and Clifford Berry, a physics professor and graduate student, respectively, at Iowa State College (now University) in Ames, Iowa. They built the Atanasoff Berry Computer (ABC), which, although special-purpose and inexpensive, heralded the efficient and elegant design of modern computers. Moreover, while no one foresaw commercialization of computers based on the ungainly and costly ENIAC, the commercial possibilities of the ABC were immediately evident, although unrealized due to the war. Evidence indicates, furthermore, that the private sector was willing and able to develop computers beyond the ABC, up to the most sophisticated machines, and could have done so more effectively than government. A full and inclusive history of computers suggests that Adam Smith, the eighteenth-century Scottish philosopher, had it right: he believed that minimal and aloof government best served society, and that the inherent genius of citizens was itself enough to ensure the general prosperity.

    Characterization and mitigation of process variation in digital circuits and systems

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 155-166). Process variation threatens to negate a whole generation of scaling in advanced process technologies due to performance and power spreads of greater than 30-50%. Mitigating this impact requires a thorough understanding of the variation sources, magnitudes and spatial components at the device, circuit and architectural levels. This thesis explores the impacts of variation at each of these levels and evaluates techniques to alleviate them in the context of digital circuits and systems. At the device level, we propose isolation and measurement of variation in the intrinsic threshold voltage of a MOSFET using sub-threshold leakage currents. Analysis of the measured data, from a test chip implemented in a 0.18 µm CMOS process, indicates that variation in MOSFET threshold voltage is a truly random process dependent only on device dimensions. Further decomposition of the observed variation reveals no systematic within-die variation components nor any spatial correlation. A second test chip, capable of characterizing spatial variation in digital circuits, is developed and implemented in a 90 nm triple-well CMOS process. Measured variation results show that the within-die component of variation is small at high voltages but becomes an increasing fraction of the total variation as the power-supply voltage decreases. Once again, the data shows no evidence of within-die spatial correlation and only weak systematic components. Evaluation of adaptive body-biasing and voltage scaling as variation mitigation techniques shows that voltage scaling is more effective at modifying performance, with reduced impact on idle power compared to body-biasing. Finally, the addition of power-supply voltages in a massively parallel multicore processor is explored to reduce the energy required to cope with process variation. An analytic optimization framework is developed and analyzed; using a custom simulation methodology, the total energy of a hypothetical 1K-core processor based on the RAW core is reduced by 6-16% with the addition of only a single voltage. Analysis of yield versus required energy demonstrates that a combination of disabling poor-performing cores and additional power-supply voltages results in an optimal trade-off between performance and energy. By Nigel Anthony Drego. Ph.D.
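
    As a rough illustration of the decomposition described above (separating a systematic within-die trend from random variation), the sketch below fits a plane over die coordinates to synthetic threshold-voltage data and treats the residual as the random component; the data, the linear trend model and the NumPy workflow are assumptions for illustration, not the thesis's measurements or methodology.

        # Sketch: split simulated within-die threshold-voltage data into a fitted
        # spatial trend (systematic component) and a residual (random component).

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic 10x10 die: a weak linear trend plus Gaussian noise (volts).
        x, y = np.meshgrid(np.arange(10), np.arange(10))
        x, y = x.ravel(), y.ravel()
        vth = 0.45 + 0.001 * x - 0.0005 * y + rng.normal(0, 0.005, x.size)

        # Least-squares plane fit vth ~ a*x + b*y + c gives the systematic part.
        A = np.column_stack([x, y, np.ones_like(x, dtype=float)])
        coeffs, *_ = np.linalg.lstsq(A, vth, rcond=None)
        residual = vth - A @ coeffs                      # the random part

        print("fitted plane coefficients:", np.round(coeffs, 4))
        print("residual std (random variation):", round(float(residual.std()), 4))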

    Compilation Optimizations to Enhance Resilience of Big Data Programs and Quantum Processors

    Modern computers can experience a variety of transient errors caused by the surrounding environment, known as soft faults. Although the frequency of these faults is low enough not to be noticeable on personal computers, they become a considerable concern in large-scale distributed computations or in systems operating in more vulnerable environments such as satellites. These faults occur as a bit flip of some value in a register, operation, or memory during execution. They surface as program crashes, hangs, or silent data corruption (SDC), each of which can waste time, money, and resources. Hardware methods, such as shielding or error-correcting memory (ECM), exist, though they can be difficult to implement, expensive, and may be limited to protecting against errors only in specific locations. Researchers have been exploring software detection and correction methods as an alternative, commonly trading overhead in either execution time or memory usage to protect against faults. Quantum computers, a relatively recent advancement in computing technology, experience similar errors on a much more severe scale. The errors are more frequent, costly, and difficult to detect and correct. Error correction algorithms like Shor's code promise to completely remove errors, but they cannot be implemented on current noisy intermediate-scale quantum (NISQ) systems due to the low number of available qubits. Until the physical systems become large enough to support error correction, researchers have instead been studying other methods to reduce and compensate for errors. In this work, we present two methods for improving the resilience of classical processes, both single- and multi-threaded. We then introduce quantum computing and compare the nature of its errors and correction methods to the previous classical methods. We further discuss two designs for improving the compilation of quantum circuits. One method, focused on quantum neural networks (QNNs), takes advantage of partial compilation to avoid recompiling the entire circuit each time. The other is a new approach to compiling quantum circuits using graph neural networks (GNNs) to improve the resilience of quantum circuits and increase fidelity. By using GNNs with reinforcement learning, we can train a compiler to provide improved qubit allocation that increases the success rate of quantum circuits.
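
    As a generic illustration of the software redundancy family of detection methods described above, and not the specific compiler techniques developed in this work, the sketch below runs a computation twice and treats a mismatch as possible silent data corruption; the function name and the two-copy default are assumptions for illustration.

        # Generic redundancy check: run a computation multiple times and flag a
        # mismatch as possible silent data corruption (SDC). Illustrative only.

        from typing import Callable, TypeVar

        T = TypeVar("T")

        def run_with_redundancy(compute: Callable[[], T], copies: int = 2) -> T:
            """Run compute() several times; return the result only if all copies agree."""
            results = [compute() for _ in range(copies)]
            if any(r != results[0] for r in results[1:]):
                raise RuntimeError("redundant copies disagree: possible silent data corruption")
            return results[0]

        # Example with a deterministic computation.
        print(run_with_redundancy(lambda: sum(i * i for i in range(1_000))))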

    Space station data system analysis/architecture study. Task 2: Options development, DR-5. Volume 3: Programmatic options

    Task 2 of the Space Station Data System (SSDS) Analysis/Architecture Study is the development of an information base that will support the conduct of trade studies and provide sufficient data to make design/programmatic decisions. This volume identifies the preferred options in the programmatic category and characterizes these options with respect to performance attributes, constraints, costs, and risks. The programmatic category covers the methods used to administer and manage the development, operation and maintenance of the SSDS. The specific areas discussed include standardization/commonality; systems management; and systems development, including hardware procurement, software development, and system integration, test and verification.

    Digital document imaging systems: An overview and guide

    This document is an aid to NASA managers in planning the selection of a Digital Document Imaging System (DDIS) as a possible solution for document information processing and storage. Intended to serve as a manager's guide, it contains basic information on digital imaging systems, technology, and equipment standards, on issues of interoperability and interconnectivity, and on selecting appropriate imaging equipment based upon well-defined needs.

    Growth and change in the computer industry: impacts on plant location and labor demand

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 1985. Microfiche copy available in Archives and Rotch. Bibliography: leaves 363-376. By Susan Hasler Bartlett. Ph.D.