Hybrid Analog-Digital Co-Processing for Scientific Computation
In the past 10 years, computer architecture research has moved toward more heterogeneity and less adherence to conventional abstractions. Scientists and engineers hold an unshakable belief that computing holds keys to unlocking humanity's Grand Challenges. Acting on that belief, they have looked deeper into computer architecture to find specialized support for their applications. Likewise, computer architects have looked deeper into circuits and devices in search of untapped performance and efficiency. The lines between computer architecture layers---applications, algorithms, architectures, microarchitectures, circuits, and devices---have blurred. Against this backdrop, a menagerie of computer architectures is on the horizon: architectures that forgo basic assumptions about computer hardware and require new thinking about how such hardware supports problems and algorithms.
This thesis is about revisiting hybrid analog-digital computing in support of diverse modern workloads. Hybrid computing had extensive applications in early computing history, and has been revisited for small-scale applications in embedded systems. But architectural support for using hybrid computing in modern workloads, at scale and with high accuracy solutions, has been lacking.
I demonstrate solving a variety of scientific computing problems, including stochastic ODEs, partial differential equations, linear algebra, and nonlinear systems of equations, as case studies in hybrid computing. I solve these problems on a system of multiple prototype analog accelerator chips built by a team at Columbia University. On that team I made contributions toward programming the chips, building the digital interface, and validating the chips' functionality. The analog accelerator chip is intended for use in conjunction with a conventional digital host computer.
The appeal and motivation for using an analog accelerator are efficiency and performance, but the accelerator comes with limitations in accuracy and problem size that we have to work around.
The first problem is how to express problems in this unconventional computation model. Scientific computing phrases problems as differential equations and algebraic equations. Differential equations are a continuous view of the world, while algebraic equations are a discrete one. Prior work in analog computing focused mostly on differential equations; algebraic equations played only a minor role. The key to using the analog accelerator to support modern workloads on conventional computers is that these two viewpoints are interchangeable: the algebraic equations that underlie most workloads can be solved as differential equations, and differential equations are naturally solvable on the analog accelerator chip. A hybrid analog-digital computer architecture can therefore focus on solving linear and nonlinear algebra problems to support many workloads.
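The interchange between the algebraic and differential views can be seen without any analog hardware. The following purely digital sketch (illustrative matrix, right-hand side, and step size; none of it from the thesis) solves a linear algebraic system as the steady state of an ordinary differential equation:

```python
import numpy as np

# A small symmetric positive definite system Ax = b (illustrative values).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([5.0, 5.0])

# Treat the algebraic system as the steady state of the ODE
#     dx/dt = b - A x,
# whose equilibrium (dx/dt = 0) is exactly the solution of Ax = b.
# Integrate with forward Euler until the state settles.
x = np.zeros(2)
dt = 0.05
for _ in range(2000):
    x = x + dt * (b - A @ x)

print(x)                      # approaches [1, 2], the algebraic solution
print(np.linalg.solve(A, b))  # direct digital solve, for comparison
```

On an analog accelerator the integration would happen in continuous time in physical circuitry; here forward Euler stands in for that behavior.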
The second problem is how to get accurate solutions using hybrid analog-digital computing. The reason the analog computation model gives less accurate solutions is that it gives up representing numbers as digital binary numbers, instead using the full range of analog voltage and current to represent real numbers. Prior work has established that encoding data in analog signals gives an energy efficiency advantage as long as the analog data precision is limited. While the analog accelerator alone may be useful for energy-constrained applications where inputs and outputs are imprecise, we are more interested in using analog in conjunction with digital for precise solutions. This thesis gives the novel insight that the way to do so is to solve nonlinear problems, where low-precision guesses are useful starting points for conventional digital algorithms.
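A toy illustration of that insight (hypothetical numbers and workflow, not the thesis's actual hardware loop): a low-precision seed stands in for an analog result, and a digital Newton iteration polishes it to full precision.

```python
# Solve the nonlinear equation f(x) = x^2 - 2 = 0.
def f(x):
    return x * x - 2.0

def fprime(x):
    return 2.0 * x

# Stand-in for a low-precision analog answer: roughly 2 correct digits.
analog_seed = 1.41

# Digital Newton iteration sharpens the imprecise seed; its quadratic
# convergence roughly doubles the number of correct digits per step.
x = analog_seed
for _ in range(5):
    x = x - f(x) / fprime(x)

print(x)  # 1.4142135623730951
```

The imprecise guess costs the digital refinement only a few iterations, which is why low-precision analog output can still feed precise hybrid solutions.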
The third problem is how to solve large problems using hybrid analog-digital computing. The reason the analog computation model can't handle large problems is that it gives up step-by-step, discrete-time operation, instead allowing variables to evolve smoothly in continuous time. To make that happen, the analog accelerator chains hardware for mathematical operations end-to-end. During computation, analog data flows through the hardware with no overheads in control logic and memory accesses. The downside is that the required hardware grows with problem size. While scientific computing researchers have long split large problems into smaller subproblems to fit digital computers' constraints, this thesis is a first attempt to treat these divide-and-conquer algorithms as an essential tool for using the analog model of computation.
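One classical divide-and-conquer scheme of this kind is block-Jacobi iteration, sketched below in plain Python (illustrative matrix, sizes, and a hypothetical "accelerator capacity"; not the thesis's implementation): a system too large for the fixed hardware is solved by repeatedly solving capacity-sized subproblems.

```python
import numpy as np

rng = np.random.default_rng(0)
n, cap = 8, 4                                   # problem size vs. capacity
A = 4.0 * np.eye(n) + 0.1 * rng.random((n, n))  # diagonally dominant
b = rng.random(n)

x = np.zeros(n)
for _ in range(100):                # repeated sweeps over the subproblems
    x_new = x.copy()
    for s in range(0, n, cap):
        blk = slice(s, s + cap)
        # Move this block's coupling to the other blocks to the right-hand
        # side, using the current estimate of the out-of-block variables.
        rhs = b[blk] - A[blk, :] @ x + A[blk, blk] @ x[blk]
        # Each cap-sized subproblem is small enough to "fit the hardware".
        x_new[blk] = np.linalg.solve(A[blk, blk], rhs)
    x = x_new

print(np.allclose(x, np.linalg.solve(A, b)))  # True once sweeps converge
```

Each inner `np.linalg.solve` is the piece an analog accelerator of fixed size could take over; the outer sweep is the digital host's divide-and-conquer coordination.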
As we enter the post-Moore's law era of computing, unconventional architectures will offer specialized models of computation that uniquely support specific problem types. Two prominent examples are deep neural networks and quantum computers. Recent trends in computer science research show these unconventional architectures will soon have broad adoption. In this thesis I show that another specialized, unconventional architecture, the analog accelerator, can solve problems in scientific computing. Computer architecture researchers will discover other important models of computation in the future. This thesis is an example of the discovery process, implementation, and evaluation of how an unconventional architecture supports specialized workloads.
University capability as a micro-foundation for the Triple Helix model: the case of China
This paper aims to advance our understanding of the Triple Helix model from a micro-foundational perspective by articulating the notion of university capability. From an external evaluative viewpoint, we suggest that university capability consists of (1) resource base, (2) motivation/objective, (3) resource allocation and coordination mechanisms, and (4) regional outcomes. Based on qualitative data collected from two leading cities in innovation and regional development in China, our study unpacks university capability by distinguishing resources and capabilities. Furthermore, this paper empirically elucidates two different approaches to dealing with university capability. Our conceptualization of university capability may be a useful analytical tool to better understand the role of 'university' and its relationship with the other actors in the Triple Helix model.
Hybrid Continuous-Discrete Computer: from ISA to Microarchitecture
In this project, we design an instruction set architecture for a proposed hybrid continuous-discrete computer (HCDC) chip. The ISA harnesses the microarchitectural features and analog circuitry provided in the hardware. We describe the workloads that are suitable for the HCDC architecture. The underlying microarchitecture for the HCDC chip, including its controllers, datapaths, and interfaces to analog and digital functional units, is specified in detail.
QDB: From Quantum Algorithms Towards Correct Quantum Programs
With the advent of small-scale prototype quantum computers, researchers can now code and run quantum algorithms that were previously proposed but not fully implemented. In support of this growing interest in quantum computing experimentation, programmers need new tools and techniques to write and debug QC code. In this work, we implement a range of QC algorithms and programs in order to discover what types of bugs occur and what defenses against those bugs are possible in QC programs. We conduct our study by running small-sized QC programs in QC simulators in order to replicate published results in QC implementations. Where possible, we cross-validate results from programs written in different QC languages for the same problems and inputs. Drawing on this experience, we provide a taxonomy for QC bugs, and we propose QC language features that would aid in writing correct code.
Statistical Assertions for Validating Patterns and Finding Bugs in Quantum Programs
In support of the growing interest in quantum computing experimentation, programmers need new tools to write quantum algorithms as program code. Compared to debugging classical programs, debugging quantum programs is difficult because programmers have limited ability to probe the internal states of quantum programs; those states are difficult to interpret even when observations exist; and programmers do not yet have guidelines for what to check for when building quantum programs. In this work, we present quantum program assertions based on statistical tests on classical observations. These allow programmers to decide if a quantum program state matches its expected value in one of classical, superposition, or entangled types of states. We extend an existing quantum programming language with the ability to specify quantum assertions, which our tool then checks in a quantum program simulator. We use these assertions to debug three benchmark quantum programs in factoring, search, and chemistry. We share what types of bugs are possible, and lay out a strategy for using quantum programming patterns to place assertions and prevent bugs.

Comment: In The 46th Annual International Symposium on Computer Architecture (ISCA '19). arXiv admin note: text overlap with arXiv:1811.0544
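The core idea of a statistical assertion can be illustrated in a few lines of Python. This is a hypothetical sketch of the approach, not the paper's actual tool: a chi-square goodness-of-fit statistic checks whether repeated measurement counts are consistent with the uniform distribution expected of an equal superposition.

```python
import numpy as np

def assert_uniform(counts, critical=3.84):
    """Hypothetical statistical assertion: check that measurement counts
    from repeated runs look uniform, via a chi-square goodness-of-fit
    statistic. 3.84 is the 95% chi-square critical value for 1 degree
    of freedom (two outcomes)."""
    counts = np.asarray(counts, dtype=float)
    expected = counts.sum() / len(counts)
    chi2 = ((counts - expected) ** 2 / expected).sum()
    assert chi2 < critical, f"chi2={chi2:.2f} exceeds threshold {critical}"

# Measuring a qubit prepared in an equal superposition 1000 times
# should yield roughly half 0s and half 1s: the assertion passes.
assert_uniform([512, 488])

# A heavily biased distribution (a possible bug) trips the assertion.
try:
    assert_uniform([900, 100])
except AssertionError:
    print("assertion caught a suspicious distribution")
```

The sketch covers only the superposition case; the paper's assertions also distinguish classical and entangled states.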
Logical Abstractions for Noisy Variational Quantum Algorithm Simulation
Due to the unreliability and limited capacity of existing quantum computer prototypes, quantum circuit simulation continues to be a vital tool for validating next generation quantum computers and for studying variational quantum algorithms, which are among the leading candidates for useful quantum computation. Existing quantum circuit simulators do not address the common traits of variational algorithms, namely: 1) their ability to work with noisy qubits and operations, 2) their repeated execution of the same circuits but with different parameters, and 3) the fact that they sample from circuit final wavefunctions to drive a classical optimization routine. We present a quantum circuit simulation toolchain based on logical abstractions targeted for simulating variational algorithms. Our proposed toolchain encodes quantum amplitudes and noise probabilities in a probabilistic graphical model, and it compiles the circuits to logical formulas that support efficient repeated simulation of and sampling from quantum circuits for different parameters. Compared to state-of-the-art state vector and density matrix quantum circuit simulators, our simulation approach offers greater performance when sampling from noisy circuits with at least eight to 20 qubits and with around 12 operations on each qubit, making the approach ideal for simulating near-term variational quantum algorithms. And for simulating noise-free shallow quantum circuits with 32 qubits, our simulation approach offers a reduction in sampling cost versus quantum circuit simulation techniques based on tensor network contraction.

Comment: ASPLOS '21, April 19-23, 2021, Virtual, US