Making "Fast" Atomic Operations Computationally Tractable
Communication overhead is the most commonly used performance metric for the operation complexity of distributed algorithms in message-passing environments. However, alongside communication, many distributed operations rely on complex computations to reach their desired outcomes. Therefore, a more accurate measure of operation latency should account for both computation and communication.
In this paper we focus on the efficiency of read and write operations in an atomic read/write shared memory emulation in the message-passing environment. We examine the operation complexity of the best-known atomic register algorithm, which allows all read and write operations to complete in a single communication round trip. Such operations are called fast. At its heart, the algorithm uses a predicate to allow processes to compute their outcome. We show that this predicate is computationally hard by devising a computationally equivalent problem and reducing it to Maximum Biclique, a known NP-hard problem. To improve the computational complexity of the algorithm, we derive a new predicate that leads to a new algorithm, which we call ccFast, with the following properties: (i) the predicate can be computed in polynomial time, rendering each read operation in ccFast tractable compared to the read operations in the original algorithm; (ii) the messages used in ccFast are smaller than those of the original algorithm by almost a linear factor; (iii) all operations in ccFast are fast; and (iv) ccFast preserves atomicity. A linear-time algorithm for computing the new predicate is presented, along with an analysis of the message complexity of the new algorithm. We believe the new algorithm redefines the term fast by capturing both the communication and the computation cost of each operation.
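The single-round-trip pattern behind fast operations can be illustrated with a minimal, hypothetical quorum-read skeleton. The names and the trivial decision rule below are illustrative assumptions, not the actual ccFast predicate from the paper: the reader queries all replicas, waits for a majority, and decides its return value from the collected (timestamp, value) pairs within one round trip.

```python
# Hedged sketch of a single-round-trip ("fast") quorum read.
# The decision rule here is a simplified placeholder; the real
# ccFast predicate over the reply set is considerably richer.

def quorum_read(replies, total_replicas):
    """replies: list of (timestamp, value) pairs collected from replicas."""
    majority = total_replicas // 2 + 1
    if len(replies) < majority:
        raise RuntimeError("no quorum: read cannot complete in one round")
    # Decide directly from the single round of replies: return the
    # value carrying the highest timestamp seen by the quorum.
    ts, val = max(replies, key=lambda p: p[0])
    return ts, val

# Example: 5 replicas, 3 reply within the first round trip.
replies = [(3, "b"), (2, "a"), (3, "b")]
print(quorum_read(replies, 5))  # -> (3, 'b')
```

The point of the fast structure is that the reader never needs a second round trip to confirm its choice; the predicate's job (and its computational cost) lies entirely in deciding safely from this one reply set.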
Convexity in source separation: Models, geometry, and algorithms
Source separation or demixing is the process of extracting multiple
components entangled within a signal. Contemporary signal processing presents a
host of difficult source separation problems, from interference cancellation to
background subtraction, blind deconvolution, and even dictionary learning.
Despite the recent progress in each of these applications, advances in
high-throughput sensor technology place demixing algorithms under pressure to
accommodate extremely high-dimensional signals, separate an ever larger number
of sources, and cope with more sophisticated signal and mixing models. These
difficulties are exacerbated by the need for real-time action in automated
decision-making systems.
Recent advances in convex optimization provide a simple framework for
efficiently solving numerous difficult demixing problems. This article provides
an overview of the emerging field, explains the theory that governs the
underlying procedures, and surveys algorithms that solve them efficiently. We
aim to equip practitioners with a toolkit for constructing their own demixing
algorithms that work, as well as concrete intuition for why they work.
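As a small illustration of the convex approach to demixing (a sketch under assumed toy parameters, not an algorithm from this survey), one can separate a spike train from a smooth oscillatory component by exploiting sparsity in two incoherent bases (identity and a DCT basis) and solving an ℓ1-regularized least-squares problem with iterative soft thresholding:

```python
import numpy as np

n = 64

# Orthonormal DCT-II basis (columns are cosine atoms).
j = np.arange(n)
B = np.cos(np.pi * (j[:, None] + 0.5) * j[None, :] / n)
B[:, 0] *= 1.0 / np.sqrt(2)
B *= np.sqrt(2.0 / n)

# Ground truth: x1 sparse as spikes, x2 = B @ c with sparse DCT coefficients.
x1 = np.zeros(n); x1[[5, 40]] = [1.5, -2.0]
c  = np.zeros(n); c[[3, 12]]  = [2.0,  1.0]
y = x1 + B @ c                         # observed mixture

# ISTA on: min 0.5*||y - u - B v||^2 + lam*(||u||_1 + ||v||_1)
lam, step = 0.05, 0.5                  # step <= 1/L, L = ||[I, B]||^2 = 2
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
u = np.zeros(n); v = np.zeros(n)
for _ in range(2000):
    r = y - u - B @ v                  # shared residual
    u = soft(u + step * r, step * lam)
    v = soft(v + step * (B.T @ r), step * lam)

print(np.linalg.norm(y - u - B @ v))   # small: mixture explained by two parts
```

Because the two dictionaries are mutually incoherent, the convex program pulls each component into its "native" basis, which is exactly the geometric intuition the article develops.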
Computational Methods for Sparse Solution of Linear Inverse Problems
The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
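One of the major practical algorithms in this family is Orthogonal Matching Pursuit: greedily select the dictionary atom most correlated with the residual, then re-fit all selected coefficients by least squares. A minimal sketch, with the dictionary and sparsity level chosen arbitrarily for illustration:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of D to fit y."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # Atom most correlated with the current residual.
        i = int(np.argmax(np.abs(D.T @ residual)))
        support.append(i)
        # Re-fit coefficients on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1]); x[support] = coef
    return x, residual

rng = np.random.default_rng(0)
m, n, k = 50, 20, 3
D = rng.standard_normal((m, n)); D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(n); x_true[[2, 7, 15]] = [3.0, -2.0, 1.5]
y = D @ x_true

x_hat, r = omp(D, y, k)
print(np.linalg.norm(r))  # near zero once the true atoms are found
```

The least-squares re-fit at each step is what distinguishes OMP from plain matching pursuit and underlies its recovery guarantees for incoherent dictionaries.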
Theory and Applications of Robust Optimization
In this paper we survey the primary research, both theoretical and applied,
in the area of Robust Optimization (RO). Our focus is on the computational
attractiveness of RO approaches, as well as the modeling power and broad
applicability of the methodology. In addition to surveying prominent
theoretical results of RO, we also present some recent results linking RO to
adaptable models for multi-stage decision-making problems. Finally, we
highlight applications of RO across a wide spectrum of domains, including
finance, statistics, learning, and various areas of engineering.
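A core source of RO's computational attractiveness is that robust counterparts of linear constraints often stay tractable. For the standard box-uncertainty case, the constraint (a + d)ᵀx ≤ b for all ‖d‖∞ ≤ ρ collapses to the deterministic constraint aᵀx + ρ‖x‖₁ ≤ b, since a linear objective over a box attains its worst case at a corner with dᵢ = ρ·sign(xᵢ). A quick numerical check (dimensions and data are arbitrary):

```python
import itertools
import numpy as np

# Robust linear constraint (a + d)^T x <= b for all |d_i| <= rho.
# Robust counterpart: a^T x + rho * ||x||_1 <= b.

rng = np.random.default_rng(1)
n, rho = 4, 0.3
a, x = rng.standard_normal(n), rng.standard_normal(n)

# Brute-force worst case over the 2^n corners of the box.
worst = max((a + rho * np.array(s)) @ x
            for s in itertools.product([-1.0, 1.0], repeat=n))

counterpart = a @ x + rho * np.abs(x).sum()
print(worst, counterpart)  # equal up to floating point
```

Since ‖x‖₁ is convex and piecewise linear, the counterpart is itself an LP constraint after the usual splitting of x into positive and negative parts, which is why box-robust LPs are no harder than nominal LPs.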
MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation
MADNESS (multiresolution adaptive numerical environment for scientific
simulation) is a high-level software environment for solving integral and
differential equations in many dimensions that uses adaptive and fast harmonic
analysis methods with guaranteed precision based on multiresolution analysis
and separated representations. Underpinning the numerical capabilities is a
powerful petascale parallel programming environment that aims to increase both
programmer productivity and code scalability. This paper describes the features
and capabilities of MADNESS and briefly discusses some current applications in
chemistry and several areas of physics.
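The adaptive-refinement idea at the heart of multiresolution methods can be sketched in a toy form (this is an illustration of the general principle, not MADNESS's actual representation): subdivide intervals dyadically until a local error estimate meets the requested precision, so the mesh automatically concentrates where the function varies sharply.

```python
import math

def adaptive_refine(f, a, b, tol, max_depth=30):
    """Split [a, b] dyadically until the midpoint deviates from the
    linear interpolant by less than tol on every leaf interval."""
    mid = 0.5 * (a + b)
    err = abs(f(mid) - 0.5 * (f(a) + f(b)))  # local error estimate
    if err <= tol or max_depth == 0:
        return [(a, b)]
    return (adaptive_refine(f, a, mid, tol, max_depth - 1)
            + adaptive_refine(f, mid, b, tol, max_depth - 1))

# A sharp bump at x = 0.5: refinement clusters around it, while the
# flat regions are covered by a handful of coarse intervals.
f = lambda x: math.exp(-((x - 0.5) / 0.02) ** 2)
leaves = adaptive_refine(f, 0.0, 1.0, tol=1e-3)
near = sum(1 for (lo, hi) in leaves if abs(0.5 * (lo + hi) - 0.5) < 0.1)
print(len(leaves), near)
```

Replacing the linear interpolant with local polynomial (multiwavelet) expansions, and the interval with a d-dimensional box, gives the flavor of how guaranteed-precision adaptivity works in many dimensions.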
Simulating chemistry efficiently on fault-tolerant quantum computers
Quantum computers can in principle simulate quantum physics exponentially
faster than their classical counterparts, but some technical hurdles remain.
Here we consider methods to make proposed chemical simulation algorithms
computationally fast on fault-tolerant quantum computers in the circuit model.
Fault tolerance constrains the choice of available gates, so that arbitrary
gates required for a simulation algorithm must be constructed from sequences of
fundamental operations. We examine techniques for constructing arbitrary gates
which perform substantially faster than circuits based on the conventional
Solovay-Kitaev algorithm [C.M. Dawson and M.A. Nielsen, Quantum Inf.
Comput., 6:81, 2006]. For a given approximation error ε,
arbitrary single-qubit gates can be produced fault-tolerantly and using a
limited set of gates in time which is O(log(1/ε)) or O(log log(1/ε)); with sufficient parallel preparation of ancillas, constant average
depth is possible using a method we call programmable ancilla rotations.
Moreover, we construct and analyze efficient implementations of first- and
second-quantized simulation algorithms using the fault-tolerant arbitrary gates
and other techniques, such as implementing various subroutines in constant
time. A specific example we analyze is the ground-state energy calculation for
lithium hydride.
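The accuracy/cost trade-off such simulation algorithms manage can be seen in miniature with first-order Trotterization of a single-qubit Hamiltonian H = aX + bZ (a toy stand-in, not an example from the paper): the product of n small X- and Z-rotations approximates exp(-iHt) with error shrinking like 1/n, so gate count grows with the precision demanded.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def exp_pauli(theta, P):
    """exp(-i*theta*P) for any P with P @ P = I (e.g. a Pauli axis)."""
    return np.cos(theta) * I2 - 1j * np.sin(theta) * P

a, b, t = 0.7, 0.4, 1.0
r = np.hypot(a, b)
H_dir = (a * X + b * Z) / r            # unit direction, squares to I
U_exact = exp_pauli(r * t, H_dir)      # exact exp(-i(aX + bZ)t)

def trotter(n):
    """First-order Trotter product of n alternating X and Z rotations."""
    step = exp_pauli(a * t / n, X) @ exp_pauli(b * t / n, Z)
    return np.linalg.matrix_power(step, n)

for n in (10, 100):
    print(n, np.linalg.norm(trotter(n) - U_exact, 2))
```

Each Trotter factor must itself be compiled into fault-tolerant gates, which is exactly where the O(log(1/ε)) arbitrary-rotation constructions above enter the total cost.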
Hilbert space structure of a solid state quantum computer: two-electron states of a double quantum dot artificial molecule
We study theoretically a double quantum dot hydrogen molecule in the GaAs
conduction band as the basic elementary gate for a quantum computer with the
electron spins in the dots serving as qubits. Such a two-dot system provides
the necessary two-qubit entanglement required for quantum computation. We
determine the excitation spectrum of two horizontally coupled quantum dots with
two confined electrons, and study its dependence on an external magnetic field.
In particular, we focus on the splitting of the lowest singlet and triplet
states, the double occupation probability of the lowest states, and the
relative energy scales of these states. We point out that at zero magnetic
field it is difficult to have both a vanishing double occupation probability
for a small error rate and a sizable exchange coupling for fast gating. On the
other hand, finite magnetic fields may provide finite exchange coupling for
quantum computer operations with small errors. We critically discuss the
applicability of the envelope function approach in the current scheme and also
the merits of various quantum chemical approaches in dealing with few-electron
problems in quantum dots, such as the Hartree-Fock self-consistent field
method, the molecular orbital method, the Heisenberg model, and the Hubbard
model. We also discuss a number of relevant issues in quantum dot quantum
computing in the context of our calculations, such as the required design
tolerance, spin decoherence, adiabatic transitions, magnetic field control, and
error correction.
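The Hubbard-model picture of the exchange coupling invoked above can be made concrete with exact diagonalization of the two-site Hubbard model at half filling (a standard textbook toy, with arbitrary parameters; the sign convention of the basis is one common choice): the singlet-triplet splitting is J = (sqrt(U² + 16t²) - U)/2, which reduces to the familiar 4t²/U for U ≫ t.

```python
import numpy as np

t, U = 1.0, 8.0

# Two-site Hubbard model, two electrons, Sz = 0 sector.
# Basis (one sign convention): |up,dn>, |dn,up>, |updn,0>, |0,updn>.
H = np.array([[0.0, 0.0,  -t,  -t],
              [0.0, 0.0,   t,   t],
              [ -t,   t,   U, 0.0],
              [ -t,   t, 0.0,   U]])

E = np.linalg.eigvalsh(H)
E_singlet = E[0]                     # lowest state: spin singlet
E_triplet = 0.0                      # Sz = 0 triplet decouples at energy 0
J = E_triplet - E_singlet            # exchange (singlet-triplet) splitting

J_exact = (np.sqrt(U**2 + 16 * t**2) - U) / 2
print(J, J_exact, 4 * t**2 / U)      # J -> 4t^2/U for U >> t
```

This is the quantitative content behind "finite exchange coupling for fast gating": the gate speed is set by J, while the double-occupation admixture in the singlet controls the error rate.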