137 research outputs found
A Unified Method for Placing Problems in Polylogarithmic Depth
In this work we consider the term evaluation problem: given a term over some algebra and a valid input to the term, compute the value of the term on that input. In contrast to previous methods, we allow the algebra to be completely general and ask for an efficient upper bound for this problem. Many variants of the problem in which the algebra is well behaved have been studied, for example over the Boolean semiring or over the semiring (N,+,*). We extend this line of work.
Our efficient term evaluation algorithm then serves as a tool for obtaining polylogarithmic-depth upper bounds for various well-studied problems. To demonstrate the utility of our result we show new bounds and reprove known results for a large spectrum of problems. In particular, the applications of the algorithm we consider include (but are not restricted to) arithmetic formula evaluation, word problems for tree and visibly pushdown automata, and various problems related to graphs of bounded tree-width and clique-width.
Prizing on Paths: A PTAS for the Highway Problem
In the highway problem, we are given an n-edge line graph (the highway), and
a set of paths (the drivers), each one with its own budget. For a given
assignment of edge weights (the tolls), the highway owner collects from each
driver the weight of the associated path, when it does not exceed the budget of
the driver, and zero otherwise. The goal is choosing weights so as to maximize
the profit.
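The objective just described is straightforward to state in code. The following toy sketch (data and names hypothetical, not from the paper) computes the owner's profit for a fixed choice of tolls:

```python
# Hypothetical sketch of the highway objective: each driver pays the
# total toll on their subpath of the line graph, unless that total
# exceeds their budget, in which case they pay nothing.

def profit(tolls, drivers):
    """tolls: per-edge weights; drivers: (first_edge, last_edge, budget)."""
    total = 0
    for first, last, budget in drivers:
        fare = sum(tolls[first:last + 1])   # weight of the driver's path
        if fare <= budget:
            total += fare                   # within budget: owner collects
    return total

tolls = [2, 3, 1, 4]                        # a 4-edge highway
drivers = [(0, 1, 6), (1, 3, 5), (2, 2, 1)]
print(profit(tolls, drivers))               # -> 5 + 0 + 1 = 6
```

The hard part, of course, is the outer optimization over the tolls themselves, which is what the PTAS addresses.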
A lot of research has been devoted to this apparently simple problem. The
highway problem was shown to be strongly NP-hard only recently
[Elbassioni,Raman,Ray-'09]. The best-known approximation is O(log n / log log n)
[Gamzu,Segev-'10], which improves on the previous-best O(log n)
approximation [Balcan,Blum-'06].
In this paper we present a PTAS for the highway problem, hence closing the
complexity status of the problem. Our result is based on a novel randomized
dissection approach, which has some points in common with Arora's quadtree
dissection for Euclidean network design [Arora-'98]. The basic idea is
enclosing the highway in a bounding path, such that both the size of the
bounding path and the position of the highway in it are random variables. Then
we consider a recursive O(1)-ary dissection of the bounding path, in subpaths
of uniform optimal weight. Since the optimal weights are unknown, we construct
the dissection in a bottom-up fashion via dynamic programming, while computing
the approximate solution at the same time. Our algorithm can be easily
derandomized. We demonstrate the versatility of our technique by presenting
PTASs for two variants of the highway problem: the tollbooth problem with a
constant number of leaves and the maximum-feasibility subsystem problem on
interval matrices. In both cases the previous best approximation factors are
polylogarithmic [Gamzu,Segev-'10,Elbassioni,Raman,Ray,Sitters-'09].
Aspects of practical implementations of PRAM algorithms
The PRAM is a shared-memory model of parallel computation which abstracts away from inessential engineering details. It provides a very simple, architecture-independent model and a good programming environment. Theoreticians of the computer science community have proved that it is possible to emulate the theoretical PRAM model using current technology. Solutions have been found for effectively interconnecting processing elements, for routing data on these networks and for distributing the data among memory modules without hotspots. This thesis reviews this emulation and the possibilities it provides for large-scale general-purpose parallel computation. The emulation employs a bridging model which acts as an interface between the actual hardware and the PRAM model. We review the evidence that such a scheme can achieve scalable parallel performance and portable parallel software and that PRAM algorithms can be optimally implemented on such practical models. In the course of this review we present the following new results:
1. Concerning parallel approximation algorithms, we describe an NC algorithm for finding an approximation to a minimum weight perfect matching in a complete weighted graph. The algorithm is conceptually very simple and it is also the first NC-approximation algorithm for the task with a sub-linear performance ratio.
2. Concerning graph embedding, we describe dense edge-disjoint embeddings of the complete binary tree with n leaves in the following n-node communication networks: the hypercube, the de Bruijn and shuffle-exchange networks, and the 2-dimensional mesh. In the embeddings the maximum distance from a leaf to the root of the tree is asymptotically optimally short. The embeddings facilitate efficient implementation of many PRAM algorithms on networks employing these graphs as interconnection networks.
3. Concerning bulk-synchronous algorithmics, we describe scalable, transportable algorithms for the following three commonly required types of computation: balanced-tree computations, Fast Fourier Transforms, and matrix multiplications.
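The first of these three computation types can be illustrated with a minimal sequential sketch (mine, not the thesis's): a balanced-tree reduction that combines n values in O(log n) synchronous rounds, where every combination within a round is independent and could therefore execute in parallel on a PRAM or BSP machine.

```python
# Balanced-tree reduction: each while-iteration is one synchronous
# round in which disjoint pairs are combined independently, so n
# values are reduced in ceil(log2 n) rounds.

import operator

def tree_reduce(values, combine=operator.add):
    data = list(values)
    while len(data) > 1:
        # one round: all pairs (data[0],data[1]), (data[2],data[3]), ...
        paired = [combine(data[i], data[i + 1])
                  for i in range(0, len(data) - 1, 2)]
        if len(data) % 2:
            paired.append(data[-1])   # odd element carried to next round
        data = paired
    return data[0]

print(tree_reduce(range(1, 9)))   # -> 36, in log2(8) = 3 rounds
```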
Revisiting Area Convexity: Faster Box-Simplex Games and Spectrahedral Generalizations
We investigate different aspects of area convexity [Sherman '17], a
mysterious tool introduced to tackle optimization problems under the
challenging ℓ∞ geometry. We develop a deeper understanding of its
relationship with more conventional analyses of extragradient methods
[Nemirovski '04, Nesterov '07]. We also give improved solvers for the
subproblems required by variants of the [Sherman '17] algorithm, designed
through the lens of relative smoothness [Bauschke-Bolte-Teboulle '17,
Lu-Freund-Nesterov '18].
Leveraging these new tools, we give a state-of-the-art first-order algorithm
for solving box-simplex games (a primal-dual formulation of
ℓ∞ regression) in a matrix with bounded ℓ∞ norm rows, using matrix-vector queries. As a consequence, we obtain improved
complexities for approximate maximum flow, optimal transport, min-mean-cycle,
and other basic combinatorial optimization problems. We also develop a
near-linear time algorithm for a matrix generalization of box-simplex games,
capturing a family of problems closely related to semidefinite programs
recently used as subroutines in robust statistics and numerical linear algebra.
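For orientation, one common formulation of a box-simplex game (conventions vary across papers) is min over x in the box [-1,1]^n of max over y in the probability simplex of y^T A x + c^T x - b^T y. The toy sketch below (all data hypothetical) evaluates the inner maximization, which is attained at a simplex vertex and so needs only the matrix-vector product Ax, matching the query model mentioned above.

```python
# For fixed x, max over the simplex of y^T (Ax - b) is just the largest
# coordinate of Ax - b, so the inner problem costs one matrix-vector
# product plus a max.

def inner_max(A, b, c, x):
    Ax = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]  # A @ x
    ctx = sum(ci * xi for ci, xi in zip(c, x))                    # c^T x
    return max(axi - bi for axi, bi in zip(Ax, b)) + ctx

A = [[1.0, -2.0], [0.5, 3.0]]   # hypothetical instance
b = [0.0, 1.0]
c = [0.1, 0.1]
x = [1.0, -1.0]                 # a point in the box [-1, 1]^2
print(inner_max(A, b, c, x))    # -> 3.0
```

The algorithms in the paper solve the outer minimization to additive accuracy using a number of such matrix-vector queries.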
DNA computation
This is the first ever doctoral thesis in the field of DNA computation. The field has its roots
in the late 1950s, when the Nobel laureate Richard Feynman first introduced the concept of
computing at a molecular level. Feynman's visionary idea was only realised in 1994, when
Leonard Adleman performed the first ever truly molecular-level computation using DNA
combined with the tools and techniques of molecular biology. Since Adleman reported the
results of his seminal experiment, there has been a flurry of interest in the idea of using DNA
to perform computations. The potential benefits of using this particular molecule are enormous:
by harnessing the massive inherent parallelism of performing concurrent operations
on trillions of strands, we may one day be able to compress the power of today's supercomputer
into a single test tube. However, if we compare the development of DNA-based
computers to that of their silicon counterparts, it is clear that molecular computers are still
in their infancy. Current work in this area is concerned mainly with abstract models of
computation and simple proof-of-principle experiments. The goal of this thesis is to present
our contribution to the field, placing it in the context of the existing body of work. Our
new results concern a general model of DNA computation, an error-resistant implementation
of the model, experimental investigation of the implementation and an assessment of
the complexity and viability of DNA computations. We begin by recounting the historical
background to the search for the structure of DNA. By providing a detailed description of
this molecule and the operations we may perform on it, we lay down the foundations for subsequent
chapters. We then describe the basic models of DNA computation that have been
proposed to date. In particular, we describe our parallel filtering model, which is the first
to provide a general framework for the elegant expression of algorithms for NP-complete
problems. The implementation of such abstract models is crucial to their success. Previous
experiments that have been carried out suffer from their reliance on various error-prone laboratory
techniques. We show for the first time how one particular operation, hybridisation
extraction, may be replaced by an error-resistant enzymatic separation technique. We also
describe a novel solution read-out procedure that utilizes cloning, and is sufficiently general
to allow it to be used in any experimental implementation. The results of preliminary tests of these techniques are then reported. Several important conclusions are to be drawn from these investigations, and we report these in the hope that they will provide useful experimental
guidance in the future. The final contribution of this thesis is a rigorous consideration
of the complexity and viability of DNA computations. We argue that existing analyses of
models of DNA computation are flawed and unrealistic. In order to obtain more realistic
measures of the time and space complexity of DNA computations we describe a new strong
model, and reassess previously described algorithms within it. We review the search for
"killer applications": applications of DNA computing that will establish the superiority of this paradigm within a certain domain. We conclude the thesis with a description of several
open problems in the field of DNA computation.
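The parallel filtering model mentioned above expresses algorithms for NP-complete problems as successive removals of strands that fail a constraint. A loose software analogue (assumed semantics, not the thesis's formal operation set) on a toy subset-sum instance:

```python
# Generate-then-filter: the initial "tube" holds all 0/1 strands of
# length 4; a filtering step removes every strand whose chosen subset
# does not hit the target sum. Each membership test is independent,
# mirroring the massive parallelism of the wet-lab operations.

from itertools import product

weights, target = [2, 3, 5, 7], 10
tube = list(product([0, 1], repeat=len(weights)))   # initial library

tube = [s for s in tube
        if sum(w for w, bit in zip(weights, s) if bit) == target]

print(tube)   # -> [(0, 1, 0, 1), (1, 1, 1, 0)]  (3+7 and 2+3+5)
```

Note that in the model a constraint is typically decomposed into several simpler removal steps, each implementable by a single laboratory operation; the single list comprehension above compresses that pipeline.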
Parallelizing quantum circuit synthesis
We present an algorithmic framework for parallel quantum circuit synthesis using meet-in-the-middle synthesis techniques. We also present two implementations thereof, using both threaded and hybrid parallelization techniques.
We give examples where applying parallelism offers a speedup in the time of circuit synthesis for 2- and 3-qubit circuits. We use a threaded algorithm to synthesize 3-qubit circuits with optimal T-count 9 and 11, surpassing the previous record of T-count 7. As the estimated runtime of the framework is inversely proportional to the number of processors, we propose an implementation using hybrid parallel programming which can take full advantage of a computing cluster’s thousands of cores. This implementation has the potential to synthesize circuits which were previously deemed impossible due to the exponential runtime of existing algorithms.
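The meet-in-the-middle idea underlying the framework is easy to sketch generically. In the toy below (permutations stand in for unitaries; the gate set and names are hypothetical), a length-2k sequence realizing a target is found by tabulating all length-k products and looking up the complementary half, turning a g^(2k) search into roughly g^k table operations:

```python
# Meet-in-the-middle search: target = right . left, so after storing
# every length-k product u in a table, we only need to test whether
# target . u^{-1} is itself in the table.

from itertools import product

# toy "gates": permutations of 3 wires, composed like circuit stages
gates = {'swap01': (1, 0, 2), 'swap12': (0, 2, 1)}

def compose(p, q):                  # apply p, then q
    return tuple(q[i] for i in p)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def synthesize(target, k):
    identity = tuple(range(len(target)))
    half = {}
    for seq in product(gates, repeat=k):   # all length-k sequences
        u = identity
        for g in seq:
            u = compose(u, gates[g])
        half.setdefault(u, seq)
    for u, left in half.items():
        rest = compose(inverse(u), target)  # the required right half
        if rest in half:
            return list(left) + list(half[rest])
    return None                             # no length-2k realization

print(synthesize((2, 0, 1), 1))   # -> ['swap01', 'swap12']
```

In the quantum setting the table holds (canonicalized) unitaries rather than permutations, and it is the enumeration of the length-k halves that the paper distributes across threads and cluster nodes.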
A Survey on Approximation in Parameterized Complexity: Hardness and Algorithms
Parameterization and approximation are two popular ways of coping with
NP-hard problems. More recently, the two have also been combined to derive many
interesting results. We survey developments in the area both from the
algorithmic and hardness perspectives, with emphasis on new techniques and
potential future research directions.
A PTAS for the Highway Problem
In the highway problem, we are given an n-edge line graph (the highway), and a set of paths (the drivers), each one with its own budget. For a given assignment of edge weights (the tolls), the highway owner collects from each driver the weight of the associated path, when it does not exceed the budget of the driver, and zero otherwise. The goal is choosing weights so as to maximize the profit. A lot of research has been devoted to this apparently simple problem. The highway problem was shown to be strongly NP-hard only recently [Elbassioni,Raman,Ray,Sitters-'09]. The best-known approximation is O(log n / log log n) [Gamzu,Segev-'10], which improves on the previous-best O(log n) approximation [Balcan,Blum-'06]. Better approximations are known for a number of special cases. Finding a constant (or better!) approximation algorithm for the general case is a challenging open problem. In this paper we present a PTAS for the highway problem, hence closing the complexity status of the problem. Our result is based on a novel randomized dissection approach, which has some points in common with Arora's quadtree dissection for Euclidean network design [Arora-'98]. The basic idea is enclosing the highway in a bounding path, such that both the size of the bounding path and the position of the highway in it are random variables. Then we consider a recursive O(1)-ary dissection of the bounding path, in subpaths of uniform optimal weight. Since the optimal weights are unknown, we construct the dissection in a bottom-up fashion via dynamic programming, while computing the approximate solution at the same time. Our algorithm can be easily derandomized. The same basic approach provides PTASs also for two generalizations of the problem: the tollbooth problem with a constant number of leaves and the maximum-feasibility subsystem problem on interval matrices. In both cases the previous best approximation factors are polylogarithmic [Gamzu,Segev-'10,Elbassioni,Raman,Ray,Sitters-'09].
Quantum computing for finance
Quantum computers are expected to surpass the computational capabilities of
classical computers and have a transformative impact on numerous industry
sectors. We present a comprehensive summary of the state of the art of quantum
computing for financial applications, with particular emphasis on stochastic
modeling, optimization, and machine learning. This Review is aimed at
physicists, so it outlines the classical techniques used by the financial
industry and discusses the potential advantages and limitations of quantum
techniques. Finally, we look at the challenges that physicists could help
tackle.
- …