Bounded Counter Languages
We show that deterministic finite automata equipped with $k$ two-way heads
are equivalent to deterministic machines with a single two-way input head and
$k-1$ linearly bounded counters if the accepted language is strictly bounded,
i.e., a subset of $a_1^* a_2^* \cdots a_m^*$ for a fixed sequence of symbols
$a_1, a_2, \ldots, a_m$. Then we investigate linear speed-up for counter
machines. Lower
and upper time bounds for concrete recognition problems are shown, implying
that in general linear speed-up does not hold for counter machines. For bounded
languages we develop a technique for speeding up computations by any constant
factor at the expense of adding a fixed number of counters.
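As a concrete illustration (our example, not taken from the paper): the
language {a^n b^n c^n : n >= 0} is strictly bounded, being a subset of
a*b*c*, and a single one-way pass with two counters that never exceed the
input length (i.e. linearly bounded counters) suffices to recognize it. A
minimal Python sketch:

    def accepts(word):
        # One left-to-right pass over the input using two counters that
        # never exceed len(word), i.e. linearly bounded counters.
        c1 = c2 = 0
        phase = 0                     # 0: reading a's, 1: b's, 2: c's
        for sym in word:
            expected = "abc".find(sym)
            if expected < phase:      # unknown symbol, or out of a*b*c* order
                return False
            phase = expected
            if sym == "a":
                c1 += 1; c2 += 1      # count the a's on both counters
            elif sym == "b":
                c1 -= 1               # match each b against an a
                if c1 < 0:
                    return False
            else:
                c2 -= 1               # match each c against an a
                if c2 < 0:
                    return False
        return c1 == 0 and c2 == 0

    assert accepts("aabbcc") and not accepts("aabbc") and not accepts("cba")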
Exponential Separation of Quantum and Classical Online Space Complexity
Although quantum algorithms achieving an exponential time speed-up over the
best known classical algorithms exist, no quantum algorithm is known that
performs its computation using less space than classical algorithms. In this
paper, we study, for the first time explicitly, space-bounded quantum
algorithms for computational problems where the input is given not as a whole,
but bit by bit. We show that there exist problems of this kind that a quantum
computer can solve using exponentially less work space than a classical computer. More
precisely, we introduce a very natural and simple model of a space-bounded
quantum online machine and prove an exponential separation of classical and
quantum online space complexity, in the bounded-error setting and for a total
language. The language we consider is inspired by a communication problem (the
set intersection function) that Buhrman, Cleve and Wigderson used to show an
almost quadratic separation of quantum and classical bounded-error
communication complexity. We prove that, in the framework of online space
complexity, the separation becomes exponential.
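To make the online setting concrete, consider the set intersection function
INT(x, y) = 1 iff the n-bit strings x and y share a common 1-position, with
the bits of x arriving before those of y. The following Python sketch (an
illustration under our own naming, not the paper's exact language) shows the
natural classical online algorithm: it buffers all of x, hence uses linear
work space, and the intuition behind the result is that, for a related
language, no classical online machine can do substantially better while a
quantum online machine can:

    def intersects_online(bits, n):
        # Online evaluation of INT: the 2n input bits x1..xn y1..yn
        # arrive one at a time.  This classical machine buffers all of x
        # (n bits of work space) before it can check y against it.
        x = []
        for i, b in enumerate(bits):
            if i < n:
                x.append(b)               # first half: remember x verbatim
            elif b == 1 and x[i - n] == 1:
                return True               # common 1-position found
        return False

    assert intersects_online([0, 1, 0, 1, 1, 0], 3) is True
    assert intersects_online([0, 1, 0, 1, 0, 1], 3) is False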
The Cost of Address Translation
Modern computers are not random access machines (RAMs). They have a memory
hierarchy, multiple cores, and virtual memory. In this paper, we address the
computational cost of address translation in virtual memory. The starting point for
our work is the observation that the analysis of some simple algorithms (random
scan of an array, binary search, heapsort) in either the RAM model or the EM
model (external memory model) does not correctly predict growth rates of actual
running times. We propose the VAT model (virtual address translation) to
account for the cost of address translations and analyze the algorithms
mentioned above and others in the model. The predictions agree with the
measurements. We also analyze the VAT-cost of cache-oblivious algorithms.Comment: A extended abstract of this paper was published in the proceedings of
ALENEX13, New Orleans, US
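As a toy rendition of why translation cost depends on the access pattern (all
parameters, the tree shape, and the eviction policy below are invented for
illustration and are not the paper's model): translating a virtual address
walks a d-ary translation tree, and only uncached tree nodes cost a memory
access, so a sequential scan keeps reusing cached nodes while a random scan
keeps missing:

    import random

    D, PAGE_SIZE, CACHE_CAP = 16, 4096, 64   # invented parameters
    cache = set()

    def translation_cost(vaddr, depth):
        # Walk the translation tree from root to leaf; each node is a
        # path prefix of the page number, and only uncached nodes cost
        # a memory access.  set.pop() is a deliberately crude eviction.
        cost, page = 0, vaddr // PAGE_SIZE
        for level in range(depth, -1, -1):
            node = (level, page // D ** level)
            if node not in cache:
                cost += 1
                if len(cache) >= CACHE_CAP:
                    cache.pop()
                cache.add(node)
        return cost

    n, depth = 1 << 18, 4
    seq = sum(translation_cost(8 * i, depth) for i in range(n))
    cache.clear()
    rnd = sum(translation_cost(8 * random.randrange(n), depth) for _ in range(n))
    print(f"sequential: {seq} vs random: {rnd} translation accesses")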
B-LOG: A branch and bound methodology for the parallel execution of logic programs
We propose a computational methodology, "B-LOG", which offers the potential for an effective implementation of Logic Programming in a parallel computer. We also propose a weighting scheme to guide the search process through the graph, and we apply the concepts of parallel "branch and bound" algorithms in order to perform a "best-first" search using an information-theoretic bound. The concept of a "session" is used to speed up the search process in a succession of similar queries. Within a session, we strongly modify the bounds in a local database, while bounds kept in a global database are weakly modified to provide a better initial condition for other sessions. We
also propose an implementation scheme based on a database
machine using "semantic paging", and the "B-LOG processor" based on a scoreboard-driven controller.
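For reference, the search loop underlying such a scheme is ordinary
best-first branch and bound. A minimal sequential Python skeleton (the
callback names are our assumptions; B-LOG itself is parallel, weighted, and
session-aware):

    import heapq

    def best_first_branch_and_bound(root, bound, expand, is_goal):
        # Expand nodes in order of a heuristic bound: the frontier is a
        # priority queue keyed by bound(node).  The callbacks bound,
        # expand, and is_goal are assumptions for this sketch.
        tick = 0                                  # tiebreaker for equal bounds
        frontier = [(bound(root), tick, root)]
        while frontier:
            _, _, node = heapq.heappop(frontier)  # cheapest bound first
            if is_goal(node):
                return node                       # first best-bound solution
            for child in expand(node):
                tick += 1
                heapq.heappush(frontier, (bound(child), tick, child))
        return None                               # search space exhausted

    # Toy usage: reach 10 from 0 with steps +1/+3, bound = distance left.
    print(best_first_branch_and_bound(
        0, bound=lambda m: abs(10 - m),
        expand=lambda m: [m + 1, m + 3] if m < 10 else [],
        is_goal=lambda m: m == 10))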
A Swiss Pocket Knife for Computability
This research is about operational- and complexity-oriented aspects of
classical foundations of computability theory. The approach is to re-examine
some classical theorems and constructions, but with new criteria for success
that are natural from a programming language perspective.
Three cornerstones of computability theory are the S-m-n theorem; Turing's
"universal machine"; and Kleene's second recursion theorem. In today's
programming language parlance these are respectively partial evaluation,
self-interpretation, and reflection. In retrospect it is fascinating that
Kleene's 1938 proof is constructive and in essence builds a self-reproducing
program.
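Kleene's construction can be replayed in a few lines of a modern language.
The following Python sketch (an illustration, not the paper's tree-based
model) first gives a self-reproducing program, then the recursion-theorem
pattern of a program that applies a transformation f (a hypothetical example
here) to its own source text:

    # 1. A program that prints its own source text (self-reproduction):
    s = 's = {!r}\nprint(s.format(s))'
    print(s.format(s))

    # 2. The same trick, now applying a transformation f to the program's
    #    own source; this f is a made-up example, not from the paper.
    def f(src):
        return f"this program is {len(src)} characters long"

    t = 't = {!r}\nprint(f(t.format(t)))'
    print(f(t.format(t)))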
Computability theory originated in the 1930s, long before the invention of
computers and programs. Its emphasis was on delimiting the boundaries of
computability. Some milestones include 1936 (Turing), 1938 (Kleene), 1967
(isomorphism of programming languages), 1985 (partial evaluation), 1989 (theory
implementation), 1993 (efficient self-interpretation) and 2006 (term register
machines).
The "Swiss pocket knife" of the title is a programming language that allows
efficient computer implementation of all three computability cornerstones,
emphasising the third: Kleene's second recursion theorem. We describe
experiments with a tree-based computational model aiming for both fast program
generation and fast execution of the generated programs. (In Proceedings of the
Festschrift for Dave Schmidt, arXiv:1309.455.)