Process Realizability
We develop a notion of realizability for Classical Linear Logic based on a
concurrent process calculus.
Comment: Appeared in Foundations of Secure Computation: Proceedings of the
1999 Marktoberdorf Summer School, F. L. Bauer and R. Steinbruggen, eds. (IOS
Press) 2000, 167-18
Reducibility of Gene Patterns in Ciliates using the Breakpoint Graph
Gene assembly in ciliates is one of the most involved DNA processing events
occurring in any organism. This process transforms one nucleus (the micronucleus) into
another functionally different nucleus (the macronucleus). We continue the
development of the theoretical models of gene assembly, and in particular we
demonstrate the use of the concept of the breakpoint graph, known from another
branch of DNA transformation research. More specifically: (1) we characterize
the intermediate gene patterns that can occur during the transformation of a
given micronuclear gene pattern to its macronuclear form; (2) we determine the
number of applications of the loop recombination operation (the most basic of
the three molecular operations that accomplish gene assembly) needed in this
transformation; (3) we generalize previous results (and give elegant
alternatives for some proofs) concerning characterizations of the micronuclear
gene patterns that can be assembled using a specific subset of the three
molecular operations.
Comment: 30 pages, 13 figures
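The breakpoint graph mentioned above records where adjacencies in an observed gene pattern fail to match the assembled order. As a minimal, hypothetical illustration of the underlying idea, the Python sketch below counts breakpoints in an unsigned permutation (the paper's actual construction operates on signed MDS descriptors and is considerably more involved):

```python
# Hypothetical sketch: counting breakpoints of an unsigned permutation.
# This only illustrates the general breakpoint idea; it is not the paper's
# construction for ciliate gene patterns.

def breakpoints(perm):
    """Count adjacent positions where consecutive elements are not
    consecutive integers, after framing the permutation with 0 and n+1."""
    n = len(perm)
    framed = [0] + list(perm) + [n + 1]
    return sum(1 for a, b in zip(framed, framed[1:]) if b - a != 1)

print(breakpoints([1, 2, 3]))  # 0 (already sorted: no breakpoints)
print(breakpoints([2, 1, 3]))  # 3
```

A sorted pattern has zero breakpoints, and each recombination-style operation can remove only a bounded number of them, which is the kind of accounting that yields lower bounds on the number of operations needed.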
The Power of Choice in Priority Scheduling
Consider the following random process: we are given n queues, into which
elements of increasing labels are inserted uniformly at random. To remove an
element, we pick two queues at random, and remove the element of lower label
(higher priority) among the two. The cost of a removal is the rank of the label
removed, among labels still present in any of the queues, that is, the distance
from the optimal choice at each step. Variants of this strategy are prevalent
in state-of-the-art concurrent priority queue implementations. Nonetheless, it
is not known whether such implementations provide any rank guarantees, even in
a sequential model.
We answer this question, showing that this strategy provides surprisingly
strong guarantees: Although the single-choice process, where we always insert
and remove from a single randomly chosen queue, has degrading cost, going to
infinity as we increase the number of steps, in the two-choice process, the
expected rank of a removed element is O(n), while the expected worst-case
cost is O(n log n). These bounds are tight and hold irrespective of the
number of steps for which we run the process.
The argument is based on a new technical connection between "heavily loaded"
balls-into-bins processes and priority scheduling.
Our analytic results inspire a new concurrent priority queue implementation,
which improves upon the state of the art in terms of practical performance.
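The two-choice removal process described above is straightforward to simulate. The sketch below is illustrative only (not code from the paper); the queue count, the strict insert/remove interleaving, and the rank accounting are simplifying assumptions of our own:

```python
import random

def simulate(n_queues=8, n_ops=1000, seed=0):
    """Average rank of removed elements under the two-choice strategy.

    Labels are inserted in increasing order into uniformly random queues;
    each removal samples two queues and pops the smaller head. Rank 1 is
    the optimal removal (the globally smallest label present).
    """
    rng = random.Random(seed)
    queues = [[] for _ in range(n_queues)]
    next_label = 0
    ranks = []
    for _ in range(n_ops):
        # insert the next label into a uniformly random queue
        queues[rng.randrange(n_queues)].append(next_label)
        next_label += 1
        # two-choice removal: sample two queues, pop the smaller head
        picks = [rng.randrange(n_queues) for _ in range(2)]
        nonempty = [p for p in picks if queues[p]]
        if not nonempty:
            continue
        best = min(nonempty, key=lambda p: queues[p][0])
        label = queues[best].pop(0)
        # rank = 1 + number of smaller labels still present anywhere
        smaller = sum(1 for q in queues for l in q if l < label)
        ranks.append(smaller + 1)
    return sum(ranks) / len(ranks)
```

For a fixed number of queues, the observed average rank stays bounded as the number of operations grows, which is what the O(n) expected-rank guarantee predicts.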
Profile Likelihood Biclustering
Biclustering, the process of simultaneously clustering the rows and columns
of a data matrix, is a popular and effective tool for finding structure in a
high-dimensional dataset. Many biclustering procedures appear to work well in
practice, but most do not have associated consistency guarantees. To address
this shortcoming, we propose a new biclustering procedure based on profile
likelihood. The procedure applies to a broad range of data modalities,
including binary, count, and continuous observations. We prove that the
procedure recovers the true row and column classes when the dimensions of the
data matrix tend to infinity, even if the functional form of the data
distribution is misspecified. The procedure requires a combinatorial
search, which can be expensive in practice. Rather than performing this search
directly, we propose a new optimization procedure based on the
Kernighan-Lin heuristic, which has attractive computational properties and performs
well in simulations. We demonstrate our procedure with applications to
congressional voting records and microarray analysis.
Comment: 40 pages, 11 figures; R package in development at
https://github.com/patperry/biclustp
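To make the alternating local search concrete, here is a hypothetical toy version in Python. It assumes Gaussian data, for which a profile-likelihood criterion reduces to within-block sum of squares, so rows and columns are greedily reassigned to the best-fitting classes. This is a deliberate simplification in the spirit of Kernighan-Lin-style local search, not the authors' implementation:

```python
import numpy as np

# Toy alternating-reassignment biclustering (hypothetical simplification):
# under a Gaussian model, maximizing profile likelihood amounts to
# minimizing within-block sum of squares around block means.

def bicluster(X, k_rows, k_cols, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    r = rng.integers(k_rows, size=X.shape[0])   # row class labels
    c = rng.integers(k_cols, size=X.shape[1])   # column class labels
    for _ in range(n_iter):
        # block means under the current labels
        M = np.zeros((k_rows, k_cols))
        for a in range(k_rows):
            for b in range(k_cols):
                block = X[np.ix_(r == a, c == b)]
                M[a, b] = block.mean() if block.size else 0.0
        # greedily reassign each row to its best-fitting row class
        for i in range(X.shape[0]):
            costs = [((X[i] - M[a, c]) ** 2).sum() for a in range(k_rows)]
            r[i] = int(np.argmin(costs))
        # then each column (block means are refreshed on the next pass)
        for j in range(X.shape[1]):
            costs = [((X[:, j] - M[r, b]) ** 2).sum() for b in range(k_cols)]
            c[j] = int(np.argmin(costs))
    return r, c
```

Like Kernighan-Lin itself, this local search only guarantees convergence to a local optimum, so in practice one would restart it from several random initializations.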
Certifying cost annotations in compilers
We discuss the problem of building a compiler that can lift, in a provably
correct way, information on the execution cost of the object code to
cost annotations on the source code. To this end, we need a clear and flexible
picture of: (i) the meaning of cost annotations, (ii) the method to prove them
sound and precise, and (iii) the way such proofs can be composed. We propose a
so-called labelling approach to these three questions. As a first step, we
examine its application to a toy compiler. This formal study suggests that the
labelling approach has good compositionality and scalability properties. In
order to provide further evidence for this claim, we report our successful
experience in implementing and testing the labelling approach on top of a
prototype compiler written in OCaml for (a large fragment of) the C language.
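The labelling idea can be illustrated with a toy interpreter (a hypothetical sketch, not the paper's compiler): each source statement receives a fresh label, the labels survive into the "object code", and the cost of a run is recovered by counting label emissions and weighting each by a measured per-label cost:

```python
# Hypothetical toy illustration of the labelling approach. A "program" is a
# list of (variable, expression) assignments; compilation attaches a fresh
# label to each statement, and execution tallies label emissions so that
# costs measured per label can be lifted back to source annotations.

from collections import Counter

def compile_program(stmts):
    """Attach a fresh label to each source statement."""
    return [(f"L{i}", stmt) for i, stmt in enumerate(stmts)]

def run(labelled, env, costs):
    """Execute the labelled code, tallying label emissions.

    Returns the final environment and the total cost, computed as the
    number of emissions of each label times that label's unit cost.
    """
    trace = Counter()
    for label, (var, expr) in labelled:
        trace[label] += 1                      # label emission
        env[var] = eval(expr, {}, dict(env))   # toy statement semantics
    total = sum(n * costs.get(l, 1) for l, n in trace.items())
    return env, total

prog = [("x", "1 + 2"), ("y", "x * 10")]
env, cost = run(compile_program(prog), {}, {"L0": 2, "L1": 3})
print(env["y"], cost)  # 30 5
```

The point of the exercise is the soundness invariant: every unit of execution cost on the object code is attributed to exactly one emitted label, so summing per-label costs over the trace reproduces the run's total cost at the source level.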