Type-based complexity analysis for fork processes
We introduce a type system for concurrent programs, described in a parallel imperative language with while-loops and fork/wait instructions in which processes do not share a global memory, in order to analyze their computational complexity. The type system provides a data-flow analysis based both on a data ramification principle related to the tiering discipline and on secure typed languages. The main result states that, under termination, confluence and lock-freedom assumptions, well-typed processes characterize exactly the set of functions computable in polynomial space. More precisely, each process computes in polynomial time, so that the evaluation of a process may be performed in polynomial time on a parallel model of computation. Type inference for the presented analysis is decidable in linear time, provided that the semantics of the basic operators is known.
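The process model the type system targets can be sketched concretely. The following is a minimal Python illustration (POSIX-only, and not the paper's formal language): children are created by fork, share no memory with the parent, and communicate results back only through explicit pipes before the parent waits on them.

```python
import os

def fork_and_wait(values):
    """Fork one child per value; each child squares its input in its own
    address space (no shared memory) and reports back over a pipe."""
    pipes, pids = [], []
    for v in values:
        r, w = os.pipe()
        pid = os.fork()
        if pid == 0:                    # child: isolated copy of memory
            os.close(r)
            os.write(w, str(v * v).encode())
            os.close(w)
            os._exit(0)
        os.close(w)                     # parent keeps only the read end
        pipes.append(r)
        pids.append(pid)
    results = []
    for r, pid in zip(pipes, pids):
        data = os.read(r, 64)
        os.close(r)
        os.waitpid(pid, 0)              # "wait" on each forked process
        results.append(int(data))
    return results
```

Because the children touch only their own copies of the data, any data-flow analysis of this program can treat each child independently, which is the structural property the tiering discipline exploits.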
Faster Mutation Analysis via Equivalence Modulo States
Mutation analysis has many applications, such as assessing the quality of test suites and localizing faults. One important bottleneck of mutation analysis is scalability. Recent work explores reducing redundant execution via split-stream execution. However, split-stream execution can only remove redundant execution before the first mutated statement.
In this paper we also reduce some of the redundant execution after the first mutated statement is executed. We observe that, although many mutated statements are not equivalent to the original, executing a mutated statement may still produce the same result as executing the original statement. In other words, the statements are equivalent modulo the current state.
We propose a fast mutation analysis approach, AccMut. AccMut automatically detects equivalence modulo states among a statement and its mutations, groups the statements into equivalence classes modulo states, and uses only one process to represent each class. In this way, we significantly reduce the number of split processes. Our experiments show that our approach further accelerates mutation analysis on top of split-stream execution, with an average speedup of 2.56x.
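The core idea of equivalence modulo states can be shown in a few lines. The sketch below is illustrative only (the function and representation are not AccMut's actual implementation): two syntactically different statements land in the same class whenever, evaluated in the *current* state, they produce the same value.

```python
def group_mutants_modulo_state(exprs, state):
    """Partition expressions into classes whose values coincide in `state`.

    Statements that differ syntactically but evaluate identically in the
    current state need only one representative process.
    """
    classes = {}
    for expr in exprs:
        value = eval(expr, {}, dict(state))  # evaluate in the current state
        classes.setdefault(value, []).append(expr)
    return list(classes.values())

# With a == b == 2, the mutant "a * b" is not equivalent to "a + b" as a
# statement, but it IS equivalent modulo this state: both evaluate to 4.
state = {"a": 2, "b": 2}
groups = group_mutants_modulo_state(
    ["a + b", "a - b", "a * b", "a // b"], state)
```

Here four mutants collapse into three classes, so only three processes would need to continue execution instead of four.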
Higher levels of process synchronisation
Four new synchronisation primitives (SEMAPHOREs, RESOURCEs, EVENTs and BUCKETs) were introduced in the KRoC 0.8beta release of occam for SPARC (SunOS/Solaris) and Alpha (OSF/1) UNIX workstations [1][2][3]. This paper reports on the rationale, application and implementation of two of these (SEMAPHOREs and EVENTs). Details on the other two may be found on the web [4].
The new primitives are designed to support higher-level mechanisms of SHARING between parallel processes and give us greater powers of expression. They will also let greater levels of concurrency be safely exploited from future parallel architectures, such as those providing (virtual) shared memory. They demonstrate that occam is neutral in any debate between the merits of message-passing versus shared-memory parallelism, enabling applications to take advantage of whichever paradigm (or mixture of paradigms) is the most appropriate.
The new primitives could be (but are not) implemented in terms of traditional channels, but only at the expense of increased complexity and computational overhead. They are immediately useful even for uni-processors - for example, the cost of a fair ALT can be reduced from O(n) to O(1). In fact, all the operations associated with the new primitives have constant space and time complexities, and the constants are very low.
The KRoC release provides an Abstract Data Type interface to the primitives. However, direct use of such mechanisms still allows the user to misuse them. They must be used in the ways prescribed (in this paper and in [4]), else their semantics become unpredictable. No tool is provided to check correct usage at this level. The intention is to bind those primitives found to be useful into higher-level versions of occam. Some of the primitives (e.g. SEMAPHOREs) may never themselves be made visible in the language, but may be used to implement bindings of higher-level paradigms (such as SHARED channels and BLACKBOARDs).
The compiler will perform the relevant usage checking on all new language bindings, closing the security loopholes opened by raw use of the primitives. The paper closes by relating this work to the notions of virtual transputers, microcoded schedulers, object orientation and Java threads.
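The SHARED-channel paradigm that SEMAPHOREs are used to implement can be sketched in Python rather than occam (the names `claim` and `channel` are illustrative, and Python's `queue.Queue` is already thread-safe; the semaphore here models only the CLAIM discipline on the shared writing end):

```python
import queue
import threading

channel = queue.Queue()              # the single shared channel
claim = threading.Semaphore(1)       # guards the shared writing end

def writer(ident, n):
    """Each writer must CLAIM the shared channel end before sending."""
    for i in range(n):
        with claim:                  # CLAIM ... RELEASE bracket the send
            channel.put((ident, i))

threads = [threading.Thread(target=writer, args=(k, 3)) for k in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
messages = [channel.get() for _ in range(channel.qsize())]
```

As in the paper's primitives, the claim/release operations here are O(1) regardless of how many writers share the channel, which is what makes the semaphore cheaper than multiplexing the writers over separate channels with a fair ALT.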
Debugging real-time software in a host-target environment
A common paradigm for the development of process-control or embedded computer software is to do most of the implementation and testing on a large host computer, then retarget the code for final checkout and production execution on the target machine. The host machine is usually large and provides a variety of program development tools, while the target may be a small, bare machine. A difficulty with the paradigm arises when the software developed has real-time constraints and is composed of multiple communicating processes. If a test execution on the target fails, it may be exceptionally tedious to determine the cause of the failure. Host machine debuggers cannot normally be applied, because the same program processing the same data will frequently exhibit different behavior on the host. Differences in processor speed, scheduling algorithm, and the like account for the disparity. This paper proposes a partial solution to this problem, in which the errant execution is reconstructed and made amenable to source-language-level debugging on the host. The solution involves the integrated application of a static concurrency analyzer, an interactive interpreter, and a graphical program visualization aid. Though generally applicable, the solution is described here in the context of multi-tasked real-time Ada programs.
3 tera-basepairs as a fundamental limit for robust DNA replication
In order to maintain functional robustness and species integrity, organisms must ensure high fidelity of the genome duplication process. This is particularly true during early development, where cell division often occurs both rapidly and coherently. By studying the extreme limits of suppressing DNA replication failure due to double fork stall errors, we uncover a fundamental constant that describes a trade-off between genome size and the architectural complexity of the developing organism. This constant has the approximate value N_U ≈ 3×10^12 basepairs and depends only on two highly conserved molecular properties of DNA biology. We show that our theory is successful in interpreting a diverse range of data across the Eukaryota. MAM, LA and TJN acknowledge prior support from the Scottish Universities Life Sciences Alliance. JJB acknowledges support from Cancer Research UK (grant C303/A14301) and the Wellcome Trust (grant WT096598MA). TJN acknowledges prior support from the National Institutes of Health (Physical Sciences in Oncology Centers, U54 CA143682).
A Formal Model For Real-Time Parallel Computation
The imposition of real-time constraints on a parallel computing environment (specifically, high-performance cluster-computing systems) introduces a variety of challenges with respect to the formal verification of the system's timing properties. In this paper, we briefly motivate the need for such a system, and we introduce an automaton-based method for performing such formal verification. We define the concept of a consistent parallel timing system: a hybrid system consisting of a set of timed automata (specifically, timed Büchi automata as well as a timed variant of standard finite automata), intended to model the timing properties of a well-behaved real-time parallel system. Finally, we give a brief case study to demonstrate the concepts in the paper: a parallel matrix multiplication kernel which operates within provable upper time bounds. We give the algorithm used, a corresponding consistent parallel timing system, and empirical results showing that the system operates under the specified timing constraints.
Comment: In Proceedings FTSCS 2012, arXiv:1212.657
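The flavour of the case study can be sketched in Python (this is not the paper's kernel; the worker count and the deadline below are illustrative assumptions, and where the paper *proves* the upper time bound, this sketch merely observes it empirically):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def mat_mul_parallel(A, B, workers=4, deadline_s=5.0):
    """Row-parallel matrix multiply, checked against a wall-clock bound."""
    n, m, p = len(A), len(B), len(B[0])
    Bt = [[B[r][c] for r in range(m)] for c in range(p)]  # transpose B

    def row(i):
        # One task per result row; tasks are independent, so they may
        # run on separate workers without synchronisation.
        return [sum(A[i][k] * Bt[j][k] for k in range(m)) for j in range(p)]

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as ex:
        C = list(ex.map(row, range(n)))
    elapsed = time.monotonic() - start
    # Empirical stand-in for the provable upper time bound.
    assert elapsed <= deadline_s, "timing constraint violated"
    return C
```

A consistent parallel timing system would model each `row` task as a timed automaton and verify that every interleaving of the workers meets the deadline, rather than checking a single observed run.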
"Breaking up is hard to do": the formation and resolution of sister chromatid intertwines
The absolute necessity to resolve every intertwine between the two strands of the DNA double helix provides a massive challenge to the cellular processes that duplicate and segregate chromosomes. Although the overwhelming majority of intertwines between the parental DNA strands are resolved during DNA replication, there are numerous chromosomal contexts where some intertwining is maintained into mitosis. These mitotic sister chromatid intertwines (SCIs) can be found as: short regions of unreplicated DNA; fully replicated and intertwined sister chromatids (commonly referred to as DNA catenation); and sister chromatid linkages generated by homologous recombination-associated processes. Several overlapping mechanisms, including intra-chromosomal compaction, topoisomerase action and Holliday junction resolvases, ensure that all SCIs are removed before they can prevent normal chromosome segregation. Here, I discuss why some DNA intertwines persist into mitosis and review our current knowledge of the SCI resolution mechanisms that are employed in both prokaryotes and eukaryotes, including how deregulating SCI formation during DNA replication or disrupting the resolution processes may contribute to aneuploidy in cancer.
- …