Parallel scheduling of recursively defined arrays
A new method of automatic generation of concurrent programs which construct arrays defined by sets of recursive equations is described. It is assumed that the time of computation of an array element is a linear combination of its indices, and integer programming is used to seek a succession of hyperplanes along which array elements can be computed concurrently. The method can be used to schedule equations involving variable length dependency vectors and mutually recursive arrays. Portions of the work reported here have been implemented in the PS automatic program generation system
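The hyperplane search at the heart of the method can be sketched as follows. This is our own minimal illustration, with brute-force enumeration of small time vectors standing in for the paper's integer programming step; all names are ours:

```python
from itertools import product

def find_schedule(deps, bound=3):
    """Search small integer time vectors t such that t . d >= 1 for
    every dependency vector d.  All elements on a hyperplane
    t . x = const are then mutually independent, so each hyperplane's
    elements can be computed concurrently.  (Brute-force enumeration
    stands in for the paper's integer programming step.)"""
    dim = len(deps[0])
    for t in product(range(-bound, bound + 1), repeat=dim):
        if any(t) and all(sum(ti * di for ti, di in zip(t, d)) >= 1
                          for d in deps):
            return t
    return None

# Dependencies of the 2-D recurrence a[i][j] = f(a[i-1][j], a[i][j-1]):
print(find_schedule([(1, 0), (0, 1)]))   # (1, 1): sweep anti-diagonals
```

With t = (1, 1), all elements with i + j = c lie on one hyperplane and depend only on elements of earlier hyperplanes, so each anti-diagonal of the array can be filled in parallel.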
Rediflow Multiprocessing
We discuss the concepts underlying Rediflow, a multiprocessing system being designed to support concurrent programming through a hybrid model of reduction, dataflow, and von Neumann processes. The techniques of automatic load-balancing in Rediflow are described in some detail
ThreadScan: Automatic and Scalable Memory Reclamation
The concurrent memory reclamation problem is that of devising a way for a deallocating thread to verify that no other concurrent threads hold references to a memory block being deallocated. To date, in the absence of automatic garbage collection, there is no satisfactory solution to this problem. Existing tracking methods like hazard pointers, reference counters, or epoch-based techniques like RCU, are either prohibitively expensive or require significant programming expertise, to the extent that implementing them efficiently can be worthy of a publication. None of the existing techniques are automatic or even semi-automated. In this paper, we take a new approach to concurrent memory reclamation: instead of manually tracking access to memory locations as done in techniques like hazard pointers, or restricting shared accesses to specific epoch boundaries as in RCU, our algorithm, called ThreadScan, leverages operating system signaling to automatically detect which memory locations are being accessed by concurrent threads. Initial empirical evidence shows that ThreadScan scales surprisingly well and requires negligible programming effort beyond the standard use of Malloc and Free
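For contrast, the kind of manual tracking ThreadScan aims to automate can be sketched as a toy hazard-pointer table. This is our own simplified illustration, with a global lock replacing the lock-free machinery real implementations need; it is not ThreadScan's signal-based mechanism:

```python
import threading

hazards = {}                    # thread id -> node that thread is reading
table_lock = threading.Lock()   # global lock; real versions are lock-free

def protect(node):
    """Announce that the calling thread is about to read `node`."""
    with table_lock:
        hazards[threading.get_ident()] = node

def release():
    """Withdraw the calling thread's announcement."""
    with table_lock:
        hazards.pop(threading.get_ident(), None)

def try_free(node, freed):
    """Free `node` only if no thread currently announces it."""
    with table_lock:
        if node in hazards.values():
            return False    # defer reclamation: a reader still holds it
    freed.append(node)
    return True
```

The burden ThreadScan removes is precisely this discipline: every reader must bracket every access with protect/release calls. The paper's contribution is detecting such accesses automatically via operating system signaling instead.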
Flow Java: Declarative Concurrency for Java
This thesis presents the design, implementation, and evaluation of
Flow Java, a programming language for the implementation of concurrent
programs. Flow Java adds powerful programming abstractions for
automatic synchronization of concurrent programs to Java. The
abstractions added are single assignment variables (logic variables)
and futures (read-only views of logic variables).
The added abstractions conservatively extend Java with respect to
types, parameter passing, and concurrency. Futures support secure
concurrent abstractions and are essential for seamless integration of
single assignment variables into Java. These abstractions allow for
simple and concise implementation of high-level concurrent programming
abstractions.
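The behaviour of these two abstractions can be illustrated with a small sketch. The following is our own Python analogue (Flow Java itself extends Java), not the thesis's implementation:

```python
import threading

class FlowVar:
    """Sketch of a single-assignment (logic) variable: bind() succeeds
    at most once, get() blocks until a value exists, so threads
    synchronize implicitly through the dataflow."""
    def __init__(self):
        self._bound = threading.Event()
        self._lock = threading.Lock()
        self._value = None

    def bind(self, value):
        with self._lock:
            if self._bound.is_set():
                raise RuntimeError("single-assignment variable already bound")
            self._value = value
            self._bound.set()

    def get(self):
        self._bound.wait()        # block until some thread binds
        return self._value

class Future:
    """Read-only view of a FlowVar: consumers can wait but not bind."""
    def __init__(self, var):
        self._get = var.get
    def get(self):
        return self._get()

# The consumer blocks on the future until the producer binds the variable.
v = FlowVar()
f = Future(v)
out = []
consumer = threading.Thread(target=lambda: out.append(f.get()))
consumer.start()
v.bind(42)
consumer.join()
print(out)   # [42]; a second bind() would raise RuntimeError
```

Passing the future rather than the variable is what makes the abstraction secure: the consumer can observe the value once it exists but can never bind it.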
Flow Java is implemented as a moderate extension to the
GNU gcj/libjava Java compiler and runtime environment. The
extension is not specific to a particular implementation; it could
easily be incorporated into other Java implementations.
The thesis presents three implementation strategies for single
assignment variables. One strategy uses forwarding and dereferencing
while the two others are variants of Taylor's scheme. Taylor's scheme
represents logic variables as a circular list. The thesis presents a
new adaptation of Taylor's scheme to a concurrent language using
operating system threads.
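A rough, sequential idea of the first strategy, forwarding and dereferencing, can be given in a few lines (we do not attempt Taylor's circular-list representation or its concurrent adaptation here; names and details are our own):

```python
class Cell:
    """A logic variable under the forwarding representation: an unbound
    cell refers to itself; binding makes it forward to a value or to
    another cell, and dereferencing follows the forwarding chain."""
    def __init__(self):
        self.ref = self           # unbound: self-reference

def deref(c):
    """Follow forwarding pointers until a value or an unbound cell."""
    while isinstance(c, Cell) and c.ref is not c:
        c = c.ref
    return c

def bind(c, target):
    c = deref(c)
    if not isinstance(c, Cell):
        raise RuntimeError("already bound to a value")
    c.ref = target

x, y = Cell(), Cell()
bind(x, y)                # variable-variable binding: x forwards to y
bind(y, 7)                # now both dereference to 7
print(deref(x), deref(y))   # 7 7
```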
The Flow Java system is evaluated using standard Java
benchmarks. Evaluation shows that in most cases the overhead incurred
by the extensions is between 10% and 50%. For some pathological
cases the runtime increases by up to 150%. Concurrent programs making
use of Flow Java's automatic synchronization generally perform as
well as corresponding Java programs. In some cases Flow Java programs
outperform Java programs by as much as 33%
Synthesis of Parametric Programs using Genetic Programming and Model Checking
Formal methods apply algorithms based on mathematical principles to enhance
the reliability of systems. It would only be natural to try to progress from
verification, model checking or testing a system against its formal
specification to constructing it automatically. Classical algorithmic
synthesis theory provides interesting algorithms but also alarmingly high
complexity and undecidability results. The use of genetic programming, in
combination with model checking and testing, provides a powerful heuristic
for synthesizing programs. The method is not completely automatic, as it is
fine-tuned by a user who sets up the specification and parameters. It is also
not guaranteed to succeed or to converge towards a solution that satisfies all
the required properties. However, we applied it successfully on quite
nontrivial examples and managed to find solutions to hard programming
challenges, as well as to improve and to correct code. We describe here several
versions of our method for synthesizing sequential and concurrent systems. Comment: In Proceedings INFINITY 2013, arXiv:1402.661
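The overall loop (generate candidates, score them against the specification, keep and mutate the fittest) can be sketched on a deliberately tiny problem. Everything below, from the target coefficients to the mutation parameters, is our illustrative choice, and an exhaustive property check stands in for model checking:

```python
import random

SPEC = [(0, 1), (1, 3), (2, 5)]   # required input/output pairs for f

def fitness(cand):
    """Number of specification properties a candidate f(x) = a*x + b
    satisfies; this check plays the role of the verification step."""
    a, b = cand
    return sum(1 for x, y in SPEC if a * x + b == y)

def mutate(cand, rng):
    if rng.random() < 0.3:                     # occasional fresh restart
        return (rng.randint(-3, 3), rng.randint(-3, 3))
    a, b = cand
    if rng.random() < 0.5:
        return (a + rng.choice((-1, 1)), b)
    return (a, b + rng.choice((-1, 1)))

def synthesize(generations=1000, seed=0):
    rng = random.Random(seed)
    pop = [(rng.randint(-3, 3), rng.randint(-3, 3)) for _ in range(10)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(SPEC):
            return pop[0]                      # all properties verified
        pop = pop[:5] + [mutate(rng.choice(pop[:5]), rng) for _ in range(5)]
    return pop[0]

best = synthesize()
print(best)   # the only candidate satisfying all of SPEC is (2, 1)
```

As the abstract warns, convergence is heuristic rather than guaranteed; here the search space is small enough that the loop reliably recovers f(x) = 2x + 1.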
The CIAO multiparadigm compiler and system: A progress report
Abstract is not available
The CIAO Multi-Dialect Compiler and System: An Experimentation Workbench for Future (C)LP Systems
CIAO is an advanced programming environment supporting Logic and Constraint programming. It offers a simple concurrent kernel on top of which declarative and non-declarative extensions are added via libraries. Libraries are available for supporting the ISO Prolog standard, several constraint domains, functional and higher order programming, concurrent and distributed programming, internet programming, and others. The source language allows declaring properties of predicates via assertions, including types and modes. Such properties are checked at compile-time or at run-time. The compiler and system architecture are designed to natively support modular global analysis, with the two objectives of proving properties in assertions and performing program optimizations, including transparently exploiting parallelism in programs. The purpose of this paper is to report on recent progress made in the context of the CIAO system, with special emphasis on the capabilities of the compiler, the techniques used for supporting such capabilities, and the results in the areas of program analysis and transformation already obtained with the system