Implementing atomic actions in Ada 95
Atomic actions are an important dynamic structuring technique that aids the construction of fault-tolerant concurrent systems. Although they were developed some years ago, none of the well-known commercially-available programming languages directly support their use. This paper summarizes software fault tolerance techniques for concurrent systems, evaluates the Ada 95 programming language from the perspective of its support for software fault tolerance, and shows how Ada 95 can be used to implement software fault tolerance techniques. In particular, it shows how packages, protected objects, requeue, exceptions, asynchronous transfer of control, tagged types, and controlled types can be used as building blocks from which to construct atomic actions with forward and backward error recovery, which are resilient to deserter tasks and task abortion.
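The building blocks named above are Ada-specific, but the backward-error-recovery pattern the abstract refers to (a recovery block: checkpoint the state, try alternates in order, accept the first result that passes an acceptance test) can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's implementation; all names here are invented for the example:

```python
import copy

class RecoveryBlock:
    """Backward error recovery sketch: checkpoint state, try each
    alternate in turn, and commit the first acceptable result."""
    def __init__(self, acceptance_test):
        self.acceptance_test = acceptance_test

    def run(self, state, alternates):
        for attempt in alternates:
            # Establish a recovery point: alternates work on a copy,
            # so a failed attempt cannot corrupt the committed state.
            checkpoint = copy.deepcopy(state)
            try:
                result = attempt(checkpoint)
                if self.acceptance_test(result):
                    return result  # acceptance test passed: commit
            except Exception:
                pass  # discard the checkpoint, roll back, try the next alternate
        raise RuntimeError("all alternates failed")

# Hypothetical usage: the primary raises on y == 0, the alternate is a fallback.
rb = RecoveryBlock(lambda r: r >= 0)
def primary(s):   return s["x"] // s["y"]
def alternate(s): return 0
```

An atomic action adds multi-task entry/exit coordination on top of this; the sketch shows only the per-action recovery discipline.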
Logic programming in the context of multiparadigm programming: the Oz experience
Oz is a multiparadigm language that supports logic programming as one of its
major paradigms. A multiparadigm language is designed to support different
programming paradigms (logic, functional, constraint, object-oriented,
sequential, concurrent, etc.) with equal ease. This article has two goals: to
give a tutorial of logic programming in Oz and to show how logic programming
fits naturally into the wider context of multiparadigm programming. Our
experience shows that there are two classes of problems, which we call
algorithmic and search problems, for which logic programming can help formulate
practical solutions. Algorithmic problems have known efficient algorithms.
Search problems do not have known efficient algorithms but can be solved with
search. The Oz support for logic programming targets these two problem classes
specifically, using the concepts needed for each. This is in contrast to the
Prolog approach, which targets both classes with one set of concepts, which
results in less than optimal support for each class. To explain the essential
difference between algorithmic and search programs, we define the Oz execution
model. This model subsumes both concurrent logic programming
(committed-choice-style) and search-based logic programming (Prolog-style).
Instead of Horn clause syntax, Oz has a simple, fully compositional,
higher-order syntax that accommodates the abilities of the language. We
conclude with lessons learned from this work, a brief history of Oz, and many
entry points into the Oz literature.
Comment: 48 pages, to appear in the journal "Theory and Practice of Logic Programming".
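The algorithmic/search distinction drawn above hinges on what a search problem needs: choice points, constraint checks, and backtracking on failure. A minimal sketch of that execution style in Python (not Oz syntax; the solver and constraint names are invented for illustration) might look like:

```python
def backtrack(variables, domains, constraints, assignment=None):
    """Naive backtracking search: bind one variable per choice point,
    prune with the constraints, undo the binding on failure."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)          # all variables bound: a solution
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value          # choice point
        if all(ok(assignment) for ok in constraints):
            solution = backtrack(variables, domains, constraints, assignment)
            if solution is not None:
                return solution
        del assignment[var]              # backtrack: undo the binding
    return None

# Constraints are written to hold trivially on partial assignments.
sum_is_5 = lambda a: "X" not in a or "Y" not in a or a["X"] + a["Y"] == 5
x_lt_y   = lambda a: "X" not in a or "Y" not in a or a["X"] < a["Y"]

solution = backtrack(["X", "Y"], {"X": range(6), "Y": range(6)},
                     [sum_is_5, x_lt_y])
# → {'X': 0, 'Y': 5}
```

An algorithmic problem, by contrast, would be coded as a deterministic function with no choice points at all, which is the separation the Oz design makes explicit.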
Out-Of-Place debugging: a debugging architecture to reduce debugging interference
Context. Recent studies show that developers spend most of their programming
time testing, verifying and debugging software. As applications become more and
more complex, developers demand more advanced debugging support to ease the
software development process.
Inquiry. Since the 1970s, many debugging solutions have been introduced. Amongst
them, online debuggers provide good insight into the conditions that led to a
bug, allowing inspection of and interaction with the variables of the program.
However, most online debugging solutions introduce debugging interference
into the execution of the program, i.e. pauses, latency, and evaluation of
code containing side effects.
Approach. This paper investigates a novel debugging technique called
out-of-place debugging. The goal is to minimize the debugging interference
characteristic of online debugging while retaining online remote capabilities.
An out-of-place debugger transfers the program execution and application state
from the debugged application to the debugger application, each running in a
different process.
Knowledge. On the one hand, out-of-place debugging allows developers to debug
applications remotely, without needing physical access to the machine where
the debugged application is running. On the other hand, debugging happens
locally on the developer's machine, avoiding latency. This makes the approach
suitable for deployment on a distributed system, handling the debugging of
several processes running in parallel.
Grounding. We implemented a concrete out-of-place debugger for the Pharo
Smalltalk programming language. We show that our approach is practical by
performing several benchmarks, comparing our approach with a classic remote
online debugger. Our prototype debugger outperforms a traditional remote
debugger by up to 1000 times in several scenarios. Moreover, we show that
the presence of our debugger does not impact the overall performance of an
application.
Importance. This work combines remote debugging with the debugging experience
of a local online debugger. Out-of-place debugging is the first online
debugging technique that can minimize debugging interference while debugging a
remote application, yet it still keeps the benefits of online debugging (e.g.
step-by-step execution). This makes the technique suitable for modern
applications, which are increasingly parallel, distributed, and reactive to
streams of data from various sources such as sensors, UI, and the network.
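The core move of out-of-place debugging is extracting the failing execution's state so it can be inspected in a separate debugger process instead of pausing the application in place. A minimal Python sketch of that state capture (the real system works on Pharo Smalltalk execution contexts; the function and field names here are invented, and serialization stands in for the network transfer):

```python
import pickle
import sys

def capture_debug_session():
    """On an exception, snapshot the innermost frame's state so it can be
    shipped to a debugger running in a different process. Here the 'wire'
    is just a pickle; a real system would send the bytes over a socket."""
    _, exc, tb = sys.exc_info()
    while tb.tb_next:                    # walk to the innermost frame
        tb = tb.tb_next
    snapshot = {
        "exception": repr(exc),
        "function": tb.tb_frame.f_code.co_name,
        # repr() the locals so the snapshot stays serializable
        "locals": {k: repr(v) for k, v in tb.tb_frame.f_locals.items()},
    }
    return pickle.dumps(snapshot)

def faulty(x):
    y = x - 1
    return 10 // y                       # raises ZeroDivisionError for x == 1

try:
    faulty(1)
except ZeroDivisionError:
    wire_bytes = capture_debug_session()

# Debugger side: inspect the transferred state, out of place, while the
# original application could have carried on running.
session = pickle.loads(wire_bytes)
```

The interference reduction comes from the debugged process only paying for the capture and transfer, not for an interactive pause.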
Concurrent object-oriented programming: The MP-Eiffel approach
This article evaluates several possible approaches for integrating concurrency into
object-oriented programming languages, and then presents a new language named
MP-Eiffel. MP-Eiffel was designed to include all the essential properties
of both concurrent and object-oriented programming with simplicity and safety.
Special care was taken to achieve orthogonality of all the language mechanisms,
allowing their joint use without unsafe side effects (such as inheritance anomalies).
Multiparty interactions in dependable distributed systems
PhD thesis.
With the expansion of computer networks, activities involving computer communication
are becoming more and more distributed. Such distribution can
include processing, control, data, network management, and security. Although
distribution can improve the reliability of a system by replicating
components, an increase in distribution can also introduce undesirable
faults. To reduce the risk of introducing faults, and to improve the chances
of removing and tolerating them when distributing applications, it is important
that distributed systems are implemented in an organized way.
As in sequential programming, complexity in distributed, in particular
parallel, program development can be managed by providing appropriate
programming language constructs. Language constructs can help both by
supporting encapsulation so as to prevent unwanted interactions between
program components and by providing higher-level abstractions that reduce
programmer effort by allowing compilers to handle mundane, error-prone
aspects of parallel program implementation.
A language construct that supports encapsulation of interactions between
multiple parties (objects or processes) is referred to in the literature as a multiparty
interaction. In a multiparty interaction, several parties somehow "come
together" to produce an intermediate and temporary combined state, use this
state to execute some activity, and then leave the interaction and continue
their normal execution.
There has been considerable work on multiparty interaction in recent years,
but most of it has been concerned with synchronisation, or handshaking,
between parties rather than the encapsulation of several activities executed
in parallel by the interaction participants. The programmer is therefore left
responsible for ensuring that the processes involved in a cooperative activity
do not interfere with, or suffer interference from, other processes not involved
in the activity.
Furthermore, none of this work has discussed the provision of features
that would facilitate the design of multiparty interactions that are expected
to cope with faults - whether in the environment that the computer system
has to deal with, in the operation of the underlying computer hardware or
software, or in the design of the processes that are involved in the interaction.
In this thesis the concept of multiparty interaction is integrated with
the concept of exception handling in concurrent activities. The final result
is a language in which the concept of multiparty interaction is extended
by providing it with a mechanism to handle concurrent exceptions. This
extended concept is called dependable multiparty interaction.
The features and requirements for multiparty interaction and exception
handling provided in a set of languages surveyed in this thesis are integrated
to describe the new dependable multiparty interaction construct. Additionally,
object-oriented architectures for dependable multiparty interactions are
described, and a full implementation of one of the architectures is provided.
This implementation is then applied to a set of case studies. The case studies
show how dependable multiparty interactions can be used to design and
implement a safety-critical system, a multiparty programming abstraction,
and a parallel computation model.
Supported by the Brazilian Research Agency CNPq.
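The "come together, build a combined temporary state, execute, then leave" shape of a multiparty interaction described above can be sketched with threads and a barrier in Python. This is a bare synchronization skeleton under assumed simplifications (no exception resolution, so not yet the *dependable* variant the thesis develops; all names are invented for the example):

```python
import threading

class MultipartyInteraction:
    """N parties enter, contribute to a combined temporary state, one of
    them runs the joint activity over it, and all leave together."""
    def __init__(self, n_parties, activity):
        self.activity = activity
        self.lock = threading.Lock()
        self.contributions = {}
        self.barrier = threading.Barrier(n_parties)
        self.result = None

    def participate(self, party_id, contribution):
        with self.lock:
            self.contributions[party_id] = contribution
        self.barrier.wait()                  # "come together": all parties entered
        with self.lock:
            if self.result is None:          # exactly one party runs the activity
                self.result = self.activity(dict(self.contributions))
        self.barrier.wait()                  # all parties leave the interaction together
        return self.result                   # then resume normal execution

# Hypothetical usage: three parties pool numbers, the activity sums them.
mi = MultipartyInteraction(3, lambda c: sum(c.values()))
results = {}
def worker(i):
    results[i] = mi.participate(i, i + 1)
threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```

The two barrier waits are what give the interaction its encapsulation in time: no party observes the combined state before everyone has entered, and none resumes before the activity completes. A dependable version would additionally propagate a concurrent exception to every participant at those synchronization points.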
Ada (trademark) projects at NASA. Runtime environment issues and recommendations
Ada practitioners should use this document to discuss and establish common short term requirements for Ada runtime environments. The major current Ada runtime environment issues are identified through the analysis of some of the Ada efforts at NASA and other research centers. The runtime environment characteristics of major compilers are compared while alternate runtime implementations are reviewed. Modifications and extensions to the Ada Language Reference Manual to address some of these runtime issues are proposed. Three classes of projects focusing on the most critical runtime features of Ada are recommended, including a range of immediately feasible full scale Ada development projects. Also, a list of runtime features and procurement issues is proposed for consideration by the vendors, contractors and the government.
Using mobility and exception handling to achieve mobile agents that survive server crash failures
Mobile agent technology, when designed and used effectively, can minimize bandwidth consumption and autonomously provide a snapshot of the current context of a distributed system. Protecting mobile agents from server crashes is a challenging issue, since developers normally have no control over remote servers. Server crash failures can leave replicas, in stable storage, unavailable for an unknown time period. Furthermore, few systems have considered the need for using a fault tolerant protocol among a group of collaborating mobile agents. This thesis uses exception handling to protect mobile agents from server crash failures. An exception model is proposed for mobile agents and two exception handler designs are investigated. The first exists at the server that created the mobile agent and uses a timeout mechanism. The second, the mobile shadow scheme, migrates with the mobile agent and operates at the previous server visited by the mobile agent. A case study application has been developed to compare the performance of the two exception handler designs. Performance results demonstrate that although the second design is slower, it offers a smaller trip time when handling a server crash. Furthermore, no modification of the server environment is necessary. This thesis shows that the mobile shadow exception handling scheme reduces complexity for a group of mobile agents to survive server crashes. The scheme deploys a replica that monitors the server occupied by the master, at each stage of the itinerary. The replica exists at the previous server visited in the itinerary. Consequently, each group member is a single fault tolerant entity with respect to server crash failures. Other schemes introduce greater complexity and performance overheads since, for each stage of the itinerary, a group of replicas is sent to servers that offer an equivalent service. In addition, future research is established for fault tolerance in groups of collaborating mobile agents.
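The first handler design above, detecting a server crash by timeout at the agent's origin server, reduces to a heartbeat deadline. A minimal Python sketch of that mechanism (not the thesis's implementation; the class, callback, and timing values are invented for illustration):

```python
import threading
import time

class AgentMonitor:
    """Timeout-based exception handler sketch: the origin server expects
    periodic heartbeats from the mobile agent; silence longer than
    `timeout` seconds is treated as a server crash, and a handler runs
    (e.g. to re-dispatch a replica of the agent)."""
    def __init__(self, timeout, on_crash):
        self.timeout = timeout
        self.on_crash = on_crash
        self.last_heartbeat = time.monotonic()
        self.lock = threading.Lock()

    def heartbeat(self):
        """Called whenever the agent reports in from a remote server."""
        with self.lock:
            self.last_heartbeat = time.monotonic()

    def check(self):
        """Called periodically at the origin server."""
        with self.lock:
            silent_for = time.monotonic() - self.last_heartbeat
        if silent_for > self.timeout:
            self.on_crash()      # raise/handle the crash exception
            return False
        return True
```

The mobile shadow scheme replaces this centralized deadline with a replica one hop behind the agent doing the same monitoring, which is what removes the need to modify the origin server's environment for each trip.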
HaTS: Hardware-Assisted Transaction Scheduler
In this paper we present HaTS, a Hardware-assisted Transaction Scheduler. HaTS improves performance of concurrent applications by classifying the executions of their atomic blocks (or in-memory transactions) into scheduling queues, according to their so-called conflict indicators. The goal is to group those transactions that are conflicting while letting non-conflicting transactions proceed in parallel. Two core innovations characterize HaTS. First, HaTS does not assume the availability of precise information associated with incoming transactions in order to proceed with the classification. It relaxes this assumption by exploiting the inherent conflict resolution provided by Hardware Transactional Memory (HTM). Second, HaTS dynamically adjusts the number of scheduling queues in order to capture the actual application contention level. Performance results using the STAMP benchmark suite show up to 2x improvement over state-of-the-art HTM-based scheduling techniques.
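The queue-classification idea can be sketched independently of the HTM machinery: map each transaction's conflict indicator to a queue, so transactions sharing an indicator serialize behind each other while others proceed in parallel, and allow the queue count to be resized as contention changes. A minimal Python sketch (the paper's scheduler operates on hardware transactions; the class and method names here are invented for illustration):

```python
class TransactionScheduler:
    """Sketch of conflict-indicator scheduling queues: transactions with
    the same indicator land in the same queue (run serially); different
    queues can be drained by different threads in parallel."""
    def __init__(self, n_queues):
        self.n = n_queues
        self.queues = [[] for _ in range(n_queues)]

    def schedule(self, tx_id, indicator):
        q = hash(indicator) % self.n      # same indicator -> same queue
        self.queues[q].append((tx_id, indicator))
        return q

    def resize(self, n_queues):
        """Adjust the queue count to the observed contention level,
        rehashing any pending transactions into the new queues."""
        pending = [tx for queue in self.queues for tx in queue]
        self.n = n_queues
        self.queues = [[] for _ in range(n_queues)]
        for tx_id, indicator in pending:
            self.schedule(tx_id, indicator)
```

HaTS's first innovation is precisely that the indicator need not be precise: misclassified (actually conflicting) transactions placed in different queues are still caught and retried by the HTM conflict detection, so the queues only need to be a good heuristic.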