Improving Reuse of Distributed Transaction Software with Transaction-Aware Aspects
Implementing crosscutting concerns for transactions is difficult, even using Aspect-Oriented Programming Languages (AOPLs) such as AspectJ. Many of these challenges arise because the context of a transaction-related crosscutting concern consists of loosely coupled abstractions like dynamically generated identifiers, timestamps, and tentative value sets of distributed resources. Current AOPLs do not provide joinpoints and pointcuts for weaving advice into high-level abstractions or contexts, such as transaction contexts. Other challenges stem from the essential complexity in the nature of the data, the operations on the data, or the volume of data, while accidental complexity comes from the way the problem is being solved, even with common transaction frameworks. This dissertation describes an extension to AspectJ, called TransJ, with which developers can implement transaction-related crosscutting concerns in cohesive and loosely coupled aspects. It also presents a preliminary experiment that provides evidence of improved reusability without sacrificing the performance of applications requiring essential transactions. This empirical study uses an extended quality model for transactional applications to define measurements on transaction software systems. The quality model defines three goals: the first relates to code quality (in terms of its reusability); the second to software performance; and the third to software development efficiency. Results from this study show that TransJ can improve reusability while maintaining the performance of applications requiring transactions, across all eight areas addressed by the hypotheses: better encapsulation and separation of concerns; looser coupling; higher cohesion; less tangling; improved obliviousness; preserved software efficiency; improved extensibility; and a faster development process.
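The kind of crosscutting concern TransJ targets can be illustrated, in miniature, with a Python decorator standing in for woven advice (TransJ itself extends AspectJ; all names below are hypothetical):

```python
import functools
import time

TX_LOG = []   # where the woven 'advice' records per-transaction timings

def transaction_timed(fn):
    """Stand-in for an aspect: advice wrapped around an operation that runs
    in a transaction context, keyed by a dynamically generated transaction
    id. The crosscutting timing concern stays out of the body of `fn`."""
    @functools.wraps(fn)
    def wrapper(tx_id, *args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(tx_id, *args, **kwargs)
        finally:
            TX_LOG.append((tx_id, fn.__name__, time.perf_counter() - start))
    return wrapper

@transaction_timed
def transfer(tx_id, src, dst, amount):
    # Hypothetical business operation, oblivious to the timing aspect.
    return {"from": src, "to": dst, "amount": amount}

result = transfer("tx-42", "acct-a", "acct-b", 100)
```

The business code stays oblivious to the concern, which is the property the dissertation measures; TransJ's contribution is making the *transaction context* itself (not just method calls) available as a joinpoint.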
Predictive Monitoring against Pattern Regular Languages
In this paper, we focus on the problem of dynamically analysing concurrent
software against high-level temporal specifications. Existing techniques for
runtime monitoring against such specifications are primarily designed for
sequential software and remain inadequate in the presence of concurrency --
violations may be observed only in intricate thread interleavings, requiring
many re-runs of the underlying software. Towards this, we study the problem of
predictive runtime monitoring, inspired by the analogous problem of predictive
data race detection, which has been studied extensively in recent years. The
predictive runtime monitoring question asks, given an execution, whether it can
be soundly reordered to expose violations of a specification.
In this paper, we focus on specifications that are given in regular
languages. Our notion of reorderings is trace equivalence, where an execution
is considered a reordering of another if it can be obtained from the latter by
successively commuting adjacent independent actions. We first show that the
predictive monitoring problem admits a super-linear lower bound in the number
of events in the execution and in a parameter describing the degree of
commutativity. As a result, predictive runtime monitoring even in this setting
is unlikely to be efficiently solvable.
Towards this, we identify a sub-class of regular languages, called pattern
languages (and their extension generalized pattern languages). Pattern
languages can naturally express specific orderings of some number of (labelled)
events, and are inspired by a popular empirical hypothesis, the `small bug
depth' hypothesis. More importantly, we show that for pattern (and generalized
pattern) languages, the predictive monitoring problem can be solved by a
constant-space, linear-time streaming algorithm.
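The constant-space streaming idea can be sketched in a few lines of Python, assuming a toy pattern language in which a pattern is a sequence of labelled events that must occur in order (the labels below are hypothetical):

```python
def make_pattern_monitor(pattern):
    """Streaming monitor using O(1) state: a single index into `pattern`.

    Reports True once the pattern's events have been observed in order
    (as a subsequence of the event stream)."""
    state = 0  # next pattern position we are waiting to match
    def observe(event):
        nonlocal state
        if state < len(pattern) and event == pattern[state]:
            state += 1
        return state == len(pattern)
    return observe

# Hypothetical specification: 'acquire', then 'read', then 'release', in order.
monitor = make_pattern_monitor(["acquire", "read", "release"])
matched = False
for ev in ["read", "acquire", "write", "read", "release"]:
    matched = monitor(ev) or matched
# matched is True: the stream contains acquire, read, release in order
```

The paper's contribution is showing that this kind of constant-space monitoring survives the *predictive* setting, where soundly reorderable executions must also be accounted for; this sketch only covers the observed stream itself.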
Reliable Coordination of Data Services Based on Active Policies
We propose an approach for adding non-functional properties (exception handling, atomicity, security, persistence) to service coordinations. The approach is based on an Active Policy Model (AP Model) for representing service coordinations with non-functional properties as a collection of types. In our model, a service coordination is represented as a workflow composed of an ordered set of activities, each activity in charge of implementing a call to a service's operation. We use the type Activity for representing a workflow and its components (i.e., the workflow's activities and the order among them). A non-functional property is represented as one or several Active Policy types, each policy composed of a set of event-condition-action rules in charge of implementing one aspect of the property. Instances of active policy and activity types are considered in the model as entities that can be executed. We use the Execution Unit type for representing them as entities that go through a series of states at runtime. When an active policy is associated with one or several execution units, its rules verify whether each unit respects the implemented non-functional property by evaluating their conditions over the unit's execution state, and when the property is not verified, the rules execute their actions for enforcing the property at runtime. We also proposed a proof-of-concept Active Policy Execution Engine for executing an active-policy-oriented workflow modelled using our AP Model. The engine implements an execution model that determines how AP, Rule and Activity instances interact with each other for adding non-functional properties (NFPs) to a workflow at execution time. We validated the AP Model and the Active Policy Execution Engine by defining active policy types addressing exception handling, atomicity, state management, persistence and authentication properties.
These active policy types were used for implementing reliable service-oriented applications, and mashups for integrating data from services.
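A minimal Python sketch of the event-condition-action mechanism the AP Model describes, with a hypothetical retry policy standing in for an exception-handling non-functional property:

```python
class Rule:
    """An event-condition-action rule: one facet of a non-functional property."""
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

class ActivePolicy:
    """A set of ECA rules watching the state of associated execution units."""
    def __init__(self, rules):
        self.rules = rules
    def notify(self, event, unit):
        # Fire every rule whose event matches and whose condition holds
        # over the execution unit's current state.
        for rule in self.rules:
            if rule.event == event and rule.condition(unit):
                rule.action(unit)

# Hypothetical exception-handling policy: retry a failed activity up to 3 times.
retry_policy = ActivePolicy([
    Rule(event="activity_failed",
         condition=lambda u: u["retries"] < 3,
         action=lambda u: u.update(retries=u["retries"] + 1, state="retrying")),
])

unit = {"state": "failed", "retries": 0}   # an execution unit's runtime state
retry_policy.notify("activity_failed", unit)
# unit is now {"state": "retrying", "retries": 1}
```

This is only the rule-firing skeleton; the actual engine also coordinates how AP, Rule and Activity instances interleave with workflow execution.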
Software for Visualization and Coordination of the Distributed Simulation Modeling Process
Simulation modeling projects commonly involve distributed team collaboration. Collaboration in a distributed modeling process is currently difficult for two reasons: 1) simulation modeling in general requires modelers to manage complexities related to the model, such as tracking model revisions, recording scenario assumptions, and organizing external artifacts; 2) distributed collaboration requires collaborators to maintain change awareness. While proper information technology support is known to lessen the difficulties of collaboration, there is limited software support for complexity management in the generic modeling process and for change awareness in distributed collaboration, so both require a tremendous amount of effort in management and communication. This thesis describes a new system that supports the distributed modeling process. The system provides modeling repositories to help manage modeling complexities and a visual workspace to provide change-awareness information. The system has been shown to substantially reduce modeling effort in distributed modeling, and it is extensible and easy to use.
Coarser Equivalences for Causal Concurrency
Trace theory is a principled framework for defining equivalence relations for
concurrent program runs based on a commutativity relation over the set of
atomic steps taken by individual program threads. Its simplicity, elegance, and
algorithmic efficiency make it useful in many different contexts, including
program verification and testing. We study relaxations of trace equivalence
with the goal of maintaining its algorithmic advantages.
We first prove that the largest appropriate relaxation of trace equivalence,
an equivalence relation that preserves the order of steps taken by each thread
and what write operation each read operation observes, does not yield efficient
algorithms. We prove a linear space lower bound for the problem of checking, in
a streaming setting, if two arbitrary steps of a concurrent program run are
causally concurrent (i.e. they can be reordered in an equivalent run) or
causally ordered (i.e. they always appear in the same order in all equivalent
runs). The same problem can be decided in constant space for trace equivalence.
Next, we propose a new commutativity-based notion of equivalence called grain
equivalence that is strictly more relaxed than trace equivalence, and yet
yields a constant space algorithm for the same problem. This notion of
equivalence uses commutativity of grains, which are sequences of atomic steps,
in addition to the standard commutativity from trace theory. We study the two
distinct cases when the grains are contiguous subwords of the input program run
and when they are not, formulate the precise definition of causal concurrency
in each case, and show that they can be decided in constant space, despite
being strict relaxations of the notion of causal concurrency based on trace
equivalence.
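Trace equivalence itself, the baseline these relaxations start from, can be made concrete with a toy Python enumeration that closes a run under swaps of adjacent independent events (the independence relation below is a stand-in; in practice it comes from the program's semantics):

```python
def independent(a, b):
    # Stand-in independence relation: events commute iff they are on
    # different threads and touch different variables.
    return a[0] != b[0] and a[1] != b[1]

def trace_class(run):
    """Close `run` under swaps of adjacent independent events, yielding its
    (Mazurkiewicz) trace-equivalence class. Exponential in general; for
    illustration on tiny runs only."""
    seen = {tuple(run)}
    frontier = [tuple(run)]
    while frontier:
        r = frontier.pop()
        for i in range(len(r) - 1):
            if independent(r[i], r[i + 1]):
                s = r[:i] + (r[i + 1], r[i]) + r[i + 2:]
                if s not in seen:
                    seen.add(s)
                    frontier.append(s)
    return seen

# Events are (thread, variable) pairs.
run = [("t1", "x"), ("t2", "y"), ("t1", "z")]
equivalent_runs = trace_class(run)
# The two t1 events are causally ordered (same order in every equivalent
# run), while ("t2", "y") is causally concurrent with both.
```

The paper's point is that questions like "are these two steps causally concurrent?" must be answered in constant space over a streamed run, without materializing this class; grain equivalence additionally lets whole *sequences* of steps commute.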
Mutable Class Design Pattern
The dissertation proposes, presents and analyzes a new design pattern, the Mutable Class pattern, to support the processing of large-scale heterogeneous data models with multiple families of algorithms. Handling data-algorithm associations represents an important topic across a variety of application domains. As a result, it has been addressed by multiple approaches, including the Visitor pattern and the aspect-oriented programming (AOP) paradigm. Existing solutions, however, bring additional constraints and issues. For example, the Visitor pattern freezes the class hierarchies of application models and the AOP-based projects, such as Spring AOP, introduce significant overhead for processing large-scale models with fine-grain objects. The Mutable Class pattern addresses the limitations of these solutions by providing an alternative approach designed after the Class model of the UML specification. Technically, it extends a data model class with a class mutator supporting the interchangeability of operations.
Design patterns represent reusable solutions to recurring problems. According to the design pattern methodology, the definition of these solutions encompasses multiple topics, such as the problem and applicability, structure, collaborations among participants, consequences, implementation aspects, and relation with other patterns. The dissertation provides a formal description of the Mutable Class pattern for processing heterogeneous tree-based models and elaborates on it with a comprehensive analysis in the context of several applications and alternative solutions. Particularly, the commonality of the problem and reusability of this approach is demonstrated and evaluated within two application domains: computational accelerator physics and compiler construction. Furthermore, as a core part of the Unified Accelerator Library (UAL) framework, the scalability boundary of the pattern has been challenged and explored with different categories of application architectures and computational infrastructures including distributed three-tier systems.
The Mutable Class pattern targets a common problem arising from software engineering: the evolution of type systems and associated algorithms. Future research includes applying this design pattern in other contexts, such as heterogeneous information networks and large-scale processing platforms, and examining variations and alternative design patterns for solving related classes of problems.
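The core idea of the pattern, a data-model class whose operations are interchangeable through a class mutator, can be sketched in Python (an illustration of the idea, not the UAL implementation):

```python
class Node:
    """Data-model class with a class-level 'mutator' slot; the installed
    family of operations can be swapped at runtime without touching the
    class hierarchy (contrast with the Visitor pattern, which freezes it)."""
    algorithm = None  # the class mutator: currently installed operation

    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

    def process(self):
        return Node.algorithm(self)

# Two interchangeable families of operations over the same model:
def sum_values(node):
    return node.value + sum(c.process() for c in node.children)

def count_nodes(node):
    return 1 + sum(c.process() for c in node.children)

tree = Node(1, [Node(2), Node(3, [Node(4)])])
Node.algorithm = sum_values
total = tree.process()   # 10
Node.algorithm = count_nodes
count = tree.process()   # 4
```

Because the operation lives in a single class-level slot rather than in per-object wrappers, large models of fine-grain objects carry no per-instance overhead, which is the scalability concern the dissertation raises against AOP-based alternatives.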
Finding and Tolerating Concurrency Bugs.
Shared-memory multi-threaded programming is inherently more difficult than single-threaded programming. The main source of complexity is that the threads of an application can interleave in so many different ways. To ensure correctness, a programmer has to test all possible thread interleavings, which, however, is impractical. Many rare thread interleavings remain untested in production systems, and they are the cause of a majority of concurrency bugs.
Given that untested interleavings are the cause of a majority of concurrency bugs, this dissertation explores two possible ways to tackle them. One is to expose untested interleavings during testing to find concurrency bugs. The other is to avoid untested interleavings during production runs to tolerate concurrency bugs. The key is an efficient and effective way to encode and remember tested interleavings.
This dissertation first discusses two hypotheses about concurrency bugs: the small scope hypothesis and the value independent hypothesis. Based on these two hypotheses, this dissertation defines a set of interleaving patterns, called interleaving idioms, which are used to encode tested interleavings. The empirical analysis shows that the idiom based interleaving encoding scheme is able to represent most of the concurrency bugs that are used in the study.
Then, this dissertation discusses an open source testing tool called Maple. It memoizes tested interleavings and actively seeks to expose untested interleavings. The results show that Maple is able to expose concurrency bugs and expose interleavings faster than other conventional testing techniques.
Finally, this dissertation discusses two parallel runtime system designs that seek to avoid untested interleavings during production runs to tolerate concurrency bugs. Avoiding untested interleavings significantly improves correctness because most concurrency bugs are caused by untested interleavings. Also, the performance overhead of disallowing untested interleavings is low, as commonly occurring interleavings should have been tested in a well-tested program.
PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/99765/1/jieyu_1.pd
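The idiom-based encoding of tested interleavings can be illustrated with a toy Python sketch, assuming the simplest possible idiom: an ordered pair of inter-thread accesses to the same shared variable (a simplification of the idioms Maple actually uses):

```python
def idiom_instances(run):
    """Encode a run as the set of ordered inter-thread access pairs to the
    same shared variable -- a toy single-variable 'interleaving idiom'."""
    seen = set()
    for i, (t1, op1, v1) in enumerate(run):
        for t2, op2, v2 in run[i + 1:]:
            if t1 != t2 and v1 == v2:
                seen.add(((t1, op1), (t2, op2), v1))
    return seen

# Interleavings exercised during testing vs. one seen in production:
tested = idiom_instances([("A", "write", "x"), ("B", "read", "x")])
production = idiom_instances([("B", "read", "x"), ("A", "write", "x")])
untested = production - tested   # orderings never exercised in testing
```

A tester would actively try to drive the program into the orderings in `untested`; a production runtime would steer scheduling away from them.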
Fast, Sound and Effectively Complete Dynamic Race Detection
Writing concurrent programs is highly error-prone due to the nondeterminism
in interprocess communication. The most reliable indicators of errors in
concurrency are data races, which are accesses to a shared resource that can be
executed consecutively. We study the algorithmic problem of predicting data
races in lock-based concurrent programs. The input consists of a concurrent
trace, and the task is to determine all pairs of events of the trace that
constitute a data race. The problem lies at the heart of concurrent
verification and has been extensively studied for over three decades. However,
existing polynomial-time sound techniques are highly incomplete and can miss
many simple races.
In this work we develop M2: a new polynomial-time algorithm for this problem,
which has no false positives. In addition, our algorithm is complete for input
traces that consist of two processes, i.e., it provably detects all races in
the trace. We also develop sufficient conditions for detecting completeness
dynamically in cases of more than two processes. We make an experimental
evaluation of our algorithm on a standard set of benchmarks taken from recent
literature on the topic. Our tool soundly reports thousands of races and misses
at most one race in the whole benchmark set. In addition, our technique detects
all racy memory locations of the benchmark set. Finally, its running times are
comparable, and often smaller than the theoretically fastest, yet highly
incomplete, existing methods. To our knowledge, M2 is the first sound algorithm
that achieves such a level of performance on both running time and completeness
of the reported races.
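M2 itself is beyond an abstract's sketch, but the classic sound baseline it is measured against, happens-before race detection with vector clocks, fits in a short Python sketch (write-write races only; the event API is hypothetical):

```python
from collections import defaultdict

class HBDetector:
    """Happens-before race detection with vector clocks -- a simple sound
    baseline, not the M2 algorithm; like all pure HB detectors it can miss
    races that M2-style predictive analysis would find."""
    def __init__(self):
        self.C = defaultdict(lambda: defaultdict(int))  # per-thread vector clocks
        self.L = defaultdict(dict)   # lock -> clock snapshot at last release
        self.W = defaultdict(dict)   # variable -> clock of the last write
        self.races = []

    @staticmethod
    def _leq(a, b):
        # Vector-clock comparison: a happens-before b's observer.
        return all(v <= b.get(t, 0) for t, v in a.items())

    def acquire(self, t, lock):
        # Join the releasing thread's knowledge into thread t's clock.
        for u, v in self.L[lock].items():
            self.C[t][u] = max(self.C[t][u], v)

    def release(self, t, lock):
        self.C[t][t] += 1
        self.L[lock] = dict(self.C[t])

    def write(self, t, var):
        # Race iff the previous write is not ordered before this one.
        if self.W[var] and not self._leq(self.W[var], self.C[t]):
            self.races.append((var, t))
        self.C[t][t] += 1
        self.W[var] = dict(self.C[t])

d = HBDetector()
d.write("t1", "x")
d.write("t2", "x")        # unsynchronized: write-write race on x
racy = list(d.races)
```

The gap the abstract highlights is exactly what this baseline leaves open: polynomial-time detectors that are sound *and* close to complete on real traces.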