An Extension of PlusCal for Modeling Distributed Algorithms
The PlusCal language combines the expressive power of TLA+ with the “look and feel” of imperative pseudo-code, allowing users to express algorithms at a high level of abstraction. PlusCal algorithms are translated to TLA+ specifications and can be formally verified using the TLA+ Toolbox. We propose a small extension of PlusCal, tentatively called Distributed PlusCal, intended to simplify the presentation of distributed algorithms in PlusCal.
Distributed systems consist of nodes that communicate by message passing. It is convenient to model a node as running several threads that share local memory: for example, one thread may execute the main algorithm while a separate thread listens for incoming messages. Although PlusCal offers processes, each process has a single thread of execution. Different threads of the same node must therefore be modeled as individual processes, and variables representing the local memory of a node must be declared as global variables, obscuring the structure of the code. Our first extension allows a PlusCal process to have several code blocks that execute in parallel. In addition, Distributed PlusCal explicitly identifies variables representing communication channels and introduces associated send and receive operations. In contrast to using ordinary variables and writing macros or operator definitions for channel operations, making channels part of the language gives us more flexibility in the TLA+ translation.
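The node model sketched above (shared local memory, one main thread plus a listener, channels with send/receive) can be illustrated with a small Python analogy. This is not Distributed PlusCal syntax; the class and method names are ours, and a queue stands in for a channel.

```python
import queue
import threading

class Channel:
    """Illustrative stand-in for a Distributed PlusCal channel:
    a FIFO message buffer with send and receive operations."""
    def __init__(self):
        self._q = queue.Queue()

    def send(self, msg):
        self._q.put(msg)

    def receive(self):
        return self._q.get()

class Node:
    """A node with local memory shared by two threads: a main
    algorithm and a listener for incoming messages."""
    def __init__(self, inbox: Channel):
        self.inbox = inbox
        self.received = []              # local memory shared by both threads
        self.lock = threading.Lock()

    def listener(self):
        while True:
            msg = self.inbox.receive()
            if msg is None:             # sentinel: stop listening
                return
            with self.lock:
                self.received.append(msg)

    def run(self, peer: Channel):
        t = threading.Thread(target=self.listener)
        t.start()
        peer.send("hello")              # main thread executes the algorithm
        self.inbox.send(None)           # shut down our own listener
        t.join()
```

In PlusCal proper, the listener would have to be a separate process and `received` a global variable; the extension lets both code blocks live inside one process.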
Formal Verification of Authorization Policies for Enterprise Social Networks using PlusCal-2
Information security has been a highly active and widely studied research direction. In the domain of Enterprise Social Networks (ESNs), the security challenges are amplified: ESNs aim to incorporate social technologies in an enterprise setup and thus assert greater control over information security. Further, the security challenges may not be limited to the boundaries of a single enterprise and need to be catered for in a federated environment where users from different ESNs can collaborate. In this paper, we address the problem of federated authorization for ESNs and present an approach for combining user-level policies with enterprise policies. We present a formal verification technique for ESNs and show how it can be used to identify conflicts in the policies. This allows us to bridge the gap between user-centric and enterprise-centric approaches, as required by the domain of ESNs. We apply our specification of ESNs to a scenario and discuss the model checking results.
Data-centric concurrency control on the Java programming language
Dissertation submitted for the Master's degree in Informatics Engineering.
The multi-core paradigm has propelled shared-memory concurrent programming to an important role in software development. Its use is, however, limited by the constructs that provide a layer of abstraction for synchronizing access to shared resources. Reasoning with these constructs is not trivial due to their concurrent nature: data races and deadlocks occur in concurrent programs, encumbering programmers and reducing their productivity.
Even though these constructs should be as unobtrusive and intuitive as possible, performance must also remain high compared to legacy lock-based mechanisms: failure to guarantee similar performance will hinder a system's adoption.
Recent research attempts to address these issues. However, the current state of the art in concurrency control mechanisms is mostly code-centric and not intuitive. Its code-centric nature requires specifying the zones in the code that need synchronization, scattering concurrency bugs across the code and increasing the programmer's error-proneness. On the other hand, the only data-centric approach, AJ [VTD06], exposes excessive detail to the programmer and fails to provide complete deadlock-freedom.
Given this state of the art, our proposal provides the programmer with a set of unobtrusive data-centric constructs that guarantee desirable safety properties: composability, atomicity, and deadlock-freedom in all scenarios. For that purpose, a lower-level mechanism (ResourceGroups) is used. The proposed model builds on the well-known concept of atomic variables, the basis for our concurrency control mechanism. To assess the efficiency of our work, we compare it to Java synchronized blocks, transactional memory, and AJ; our system demonstrates competitive performance and an equivalent level of expressivity.
RepComp project (PTDC/EIA-EIA/108963/2008).
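The data-centric idea above, i.e. attaching synchronization to the data rather than to code regions, can be sketched briefly. The names `ResourceGroup` and `atomic` echo the abstract's terminology, but this Python implementation is ours: it guarantees deadlock-freedom by acquiring all group locks in a fixed global order, which is one classical way (not necessarily the thesis's way) to achieve that property.

```python
import threading

class ResourceGroup:
    """Illustrative stand-in for the thesis's ResourceGroups: a lock
    protecting a set of logically related data, with a global rank
    that fixes the lock-acquisition order."""
    _counter = 0

    def __init__(self, name):
        self.name = name
        self.order = ResourceGroup._counter   # global acquisition order
        ResourceGroup._counter += 1
        self.lock = threading.Lock()

def atomic(*groups):
    """Run an operation atomically over several groups. Always acquiring
    locks in ascending global order makes deadlock impossible."""
    ordered = sorted(groups, key=lambda g: g.order)

    def wrap(fn):
        def inner(*args, **kwargs):
            for g in ordered:
                g.lock.acquire()
            try:
                return fn(*args, **kwargs)
            finally:
                for g in reversed(ordered):
                    g.lock.release()
        return inner
    return wrap

# Hypothetical usage: a transfer touching two data groups at once.
accounts = ResourceGroup("accounts")
ledger = ResourceGroup("ledger")
balance = {"a": 10, "b": 0}
log = []

@atomic(accounts, ledger)
def transfer(src, dst, amount):
    balance[src] -= amount
    balance[dst] += amount
    log.append((src, dst, amount))
```

The programmer only declares which data an operation touches; the lock ordering and release are handled by the mechanism, which is the unobtrusiveness the abstract argues for.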
Foundations for Behavioural Model Elaboration Using Modal Transition Systems
Modal Transition Systems (MTS) are an extension of Labelled Transition Systems
(LTS) that have been shown to be useful to reason about system behaviour in the
context of partial information. MTSs distinguish between required, proscribed
and unknown behaviour and come equipped with a notion of refinement that supports
incremental modelling where unknown behaviour is iteratively elaborated
into required or proscribed behaviour.
A particularly useful notion in the context of software and requirements engineering
is that of “merge”. Merging two consistent models is a process that should
result in a minimal common refinement of both models where consistency is defined
as the existence of one common refinement. One of the current limitations
of MTS merging is that a complete and correct algorithm for merging has not
been developed. Hence, an engineer attempting to merge partial descriptions may
be prevented from doing so by overconstrained algorithms or by algorithms that
introduce behaviour that does not follow from the partial descriptions being merged. In
this thesis we study the problems of consistency and merge for the existing MTSs
semantics (strong and weak semantics) and provide a complete characterization
of MTS consistency as well as a complete and correct algorithm for MTS merging
using these semantics.
Strong and weak semantics require MTS models to have the same communicating
alphabet, the latter allowing the use of a distinguished unobservable action. In
this work we show that the requirement of fixing the alphabet for MTS semantics
and the treatment of observable actions are limiting if MTSs are to support
incremental elaboration of partial behaviour models. We present a novel observational
semantics for MTS, branching alphabet semantics, inspired by branching
LTS equivalence, which supports the elaboration of model behaviour including
the extension of the alphabet of the system to describe behaviour aspects that
previously had not been taken into account. Furthermore, we show that some
unintuitive refinements allowed by weak semantics are avoided, and prove a number
of theorems that relate branching refinement with alphabet refinement and
consistency. These theorems, which do not hold for other semantics, support the
argument for considering branching alphabet as a sound semantics to support
behaviour model elaboration.
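The refinement notion underlying this abstract can be made concrete with a small sketch. Under strong modal refinement, an MTS N refines M when every required (must) transition of M is matched by N and every possible (may) transition of N is allowed by M, coinductively. The encoding below is a standard greatest-fixpoint check in Python; it is illustrative, not the thesis's algorithm, and covers only strong semantics.

```python
def refines(impl, spec):
    """Check strong modal refinement: does impl refine spec?
    Each MTS is (states, must, may, init), where must and may are sets
    of (state, action, state) triples and must is a subset of may."""
    si, musti, mayi, i0 = impl
    ss, musts, mays, s0 = spec
    # Start from all state pairs and prune violating pairs to a fixpoint.
    rel = {(n, m) for n in si for m in ss}
    changed = True
    while changed:
        changed = False
        for (n, m) in list(rel):
            # Every required transition of spec must be matched by impl.
            ok = all(any(n2 == n and a2 == a and (n3, m3) in rel
                         for (n2, a2, n3) in musti)
                     for (m2, a, m3) in musts if m2 == m)
            # Every possible transition of impl must be allowed by spec.
            ok = ok and all(any(m2 == m and a2 == a and (n3, m3) in rel
                                for (m2, a2, m3) in mays)
                            for (n2, a, n3) in mayi if n2 == n)
            if not ok:
                rel.discard((n, m))
                changed = True
    return (i0, s0) in rel
```

Resolving an unknown (may-only) transition into required behaviour is a legal refinement, while dropping a required transition is not, which is exactly the incremental-elaboration discipline the abstract describes.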
Interim research assessment 2003-2005 - Computer Science
This report primarily serves as a source of information for the 2007 Interim Research Assessment Committee for Computer Science at the three technical universities in the Netherlands. The report also provides information for others interested in our research activities.
Dash+: Extending Alloy with Replicated Processes for Modelling Transition Systems
Modelling systems abstractly shows great promise to uncover bugs early in system development.
The formal language Alloy provides the means of writing constraints abstractly but lacks explicit constructs for describing
transition systems. Extensions to Alloy, such as Electrum, DynAlloy, and Dash, provide such constructs. However, still missing
are language constructs to easily describe multiple processes with the same behavior (replicated processes) running in parallel,
as is found in languages such as PlusCal and Promela.
We propose extensions to Dash for replicated processes. The result is Dash+: an Alloy language extension
for describing transition systems that include both concurrent and hierarchical states and replicated concurrent processes. The processes
can communicate via buffers or exchange information through variables and events. The key contributions of our novel approach are:
1) Replicated and non-replicated components can be nested arbitrarily at any level in the state hierarchy;
2) Replicated components can exchange information directly, without resorting to global variables as is the case in PlusCal and Promela;
3) A modeller can abstractly model the topology of the processes (ring, list, etc.) through constraints on the set indexing the processes;
4) Buffers can be used to facilitate communication between replicated components.
Dash+ stays consistent with the semantics of Dash and uses the notion of big steps and small steps to describe changes in the system.
The semantics are implemented in a translation to Alloy in a way that accommodates the following model checking options: traces-based model checking, transitive closure-based model checking (TCMC), and Electrum.
Our implementation is fully integrated into the Alloy Analyzer.
This thesis presents case studies to demonstrate the features of Dash+ in modelling systems with concurrent processes and the benefits that Dash+ offers over existing languages. We check properties of each case-study model to demonstrate how the different model checking options can be used.
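Contribution 3) above, describing the process topology as a constraint on the indexing set rather than as explicit wiring, can be illustrated outside Alloy. The sketch below is our own Python rendering, not Dash+ syntax: a "ring" topology is just the requirement that a successor function on process indices forms a single cycle visiting every process.

```python
def is_ring(indices, succ):
    """True iff succ (a dict index -> index) is a bijection on indices
    forming exactly one cycle, i.e. the processes are arranged in a ring.
    This mimics, in executable form, a topology constraint on the set
    indexing the replicated processes."""
    if set(succ) != set(indices) or set(succ.values()) != set(indices):
        return False                      # not a total bijection
    start = min(indices)
    seen, cur = set(), start
    while cur not in seen:                # follow the cycle from start
        seen.add(cur)
        cur = succ[cur]
    # One ring: we came back to start and visited every index.
    return cur == start and seen == set(indices)
```

In Dash+ the analogous constraint would be stated declaratively over the index set and checked by the Alloy Analyzer rather than executed.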
A toolkit for model checking of electronic contracts
PhD thesis.
In the business world, contracts are used to regulate business interactions between trading parties. In this context, an electronic contracting system can be used to monitor business-to-business interactions to ensure that they comply with the rights (permissions), obligations and prohibitions stipulated in contract clauses. Such an electronic contracting system will require an executable version of the contract (e-contract) for compliance checking. It is important to verify the correctness properties of an e-contract before deploying it for compliance checking. Model checkers are widely used for automatic verification of concurrent systems. However, such tools for e-contracts, with means for directly and intuitively expressing key concepts that appear recurrently in contracts, such as executions of business operations and the granting (cancellation, suspension, fulfilment, violation, etc.) of rights, obligations and prohibitions to role players, are not yet available.
This thesis rectifies the situation by developing a high-level e-contract
verification toolkit using the Spin model checker. A formal Contractual
Business-To-Business interaction (CB2B) model based on the concepts of
contract compliance checking developed earlier at Newcastle University
has been constructed. Further, Promela, the input language of the Spin
model checker, has been extended in a manner that enables specification
of contract clauses in terms of contract entities: role players, business
operations, rights, obligations and prohibitions. A given contract can now
be expressed using extended Promela as a set of declarations and a set of
Event-Condition-Action rules. In addition, the designer can specify the
correctness requirements to be verified in Linear Temporal Logic (LTL), directly
in terms of the contract entities. A notable feature is that the CB2B model
automatically checks for contract independent properties: properties that
must hold for all contracts. For example, at run time, a contract should
not simultaneously grant a role player a right to perform an operation
and also prohibit it. Thus, the toolkit hides much of the intricate details
of dealing with Promela processes communicating through channels and
enables a designer to build verifiable abstract models directly in terms of
contract entities.
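The Event-Condition-Action style described above can be sketched in miniature. The rule, the entities, and the sample contract below are hypothetical and written in Python, not in the toolkit's extended Promela; the point is only the shape: state over rights/obligations/prohibitions, ECA rules that update it, and a contract-independent consistency check.

```python
# Contract state: which (role, operation) pairs are currently granted,
# obliged, or prohibited. The entries are an invented example.
state = {
    "rights": {("buyer", "cancel_order")},
    "obligations": set(),
    "prohibitions": set(),
}

# ECA rules: on an event, if the condition holds, fire the action.
rules = [
    # On submit_order: buyer loses the right to cancel,
    # seller becomes obliged to ship the goods.
    {"event": "submit_order",
     "condition": lambda s: ("buyer", "cancel_order") in s["rights"],
     "action": lambda s: (s["rights"].discard(("buyer", "cancel_order")),
                          s["obligations"].add(("seller", "ship_goods")))},
]

def handle(event):
    """Dispatch an observed business event through the rule set."""
    for r in rules:
        if r["event"] == event and r["condition"](state):
            r["action"](state)

def consistent(s):
    """Contract-independent property from the abstract: no role player is
    simultaneously granted and prohibited the same operation."""
    return not (s["rights"] & s["prohibitions"])
```

In the actual toolkit, rules like these are declared in extended Promela and the consistency property is checked automatically by Spin over all reachable states, not just the one execution shown here.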
The usefulness of the toolkit is demonstrated by trying out a number of
contract examples used by researchers working on contract verification.
The thesis also shows how the toolkit can be used for generating test
cases for testing an implemented system.
RitHM: A Modular Software Framework for Runtime Monitoring Supporting Complete and Lossy Traces
Runtime verification (RV) is an effective and automated method for specification-based offline testing as well as online monitoring of complex real-world systems. Firstly, a software framework for RV needs to exhibit certain design features to support usability, modifiability, and efficiency. While usability and modifiability are important for supporting expressive logical formalisms, efficiency is required to reduce the extra overhead at run time. Secondly, most existing techniques assume the existence of a complete execution trace for RV. However, real-world systems often produce incomplete execution traces due to reasons such as network issues and logging failures. A few verification techniques have recently emerged for verifying incomplete execution traces. While some of these techniques sacrifice soundness, others are too restrictive in their tolerance for incompleteness.
For addressing the first problem, we introduce RitHM, a comprehensive framework, which enables development and integration of efficient verification techniques. RitHM's design takes into account various state-of-the-art techniques that are developed to optimize RV w.r.t. the efficiency of monitors and expressivity of logical formalisms. RitHM's design supports modifiability by allowing a reuse of efficient monitoring algorithms in the form of plugins, which can utilize heterogeneous back-ends. RitHM also supports extensions of logical formalisms through logic plugins. It also facilitates the interoperability between implementations of monitoring algorithms, and this feature allows utilizing different efficient algorithms for monitoring different sub-parts of a specification.
We evaluate RitHM's architecture and the architectures of a few other tools using the Architecture Tradeoff Analysis Method (ATAM). We also report empirical results where RitHM is used for monitoring real-world systems. The results underscore the importance of RitHM's various design features.
For addressing the second problem, we identify a fragment of LTL specifications that can be soundly monitored in the presence of transient loss events in an execution trace. We present an offline algorithm that identifies whether an LTL formula is monitorable in the presence of a transient loss of events, and constructs a loss-tolerant monitor depending upon the monitorability of the formula.
Our experimental results demonstrate that our method increases the applicability of RV for monitoring various real-world applications that produce lossy traces. The extra overhead introduced by our constructed monitors is minimal, as demonstrated by applying our method to commonly used patterns of LTL formulas.
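The soundness argument for loss-tolerant monitoring can be seen on the simplest monitorable pattern, the invariant "G p": a lost trace segment can hide a violation but can never fabricate one, so any reported violation is trustworthy, while a clean verdict across a gap must be downgraded to inconclusive. The sketch below is our own three-valued illustration of that idea, not RitHM's API or its general algorithm.

```python
# Marker object standing in for a transient loss of events in the trace.
LOSS = object()

def monitor_always(p, trace):
    """Monitor the invariant 'G p' over a trace that may contain LOSS
    markers. Returns 'violated', 'satisfied-so-far', or 'inconclusive'.
    A 'violated' verdict is always sound: it rests only on events that
    were actually observed, never on the lost segments."""
    saw_loss = False
    for ev in trace:
        if ev is LOSS:
            saw_loss = True             # gap: we may have missed a violation
        elif not p(ev):
            return "violated"           # observed counterexample, final
    return "inconclusive" if saw_loss else "satisfied-so-far"
```

Patterns outside the monitorable fragment (for example, eventualities whose witness may fall inside a gap) cannot be handled this simply, which is why the thesis restricts attention to a fragment of LTL.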