154 research outputs found
Reconstructing the Past: The Case of the Spadina Expressway
In order to build resilient systems that can remain operational for a long time, it is important that analysts be able to model the evolution of the requirements of those systems. The Evolving Intentions framework models how stakeholders’ goals change over time. In this work, our aim is to validate the applicability and effectiveness of this technique on a substantial case. In the absence of ground truth about future evolutions, we used historical data and rational reconstruction to understand how a project evolved in the past. Seeking a well-documented project with varying stakeholder intentions over a substantial period of time, we selected the requirements of the Toronto Spadina Expressway. In this paper, we report on the experience and results of modeling this project over different time periods, which enabled us to assess the modeling and reasoning capabilities of the approach, its support for asking and answering ‘what if’ questions, and the maturity of the underlying tool support. We also demonstrate a novel process for creating time-based models through the construction and merging of scenarios.
Optimizing Computation of Recovery Plans for BPEL Applications
Web service applications are distributed processes that are composed of
dynamically bound services. In our previous work [15], we have described a
framework for performing runtime monitoring of web services against behavioural
correctness properties (described using property patterns and converted into
finite state automata). These specify forbidden behavior (safety properties)
and desired behavior (bounded liveness properties). Finite execution traces of
web services described in BPEL are checked for conformance at runtime. When
violations are discovered, our framework automatically proposes and ranks
recovery plans which users can then select for execution. Such plans for safety
violations essentially involve "going back" - compensating the executed actions
until an alternative behaviour of the application is possible. For bounded
liveness violations, recovery plans include both "going back" and "re-planning"
- guiding the application towards a desired behaviour. Our experience, reported
in [16], identified a drawback in this approach: we compute too many plans due
to (a) overapproximating the number of program points where an alternative
behaviour is possible and (b) generating recovery plans for bounded liveness
properties which can potentially violate safety properties. In this paper, we
describe improvements to our framework that remedy these problems and describe
their effectiveness on a case study.
Comment: In Proceedings TAV-WEB 2010, arXiv:1009.330
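The trace-conformance step described above can be sketched in miniature. The automaton, events, and violation index below are hypothetical illustrations, not the framework's actual implementation:

```python
# Minimal sketch (not the paper's implementation): checking a finite
# execution trace against a safety automaton. States, events, and the
# example property are hypothetical.

class SafetyAutomaton:
    def __init__(self, transitions, initial, bad_states):
        self.transitions = transitions  # (state, event) -> next state
        self.initial = initial
        self.bad_states = bad_states    # entering one = safety violation

    def first_violation(self, trace):
        """Return the index of the first violating event, or None."""
        state = self.initial
        for i, event in enumerate(trace):
            state = self.transitions.get((state, event), state)
            if state in self.bad_states:
                return i
        return None

# Toy safety property: a 'withdraw' must not occur before 'login'.
auto = SafetyAutomaton(
    transitions={("init", "login"): "ok", ("init", "withdraw"): "bad"},
    initial="init",
    bad_states={"bad"},
)

print(auto.first_violation(["login", "withdraw"]))  # None: no violation
print(auto.first_violation(["withdraw", "login"]))  # 0: violation at step 0
```

A recovery planner in the spirit of the abstract would then "go back" by compensating the executed actions from the returned index until an alternative transition of the application becomes possible.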
Automatic Analysis of Consistency Between Implementations and Requirements
Formal methods like model checking can be used to demonstrate
that safety properties of embedded systems are enforced by the
system's requirements. Unfortunately, proving these properties
provides no guarantee that they will be preserved in an
implementation of the system. We have developed a tool, called
Analyzer, which helps discover instances of inconsistency and
incompleteness in implementations with respect to requirements.
Analyzer uses requirements information to automatically generate
properties which ensure that required state transitions appear in
a model of an implementation. A model is created through abstract
interpretation of an implementation annotated with assertions about
values of state variables which appear in requirements. Analyzer
determines if the model satisfies both automatically-generated and
user-specified safety properties.
This paper presents a description of our implementation of Analyzer
and our experience in applying it to a small but realistic problem.
(Also cross-referenced as UMIACS-TR-94-137)
Partial Behavioural Models for Requirements and Early Design
The talk will discuss the problem of creation, management, and specifically merging of partial behavioural models, expressed as modal transition systems. We argue why this formalism is essential in the early stages of the software cycle and then discuss why and how to merge information coming from different sources using this formalism. The talk is based on papers presented at FSE'04 and FME'06 and will also include emerging results on synthesizing partial behavioural models from temporal properties and scenarios.
Early Verification of Legal Compliance via Bounded Satisfiability Checking
Legal properties involve reasoning about data values and time. Metric
first-order temporal logic (MFOTL) provides a rich formalism for specifying
legal properties. While MFOTL has been successfully used for verifying legal
properties over operational systems via runtime monitoring, no solution exists
for MFOTL-based verification in early-stage system development captured by
requirements. Given a legal property and system requirements, both formalized
in MFOTL, the compliance of the property can be verified on the requirements
via satisfiability checking. In this paper, we propose a practical, sound, and
complete (within a given bound) satisfiability checking approach for MFOTL. The
approach, based on satisfiability modulo theories (SMT), employs a
counterexample-guided strategy to incrementally search for a satisfying
solution. We implemented our approach using the Z3 SMT solver and evaluated it
on five case studies spanning the healthcare, business administration, banking
and aviation domains. Our results indicate that our approach can efficiently
determine whether legal properties of interest are met, or generate
counterexamples that lead to compliance violations.
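The incremental, bound-increasing search can be illustrated with a toy propositional analogue. The actual approach works over MFOTL formulas with the Z3 SMT solver; everything below is a deliberately simplified stand-in:

```python
# Illustrative sketch only: a toy bounded satisfiability check over
# boolean traces, showing the idea of growing the bound k until a
# satisfying witness is found (or the bound limit is reached).

from itertools import product

def bounded_sat(property_holds, num_vars, max_bound):
    """Search traces of increasing length k (up to max_bound) for one
    satisfying the property; return (k, trace) for the first witness,
    or None if no witness exists within the bound."""
    for k in range(1, max_bound + 1):
        states = product([False, True], repeat=num_vars)
        for trace in product(states, repeat=k):
            if property_holds(trace):
                return k, trace
    return None

# Toy property: "eventually var0 holds, and var1 holds at every step".
prop = lambda tr: any(s[0] for s in tr) and all(s[1] for s in tr)

result = bounded_sat(prop, num_vars=2, max_bound=3)
print(result)  # a witness already exists at bound 1
```

The soundness-and-completeness-within-a-bound flavour of the paper corresponds to the fact that a `None` result here only guarantees absence of witnesses up to `max_bound`, not beyond it.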
Property Satisfiability Analysis for Product Lines of Modelling Languages
Software engineering uses models throughout most phases of the development process. Models are defined using modelling languages. To make these languages applicable to a wider set of scenarios and customizable to specific needs, researchers have proposed using product lines to specify modelling language variants. However, there is currently a lack of efficient techniques for ensuring correctness with respect to properties of the models accepted by a set of language variants. This may prevent detecting problematic combinations of language variants that produce undesired effects at the model level. To attack this problem, we first present a classification of instantiability properties for language product lines. Then, we propose a novel approach to lifting the satisfiability checking of model properties of individual language variants to the product line level. Finally, we report on an implementation of our proposal in the Merlin tool, and demonstrate the efficiency gains of our lifted analysis method compared to an enumerative analysis of each individual language variant.
Property-Based Methods for Collaborative Model Development
Industrial applications of model-driven engineering to develop large and complex systems resulted in an increasing demand for collaboration features. However, use cases such as model differencing and merging have turned out to be a difficult challenge, due to (i) the graph-like nature of models, and (ii) the complexity of certain operations (e.g. hierarchy refactoring) that are common today. In the paper, we present a novel search-based automated model merge approach where rule-based design space exploration is used to search the space of solution candidates that represent conflict-free merged models. Our method also allows engineers to easily incorporate domain-specific knowledge into the merge process to provide better solutions. The merge process automatically calculates multiple merge candidates to be presented to domain experts for final selection. Furthermore, we propose to adopt a generic synthetic benchmark to carry out an initial scalability assessment for model merge with large models and large change sets.
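The enumeration of conflict-free merge candidates can be sketched as follows. The attribute-level conflict model is a deliberately simplified stand-in for the rule-based design space exploration described above, and all names are hypothetical:

```python
# Hypothetical sketch of candidate enumeration: each conflicting change
# spawns a branch in the search space, so every returned candidate is a
# conflict-free merged model for a domain expert to rank and select.

from itertools import product

def merge_candidates(base, changes_a, changes_b):
    """Apply non-conflicting changes directly; for each key both sides
    changed differently, explore both resolutions."""
    merged = dict(base)
    conflicts = []
    for key in set(changes_a) | set(changes_b):
        if key in changes_a and key in changes_b:
            if changes_a[key] == changes_b[key]:
                merged[key] = changes_a[key]      # identical change: no conflict
            else:
                conflicts.append((key, changes_a[key], changes_b[key]))
        else:
            merged[key] = changes_a.get(key, changes_b.get(key))
    candidates = []
    for choice in product([0, 1], repeat=len(conflicts)):
        cand = dict(merged)
        for picked, (key, a_val, b_val) in zip(choice, conflicts):
            cand[key] = b_val if picked else a_val
        candidates.append(cand)
    return candidates

base = {"name": "Engine", "power": 100}
cands = merge_candidates(base, {"power": 120}, {"power": 150, "mass": 50})
print(len(cands))  # 2 candidates: power resolved to 120 or to 150
```

Domain-specific knowledge, as mentioned in the abstract, would enter here as a ranking or pruning function over `candidates` rather than the exhaustive enumeration shown.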
A verification-driven framework for iterative design of controllers
Controllers are often large and complex reactive software systems and thus typically cannot be developed as monolithic products. Instead, they are usually composed of multiple components that interact to provide the desired functionality. Components themselves can be complex and in turn be decomposed into multiple sub-components. Designing such systems is complicated and must follow systematic approaches, based on recursive decomposition strategies that yield a modular structure. This paper proposes FIDDle, a comprehensive verification-driven framework which provides support for designers during development. FIDDle supports hierarchical decomposition of components into sub-components through formal specification in terms of pre- and post-conditions, as well as independent development, reuse and verification of sub-components. The framework allows the development of an initial, partially specified design of the controller, in which certain components, yet to be defined, are precisely identified. These components can be associated with pre- and post-conditions, i.e., a contract, that can be distributed to third-party developers. The framework ensures that if the components are compliant with their contracts, they can be safely integrated into the initial partial design without additional rework. As a result, FIDDle supports an iterative design process and guarantees correctness of the system at any step of development. We evaluated the effectiveness of FIDDle in supporting iterative and incremental development of components using the K9 Mars Rover example developed at NASA Ames. This can be considered an initial, yet substantive, validation of the approach in a realistic setting. We also assessed the scalability of FIDDle by comparing its efficiency with that of the classical model checkers implemented within the LTSA toolset. Results show that FIDDle scales as well as classical model checking as the number of states of the components under development and their environments grows.
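The contract mechanism, pre- and post-conditions attached to yet-undefined components, can be sketched as follows (hypothetical names, not FIDDle's actual API):

```python
# Hypothetical illustration of component contracts (not FIDDle's API):
# a sub-component delivered by a third party is accepted only if it
# respects the agreed pre-/post-conditions at every call.

def with_contract(pre, post):
    """Wrap a component implementation with runtime contract checks."""
    def wrap(impl):
        def checked(x):
            assert pre(x), "precondition violated"
            result = impl(x)
            assert post(x, result), "postcondition violated"
            return result
        return checked
    return wrap

# Contract for a toy speed-limiting controller component:
# pre: requested speed is non-negative; post: output never exceeds 90.
@with_contract(pre=lambda x: x >= 0, post=lambda x, r: 0 <= r <= 90)
def limit_speed(requested):
    return min(requested, 90)

print(limit_speed(120))  # 90: a compliant implementation integrates safely
```

In the spirit of the abstract, any implementation satisfying this contract could be swapped into the partial design without reworking the rest of the system.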