Discovery and Selection of Certified Web Services Through Registry-Based Testing and Verification
Reliability and trust are fundamental prerequisites for the establishment of functional relationships among peers in a Collaborative Networked Organisation (CNO), especially in the context of Virtual Enterprises, where economic benefits can be directly at stake. This paper presents a novel approach to effective service discovery and selection that is no longer based on informal, ambiguous and potentially unreliable service descriptions, but on formal specifications that can be used to verify and certify the actual Web service implementations. We propose the use of Stream X-machines (SXMs) as a powerful modelling formalism for constructing the behavioural specification of a Web service, for performing verification through the generation of exhaustive test cases, and for performing validation through animation or model checking during service selection.
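To make the formalism concrete, the following is a minimal sketch of a Stream X-machine: a state machine whose transitions are labelled by processing functions that consume an input symbol, read and update a memory, and emit an output. The counter-style service, state names and functions below are illustrative assumptions, not taken from the paper.

```python
# Minimal Stream X-machine (SXM) sketch: transitions carry processing
# functions of type (memory, input) -> (output, memory') or None when
# the function is not defined on that input.

class StreamXMachine:
    def __init__(self, initial_state, initial_memory, transitions):
        self.state = initial_state
        self.memory = initial_memory
        # transitions: {(state, label): (processing_fn, next_state)}
        self.transitions = transitions

    def step(self, symbol):
        """Apply the first processing function enabled on this input."""
        for (state, _), (fn, nxt) in self.transitions.items():
            if state != self.state:
                continue
            result = fn(self.memory, symbol)
            if result is not None:  # function defined on this input
                output, self.memory = result
                self.state = nxt
                return output
        raise ValueError(f"no transition from {self.state} on {symbol!r}")

# Illustrative processing functions for a toy deposit/withdraw service.
def deposit(mem, sym):
    if sym[0] == "deposit":
        return ("ok", mem + sym[1])
    return None

def withdraw(mem, sym):
    if sym[0] == "withdraw" and mem >= sym[1]:
        return ("ok", mem - sym[1])
    return None

sxm = StreamXMachine(
    initial_state="idle",
    initial_memory=0,
    transitions={
        ("idle", "deposit"): (deposit, "idle"),
        ("idle", "withdraw"): (withdraw, "idle"),
    },
)
out = [sxm.step(("deposit", 5)), sxm.step(("withdraw", 3))]
```

Because each transition is a function over memory and input rather than a plain label, test sequences generated from the machine exercise the implementation's data behaviour as well as its control flow, which is what makes SXM-based test generation exhaustive in the sense the abstract describes.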
Strategy Synthesis for Autonomous Agents Using PRISM
We present probabilistic models for autonomous agent search and retrieve missions derived from Simulink models for an Unmanned Aerial Vehicle (UAV) and show how probabilistic model checking and the probabilistic model checker PRISM can be used for optimal controller generation. We introduce a sequence of scenarios relevant to UAVs and other autonomous agents such as underwater and ground vehicles. For each scenario we demonstrate how it can be modelled using the PRISM language, give model checking statistics and present the synthesised optimal controllers. We conclude with a discussion of the limitations of using probabilistic model checking and PRISM in this context and the steps that can be taken to overcome them. In addition, we consider how the controllers can be returned to the UAV and adapted for use on larger search areas.
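The kind of computation behind such controller synthesis can be sketched as value iteration over a Markov decision process: compute the optimal expected value of each state, then read off the action that achieves it. This is a generic illustration of the underlying technique, not PRISM's implementation; the toy two-state search mission below is an assumption for the example.

```python
# Optimal-policy synthesis on a small MDP via value iteration.

def value_iteration(states, trans, reward, gamma=0.95, eps=1e-8):
    """trans[s][a] = list of (prob, next_state); returns (values, policy)."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if not trans[s]:  # absorbing state, value stays 0
                continue
            best = max(
                reward(s, a) + gamma * sum(p * V[t] for p, t in trans[s][a])
                for a in trans[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    policy = {
        s: max(trans[s], key=lambda a: reward(s, a)
               + gamma * sum(p * V[t] for p, t in trans[s][a]))
        for s in states if trans[s]
    }
    return V, policy

# Toy search mission: 'search' finds the target with probability 0.3.
states = ["searching", "found"]
trans = {
    "searching": {
        "search": [(0.3, "found"), (0.7, "searching")],
        "loiter": [(1.0, "searching")],
    },
    "found": {},
}
reward = lambda s, a: 1.0 if (s == "searching" and a == "search") else 0.0
V, policy = value_iteration(states, trans, reward)
```

A model checker like PRISM additionally verifies temporal-logic properties of the synthesised policy (e.g. bounds on the probability of mission completion), which plain value iteration does not give you.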
A Holistic Approach in Embedded System Development
We present pState, a tool for developing "complex" embedded systems by
integrating validation into the design process. The goal is to reduce
validation time. To this end, qualitative and quantitative properties are
specified in system models expressed as pCharts, an extended version of
hierarchical state machines. These properties are specified in an intuitive way
such that they can be written by engineers who are domain experts, without
needing to be familiar with temporal logic. From the system model, executable
code that preserves the verified properties is generated. The design is
documented in the model, and the documentation is carried over as comments into
the generated code. On a series of examples we illustrate how models and
properties are specified using pState.

Comment: In Proceedings F-IDE 2015, arXiv:1508.0338
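The hierarchical state machines that pCharts extend can be sketched as follows: nested states, with events dispatched to the innermost current state and bubbling up to ancestors when unhandled. The embedded-controller chart below (states, events and the `Statechart` API) is an illustrative assumption, not the pState tool's notation.

```python
# Minimal hierarchical state machine in the statechart style.

class State:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.handlers = {}  # event -> target State

    def on(self, event, target):
        self.handlers[event] = target

class Statechart:
    def __init__(self, initial):
        self.current = initial

    def dispatch(self, event):
        # Search handlers from the current state up the hierarchy:
        # a parent's transition is inherited by all its substates.
        s = self.current
        while s is not None:
            if event in s.handlers:
                self.current = s.handlers[event]
                return True
            s = s.parent
        return False  # event ignored

# Illustrative controller: 'running' contains 'idle' and 'busy'.
running = State("running")
off = State("off")
idle = State("idle", parent=running)
busy = State("busy", parent=running)
idle.on("start", busy)
busy.on("done", idle)
running.on("power_off", off)  # inherited by idle and busy

chart = Statechart(idle)
chart.dispatch("start")       # idle -> busy
chart.dispatch("power_off")   # handled by parent 'running' -> off
```

pCharts layer quantitative annotations (probabilities, costs, timing) on top of this structure so that the same model supports both property verification and code generation.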
Model-Based Testing of Safety Critical Real-Time Control Logic Software
The paper presents the authors' experience in model-based testing of
safety-critical real-time control logic software. It describes the specifics of
the corresponding industrial settings and discusses technical details of
applying the UniTESK model-based testing technology in these settings. Finally,
we discuss possible future directions for safety-critical software development
processes and the place of model-based testing techniques in them.

Comment: In Proceedings MBT 2012, arXiv:1202.582
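The core idea of model-based testing — deriving test sequences from a behavioural model rather than writing them by hand — can be sketched with a transition-coverage generator over a finite-state model. UniTESK's actual technology is considerably richer (contract specifications, on-the-fly traversal); the valve-controller model below is an illustrative assumption.

```python
# Generate one test sequence per transition of a state-machine model,
# each sequence reaching the transition from the initial state.

from collections import deque

def transition_cover(model, initial):
    """model[state][input] = next_state; returns lists of inputs."""
    # Shortest input sequence to reach each state (BFS).
    reach = {initial: []}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        for inp, t in model[s].items():
            if t not in reach:
                reach[t] = reach[s] + [inp]
                queue.append(t)
    # One test per transition: prefix to its source state, then the input.
    return [reach[s] + [inp] for s in model for inp in model[s] if s in reach]

# Toy control-logic model: a valve controller with a fault path.
model = {
    "closed": {"open_cmd": "opening"},
    "opening": {"opened": "open", "fault": "closed"},
    "open": {"close_cmd": "closed"},
}
tests = transition_cover(model, "closed")
```

Each generated sequence is then executed against the implementation while an oracle derived from the same model checks the observed outputs, which is what ties test generation and conformance checking together in this style of testing.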
Interaction-driven definition of e-business processes
Business-to-business interaction (B2Bi) is the next step for corporate IT [1]. Business relationships become increasingly dynamic, and new requirements emerge for data and process management. Standardisation initiatives are successfully targeting business ontology [4]. Still, business agility mainly depends on the flexibility of the business processes of a company. In the B2B space, traditional approaches to process modelling and management are inadequate. Today more than ever, traditional workflow management is crucial for the internal effectiveness of a company. Internal efficiency is a prerequisite for external agility. From both a technical and a business perspective, internal workflow management relies on specific assumptions in terms of resources involved in the process, as well as the process itself [2]. Level of control, availability, reliability, and cost stability are parameters that traditional process models and technology can almost take for granted. A single authority ruling on the process definition and the total control over process execution are also basic concepts for internal workflows. From a business perspective, a big upfront investment is put in the complete definition of process specifications. A different conceptual framework is required for the definition and management of e-business processes [3, 5]. The intrinsic capability to adapt to rapidly changing business requirements becomes crucial. The line of research explored in this paper derives from an approach to process modelling and management that explicitly targets the peculiarities and dynamics of B2Bi. In the model we propose, the upfront specification of the interaction logic of a company can be limited to partially specified processes and basic interaction rules. Specific information is then gathered from the observation of actual instances of business interaction, and used to refine and extend the initial model. 
In addition to enforcing explicit business requirements, the goal is to capture and leverage implicit operational knowledge. In the following sections, we present an overview of the methodology we are currently experimenting with for the inference of complex processes from business interaction flows. For our initial experiments, we focus on business messages compliant with the RosettaNet standard [4].
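The refinement step — observing actual interaction instances and folding them back into a partially specified process model — can be sketched with a standard process-mining building block: a directly-follows graph counted over message traces. This is a generic illustration under assumed traces, not the paper's actual inference methodology.

```python
# Build a directly-follows graph from observed message traces: how often
# activity b immediately follows activity a across all interactions.

from collections import defaultdict

def directly_follows(traces):
    graph = defaultdict(int)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            graph[(a, b)] += 1
    return dict(graph)

# Illustrative RosettaNet-style message traces from two interactions.
traces = [
    ["PurchaseOrderRequest", "PurchaseOrderConfirmation", "ShipNotice"],
    ["PurchaseOrderRequest", "PurchaseOrderConfirmation", "Invoice"],
]
model = directly_follows(traces)
```

Edges with high counts confirm or extend the upfront partial specification, while unexpected edges flag interaction patterns the initial model did not anticipate.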
A model for digital preservation repository risk relationships
The paper introduces the Preserved Object and Repository Risk Ontology (PORRO), a model that relates preservation functionality with associated risks and opportunities for their mitigation. Building on work undertaken in a range of EU and UK funded research projects (including the Digital Curation Centre, DigitalPreservationEurope and DELOS), this ontology illustrates relationships between fundamental digital library goals and their parameters; associated rights and responsibilities; practical activities and resources involved in their accomplishment; and risks facing digital libraries and their collections. Its purpose is to facilitate a comprehensive understanding of risk causality and to illustrate opportunities for mitigation and avoidance.
The ontology reflects evidence accumulated from a series of institutional audits and evaluations, including a specific subset of digital libraries in the DELOS project which led to the definition of a digital library preservation risk profile. Its applicability is intended to be widespread, and its coverage expected to evolve to reflect developments within the community.
Attendees will gain an understanding of the model and learn how they can utilize this online resource to inform their own risk management activities.
Graph Based Reduction of Program Verification Conditions
Increasing the automation of proofs in deductive verification of C programs
is a challenging task. When applied to industrial C programs, known heuristics
for generating simpler verification conditions are not efficient enough. This
is mainly due to the size of the conditions and the high number of irrelevant
hypotheses they contain. This work presents a strategy to reduce program
verification conditions by selecting their relevant hypotheses. The relevance
of a hypothesis is determined by the combination of a syntactic analysis and
two graph traversals. The first graph is labeled by constants and the second
one by the predicates in the axioms. The approach is applied to a benchmark
arising in industrial program verification.
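The selection idea — keep only hypotheses connected to the goal through shared symbols, up to a bounded number of graph steps — can be sketched as follows. The symbol-sharing graph used here is a simplified assumption, not the paper's exact combination of constant and predicate graphs.

```python
# Relevance-based hypothesis selection for a verification condition:
# a hypothesis is kept if it shares a symbol with the goal, directly or
# transitively up to `depth` steps through other kept hypotheses.

def select_hypotheses(hypotheses, goal_symbols, depth=2):
    """hypotheses: {name: set_of_symbols}. Returns the relevant names."""
    relevant, frontier = set(), set(goal_symbols)
    for _ in range(depth):
        newly = {
            name for name, syms in hypotheses.items()
            if name not in relevant and syms & frontier
        }
        if not newly:
            break
        relevant |= newly
        # Symbols of newly kept hypotheses extend the reachable frontier.
        frontier |= set().union(*(hypotheses[n] for n in newly))
    return relevant

# Toy verification condition whose goal mentions 'x' and 'len'.
hyps = {
    "h1": {"x", "y"},   # shares 'x' with the goal
    "h2": {"y", "z"},   # reachable via h1's 'y'
    "h3": {"u", "v"},   # shares nothing: irrelevant, pruned
}
kept = select_hypotheses(hyps, {"x", "len"})
```

Pruning `h3` before handing the condition to an automated prover is exactly the effect the abstract targets: smaller conditions with fewer irrelevant hypotheses tend to be discharged automatically far more often.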