Revisiting Semantics of Interactions for Trace Validity Analysis
Interaction languages such as MSC are often given formal semantics by means of
translations into distinct behavioral formalisms such as automata or Petri
nets. In contrast to such translational approaches, we propose an operational
approach. Its principle is to identify which elementary
communication actions can be immediately executed, and then to compute, for
every such action, a new interaction representing the possible continuations to
its execution. We also define an algorithm for checking the validity of
execution traces (i.e. whether or not they belong to an interaction's
semantics). Algorithms for semantic computation and trace validity are analyzed
by means of experiments.

Comment: 18 pages of contents and 2 pages for references, 10 figures.
Published in ETAPS-FASE 2020: "23rd International Conference on Fundamental
Approaches to Software Engineering", in the "research papers" category.
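The operational principle described above, identifying the immediately executable actions and computing a continuation interaction for each, can be sketched for a toy interaction language. The following is an illustrative reconstruction, not the paper's implementation: the term constructors (`Act`, `Seq`, `Alt`) and all names are hypothetical simplifications.

```python
# Toy small-step semantics for interactions, plus a trace-validity check.
from dataclasses import dataclass

@dataclass(frozen=True)
class Act:            # an elementary communication action, e.g. "l1!m"
    label: str

@dataclass(frozen=True)
class Seq:            # strict sequencing: left must finish before right
    left: object
    right: object

@dataclass(frozen=True)
class Alt:            # non-deterministic choice between two interactions
    left: object
    right: object

EMPTY = None          # the empty interaction: nothing left to execute

def steps(i):
    """Yield (action, continuation) for each immediately executable action."""
    if isinstance(i, Act):
        yield i.label, EMPTY
    elif isinstance(i, Alt):
        yield from steps(i.left)
        yield from steps(i.right)
    elif isinstance(i, Seq):
        for a, cont in steps(i.left):
            yield a, i.right if cont is EMPTY else Seq(cont, i.right)

def terminates(i):
    """Can interaction i terminate without executing further actions?"""
    if i is EMPTY:
        return True
    if isinstance(i, Act):
        return False
    if isinstance(i, Alt):
        return terminates(i.left) or terminates(i.right)
    return terminates(i.left) and terminates(i.right)

def accepts(i, trace):
    """Trace validity: does the trace belong to i's semantics?"""
    if not trace:
        return terminates(i)
    return any(accepts(cont, trace[1:])
               for a, cont in steps(i) if a == trace[0])
```

For example, `Seq(Act("a"), Alt(Act("b"), Act("c")))` accepts the traces `["a", "b"]` and `["a", "c"]` but nothing else, which mirrors the "execute an action, then explore its continuation" scheme the abstract describes.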
Bridging the Gap Between Requirements and Model Analysis : Evaluation on Ten Cyber-Physical Challenge Problems
Formal verification and simulation are powerful tools to validate requirements against complex systems. [Problem] Requirements are developed in early stages of the software lifecycle and are typically written in ambiguous natural language. There is a gap between such requirements and the formal notations that verification tools can use, and a lack of support for properly associating requirements with the software artifacts under verification. [Principal idea] We propose to write requirements in an intuitive, structured natural language with formal semantics, and to support formalization and model/code verification as a smooth, well-integrated process. [Contribution] We have developed an end-to-end, open-source requirements analysis framework that checks Simulink models against requirements written in structured natural language. Our framework is built in the Formal Requirements Elicitation Tool (FRET); we use FRET's requirements language, FRETISH, and the formalization of FRETISH requirements in temporal logics. Our proposed framework contributes the following features: 1) automatic extraction of Simulink model information and association of FRETISH requirements with target model signals and components; 2) translation of temporal logic formulas into synchronous dataflow CoCoSpec specifications as well as Simulink monitors, to be used by verification tools; we establish correctness of our translation through extensive automated testing; 3) interpretation of counterexamples produced by verification tools back at the requirements level. These features support a tight integration and feedback loop between high-level requirements and their analysis. We demonstrate our approach on a major case study: the ten Lockheed Martin cyber-physical, aerospace-inspired challenge problems.
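The pipeline described above, from a structured natural-language requirement to a temporal logic formula, can be illustrated at toy scale. This sketch is not FRET's actual grammar, API, or translation; the requirement shape and the LTL rendering below are hypothetical simplifications of the "structured natural language with formal semantics" idea.

```python
import re

def to_ltl(req: str) -> str:
    """Translate a requirement of the (hypothetical) shape
    'whenever <condition>, the <component> shall satisfy <response>'
    into an LTL formula string G(condition -> response)."""
    m = re.fullmatch(r"whenever (.+), the (\w+) shall satisfy (.+)", req)
    if not m:
        raise ValueError("unsupported requirement shape")
    cond, _component, resp = m.groups()
    # The component name would be used to bind the formula to model
    # signals; here we only emit the formula itself.
    return f"G (({cond}) -> ({resp}))"
```

A requirement such as `"whenever altitude < 100, the autopilot shall satisfy gear_down"` then yields `"G ((altitude < 100) -> (gear_down))"`, which a model checker or a generated monitor could evaluate against model traces.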
A small-step approach to multi-trace checking against interactions
Interaction models describe the exchange of messages between the different
components of distributed systems. We have previously defined a small-step
operational semantics for interaction models. The paper extends this work by
presenting an approach for checking the validity of multi-traces against
interaction models. A multi-trace is a collection of traces (sequences of
emissions and receptions), each representing a local view of the same global
execution of the distributed system. We have formally proven our approach,
studied its complexity, and implemented it in a prototype tool. Finally, we
discuss some observability issues when testing distributed systems via the
analysis of multi-traces.Comment: long version - 26 pages (23 for paper, 2 for bibliography, and a 1
page annex) - 15 figures (1 in annex
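The notion of a multi-trace as a collection of local views of one global execution can be captured by a naive, specification-level check: a multi-trace is valid if some accepted global trace projects onto exactly its local traces. This is only an illustration of the definition (the paper's small-step algorithm avoids enumerating global traces); the action-naming convention `"component.event"` is hypothetical.

```python
def project(global_trace, components):
    """Project a global trace onto each component's local view.
    Actions are strings of the (assumed) form 'component.event'."""
    return {c: [a for a in global_trace if a.split(".")[0] == c]
            for c in components}

def multitrace_valid(multitrace, accepted_global_traces):
    """A multi-trace (dict: component -> local trace) is valid iff it
    is the tuple of projections of some accepted global trace."""
    comps = set(multitrace)
    return any(project(g, comps) == multitrace
               for g in accepted_global_traces)
```

The observability issues mentioned above show up naturally here: if a component's local trace is only partially observed, no global trace projects onto it exactly, so a practical checker must weaken the equality test for such components.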
A Term-based Approach for Generating Finite Automata from Interaction Diagrams
Non-deterministic Finite Automata (NFA) represent regular languages
concisely, increasing their appeal for applications such as word recognition.
This paper proposes a new approach to generate NFA from an interaction language
such as UML Sequence Diagrams or Message Sequence Charts. Via an operational
semantics, we generate an NFA from the set of interactions reachable via the
associated execution relation. In addition, by applying simplifications on
reachable interactions to merge them, it is possible to obtain reduced NFA
without relying on costly NFA reduction techniques. Experimental results
regarding NFA generation and their application in trace analysis are also
presented.

Comment: 29 pages (15 pages paper, 3 pages references, 11 pages appendix), 9
figures in paper, 14 figures in appendix.
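The generation scheme described above, taking reachable interactions as NFA states and execution steps as transitions, can be sketched as a worklist exploration. This is a hypothetical reconstruction for a toy term language (tuples `("act", a)`, `("seq", l, r)`, `("alt", l, r)`, with `()` as the empty interaction), not the paper's algorithm or its merging simplifications.

```python
def steps(i):
    """Small-step execution relation: yield (action, continuation)."""
    if i == ():
        return
    tag = i[0]
    if tag == "act":
        yield i[1], ()
    elif tag == "alt":
        yield from steps(i[1])
        yield from steps(i[2])
    elif tag == "seq":
        for a, cont in steps(i[1]):
            yield a, i[2] if cont == () else ("seq", cont, i[2])

def terminates(i):
    """Is i an accepting (terminated or terminable) interaction?"""
    if i == ():
        return True
    if i[0] == "act":
        return False
    if i[0] == "alt":
        return terminates(i[1]) or terminates(i[2])
    return terminates(i[1]) and terminates(i[2])

def to_nfa(init):
    """Explore interactions reachable from init; each reachable
    interaction becomes an NFA state, each step a transition."""
    states, delta, todo = {init}, set(), [init]
    while todo:
        i = todo.pop()
        for a, cont in steps(i):
            delta.add((i, a, cont))
            if cont not in states:
                states.add(cont)
                todo.append(cont)
    accepting = {s for s in states if terminates(s)}
    return states, delta, accepting
```

Merging semantically equivalent reachable interactions before adding them to `states` (which this sketch omits) is what yields the reduced NFA the abstract mentions, without a separate NFA-minimization pass.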
Taking Sides In Peacekeeping: Impartiality And The Future Of The United Nations
United Nations peacekeeping has undergone radical transformation in the new millennium. 'Taking Sides in Peacekeeping' explores this transformation and its implications, in what is the first conceptual and empirical study of impartiality in UN peacekeeping. The book challenges dominant scholarly approaches that conceive of norms as linear and static, conceptualizing impartiality as a 'composite' norm, one that is not free-standing but an aggregate of other principles, each of which can change and is open to contestation. Drawing on a large body of primary evidence, it uses the composite norm to trace the evolution of impartiality, and to illuminate the macro-level politics surrounding its institutionalization at the UN, as well as the micro-level politics surrounding its implementation in the Democratic Republic of the Congo, site of the largest and costliest peacekeeping mission in UN history. This book reveals that, despite a veneer of consensus, impartiality is in fact highly contested. As the collection of principles it refers to has expanded to include human rights and civilian protection, deep disagreements have arisen over what keeping peace impartially actually means. Beyond the semantics, the book shows how this contestation, together with the varying expectations and incentives created by the norm, has resulted in perverse and unintended consequences that have politicized peacekeeping and, in some cases, effectively converted UN forces into one warring party among many. The author assesses the implications of this radical transformation for the future of peacekeeping and for the UN's role as guarantor of international peace and security.
Managing Requirement Volatility in an Ontology-Driven Clinical LIMS Using Category Theory. International Journal of Telemedicine and Applications
Requirement volatility is an issue in software engineering in general, and in
Web-based clinical applications in particular, which often originates from an
incomplete knowledge of the domain of interest. With advances in the health
sciences, many features and functionalities need to be added to, or removed
from, existing software applications in the biomedical domain. At the same
time, the increasing complexity of biomedical systems makes them more difficult
to understand, and consequently it is more difficult to define their
requirements, which contributes considerably to their volatility. In this
paper, we present a novel agent-based approach for analyzing and managing
volatile and dynamic requirements in an ontology-driven laboratory information
management system (LIMS) designed for Web-based case reporting in medical
mycology. The proposed framework is empowered with ontologies and formalized
using category theory to provide a deep and common understanding of the
functional and nonfunctional requirement hierarchies and their interrelations,
and to trace the effects of a change on the conceptual framework.

Comment: 36 pages, 16 figures.
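One ingredient of the framework above, tracing the effects of a change through the requirement hierarchy, can be illustrated if requirements are taken as objects and "depends-on" relations as morphisms. This is only a hypothetical reachability sketch, not the paper's category-theoretic formalization; the requirement names are invented.

```python
def impact(changed, morphisms):
    """Given morphisms as (source, target) pairs, read 'target depends
    on source', return every requirement affected by changing one."""
    affected, todo = {changed}, [changed]
    while todo:
        r = todo.pop()
        for src, dst in morphisms:
            if src == r and dst not in affected:
                affected.add(dst)
                todo.append(dst)
    return affected
```

Composition of morphisms is what makes transitive impact well defined: if R2 depends on R1 and R3 on R2, a change to R1 propagates to R3 through the composed dependency.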
Productive Development of Scalable Network Functions with NFork
Despite decades of research, developing correct and scalable concurrent
programs is still challenging. Network functions (NFs) are not an exception.
This paper presents NFork, a system that helps NF domain experts to
productively develop concurrent NFs by abstracting away concurrency from
developers. The key idea behind NFork's design is to exploit NF
characteristics to overcome the limitations of prior work on concurrent
programming. Developers write NFs as sequential programs, and at runtime,
NFork performs transparent parallelization by processing packets on different
cores. Exploiting NF characteristics, NFork leverages transactional memory and
develops efficient concurrent data structures to achieve scalability and
guarantee the absence of concurrency bugs.
Since NFork manages concurrency, it further provides (i) a profiler that
reveals the root causes of scalability bottlenecks inherent to the NF's
semantics and (ii) actionable recipes for developers to mitigate these root
causes by relaxing the NF's semantics. We show that NFs developed with NFork
achieve competitive scalability with those in Cisco VPP [16], and NFork's
profiler and recipes can effectively aid developers in optimizing NF
scalability.

Comment: 16 pages, 8 figures.