Unlocking Blocked Communicating Processes
We study the problem of disentangling locked processes via code refactoring.
We identify and characterise a class of processes that is not lock-free; then
we formalise an algorithm that statically detects potential locks and propose
refactoring procedures that disentangle detected locks. Our development is cast
within a simple setting of a finite linear CCS variant; although it suffices
to illustrate the main concepts, we also discuss how our work extends to other
language extensions. Comment: In Proceedings WWV 2015, arXiv:1508.0338
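As a rough illustration of the kind of static lock detection the abstract describes (this is not the paper's algorithm; the process encoding and channel names below are assumptions for the sketch), a finite linear parallel composition can be explored exhaustively for stuck configurations:

```python
from itertools import combinations

def enabled_syncs(procs):
    # Pairs of process indices whose head prefixes are complementary:
    # a send and a receive on the same channel.
    return [(i, j) for i, j in combinations(range(len(procs)), 2)
            if procs[i] and procs[j]
            and procs[i][0][1] == procs[j][0][1]    # same channel
            and procs[i][0][0] != procs[j][0][0]]   # opposite polarity

def locked_states(procs, trace=()):
    # Explore every interleaving of a finite linear parallel composition;
    # yield (trace, residual) for each stuck state with pending prefixes.
    syncs = enabled_syncs(procs)
    if not syncs:
        if any(procs):              # pending actions, no partner: a lock
            yield trace, procs
        return
    for i, j in syncs:
        nxt = [p[1:] if k in (i, j) else p for k, p in enumerate(procs)]
        yield from locked_states(nxt, trace + (procs[i][0][1],))

# P = a!.b!  |  a?   -- after synchronising on a, b! is blocked forever.
P = [[("send", "a"), ("send", "b")], [("recv", "a")]]
for trace, residual in locked_states(P):
    print("lock after", trace, "residual:", residual)
```

Refactoring, in this spirit, would rewrite the residual processes reported by such a detector until no locked state remains.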
Collaborative Verification-Driven Engineering of Hybrid Systems
Hybrid systems with both discrete and continuous dynamics are an important
model for real-world cyber-physical systems. The key challenge is to ensure
their correct functioning w.r.t. safety requirements. Promising techniques to
ensure safety seem to be model-driven engineering to develop hybrid systems in
a well-defined and traceable manner, and formal verification to prove their
correctness. Their combination forms the vision of verification-driven
engineering. Often, hybrid systems are rather complex in that they require
expertise from many domains (e.g., robotics, control systems, computer science,
software engineering, and mechanical engineering). Moreover, despite the
remarkable progress in automating formal verification of hybrid systems, the
construction of proofs of complex systems often requires nontrivial human
guidance, since hybrid systems verification tools tackle undecidable problems.
It is, thus, not uncommon for development and verification teams to consist of
many players with diverse expertise. This paper introduces a
verification-driven engineering toolset that extends our previous work on
hybrid and arithmetic verification with tools for (i) graphical (UML) and
textual modeling of hybrid systems, (ii) exchanging and comparing models and
proofs, and (iii) managing verification tasks. This toolset makes it easier to
tackle large-scale verification tasks.
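For readers unfamiliar with the model class, the following minimal sketch shows what "both discrete and continuous dynamics" means in practice; the bouncing ball is a textbook hybrid system, not a model from the toolset, and the step size and damping factor are illustrative assumptions:

```python
# Continuous dynamics (free fall, integrated with explicit Euler)
# plus a discrete jump (the bounce resets the velocity).
def simulate_bouncing_ball(h0=10.0, v0=0.0, g=9.81, damping=0.8,
                           dt=1e-3, t_end=5.0):
    h, v, t = h0, v0, 0.0
    bounce_times = []
    while t < t_end:
        h += v * dt            # continuous flow: dh/dt = v
        v -= g * dt            #                  dv/dt = -g
        if h <= 0 and v < 0:   # guard: ground hit while falling
            h, v = 0.0, -damping * v   # discrete jump: damped rebound
            bounce_times.append(round(t, 3))
        t += dt
    return bounce_times

print("bounce times:", simulate_bouncing_ball())
```

A safety requirement for such a model, e.g. that the ball never exceeds its initial height, is what formal verification would prove for all parameter values rather than test on a single simulated trajectory.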
Automated analysis of feature models: Quo vadis?
Feature models have been used since the 1990s to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances and laying the foundations of the area of Automated Analysis of Feature Models (AAFM). Since then, different studies have applied AAFM techniques in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study considering 423 primary sources. We found six variability facets where AAFM is being applied that define the tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability analysis; variability modelling; and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most cases. Finally, we present where and when the papers have been published, and which authors and institutions are contributing to the field. We observed that the field's maturity is evidenced by the growing number of journal publications over the years, as well as the diversity of conferences and workshops where papers are published. We also suggest synergies with other areas, such as cloud and mobile computing, that can motivate further research in the future. Ministerio de Economía y Competitividad TIN2015-70560-R. Junta de Andalucía TIC-186.
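As a hedged illustration of what "automated analysis" means here, a feature model can be encoded as a propositional formula and queried mechanically; the tiny model and its constraints below are assumptions for the sketch, not taken from the mapping study:

```python
from itertools import product

# A toy feature model: a root with one mandatory and two optional
# children, plus one cross-tree constraint (gps requires bluetooth).
FEATURES = ["root", "engine", "gps", "bluetooth"]

def valid(cfg):
    f = dict(zip(FEATURES, cfg))
    return (f["root"]                                 # root always selected
            and f["engine"] == f["root"]              # mandatory child
            and (not f["gps"] or f["root"])           # optional child needs parent
            and (not f["bluetooth"] or f["root"])
            and (not f["gps"] or f["bluetooth"]))     # cross-tree constraint

products = [cfg for cfg in product([False, True], repeat=len(FEATURES))
            if valid(cfg)]
print("number of products:", len(products))           # classic AAFM operation
print("dead features:", [f for i, f in enumerate(FEATURES)
                         if not any(cfg[i] for cfg in products)])
```

Real AAFM tooling replaces this brute-force enumeration with SAT, BDD, or CSP solvers in order to scale to models with thousands of features.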
Separation of Concerns in Feature Modeling: Support and Applications
Feature models (FMs) are a popular formalism for describing the commonality and variability of software product lines (SPLs) in terms of features. SPL development increasingly involves manipulating many large FMs, and thus scalable modular techniques that support compositional development of complex SPLs are required. In this paper, we describe how a set of complementary operators (aggregate, merge, slice) provides practical support for separation of concerns in feature modeling. We show how the combination of these operators can assist in tedious and error-prone tasks such as automated correction of FM anomalies, update and extraction of FM views, reconciliation of FMs, and reasoning about properties of FMs. For each task, we report on practical applications in different domains. We also present a technique that can efficiently decompose FMs with thousands of features and report our experimental results.
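A minimal sketch of the slice operator's intent, under the assumption that slicing a view amounts to projecting the model's valid configurations onto the kept features; the paper's operator works syntactically on FMs, and the feature names here are illustrative:

```python
from itertools import product

# Toy model: root with a mandatory "net" feature whose children
# "wifi" and "eth" form an or-group (at least one must be selected).
FEATURES = ["root", "net", "wifi", "eth"]

def valid(cfg):
    f = dict(zip(FEATURES, cfg))
    return (f["root"] and f["net"] == f["root"]        # mandatory net
            and (f["wifi"] or f["eth"]) == f["net"]    # or-group under net
            and (not f["wifi"] or f["net"])
            and (not f["eth"] or f["net"]))

def slice_fm(keep):
    # Project the set of valid configurations onto the kept features.
    idx = [FEATURES.index(f) for f in keep]
    return {tuple(cfg[i] for i in idx)
            for cfg in product([False, True], repeat=len(FEATURES))
            if valid(cfg)}

# The sliced view retains exactly the or-group's variability.
print(slice_fm(["wifi", "eth"]))
```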
Open-source software product line extraction processes: the ArgoUML-SPL and Phaser cases
Software Product Lines (SPLs) are rarely developed from scratch. Commonly, they emerge from one product when there is a need to create tailored variants, or from existing variants created in an ad-hoc way once their separate maintenance and evolution become challenging. Despite the vast literature about re-engineering systems into SPLs and related technical approaches, there is a lack of detailed analysis of the process itself and the effort involved. In this paper, we provide and analyze empirical data on the extraction processes of two open-source case studies, namely ArgoUML and Phaser. Both cases emerged from the transition of a monolithic system into an SPL. The analysis relies on information mined from the version control history of their respective source-code repositories and on discussions with developers who took part in the process. Unlike previous works, which focused mostly on the structural results of the final SPL, the contribution of this study is an in-depth characterization of the processes. With this work, we aim to provide a deeper understanding of the strategies for SPL extraction and their implications. Our results indicate that source-code changes can range from almost a quarter to over half of the total lines of code. Developers may or may not use branching strategies for feature extraction. Additionally, the problems faced during the extraction process may be due to a lack of tool support, the complexity of managing feature dependencies, and issues with feature constraints. We have made the datasets and the analysis scripts of both case studies publicly available as a baseline for extractive SPL adoption research and practice. This research was partially funded by CNPq, grant no. 408356/2018-9; FAPPR, grant no. 51435; and FAPERJ PDR-10 Fellowship 202073/2020. Open access funding provided by Johannes Kepler University Linz.
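A hedged sketch of the kind of version-control mining such a study relies on; the repository path is a placeholder, and this is not the authors' published analysis script:

```python
import subprocess
from collections import Counter

def churn_by_author(repo="."):
    # Total lines added plus deleted per author, parsed from
    # `git log --numstat`; each commit is tagged with "@<author>".
    out = subprocess.run(
        ["git", "-C", repo, "log", "--numstat", "--format=@%an"],
        capture_output=True, text=True, check=True).stdout
    churn, author = Counter(), None
    for line in out.splitlines():
        if line.startswith("@"):
            author = line[1:]
        elif line.strip():
            added, deleted, _path = line.split("\t", 2)
            if added.isdigit() and deleted.isdigit():  # skip binary files ("-")
                churn[author] += int(added) + int(deleted)
    return churn

print(churn_by_author().most_common(5))
```

Aggregating the same numstat data per commit or per extracted feature, rather than per author, is how one would approximate the change-size figures reported above.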
Supervised Design-Space Exploration
Low-cost Very Large Scale Integration (VLSI) electronics have revolutionized daily life and expanded the role of computation in science and engineering. Meanwhile, process-technology scaling has turned VLSI design into an exploration process that strives for the optimal balance among multiple objectives, such as power, performance, and area, i.e., multi-objective Pareto-set optimization. In addition, modern VLSI design has shifted to synthesis-centric methodologies in order to boost design productivity, which leads to better design quality given limited time and resources. However, current decade-old synthesis-centric design methodologies suffer from: (i) long synthesis tool runtimes, (ii) the elusive optimal setting of many synthesis knobs, (iii) the limitation to one design implementation per synthesis run, and (iv) the limited capability of digesting only component-level designs as opposed to holistic system-wide synthesis. These challenges make Design Space Exploration (DSE) with synthesis tools a daunting task for both novice and experienced VLSI designers, thus stalling the development of more powerful (i.e., more complex) computer systems.
To address these challenges, I propose Supervised Design-Space Exploration (SDSE), an abstraction layer between a designer and a synthesis tool that autonomously supervises synthesis jobs for DSE. For system-level exploration, SDSE can approximate a system Pareto set given limited information: only lightweight component characterization is required, and the necessary component synthesis jobs are discovered on the fly in order to compose the system Pareto set. For component-level exploration, SDSE can approximate a component Pareto set by iteratively refining the approximation with promising knob settings, guided by synthesis-result estimation with machine-learning models. Combined, SDSE has been applied across the three major synthesis stages, namely high-level, logic, and physical synthesis, to the design of heterogeneous accelerator cores as well as high-performance processor cores. In particular, SDSE has been successfully integrated into the IBM Synthesis Tuning System, yielding 20% better circuit performance than the original system on the design of a 22nm server processor that is currently in production.
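A dependency-free sketch of the iterative refinement loop described above, in which a surrogate model picks the single most promising knob setting to synthesize next; the toy cost function and the distance-weighted surrogate are stand-ins (assumptions) for a real synthesis tool and for SDSE's machine-learning models:

```python
import random

def synthesize(knobs):
    # Placeholder for an hours-long synthesis run: a toy delay metric
    # over three integer knobs (effort, vt-mix, target utilization).
    effort, vt, util = knobs
    return (effort - 2) ** 2 + (vt - 1) ** 2 + 0.5 * util

def surrogate(knobs, seen):
    # Inverse-distance-weighted estimate from already-synthesized points.
    w = [(1.0 / (1e-9 + sum((a - b) ** 2 for a, b in zip(knobs, k))), y)
         for k, y in seen.items()]
    total = sum(wi for wi, _ in w)
    return sum(wi * y for wi, y in w) / total

space = [(e, v, u) for e in range(5) for v in range(3) for u in range(4)]
seen = {k: synthesize(k) for k in random.sample(space, 3)}  # seed runs
for _ in range(10):                      # budget: 10 more synthesis jobs
    cand = min((k for k in space if k not in seen),
               key=lambda k: surrogate(k, seen))
    seen[cand] = synthesize(cand)        # run only the most promising setting
print("best knob setting found:", min(seen, key=seen.get))
```

The same loop generalizes to the multi-objective case by replacing the scalar minimum with a Pareto filter over the estimated objectives.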
Looking ahead, SDSE can be applied to other VLSI designs beyond accelerator and programmable processor cores. Moreover, SDSE opens several research avenues: (i) new development and deployment platforms for synthesis tools, (ii) large-scale collaborative design engineering, and (iii) new computer-aided design approaches for new classes of systems beyond VLSI chips.