Analysis of Feature Models Using Alloy: A Survey
Feature Models (FMs) are a mechanism to model variability among a family of
closely related software products, i.e. a software product line (SPL). Analysis
of FMs using formal methods can reveal defects in the specification such as
inconsistencies that cause the product line to have no valid products. A
popular framework used in research for FM analysis is Alloy, a lightweight
formal modeling notation equipped with an efficient model finder. Several works
in the literature have proposed different strategies to encode and analyze FMs
using Alloy. However, there is little discussion on the relative merits of each
proposal, making it difficult to select the most suitable encoding for a
specific analysis need. In this paper, we describe and compare those strategies
according to various criteria such as the expressivity of the FM notation or
the efficiency of the analysis. This survey is the first comparative study of
research targeted at using Alloy for FM analysis. It aims to identify best practices in the use of Alloy, as part of a framework for the automated extraction and analysis of rich FMs from natural language requirement specifications.
Comment: In Proceedings FMSPLE 2016, arXiv:1603.0857
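The defect mentioned above, a product line with no valid products, can be seen on a toy example. The sketch below is only an illustration of the underlying idea, not an encoding taken from the surveyed papers: it uses hypothetical feature names and brute-force enumeration in place of Alloy's model finder, encoding a small feature model as propositional constraints whose mandatory-child and excludes constraints contradict each other.

```python
# Illustrative sketch (hypothetical feature names, not from the surveyed papers):
# a feature model encoded as propositional constraints and checked for the
# "void product line" defect, i.e. no configuration satisfies every constraint.
# Alloy's model finder performs this search symbolically; brute-force
# enumeration over all 2^n configurations stands in for it here.
from itertools import product

FEATURES = ["car", "engine", "electric", "gasoline"]

def satisfies(cfg):
    """cfg maps feature name -> selected?; encodes the tree and cross-tree constraints."""
    return (
        cfg["car"]                                      # root feature is always selected
        and cfg["engine"] == cfg["car"]                 # 'engine' is a mandatory child of 'car'
        and cfg["electric"] == cfg["engine"]            # 'electric' is a mandatory child of 'engine'
        and cfg["gasoline"] == cfg["engine"]            # 'gasoline' is a mandatory child of 'engine'
        and not (cfg["electric"] and cfg["gasoline"])   # cross-tree constraint: electric excludes gasoline
    )

configs = (dict(zip(FEATURES, bits))
           for bits in product([False, True], repeat=len(FEATURES)))
valid_products = [c for c in configs if satisfies(c)]
print(f"{len(valid_products)} valid products")  # prints 0: the two mandatory children
                                                # contradict the excludes constraint, so the FM is void
```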
Model Matching Challenge: Benchmarks for Ecore and BPMN Diagrams
In the last couple of years, Model Driven Engineering (MDE) has gained a prominent role in the context of software engineering. In the MDE paradigm, models are considered first-level artifacts that are iteratively developed by teams of programmers over a period of time. Because of this, dedicated tools
for versioning and management of models are needed. A central functionality
within this group of tools is model comparison and differencing. In two
separate research projects, we identified a group of general matching problems where state-of-the-art comparison algorithms delivered low-quality results. In this article, we present five edit operations that are the cause of these low-quality results. The reasons why the algorithms fail, as well as possible solutions, are also discussed. These examples can be used as benchmarks by model developers to assess the quality and applicability of a model comparison tool for a given model type.
Comment: 7 pages, 7 figures
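As a toy illustration of the kind of matching problem such benchmarks expose (the element names and the matching heuristic below are invented for this sketch and are not taken from the article's five edit operations), a purely name-based matcher turns a single rename into a spurious delete-plus-add instead of reporting one changed element:

```python
# Sketch of a naive, name-based model matcher (invented example, not one of the
# article's benchmarks). A rename between two model versions is misreported as
# a deletion plus an addition, i.e. a low-quality difference.
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    element_id: str   # internal identifier, stable across versions
    name: str         # user-visible name, may be edited
    kind: str         # e.g. an Ecore EClass or a BPMN Task

def match_by_name(old, new):
    """Pair elements with identical names; everything unmatched becomes delete/add."""
    new_by_name = {e.name: e for e in new}
    old_names = {e.name for e in old}
    matches = [(o, new_by_name[o.name]) for o in old if o.name in new_by_name]
    deleted = [o for o in old if o.name not in new_by_name]
    added = [n for n in new if n.name not in old_names]
    return matches, deleted, added

v1 = [Element("e1", "Customer", "EClass")]
v2 = [Element("e1", "Client", "EClass")]   # the same element after a rename edit

matches, deleted, added = match_by_name(v1, v2)
print("matches:", matches)                      # []           -- the rename is missed
print("deleted:", [e.name for e in deleted])    # ['Customer'] -- reported as a deletion ...
print("added:  ", [e.name for e in added])      # ['Client']   -- ... plus an addition
```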
Grand Challenges of Traceability: The Next Ten Years
In 2007, the software and systems traceability community met at the first
Natural Bridge symposium on the Grand Challenges of Traceability to establish
and address research goals for achieving effective, trustworthy, and ubiquitous
traceability. Ten years later, in 2017, the community came together to evaluate
a decade of progress towards achieving these goals. These proceedings document
some of that progress. They include a series of short position papers,
representing current work in the community organized across four process axes
of traceability practice. The sessions covered topics spanning Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and Benchmarks. Two breakout groups
focused on the importance of creating and sharing traceability datasets within
the research community, and discussed challenges related to the adoption of
tracing techniques in industrial practice. Members of the research community
are engaged in many active, ongoing, and impactful research projects. Our hope
is that ten years from now we will be able to look back at a productive decade
of research and claim that we have achieved the overarching Grand Challenge of Traceability, which calls for traceability to be always present, built into the engineering process, and to have "effectively disappeared without a trace". We hope that others will see the potential that traceability has for
empowering software and systems engineers to develop higher-quality products at
increasing levels of complexity and scale, and that they will join the active
community of Software and Systems traceability researchers as we move forward
into the next decade of research.
Technical Debt Prioritization: State of the Art. A Systematic Literature Review
Background. Software companies need to manage and refactor Technical Debt
issues. Therefore, it is necessary to understand if and when refactoring
Technical Debt should be prioritized with respect to developing features or
fixing bugs. Objective. The goal of this study is to investigate the existing
body of knowledge in software engineering to understand what Technical Debt
prioritization approaches have been proposed in research and industry. Method.
We conducted a Systematic Literature Review of 384 unique papers published until 2018, following a consolidated methodology applied in Software Engineering, and included 38 primary studies. Results. Different approaches have
been proposed for Technical Debt prioritization, all having different goals and
optimizing on different criteria. The proposed measures capture only a small
part of the plethora of factors used to prioritize Technical Debt qualitatively
in practice. We report an impact map of such factors. However, there is a lack of empirically validated tools. Conclusion. We observed that Technical Debt prioritization research is still preliminary and that there is no consensus on which factors are important and how to measure them. Consequently, we cannot consider current research conclusive; in this paper, we outline directions for the necessary future investigations.
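To make concrete what "optimizing on different criteria" can look like, here is a toy cost-benefit ranking; the items, effort figures, and the interest-per-effort criterion are invented for illustration and do not come from any of the surveyed approaches:

```python
# Illustrative sketch (made-up items and figures, not from the surveyed studies):
# rank Technical Debt items by the interest they accrue per unit of remediation effort.
from dataclasses import dataclass

@dataclass
class DebtItem:
    name: str
    principal_hours: float   # estimated effort to refactor the item away
    interest_hours: float    # estimated extra effort it causes per release

def prioritize(items):
    """Highest interest-to-principal ratio first: the debt that hurts most per hour spent."""
    return sorted(items, key=lambda i: i.interest_hours / i.principal_hours, reverse=True)

backlog = [
    DebtItem("duplicated pricing logic", principal_hours=16, interest_hours=6),
    DebtItem("outdated ORM version",     principal_hours=40, interest_hours=4),
    DebtItem("missing tests in billing", principal_hours=8,  interest_hours=5),
]

for item in prioritize(backlog):
    print(f"{item.name}: ratio = {item.interest_hours / item.principal_hours:.2f}")
```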
Formal Verification of Security Protocol Implementations: A Survey
Automated formal verification of security protocols has mostly focused on analyzing high-level abstract models which, however, differ significantly from real protocol implementations written in programming languages. Recently, some researchers have started investigating techniques that bring automated formal proofs closer to real implementations. This paper surveys these attempts, focusing on approaches that target the application code that implements the protocol logic, rather than the libraries that implement cryptography. In these approaches, the libraries are assumed to correctly implement some abstract model, and the aim is to derive formal proofs that, under this assumption, give assurance about the application code implementing the protocol logic. The two main approaches of model extraction and code generation are presented, along with the main techniques adopted for each approach.