85 research outputs found
Debian Packages Repositories as Software Product Line Models. Towards Automated Analysis
The automated analysis of variability models in general, and feature models in particular, is a thriving research topic. There have been numerous contributions in this area over the last twenty years, including both research papers and tools. However, the lack of realistic variability models with which to evaluate those techniques and tools is recognized as a major problem by the community. To address this issue, we looked for large-scale variability models in the open source community. We found that the Debian package dependency language can be interpreted as a software product line variability model. Moreover, we found that those models can be automatically analysed in much the same way as software product line variability models. In this paper, we take a first step towards the automated analysis of the Debian package dependency language. We provide a mapping from these models to propositional formulas. We also show how this could allow us to perform analysis operations on the repositories, such as the detection of anomalies (e.g. packages that cannot be installed).

CICYT TIN2009-07366, Junta de Andalucía TIC-253
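The mapping idea described above can be illustrated with a minimal sketch. The package names and relationships below are invented for illustration, not taken from a real Debian snapshot, and the brute-force search stands in for the SAT-based reasoning a real analyser would use:

```python
from itertools import product

# Hypothetical toy repository. Each Depends entry is CNF-like: a list of
# OR-groups (alternatives). Conflicts maps a package to packages it excludes.
packages = ["editor", "libgui", "libtk", "libqt", "broken"]
depends = {
    "editor": [["libgui"]],
    "libgui": [["libtk", "libqt"]],       # libgui needs libtk OR libqt
    "broken": [["libtk"], ["libqt"]],     # broken needs libtk AND libqt
}
conflicts = {"libtk": ["libqt"]}

def installable(target):
    """Brute-force check: does some install set containing `target`
    satisfy every dependency group and violate no conflict?"""
    for bits in product([False, True], repeat=len(packages)):
        inst = {p for p, b in zip(packages, bits) if b}
        if target not in inst:
            continue
        deps_ok = all(any(alt in inst for alt in group)
                      for p in inst for group in depends.get(p, []))
        no_conflict = not any(q in inst
                              for p in inst for q in conflicts.get(p, []))
        if deps_ok and no_conflict:
            return True
    return False

print(installable("editor"))  # True: {editor, libgui, libtk} works
print(installable("broken"))  # False: needs both sides of a conflict
```

The second query shows the kind of anomaly detection the abstract mentions: `broken` can never be installed because its dependencies force a conflicting pair.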
Towards interoperability of i* models using iStarML
Goal-oriented and agent-oriented modelling provides an effective approach to understanding distributed information systems that need to operate in open, heterogeneous and evolving environments. Frameworks first introduced more than ten years ago have been extended with language variants, analysis methods and CASE tools, posing language semantics and tool interoperability issues. Among them, the i* framework is one of the most widespread. We focus on i*-based modelling languages and tools and on the problem of supporting model exchange between them. In this paper, we introduce the i* interoperability problem and derive an XML interchange format, called iStarML, as a practical solution to this problem. We first discuss the main requirements for its definition, then we characterise the core concepts of i* and detail the tags and options of the interchange format. We complete the presentation of iStarML by showing some possible applications. Finally, a survey of the i* community's perception of iStarML is included for assessment purposes.
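To give a flavour of what such an XML interchange document looks like, here is a sketch that builds an iStarML-like fragment programmatically. The tag and attribute names are assumptions based on the abstract's description (actors and intentional elements with typed tags), not the normative iStarML schema:

```python
import xml.etree.ElementTree as ET

# Illustrative iStarML-like document; element/attribute names are assumed.
root = ET.Element("istarml", version="1.0")
diagram = ET.SubElement(root, "diagram", name="meeting-scheduler")
actor = ET.SubElement(diagram, "actor", id="a1", name="Meeting Initiator")
ET.SubElement(actor, "ielement", id="g1", name="Schedule meeting", type="goal")

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

An i*-aware tool on the receiving end would parse this document and rebuild its internal model from the typed elements, which is the interoperability scenario the paper targets.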
Automated generation of computationally hard feature models using evolutionary algorithms
This is the post-print version of the final paper published in Expert Systems with Applications. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright @ 2014 Elsevier B.V.

A feature model is a compact representation of the products of a software product line. The automated extraction of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools. Performance evaluations in this domain mainly rely on the use of random feature models. However, these only provide a rough idea of the behaviour of the tools on average problems and are not sufficient to reveal their real strengths and weaknesses. In this article, we propose to model the problem of finding computationally hard feature models as an optimization problem, and we solve it using a novel evolutionary algorithm for optimized feature models (ETHOM). Given a tool and an analysis operation, ETHOM generates input models of a predefined size maximizing aspects such as the execution time or the memory consumption of the tool when performing the operation over the model. This allows users and developers to know the performance of tools in pessimistic cases, providing a better idea of their real power and revealing performance bugs. Experiments using ETHOM on a number of analyses and tools have successfully identified models producing much longer execution times and higher memory consumption than those obtained with random models of identical or even larger size.

European Commission (FEDER), the Spanish Government and the Andalusian Government
ETHOM: An Evolutionary Algorithm for Optimized Feature Models Generation (v. 1.2): Technical Report ISA-2012-TR-05
A feature model defines the valid combinations of features in a domain. The automated extraction of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools. The progress of this discipline is leading to an increasing concern to test and compare the performance of analysis solutions using tough input models that show the behaviour of the tools in extreme situations (e.g. those producing the longest execution times or the highest memory consumption). Currently, these feature models are generated randomly, ignoring the internal aspects of the tools under test. As a result, they only provide a rough idea of the behaviour of the tools on average problems and are not sufficient to reveal their real strengths and weaknesses.

In this technical report, we model the problem of finding computationally hard feature models as an optimization problem, and we solve it using a novel evolutionary algorithm. Given a tool and an analysis operation, our algorithm generates input models of a predefined size maximizing aspects such as the execution time or the memory consumption of the tool when performing the operation over the model. This allows users and developers to know the behaviour of tools in pessimistic cases, providing a better idea of their real power. Experiments using our evolutionary algorithm on a number of analysis operations and tools have successfully identified input models causing much longer execution times and higher memory consumption than random models of identical or even larger size. Our solution is generic and applicable to a variety of optimization problems on feature models, not only those involving analysis operations. In view of the positive results, we expect this work to be the seed for a new wave of research contributions exploiting the benefits of evolutionary programming in the field of feature modelling.
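The general evolutionary scheme behind this kind of algorithm can be sketched compactly. The sketch below is not ETHOM itself: genomes are plain bit strings rather than encoded feature models, and the fitness function is a stand-in proxy, since the real fitness (measuring a tool's execution time or memory on a decoded model) depends on the tool under test:

```python
import random

random.seed(0)
SIZE = 12  # fixed genome size, mirroring the paper's "predefined size"

def fitness(genome):
    # Stand-in for "cost incurred by the analysis tool on this model":
    # here we simply reward alternating bits as a hypothetical hardness proxy.
    return sum(a != b for a, b in zip(genome, genome[1:]))

def evolve(pop_size=20, generations=40, mut_rate=0.1):
    pop = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, SIZE)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mut_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Swapping the proxy fitness for a wrapper that decodes the genome into a feature model, runs the target tool, and returns the measured time or memory gives the optimization loop the abstract describes.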
A Scalable Design Framework for Variability Management in Large-Scale Software Product Lines
Variability management is one of the major challenges in software product line adoption, since variability needs to be efficiently managed at various levels of the software product line development process (e.g., requirements analysis, design, implementation, etc.).

One of the main challenges within variability management is the handling and effective visualization of large-scale (industry-size) models, which in many projects can reach the order of thousands of variability points, along with the dependency relationships that exist among them. These issues have raised many concerns regarding the scalability of current variability management tools and techniques and their lack of industrial adoption.

To address the scalability issues, this work employed a combination of quantitative and qualitative research methods to identify the reasons behind the limited scalability of existing variability management tools and techniques. In addition to producing a comprehensive catalogue of existing tools, the outcome from this stage helped in understanding the major limitations of existing tools.

Based on the findings, a novel approach was created for managing variability that employed two main principles for supporting scalability. First, the separation-of-concerns principle was applied by creating multiple views of variability models to alleviate information overload. Second, hyperbolic trees were used to visualise models (compared to the Euclidean-space trees traditionally used). The result was an approach that can represent models encompassing hundreds of variability points and complex relationships. These concepts were demonstrated by implementing them in an existing variability management tool and using it to model a real-life product line with over a thousand variability points.

Finally, in order to assess the work, an evaluation framework was designed based on various established usability assessment best practices and standards. The framework was then used with several case studies to benchmark the performance of this work against other existing tools.
Automated analysis of feature models 20 years later: a literature review
Software product line engineering is about producing a set of related products that share more commonalities than variabilities. Feature models are widely used for variability and commonality management in software product lines. Feature models are information models in which a set of products is represented as a set of features in a single model. The automated analysis of feature models deals with the computer-aided extraction of information from feature models. The literature on this topic has contributed a set of operations, techniques, tools and empirical results which had not been surveyed until now. This paper provides a comprehensive literature review on the automated analysis of feature models 20 years after their invention. This paper contributes by bringing together previously disparate streams of work to help shed light on this thriving area. We also present a conceptual framework to understand the different proposals as well as to categorise future contributions. We finally discuss the different studies and propose some challenges to be faced in the future.

CICYT TIN2009-07366, CICYT TIN2006-00472, Junta de Andalucía TIC-253
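The core idea behind automated feature model analysis can be shown with a minimal example. The feature model below (a car with a mandatory gearbox, an optional GPS, and an XOR group for the gearbox type) is hypothetical, and the brute-force enumeration stands in for the propositional-logic solvers typically used in this line of work:

```python
from itertools import product

# Toy feature model: root "car", mandatory child "gear", optional child
# "gps", and an XOR group under gear: exactly one of {manual, automatic}.
features = ["car", "gear", "gps", "manual", "automatic"]

def valid(sel):
    s = dict(zip(features, sel))
    return (s["car"]                                   # root always selected
            and s["gear"] == s["car"]                  # mandatory relationship
            and (not s["gps"] or s["car"])             # optional needs parent
            and (s["manual"] != s["automatic"] if s["gear"]
                 else not (s["manual"] or s["automatic"])))  # XOR group

# "Number of products" is a classic analysis operation on feature models.
products = [sel for sel in product([False, True], repeat=len(features))
            if valid(sel)]
print(len(products))  # 4: {manual, automatic} x {gps, no gps}
```

Each analysis operation surveyed in this area (void model detection, dead features, product counting, etc.) reduces to a question about the satisfying assignments of such a formula.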
ETHOM: An Evolutionary Algorithm for Optimized Feature Models Generation - TECHNICAL REPORT ISA-2012-TR-01 (v. 1.1)
A feature model defines the valid combinations of features in a domain. The automated extraction of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools. The progress of this discipline is leading to an increasing concern to test and compare the performance of analysis solutions using tough input models that show the behaviour of the tools in extreme situations (e.g. those producing the longest execution times or the highest memory consumption). Currently, these feature models are generated randomly, ignoring the internal aspects of the tools under test. As a result, they only provide a rough idea of the behaviour of the tools on average problems and are not sufficient to reveal their real strengths and weaknesses.

In this technical report, we model the problem of finding computationally hard feature models as an optimization problem, and we solve it using a novel evolutionary algorithm. Given a tool and an analysis operation, our algorithm generates input models of a predefined size maximizing aspects such as the execution time or the memory consumption of the tool when performing the operation over the model. This allows users and developers to know the behaviour of tools in pessimistic cases, providing a better idea of their real power. Experiments using our evolutionary algorithm on a number of analysis operations and tools have successfully identified input models causing much longer execution times and higher memory consumption than random models of identical or even larger size. Our solution is generic and applicable to a variety of optimization problems on feature models, not only those involving analysis operations. In view of the positive results, we expect this work to be the seed for a new wave of research contributions exploiting the benefits of evolutionary programming in the field of feature modelling.
Supporting distributed product configuration by integrating heterogeneous variability modeling approaches
Context
In industrial settings, products are developed by more than one organization. Software vendors and suppliers typically maintain their own product lines, which contribute to a larger (multi) product line or software ecosystem. It is unrealistic to assume that the participating organizations will agree on using a specific variability modeling technique; they will rather use different approaches and tools to manage the variability of their systems.
Objective
We aim to support product configuration in software ecosystems based on several variability models with different semantics that have been created using different notations.
Method
We present an integrative approach that provides a unified perspective to users configuring products in multi product line environments, regardless of the different modeling methods and tools used internally. We also present a technical infrastructure and a prototype implementation based on web services.
Results
We show the feasibility of the approach and its implementation by using it with the three most widespread types of variability modeling approaches in the product line community, i.e., feature-based, OVM-style, and decision-oriented modeling. To demonstrate the feasibility and flexibility of our approach, we present an example derived from industrial experience in enterprise resource planning. We further applied the approach to support the configuration of privacy settings in the Android ecosystem based on multiple variability models. We also evaluated the performance of different model enactment strategies used in our approach.
Conclusions
Tools and techniques allowing stakeholders to handle variability in a uniform manner can considerably foster the initiation and growth of software ecosystems from the perspective of software reuse and configuration.

Ministerio de Economía y Competitividad TIN2012-32273, Junta de Andalucía TIC-186
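The "unified perspective over heterogeneous models" idea can be sketched as an adapter layer. The class and method names below are illustrative assumptions, not the paper's actual API or web-service interface; the sketch only shows how feature-based and decision-oriented models could expose their choices through one common view:

```python
# Each organization's variability model sits behind a common adapter that
# exposes its configurable choices. Names here are hypothetical.

class FeatureModelAdapter:
    """Wraps a feature-based model: each feature is a yes/no choice."""
    def __init__(self, features):
        self.features = features
    def options(self):
        return {f: [True, False] for f in self.features}

class DecisionModelAdapter:
    """Wraps a decision-oriented model: each decision has allowed answers."""
    def __init__(self, decisions):
        self.decisions = decisions  # name -> list of allowed answers
    def options(self):
        return dict(self.decisions)

def unified_options(adapters):
    """Merge the choices exposed by heterogeneous models into one view."""
    merged = {}
    for adapter in adapters:
        merged.update(adapter.options())
    return merged

view = unified_options([
    FeatureModelAdapter(["gps", "bluetooth"]),
    DecisionModelAdapter({"language": ["en", "de", "es"]}),
])
print(sorted(view))  # ['bluetooth', 'gps', 'language']
```

A configurator built on top of such a layer never needs to know which notation produced each choice, which is the uniformity the conclusion argues for.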