MROS: Runtime Adaptation For Robot Control Architectures
Known attempts to build autonomous robots rely on complex control
architectures, often implemented with the Robot Operating System (ROS)
platform. Runtime adaptation is needed in these systems to cope with
component failures and with contingencies arising from dynamic environments;
otherwise, these issues affect the reliability and quality of the mission
execution. Existing proposals on how to build self-adaptive systems in
robotics usually require a major re-design of the control architecture and
rely on complex tools unfamiliar to the robotics community. Moreover, they
are hard to reuse across applications.
This paper presents MROS: a model-based framework for run-time adaptation of
robot control architectures based on ROS. MROS uses a combination of
domain-specific languages to model architectural variants and capture mission
quality concerns, and an ontology-based implementation of the MAPE-K and
meta-control visions for run-time adaptation. The experimental results
obtained by applying MROS to two realistic ROS-based robotic demonstrators
show the benefits of our approach in terms of the quality of the mission
execution, and of MROS' extensibility and re-usability across robotic
applications.
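The MAPE-K loop that the abstract refers to can be sketched as follows. This is a minimal illustrative sketch: the variant names, quality values, and thresholds are invented assumptions, not the actual MROS ontology or API.

```python
# Knowledge: architectural variants annotated with expected quality attributes
# (hypothetical values; a real system would load these from its models).
VARIANTS = {
    "fast_nav": {"safety": 0.6, "energy": 0.9},
    "safe_nav": {"safety": 0.9, "energy": 0.5},
}

def monitor(diagnostics):
    """Monitor: collect component status and quality observations."""
    return diagnostics

def analyze(status, current, requirement):
    """Analyze: decide whether the current variant still meets the mission
    requirement; returns True when adaptation is needed."""
    broken = status.get("failed_components", [])
    meets = VARIANTS[current]["safety"] >= requirement and not broken
    return not meets

def plan(requirement):
    """Plan: pick the variant with the best safety that satisfies the
    requirement, or None if no variant qualifies."""
    ok = {v: q["safety"] for v, q in VARIANTS.items()
          if q["safety"] >= requirement}
    return max(ok, key=ok.get) if ok else None

def execute(variant):
    """Execute: a real system would reconfigure the ROS node graph here."""
    return variant

def mape_k_step(diagnostics, current, requirement=0.8):
    status = monitor(diagnostics)
    if analyze(status, current, requirement):
        target = plan(requirement)
        if target is not None:
            return execute(target)
    return current
```

A single step either keeps the current variant or switches to one that satisfies the quality requirement, mirroring the monitor-analyze-plan-execute cycle over shared knowledge.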
ETHOM: An Evolutionary Algorithm for Optimized Feature Models Generation (v. 1.2): Technical Report ISA-2012-TR-05
A feature model defines the valid combinations of features in a domain.
The automated extraction of information from feature models is a thriving
topic involving numerous analysis operations, techniques and tools.
The progress of this discipline is leading to an increasing interest in
testing and comparing the performance of analysis solutions using tough input
models that reveal the behaviour of the tools in extreme situations (e.g.
those producing the longest execution times or the highest memory
consumption). Currently, these feature models are generated randomly,
ignoring the internal aspects of the tools under test. As a result, they only
provide a rough idea of the behaviour of the tools on average problems and
are not sufficient to reveal their real strengths and weaknesses.
In this technical report, we model the problem of finding computationally
hard feature models as an optimization problem and we solve it using a
novel evolutionary algorithm. Given a tool and an analysis operation, our
algorithm generates input models of a predefined size maximizing aspects such
as the execution time or the memory consumption of the tool when performing
the operation over the model. This allows users and developers to
know the behaviour of tools in pessimistic cases, providing a better idea of
their real power. Experiments using our evolutionary algorithm on a number
of analysis operations and tools have successfully identified input models
causing much longer execution times and higher memory consumption
than random models of identical or even larger size. Our solution is generic
and applicable to a variety of optimization problems on feature models, not
only those involving analysis operations. In view of the positive results, we
expect this work to be the seed for a new wave of research contributions
exploiting the benefits of evolutionary programming in the field of feature
modelling.
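The search scheme described above can be sketched with a toy encoding. The "analysis tool" below is a deliberately input-sensitive stand-in, the bit-vector encoding is not ETHOM's real feature-model encoding, and fitness counts steps as a deterministic proxy for the execution time the paper actually measures.

```python
import random

random.seed(42)

def analysis_tool(model):
    # Hypothetical stand-in tool: its work grows quadratically with the
    # number of selected features, so "harder" inputs take more steps.
    n = sum(model)
    steps = 0
    for _ in range(n * n):
        steps += 1
    return steps

def fitness(model):
    # ETHOM would measure wall-clock time or memory consumption here;
    # counted steps serve as a deterministic proxy for this sketch.
    return analysis_tool(model)

def evolve(size=32, pop=20, gens=15):
    population = [[random.randint(0, 1) for _ in range(size)]
                  for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, size)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(size)          # point mutation
            child[i] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

hardest = evolve()
```

After a few generations the population converges toward inputs that maximize the tool's cost, which is the mechanism the report exploits to expose pessimistic cases.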
Reverse Engineering Feature Models with Evolutionary Algorithms: An Exploratory Study
Successful software evolves, more and more commonly, from
a single system to a set of system variants tailored to meet the similar
and yet different functionality required by the distinct clients and
users. Software Product Line Engineering (SPLE) is a software development
paradigm that has proven effective for coping with this scenario. At
the core of SPLE is variability modeling, which employs Feature Models
(FMs) as the de facto standard to represent the combinations of features
that distinguish the system variants. Reverse engineering FMs consists in
constructing a feature model from a set of product descriptions. This research
area is becoming increasingly active within the SPLE community,
where the problem has been addressed from different perspectives and with
approaches ranging from the analysis of configuration scripts, the use of
propositional logic or natural language techniques, to ad hoc algorithms. In
this paper, we explore the feasibility of using Evolutionary Algorithms
(EAs) to synthesize FMs from the feature sets that describe the system
variants. We analyzed 59 representative case studies of different
characteristics and complexity. Our exploratory study found that FMs that
denote proper supersets of the desired feature sets can be obtained with a
small number of generations. However, reducing the differences between
these two sets with an effective and scalable fitness function remains an
open question. We believe that this work is a first step towards leveraging
the extensive wealth of Search-Based Software Engineering techniques to
address this and other variability management challenges.
CICYT TIN2009-07366, Junta de Andalucía TIC-590
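The fitness question the study leaves open can be made concrete with a toy encoding: a candidate model should denote a product set as close as possible to the observed variants. The feature names and rule encoding (mandatory features plus requires-pairs) below are illustrative assumptions, not the paper's FM representation.

```python
from itertools import product as cartesian

FEATURES = ["A", "B", "C"]

# Observed system variants (feature sets) we want to reverse engineer.
DESIRED = {frozenset({"A"}), frozenset({"A", "B"}), frozenset({"A", "B", "C"})}

def denoted_products(mandatory, requires):
    """All feature combinations satisfying the candidate model's rules."""
    out = set()
    for bits in cartesian([0, 1], repeat=len(FEATURES)):
        p = frozenset(f for f, b in zip(FEATURES, bits) if b)
        if not mandatory <= p:                       # mandatory features present
            continue
        if any(a in p and b not in p for a, b in requires):
            continue                                 # "a requires b" violated
        out.add(p)
    return out

def fitness(mandatory, requires):
    """Symmetric difference: 0 means the model denotes exactly the variants;
    a positive value means extra or missing products."""
    return len(denoted_products(mandatory, requires) ^ DESIRED)
```

With only feature A mandatory, the model denotes a proper superset of the desired sets (the situation the study reports); adding the rule "C requires B" closes the gap to zero.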
Learning Very Large Configuration Spaces: What Matters for Linux Kernel Sizes
Linux kernels are used in a wide variety of appliances, many of them having strong requirements on the kernel size due to constraints such as limited memory or instant boot. With more than ten thousand configuration options to choose from, obtaining a suitable trade-off between kernel size and functionality is an extremely hard problem. Developers, contributors, and users actually spend significant effort to document, understand, and eventually tune (combinations of) options for meeting a kernel size requirement. In this paper, we investigate how machine learning can help explain what matters for predicting a given Linux kernel size. Unveiling what matters in such a very large configuration space is challenging for two reasons: (1) whatever the time we spend on it, we can only build and measure a tiny fraction of possible kernel configurations; (2) the prediction model should be both accurate and interpretable. We compare different machine learning algorithms and demonstrate the benefits of specific feature encoding and selection methods to learn an accurate model that is fast to compute and simple to interpret. Our results are validated over 95,854 kernel configurations and show that we can achieve low prediction errors over a reduced set of options. We also show that we can extract interpretable information for refining documentation and experts' knowledge of Linux, or even assigning more sensible default values to options.
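The pipeline the abstract describes (encode options, fit a predictor, extract interpretable impacts) can be sketched on synthetic data. The option names, sizes, and the linear model are illustrative assumptions; the paper compares several learning algorithms on real measured configurations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: 200 sampled configurations over 5 boolean options,
# encoded as 0/1 features (the paper's encoding step, in miniature).
options = ["DEBUG_INFO", "KASAN", "MODULES", "SMP", "PRINTK"]
X = rng.integers(0, 2, size=(200, 5)).astype(float)

# Synthetic ground truth: DEBUG_INFO and KASAN dominate kernel size (MB).
true_w = np.array([120.0, 45.0, 8.0, 3.0, 1.0])
y = 10.0 + X @ true_w + rng.normal(0.0, 2.0, size=200)

# Fit ordinary least squares with an intercept column; the coefficients
# are directly interpretable as per-option size impacts.
A = np.hstack([np.ones((200, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Rank options by estimated size impact (the interpretable output).
ranked = sorted(zip(options, coef[1:]), key=lambda t: -abs(t[1]))
for name, w in ranked:
    print(f"{name}: {w:+.1f} MB")
```

Even this toy model recovers which options dominate the size, which is the kind of interpretable signal the paper uses to refine documentation and default values.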
Automated Generation of Computationally Hard Feature Models Using Evolutionary Algorithms
A feature model is a compact representation of the products of a software product line. The automated extraction
of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools.
Performance evaluations in this domain mainly rely on the use of random feature models. However, these only provide
a rough idea of the behaviour of the tools with average problems and are not sufficient to reveal their real strengths and
weaknesses. In this article, we propose to model the problem of finding computationally hard feature models as an
optimization problem and we solve it using a novel evolutionary algorithm for optimized feature models (ETHOM).
Given a tool and an analysis operation, ETHOM generates input models of a predefined size maximizing aspects such
as the execution time or the memory consumption of the tool when performing the operation over the model. This
allows users and developers to know the performance of tools in pessimistic cases providing a better idea of their
real power and revealing performance bugs. Experiments using ETHOM on a number of analyses and tools have
successfully identified models producing much longer execution times and higher memory consumption than those
obtained with random models of identical or even larger size.
CICYT TIN2009-07366, CICYT TIN2012-32273, Junta de Andalucía TIC-5906, Junta de Andalucía P12-TIC-186