Automated generation of computationally hard feature models using evolutionary algorithms
This is the post-print version of the final paper published in Expert Systems with Applications; the published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document, and changes may have been made to this work since it was submitted for publication. Copyright © 2014 Elsevier B.V.
A feature model is a compact representation of the products of a software product line. The automated extraction of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools. Performance evaluations in this domain mainly rely on the use of random feature models. However, these only provide a rough idea of the behaviour of the tools with average problems and are not sufficient to reveal their real strengths and weaknesses. In this article, we model the problem of finding computationally hard feature models as an optimization problem and solve it using a novel evolutionary algorithm for optimized feature models (ETHOM). Given a tool and an analysis operation, ETHOM generates input models of a predefined size that maximize aspects such as the execution time or the memory consumption of the tool when performing the operation over the model. This allows users and developers to know the performance of tools in pessimistic cases, providing a better idea of their real power and revealing performance bugs. Experiments using ETHOM on a number of analyses and tools have successfully identified models producing much longer execution times and higher memory consumption than those obtained with random models of identical or even larger size.
Funding: European Commission (FEDER), the Spanish Government and the Andalusian Government.
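ETHOM's operators are not spelled out in the abstract. As a rough illustration of the general idea only (evolving inputs that maximise a tool's execution time), here is a minimal evolutionary-search sketch; the individual representation and the random_model, mutate, crossover and measure_runtime helpers are hypothetical stand-ins, not ETHOM's actual encoding of feature models or its fitness function.

```python
import random
import time

# Hypothetical helpers: in ETHOM these would build, mutate and combine
# feature models of a fixed size and run an analysis tool on them.
def random_model(size):
    return [random.random() for _ in range(size)]

def mutate(model):
    m = list(model)
    m[random.randrange(len(m))] = random.random()
    return m

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def measure_runtime(model):
    # Stand-in fitness: in the paper this would be the execution time (or
    # memory use) of an analysis tool performing one operation on the model.
    start = time.perf_counter()
    sorted(model * 50)  # dummy workload
    return time.perf_counter() - start

def evolve_hard_inputs(size=100, pop=20, generations=30):
    """Evolve inputs of a fixed size that maximise the measured runtime."""
    population = [random_model(size) for _ in range(pop)]
    for _ in range(generations):
        scored = sorted(population, key=measure_runtime, reverse=True)
        elite = scored[: pop // 4]  # keep the slowest (hardest) inputs
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop - len(elite))]
        population = elite + children
    return max(population, key=measure_runtime)

if __name__ == "__main__":
    hardest = evolve_hard_inputs()
    print("worst-case runtime found:", measure_runtime(hardest))
```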
Progress in AI Planning Research and Applications
Planning has made significant progress since its inception in the 1970s, both in the efficiency and sophistication of its algorithms and representations and in its potential for application to real problems. In this paper we sketch the foundations of planning as a sub-field of Artificial Intelligence and the history of its development over the past three decades. We then discuss some of the recent achievements within the field and provide experimental data demonstrating the progress that has been made in the application of general planners to realistic and complex problems. The paper concludes by identifying some of the open issues that remain as important challenges for future research in planning.
Identifying Security-Critical Cyber-Physical Components in Industrial Control Systems
In recent years, Industrial Control Systems (ICS) have become an appealing target for cyber attacks, with potentially massive destructive consequences. Security metrics are therefore essential to assess their security posture. In this paper, we present a novel ICS security metric based on AND/OR graphs that represent cyber-physical dependencies among network components. Our metric is able to efficiently identify sets of critical cyber-physical components, with minimal cost for an attacker, such that if compromised, the system would enter a non-operational state. We address this problem by efficiently transforming the input AND/OR graph-based model into a weighted logical formula that is then used to build and solve a Weighted Partial MAX-SAT problem. Our tool, META4ICS, leverages state-of-the-art techniques from the field of logical satisfiability optimisation in order to achieve efficient computation times. Our experimental results indicate that the proposed security metric can efficiently scale to networks with thousands of nodes and be computed in seconds. In addition, we present a case study where we have used our system to analyse the security posture of a realistic water transport network. We discuss our findings on the plant as well as further security applications of our metric.
Keywords: security metrics, industrial control systems, cyber-physical systems, AND-OR graphs, MAX-SAT resolution
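The abstract does not give the actual encoding used by META4ICS. As a toy illustration of the Weighted Partial MAX-SAT step only, the sketch below encodes a made-up dependency (the plant fails if the pump is compromised or both of its controllers are): hard clauses force the system into a non-operational state, and soft clauses, weighted by invented attack costs, reward leaving components uncompromised. It assumes the python-sat package; all component names and costs are illustrative.

```python
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# Hypothetical variables: 1 = pump compromised,
# 2 = controller A compromised, 3 = controller B compromised.
wcnf = WCNF()

# Hard clauses: the plant must become non-operational, i.e. the pump is
# compromised OR both controllers are compromised:
#   (pump OR ctrlA) AND (pump OR ctrlB)
wcnf.append([1, 2])
wcnf.append([1, 3])

# Soft clauses: prefer leaving each component uncompromised; the weight is
# the (made-up) cost an attacker pays to compromise it.
wcnf.append([-1], weight=5)   # pump is expensive to attack
wcnf.append([-2], weight=2)   # controller A
wcnf.append([-3], weight=2)   # controller B

with RC2(wcnf) as solver:
    model = solver.compute()                  # optimal assignment
    compromised = [v for v in model if v > 0]
    print("cheapest critical set (variable ids):", compromised)
    print("total attack cost:", solver.cost)
```

Here the solver reports compromising both controllers (total cost 4) rather than the pump (cost 5) as the cheapest way to take the toy plant down.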
Optimal fault-tolerant placement of relay nodes in a mission critical wireless network
The operations of many critical infrastructures (e.g., airports) depend heavily on the proper functioning of the radio communication network supporting operations. As a result, such a communication network is indeed a mission-critical communication network that needs adequate protection from external electromagnetic interference. This is usually done through radiogoniometers. Basically, by using at least three suitably deployed radiogoniometers and a gateway gathering information from them, sources of electromagnetic emissions that are not supposed to be present in the monitored area can be localised. Typically, relay nodes are used to connect radiogoniometers to the gateway. As a result, some degree of fault tolerance for the network of relay nodes is essential in order to offer reliable monitoring. On the other hand, deployment of relay nodes is typically quite expensive. We therefore have two conflicting requirements: minimise costs while guaranteeing a given fault tolerance. In this paper we address the problem of computing a deployment for relay nodes that minimises the relay node network cost while at the same time guaranteeing proper working of the network even when some of the relay nodes (up to a given maximum number) become faulty (fault tolerance). We show that the above problem can be formulated both as a Mixed Integer Linear Programming (MILP) problem and as a Pseudo-Boolean Satisfiability (PB-SAT) optimisation problem, and present experimental results comparing the two approaches on realistic scenarios.
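The paper's actual MILP formulation is not reproduced in the abstract. As a much-simplified sketch of the cost-versus-fault-tolerance trade-off, the toy model below picks a minimum-cost subset of candidate relay sites such that every radiogoniometer is served by at least k+1 selected sites, so up to k relay faults can be tolerated. It uses the PuLP library; the site names, costs and coverage data are invented, and the real problem involves network connectivity rather than simple coverage.

```python
import pulp

# Toy data (hypothetical): candidate relay sites with deployment costs, and
# which sites can connect each radiogoniometer to the gateway.
sites = {"s1": 4, "s2": 3, "s3": 5, "s4": 2}     # site -> cost
covers = {                                        # radiogoniometer -> usable sites
    "r1": ["s1", "s2", "s4"],
    "r2": ["s2", "s3"],
    "r3": ["s1", "s3", "s4"],
}
k = 1  # number of relay-node faults to tolerate

prob = pulp.LpProblem("relay_placement", pulp.LpMinimize)
use = {s: pulp.LpVariable(f"use_{s}", cat="Binary") for s in sites}

# Objective: total deployment cost of the selected relay sites.
prob += pulp.lpSum(sites[s] * use[s] for s in sites)

# Simplified fault tolerance: every radiogoniometer must be served by at
# least k+1 selected sites, so any k relay failures leave a site available.
for r, usable in covers.items():
    prob += pulp.lpSum(use[s] for s in usable) >= k + 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [s for s in sites if use[s].value() > 0.5]
print("deploy relays at:", chosen, "cost:", pulp.value(prob.objective))
```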
A Comparison of State-Based Modelling Tools for Model Validation
In model-based testing (MBT), one of the biggest decisions taken before modelling is which modelling language and model analysis tool to use for the system under investigation. UML, Alloy and Z are examples of popular state-based modelling languages. In the literature, there has been research on the similarities and differences between modelling languages. However, we believe that, in addition to recognising the expressive power of modelling languages, it is crucial to identify the capabilities and weaknesses of the analysis tools that parse and analyse models written in these languages. In order to explore this area, we have chosen four model analysis tools: USE, Alloy Analyzer, ZLive and ProZ, and observed how the modelling and validation stages of MBT are handled by these tools for the same system. Through this experiment, we not only concretise the tasks that form the modelling and validation stages of the MBT process, but also reveal how efficiently these tasks are carried out in different tools.
Positional Games and QBF: The Corrective Encoding
Positional games are a mathematical class of two-player games comprising Tic-tac-toe and its generalizations. We propose a novel encoding of these games into Quantified Boolean Formulas (QBF) such that a game instance admits a winning strategy for the first player if and only if the corresponding formula is true. Our approach improves over previous QBF encodings of games in multiple ways. First, it is generic and lets us encode other positional games, such as Hex. Second, structural properties of positional games, together with a careful treatment of illegal moves, let us generate more compact instances that can be solved faster by state-of-the-art QBF solvers. We establish the latter fact through extensive experiments. Finally, the compactness of our new encoding makes it feasible to translate realistic game problems. We identify a few such problems of historical significance and put them forward to the QBF community as milestones of increasing difficulty.
Accepted for publication in the 23rd International Conference on Theory and Applications of Satisfiability Testing (SAT 2020).
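The QBF encoding itself is not given in the abstract. The sketch below only illustrates the property the formula is meant to capture, namely whether the first player has a winning strategy, by brute-force game-tree search over a small positional game (standard 3x3 Tic-tac-toe in the Maker-Maker convention). It is an illustrative solver, not the paper's encoding.

```python
from functools import lru_cache

# Tiny positional game: players alternately claim free cells and win by
# completing any winning set. This is ordinary 3x3 Tic-tac-toe.
CELLS = frozenset(range(9))
WINS = [frozenset(w) for w in
        [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
         (0, 4, 8), (2, 4, 6)]]              # diagonals

def has_win(cells):
    return any(w <= cells for w in WINS)

@lru_cache(maxsize=None)
def p1_can_force_win(p1, p2, p1_to_move):
    """True iff player 1 can force a win from this (win-free) position."""
    free = CELLS - p1 - p2
    if not free:
        return False  # board full: draw, no win for player 1
    if p1_to_move:
        # Player 1 wins if some move wins now or leads to a forced win.
        return any(has_win(p1 | {c}) or p1_can_force_win(p1 | {c}, p2, False)
                   for c in free)
    # Player 2 to move: player 1 wins only if every reply still loses.
    return all(not has_win(p2 | {c}) and p1_can_force_win(p1, p2 | {c}, True)
               for c in free)

if __name__ == "__main__":
    empty = frozenset()
    # Prints False: with perfect play, Tic-tac-toe is a draw, so the
    # corresponding QBF instance would be false.
    print("first player has a winning strategy:",
          p1_can_force_win(empty, empty, True))
```

A QBF encoding replaces this explicit search with alternating quantifiers over move variables, which is what lets modern QBF solvers handle far larger instances than explicit enumeration could.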
Galaxy shear estimation from stacked images
Statistics of the weak lensing of galaxies can be used to constrain cosmology if the galaxy shear can be estimated accurately. In general this requires accurate modelling of unlensed galaxy shapes and the point spread function (PSF). I discuss suboptimal but potentially robust methods for estimating galaxy shear by stacking images such that the stacked image distribution is closely Gaussian by the central limit theorem. The shear can then be determined by radial fitting, requiring only an accurate model of the PSF rather than also needing to model each galaxy accurately. When noise is significant, asymmetric errors in the centroid must be corrected, but the method may ultimately be able to give accurate, unbiased results when there is a high galaxy density with constant shear. It provides a useful baseline for more optimal methods, and a test case for estimating biases, though the method is not directly applicable to realistic data. I test stacking methods on the simple toy simulations with constant PSF and shear provided by the GREAT08 project, on which most other existing methods perform significantly more poorly, and briefly discuss generalizations to more realistic cases. In the appendix I discuss a simple analytic galaxy population model where stacking gives optimal errors in a perfect ideal case.
7 pages, 1 figure. Updated to match the MNRAS-accepted version; added an appendix describing an analytic galaxy population model.
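The estimator's details are not in the abstract. Purely as a toy demonstration of the effect being exploited, the sketch below stacks many noisy elliptical-Gaussian "galaxies" with random intrinsic shapes but a common applied shear; the residual ellipticity of the stack, read off from its second moments, traces the shear. The image model, noise level and moment-based estimator are assumptions for illustration, not the paper's radial-fitting method, and no PSF is included.

```python
import numpy as np

rng = np.random.default_rng(0)
N, size, sigma = 2000, 33, 4.0        # number of stamps, stamp size, galaxy scale
g1_true, g2_true = 0.03, -0.01        # constant shear applied to every galaxy

y, x = np.mgrid[:size, :size] - (size - 1) / 2.0

def elliptical_gaussian(e1, e2):
    # Toy galaxy: Gaussian whose moment ellipticity is (e1, e2) by construction.
    q11, q22, q12 = sigma**2 * (1 + e1), sigma**2 * (1 - e1), sigma**2 * e2
    det = q11 * q22 - q12**2
    r2 = (q22 * x**2 - 2 * q12 * x * y + q11 * y**2) / det
    return np.exp(-0.5 * r2)

stack = np.zeros((size, size))
for _ in range(N):
    # Random intrinsic ellipticity (mean zero) plus the constant shear,
    # using the toy approximation e_obs ~ e_int + g.
    e1 = rng.normal(0.0, 0.15) + g1_true
    e2 = rng.normal(0.0, 0.15) + g2_true
    img = elliptical_gaussian(e1, e2)
    img += rng.normal(0.0, 0.05, img.shape)   # pixel noise
    stack += img
stack /= N

# Unweighted second moments of the stacked image; intrinsic shapes average
# away, so the residual ellipticity tracks the applied shear.
flux = stack.sum()
qxx = (stack * x**2).sum() / flux
qyy = (stack * y**2).sum() / flux
qxy = (stack * x * y).sum() / flux
e1_hat = (qxx - qyy) / (qxx + qyy)
e2_hat = 2 * qxy / (qxx + qyy)
print(f"estimated shear ~ ({e1_hat:.3f}, {e2_hat:.3f}); "
      f"true ({g1_true}, {g2_true})")
```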
- …