
    A systematic literature review on the semi-automatic configuration of extended product lines

    Product line engineering has become essential in mass customisation given its ability to reduce production costs and time to market, and to improve product quality and customer satisfaction. In the product line literature, mass customisation is known as product configuration. Currently, there are multiple heterogeneous contributions in the product line configuration domain. However, a secondary study that gives an overview of the progress, trends, and gaps faced by researchers in this domain is still missing. In this context, we provide a comprehensive systematic literature review to discover which approaches exist to support the configuration process of extended product lines and how these approaches perform in practice. Extended product lines consider non-functional properties in the product line modelling. We compare and classify a total of 66 primary studies from 2000 to 2016. In particular, we give an in-depth view of the techniques used by each work, how these techniques are evaluated, and their main shortcomings. As main results, our review identified (i) the need to improve the quality of the evaluation of existing approaches, (ii) a lack of hybrid solutions to support multiple configuration constraints, and (iii) a need to improve scalability and performance conditions.
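    To make the review's object concrete: below is a minimal sketch (not taken from any of the 66 primary studies) of what an extended product line looks like, i.e. a feature model whose features carry non-functional attributes and whose configurations must satisfy structural constraints. The feature names, attribute values and constraints are illustrative assumptions.

```python
# Minimal sketch of an "extended" product line: a feature model whose features
# carry non-functional attributes (here, cost and memory), plus a validity check
# for a candidate configuration. All names and values are illustrative assumptions.

FEATURES = {
    "Base":        {"cost": 10, "memory": 32},
    "Encryption":  {"cost": 5,  "memory": 16},
    "Compression": {"cost": 3,  "memory": 8},
    "CloudSync":   {"cost": 7,  "memory": 24},
}

MANDATORY = {"Base"}                         # must be present in every product
REQUIRES  = {"CloudSync": {"Encryption"}}    # CloudSync requires Encryption
EXCLUDES  = {("Compression", "CloudSync")}   # these two cannot co-occur

def is_valid(config):
    """Check the structural constraints of the extended product line."""
    if not MANDATORY <= config:
        return False
    for feat, deps in REQUIRES.items():
        if feat in config and not deps <= config:
            return False
    for a, b in EXCLUDES:
        if a in config and b in config:
            return False
    return True

def total(config, attribute):
    """Aggregate a non-functional attribute over the selected features."""
    return sum(FEATURES[f][attribute] for f in config)

if __name__ == "__main__":
    candidate = {"Base", "Encryption", "CloudSync"}
    print(is_valid(candidate), total(candidate, "cost"), total(candidate, "memory"))
```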

    Automated analysis of feature models: Quo vadis?

    Feature models have been used since the 1990s to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances and settling the basis of the area of Automated Analysis of Feature Models (AAFM). Since then, different studies have applied the AAFM in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study considering 423 primary sources. We found six different variability facets where the AAFM is being applied that define the tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability analysis; variability modelling; and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most of the cases. Finally, we present where and when the papers have been published and who are the authors and institutions that are contributing to the field. We observed that the maturity of the field is shown by the increase in the number of journal publications over the years as well as by the diversity of conferences and workshops where papers are published. We also suggest some synergies with other areas, such as cloud or mobile computing, that can motivate further research in the future. Funding: Ministerio de Economía y Competitividad TIN2015-70560-R; Junta de Andalucía TIC-186.
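    As a hedged illustration of what an AAFM operation is, the sketch below encodes a tiny, hypothetical feature model as Boolean constraints and derives two classic analyses, product counting and dead-feature detection, by brute-force enumeration; real AAFM tools delegate this to SAT, CSP or BDD solvers.

```python
# Toy sketch of Automated Analysis of Feature Models (AAFM): encode a small,
# hypothetical feature model as Boolean constraints, enumerate valid products,
# and derive two classic analyses (product counting and dead features).
# Real tools use SAT/CSP/BDD solvers instead of enumeration.

from itertools import product

FEATURES = ["Root", "GUI", "CLI", "Logging"]

def valid(sel):
    """sel maps feature name -> bool. Assumed model: Root is mandatory;
       exactly one of GUI/CLI; Logging is optional but requires GUI."""
    if not sel["Root"]:
        return False
    if sel["GUI"] == sel["CLI"]:           # exactly one front end
        return False
    if sel["Logging"] and not sel["GUI"]:  # Logging requires GUI
        return False
    return True

products = [dict(zip(FEATURES, bits))
            for bits in product([False, True], repeat=len(FEATURES))
            if valid(dict(zip(FEATURES, bits)))]

num_products = len(products)
dead = [f for f in FEATURES if not any(p[f] for p in products)]

print("valid products:", num_products)  # analysis 1: product counting
print("dead features:", dead)           # analysis 2: features in no product
```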

    Identifying Vulnerabilities of Industrial Control Systems using Evolutionary Multiobjective Optimisation

    In this paper we propose a novel methodology to assist in identifying vulnerabilities in real-world, complex, heterogeneous industrial control systems (ICS) using two evolutionary multiobjective optimisation (EMO) algorithms, NSGA-II and SPEA2. Our approach is evaluated on a well-known benchmark chemical plant simulator, the Tennessee Eastman (TE) process model. We identified vulnerabilities in individual components of the TE model and then used these to generate combinatorial attacks that damage the safety of the system and cause economic loss. Results were compared against random attacks, and the performance of the EMO algorithms was evaluated using hypervolume, spread and inverted generational distance (IGD) metrics. A defence against these attacks, in the form of a novel intrusion detection system, was developed using a number of machine learning algorithms. The designed approach was further tested against the developed detection methods. Results demonstrate that EMO algorithms are a promising tool for identifying the most vulnerable components of ICS and the weaknesses of any existing detection systems in place to protect the system. The proposed approach can be used by control and security engineers to design security-aware control and to test the effectiveness of security mechanisms, both during design and later during system operation.
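    One of the quality indicators the paper uses to compare the NSGA-II and SPEA2 fronts is the hypervolume. The sketch below is a self-contained two-objective (minimisation) version of that indicator; the front and reference point are made-up illustrative values, not data from the Tennessee Eastman case study.

```python
# Self-contained sketch of the hypervolume indicator for the two-objective
# minimisation case: the area dominated by a front and bounded by a reference
# point. The front and reference point below are invented illustrative values.

def hypervolume_2d(front, ref):
    """Area dominated by `front` (list of (f1, f2) to minimise) and bounded by `ref`."""
    # keep only points that strictly dominate the reference point
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:            # sweep along the first objective
        if f2 < prev_f2:          # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

if __name__ == "__main__":
    front = [(1.0, 4.0), (2.0, 2.5), (3.5, 1.0)]
    print(hypervolume_2d(front, ref=(5.0, 5.0)))   # 10.75
```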

    Artificial Intelligence and Machine Learning Approaches to Energy Demand-Side Response: A Systematic Review

    Recent years have seen an increasing interest in Demand Response (DR) as a means to provide flexibility, and hence improve the reliability of energy systems in a cost-effective way. Yet the high complexity of the tasks associated with DR, combined with their use of large-scale data and the frequent need for near real-time decisions, means that Artificial Intelligence (AI) and Machine Learning (ML), a branch of AI, have recently emerged as key technologies for enabling demand-side response. AI methods can be used to tackle various challenges, ranging from selecting the optimal set of consumers to respond, learning their attributes and preferences, dynamic pricing, and scheduling and control of devices, to learning how to incentivise participants in DR schemes and how to reward them in a fair and economically efficient way. This work provides an overview of AI methods utilised for DR applications, based on a systematic review of over 160 papers, 40 companies and commercial initiatives, and 21 large-scale projects. The papers are classified with regard to both the AI/ML algorithm(s) used and the application area in energy DR. Next, commercial initiatives (including both start-ups and established companies) and large-scale innovation projects where AI methods have been used for energy DR are presented. The paper concludes with a discussion of the advantages and potential limitations of the reviewed AI techniques for different DR tasks, and outlines directions for future research in this fast-growing area.
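    As a hedged illustration of one DR task the review covers, device scheduling against dynamic prices, the sketch below places a shiftable load in the cheapest contiguous window of a day-ahead tariff; the price profile and device parameters are invented for illustration only.

```python
# Illustrative sketch (not from any reviewed paper): schedule a shiftable load
# that must run for `duration` consecutive hours into the cheapest window of a
# 24-hour day-ahead tariff. Prices and device parameters are made up.

prices = [0.12, 0.11, 0.10, 0.09, 0.09, 0.10, 0.14, 0.20,   # EUR/kWh, hours 0-7
          0.22, 0.21, 0.19, 0.18, 0.17, 0.17, 0.18, 0.20,   # hours 8-15
          0.25, 0.28, 0.30, 0.27, 0.22, 0.18, 0.15, 0.13]   # hours 16-23

def cheapest_window(prices, duration, load_kw):
    """Return (start_hour, cost) of the cheapest contiguous run of the device."""
    best_start, best_cost = None, float("inf")
    for start in range(len(prices) - duration + 1):
        cost = load_kw * sum(prices[start:start + duration])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

start, cost = cheapest_window(prices, duration=3, load_kw=2.0)
print(f"run device at hour {start}, expected cost {cost:.2f} EUR")
```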

    Simulation-based testing of highly configurable cyber-physical systems: automation, optimization and debugging

    Cyber-Physical Systems (CPSs) integrate digital cyber technologies with physical processes. The variability of these systems is increasing in order to meet the demands of different customers. As a result, CPSs are becoming configurable, or even product lines, which means that they can be set into thousands or millions of configurations. Testing configurable CPSs is a time-consuming process, mainly due to the large number of configurations that need to be tested, which makes it infeasible to use a prototype of the system. As a result, configurable CPSs are being tested using simulation. However, testing CPSs under simulation is still challenging. First, the simulation time is usually long, since, apart from the software, the physical layer needs to be simulated. This physical layer is typically modelled with complex mathematical models, which is computationally very costly. Second, CPSs involve different engineering domains, such as mechanical and electrical engineering. Engineers of different domains typically employ different tools for modelling their subsystems, so co-simulation is used to interconnect the different modelling and simulation tools. Although co-simulation is an advantage in terms of engineering flexibility, the use of different simulation tools makes the simulation time longer. Lastly, when testing CPSs through simulation, different test levels exist (i.e., Model, Software and Hardware-in-the-Loop), which increases the time needed to execute test cases. This thesis aims at advancing the current practice of testing configurable CPSs by proposing methods for automation, optimization and debugging. Regarding automation, first, we propose a tool-supported methodology to automatically generate test system instances that permit automatically testing configurations of the configurable CPS (e.g., by employing test oracles). Second, we propose a test case generation approach based on multi-objective search algorithms that generates cost-effective test suites. As for optimization, we propose a test case selection approach and a test case prioritization approach, both based on search algorithms, to cost-effectively test configurable CPSs at different test levels. Regarding debugging, we adapt a technique named Spectrum-Based Fault Localization to the product line engineering context and propose a fault isolation method. This permits localizing bugs not only in configurable CPSs but also in any product line where feature models are employed to model variability.
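    The debugging contribution adapts Spectrum-Based Fault Localization to product lines. The sketch below shows the core of that technique on a made-up toy spectrum, ranking components by the standard Ochiai suspiciousness score; it illustrates the underlying technique, not the thesis's adapted method.

```python
# Minimal sketch of Spectrum-Based Fault Localization. Each row of the spectrum
# records which components a test executed and whether the test failed.
# Suspiciousness is ranked with the standard Ochiai formula.
# The spectrum below is a made-up toy example.

from math import sqrt

components = ["A", "B", "C", "D"]
# (executed components, test failed?)
spectrum = [
    ({"A", "B"},      False),
    ({"A", "C"},      True),
    ({"B", "C", "D"}, True),
    ({"B", "D"},      False),
]

def ochiai(comp):
    ef = sum(1 for cov, fail in spectrum if fail and comp in cov)      # failed & executed
    nf = sum(1 for cov, fail in spectrum if fail and comp not in cov)  # failed & not executed
    ep = sum(1 for cov, fail in spectrum if not fail and comp in cov)  # passed & executed
    denom = sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

ranking = sorted(components, key=ochiai, reverse=True)
print({c: round(ochiai(c), 3) for c in ranking})  # "C" comes out as most suspicious
```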

    An interactive metaheuristic search framework for software service identification from business process models

    In recent years, the Service-Oriented Architecture (SOA) model of computing has become widely used and has provided efficient and agile business solutions in response to inevitable and rapid changes in business requirements. Software service identification is a crucial component in the production of a service-oriented architecture and subsequent successful software development, yet current service identification methods have limitations. For example, they are either not sufficiently comprehensive to handle the totality of service identification activities, or they lack computational support, or they pay insufficient attention to quality checks of the resulting services. To address these limitations, comprehensive computationally intelligent support for software engineers when deriving software services from an organisation's business process models shows great potential, especially when the impact of human preference on the quality of the resulting solutions can be incorporated. Accordingly, this research applies interactive metaheuristic search to bridge the gap between business and SOA technology and so increase business agility. A novel, comprehensive framework is introduced that is driven by domain-independent, role-based business process models and uses an interactive metaheuristic search-based service identification approach built on a genetic algorithm, while adhering to SOA principles. Termed BPMiSearch, the framework is composed of three main layers. The first layer processes inputs from business process models into search space elements by modelling the input data and presenting it at an appropriate level of granularity. The second layer identifies software services from the specified search space. The third layer refines the resulting services by mapping the business elements in the candidate services to the corresponding service components. The proposed BPMiSearch framework has been evaluated by applying it to a healthcare domain case study, specifically the Cancer Care and Registration (CCR) business processes at the King Hussein Cancer Centre, Amman, Jordan. Experiments assess the impact of software engineer interaction on the quality of the outcomes in terms of search effectiveness, efficiency, and level of user satisfaction. Results show that BPMiSearch searches rapidly and positively supports software engineers in identifying services from role-based business process models while adhering to SOA principles. High-quality services are identified that might not have been arrived at manually by software engineers. Furthermore, BPMiSearch is found to be sensitive and responsive to software engineer interaction, resulting in a positive level of user trust, acceptance, and satisfaction with the candidate services.
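    To illustrate the kind of search BPMiSearch builds on (with the interactive, preference-based part omitted), the sketch below is a stripped-down evolutionary search, mutation and truncation selection only, that groups hypothetical business-process activities into candidate services so that dependencies fall inside services (cohesion) rather than across them (coupling). The activities, dependencies and settings are assumptions, not the framework's actual encoding.

```python
# Stripped-down evolutionary search (no crossover) that groups business-process
# activities into candidate services, rewarding cohesion and penalising coupling.
# Activities, dependencies and parameters are illustrative assumptions.

import random

ACTIVITIES = ["register", "verify", "schedule", "bill", "notify", "archive"]
DEPENDENCIES = [("register", "verify"), ("verify", "schedule"),
                ("schedule", "notify"), ("bill", "notify"), ("bill", "archive")]
NUM_SERVICES = 3

def fitness(assignment):
    """assignment[i] = service index of ACTIVITIES[i]. Reward internal edges,
    penalise edges that cross service boundaries."""
    idx = {a: assignment[i] for i, a in enumerate(ACTIVITIES)}
    internal = sum(1 for a, b in DEPENDENCIES if idx[a] == idx[b])
    external = len(DEPENDENCIES) - internal
    return internal - external

def mutate(assignment, rate=0.2):
    return [random.randrange(NUM_SERVICES) if random.random() < rate else g
            for g in assignment]

def evolve(pop_size=30, generations=100):
    pop = [[random.randrange(NUM_SERVICES) for _ in ACTIVITIES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]                            # truncation selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
for s in range(NUM_SERVICES):
    print(f"service {s}:", [a for a, g in zip(ACTIVITIES, best) if g == s])
```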

    Identifying vulnerabilities of industrial control systems using evolutionary multiobjective optimisation

    In this paper, we propose a novel methodology to assist in identifying vulnerabilities in real-world complex heterogeneous industrial control systems (ICS) using two Evolutionary Multiobjective Optimisation (EMO) algorithms, NSGA-II and SPEA2. Our approach is evaluated on a well-known benchmark chemical plant simulator, the Tennessee Eastman (TE) process model. We identified vulnerabilities in individual components of the TE model and then made use of these vulnerabilities to generate combinatorial attacks. The generated attacks were aimed at compromising the safety of the system and inflicting economic loss. Results were compared against random attacks, and the performance of the EMO algorithms was evaluated using hypervolume, spread, and inverted generational distance (IGD) metrics. A defence against these attacks, in the form of a novel intrusion detection system, was developed using machine learning algorithms. The designed approach was further tested against the developed detection methods. The obtained results demonstrate that the developed EMO approach is a promising tool for identifying the vulnerable components of ICS and the weaknesses of any existing detection systems in place to protect the system. The proposed approach can serve as a proactive defence tool for control and security engineers to identify and prioritise vulnerabilities in the system. The approach can be employed to design resilient control strategies and to test the effectiveness of security mechanisms, both in the design stage and during the operational phase of the system.
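    Besides hypervolume and spread, the paper reports the inverted generational distance (IGD). A minimal sketch of that metric for two objectives is given below; the reference and obtained fronts are invented values, not results from the TE model.

```python
# Sketch of the Inverted Generational Distance (IGD) metric: the average
# distance from each point of a reference Pareto front to its nearest point in
# the obtained front (lower is better). Fronts below are invented 2-objective values.

from math import dist

def igd(obtained, reference):
    return sum(min(dist(r, o) for o in obtained) for r in reference) / len(reference)

reference_front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
obtained_front  = [(0.1, 1.1), (0.6, 0.6), (1.2, 0.1)]
print(round(igd(obtained_front, reference_front), 4))
```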

    Handling High-Level Model Changes Using Search Based Software Engineering

    Model-Driven Engineering (MDE) considers models as first-class artifacts during the software lifecycle. The number of available tools, techniques, and approaches for MDE is increasing as its use gains traction in driving quality and controlling cost in the evolution of large software systems. Software models, defined as code abstractions, are iteratively refined, restructured, and evolved for many reasons, such as fixing defects in the design, reflecting changes in requirements, and modifying a design to enhance existing features. In this work, we focus on four main problems related to the evolution of software models: 1) the detection of applied model changes, 2) the merging of models evolved in parallel, 3) the detection of design defects in the merged model, and 4) the recommendation of new changes to fix defects in software models. For the first contribution, an a posteriori multi-objective change detection approach is proposed for evolved models. The changes are expressed in terms of atomic and composite refactoring operations. The majority of existing approaches detect atomic changes but do not adequately address composite changes, which mask atomic operations in intermediate models. For the second contribution, several approaches exist to construct a merged model by incorporating all non-conflicting operations of the evolved models. Conflicts arise when the application of one operation disables the applicability of another. The essence of the problem is to identify and prioritize conflicting operations based on importance and context, a gap in existing approaches. This work proposes a multi-objective formulation of model merging that aims to maximize the number of successfully applied merged operations. For the third and fourth contributions, the majority of existing work focuses on refactoring at the source code level and does not exploit the benefits of software design optimization at the model level. However, refactoring at the model level is inherently more challenging due to the difficulty of assessing the potential impact on structural and behavioral features of the software system. This requires analysis of class and activity diagrams to appraise overall system quality, feasibility, and inter-diagram consistency. This work focuses on designing, implementing, and evaluating a multi-objective refactoring framework for the detection and fixing of design defects in software models. Ph.D. dissertation, Information Systems Engineering, College of Engineering and Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/136077/1/Usman Mansoor Final.pdf
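    Common to the multi-objective formulations used for change detection, model merging and refactoring is the extraction of Pareto-optimal candidates. The sketch below shows that core step on invented objective values (e.g. unresolved defects vs. number of refactoring operations); it illustrates Pareto dominance, not the dissertation's specific algorithms.

```python
# Extract the Pareto-optimal set from candidate solutions scored on several
# (minimised) objectives. Candidate names and objective values are illustrative.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """solutions maps a candidate name -> tuple of objective values (to minimise)."""
    return {name: objs for name, objs in solutions.items()
            if not any(dominates(other, objs)
                       for o_name, other in solutions.items() if o_name != name)}

# e.g. objectives = (number of unresolved defects, number of refactoring operations)
candidates = {
    "seq-A": (3, 4),
    "seq-B": (1, 9),
    "seq-C": (2, 5),
    "seq-D": (3, 6),   # dominated by seq-A
}
print(pareto_front(candidates))   # seq-A, seq-B and seq-C survive
```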

    Search-based system architecture development using a holistic modeling approach

    This dissertation presents an innovative approach to system architecting in which search algorithms are used to explore the design trade space for good architecture alternatives. This is achieved by integrating model construction, alternative generation, simulation, and assessment processes into a coherent and automated framework. The framework is facilitated by a holistic modeling approach that combines the capabilities of Object Process Methodology (OPM), Colored Petri Net (CPN), and feature models. The resulting holistic model can not only capture the structural, behavioral, and dynamic aspects of a system, allowing simulation and strong analysis methods to be applied, but can also specify the architectural design space. Both object-oriented analysis and design (OOA/D) and domain engineering were exploited to capture design variables and their domains and to define architecture generation operations. A fully realized framework (with genetic algorithms as the search algorithm) was developed. Both the proposed framework and its suggested implementation, including the proposed holistic modeling approach and architecture alternative generation operations, are generic. They target systems that can be specified using an object-oriented or process-oriented paradigm. The broad applicability of the proposed approach is demonstrated on two examples: the configuration of reconfigurable manufacturing systems (RMSs) under multi-objective optimization, and the architecture design of a manned lunar landing system for the Apollo program. The test results show that the proposed approach can cover a huge number of architecture alternatives and support the assessment of several performance measures. A set of quality results was obtained after running the optimization algorithm following the proposed framework.
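    Since the framework uses genetic algorithms to search a multi-objective trade space, one building block such a search typically relies on is the NSGA-II crowding distance, which keeps retained alternatives spread across the front. The sketch below computes it for invented objective vectors; it is a generic illustration, not code from the dissertation's framework.

```python
# Generic sketch of the NSGA-II crowding distance, often used to preserve
# diversity when a multi-objective genetic search selects which architecture
# alternatives to retain. Objective vectors below are invented (both minimised).

def crowding_distance(front):
    """front: list of tuples of objective values. Returns one distance per point."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        lo, hi = front[order[0]][obj], front[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float("inf")   # keep boundary points
        if hi == lo:
            continue
        for rank in range(1, n - 1):
            i = order[rank]
            dist[i] += (front[order[rank + 1]][obj] - front[order[rank - 1]][obj]) / (hi - lo)
    return dist

front = [(1.0, 9.0), (2.0, 6.0), (4.0, 4.0), (7.0, 1.0)]
print(crowding_distance(front))   # the two extreme alternatives get infinite distance
```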

    Antecipação na tomada de decisão com múltiplos critérios sob incerteza

    Advisor: Fernando José Von Zuben. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. The presence of uncertainty in future outcomes can lead to indecision in choice processes, especially when eliciting the relative importances of multiple decision criteria and of long-term versus near-term performance. Some decisions, however, must be taken under incomplete information, which may result in precipitate actions with unforeseen consequences. When a solution must be selected under multiple conflicting views for operating in time-varying and noisy environments, implementing flexible provisional alternatives can be critical to circumvent the lack of complete information while keeping future options open. Anticipatory engineering can then be regarded as the strategy of designing flexible solutions that enable decision makers to respond robustly to unpredictable scenarios. This strategy can thus mitigate the risks of strong unintended commitments to uncertain alternatives, while increasing adaptability to future changes. In this thesis, the roles of anticipation and of flexibility in automating sequential multiple criteria decision-making processes under uncertainty are investigated. The dilemma of assigning relative importances to decision criteria and to immediate rewards under incomplete information is handled by autonomously anticipating flexible decisions predicted to maximally preserve the diversity of future choices. An online anticipatory learning methodology is proposed for improving the range and quality of future trade-off solution sets. This goal is achieved by predicting maximal expected hypervolume sets, for which the anticipation capabilities of multi-objective metaheuristics are augmented with Bayesian tracking in both the objective and search spaces. The methodology has been applied to obtaining investment decisions, which are shown to significantly improve the future hypervolume of trade-off financial portfolios on out-of-sample stock data when compared to a myopic strategy. Moreover, implementing flexible portfolio rebalancing decisions was confirmed as a significantly better strategy than randomly choosing an investment decision from the evolved stochastic efficient frontier, in all tested artificial and real-world markets. Finally, the results suggest that anticipating flexible choices led to portfolio compositions that are significantly correlated with the observed improvements in out-of-sample future expected hypervolume.
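    The central quantity being anticipated is the expected hypervolume of a set of uncertain trade-off solutions. The sketch below approximates it by Monte Carlo sampling under a Gaussian model of each solution's future objectives (two-objective minimisation case); the means, deviations and reference point are illustrative, not portfolio data from the thesis, and the Bayesian tracking component is omitted.

```python
# Monte Carlo sketch of an expected hypervolume: each solution's future
# objective vector is modelled as a Gaussian, sampled, and the 2-objective
# hypervolume is averaged over the samples. All numbers are illustrative.

import random
from statistics import mean

def hypervolume_2d(points, ref):
    """Dominated area for a 2-objective minimisation front bounded by `ref`."""
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    area, prev = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev:
            area += (ref[0] - f1) * (prev - f2)
            prev = f2
    return area

def expected_hypervolume(solutions, ref, samples=2000):
    """solutions: list of ((mu1, mu2), (sigma1, sigma2)) per candidate."""
    draws = []
    for _ in range(samples):
        realised = [(random.gauss(m1, s1), random.gauss(m2, s2))
                    for (m1, m2), (s1, s2) in solutions]
        draws.append(hypervolume_2d(realised, ref))
    return mean(draws)

portfolio_set = [((0.2, 0.8), (0.05, 0.05)),   # e.g. (risk, -return) means and std devs
                 ((0.5, 0.5), (0.05, 0.05)),
                 ((0.8, 0.2), (0.05, 0.05))]
print(round(expected_hypervolume(portfolio_set, ref=(1.0, 1.0)), 3))
```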