
    Modelling, Reverse Engineering, and Learning Software Variability

    Society expects software to deliver the right functionality, in a short amount of time, with few resources, and in every possible circumstance, whatever the hardware, the operating system, the compiler, or the data fed as input. To fit such a diversity of needs, software commonly comes in many variants and is highly configurable through configuration options, runtime parameters, conditional compilation directives, menu preferences, configuration files, plugins, etc. As there is no one-size-fits-all solution, software variability ("the ability of a software system or artifact to be efficiently extended, changed, customized or configured for use in a particular context") has been studied for the last two decades and is a discipline of its own. Though highly desirable, software variability also introduces enormous complexity due to the combinatorial explosion of possible variants. For example, the Linux kernel has 15,000+ options, and most of them can take three values: "yes", "no", or "module". Variability makes maintaining, verifying, and configuring software systems (Web applications, Web browsers, video tools, etc.) challenging. It is also a source of opportunities to better understand a domain, create reusable artefacts, deploy performance-wise optimal systems, or find specialized solutions to many kinds of problems. In many scenarios, a model of variability is either beneficial or mandatory to explore, observe, and reason about the space of possible variants. For instance, without a variability model, it is impossible to establish a sampling strategy that satisfies the constraints among options and meets coverage or testing criteria. In this HDR manuscript, I address a central question: how to model software variability? I detail several contributions related to modelling, reverse engineering, and learning software variability. I first contribute to supporting the persons in charge of manually specifying feature models, the de facto standard for modelling variability. I develop an algebra, together with a language, for supporting the composition, decomposition, diff, refactoring, and reasoning of feature models. I further establish the syntactic and semantic relationships between feature models and product comparison matrices, a large class of tabular data. I then empirically investigate how these feature models can be used to test configurable systems in the large with different sampling strategies. Along the way, I report on the attempts and lessons learned when defining the "right" variability language. From a reverse engineering perspective, I contribute to synthesizing variability information into models from various kinds of artefacts. I develop foundations and methods for reverse engineering feature models from satisfiability formulae, product comparison matrices, dependency files and architectural information, and Web configurators. I also report on the degree of automation and show that the involvement of developers and domain experts is beneficial for obtaining high-quality models. Third, I contribute to learning constraints and non-functional properties (performance) of a variability-intensive system. I describe a systematic "sampling, measuring, learning" process that aims to enforce or augment a variability model, capturing variability knowledge that domain experts can hardly express. I show that supervised, statistical machine learning can be used to synthesize rules or build prediction models in an accurate and interpretable way. This process can even be applied to huge configuration spaces, such as that of the Linux kernel. Despite wide applicability and observed benefits, I show that each individual line of contributions has limitations. I defend the following answer: a supervised, iterative process that (1) combines reverse engineering, modelling, and learning techniques, and (2) is capable of integrating multiple sources of variability information (e.g., expert knowledge, legacy artefacts, dynamic observations). Finally, this work opens up perspectives related to so-called deep software variability, security, smart building of configurations, and (threats to) science.
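    The "sampling, measuring, learning" process lends itself to a compact illustration. The sketch below is not the author's tooling: it enumerates the valid configurations of a hypothetical four-option feature model with one cross-tree constraint, "measures" a synthetic performance function, and fits an interpretable decision tree with scikit-learn. The option names, the constraint, and the performance function are all invented for illustration.

        import itertools
        import random

        from sklearn.tree import DecisionTreeRegressor, export_text

        OPTIONS = ["compression", "encryption", "cache", "debug"]

        def is_valid(cfg):
            # Hypothetical cross-tree constraint: encryption requires compression.
            return cfg["compression"] or not cfg["encryption"]

        # Enumerate the valid configuration space. Brute force only works for tiny
        # models; spaces like the Linux kernel's need SAT-based sampling instead.
        space = [dict(zip(OPTIONS, bits))
                 for bits in itertools.product([False, True], repeat=len(OPTIONS))]
        valid = [cfg for cfg in space if is_valid(cfg)]

        def measure(cfg):
            # Stand-in for benchmarking a real variant (e.g., execution time in ms).
            return 100 - 30 * cfg["cache"] + 20 * cfg["debug"] + 5 * cfg["compression"]

        sample = random.sample(valid, k=8)                    # sampling
        X = [[int(cfg[o]) for o in OPTIONS] for cfg in sample]
        y = [measure(cfg) for cfg in sample]                  # measuring
        tree = DecisionTreeRegressor(max_depth=3).fit(X, y)   # learning
        print(export_text(tree, feature_names=OPTIONS))       # human-readable rules

    On a real system, the labels would come from actual benchmarks, and the learned rules could be fed back into the feature model to augment it with constraints, as the abstract describes.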

    Adaptive Choice-Based Conjoint Analysis

    Conjoint analysis (CA) has emerged as an important approach to the assessment of health service preferences. This article examines Adaptive Choice-Based Conjoint Analysis (ACBC) and reviews available evidence comparing ACBC with conventional approaches to CA. ACBC surveys more closely approximate the decision-making processes that influence real-world choices. Informants begin ACBC surveys by completing a build-your-own (BYO) task identifying the level of each attribute that they prefer. The ACBC software composes a series of attribute combinations clustering around each participant’s BYO choices. During the Screener section, informants decide whether each of these concepts is a possibility or not. Probe questions determine whether attribute levels consistently included in or excluded from each informant’s Screener section choices reflect ‘Unacceptable’ or ‘Must Have’ simplifying heuristics. Finally, concepts identified as possibilities during the Screener section are carried forward to a Choice Tournament. The winning concept in each Choice Tournament set advances to the next choice set until a winner is determined. A review of randomized trials and cross-over studies suggests that, although ACBC surveys require more time than conventional approaches to CA, informants find ACBC surveys more engaging. In most studies, ACBC surveys yield lower standard errors, improved prediction of hold-out task choices, and better estimates of real-world product decisions than conventional choice-based CA surveys.
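    To make the survey flow concrete, here is a minimal sketch of the three ACBC stages described above: BYO, Screener, and Choice Tournament. The attributes, levels, screening heuristic, and the utility function standing in for a respondent's preferences are illustrative assumptions, not the actual ACBC software's algorithm.

        import random

        ATTRIBUTES = {"wait_weeks": [1, 2, 4], "cost": [10, 20, 40], "visits": [1, 2, 3]}

        # BYO task: the informant picks their preferred level of each attribute.
        byo = {"wait_weeks": 1, "cost": 10, "visits": 1}

        def utility(concept):
            # Hypothetical respondent: lower values are better on every attribute.
            return -sum(concept.values())

        # The software composes concepts clustering around the BYO choices
        # (here: vary exactly one attribute away from the BYO configuration).
        concepts = [{**byo, attr: level}
                    for attr, levels in ATTRIBUTES.items()
                    for level in levels if level != byo[attr]]

        # Screener section: keep only concepts judged "a possibility". A probe
        # question might reveal the simplifying heuristic "cost above 20 is
        # Unacceptable", applied here as a hard filter.
        possibilities = [c for c in concepts if c["cost"] <= 20]

        # Choice Tournament: the winner of each choice set advances until one
        # concept remains.
        random.shuffle(possibilities)
        while len(possibilities) > 1:
            a, b = possibilities.pop(), possibilities.pop()
            possibilities.append(max(a, b, key=utility))
        print("Winning concept:", possibilities[0])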

    Sistema Inteligente de Manutenção Preditiva (Intelligent Predictive Maintenance System)

    Maintenance tasks on a shopfloor are among the most critical activities, given their direct effect on production costs and, consequently, on profit. Until now, maintenance has been based on Run-To-Failure and time-based paradigms: fixing a machine only when it breaks, or servicing it at regular time intervals regardless of whether the asset actually needs maintenance. However, with the Industry 4.0 paradigm and the Smart Factory concept, machines are now equipped with sensors that monitor a large number of varied variables, which are afterwards stored. Combined with the manual registries of asset breakdowns, this data can be used to predict machine failures, an approach called Predictive Maintenance. This project, carried out in the scope of the TMDEI subject of the Master in Informatics Engineering (MEI), aims to conceive and build a system capable of performing Predictive Maintenance by combining sensor data with data entered manually into ERP systems. PrediMain employs different machine learning techniques, with a special emphasis on ensemble methods, making the generated models more robust and accurate than relying on a single algorithm for the predictions. For sensor predictions, before classifying an observation as a failure or not, PrediMain uses the auto-ARIMA technique, an auto-parametrized method that generates more accurate forecasts. In the end, the system correctly classifies a set of observations with an estimated accuracy of 90%. The system is also designed to be delivered as Software-as-a-Service, allowing multiple data sources, and therefore multiple shopfloors, to use the same software instance without compromising the performance of existing systems.
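    A minimal sketch of the pipeline this abstract describes, under stated assumptions: pmdarima's auto_arima forecasts a sensor series, and a scikit-learn voting ensemble (mixing several algorithms, as the abstract emphasizes) classifies the forecast window as failure or not. The synthetic data, feature layout, and library choices are illustrative; the abstract does not name the actual stack.

        import numpy as np
        import pmdarima as pm
        from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Synthetic sensor history: a slow drift plus noise (stand-in for real data).
        history = 0.01 * np.arange(200) + rng.normal(0, 0.1, 200)

        # auto-ARIMA picks its own (p, d, q) orders, as the abstract describes.
        arima = pm.auto_arima(history, suppress_warnings=True)
        forecast = arima.predict(n_periods=24)  # next 24 readings

        # Ensemble of different algorithms, trained on labelled windows
        # (mean, std, max as simple features); in PrediMain the labels come
        # from the ERP's manual breakdown registries.
        X_train = rng.normal(size=(200, 3))
        y_train = (X_train[:, 2] > 0.8).astype(int)  # toy failure labels
        clf = VotingClassifier([
            ("lr", LogisticRegression()),
            ("rf", RandomForestClassifier(n_estimators=100)),
            ("gb", GradientBoostingClassifier()),
        ]).fit(X_train, y_train)

        window = [[forecast.mean(), forecast.std(), forecast.max()]]
        print("failure expected" if clf.predict(window)[0] else "no failure expected")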

    Stage Configuration for Capital Goods: Supporting Order Capturing in Mass Customization


    Special Topics in Information Technology

    This open access book presents thirteen outstanding doctoral dissertations in Information Technology from the Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy. Information Technology has always been highly interdisciplinary, as many aspects have to be considered in IT systems. The doctoral studies program in IT at Politecnico di Milano emphasizes this interdisciplinary nature, which is becoming more and more important in recent technological advances, in collaborative projects, and in the education of young researchers. Accordingly, the focus of advanced research is on pursuing a rigorous approach to specific research topics starting from a broad background in various areas of Information Technology, especially Computer Science and Engineering, Electronics, Systems and Control, and Telecommunications. Each year, more than 50 PhDs graduate from the program. This book gathers the outcomes of the thirteen best theses defended in 2020-21 and selected for the IT PhD Award. Each of the authors provides a chapter summarizing his/her findings, including an introduction, description of methods, main achievements and future work on the topic. Hence, the book provides a cutting-edge overview of the latest research trends in Information Technology at Politecnico di Milano, presented in an easy-to-read format that will also appeal to non-specialists.
