    Automated analysis of feature models: Quo vadis?

    Feature models have been used since the 1990s to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances and settling the basis of the area of Automated Analysis of Feature Models (AAFM). Since then, different studies have applied the AAFM in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study considering 423 primary sources. We found six different variability facets where the AAFM is being applied that define the tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability analysis; variability modelling; and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most cases. Finally, we present where and when the papers have been published and which authors and institutions are contributing to the field. We observed that the field's maturity is shown by the growth in the number of journal publications over the years, as well as by the diversity of conferences and workshops where papers are published. We also suggest synergies with other areas, such as cloud or mobile computing, that can motivate further research. Funding: Ministerio de Economía y Competitividad TIN2015-70560-R; Junta de Andalucía TIC-186.
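    As a hedged illustration of the kind of question the AAFM automates, the sketch below encodes a toy feature model as propositional constraints and answers two classic analysis operations (counting products and detecting dead features) by brute-force enumeration. Real analysers delegate to SAT, BDD, or CSP solvers; the feature names and constraints here are invented for the example.

```python
# Toy Automated Analysis of Feature Models (AAFM): encode a feature model
# as propositional constraints and analyse it by enumerating assignments.
# Features and constraints are illustrative assumptions, not a real model.
from itertools import product

FEATURES = ["root", "gui", "cli", "logging"]

def valid(cfg):
    """Constraints of the toy model:
    - root is mandatory;
    - gui and cli are alternative children of root (exactly one);
    - logging is optional but requires gui (cross-tree constraint)."""
    if not cfg["root"]:
        return False
    if cfg["gui"] == cfg["cli"]:          # exactly one of gui/cli
        return False
    if cfg["logging"] and not cfg["gui"]: # logging requires gui
        return False
    return True

configs = [dict(zip(FEATURES, bits))
           for bits in product([False, True], repeat=len(FEATURES))]
products = [c for c in configs if valid(c)]

print("number of products:", len(products))   # classic AAFM operation
dead = [f for f in FEATURES if not any(c[f] for c in products)]
print("dead features:", dead)                 # features in no valid product
```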

    Functional Requirements-Based Automated Testing for Avionics

    We propose and demonstrate a method for reducing testing effort in safety-critical software development under DO-178 guidance. We achieve this through the application of Bounded Model Checking (BMC) to formal low-level requirements, in order to automatically generate tests that are good enough to replace existing labor-intensive test-writing procedures while maintaining independence from implementation artefacts. Given that existing manual processes are often empirical and subjective, we begin by formally defining a metric, which extends recognized best practice from code-coverage analysis strategies, to generate tests that adequately cover the requirements. We then formulate the automated test generation procedure and apply its prototype in case studies with industrial partners. In review, the method developed here is demonstrated to significantly reduce the human effort for the qualification of software products under DO-178 guidance.
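    The following sketch illustrates the underlying idea under stated assumptions: each formal low-level requirement is a trigger predicate over a transition system, and executions are unrolled up to a bound (as in BMC) until one activates the requirement, yielding a concrete test vector. The toy Mealy machine and requirements are hypothetical, not the paper's avionics models, and a real tool would use a model checker rather than explicit enumeration.

```python
# Bounded search for requirement-activating tests, in the spirit of BMC:
# unroll executions up to depth `bound` and return the first input prefix
# whose execution activates a requirement. System and requirements are
# illustrative assumptions.
from itertools import product

def step(state, inp):
    """Toy Mealy machine: a saturating counter with reset."""
    if inp == "reset":
        return 0
    return min(state + 1, 3)

REQUIREMENTS = {
    # name: predicate over (pre_state, input, post_state)
    "REQ1_saturation": lambda s, i, s2: s == 3 and i == "inc" and s2 == 3,
    "REQ2_reset":      lambda s, i, s2: i == "reset" and s2 == 0,
}

def generate_test(req, bound=6):
    """Enumerate input sequences up to the unrolling depth `bound` and
    return the shortest prefix whose execution activates the requirement."""
    for k in range(1, bound + 1):
        for seq in product(["inc", "reset"], repeat=k):
            state = 0
            for t, inp in enumerate(seq):
                nxt = step(state, inp)
                if req(state, inp, nxt):
                    return seq[:t + 1]  # witness prefix = concrete test
                state = nxt
    return None

for name, req in REQUIREMENTS.items():
    print(name, "->", generate_test(req))
```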

    An Autonomous Engine for Services Configuration and Deployment

    The runtime management of the infrastructure providing service-based systems is a complex task, to the point where manual operation struggles to be cost-effective. As the functionality is provided by a set of dynamically composed, distributed services, achieving a management objective requires applying multiple operations over the distributed elements of the managed infrastructure. Moreover, the manager must cope with the highly heterogeneous characteristics and management interfaces of the runtime resources. With this in mind, this paper proposes to support the configuration and deployment of services with an automated closed control loop. The automation is enabled by the definition of a generic information model, which captures all the information relevant to the management of the services with the same abstractions, describing the runtime elements, service dependencies, and business objectives. On top of that, a satisfiability-based technique is described that automatically diagnoses the state of the managed environment and obtains the changes required for correcting it (e.g., installation, service binding, update, or configuration). The results from a set of case studies extracted from the banking domain are provided to validate the feasibility of this proposal.
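    A minimal sketch of the satisfiability-based diagnosis idea, under assumed service names and dependencies: the desired state is encoded as CNF clauses over "service X is deployed" variables, a satisfying assignment is found, and its difference from the observed state yields corrective actions such as install or remove. A production engine would use a SAT solver instead of the brute-force search shown here.

```python
# Satisfiability-based diagnosis of a service deployment: clauses encode
# the business objective and service dependencies; the closest satisfying
# assignment is diffed against the observed state to derive corrections.
# Services, clauses, and the observed state are illustrative assumptions.
from itertools import product

SERVICES = ["db", "auth", "payments", "frontend"]
CLAUSES = [
    # CNF over deployment variables; a literal "-x" means "not x"
    ["frontend"],                 # business objective: frontend must run
    ["-frontend", "auth"],        # frontend requires auth
    ["-frontend", "payments"],    # frontend requires payments
    ["-auth", "db"],              # auth requires db
    ["-payments", "db"],          # payments requires db
]

def satisfied(assign, clause):
    """True iff at least one literal of the clause holds under `assign`."""
    return any(assign[l.lstrip("-")] != l.startswith("-") for l in clause)

def diagnose(observed):
    """Return the smallest set of corrective actions (service, target)
    that moves `observed` into a state satisfying all clauses."""
    best = None
    for bits in product([False, True], repeat=len(SERVICES)):
        assign = dict(zip(SERVICES, bits))
        if all(satisfied(assign, c) for c in CLAUSES):
            changes = [(s, assign[s]) for s in SERVICES
                       if assign[s] != observed[s]]
            if best is None or len(changes) < len(best):
                best = changes
    return best

observed = {"db": True, "auth": False, "payments": True, "frontend": False}
for service, target in diagnose(observed):
    print(("install" if target else "remove"), service)
```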

    AHEAD: Automatic Holistic Energy-Aware Design Methodology for MLP Neural Network Hardware Generation in Proactive BMI Edge Devices

    The prediction of a high-level cognitive function based on a proactive brain–machine interface (BMI) control edge device is an emerging technology for improving the quality of life for disabled people. However, maintaining the stability of multiunit neural recordings is made difficult by the nonstationary nature of neurons and can affect the overall performance of proactive BMI control. Thus, it requires regular recalibration to retrain a neural network decoder for proactive control. However, retraining may lead to changes in the network parameters, such as the network topology. In terms of the hardware implementation of the neural decoder for real-time and low-power processing, it takes time to modify or redesign the hardware accelerator. Consequently, handling engineering changes to the low-power hardware design requires substantial human resources and time. To address this design challenge, this work proposes AHEAD: an automatic holistic energy-aware design methodology for multilayer perceptron (MLP) neural network hardware generation in proactive BMI edge devices. By taking a holistic analysis of the proactive BMI design flow, the approach makes judicious use of intelligent bit-width identification (BWID) and configurable hardware generation, which integrate autonomously to generate the low-power hardware decoder. The proposed AHEAD methodology begins with the trained MLP parameters and golden datasets and produces an efficient hardware design in terms of performance, power, and area (PPA) with the least loss of accuracy. The results show that the proposed methodology achieves up to 4X higher performance, 3X lower power consumption, and a 5X reduction in area resources, with identical accuracy, compared to floating-point and half-precision floating-point designs on a field-programmable gate array (FPGA), which makes it a promising design methodology for proactive BMI edge devices.
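    As a hedged sketch of the bit-width identification (BWID) step, the code below starts from trained MLP parameters and a golden dataset, sweeps fixed-point fractional widths from wide to narrow, and keeps the narrowest width whose outputs stay within tolerance of the floating-point reference. The network shape, data, and tolerance are illustrative assumptions; the paper's actual procedure may differ.

```python
# Bit-width identification sketch: quantize trained MLP weights onto a
# fixed-point grid and find the narrowest fractional width that keeps the
# outputs on the golden dataset within tolerance of the float baseline.
# Network, data, and tolerance are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(4, 2))  # toy MLP weights
golden_x = rng.normal(size=(64, 8))                        # golden inputs

def forward(x, w1, w2):
    return np.maximum(x @ w1, 0.0) @ w2                    # ReLU MLP

def quantize(w, frac_bits):
    scale = 2.0 ** frac_bits
    return np.round(w * scale) / scale                     # fixed-point grid

reference = forward(golden_x, W1, W2)                      # float baseline

def identify_bit_width(max_bits=16, tol=1e-2):
    """Return the smallest fractional bit width whose worst-case output
    error against the golden reference stays below `tol`."""
    best = max_bits
    for bits in range(max_bits, 0, -1):
        out = forward(golden_x, quantize(W1, bits), quantize(W2, bits))
        if np.max(np.abs(out - reference)) < tol:
            best = bits
        else:
            break  # narrower widths will only be worse
    return best

print("selected fractional bits:", identify_bit_width())
```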

    Towards Statistical Prioritization for Software Product Lines Testing

    Software Product Lines (SPL) are inherently difficult to test due to the combinatorial explosion of the number of products to consider. To reduce the number of products to test, sampling techniques such as combinatorial interaction testing have been proposed. They usually start from a feature model and apply a coverage criterion (e.g., pairwise feature interaction or dissimilarity) to generate tractable, fault-finding lists of configurations to be tested. Prioritization can also be used to sort/generate such lists, optimizing coverage criteria or weights assigned to features. However, current sampling/prioritization techniques barely take product behavior into account. We explore how ideas from statistical testing, based on a usage model (a Markov chain), can be used to extract configurations of interest according to the likelihood of their executions. These executions are gathered in featured transition systems, a compact representation of SPL behavior. We discuss possible scenarios and give a prioritization procedure illustrated on an example. Comment: Extended version published at VaMoS '14 (http://dx.doi.org/10.1145/2556624.2556635).
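    The sketch below illustrates the statistical-prioritization idea under stated assumptions: a usage model given as a Markov chain over abstract actions, complete executions enumerated with their probabilities, and configurations ranked by the probability mass of the executions their supported actions can realize. The chain and the configuration-to-action mapping are invented for the example; the paper works on featured transition systems rather than this flat encoding.

```python
# Statistical prioritization sketch: rank product configurations by the
# probability mass of the usage-model executions they can run. The Markov
# chain and configurations are illustrative assumptions.

# transition probabilities of the usage model (rows sum to 1)
CHAIN = {
    "start":  {"browse": 0.7, "search": 0.3},
    "browse": {"buy": 0.4, "end": 0.6},
    "search": {"buy": 0.8, "end": 0.2},
    "buy":    {"end": 1.0},
    "end":    {},
}

def paths(state="start", prob=1.0, prefix=()):
    """Enumerate all complete executions with their probabilities."""
    if not CHAIN[state]:
        yield prefix, prob
        return
    for nxt, p in CHAIN[state].items():
        yield from paths(nxt, prob * p, prefix + (nxt,))

# which abstract actions each product configuration supports (assumed)
CONFIGS = {
    "basic":   {"browse", "end"},
    "search+": {"browse", "search", "end"},
    "full":    {"browse", "search", "buy", "end"},
}

def score(supported):
    """Probability mass of the executions this configuration can run."""
    return sum(p for path, p in paths() if set(path) <= supported)

# test the most likely-to-be-used configurations first
for c in sorted(CONFIGS, key=lambda c: score(CONFIGS[c]), reverse=True):
    print(c, round(score(CONFIGS[c]), 3))
```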

    Development of Metrics and Automation for Product Model Verification

    A lot of interest and research has been focused on product quality, and it is recognized as a crucial aspect of engineering. The quality of product models can also be seen as essential in the engineering workflow, especially in systems based on downstream data. Model quality affects not only the model's accuracy and modifiability but also the agility of the whole engineering system. Careful and thorough verification plays an important part in improving product model quality, but verifying product models and designs manually can be a laborious and time-consuming process. By automating parts of the verification process, benefits can be gained in both the turnaround time and the end results of the verification. The goal of this thesis is to develop metrics and automation for product model verification. The metrics are developed by reviewing the literature on model quality metrics and constructing a set of metrics for the company, based on company needs charted partly through interviews. Furthermore, the possibilities of automating product model verification are studied, and a working automated model verification tool is created based on the metrics, intended for use in the current modeling environment. The outcomes of this thesis are a list of product quality dimensions with their corresponding metrics and a customized PTC ModelCHECK check that can automatically identify issues in product models. ModelCHECK was chosen as the platform for the verification tool because the software is readily available to the company, making it a cost-effective way of utilizing automated product model verification in the current design environment.

    Grand Challenges of Traceability: The Next Ten Years

    In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers, representing current work in the community, organized across four process axes of traceability practice. The sessions covered Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which seeks for traceability to be always present, built into the engineering process, and to have "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of software and systems traceability researchers as we move forward into the next decade of research.