
    DMN for Data Quality Measurement and Assessment

    Data quality assessment aims to evaluate the suitability of a dataset for an intended task. The extensive literature on data quality describes various methodologies for assessing data quality by means of data profiling techniques applied to whole datasets. Our work addresses the need to automatically assess the quality level of the individual records of a dataset, a level of detail at which data profiling tools do not provide adequate information. Since it is usually easier to describe when a record has sufficient quality than to calculate a quality indicator, we propose a semi-automatic, business-rule-guided methodology for assessing the data quality of every record. This involves first listing the business rules that describe the data (data requirements), then those describing how to produce measures (business rules for data quality measurement), and finally those defining how to assess the level of data quality of a dataset (business rules for data quality assessment). The main contribution of this paper is the adoption of the OMG standard DMN (Decision Model and Notation) to support the description of data quality requirements and their automatic assessment using existing DMN engines.
    Funding: Ministerio de Ciencia y Tecnología RTI2018-094283-B-C33; Ministerio de Ciencia y Tecnología RTI2018-094283-B-C31; European Regional Development Fund SBPLY/17/180501/00029.
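
    As an illustration of the record-level, rule-guided idea (a minimal Python sketch, not the paper's actual DMN decision tables or a DMN engine), the following evaluates hypothetical business rules per record; the rule names, record fields and verdict thresholds are invented for the example:

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Rule:
            name: str
            check: Callable[[dict], bool]  # True when the record meets the requirement

        # Hypothetical data requirements for a customer record.
        RULES = [
            Rule("email present", lambda r: bool(r.get("email"))),
            Rule("age in range", lambda r: r.get("age") is not None and 0 <= r["age"] <= 120),
        ]

        def assess(record: dict) -> tuple[float, str]:
            """Measurement: fraction of satisfied rules. Assessment: verdict by threshold."""
            score = sum(rule.check(record) for rule in RULES) / len(RULES)
            verdict = "acceptable" if score == 1.0 else "review" if score >= 0.5 else "reject"
            return score, verdict

        print(assess({"email": "a@b.com", "age": 34}))  # (1.0, 'acceptable')
        print(assess({"email": "", "age": 250}))        # (0.0, 'reject')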

    Integrating the common variability language with multilanguage annotations for web engineering

    Web application development involves managing a high diversity of files and resources like code, pages or style sheets, implemented in different languages. To deal with the automatic generation of custom-made configurations of web applications, industry usually adopts annotation-based approaches, even though the majority of studies encourage the use of composition-based approaches to implement Software Product Lines. Recent work tries to combine both approaches to obtain their complementary benefits. However, technological companies are reticent to adopt new development paradigms such as feature-oriented programming or aspect-oriented programming. Moreover, it is extremely difficult, or even impossible, to apply these programming models to web applications, mainly because of their multilingual nature, since their development involves multiple types of source code (Java, Groovy, JavaScript), templates (HTML, Markdown, XML), style sheet files (CSS and its variants, such as SCSS), and other files (JSON, YML, shell scripts). We propose to use the Common Variability Language as a composition-based approach and integrate annotations to manage fine-grained variability of a Software Product Line for web applications. In this paper, we (i) show that existing composition-based and annotation-based approaches, including some well-known combinations, are not appropriate for modeling and implementing the variability of web applications; and (ii) present a combined approach that effectively integrates annotations into a composition-based approach for web applications. We implement our approach and show its applicability with a real-world industrial system.
    Funding: Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
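
    To picture how annotations can cut across a multilingual code base (a toy sketch, not CVL and not the paper's tooling; the "#if" annotation syntax and the comment-prefix map are invented), a resolver only has to respect each language's comment syntax:

        import re

        # Hypothetical per-language comment prefixes carrying the annotations.
        COMMENT = {"java": "//", "js": "//", "css": "/*", "html": "<!--"}

        def resolve(source: str, lang: str, features: set[str]) -> str:
            """Keep or drop blocks delimited by '#if <feature>' / '#endif' annotations."""
            prefix = re.escape(COMMENT[lang])
            out, keep = [], [True]
            for line in source.splitlines():
                if m := re.match(rf"\s*{prefix}\s*#if\s+(\w+)", line):
                    keep.append(keep[-1] and m.group(1) in features)
                elif re.match(rf"\s*{prefix}\s*#endif", line):
                    keep.pop()
                elif keep[-1]:
                    out.append(line)
            return "\n".join(out)

        java_src = """class Cart {
        // #if discounts
            double discount(double p) { return p * 0.9; }
        // #endif
        }"""
        print(resolve(java_src, "java", {"discounts"}))  # keeps the discount method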

    WeaFQAs: A Software Product Line Approach for Customizing and Weaving Efficient Functional Quality Attributes

    Thesis defense date: 10 July 2018. Functional quality attributes (FQAs) are those with a clear implication in the functionality of the system; that is, specific components must be incorporated into the system's software architecture to satisfy its quality attribute requirements. Examples of FQAs are security, usability, and persistence. Modeling these attributes is a complex task. On the one hand, they are composed of many related characteristics; for example, security comprises, among others, authentication, confidentiality, and encryption. They have dependencies among each other; for example, security can be required by usability or persistence. On the other hand, they have many points of variability: one application may require authentication and access control while another may need only encryption. Moreover, their functionality is usually scattered, affecting several components of the system under development. The goal of this thesis is to define an aspect-oriented software product line that allows: (1) modeling the commonalities and variability of FQAs from the earliest stages of the development process; (2) managing the dependencies among FQAs; (3) decoupling the modeling of FQAs from the architecture of the affected application; (4) configuring FQAs according to the requirements of each application, also taking into account non-functional properties such as the performance and energy consumption of each solution; (5) incorporating the configurations into the application architecture automatically; and (6) managing the evolution of FQAs when requirements change in the future. As a result, WeaFQAs, a software process for managing FQAs that covers all the above points, has been defined. Two instantiations of WeaFQAs have been carried out and compared using different variability and modeling languages, together with tool support based on the CVL language.
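
    As a toy illustration of point (2), managing FQA dependencies, the fragment below checks a product configuration against a feature-dependency table; the FQA names come from the abstract, but the table itself is invented and is not WeaFQAs' model:

        # Hypothetical FQA dependency table: feature -> features it requires.
        REQUIRES = {
            "usability": {"security"},
            "encryption": {"security"},
            "access_control": {"security", "authentication"},
        }

        def violations(config: set[str]) -> list[str]:
            """Return the dependency violations of a product configuration."""
            return [f"{f} requires {dep}" for f in config
                    for dep in REQUIRES.get(f, set()) if dep not in config]

        print(violations({"security", "authentication", "access_control"}))  # []
        print(violations({"encryption"}))  # ['encryption requires security']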

    Software product line engineering: a practical experience

    The lack of mature tool support is one of the main reasons why industry is reluctant to adopt Software Product Line (SPL) approaches. A number of systematic literature reviews exist that identify the main characteristics offered by existing tools and the SPL phases in which they can be applied. However, these reviews do not really help to understand whether those tools offer what is really needed to apply SPLs to complex projects, since they are mainly based on information extracted from the tool documentation or published papers. In this paper, we follow a different approach: we first identify those characteristics that are currently essential for the development of an SPL, and then analyze whether or not the tools support those characteristics. We focus on those tools that satisfy certain selection criteria (e.g., they can be downloaded and are ready to be used). The paper presents the state of practice regarding the availability and usability of existing SPL tools, and defines different roadmaps that allow a complete SPL process to be carried out with the existing tool support.
    Funding: Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. Magic P12-TIC1814, HADAS TIN2015-64841-R (co-financed by FEDER funds), MEDEA RTI2018-099213-B-I00 (co-financed by FEDER funds), TASOVA MCIU-AEI TIN2017-90644-RED.

    A document based traceability model for test management

    Software testing has become more complicated with the emergence of distributed networks, real-time environments, third-party software enablers and the need to test systems at multiple integration levels. These scenarios have created more concern over the quality of software testing. The quality of software has been deteriorating due to inefficient and ineffective testing activities. One of the main flaws is the ineffective use of test management to manage software documentation. Within documentation it is difficult to detect and trace bugs across related documents, making traceability the major concern. Various studies have been conducted on test management; however, very few have focused on document traceability, in particular to support error propagation with respect to documentation. The objective of this thesis is to develop a new traceability model that integrates software engineering documents to support test management. The artefacts concerned are requirements, design, source code, test descriptions and test results. The proposed model tackles software traceability in both forward and backward propagation by implementing multi-bidirectional pointers. This platform enables the test manager to navigate and capture a set of related artefacts to support the test management process. A new prototype was developed to facilitate observation of software traceability on all related artefacts across the entire documentation lifecycle. The proposed model was then applied to a case study of a finished software development project with a complete set of software documents, called the On-Board Automobile (OBA). The model was evaluated qualitatively and quantitatively using feature analysis, precision and recall, and expert validation. The evaluation results showed that the proposed model and its prototype were justified and effective in supporting test management.
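
    A minimal sketch of the underlying idea of bidirectional trace links between artefacts (the artefact types come from the abstract; the data structure is our illustration, not the thesis's model):

        from collections import defaultdict

        class TraceGraph:
            """Bidirectional trace links between artefacts (requirement, design, code, test)."""
            def __init__(self):
                self.fwd = defaultdict(set)  # forward pointers, e.g. requirement -> design
                self.bwd = defaultdict(set)  # backward pointers, maintained in lockstep

            def link(self, src: str, dst: str):
                self.fwd[src].add(dst)
                self.bwd[dst].add(src)

            def impacted(self, artefact: str) -> set[str]:
                """Everything reachable forward from a changed artefact."""
                seen, stack = set(), [artefact]
                while stack:
                    for nxt in self.fwd[stack.pop()] - seen:
                        seen.add(nxt)
                        stack.append(nxt)
                return seen

        g = TraceGraph()
        g.link("REQ-1", "DES-1"); g.link("DES-1", "code.c"); g.link("code.c", "TEST-1")
        print(g.impacted("REQ-1"))  # {'DES-1', 'code.c', 'TEST-1'}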

    Executing collaborative business processes on the blockchain: a system

    Nowadays, organizations are pressed to collaborate in order to take advantage of their complementary capabilities and to provide best-of-breed products and services to their customers. To do so, organizations need to manage business processes that span beyond their organizational boundaries; such processes are called collaborative business processes. One of the main roadblocks to implementing collaborative business processes is the lack of trust between the participants. Blockchain provides a decentralized ledger that cannot be tampered with and that supports the execution of programs called smart contracts. These features make it possible to execute collaborative processes between untrusted parties without relying on a central authority. However, implementing collaborative business processes on a blockchain can be cumbersome and error-prone, and requires specialized skills. In contrast, established Business Process Management Systems (BPMSs) provide convenient abstractions for the rapid development of process-oriented applications. This thesis addresses the problem of automating the execution of collaborative business processes on top of blockchain technology in a way that takes advantage of the trust-enhancing capabilities of this technology while offering the development convenience of traditional BPMSs. The thesis also addresses the question of how to support scenarios in which new parties may be onboarded at runtime, and in which parties need the flexibility to change the default routing logic of the business process. We explore architectural approaches and modelling concepts, formulating design principles and requirements that are implemented in a novel blockchain-based BPMS named CATERPILLAR. The CATERPILLAR system supports two methods to implement, execute and monitor blockchain-based processes: compiled and interpreted. It also supports two mechanisms for controlled flexibility: participants can collectively decide on updating the process during its execution, as well as on granting and revoking access to parties.
    https://www.ester.ee/record=b536494
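
    A minimal, self-contained sketch of the core idea (written in Python rather than as an actual smart contract, and not CATERPILLAR's design): the contract's job is to let only the right party execute only the currently enabled task. The process, task names and addresses are invented:

        # Hypothetical two-party ordering process; transitions encode the control flow.
        TRANSITIONS = {
            ("start", "submit_order"): "submitted",   # buyer
            ("submitted", "confirm"):  "confirmed",   # supplier
            ("confirmed", "ship"):     "shipped",     # supplier
        }
        ROLE = {"submit_order": "buyer", "confirm": "supplier", "ship": "supplier"}

        class ProcessContract:
            """What a smart contract enforces: only the right party, only the right task."""
            def __init__(self, parties: dict[str, str]):
                self.state, self.parties = "start", parties  # role -> address

            def execute(self, sender: str, task: str):
                if self.parties.get(ROLE[task]) != sender:
                    raise PermissionError(f"{sender} may not perform {task}")
                if (self.state, task) not in TRANSITIONS:
                    raise ValueError(f"{task} not enabled in state {self.state}")
                self.state = TRANSITIONS[(self.state, task)]

        p = ProcessContract({"buyer": "0xAAA", "supplier": "0xBBB"})
        p.execute("0xAAA", "submit_order")
        p.execute("0xBBB", "confirm")
        print(p.state)  # confirmed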

    Data-driven conceptual modeling: how some knowledge drivers for the enterprise might be mined from enterprise data

    As organizations perform their business, they analyze, design and manage a variety of processes, represented in models of different scope and complexity. Specifying these processes requires a certain level of modeling competence; however, this requirement is often not matched by the capabilities of the person(s) responsible for defining and modeling an organization's or enterprise's operations. On the other hand, an enterprise typically collects records of all events that occur during the operation of its processes. Records such as the start and end of the tasks in a process instance, state transitions of objects impacted by process execution, and the messages exchanged during process execution are maintained in enterprise repositories as various logs: event logs, process logs, effect logs, message logs, etc. Furthermore, the volume of data generated by enterprise process execution has grown manyfold in just a few years. On top of this, models are often considered the dashboard view of an enterprise: they represent an abstraction of the underlying reality of the enterprise, and they serve as knowledge drivers through which an enterprise can be managed. Data-driven extraction offers the capability to mine these knowledge drivers from enterprise data and to leverage the mined models to establish the set of enterprise data that conforms with the desired behaviour. This thesis aims to generate models, or knowledge drivers, from enterprise data to enable a dashboard view of the enterprise that supports analysts. The rationale starts from the requirement to improve an existing process or to create a new one; models can also serve as a collection of effectors through which an organization or enterprise can be managed. The enterprise data referred to above are process logs, effect logs, message logs, and invocation logs. The approach in this thesis is to mine these logs to generate process, requirements, and enterprise architecture models, and to discover how goals get fulfilled based on collected operational data. The research question has been formulated as follows: is it possible to derive the knowledge drivers from the enterprise data that represent the running operation of the enterprise, or, in other words, is it possible to use the available data in the enterprise repository to generate the knowledge drivers? Chapter 2 reviews the literature that provides the background knowledge needed to explore this research question. Chapter 3 presents how process semantics can be mined. Chapter 4 suggests a way to extract a requirements model. Chapter 5 presents a way to discover the underlying enterprise architecture, and Chapter 6 presents a way to mine how goals get orchestrated. The overall findings are discussed in Chapter 7 to derive some conclusions.
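
    The thesis mines models from such logs; as a minimal generic illustration of the idea (a standard directly-follows relation over a toy event log, not the thesis's own algorithms), consider:

        from collections import Counter

        # Toy event log: one list of task names per process instance (trace).
        log = [
            ["receive", "check", "approve", "archive"],
            ["receive", "check", "reject"],
            ["receive", "check", "approve", "archive"],
        ]

        # Directly-follows relation: how often task a is immediately followed by task b.
        df = Counter((a, b) for trace in log for a, b in zip(trace, trace[1:]))

        for (a, b), n in sorted(df.items(), key=lambda kv: -kv[1]):
            print(f"{a} -> {b}: {n}")
        # receive -> check: 3, check -> approve: 2, approve -> archive: 2, check -> reject: 1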

    Model-based integration testing technique using formal finite state behavioral models for component-based software

    Many issues and challenges can be identified when considering integration testing of Component-Based Software Systems (CBSS). Consequently, several research efforts have appeared in the literature aimed at facilitating the integration testing of CBSS. Unfortunately, they suffer from a number of drawbacks and limitations, such as difficulty in understanding and describing the behavior of integrated components, lack of an effective formalism for test information, difficulty in analyzing and validating the integrated components, and exposure of the components' implementation through semi-formal models. These problems have made such approaches ineffective for testing today's modern, complex CBSS. To address these problems, a model-based approach such as Model-Based Testing (MBT) tends to be a suitable mechanism and could be a potential solution to be applied in the context of integration testing of CBSS. Accordingly, this thesis presents a model-based integration testing technique for CBSS. Firstly, a method to extract the formal finite state behavioral models of integrated software components using Mealy machine models was developed. The extracted formal models were used to detect faulty interactions (integration bugs) or compositional problems between the integrated components of a system. Based on the experimental results, the proposed method significantly reduced the number of output queries required to extract the formal models of integrated software components, performing 50% better than existing methods. Secondly, based on the extracted formal models, an effective model-based integration testing technique (MITT) for CBSS was developed. Finally, the effectiveness of the MITT was demonstrated by employing it in the air gourmet and elevator case studies, using three evaluation parameters. The experimental results showed that the MITT was effective and outperformed the Shahbaz technique on both case studies: for the air gourmet and elevator case studies respectively, its results were better by 98.14% and 100% in terms of learned components, by 42.13% and 25.01% in output-query performance, and by 70.62% and 75% in error detection capability.
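
    A minimal sketch of the formalism the technique builds on, a Mealy machine, where each (state, input) pair yields an output and a next state; the machine below is a toy example, not a model from the thesis:

        # Toy Mealy machine for a component: (state, input) -> (output, next state).
        MEALY = {
            ("idle", "open"):  ("ack",  "open"),
            ("open", "read"):  ("data", "open"),
            ("open", "close"): ("ack",  "idle"),
        }

        def run(machine: dict, inputs: list[str], state: str = "idle") -> list[str]:
            """Feed an input sequence (an output query) and collect the outputs."""
            outputs = []
            for symbol in inputs:
                out, state = machine[(state, symbol)]
                outputs.append(out)
            return outputs

        # Learned machines of two components can then be composed and their
        # interaction traces checked to reveal faulty interactions (integration bugs).
        print(run(MEALY, ["open", "read", "close"]))  # ['ack', 'data', 'ack']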

    Combining Multiple Granularity Variability in a Software Product Line Approach for Web Engineering

    Context: Web engineering involves managing a high diversity of artifacts implemented in different languages and with different levels of granularity. Technological companies usually implement the variable artifacts of Software Product Lines (SPLs) using annotations, and are reluctant to adopt hybrid, often complex, approaches combining composition and annotations despite their benefits. Objective: This paper proposes a combined approach to support fine- and coarse-grained variability for web artifacts. The proposal allows web developers to continue using annotations to handle fine-grained variability for those artifacts whose variability is very difficult to implement with a composition-based approach, while obtaining the advantages of the composition-based approach for the coarse-grained variable artifacts. Methods: A combined approach based on feature modeling that integrates annotations into a generic composition-based approach. We propose the definition of compositional and annotative variation points with custom-defined semantics, which are resolved by a scaffolding-based derivation engine. The approach is evaluated on a real-world web-based SPL by applying a set of variability metrics, and by discussing its quality criteria in comparison with existing annotative, compositional, and combined approaches. Results: Our approach effectively handles both fine- and coarse-grained variability. The mapping between the feature model and the web artifacts promotes the traceability of the features and the uniformity of the variation points regardless of the granularity of the web artifacts. Conclusions: Using well-known SPL techniques from an architectural point of view, such as feature modeling, can improve the design and maintenance of variable web artifacts without the need to introduce complex approaches for implementing the underlying variability.
    Funding: The work of the authors from the Universidad de Málaga is supported by the projects Magic P12-TIC1814 (post-doctoral research grant), MEDEA RTI2018-099213-B-I00 (co-financed by FEDER funds), Rhea P18-FR-1081 (MCI/AEI/FEDER, UE), LEIA UMA18-FEDERIA-157, TASOVA MCIU-AEI TIN2017-90644-REDT, and the European Union's H2020 research and innovation program under grant agreement DAEMON 101017109. The work of the authors from the Universidade da Coruña has been funded by MCIN/AEI/10.13039/501100011033 NextGenerationEU/PRTR FLATCITY-POC PDC2021-121239-C31; MCIN/AEI/10.13039/501100011033 EXTRACompact PID2020-114635RB-I00; GAIN/Xunta de Galicia/ERDF CEDCOVID COV20/00604; Xunta de Galicia/FEDER-UE GRC ED431C 2021/53; MICIU/FEDER-UE BIZDEVOPSGLOBAL RTI-2018-098309-B-C32; and MCIN/AEI/10.13039/501100011033 MAGIST PID2019-105221RB-C41.
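
    A toy sketch of the combined idea: coarse-grained variability resolved by composing whole artifacts per selected feature, and fine-grained variability resolved by annotations inside those artifacts. The file layout and the "@feature:" annotation syntax are invented for the example and are not the paper's notation:

        # Coarse-grained: feature -> whole web artifacts it contributes.
        ARTIFACTS = {
            "base":    ["app.js"],
            "payment": ["payment.js"],
        }

        def resolve_annotations(text: str, features: set[str]) -> str:
            """Fine-grained: drop any line tagged '@feature:NAME' when NAME is unselected."""
            keep = lambda line: ("@feature:" not in line
                                 or line.split("@feature:")[1].split()[0] in features)
            return "\n".join(line for line in text.splitlines() if keep(line))

        def derive(features: set[str], sources: dict[str, str]) -> dict[str, str]:
            """Compose artifacts for the selected features, then resolve annotations."""
            return {artifact: resolve_annotations(sources[artifact], features)
                    for feature in features for artifact in ARTIFACTS.get(feature, [])}

        sources = {"app.js": "function init() {}\ntrack()  // @feature:analytics",
                   "payment.js": "function pay() {}"}
        print(derive({"base"}, sources)["app.js"])               # track() line dropped
        print(derive({"base", "analytics"}, sources)["app.js"])  # track() line kept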