
    A systematic literature review on the code smells datasets and validation mechanisms

    The accuracy reported for code smell-detecting tools varies depending on the dataset used to evaluate them. Our survey of 45 existing datasets reveals that a dataset's adequacy for smell detection depends strongly on properties such as its size, severity levels, project types, the number of each type of smell, the total number of smells, and the ratio of smelly to non-smelly samples. Most existing datasets support God Class, Long Method, and Feature Envy, while six smells in Fowler and Beck's catalog are not supported by any dataset. We conclude that existing datasets suffer from imbalanced samples, lack of severity-level support, and restriction to the Java language.

    LASSO – an observatorium for the dynamic selection, analysis and comparison of software

    Mining software repositories at the scale of 'big code' (i.e., big data) is a challenging activity. As well as finding a suitable software corpus and making it programmatically accessible through an index or database, researchers and practitioners have to establish an efficient analysis infrastructure and precisely define the metrics and data extraction approaches to be applied. Moreover, for analysis results to be generalisable, these tasks have to be applied at a large enough scale to have statistical significance, and if they are to be repeatable, the artefacts need to be carefully maintained and curated over time. Today, however, a lot of this work is still performed by human beings on a case-by-case basis, with the level of effort involved often having a significant negative impact on the generalisability and repeatability of studies, and thus on their overall scientific value. The general-purpose 'code mining' repositories and infrastructures that have emerged in recent years represent a significant step forward because they automate many software mining tasks at an ultra-large scale and allow researchers and practitioners to focus on defining the questions they would like to explore at an abstract level. However, they are currently limited to static analysis and data extraction techniques, and thus cannot support (i.e., help automate) any studies which involve the execution of software systems. This includes experimental validations of techniques and tools that hypothesise about the behaviour (i.e., semantics) of software, or data analysis and extraction techniques that aim to measure dynamic properties of software. In this thesis a platform called LASSO (Large-Scale Software Observatorium) is introduced that overcomes this limitation by automating the collection of dynamic (i.e., execution-based) information about software alongside static information. It features a single, ultra-large-scale corpus of executable software systems, created by amalgamating existing Open Source software repositories, and a dedicated DSL for defining abstract selection and analysis pipelines. Its key innovations are integrated capabilities for searching for and selecting software systems based on their exhibited behaviour, and an 'arena' that allows their responses to software tests to be compared in a purely data-driven way. We call the platform a 'software observatorium' since it is a place where the behaviour of large numbers of software systems can be observed, analysed and compared.
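
    To make the 'arena' idea concrete, the following minimal Python sketch (invented for illustration; it is not LASSO's DSL or API) runs several candidate implementations against the same test inputs and groups them purely by the responses they produce:

        # Toy sketch of data-driven behaviour comparison (an "arena").
        # All names here are hypothetical, not taken from LASSO.
        from collections import defaultdict

        def arena(candidates, test_inputs):
            """Group candidates whose observable responses coincide."""
            behaviour_groups = defaultdict(list)
            for name, impl in candidates.items():
                responses = tuple(impl(x) for x in test_inputs)
                behaviour_groups[responses].append(name)
            return dict(behaviour_groups)

        # Three string-reversal candidates, one behaviourally deviant.
        candidates = {
            "rev_slice": lambda s: s[::-1],
            "rev_join": lambda s: "".join(reversed(s)),
            "rev_broken": lambda s: s,
        }
        print(arena(candidates, ["abc", "hello"]))
        # {('cba', 'olleh'): ['rev_slice', 'rev_join'],
        #  ('abc', 'hello'): ['rev_broken']}

    Selecting software by exhibited behaviour then amounts to keeping only the group whose responses match the expected outputs of a reference test.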

    Automated Black-box Testing of Mass Assignment Vulnerabilities in RESTful APIs

    Mass assignment is one of the most prominent vulnerabilities in RESTful APIs. It originates from a misconfiguration in common web frameworks that allows attackers to exploit naming conventions and automatic binding to craft malicious requests that (massively) override data that is supposed to be read-only. In this paper, we adopt a black-box testing perspective to automatically detect mass assignment vulnerabilities in RESTful APIs. Execution scenarios are generated purely from the OpenAPI specification, which lists the available operations and their message formats. Clustering is used to group similar operations and reveal read-only fields; the latter are candidates for mass assignment. Test interaction sequences are then generated automatically by instantiating abstract testing templates, with the aim of using the identified read-only fields to carry out a mass assignment attack. The test interactions are run, and their execution is assessed by a specific oracle that reveals whether the vulnerability could be successfully exploited. The proposed novel approach has been implemented and evaluated on a set of case studies written in different programming languages. The evaluation highlights that the approach is quite effective in detecting seeded vulnerabilities, with remarkably high accuracy.
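
    A minimal sketch of the read-only-field idea, assuming request and response field sets have already been extracted from the OpenAPI specification (the function and data below are illustrative, not the paper's implementation):

        # Heuristic: fields that operations return but never accept as
        # input are candidates for a mass assignment probe.
        def candidate_readonly_fields(request_schemas, response_schemas):
            """Each argument maps an operation to its set of field names."""
            writable = set().union(*request_schemas.values())
            returned = set().union(*response_schemas.values())
            return returned - writable

        requests = {"POST /users": {"name", "email"}}
        responses = {"GET /users/{id}": {"id", "name", "email", "role"}}
        print(sorted(candidate_readonly_fields(requests, responses)))
        # ['id', 'role'] -> inject these into a POST /users body and let
        # the oracle check whether the API persisted them.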

    Detection of microservice smells through static analysis

    The microservices architecture stands as a beacon of promise in the software landscape, drawing developers and companies towards its compelling principles. Its appeal lies in the potential for improved scalability, flexibility, and agility, aligning with the ever-evolving demands of the digital age. However, navigating the intricacies of microservices can be a challenging task, especially as this field continues to evolve. A key challenge arises from the inherent complexity of microservices, where their sheer number and interdependencies can introduce new layers of intricacy. Furthermore, the rapid expansion of microservices, coupled with the need to harness their advantages effectively, demands a deeper understanding of the potential pitfalls and issues that may emerge. To truly unlock the benefits of microservices, it is essential to address these challenges head-on and ensure a successful journey in the world of microservices development and adoption. The present document explores microservice architecture smells, which play an important role in the technical debt associated with microservices. 
It embarks on a comprehensive research exploration, delving into the realm of microservice smells. This research serves as the cornerstone for enhancing a microservice smell catalogue, and draws data from two primary sources: a systematic mapping study and an industry survey. The latter involved 31 seasoned professionals with substantial experience in the field of microservices. Moreover, the development and enhancement of a tool specifically designed to identify and address issues related to microservices is described. This tool is aimed at improving developers' performance throughout the development and implementation of a microservices architecture. Finally, the document includes an evaluation of the tool's performance: a comparative analysis conducted before and after the tool's enhancements. The tool's effectiveness is assessed using the same microservice benchmark as previously employed, in addition to another benchmark, to ensure a comprehensive evaluation.
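
    As a flavour of what static microservice-smell detection can look like (purely illustrative; not the thesis tool's implementation), the sketch below flags hard-coded service endpoints, a commonly catalogued microservice smell, by scanning source files:

        # Illustrative detector for the "hard-coded endpoint" smell:
        # service URLs embedded in code instead of resolved via
        # configuration or service discovery.
        import re
        from pathlib import Path

        ENDPOINT = re.compile(r"https?://[\w.-]+(?::\d+)?")

        def find_hardcoded_endpoints(root, glob="*.java"):
            hits = []
            for path in Path(root).rglob(glob):
                text = path.read_text(errors="ignore")
                for lineno, line in enumerate(text.splitlines(), 1):
                    if ENDPOINT.search(line):
                        hits.append((str(path), lineno, line.strip()))
            return hits

        for path, lineno, line in find_hardcoded_endpoints("services/"):
            print(f"{path}:{lineno}: {line}")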

    Resource Allocation Modeling Framework to Refactor Software Design Smells

    The study of design flaws in software has created ample opportunity for researchers. These design flaws, i.e., code smells, hinder the quality of software in many ways. Once a smell is detected, the affected segment of the software has to be passed through refactoring steps to remove it. To better understand how smells arise and are removed, the authors model the smell detection mechanism using an NHPP (non-homogeneous Poisson process) framework. The authors further investigate how many resources, i.e., how much effort, should be allotted to the various code smell categories. For this purpose they develop an optimization problem, which is validated on a real-life smell data set from an open-source software system. The obtained results are in an acceptable range and justify the applicability of the model.
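
    The abstract does not state which NHPP form the authors use; as one standard instance, a Goel-Okumoto-style mean value function for the expected number of smells detected by time t, together with a matching effort-allocation problem over smell categories, could look like this:

        % Illustrative Goel-Okumoto NHPP: a = expected total smells,
        % b = detection rate, m(t) = expected detections by time t.
        \[ m(t) = a\left(1 - e^{-bt}\right), \qquad
           \lambda(t) = m'(t) = a b\, e^{-bt} \]

        % A possible allocation of an effort budget W across n smell
        % categories, maximising total expected removals:
        \[ \max_{w_1,\dots,w_n} \sum_{i=1}^{n} a_i\left(1 - e^{-b_i w_i}\right)
           \quad \text{s.t.} \quad \sum_{i=1}^{n} w_i \le W,\; w_i \ge 0 \]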

    A Reference Structure for Modular Model-based Analyses

    Context: In this work, we investigated the evolvability, understandability, and reusability of model-based analyses. To that end, we examined the interrelationships between models and analyses, in particular the structure and dependencies of artefacts and the decomposition and composition of model-based analyses.

    Challenges: Software developers use models of software systems to assess the evolvability and reusability of an architectural design. These models make it possible to analyse the software architecture before the first line of code is written. Due to evolutionary changes, however, model-based analyses are also prone to degradation of evolvability, understandability, and reusability. These problems can be traced back to the co-evolution of modelling language and analysis. The purpose of an analysis is the systematic examination of certain properties of a system under study. Suppose, for example, that software developers want to analyse new properties of a software system. In that case, they must adapt features of the modelling language and the corresponding model-based analyses before they can analyse the new properties. Features of a model-based analysis are, for instance, an analysis technique that analyses such a quality property. Such changes increase the complexity of the model-based analyses and thus make them hard to maintain. This growing complexity reduces the understandability of the model-based analyses. As a result, development cycles lengthen, and software developers need more time to adapt the software system to changing requirements.

    State of the art: Current approaches enable the coupling of analyses on one system or across distributed systems. These approaches provide the technical structure for coupling simulations, but not a structure for how components can be (de)composed. A further challenge when composing analyses is the behavioural aspect, i.e., how the analysis components influence one another. Because every participating simulation must be synchronised, the modularisation of simulations increases the communication demand. Current approaches allow this communication overhead to be reduced; however, they leave decomposition and composition to the user.

    Contributions: The goal of this work is to improve the evolvability, understandability, and reusability of model-based analyses. To this end, the reference architecture for domain-specific modelling languages is taken as a basis, and the transferability of its structure to model-based analyses is investigated. The layered reference architecture maps the dependencies of analysis functions and analysis components by assigning them to specific layers. We developed three processes for applying the reference architecture: (i) refactoring an existing model-based analysis, (ii) designing a new model-based analysis, and (iii) extending an existing model-based analysis.
    In addition to the reference architecture for model-based analyses, we identified recurring structures that lead to problems with evolvability, understandability, and reusability; in the literature, such recurring structures are also called bad smells. We examined established model-based analyses and identified and specified thirteen bad smells. Besides specifying these bad smells, we provide a process for identifying them automatically and strategies for refactoring them, so that developers can avoid or fix them. In this work we also developed a modelling language for specifying the structure and behaviour of simulation components. Simulations are analyses used to study a system when experimenting with the existing system would be too time-consuming, too expensive, too dangerous, or simply impossible because the system does not (yet) exist. Developers can use the specification to compare simulation components and thereby identify identical components.

    Validation: We evaluated the reference architecture for model-based analyses by transferring four model-based analyses into the reference architecture. We chose a scenario-based evaluation that derives historical change scenarios from the repositories of the model-based analyses. The evaluation shows that evolvability and understandability improve, as determined by measures of complexity, coupling, and cohesion. The metrics we used originate from information theory but have already been used to evaluate the reference architecture for DSMLs. We evaluated the bad smells that arise from the co-dependency of model-based analyses and their corresponding DSMLs by searching four model-based analyses for occurrences of our bad smells and then fixing the bad smells we found. Here, too, we chose a scenario-based evaluation deriving historical change scenarios from the analyses' repositories. We show that the bad smells negatively affect evolvability and understandability by determining complexity, coupling, and cohesion before and after refactoring. We evaluated the approach for specifying and finding components of model-based analyses by specifying components of two model-based analyses and using our search algorithm to find similar analysis components. The evaluation results show that we are able to find similar analysis components and that our approach enables searching for analysis components with similar structure and behaviour, and thus the reuse of such components.

    Benefits: The contributions of our work support architects and developers in their daily work of building maintainable and reusable model-based analyses. To this end, we provide a reference architecture that aligns the model-based analysis with the domain-specific modelling language, thereby easing co-evolution. In addition to the reference architecture, we also provide refactoring operations that enable architects and developers to adapt an existing model-based analysis to the reference architecture. Beyond this technical aspect, we identified three processes that enable architects and developers to develop a new model-based analysis, modularise an existing model-based analysis, and extend an existing model-based analysis, in such a way that the results conform to the reference architecture. Furthermore, our specification enables developers to compare existing simulation components and, where appropriate, reuse them. This saves developers from re-implementing components.

    A leadership model for DevOps adoption within software intensive organisations

    The research, undertaken in organisational environments with an IT-oriented culture and highly structured processes, outlines the challenges and benefits associated with the adoption of Agile, Lean, and DevOps practices and principles. Realising the adoption of DevOps practices and principles is no longer restricted to technology-specific skills. Studies indicate that successful DevOps adoption is part of a continuous organisational transformation at various levels, including a shift in cultural and behavioural patterns, process-driven perspectives, and toolchain usage readiness. There are also DevOps models that suggest an adoption roadmap for organisations to follow through the transitional path from existing highly structured processes to agile and lean approaches. However, there is a considerable lack of validated adoption models that are inclusive of leadership styles, traits, and characteristics and their connection to adoption success or failure. This thesis explains product development approaches and, using a mixed-methods approach, aims to provide evidence supporting the answers to three research questions. Data were collected through thirty interviews with industry practitioners from ten countries working in nine different industry sectors; almost two-thirds of the interviewees had practiced DevOps. A set of agile, lean, and DevOps practices and principles that organisations choose to include in their DevOps adoption journeys was identified. The most frequently adopted structured service management practices contributing to DevOps adoption success indicate that those in software development and operations roles in DevOps-oriented organisations benefit from the existence of highly structured service management approaches such as ITIL®. Furthermore, coded themes were generated from the thirty interviews to expand the understanding of relevant factors and produce the structure of an online survey. The analysis and evaluation of the online survey (n=250) confirmed some of the initial findings of the interviews and expanded viewpoints on other perceived outcomes. Of the 250 participants, 81% had 10+ years of professional experience, two-thirds were practicing DevOps, 73% were from Europe, and 76% had held previous leadership positions. The aim of the survey was to unveil leadership-specific observations on characteristics and factors that would indicate the reasoning behind the challenges faced by organisations transitioning to DevOps. The research questions revolved around (R1) an understanding of how productivity can be improved for software product development teams; the findings indicated that a specific set of service management, project management, and product management practices and principles must be taken into account. Furthermore, evidence from the qualitative and quantitative studies confirmed (R2) that DevOps-oriented organisations have mainly preferred to extend previously adopted structured approaches such as ITIL® v3. The online survey produced significant evidence (R3.a) of industry practitioners' desire to have a leadership role for the purposes of DevOps practice and principle adoption. (R3.b) The emergent leadership style pertinent to the transition of IT-focused organisations to DevOps condensed to a linkage of transformational and servant leadership. 
    The observations from the confirmatory study of the online survey (n=250) contributed to the design and development of a conceptual model. The conceptual model was validated using PLS-SEM to improve understanding of the significance and predictive power of construct validity and the corresponding manifest variables. The final, model-evaluation research stage of three focus group interviews (n=19) indicated industry practitioner consensus on the validated model in the range of 70%-79%. The thesis outcomes formulate a leadership model for the fulfilment of DevOps adoption within organisations. They also aim to support the transitional efforts towards DevOps, and the commitment of software-intensive organisations, i.e., enterprises with an IT organisation. In this way, it becomes possible to enhance the competence level of an organisation's adoption capability, guide DevOps adoption leadership through its upskilling journey, and achieve the cultural shift of mindset needed to enable continuous and habitual change.

    Using Word Embedding and Convolution Neural Network for Bug Triaging by Considering Design Flaws

    Resolving bugs during the maintenance phase of software is a complicated task, and bug assignment is one of its main subtasks. Some bugs cannot be fixed properly without making design decisions and have to be assigned to designers, rather than programmers, to avoid introducing bad smells that may cause subsequent bug reports. Hence, it is important to refer such bugs to a designer to check for possible design flaws. To the best of our knowledge, few works have considered referring bugs to designers, so this issue is addressed in this work. In this paper, a dataset is created, and a CNN-based model is proposed to predict the need for assigning a bug to a designer by learning the peculiarities of bug reports that are effective in creating bad smells in the code. The features of each bug are extracted by the CNN from its textual features, such as the summary and description. The bug's label is determined by the number of bad smells added to the code during the fixing process, as measured with the PMD tool. The summary and description of a new bug are given to the model, and the model predicts whether it needs to be referred to the designer. An accuracy of 75% or more was achieved for datasets with a sufficient number of samples for training a deep learning-based model. The model's efficiency in predicting referrals to the designer at the time a bug report is received was demonstrated by testing it on 10 projects.
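
    A generic text CNN of the kind the abstract describes might be sketched as follows in Keras (the architecture and hyperparameters are illustrative assumptions, not the paper's exact model):

        # Illustrative text CNN for "does this bug need a designer?".
        import numpy as np
        from tensorflow.keras import layers, models

        VOCAB_SIZE, MAX_LEN, EMBED_DIM = 10_000, 200, 100

        model = models.Sequential([
            layers.Input(shape=(MAX_LEN,)),
            layers.Embedding(VOCAB_SIZE, EMBED_DIM),   # word embeddings
            layers.Conv1D(128, 5, activation="relu"),  # n-gram detectors
            layers.GlobalMaxPooling1D(),
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),     # P(refer to designer)
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])

        # Dummy stand-ins for tokenised summary+description texts,
        # labelled 1 when the fix introduced new smells (e.g. per PMD).
        X = np.random.randint(0, VOCAB_SIZE, size=(32, MAX_LEN))
        y = np.random.randint(0, 2, size=(32, 1))
        model.fit(X, y, epochs=1, verbose=0)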

    Road to Microservices

    This document intends to elucidate the complexity of microservices decomposition and its inherent implementation process. Developments in technology and design, achieving higher performance or reliability, or lowering costs are valid reasons to consider controlling a product's quality by guaranteeing its conformance with established characteristics and standards. Control is made possible by adding quality control, inspection routines, and trend analysis to a manufacturing process; these techniques are well established in the Quality field, both academically and in industry. Repeat Part Management (RPM) is software that allows users to apply these techniques efficiently and has brought value to the company. However, RPM has been growing, and issues have emerged due to customer needs: accumulated technical debt. Such ever-growing modifications are common across different business areas, and research on software architectures has evolved alongside them. The demand for highly modifiable, rapidly developed, and independent systems has resulted in the adoption of microservices. Because of the characteristics of microservices, decomposing existing systems is a central concern, which encourages research into decomposition approaches and techniques. This architecture calls for legacy system analysis to map current functionalities and the dependencies between components. Furthermore, key concepts in a microservices architecture are researched and implemented in a new system that encompasses Repeat Part Management's functionalities. This thesis explores the evolution to a microservices architecture in an already defined, mature domain in quality assessment.

    Data Science Techniques for Modelling Execution Tracing Quality

    This research presents how to handle a research problem when the research variables are still unknown and no quantitative study is yet possible: how to identify the research variables so that quantitative research becomes possible, how to collect data by means of the identified research variables, and how to carry out modelling with consideration for the specificities of the problem domain. In addition, validation is also encompassed in the scope of modelling in the current study. Thus, the work presented in this thesis comprises the typical stages a complex data science problem requires, including qualitative and quantitative research, data collection, modelling of vagueness and uncertainty, and the leverage of artificial intelligence to gain insights that are impossible with traditional methods. The problem domain of the research encompasses software product quality modelling and assessment, with a particular focus on execution tracing quality. The terms execution tracing and logging are used interchangeably throughout the thesis. The research methods and mathematical tools used make it possible to consider the uncertainty and vagueness inherently associated with the quality measurement and assessment process, through which reality can be approximated more appropriately than with plain statistical modelling techniques. Furthermore, the modelling approach offers direct insights into the problem domain through the application of linguistic rules, which is an additional advantage. The thesis reports (1) an in-depth investigation of all the identified software product quality models, (2) a unified summary of the identified software product quality models with their terminologies and concepts, (3) the identification of the variables influencing execution tracing quality, (4) the quality model constructed to describe execution tracing quality, and (5) the link between the constructed quality model and the quality model of the ISO/IEC 25010 standard, with the possibility of tailoring to specific project needs. Further work, outside the frame of this PhD thesis, would also be useful, as presented in the study: (1) defining application-project profiles to assist in tailoring the quality model for execution tracing to specific application and project domains, and (2) approximating the present quality model for execution tracing, within defined bounds, by simpler mathematical approaches. In conclusion, the research contributes to (1) supporting the daily work of software professionals who need to analyse execution traces; (2) raising awareness that execution tracing quality has a huge impact on software development, software maintenance, and the professionals involved in the different stages of the software development life-cycle; (3) providing a framework in which present endeavours for log improvement can be placed; and (4) suggesting an extension of the ISO/IEC 25010 standard by linking the constructed quality model to it. In addition, within the scope of the qualitative research methodology, the current PhD thesis contributes to the knowledge of research methods by determining a saturation point in the course of the data collection process.
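
    The 'linguistic rules' mentioned above are characteristic of fuzzy inference. As a toy illustration only (the variables and the rule are invented, not the thesis's actual model), a rule such as "IF completeness is high AND consistency is high THEN trace quality is high" can be evaluated like this:

        # Toy fuzzy-rule evaluation in the spirit of linguistic rules.
        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def quality_is_high(completeness, consistency):
            comp_high = tri(completeness, 0.4, 1.0, 1.6)  # degree "high"
            cons_high = tri(consistency, 0.4, 1.0, 1.6)
            # AND is taken as min, as in Mamdani-style inference.
            return min(comp_high, cons_high)

        print(quality_is_high(0.9, 0.7))  # 0.5: quality is "fairly high"

    Unlike a crisp threshold, the result is a degree of truth, which is how the vagueness inherent in quality assessment is retained.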