On the real world practice of Behaviour Driven Development
Surveys of industry practice over the last decade suggest that Behaviour Driven Development (BDD) is a popular Agile practice. For example, 19% of respondents to the 14th State of Agile annual survey reported using BDD, placing it among the top 13 practices reported. Alongside its potential benefits, the adoption of BDD necessarily involves the additional cost of writing and maintaining Gherkin features and scenarios and, if used for acceptance testing, the associated step functions. Yet there is a lack of published literature exploring how BDD is used in practice and the challenges experienced by real-world software development efforts. This gap is significant because, without understanding current real-world practice, it is hard to identify opportunities to address and mitigate challenges. To address this research gap concerning the challenges of using BDD, this thesis reports on a research project which explored: (a) the challenges of applying agile and undertaking requirements engineering in a real-world context; (b) the challenges of applying BDD specifically; and (c) the application of BDD in open-source projects, to understand the challenges in this different context.
For this purpose, we progressively conducted two case studies, two series of interviews, four iterations of action research, and an empirical study. The first case study was conducted in an avionics company to discover the challenges of using an agile process in a large-scale, safety-critical project environment. Since requirements management was found to be one of the biggest challenges during that case study, we decided to investigate BDD because of its reputation for requirements management. The second case study was conducted in the same company, with the aim of discovering the challenges of using BDD in a real-life setting. The case study was complemented with an empirical study of the practice of BDD in open-source projects, using a study sample drawn from the GitHub open-source collaboration site.
As a result of this PhD research, we were able to discover: (i) the challenges of using an agile process in a large-scale, safety-critical organisation, (ii) the current state of BDD in practice, (iii) the technical limitations of Gherkin (i.e., the language for writing requirements in BDD), (iv) the challenges of using BDD in a real project, and (v) bad smells in the Gherkin specifications of open-source projects on GitHub. We also present a brief comparison between the theoretical description of BDD and BDD in practice. This research therefore presents lessons learned from BDD in practice and serves as a guide for software practitioners planning to use BDD in their projects.
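The step functions mentioned above are what bind each Gherkin step to executable code, and they are a large part of the maintenance cost discussed in this abstract. As a minimal, hypothetical sketch (not taken from the thesis), assuming the Python behave library, a scenario and its step functions might look like this:

```python
# features/steps/login_steps.py -- hypothetical example, not from the thesis.
# The Gherkin scenario it binds to (features/login.feature) would read:
#
#   Feature: Account login
#     Scenario: Successful login with valid credentials
#       Given a registered user "alice" with password "s3cret"
#       When the user logs in with password "s3cret"
#       Then the login succeeds
#
from behave import given, when, then


@given('a registered user "{username}" with password "{password}"')
def step_register_user(context, username, password):
    # In a real suite this would set up the system under test; here we keep
    # an in-memory stand-in so the example stays self-contained.
    context.users = {username: password}
    context.username = username


@when('the user logs in with password "{password}"')
def step_attempt_login(context, password):
    expected = context.users.get(context.username)
    context.login_ok = (expected == password)


@then('the login succeeds')
def step_check_login(context):
    assert context.login_ok, "expected the login to succeed"
```

Running the `behave` command would then execute the scenario and report which steps pass; every change to the feature text or the application implies corresponding maintenance of such step functions.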
Requirements Traceability: Recovering and Visualizing Traceability Links Between Requirements and Source Code of Object-oriented Software Systems
Requirements traceability is an important activity for achieving effective requirements management in requirements engineering. Requirement-to-Code Traceability Links (RtC-TLs) capture the relations between requirement and source code artifacts. RtC-TLs help engineers know which parts of the source code implement a specific requirement. In addition, these links help engineers maintain a correct mental model of the software and reduce the risk of code-quality degradation when requirements change over time, particularly in large and complex software. However, manually recovering and preserving these TLs places an additional burden on engineers and is an error-prone, tedious, and costly task. This paper introduces YamenTrace, an automatic approach and implementation to recover and visualize RtC-TLs in object-oriented software based on Latent Semantic Indexing (LSI) and Formal Concept Analysis (FCA). The originality of YamenTrace is that it exploits all code identifier names, comments, and relations in the TL recovery process. YamenTrace uses LSI to find textual similarity across software code and requirements, while FCA is employed to cluster similar code and requirements together. Furthermore, YamenTrace provides a visualization of the recovered TLs. To validate YamenTrace, it was applied to three case studies. The findings of this evaluation demonstrate the relevance and performance of the YamenTrace proposal, as most RtC-TLs were correctly recovered and visualized.
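The LSI step described above can be illustrated with a small, self-contained sketch (not the YamenTrace implementation; scikit-learn and the toy documents below are assumptions): requirements and code artifacts are vectorized with TF-IDF, projected into a latent semantic space, and compared by cosine similarity.

```python
# Minimal LSI-based similarity sketch (illustrative only): requirements and
# code artifacts are treated as text documents, projected into a latent
# semantic space, and compared by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

requirements = [
    "The user shall be able to log in with a username and password",
    "The system shall export reports as PDF files",
]
code_artifacts = [  # identifier names and comments extracted from classes
    "LoginController authenticate user password session check credentials",
    "ReportExporter generate pdf document write file output",
]

corpus = requirements + code_artifacts
tfidf = TfidfVectorizer().fit_transform(corpus)
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Rows: requirements, columns: code artifacts.
similarity = cosine_similarity(lsi[: len(requirements)], lsi[len(requirements):])
for i, req in enumerate(requirements):
    best = similarity[i].argmax()
    print(f"Requirement {i} -> code artifact {best} (score {similarity[i][best]:.2f})")
```

In the approach described above, candidate links produced by such a similarity step would then be grouped by FCA and visualized, which this sketch does not attempt.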
Understanding, Analysis, and Handling of Software Architecture Erosion
Architecture erosion occurs when a software system's implemented architecture diverges from the intended architecture over time. Studies show that erosion impacts development, maintenance, and evolution because it accumulates imperceptibly. Identifying early symptoms, such as architectural smells, enables erosion to be managed through refactoring. However, the research community still lacks a comprehensive understanding of erosion, it is unclear which symptoms are most common, and detection methods are missing. This thesis establishes a landscape of architecture erosion, investigates its symptoms, and proposes identification approaches.

A mapping study covers erosion definitions, symptoms, causes, and consequences. Key findings: 1) "Architecture erosion" is the most commonly used term, with four perspectives on definitions and corresponding symptom types. 2) Both technical and non-technical reasons contribute to erosion, which negatively impacts quality attributes; practitioners can advocate addressing erosion to prevent failures. 3) Detection and correction approaches are categorized, with consistency-based and evolution-based approaches most commonly mentioned.

An empirical study explores practitioner perspectives through online communities, surveys, and interviews. The findings reveal that associated practices, such as code review, and tools are used to identify erosion symptoms, while the collected measures address erosion during implementation.

Studying code review comments provides a view of erosion in practice. One study reveals that architectural violations, duplicate functionality, and cyclic dependencies are the most frequent symptoms; symptoms decreased over time, indicating increased stability, and most were addressed after review. A second study explores violation symptoms in four projects, identifying ten categories; refactoring and removing code address most violations, while some are disregarded.

Machine learning classifiers using pre-trained word embeddings identify violation symptoms from code reviews. Key findings: 1) SVM with word2vec achieved the highest performance. 2) fastText embeddings also worked well. 3) 200-dimensional embeddings outperformed 100- and 300-dimensional ones. 4) An ensemble classifier further improved performance. 5) Practitioners found the results valuable, confirming the potential of the approach.

Finally, an automated recommendation system identifies qualified reviewers for violations using similarity detection on file paths and comment content. Experiments show that common similarity-detection methods perform well, outperforming a baseline approach, and that sampling techniques impact recommendation performance.
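As a rough illustration of the classifier setup described above (a sketch under assumptions, not the thesis's pipeline), a review comment can be represented as the average of its word embeddings and fed to an SVM; the tiny random vocabulary below stands in for pre-trained word2vec or fastText vectors.

```python
# Sketch of the general technique: embed review comments as averaged word
# vectors and train an SVM to flag violation symptoms (toy data only).
import numpy as np
from sklearn.svm import SVC

# Stand-in for pre-trained word2vec/fastText vectors; a real setup would load
# them (e.g. via gensim) instead of this tiny random vocabulary.
rng = np.random.default_rng(0)
vocab = "this class depends on the ui layer which violates layering please rename variable".split()
embeddings = {w: rng.normal(size=50) for w in vocab}

def embed(comment: str) -> np.ndarray:
    """Average the embeddings of known words; zero vector if none are known."""
    vecs = [embeddings[w] for w in comment.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(50)

comments = [
    "this class depends on the ui layer which violates layering",  # violation symptom
    "please rename variable",                                      # not a symptom
]
labels = [1, 0]

clf = SVC(kernel="linear").fit([embed(c) for c in comments], labels)
print(clf.predict([embed("depends on the ui layer")]))
```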
Large-Scale Identification and Analysis of Factors Impacting Simple Bug Resolution Times in Open Source Software Repositories
One of the most prominent issues the ever-growing open-source software community faces is the abundance of buggy code. Well-established version control systems and repository hosting services such as GitHub and Maven provide a checks-and-balances structure to minimize the amount of buggy code introduced. Although these platforms are effective in mitigating the problem, it still remains. To further the efforts toward a more effective and quicker response to bugs, we must understand the factors that affect the time it takes to fix one. We apply a custom traversal algorithm to commits made in open-source repositories to determine when "simple stupid bugs" were first introduced to projects and explore the factors that drive the time it takes to fix them. Using the commit history from the main development branch, we are able to identify the commit that first introduced 13 different types of simple stupid bugs in 617 of the top Java projects on GitHub. Leveraging a statistical survival model and other non-parametric statistical tests, we found two main categories of variables that affect a bug's lifespan: Time Factors and Author Factors. We find that bugs are fixed more quickly if they are introduced and resolved by the same developer. Further, we discuss how the day of the week and the time of day at which buggy code was written and fixed affect its resolution time. These findings provide vital insight to help the open-source community mitigate the abundance of buggy code and can be used in future research to aid bug-finding programs.
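A minimal sketch of the kind of survival analysis described above, assuming the lifelines library and made-up data (the study's actual model and dataset are not reproduced here): a Cox proportional hazards model relates time-to-fix to whether the same developer introduced and resolved the bug.

```python
# Illustrative survival-analysis setup (toy data, not the study's dataset):
# model time-to-fix for simple bugs with a Cox proportional hazards model,
# using "same developer introduced and fixed" as a covariate.
import pandas as pd
from lifelines import CoxPHFitter

bugs = pd.DataFrame({
    "days_to_fix":    [3, 40, 7, 120, 1, 15],  # observed resolution time
    "fixed":          [1, 1, 1, 0, 1, 1],      # 0 = still open (right-censored)
    "same_developer": [1, 0, 1, 0, 1, 0],      # introducer == fixer
})

cph = CoxPHFitter()
cph.fit(bugs, duration_col="days_to_fix", event_col="fixed")
cph.print_summary()  # a hazard ratio > 1 for same_developer would suggest faster fixes
```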
API Usage Recommendation via Multi-View Heterogeneous Graph Representation Learning
Developers often need to decide which APIs to use for the functions being implemented. With the ever-growing number of APIs and libraries, it becomes increasingly difficult for developers to find appropriate APIs, indicating the need for automatic API usage recommendation. Previous studies adopt statistical models or collaborative filtering methods to mine implicit API usage patterns for recommendation. However, they rely on the occurrence frequencies of APIs for mining usage patterns and are thus prone to fail for low-frequency APIs. Besides, prior studies generally regard the API call interaction graph as a homogeneous graph, ignoring the rich information (e.g., edge types) in the structure graph. In this work, we propose a novel method named MEGA for improving recommendation accuracy, especially for low-frequency APIs. Specifically, besides the call interaction graph, MEGA considers two new heterogeneous graphs: a global API co-occurrence graph enriched with API frequency information and a hierarchical structure graph enriched with project component information. With the three multi-view heterogeneous graphs, MEGA can capture API usage patterns more accurately. Experiments on three Java benchmark datasets demonstrate that MEGA significantly outperforms the baseline models by at least 19% with respect to the Success Rate@1 metric. In particular, for the low-frequency APIs, MEGA improves over the baselines by at least 55% in terms of Success Rate@1.
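As a rough sketch of one of the views described above, the global API co-occurrence graph, here is a toy construction with networkx (an assumed library; the paper's exact graph-building procedure is not given in the abstract): APIs used in the same method are linked, with edge weights counting co-occurrences and node attributes recording overall frequency.

```python
# Toy construction of a global API co-occurrence graph (illustrative only).
from itertools import combinations
import networkx as nx

method_api_sequences = [
    ["FileReader.read", "BufferedReader.readLine", "BufferedReader.close"],
    ["FileReader.read", "BufferedReader.readLine"],
    ["HttpClient.send", "HttpResponse.body"],
]

graph = nx.Graph()
for apis in method_api_sequences:
    # Connect every pair of APIs used in the same method; accumulate weights.
    for a, b in combinations(sorted(set(apis)), 2):
        weight = graph.get_edge_data(a, b, {}).get("weight", 0) + 1
        graph.add_edge(a, b, weight=weight)

# Node "frequency" attribute records how often each API occurs overall.
for apis in method_api_sequences:
    for api in apis:
        graph.nodes[api]["frequency"] = graph.nodes[api].get("frequency", 0) + 1

print(graph.edges(data=True))
```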
Assessing Comment Quality in Object-Oriented Languages
Previous studies have shown that high-quality code comments support developers in software maintenance and program comprehension tasks. However, the semi-structured nature of comments, several conventions to write comments, and the lack of quality assessment tools for all aspects of comments make comment evaluation and maintenance a non-trivial problem. To understand the specification of high-quality comments to build effective assessment tools, our thesis emphasizes acquiring a multi-perspective view of the comments, which can be approached by analyzing (1) the academic support for comment quality assessment, (2) developer commenting practices across languages, and (3) developer concerns about comments.
Our findings regarding the academic support for assessing comment quality showed that researchers have primarily focused on Java over the last decade, even though the use of polyglot environments in software projects is increasing. Similarly, the trend of analyzing specific types of code comments (method comments or inline comments) is increasing, but studies rarely analyze class comments. We found 21 quality attributes that researchers consider when assessing comment quality, and manual assessment is still the most commonly used technique to assess the various quality attributes. Our analysis of developer commenting practices showed that developers embed a mixed level of detail in class comments, ranging from high-level class overviews to low-level implementation details, across programming languages. They follow style guidelines regarding what information to write in class comments but violate structure and syntax guidelines. They primarily face problems locating relevant guidelines for writing consistent and informative comments, verifying the adherence of their comments to the guidelines, and evaluating the overall state of comment quality.
To help researchers and developers build comment quality assessment tools, we contribute: (i) a systematic literature review (SLR) of ten years (2010–2020) of research on assessing comment quality, (ii) a taxonomy of quality attributes used to assess comment quality, (iii) an empirically validated taxonomy of class comment information types from three programming languages, (iv) a multi-programming-language approach to automatically identify the comment information types, (v) an empirically validated taxonomy of comment convention-related questions and recommendations from various Q&A forums, and (vi) a tool to gather discussions from multiple developer sources, such as Stack Overflow and mailing lists.
Our contributions provide various kinds of empirical evidence of developers' interest in reducing effort in the software documentation process, of the limited support developers get in automatically assessing comment quality, and of the challenges they face in writing high-quality comments. This work lays the foundation for future effective comment quality assessment tools and techniques.
Fundamental Approaches to Software Engineering
This open access book constitutes the proceedings of the 25th International Conference on Fundamental Approaches to Software Engineering, FASE 2022, which was held during April 4-5, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 17 regular papers presented in this volume were carefully reviewed and selected from 64 submissions. The proceedings also contain 3 contributions from the Test-Comp Competition. The papers deal with the foundations on which software engineering is built, including topics like software engineering as an engineering discipline, requirements engineering, software architectures, software quality, model-driven development, software processes, software evolution, AI-based software engineering, and the specification, design, and implementation of particular classes of systems, such as (self-)adaptive, collaborative, AI, embedded, distributed, mobile, pervasive, cyber-physical, or service-oriented applications.
Definition of Descriptive and Diagnostic Measurements for Model Fragment Retrieval
Nowadays, software exists in almost everything. Companies often develop and maintain a collection of custom-tailored software systems that share some common features but also support customer-specific ones. As the number of features and the number of product variants grows, software maintenance becomes more and more complex. To keep pace with this situation, the Model-Based Software Engineering community is addressing a key activity: Model Fragment Location (MFL). MFL aims at identifying the model elements that are relevant to a requirement, feature, or bug. Many MFL approaches have been introduced in the last few years to address the identification of the model elements that correspond to a specific functionality. However, there is a lack of detail in how the measurements about the search space (the models) and about the solution to be found (the model fragment) are reported. The goal of this thesis is to provide the MFL research community with guidance on how to improve the reporting of location problems. We propose five measurements (size, volume, density, multiplicity, and dispersion) for reporting the location problems addressed during MFL. The use of these novel measurements supports researchers both in the creation of new MFL approaches and in the improvement of existing ones. Using two real, industrial case studies, we show the importance of these measurements for comparing the results of different approaches in a precise way. The results of this research have been written up and published in forums, conferences, and journals specialized in the topics and context of the research.
This thesis is presented as a compendium of articles according to the regulations of the Universitat Politècnica de València. This thesis document introduces the topics, context, and objectives of the research, presents the academic publications that resulted from the work, and then discusses the outcomes of the investigation.
Ballarin Naya, M. (2021). Definition of Descriptive and Diagnostic Measurements for Model Fragment Retrieval [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/171604
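The abstract does not give formal definitions of the five measurements, so the following sketch uses assumed, simplified stand-ins for size, volume, density, and dispersion (multiplicity is omitted), computed on a toy model graph with networkx; it is illustrative only and not the thesis's definitions.

```python
# Illustrative computation of fragment-reporting measurements on a toy model
# (assumed, simplified definitions; not taken from the thesis).
import networkx as nx

model = nx.Graph()  # product model: elements as nodes, relations as edges
model.add_edges_from([
    ("Engine", "Piston"), ("Engine", "Crankshaft"),
    ("Cabin", "Seat"), ("Cabin", "Dashboard"), ("Dashboard", "Display"),
])
fragment = {"Engine", "Piston", "Display"}  # elements relevant to one feature

volume = model.number_of_nodes()   # how large the search space is
size = len(fragment)               # how large the solution is
density = size / volume            # fraction of the model to be located
# Dispersion: over how many disconnected chunks the fragment is spread.
dispersion = nx.number_connected_components(model.subgraph(fragment))

print(f"volume={volume} size={size} density={density:.2f} dispersion={dispersion}")
```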
Fundamental Approaches to Software Engineering
This open access book constitutes the proceedings of the 23rd International Conference on Fundamental Approaches to Software Engineering, FASE 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 23 full papers, 1 tool paper, and 6 testing competition papers presented in this volume were carefully reviewed and selected from 81 submissions. The papers cover topics such as requirements engineering, software architectures, specification, software quality, validation, verification of functional and non-functional properties, model-driven development and model transformation, software processes, security, and software evolution.