118 research outputs found

    The cloudification perspectives of search-based software testing

    To promote and sustain the future of our society, the most critical challenges faced by contemporary software engineering and cloud computing experts relate to efficiently integrating emerging cloudification and DevOps practices into the development and testing processes of modern systems. In this context, we argue that search-based software testing (SBST) can play a critical role in improving testing practices and automating the verification and validation (V&V) of cloudification properties of Cloud Native Applications (CNA). Hence, in this paper, we focus on the untouched side of SBST in the cloud field, by (1) discussing the testing challenges in the cloud research field and (2) summarizing the recent contributions of SBST in supporting development practices of CNA. Finally, we discuss the emerging research topics characterizing the cloudification perspectives of SBST in the cloud field.

    User Review-Based Change File Localization for Mobile Applications

    In current mobile app development, novel and emerging DevOps practices (e.g., Continuous Delivery, Continuous Integration, and user feedback analysis) and tools are becoming more widespread. For instance, the integration of user feedback (provided in the form of user reviews) into the software release cycle represents a valuable asset for the maintenance and evolution of mobile apps. To fully make use of these assets, it is highly desirable for developers to establish semantic links between the user reviews and the software artefacts to be changed (e.g., source code and documentation), and thus to localize the potential files to change for addressing the user feedback. In this paper, we propose RISING (Review Integration via claSsification, clusterIng, and linkiNG), an automated approach to support the continuous integration of user feedback via classification, clustering, and linking of user reviews. RISING leverages domain-specific constraint information and semi-supervised learning to group user reviews into multiple fine-grained clusters corresponding to similar user requests. Then, by combining the textual information from both commit messages and source code, it automatically localizes potential change files to accommodate the users' requests. Our empirical studies demonstrate that the proposed approach outperforms the state-of-the-art baseline work in terms of clustering and localization accuracy, and thus produces more reliable results. (15 pages, 3 figures, 8 tables.)
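    The linking step described above (matching a review's text against the textual traces of each candidate file) can be illustrated with a minimal sketch. This is not the RISING implementation; it is a hypothetical simplification that ranks files by TF-IDF cosine similarity between a review and the commit messages touching each file, where `rank_files` and its inputs are names invented for illustration.

```python
# Illustrative sketch (not the RISING implementation): rank candidate
# change files for a user review by TF-IDF cosine similarity between
# the review text and the commit messages that touched each file.
import math
from collections import Counter

def tokenize(text):
    # Lowercase, whitespace-split, keep purely alphanumeric tokens.
    return [t for t in text.lower().split() if t.isalnum()]

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for a list of token lists."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_files(review, commit_texts):
    """commit_texts: {file_path: concatenated commit messages}."""
    paths = list(commit_texts)
    docs = [tokenize(review)] + [tokenize(commit_texts[p]) for p in paths]
    vecs = tfidf_vectors(docs)
    scores = {p: cosine(vecs[0], vecs[i + 1]) for i, p in enumerate(paths)}
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

    For example, a review mentioning a login crash would rank a file whose commit history mentions "fix crash on login button" above an unrelated settings file; the actual approach additionally exploits source code text and semi-supervised clustering.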

    An empirical investigation of relevant changes and automation needs in modern code review

    Recent research has shown that available tools for Modern Code Review (MCR) are still far from meeting the current expectations of developers. The objective of this paper is to investigate the approaches and tools that, from a developer's point of view, are still needed to facilitate MCR activities. To that end, we first empirically elicited a taxonomy of recurrent review change types that characterize MCR. The taxonomy was designed in three steps: (i) we generated an initial version of the taxonomy by qualitatively and quantitatively analyzing 211 review changes/commits and 648 review comments of ten open-source projects; (ii) we integrated into this initial taxonomy the topics and MCR change types of an existing taxonomy available in the literature; and (iii) we surveyed 52 developers to add any change types missing from the taxonomy. The results of our study highlight that the availability of new emerging development technologies (e.g., cloud-based technologies) and practices (e.g., continuous delivery) has pushed developers to perform additional activities during MCR and that additional types of feedback are expected by reviewers. Our participants provided recommendations, specified techniques to employ, and highlighted the data to analyze for building recommender systems able to automate the code review activities composing our taxonomy. We then surveyed 14 additional participants (12 developers and 2 researchers), not involved in the previous survey, to qualitatively assess the relevance and completeness of the identified MCR change types, as well as how critical and feasible to implement some of the identified support techniques are. Finally, in a study involving 21 additional developers, we qualitatively assessed the feasibility and usefulness of leveraging natural language feedback (an automation considered critical and feasible to implement) to support developers during MCR activities. In summary, this study sheds more light on the approaches and tools that are still needed to facilitate MCR activities, confirming the feasibility and usefulness of using summarization techniques during MCR activities. We believe that the results of our work represent an essential step toward meeting the expectations of developers and supporting the vision of full or partial automation in MCR. REPLICATION PACKAGE: https://zenodo.org/record/3679402#.XxgSgy17Hxg PREPRINT: https://spanichella.github.io/img/EMSE-MCR-2020.pdf

    Fuzzing vs SBST: intersections & differences

    Search-Based Software Testing (SBST) is the application of search-based software engineering (SBSE) to solving hard software testing problems. SBST has been the subject of discussion of our International SBST Workshop, co-located with the International Conference on Software Engineering (ICSE). In 2022 we hosted the 15th edition of our SBST Workshop, which brought together researchers and industrial practitioners to encourage the use of search-based and fuzz testing techniques and tools for addressing software engineering-specific challenges. In this 2022 edition, SBST held, among other exciting events, a discussion panel on the similarities and differences between SBST and Fuzzing. As its name implies, the goal of the panel was to establish common ground for discussing the main similarities and differences between the Fuzzing and SBST fields, focusing on how both communities can collaborate to advance the state-of-the-art in automated testing. This strong panel, composed of researchers from both academia and industry, was the highlight of SBST'22 and allowed the chairs of the workshop to make substantial changes for 2023. In this paper, we present the main takeaway messages from that seminal panel and highlight exciting new challenges in the field.

    Guest Editorial: Special issue on software engineering for mobile applications

    Acquired as part of the Swiss National Licences (http://www.nationallizenzen.ch)

    Diversity-guided Search Exploration for Self-driving Cars Test Generation through Frenet Space Encoding

    The rise of self-driving cars (SDCs) presents important safety challenges to address in dynamic environments. While field testing is essential, current methods lack diversity in assessing critical SDC scenarios. Prior research introduced simulation-based testing for SDCs, with Frenetic, a test generation approach based on Frenet space encoding, achieving a relatively high percentage of valid tests (approximately 50%) characterized by naturally smooth curves. The "minimal out-of-bound distance" is often taken as a fitness function, which we argue to be a sub-optimal metric. Instead, we show that the likelihood of leading to an out-of-bound condition can be learned by a vanilla deep-learning transformer model. We combine this "inherently learned metric" with a genetic algorithm, which has been shown to produce a high diversity of tests. To validate our approach, we conducted a large-scale empirical evaluation on a dataset comprising over 1,174 simulated test cases created to challenge the SDCs' behavior. Our investigation revealed that our approach substantially reduces the generation of invalid test cases, increases test diversity, and achieves high accuracy in identifying safety violations during SDC test execution.

    SensoDat: Simulation-based Sensor Dataset of Self-driving Cars

    Developing tools in the context of autonomous systems [22, 24], such as self-driving cars (SDCs), is time-consuming and costly since researchers and practitioners rely on expensive computing hardware and simulation software. We propose SensoDat, a dataset of 32,580 executed simulation-based SDC test cases generated with state-of-the-art test generators for SDCs. The dataset consists of trajectory logs and a variety of sensor data from the SDCs (e.g., rpm, wheel speed, brake thermals, transmission, etc.) represented as time series. In total, SensoDat provides data from 81 different simulated sensors. Future research in the domain of SDCs does not necessarily depend on executing expensive test cases when using SensoDat. Furthermore, given the high amount and variety of sensor data, we think SensoDat can contribute to research, particularly on AI development, regression testing techniques for simulation-based SDC testing, flakiness in simulation, etc. Link to the dataset: https://doi.org/10.5281/zenodo.1030747
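    One typical operation on such sensor time series (e.g., when comparing re-executions of a test for flakiness or regression analysis) is aggregating raw samples into fixed time windows. The sketch below is hypothetical: the `(timestamp, value)` record shape and the `window_average` helper are illustrative and not SensoDat's actual schema or API.

```python
# Hypothetical sketch: aggregate a simulated sensor time series (e.g.,
# wheel-speed samples as (timestamp_s, value) pairs) into fixed-width
# windows, so two executions of the same test can be compared per window.
from statistics import mean

def window_average(samples, window=1.0):
    """Return the mean sensor value per consecutive time window."""
    if not samples:
        return []
    samples = sorted(samples)
    start = samples[0][0]
    buckets = {}
    for t, v in samples:
        # Assign each sample to the window it falls into.
        buckets.setdefault(int((t - start) // window), []).append(v)
    return [mean(buckets[k]) for k in sorted(buckets)]
```

    Comparing the per-window averages of two runs (rather than raw samples, which rarely align in time) is one simple way to detect behavioral drift or simulation flakiness across test re-executions.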