
    Industry-academia collaborations in software testing: experience and success stories from Canada and Turkey (Special Issue: Industry Academia Collaborations in Software Testing)

    Collaboration between industry and academia supports improvement and innovation in industry and helps ensure industrial relevance in academic research. However, many researchers and practitioners believe that the level of joint industry–academia collaboration (IAC) in software engineering (SE) is still relatively low compared to the amount of activity in each of the two communities. The goal of the empirical study reported in this paper is to characterize a set of collaborative industry–academia R&D projects in the area of software testing, conducted by the authors (based in Canada and Turkey), with respect to a set of challenges, patterns and anti-patterns identified by a recent systematic literature review, with the aim of contributing to the body of evidence on IAC for the benefit of SE researchers and practitioners conducting IAC projects in software testing and in software engineering in general. To address this goal, a pool of ten IAC projects (six completed, two failed and two ongoing), all in the area of software testing and all led by or actively involving the authors, was selected as the object of study and analyzed both quantitatively and qualitatively with respect to the selected challenges, patterns and anti-patterns. As outputs, the study presents a set of empirical findings and evidence-based recommendations. For example, it was observed that even if an IAC project seems perfect from many aspects, a single major challenge (e.g., disagreement over confidentiality agreements) can lead to its failure. We therefore recommend that both parties (academics and practitioners) consider all the challenges early on and proactively work together to eliminate such risks in IAC projects. We furthermore report the correlation and interrelationship of challenges, patterns and anti-patterns with project success measures. This study aims to encourage and benefit other SE researchers and practitioners in conducting successful IAC projects in software testing and in software engineering in general.

    An Action Research for Improving the Sustainability Assessment Framework Instruments

    In recent years, software engineering researchers have defined sustainability as a quality requirement of software, but not enough effort has been devoted to developing new methods and techniques that support the analysis and assessment of software sustainability. In this study, we present the Sustainability Assessment Framework (SAF), which consists of two instruments: the software sustainability–quality model and the architectural decision map. We then use participatory and technical action research, in close collaboration with the software industry, to validate the SAF regarding its applicability in specific cases. The unit of analysis of our study is a family of software products (Geographic Information System- and Mobile-based Workforce Management Systems) that aim to address sustainability goals (e.g., efficient collection of dead animals to mitigate social and environmental sustainability risks). The results show that the sustainability–quality model, integrated with the architectural decision maps, can be used to identify sustainability–quality requirements as design concerns, because most of its quality attributes (QAs) have been either addressed in the software project or acknowledged as relevant (i.e., creating awareness of the multidimensional sustainability nature of certain QAs). Moreover, the action research method has been helpful to enrich the sustainability–quality model by identifying missing QAs (e.g., regulation compliance, data privacy). Finally, the architectural decision maps have been found useful to guide software architects and designers in their decision-making process. This work has received partial support from Xunta de Galicia/FEDER-UE CSI (ED431G/01, ED431C 2017/58); Xunta de Galicia/FEDER-UE, ConectaPeme, GEMA (IN852A 2018/14); MINECO-AEI/FEDER-UE through the projects Datos 4.0 (TIN2016-78011-C4-1-R), BIZDEVOPS (RTI2018-098309-B-C32) and FLATCITY (TIN2016-77158-C4-3-R); and FONDECYT through the Kusisqa project (014-2019-FONDECYTBM-INC.INV).

    Building Stronger Bridges: Strategies for Improving Communication and Collaboration Between Industry and Academia in Software Engineering

    Background: The software engineering community has expressed growing concern regarding the need for more connections between research and practice. Despite the large amount of knowledge researchers generate, its impact on real-world practice is uncertain. Meanwhile, practitioners in industry often struggle to access and utilize relevant research outcomes that could inform and enhance their work. Collaboration between industry and academia is seen as a potential solution to bridge this gap, ensuring that research remains relevant and applicable in real-world contexts. Objective: This research aims to explore challenges in communication and collaboration between industry and academia, and to design, evaluate, and implement strategies that foster this collaboration. Methodology: The design science paradigm inspires this research, as we aim to obtain knowledge about industry-academia communication and collaboration by studying challenges and solutions in context. The thesis includes case studies; some are exploratory, while others focus on evaluating specific strategies. Results: In terms of problem understanding, we identified challenges that impact communication and collaboration, such as different expectations, perspectives, and ways of working. Furthermore, we pinpointed factors facilitating communication, including long-term projects, research relevance, and practitioners' involvement. Regarding how to improve communication and collaboration, we investigated two strategies. The first involves using the SERP-taxonomy approach in a project on software vulnerability management in IoT systems. The second involves the proposal of interactive rapid reviews, conducted in close collaboration with practitioners. We share the lessons learned from conducting two such reviews (one on testing machine learning systems and the other on software component selection). The benefits of conducting interactive rapid reviews include mutual understanding, the development of networks, and increased motivation for further studies. Conclusion: The thesis emphasizes the importance of industry-academia collaboration as a key aspect in closing gaps between research and practice. The strategies discussed provide tools to better understand industry-academia partnerships and to support future collaborations.

    Trustworthiness in Mobile Cyber Physical Systems

    Computing and communication capabilities are increasingly embedded in diverse objects and structures in the physical environment, linking the ‘cyberworld’ of computing and communications with the physical world. Such applications are called cyber physical systems (CPS). The increased involvement of real-world entities leads to a greater demand for trustworthy systems; we therefore use the term "system trustworthiness" here to denote the ability to guarantee continuous service in the presence of internal errors or external attacks. Mobile CPS (MCPS) is a prominent subcategory of CPS in which the physical component has no permanent location. Mobile Internet devices already provide ubiquitous platforms for building novel MCPS applications. The objective of this Special Issue is to contribute to research on modern and future trustworthy MCPS, including their design, modeling, simulation, and dependability. It is imperative to address the issues that are critical to their mobility, report significant advances in the underlying science, and discuss the challenges of development and implementation in various applications of MCPS.

    Modelling, Reverse Engineering, and Learning Software Variability

    Society expects software to deliver the right functionality, in a short amount of time and with few resources, in every possible circumstance, whatever the hardware, operating system, compiler, or data fed as input. To fit such a diversity of needs, software commonly comes in many variants and is highly configurable through configuration options, runtime parameters, conditional compilation directives, menu preferences, configuration files, plugins, etc. As there is no one-size-fits-all solution, software variability ("the ability of a software system or artifact to be efficiently extended, changed, customized or configured for use in a particular context") has been studied for the last two decades and is a discipline of its own. Though highly desirable, software variability also introduces enormous complexity due to the combinatorial explosion of possible variants. For example, the Linux kernel has 15,000+ options, and most of them can take three values: "yes", "no", or "module". Variability is challenging for maintaining, verifying, and configuring software systems (Web applications, Web browsers, video tools, etc.). It is also a source of opportunities to better understand a domain, create reusable artefacts, deploy performance-wise optimal systems, or find specialized solutions to many kinds of problems. In many scenarios, a model of variability is either beneficial or mandatory to explore, observe, and reason about the space of possible variants. For instance, without a variability model, it is impossible to establish a sampling strategy that would satisfy the constraints among options and meet coverage or testing criteria. I address a central question in this HDR manuscript: how to model software variability? I detail several contributions related to modelling, reverse engineering, and learning software variability. I first contribute to supporting the people in charge of manually specifying feature models, the de facto standard for modelling variability. I develop an algebra, together with a language, for supporting the composition, decomposition, diff, refactoring, and reasoning of feature models. I further establish the syntactic and semantic relationships between feature models and product comparison matrices, a large class of tabular data. I then empirically investigate how these feature models can be used to test configurable systems in the large with different sampling strategies. Along the way, I report on the attempts and lessons learned when defining the "right" variability language. From a reverse engineering perspective, I contribute to synthesizing variability information into models from various kinds of artefacts. I develop foundations and methods for reverse engineering feature models from satisfiability formulae, product comparison matrices, dependency files, architectural information, and Web configurators. I also report on the degree of automation and show that the involvement of developers and domain experts is beneficial to obtain high-quality models. Thirdly, I contribute to learning constraints and non-functional properties (performance) of a variability-intensive system. I describe a systematic "sampling, measuring, learning" process that aims to enforce or augment a variability model, capturing variability knowledge that domain experts can hardly express. I show that supervised, statistical machine learning can be used to synthesize rules or build prediction models in an accurate and interpretable way. This process can even be applied to huge configuration spaces, such as that of the Linux kernel. Despite the wide applicability and observed benefits, I show that each individual line of contributions has limitations. I defend the following answer: a supervised, iterative process (1) based on the combination of reverse engineering, modelling, and learning techniques, and (2) capable of integrating multiple sources of variability information (e.g., expert knowledge, legacy artefacts, dynamic observations). Finally, this work opens different perspectives related to so-called deep software variability, security, smart build of configurations, and (threats to) science.
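    The following is a minimal sketch of the "sampling, measuring, learning" process described above, not the tooling from the manuscript: it assumes a configurable system with a handful of Boolean options and uses a hypothetical measure() function (with made-up option names and effects) in place of actually building and benchmarking each variant, then fits a small decision tree with scikit-learn as an interpretable performance-prediction model.

        import random
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeRegressor, export_text

        OPTIONS = ["lto", "debug", "compression", "cache", "parallel"]  # hypothetical options

        def measure(config):
            # Hypothetical stand-in for building the variant and measuring a
            # non-functional property such as execution time.
            cost = 100.0
            cost -= 20.0 * config["parallel"]
            cost += 35.0 * config["debug"]
            cost -= 10.0 * config["lto"] * (1 - config["debug"])  # option interaction
            return cost + random.gauss(0, 2)

        random.seed(0)
        # 1) Sampling: draw random configurations from the variant space.
        configs = [{o: random.randint(0, 1) for o in OPTIONS} for _ in range(300)]
        # 2) Measuring: obtain the performance of each sampled variant.
        X = [[c[o] for o in OPTIONS] for c in configs]
        y = [measure(c) for c in configs]
        # 3) Learning: fit an interpretable prediction model and inspect its rules.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_tr, y_tr)
        print("R^2 on held-out variants:", round(model.score(X_te, y_te), 2))
        print(export_text(model, feature_names=OPTIONS))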

    Testing and debugging: A reality check


    Cyber Security of Critical Infrastructures

    Get PDF
    Critical infrastructures are vital assets for public safety, economic welfare, and the national security of countries. The vulnerabilities of critical infrastructures have increased with the widespread use of information technologies. As critical national infrastructures become more vulnerable to cyber-attacks, their protection becomes a significant issue for organizations as well as nations. The risks to continued operations, from failing to upgrade aging infrastructure or not meeting mandated regulatory regimes, are considered highly significant, given the demonstrable impact of such circumstances. Due to the rapid increase of sophisticated cyber threats targeting critical infrastructures with significant destructive effects, the cybersecurity of critical infrastructures has become an agenda item for academics, practitioners, and policy makers. A holistic view that covers technical, policy, human, and behavioural aspects is essential to handle the cybersecurity of critical infrastructures effectively. Moreover, the ability to attribute crimes to criminals is a vital element of avoiding impunity in cyberspace. This book presents both research and practical aspects of cybersecurity considerations in critical infrastructures. Aligned with the interdisciplinary nature of cybersecurity, authors from academia, government, and industry have contributed 13 chapters. The issues discussed and analysed include cybersecurity training, maturity assessment frameworks, malware analysis techniques, ransomware attacks, security solutions for industrial control systems, and privacy preservation methods.

    Data Mining

    Data mining is a branch of computer science concerned with automatically extracting meaningful, useful knowledge and previously unknown, hidden, interesting patterns from large amounts of data to support decision-making. This book presents recent theoretical and practical advances in the field of data mining. It discusses a number of data mining methods, including classification, clustering, and association rule mining. The book brings together many successful data mining studies in areas such as health, banking, education, software engineering, animal science, and the environment.
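    As a minimal, hedged illustration of two of the method families named above (classification and clustering), the following sketch uses scikit-learn on the standard Iris dataset; it is not taken from the book, and the parameter choices are arbitrary.

        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.cluster import KMeans

        X, y = load_iris(return_X_y=True)

        # Classification: learn a model that predicts a known label.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
        print("classification accuracy:", round(clf.score(X_te, y_te), 2))

        # Clustering: group the same records without using the labels at all.
        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
        print("cluster sizes:", [int((km.labels_ == k).sum()) for k in range(3)])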

    A User-aware Intelligent Refactoring for Discrete and Continuous Software Integration

    Successful software products evolve through a process of continual change. However, this process may weaken the design of the software and make it unnecessarily complex, leading to significantly reduced productivity and increased fault-proneness. Refactoring improves the software design while preserving overall functionality and behavior, and is an important technique in managing the growing complexity of software systems. Most of the existing work on software refactoring uses either an entirely manual or a fully automated approach. Manual refactoring is time-consuming, error-prone and unsuitable for large-scale, radical refactoring. Fully automated refactoring, in turn, yields a static list of refactorings which, when applied, leads to a new and often hard-to-comprehend design; in addition, it is challenging to merge these refactorings with other changes performed in parallel by developers. In this thesis, we propose a refactoring recommendation approach that dynamically adapts, interactively suggests refactorings to developers, and takes their feedback into consideration. Our approach uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) to find a set of good refactoring solutions that improve software quality while minimizing the deviation from the initial design. These refactoring solutions are then analyzed to extract interesting common features, such as the refactorings that occur frequently in the best non-dominated solutions. We combined our interactive approach with unsupervised learning to reduce the developer's interaction effort when refactoring a system: the unsupervised learning algorithm clusters the different trade-off solutions, called the Pareto front, to guide developers in selecting their region of interest and to reduce the number of refactoring options to explore. To further reduce the interaction effort, we propose an approach that converts the multi-objective search into a mono-objective one after interacting with the developer to identify a good refactoring solution based on their preferences. Since developers may want to focus on specific code locations, the "decision space" is also important; our interactive approach therefore enables developers to express their preferences simultaneously in the objective (quality metrics) and decision (code location) spaces. Finally, motivated by the urgent need for refactoring tools that can support continuous integration and recent development processes such as DevOps that are based on rapid releases, we propose, for the first time, an intelligent software refactoring bot, called RefBot. The bot continuously monitors the software repository and finds the best sequence of refactorings to fix quality issues in Continuous Integration/Continuous Development (CI/CD) environments, delivered as a set of pull requests generated after mining previous code changes to understand developers' profiles. We quantitatively and qualitatively evaluated the performance and effectiveness of the proposed approaches in studies with experienced developers who used our tools on both open-source and industry projects. (Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn; full text: https://deepblue.lib.umich.edu/bitstream/2027.42/154775/1/Vahid Alizadeh Final Dissertation.pdf)
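    As a hedged, toy illustration of two ideas described above (keeping only the non-dominated trade-offs between quality improvement and deviation from the initial design, then clustering that Pareto front so a developer can review a few representative regions), the following sketch uses synthetic, made-up objective scores rather than the thesis's actual refactoring tool or quality metrics.

        import random
        import numpy as np
        from sklearn.cluster import KMeans

        # Synthetic candidate refactoring solutions, each scored on two objectives:
        # quality improvement (to maximize) and deviation from the initial design (to minimize).
        random.seed(0)
        candidates = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]

        def dominates(a, b):
            # a dominates b if it is at least as good on both objectives and, since the
            # points differ, strictly better on at least one of them.
            return a[0] >= b[0] and a[1] <= b[1] and a != b

        # Pareto front: candidates not dominated by any other candidate.
        front = [c for c in candidates if not any(dominates(o, c) for o in candidates)]
        print(f"{len(front)} non-dominated solutions out of {len(candidates)}")

        # Cluster the front so a developer can inspect a few representative trade-offs
        # instead of every non-dominated solution.
        k = min(3, len(front))
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.array(front))
        for quality, deviation in km.cluster_centers_:
            print(f"representative trade-off: quality +{quality:.1f}, deviation {deviation:.1f}")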

    Proc. 33. Workshop Computational Intelligence, Berlin, 23.-24.11.2023

    The workshop proceedings contain the contributions of the 33rd workshop "Computational Intelligence", held on 23-24 November 2023 in Berlin. The focus is on methods, applications and tools for fuzzy systems, artificial neural networks, evolutionary algorithms and data mining methods, as well as the comparison of methods on industrial and benchmark problems.