
    Configuration Smells in Continuous Delivery Pipelines: A Linter and a Six-Month Study on GitLab

    An effective and efficient application of Continuous Integration (CI) and Continuous Delivery (CD) requires software projects to follow certain principles and good practices. Configuring such a CI/CD pipeline is challenging and error-prone. Automated linters have therefore been proposed to detect errors in the pipeline. While existing linters identify syntactic errors, detect security vulnerabilities, or flag misuse of the features provided by build servers, they do not support developers who want to prevent common misconfigurations of a CD pipeline that potentially violate CD principles ("CD smells"). To this end, we propose CD-Linter, a semantic linter that can automatically identify four different smells in pipeline configuration files. We evaluated our approach through a large-scale, long-term study that consists of (i) monitoring 145 issues (opened in as many open-source projects) over a period of 6 months, (ii) manually validating the detection precision and recall on a representative sample of issues, and (iii) assessing the magnitude of the observed smells on 5,312 open-source projects on GitLab. Our results show that CD smells are accepted and fixed by most of the developers, and our linter achieves a precision of 87% and a recall of 94%. These smells can be frequently observed in the wild, as 31% of projects with long configurations are affected by at least one smell.
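    The abstract does not specify the four smells, so as a minimal sketch of what a semantic (rather than syntactic) pipeline check looks like, the following checks one hypothetical smell on a parsed GitLab CI configuration: a deploy-stage job marked `allow_failure`, which would let a broken deployment report success. The smell, job names, and dict-based config representation are illustrative assumptions, not CD-Linter's actual rules.

```python
# Hypothetical semantic smell check in the spirit of CD-Linter.
# The config is assumed to be an already-parsed .gitlab-ci.yml (a dict).

def find_smells(pipeline):
    """Return (job_name, smell) pairs for a parsed GitLab CI config."""
    smells = []
    for name, job in pipeline.items():
        if not isinstance(job, dict):
            continue  # skip top-level keys such as 'stages'
        # Illustrative smell: a deployment job that is allowed to fail
        # silently, so the pipeline can report success without deploying.
        if job.get("stage") == "deploy" and job.get("allow_failure"):
            smells.append((name, "deploy job allowed to fail"))
    return smells

config = {
    "stages": ["build", "test", "deploy"],
    "build-job": {"stage": "build", "script": ["make"]},
    "deploy-job": {"stage": "deploy", "script": ["make deploy"],
                   "allow_failure": True},
}
print(find_smells(config))  # [('deploy-job', 'deploy job allowed to fail')]
```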

    Testing with state variable data-flow criteria for aspect-oriented programs

    Data-flow testing approaches have been used for procedural and object-oriented (OO) programs, and have been empirically shown to be effective in detecting faults. However, few such approaches have been proposed for aspect-oriented (AO) programs. In an AO program, data-flow interactions can occur between the base classes and aspects, which can affect the behavior of both. Faults resulting from such interactions are hard to detect unless the interactions are specifically targeted during testing. In this research, we propose a data-flow testing approach for AO programs. In an AO program, an aspect and a base class interact either through parameters passed from advised methods in the base class to the advice, or through the direct reading and writing of the base class state variables in the advice. We identify a group of def-use associations (DUAs) that are based on the base class state variables and propose a set of data-flow test criteria that require executing these DUAs. We identify fault types that result from incorrect data-flow interactions in AO programs and extend an existing AO fault model to include these faults. We implemented our approach in a tool that identifies the DUAs targeted by the proposed criteria, runs a test suite, and computes the coverage results. We conducted an empirical study that compares the cost and effectiveness of the proposed criteria with two control-flow criteria, using four subject programs. We seeded faults in the programs using three mutation tools: AjMutator, Proteum/AJ, and μJava. We used a test generation tool, RANDOOP, to generate a pool of random test cases. To produce a test suite that satisfies a criterion, we randomly selected test cases from the test pool until the required coverage for the criterion was reached. We evaluated three dimensions of the cost of a test criterion. The first dimension is the size of a test suite that satisfies a test criterion, measured by the number of test cases in the suite. The second is the density of a test case, measured by the number of test cases in the suite divided by the number of test requirements. The third is the time needed to randomly obtain a test suite that satisfies a criterion, measured by (1) the number of iterations required by the test-suite generator to randomly select test cases from the pool until the criterion is satisfied, and (2) the number of iterations per test requirement. Effectiveness is measured by the mutation scores of the test suites that satisfy a criterion. We evaluated effectiveness for all faults and for each fault type. Our results show that the test suites that cover all the DUAs of state variables are more effective in revealing faults than the control-flow criteria, but cost more in terms of test-suite size and effort. The results also show that the test suites that cover state-variable DUAs in advised classes are suitable for detecting most of the fault types in the revised AO fault model. Finally, we evaluated the cost-effectiveness of the test suites that cover all state-variable DUAs at three coverage levels: 100%, 90%, and 80%. The test suites that cover 90% of the state-variable DUAs are the most cost-effective.
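    The cost and effectiveness measures defined above are simple ratios, and can be sketched directly; the counts below are made-up illustrations, not the study's data. (The mutation-score denominator excludes equivalent mutants, following the usual definition.)

```python
# Sketch of the cost and effectiveness measures described in the abstract.

def suite_size(suite):
    # Cost dimension 1: number of test cases in the satisfying suite.
    return len(suite)

def density(suite, requirements):
    # Cost dimension 2: test cases per test requirement.
    return len(suite) / len(requirements)

def iterations_per_requirement(iterations, requirements):
    # Cost dimension 3(2): generator iterations per test requirement.
    return iterations / len(requirements)

def mutation_score(killed, total, equivalent):
    # Effectiveness: fraction of non-equivalent mutants killed.
    return killed / (total - equivalent)

suite = ["t1", "t2", "t3", "t4"]                      # illustrative suite
reqs = ["dua%d" % i for i in range(8)]                # 8 hypothetical DUAs
print(suite_size(suite))                              # 4
print(density(suite, reqs))                           # 0.5
print(iterations_per_requirement(16, reqs))           # 2.0
print(mutation_score(killed=45, total=60, equivalent=10))  # 0.9
```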

    GaSubtle: A New Genetic Algorithm for Generating Subtle Higher-Order Mutants

    Mutation testing is an effective, yet costly, testing approach, as it requires generating and running large numbers of faulty programs, called mutants. Mutation testing also suffers from a fundamental problem: a large percentage of equivalent mutants. These are mutants that produce the same output as the original program and therefore cannot be detected. Higher-order mutation is a promising approach that can produce hard-to-detect faulty programs, called subtle mutants, with a low percentage of equivalent mutants. Subtle higher-order mutants constitute a small subset of the large space of mutants, which grows even larger as the order of mutation becomes higher. In this paper, we developed a genetic algorithm for finding subtle higher-order mutants. The proposed approach uses a new mechanism in the crossover phase and five selection techniques to select the mutants that go to the next generation of the genetic algorithm. We implemented a tool, called GaSubtle, that automates the process of creating subtle mutants. We evaluated the proposed approach on 10 subject programs. Our evaluation shows that the proposed crossover generates more subtle mutants than the technique used in a previous genetic algorithm, in less execution time. Results vary across the selection strategies, suggesting a dependency on the tested code.
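    The abstract does not describe GaSubtle's operators, so the following is a generic sketch of the overall GA shape, assuming a higher-order mutant is encoded as a tuple of first-order mutation ids. The one-point crossover over the combined parent genes, the rank-based survivor selection, and the fitness stub are all illustrative assumptions, not GaSubtle's actual mechanism.

```python
import random

# Generic GA sketch for evolving higher-order mutants (illustrative only).
# A candidate is a tuple of first-order mutation ids.

def crossover(parent_a, parent_b, rng):
    """One-point crossover over the union of both parents' genes."""
    genes = list(dict.fromkeys(parent_a + parent_b))  # order-preserving union
    cut = rng.randrange(1, len(genes))
    return tuple(genes[:cut]), tuple(genes[cut:])

def evolve(population, fitness, generations, rng):
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: len(ranked) // 2]         # keep the fitter half
        children = []
        while len(survivors) + len(children) < len(population):
            a, b = rng.sample(survivors, 2)
            children.extend(crossover(a, b, rng))
        population = survivors + children[: len(population) - len(survivors)]
    return max(population, key=fitness)

rng = random.Random(0)
pop = [tuple(rng.sample(range(20), 3)) for _ in range(10)]  # random 3rd-order mutants
# Stub fitness; a real system would measure "subtlety" by running test suites.
best = evolve(pop, fitness=lambda m: len(set(m)), generations=5, rng=rng)
print(all(0 <= g < 20 for g in best))  # True: only recombines existing genes
```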

    An Evaluation Model for Social Development Environments

    Distributed software development is becoming a common practice among developers. Factors such as improvements to development environments, their extensibility, and the emergence of social networking software are leading this shift. They turn the development process (both co-located and geographically distributed) into a practice that 1) improves a team's productivity, and 2) encourages and supports social interaction among team members. These factors, together with the emergence of distributed development, the evolution of Integrated Development Environments (IDEs), and advances in social media, have drawn the attention of software development teams and made them consider how to better support the social nature of software developers and the social aspects of software development, including awareness of team members' activity and progress, their presence, collaboration, communication, and coordination around shared artifacts. IDEs are the tools most commonly used by developers and programmers. Integrating the most-needed development tools inside the IDE makes it a Collaborative Development Environment.

    Performance Evaluation of Machine Learning Approaches in Detecting IoT-Botnet Attacks

    Botnets are today recognized as one of the most advanced vulnerability threats. Botnets control a huge percentage of network traffic and PCs. They have the ability to remotely control PCs (zombie machines) through their creator (BotMaster) via a Command and Control (C&C) framework. They are the key to a variety of Internet attacks such as spam, DDoS, and spreading malware. This study evaluates a number of machine learning techniques for detecting botnet attacks on IoT networks, to help researchers choose a suitable ML algorithm for their applications. Using the BoT-IoT dataset, six different machine learning methods were evaluated: REPTree, RandomTree, RandomForest, J48, metaBagging, and Naive Bayes. Several measures, including accuracy, TPR, FPR, and more, were used to evaluate the algorithms' performance. The six algorithms were evaluated in three different testing scenarios. Scenario 1 tested the algorithms using all of the parameters present in the BoT-IoT dataset, scenario 2 used the IG feature-reduction approach, and scenario 3 used features extracted from the attacker's received packets. The results revealed that the assessed algorithms performed well in all three cases, with slight differences.
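    The evaluation measures named above (accuracy, TPR, FPR) all derive from a binary confusion matrix; the following sketch computes them from illustrative label vectors, not from BoT-IoT results.

```python
# Confusion-matrix measures for a binary detector (1 = attack, 0 = benign).

def confusion_counts(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, fp, tn, fn

def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

def tpr(tp, fn):
    # True-positive rate: fraction of attacks actually detected.
    return tp / (tp + fn)

def fpr(fp, tn):
    # False-positive rate: fraction of benign traffic flagged as attack.
    return fp / (fp + tn)

y_true = [1, 1, 1, 1, 0, 0, 0, 0]   # illustrative ground truth
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]   # illustrative predictions
tp, fp, tn, fn = confusion_counts(y_true, y_pred)
print(accuracy(tp, fp, tn, fn), tpr(tp, fn), fpr(fp, tn))  # 0.75 0.75 0.25
```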

    Resource Allocation in a Client/Server System for Massive Multi-Player Online Games (accepted for publication in IEEE Transactions on Computers; content may change prior to final publication)

    The creation of a Massive Multi-Player On-line Game (MMOG) has significant costs, such as maintenance of server rooms, server administration, and customer service. The capacity of servers in a client/server MMOG is hard to scale and cannot adjust quickly to peaks in demand while maintaining the required response time. To handle these peaks in demand, we propose to employ users' computers as secondary servers, which allows the MMOG to support an increase in users. Here, we consider two cases. First, to minimize response times from the server, we develop and implement five static heuristics that realize a secondary-server scheme and reduce the time taken to compute the state of the MMOG. Second, for our study on fairness, the goal of the heuristics is to provide a "fair" environment for all users (in terms of similar response times) and to be "robust" against the uncertainty in the number of new players that may join a given system configuration. The number of heterogeneous secondary servers, the conversion of a player to a secondary server, and the assignment of players to secondary servers are determined by the heuristics implemented in this study.

    Robust Resource Allocation in a Massive Multiplayer Online Gaming Environment

    The environment considered in this research is a massive multiplayer online gaming (MMOG) environment. Each user controls an avatar (an image that represents and is manipulated by the user) in a virtual world and interacts with other users. An important aspect of an MMOG is maintaining a fair environment among users (i.e., not giving an unfair advantage to users with faster connections or more powerful computers). The experience (either positive or negative) a user has with the MMOG environment depends on how quickly the game world responds to the user's actions. This study focuses on scaling the system based on demand while maintaining an environment that guarantees fairness. Consider an environment where a main server (MS) controls the state of the virtual world. If performance falls below acceptable standards, the MS can off-load calculations to secondary servers (SSs). An SS is a user's computer that is converted into a server. Four heuristics are proposed for determining the number of SSs, which users are converted to SSs, and how users are assigned to the SSs and the MS. The goal of the heuristics is to provide a "fair" environment for all users and to be "robust" against the uncertainty in the number of new players that may join a given system configuration. The heuristics are evaluated and compared by simulation.
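    The four heuristics themselves are not specified in the abstract, so the following is a textbook sketch of the kind of assignment problem they address: greedily placing each player on the least-loaded server (MS or SS) to keep per-server load, and hence response time, similar across users. The player costs, server names, and greedy min-load rule are illustrative assumptions, not the paper's heuristics.

```python
import heapq

# Illustrative greedy load-balancing sketch for the MS/SS assignment problem:
# place each player (heaviest first) on the currently least-loaded server,
# approximating a "fair" (min-max load) outcome.

def assign_players(players, servers):
    """players: {name: load_cost}; servers: list of server names.
    Returns (assignment, per-server loads)."""
    heap = [(0.0, s) for s in servers]
    heapq.heapify(heap)
    assignment = {}
    for name, cost in sorted(players.items(), key=lambda kv: -kv[1]):
        load, server = heapq.heappop(heap)      # least-loaded server
        assignment[name] = server
        heapq.heappush(heap, (load + cost, server))
    loads = {s: 0.0 for s in servers}
    for name, server in assignment.items():
        loads[server] += players[name]
    return assignment, loads

players = {"p1": 4, "p2": 3, "p3": 3, "p4": 2, "p5": 2}  # hypothetical costs
assignment, loads = assign_players(players, ["MS", "SS1", "SS2"])
print(max(loads.values()))  # 5.0 (the most-loaded server's total)
```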