
    Weighted string distance approach based on modified clustering technique for optimizing test case prioritization

    Numerous test case prioritization (TCP) approaches have been introduced to enhance test viability in software testing, with the goal of maximizing early fault detection as measured by the average percentage of faults detected (APFD). String-based approaches have shown that applying a single string distance metric to differentiate test cases can improve APFD and coverage rate (CR) results. However, to precisely differentiate test cases in regression testing, the string approach still requires enhancement, as it lacks priority criteria. Therefore, a study on how to effectively cluster and prioritize test cases through a string-based approach was conducted. To counter the limitations of single string distances, weighted string distances are introduced. A further enhancement was made by tuning the weighted string metric with K-Means clustering and prioritization using the Firefly Algorithm (FA), making the TCP approach more flexible in manipulating the available information. The combination of the weighted string distances with clustering and prioritization was then executed under the designed process as a new weighted string distances-based approach for complete evaluation. The experimental results show that all the weighted string distances obtained better results than their single string metric counterparts, with average APFD values of 95.73% and CR values of 61.80% on the cstcas Siemens dataset. As for the proposed weighted string distances approach with clustering techniques for regression testing, the combination obtained better results and greater flexibility than the conventional string approach. In addition, the proposed approach passed statistical assessment, obtaining a p-value higher than 0.05 in the Shapiro-Wilk normality test and a p-value lower than 0.05 in the Tukey-Kramer post hoc test. In conclusion, the proposed weighted string distances approach improves the overall APFD and CR scores and provides flexibility in the TCP approach for regression testing environments.
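
    A minimal sketch of the APFD metric that the abstract optimizes is shown below; it scores a prioritized test order by how early each fault is first revealed. The test names and the fault matrix are illustrative assumptions, not data from the paper.

```python
# Sketch: APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2n), where TFi is the
# 1-based position of the first test that reveals fault i, n is the number of
# tests, and m is the number of faults.

def apfd(order, fault_matrix):
    """order: test-case ids in prioritized execution order.
    fault_matrix: test-case id -> set of faults that test detects."""
    n = len(order)
    faults = set().union(*fault_matrix.values())
    m = len(faults)
    first_reveal = []
    for fault in faults:
        for pos, test in enumerate(order, start=1):
            if fault in fault_matrix.get(test, set()):
                first_reveal.append(pos)
                break
    return 1 - sum(first_reveal) / (n * m) + 1 / (2 * n)

# Hypothetical data: 4 tests, 3 faults.
detects = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": set(), "t4": {"f1", "f3"}}
print(apfd(["t2", "t4", "t1", "t3"], detects))  # ~0.79, faults revealed early
print(apfd(["t3", "t1", "t4", "t2"], detects))  # ~0.38, faults revealed late
```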

    Search-Based Software Maintenance and Testing

    In software engineering there are many expensive tasks that are performed during development and maintenance activities. There has therefore been considerable effort to automate these tasks in order to significantly reduce the development and maintenance cost of software, since automation requires fewer human resources. One of the most widely used ways to achieve such automation is Search-Based Software Engineering (SBSE), which reformulates traditional software engineering tasks as search problems. In SBSE, the set of all candidate solutions to the problem defines the search space, while a fitness function differentiates between candidate solutions, providing guidance to the optimization process. After the reformulation of software engineering tasks as optimization problems, search algorithms are used to solve them. Several search algorithms have been used in the literature, such as genetic algorithms, genetic programming, simulated annealing, hill climbing (gradient descent), greedy algorithms, particle swarm, and ant colony optimization. This thesis investigates and proposes the use of search-based approaches to reduce the effort of software maintenance and software testing, with particular attention to four main activities: (i) program comprehension; (ii) defect prediction; (iii) test data generation; and (iv) test suite optimization for regression testing. For program comprehension and defect prediction, this thesis provides their first formulations as optimization problems and then proposes the use of genetic algorithms to solve them. More precisely, this thesis investigates the peculiarities of source code compared with textual documents written in natural language and proposes the use of Genetic Algorithms (GAs) to calibrate and assemble IR techniques for different software engineering tasks. This thesis also investigates and proposes the use of Multi-Objective Genetic Algorithms (MOGAs) to build multi-objective defect prediction models that identify defect-prone software components by taking into account multiple, practical software engineering criteria. Test data generation and test suite optimization have been extensively investigated as search-based problems in the literature. However, despite the large body of work on search algorithms applied to software testing, both (i) automatic test data generation and (ii) test suite optimization present several limitations and do not always produce satisfying results. The success of evolutionary software testing techniques in general, and GAs in particular, depends on several factors. One of these factors is the level of diversity among the individuals in the population, which directly affects the exploration ability of the search. For example, evolutionary test case generation techniques that employ GAs can be severely affected by genetic drift, i.e., a loss of diversity between solutions, which leads to premature convergence of GAs towards local optima. For these reasons, this thesis investigates the role played by diversity-preserving mechanisms in the performance of GAs and proposes a novel diversity mechanism based on Singular Value Decomposition and linear algebra. This mechanism has been integrated within standard GAs and evaluated for evolutionary test data generation. It has also been integrated within MOGAs and empirically evaluated for regression testing.
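
    To make the SBSE formulation concrete, the sketch below evolves test inputs toward covering a single hypothetical branch, with a fitness (branch distance) function guiding the search as described above. The target condition, genetic operators, and parameters are illustrative assumptions, not taken from the thesis.

```python
# Minimal genetic algorithm for test data generation: the search space is the
# set of candidate (x, y) inputs and the fitness is the distance to covering a
# hypothetical target branch `if x * 2 == y + 10`.
import random

def branch_distance(x, y):
    return abs(x * 2 - (y + 10))  # 0 means the target branch is covered

def evolve(pop_size=50, generations=200):
    pop = [(random.randint(-100, 100), random.randint(-100, 100))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: branch_distance(*ind))
        if branch_distance(*pop[0]) == 0:
            return pop[0]                      # covering input found
        parents = pop[: pop_size // 2]         # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])               # single-point crossover
            if random.random() < 0.3:          # mutation preserves diversity
                child = (child[0] + random.randint(-5, 5),
                         child[1] + random.randint(-5, 5))
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda ind: branch_distance(*ind))

print(evolve())  # e.g. (12, 14), since 12 * 2 == 14 + 10
```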

    Cybersecurity Alert Prioritization in a Critical High Power Grid With Latent Spaces

    High-power electric grid networks require extreme security in their associated telecommunication networks to ensure protection and control throughout power transmission. Accordingly, supervisory control and data acquisition systems form a vital part of any critical infrastructure, and the safety of the associated telecommunication network from intrusion is crucial. Whereas events related to operation and maintenance are often available and carefully documented, only a few tools have been proposed to discriminate among the heterogeneous data produced by intrusion detection systems and to support the network engineers. In this work, we present the use of deep learning techniques, such as autoencoders, alongside conventional Multiple Correspondence Analysis, to analyze and prune the events on power communication networks in terms of the categorical data types often used in anomaly and intrusion detection (such as addresses or anomaly descriptions). This analysis allows us to quantify and statistically describe high-severity events. Overall, around 5-10% of alerts have been prioritized in the analysis as the first for managers to handle. Moreover, probability clouds of alerts have been shown to form explicit manifolds in latent spaces. These results offer a homogeneous framework for implementing anomaly detection prioritization in power communication networks.
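
    As an illustration of the latent-space idea described above, the sketch below trains a small autoencoder on one-hot encoded alert features and prioritizes the alerts with the largest reconstruction error. The feature encoding, network sizes, and the top-5% cut-off are assumptions for demonstration, not the configuration used in the paper.

```python
# Reconstruction-based alert prioritization with a small autoencoder (PyTorch).
import numpy as np
import torch
import torch.nn as nn

# Assume alerts were already one-hot encoded from categorical fields
# (addresses, anomaly descriptions, ...) into a binary feature matrix.
rng = np.random.default_rng(0)
X = torch.tensor(rng.integers(0, 2, size=(1000, 64)), dtype=torch.float32)

model = nn.Sequential(
    nn.Linear(64, 16), nn.ReLU(),    # encoder -> 16-dimensional latent space
    nn.Linear(16, 64), nn.Sigmoid()  # decoder
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss(reduction="none")

for _ in range(200):                 # full-batch training for brevity
    opt.zero_grad()
    loss_fn(model(X), X).mean().backward()
    opt.step()

# Alerts that reconstruct poorly are treated as high severity; keeping the
# top ~5% mirrors the prioritized portion reported in the abstract.
with torch.no_grad():
    errors = loss_fn(model(X), X).mean(dim=1)
priority = torch.nonzero(errors >= torch.quantile(errors, 0.95)).flatten()
print(f"{len(priority)} alerts prioritized for manual handling")
```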

    Biopsychosocial Assessment and Ergonomics Intervention for Sustainable Living: A Case Study on Flats

    This study proposes an ergonomics-based approach for those who are living in small housings (known as flats) in Indonesia. With regard to human capability and limitation, this research shows how the basic needs of human beings are captured and analyzed, followed by proposed designs of facilities and living standards in small housings. Ninety respondents were involved in the study through in-depth interviews and face-to-face questionnaires. The results show several proposed modifications of critical facilities (such as a multifunction ironing workstation, bed furniture, and a clothesline), which were validated through usability testing. Overall, it is hoped that the proposed designs will support biopsychosocial needs and sustainability.

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed solution. This includes a general approach, consisting of four distinct steps, as well as specific action items that are to be performed for every step. The discussion also covers language and tool support and the challenges arising from the transformation.

    Optimal sensor placement for sewer capacity risk management

    Complex linear assets, such as those found in transportation and utilities, are vital to economies, and in some cases, to public health. Wastewater collection systems in the United States are vital to both. Yet effective approaches to remediating failures in these systems remain an unresolved shortfall for system operators. This shortfall is evident in the estimated 850 billion gallons of untreated sewage that escapes combined sewer pipes each year (US EPA 2004a) and the estimated 40,000 sanitary sewer overflows and 400,000 backups of untreated sewage into basements (US EPA 2001). Failures in wastewater collection systems can be prevented if they can be detected in time to apply intervention strategies such as pipe maintenance, repair, or rehabilitation. This is the essence of a risk management process. The International Council on Systems Engineering recommends that risks be prioritized as a function of severity and occurrence and that criteria be established for acceptable and unacceptable risks (INCOSE 2007). A significant impediment to applying generally accepted risk models to wastewater collection systems is the difficulty of quantifying risk likelihoods. These difficulties stem from the size and complexity of the systems, the lack of data and statistics characterizing the distribution of risk, the high cost of evaluating even a small number of components, and the lack of methods to quantify risk. This research investigates new methods to assess the risk likelihood of failure through a novel approach to the placement of sensors in wastewater collection systems. The hypothesis is that iterative movement of water level sensors, directed by a specialized metaheuristic search technique, can improve the efficiency of discovering locations of unacceptable risk. An agent-based simulation is constructed to validate the performance of this technique and to test its sensitivity to varying environments. The results demonstrated that a multi-phase search strategy, with a varying number of sensors deployed in each phase, can efficiently discover locations of unacceptable risk that can be managed via a perpetual monitoring, analysis, and remediation process. A number of promising, well-defined future research opportunities also emerged from this research.
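
    A minimal sketch of the multi-phase sensor-movement idea is given below: a small number of movable water-level sensors is repeatedly redeployed, and each round focuses the candidate locations on the neighbourhoods of the highest readings. The toy ring-shaped network, noise model, thresholds, and the simple focusing rule (standing in for the specialized metaheuristic) are all hypothetical assumptions.

```python
# Toy multi-phase sensor placement: find pipe segments with unacceptable risk.
import random

random.seed(1)
N_PIPES = 200
RISK_THRESHOLD = 0.8
# True risk likelihoods are unknown to the search; smoothed so that risk is
# spatially correlated along the (circular) collection network.
raw = [random.random() for _ in range(N_PIPES)]
true_risk = [(raw[i - 1] + raw[i] + raw[(i + 1) % N_PIPES]) / 3
             for i in range(N_PIPES)]

def observe(pipe):
    """A sensor reading: a noisy proxy of the true risk likelihood."""
    return true_risk[pipe] + random.gauss(0, 0.05)

def phase(candidates, n_sensors, rounds):
    """Deploy n_sensors per round; refocus candidates on the top readings."""
    found = set()
    for _ in range(rounds):
        deployed = random.sample(candidates, min(n_sensors, len(candidates)))
        scores = {p: observe(p) for p in deployed}
        found |= {p for p, s in scores.items() if s >= RISK_THRESHOLD}
        top = sorted(scores, key=scores.get, reverse=True)[: max(1, n_sensors // 4)]
        candidates = sorted({(p + d) % N_PIPES for p in top for d in range(-5, 6)})
    return found

# Phase 1: broad sweep with many sensors; phase 2: focused sweep with fewer.
flagged = phase(list(range(N_PIPES)), n_sensors=20, rounds=3)
flagged |= phase(list(range(N_PIPES)), n_sensors=5, rounds=10)
print(f"{len(flagged)} pipe segments flagged for remediation")
```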

    Cyber Security and Critical Infrastructures 2nd Volume

    The second volume of the book contains the manuscripts that were accepted for publication in the MDPI Special Topic "Cyber Security and Critical Infrastructure" after a rigorous peer-review process. Authors from academia, government, and industry contributed their innovative solutions, consistent with the interdisciplinary nature of cybersecurity. The book contains 16 articles: an editorial that explains the current challenges, innovative solutions, and real-world experiences involving critical infrastructure, and 15 original papers that present state-of-the-art solutions to attacks on critical systems.

    Static Code Analysis: On Detection of Security Vulnerabilities and Classification of Warning Messages

    This thesis addresses several aspects of using static code analysis tools for the detection of security vulnerabilities and faults within source code. First, the performance of three widely used static code analysis tools with respect to the detection of security vulnerabilities is evaluated. This is done with the help of a large benchmarking suite designed to test static code analysis tools' performance regarding security vulnerabilities. The performance of the three tools is also evaluated using three open source software projects with known security vulnerabilities. The main results of the first part of this thesis showed that the three evaluated tools do not have significantly different performance in detecting security vulnerabilities. 27% of C/C++ vulnerabilities and 11% of Java vulnerabilities were not detected by any of the three tools. Furthermore, overall recall values for all three tools were close to or below 50%, indicating performance comparable to or worse than random guessing. These results were corroborated by the tools' performance on the three real software projects. The second part of this thesis is focused on machine-learning based classification of messages extracted from static code analysis reports. This work is based on data from five real NASA software projects. A classifier is trained on increasing percentages of labeled data in order to emulate an on-going analysis effort for each of the five datasets. Results showed that classification performance is highly dependent on the distribution of true and false positives among source code files. One of the five datasets yielded good predictive classification regarding true positives. One more dataset led to acceptable performance, while the remaining three datasets failed to yield good results. Investigating the distribution of true and false positives revealed that messages were classified successfully when only real faults and/or only false faults were clustered in files or were flagged by the same checker. The high percentages of false positive singletons (files or checkers that produced 0 true positives and 1 false positive) were found to negatively affect the classifier's performance.
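
    The sketch below emulates the on-going analysis setup described above: a classifier is retrained on growing fractions of labeled warnings and used to predict the true/false-positive label of the remainder. The synthetic features and the random-forest learner are illustrative assumptions, not the thesis's NASA data or chosen classifier.

```python
# Emulate incremental labeling of static-analysis warnings and measure how
# well the classifier recovers true positives among the unlabeled remainder.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-warning features (checker id, file metrics, density, ...).
X = rng.normal(size=(n, 8))
# Label 1 = true positive warning, 0 = false positive (synthetic ground truth).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

for frac in (0.10, 0.25, 0.50, 0.75):            # fraction labeled so far
    cut = int(n * frac)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[:cut], y[:cut])                     # warnings analysts labeled
    pred = clf.predict(X[cut:])                   # remaining, unseen warnings
    print(f"labeled {frac:.0%}: recall on true positives = "
          f"{recall_score(y[cut:], pred):.2f}")
```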