    Metadata Extraction in Database Testing

    The need for automated tools to test the correctness of database applications is crucial today, since databases play an important role in almost all organizations. A database’s behavior also needs to be verified in order to avoid costly errors and the extraction of false information. The main aim of this paper was to create a component-based tester called DBSoft that tests the correctness of database application systems. The DBSoft toolkit consists of five tools: information collection with the Parser tool, test case generation with the Input Generator tool, test case implementation with the Output Generator tool, test case validation with the Output Validator tool, and report generation with the Report Generator tool.
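
    As an illustration only, the sketch below shows how such a five-stage component pipeline could be wired together; the class and method names are hypothetical and do not reflect DBSoft's actual implementation.

        # Hypothetical sketch of a five-stage, component-based database tester
        # (illustrative only; not DBSoft's real API).
        class Parser:
            def collect(self, schema_sql):
                # Extract table/column metadata from the schema definition.
                return {"tables": [], "constraints": []}

        class InputGenerator:
            def generate(self, metadata):
                # Derive test cases (e.g. INSERT/SELECT statements) from the metadata.
                return ["SELECT 1"]

        class OutputGenerator:
            def execute(self, connection, test_cases):
                # Run each test case against the database and record the observed output.
                return [(tc, connection.execute(tc)) for tc in test_cases]

        class OutputValidator:
            def validate(self, results, expected):
                # Compare each observed output with the expected output.
                return [(tc, out == expected.get(tc)) for tc, out in results]

        class ReportGenerator:
            def report(self, verdicts):
                # Summarize the pass/fail verdict for each test case.
                for tc, passed in verdicts:
                    print(("PASS" if passed else "FAIL") + ": " + tc)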

    A Comprehensive Survey on Database Management System Fuzzing: Techniques, Taxonomy and Experimental Comparison

    Database Management System (DBMS) fuzzing is an automated testing technique aimed at detecting errors and vulnerabilities in DBMSs by generating, mutating, and executing test cases. It not only reduces the time and cost of manual testing but also enhances detection coverage, providing valuable assistance in developing commercial DBMSs. Existing fuzzing surveys mainly focus on general-purpose software. However, DBMSs differ from general-purpose software in terms of internal structure, input/output, and test objectives, requiring specialized fuzzing strategies. Therefore, this paper focuses on DBMS fuzzing and provides a comprehensive review and comparison of the methods in this field. We first introduce the fundamental concepts. Then, we systematically define a general fuzzing procedure and decompose and categorize existing methods. Furthermore, we classify existing methods from the testing-objective perspective, covering various components in DBMSs. For representative works, more detailed descriptions are provided to analyze their strengths and limitations. To objectively evaluate the performance of each method, we present an open-source DBMS fuzzing toolkit, OpenDBFuzz. Based on this toolkit, we conduct a detailed experimental comparative analysis of existing methods and finally discuss future research directions. Comment: 34 pages, 22 figures.
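
    As a toy illustration of the generate-mutate-execute loop that such fuzzers automate, the sketch below feeds randomly mutated SQL statements to an in-memory SQLite database and reports unexpected errors; it is not taken from OpenDBFuzz or any of the surveyed tools.

        import random
        import sqlite3

        SEEDS = ["SELECT 1", "SELECT abs(-2)", "SELECT 1 WHERE 1 = 1"]
        TOKENS = ["NULL", "(", ")", "'x'", "-1", "OR", "NOT"]

        def mutate(stmt, rng):
            # Insert a random token at a random position in the seed statement.
            pos = rng.randrange(len(stmt) + 1)
            return stmt[:pos] + " " + rng.choice(TOKENS) + " " + stmt[pos:]

        def fuzz(iterations=1000, seed=0):
            rng = random.Random(seed)
            db = sqlite3.connect(":memory:")
            for _ in range(iterations):
                stmt = mutate(rng.choice(SEEDS), rng)
                try:
                    db.execute(stmt)
                except sqlite3.Error:
                    pass  # most mutants are simply invalid SQL; ignore them
                except Exception as exc:
                    # Anything other than a SQL error is worth reporting.
                    print("unexpected error on %r: %r" % (stmt, exc))

        if __name__ == "__main__":
            fuzz()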

    A revisit of three studies related to random testing

    Software testing is an approach that ensures the quality of software through execution, with the goal of revealing failures and other problems as quickly as possible. Test case selection is a fundamental issue in software testing, and has generated a large body of research, especially with regard to the effectiveness of random testing (RT), where test cases are randomly selected from the software’s input domain. In this paper, we revisit three of our previous studies. The first study investigated a sufficient condition for partition testing (PT) to outperform RT, and was motivated by various controversial and conflicting results suggesting that sometimes PT performed better than RT, and sometimes the opposite. The second study aimed at enhancing RT itself, and was motivated by the fact that RT continues to be a fundamental and popular testing technique. This second study enhanced RT fault detection effectiveness by making use of the common observation that failure-causing inputs tend to cluster together, and resulted in a new family of RT techniques: adaptive random testing (ART), which is random testing with an even spread of test cases across the input domain. Following the successful use of failure-causing region contiguity insights to develop ART, we conducted a third study on how to make use of other characteristics of failure-causing inputs to develop more effective test case selection strategies. This third study revealed how best to approach testing strategies when certain characteristics of the failure-causing inputs are known, and produced some interesting and important results. In revisiting these three previous studies, we explore their unexpected commonalities, and identify diversity as a key concept underlying their effectiveness. This observation further prompted us to examine whether or not such a concept plays a role in other areas of software testing, and our conclusion is that, yes, diversity appears to be one of the most important concepts in the field of software testing.
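
    A minimal sketch of the fixed-size-candidate-set flavour of adaptive random testing on a one-dimensional numeric input domain is shown below; it follows the common textbook formulation (each new test case is the random candidate farthest from all previously executed tests) rather than any specific algorithm from the revisited studies.

        import random

        def fscs_art(is_failure, lo=0.0, hi=1.0, k=10, budget=1000, seed=0):
            # Fixed-size-candidate-set adaptive random testing on the interval [lo, hi].
            rng = random.Random(seed)
            executed = []
            for n in range(1, budget + 1):
                if not executed:
                    test = rng.uniform(lo, hi)
                else:
                    candidates = [rng.uniform(lo, hi) for _ in range(k)]
                    # Choose the candidate farthest from its nearest executed test,
                    # spreading test cases evenly across the input domain.
                    test = max(candidates,
                               key=lambda c: min(abs(c - e) for e in executed))
                if is_failure(test):
                    return test, n  # F-measure: number of tests to first failure
                executed.append(test)
            return None, budget

        # Example: a contiguous failure-causing region [0.42, 0.45] within [0, 1].
        print(fscs_art(lambda x: 0.42 <= x <= 0.45))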

    Delayed failure of software components using stochastic testing

    The present research investigates the delayed failure of software components and addresses the problem that the conventional approach to software testing is unlikely to reveal this type of failure. Delayed failure is defined as a failure that occurs some time after the condition that causes the failure, and is a consequence of long-latency error propagation. This research seeks to close a perceived gap between academic research into software testing and industrial software testing practice by showing that stochastic testing can reveal delayed failure, supporting this conclusion with a model of error propagation and failure that has been validated by experiment. The focus of the present research is on software components described by a request-response model. Within this conceptual framework, a Markov chain model of error propagation and failure is used to derive the expected delayed failure behaviour of software components. Results from an experimental study of delayed failure of the DBMS software components MySQL and Oracle XE, using stochastic testing with random generation of SQL, are consistent with the expected behaviour based on the Markov chain model. Metrics for failure delay and reliability are shown to depend on the characteristics of the chosen experimental profile. SQL mutation is used to generate negative as well as positive test profiles. There appear to be few systematic studies of delayed failure in the software engineering literature, and no studies of stochastic testing related to delayed failure of software components, or specifically to delayed failure of DBMSs. Stochastic testing is shown to be an effective technique for revealing delayed failure of software components, as well as a suitable technique for reliability and robustness testing of software components. These results provide a deeper insight into the testing technique and should lead to further research. Stochastic testing could provide a dependability benchmark for component-based software engineering.
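
    As a small illustration of how an absorbing Markov chain can model delayed failure under a request-response profile, the sketch below computes the expected number of requests before a latent error surfaces as an observable failure; the states and transition probabilities are invented for illustration and are not the fitted values from this research.

        # States: healthy, latent error present, observable failure (absorbing).
        # Per-request transition probabilities (illustrative values only).
        P_INTRODUCE = 0.02  # healthy -> latent error
        P_SURFACE = 0.10    # latent error -> observable failure
        P_MASK = 0.05       # latent error -> healthy (error overwritten or masked)

        def expected_requests_to_failure():
            # Q: transition matrix restricted to the transient states {healthy, latent}.
            q00, q01 = 1 - P_INTRODUCE, P_INTRODUCE
            q10, q11 = P_MASK, 1 - P_SURFACE - P_MASK
            # Fundamental matrix N = (I - Q)^-1, solved by hand for the 2x2 case.
            a, b = 1 - q00, -q01
            c, d = -q10, 1 - q11
            det = a * d - b * c
            n00, n01 = d / det, -b / det
            # Expected number of requests before absorption (failure), starting healthy.
            return n00 + n01

        print("expected requests until delayed failure: %.1f"
              % expected_requests_to_failure())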