
    Prioritization of Re-executable Test Cases of Activity Diagram in Regression Testing Using Model Based Environment

    Software testing is of vital importance in the software development life cycle (SDLC) for validating new versions of the software and detecting faults. Regression testing concentrates on generating test cases for the changed parts of the software so that faults are detected earlier than with other testing practices. In model-based testing, testing is performed top-down (as black-box testing) against the design models of the software, for example UML diagrams. UML diagrams give a requirement-level representation of the software in graphical form and are now a standard in software engineering. In our proposed approach, we derive a new technique for prioritizing test cases in a model-based environment. The technique takes an activity diagram as its input, because an activity diagram captures the complete flow of every activity in the system and thus represents its full behavior. As requirements change, the activity diagram changes with them; each change is recorded, and test cases are generated for both the original and the changed diagram. The two sets of test cases are compared and classified as re-usable or re-executable: re-usable test cases remain unchanged across the requirement changes, while re-executable test cases belong to the changed part of the diagram. The re-executable test cases are then prioritized using a heuristic algorithm based on an ACT (Activity Connector) table. We prioritize only the re-executable test cases because the re-usable test cases are identical in both versions of the diagram and were already exercised when the original diagram was tested, whereas the re-executable test cases have never been tested and are the ones most likely to reveal faults in the modified design quickly. Prioritizing them also reduces test execution time, which yields more effective testing and a better new version of the software. Existing prioritization techniques are either code based or depend on tool support. Code-based techniques are complex and tedious, because even a small change in the code requires retesting the whole application, and tool-based approaches impose multiple assumptions and constraints. We expect the proposed technique, which to our knowledge has not been used before, to give better results. DOI: 10.17762/ijritcc2321-8169.15077
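    The classification step described above can be illustrated with a small sketch. The path representation and the set operations below are assumptions for the example; in particular, the sort key (how many changed activities a path covers) is only a stand-in for the paper's ACT-table heuristic, whose details are not given in the abstract.

```python
# Hypothetical sketch: test cases are modeled as paths (tuples of activity
# names) generated from the original and the changed activity diagram.
# Paths present in both versions are re-usable; paths that exist only in the
# changed version are re-executable and get prioritized.

def classify_and_prioritize(original_paths, changed_paths, changed_activities):
    original, changed = set(original_paths), set(changed_paths)

    reusable = original & changed          # unchanged across both versions
    re_executable = changed - original     # belong to the changed part

    # Assumed proxy for the ACT-table heuristic: paths covering more changed
    # activities are run first.
    prioritized = sorted(
        re_executable,
        key=lambda path: sum(activity in changed_activities for activity in path),
        reverse=True,
    )
    return reusable, prioritized


# Toy usage with hypothetical activity names
reusable, run_first = classify_and_prioritize(
    original_paths=[("login", "browse", "logout"), ("login", "buy", "logout")],
    changed_paths=[("login", "browse", "logout"), ("login", "buy", "pay", "logout")],
    changed_activities={"pay"},
)
```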

    Automating Regression Test Selection for Web Services

    As Web services grow in maturity and use, so do the methods which are being used to test and maintain them. Regression testing is a major component of most major testing systems but has only begun to be applied to Web services. The majority of the tools and techniques applying regression testing to Web services are focused on test-case generation, thus ignoring the potential savings of regression test selection. Regression test selection optimizes the regression testing process by selecting a subset of all tests, while still maintaining some level of confidence about the system performing no worse than the unmodified system. A safe regression test selection technique implies that after selection, the level of confidence is as high as it would be if no tests were removed. Since safe regression test selection techniques generally involve code-based (white-box) testing, they cannot be directly applied to Web services due to their loosely coupled, standards-based, and distributed nature. A framework which automates both the regression test selection and regression testing processes for Web services in a decentralized, end-to-end manner is proposed. As part of this approach, special consideration is given to the concurrency issues which may occur in an autonomous and decentralized system. The resulting synchronization method is presented along with a set of algorithms which manage the regression testing and regression test selection processes throughout the system. A set of empirical results demonstrates the feasibility and benefit of the approach.
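    As an illustration only (not the framework's actual selection algorithm), a conservative selection rule can be sketched as follows, under the assumption that each test case records the service operations it invokes.

```python
# Illustrative sketch: any test that touches a modified service operation is
# re-selected; tests exercising only unmodified operations are skipped.

def select_regression_tests(test_to_operations, modified_operations):
    """Return the subset of tests that must be re-run after a change."""
    modified = set(modified_operations)
    return {
        test
        for test, operations in test_to_operations.items()
        if modified & set(operations)
    }


selected = select_regression_tests(
    test_to_operations={
        "test_checkout": ["CartService.addItem", "PaymentService.charge"],
        "test_browse": ["CatalogService.search"],
    },
    modified_operations=["PaymentService.charge"],  # hypothetical change set
)
# selected == {"test_checkout"}
```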

    Exploring cognitive style and task-specific preferences for process representations

    Process models describe someone's understanding of processes. Processes can be described using unstructured, semi-formal, or diagrammatic representation forms. These representations are used in a variety of task settings, ranging from understanding processes to executing or improving them, with the implicit assumption that the chosen representation form is appropriate for all task settings. We explore the validity of this assumption by examining empirically the preference for different process representation forms depending on the task setting and the cognitive style of the user. Based on data collected from 120 business school students, we show that preferences for process representation formats vary depending on the application purpose and the cognitive style of the participants. However, users consistently prefer diagrams over other representation formats. Our research informs a broader research agenda on task-specific applications of process modeling. We offer several recommendations for further research in this area.

    Towards a Runtime Standard-Based Testing Framework for Dynamic Distributed Information Systems

    In this work, we are interested in testing dynamic distributed information systems, that is, decentralized information systems which can evolve over time. For this purpose we propose a runtime, standard-based test execution platform built upon the standardized TTCN-3 test specification and implementation language. The proposed platform ensures the execution of test cases at runtime and considers both structural and behavioral adaptations of the system under test. In addition, it is equipped with a test isolation layer that minimizes the risk of interference between business and testing processes. The platform also generates a minimal subset of test scenarios to execute after each adaptation. Finally, it proposes an optimal strategy for placing the TTCN-3 test components among the system's execution nodes.
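    The placement strategy itself is not detailed in the abstract; the following is only an assumed greedy load-balancing sketch of how test components might be distributed over execution nodes, not the paper's optimal strategy.

```python
# Assumed greedy placement: assign each test component to the currently
# least-loaded execution node, heaviest components first.
import heapq

def place_test_components(component_costs, nodes):
    """Map each test component to an execution node, balancing load greedily."""
    heap = [(0.0, node) for node in nodes]          # (accumulated load, node)
    heapq.heapify(heap)
    placement = {}
    for component, cost in sorted(component_costs.items(),
                                  key=lambda item: item[1], reverse=True):
        load, node = heapq.heappop(heap)
        placement[component] = node
        heapq.heappush(heap, (load + cost, node))
    return placement


placement = place_test_components(
    component_costs={"mtc": 1.0, "ptc_db": 3.0, "ptc_ui": 2.0},  # hypothetical costs
    nodes=["node-1", "node-2"],
)
# e.g. {"ptc_db": "node-1", "ptc_ui": "node-2", "mtc": "node-2"}
```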

    A Variability-Aware Design Approach to the Data Analysis Modeling Process

    The massive amount of current data has led to many different forms of data analysis processes that aim to explore this data to uncover valuable insights such as trends, anomalies, and patterns. These processes support decision makers in their analysis of varied and changing data, ranging from financial transactions to customer interactions and social network postings. They use a wide variety of methods, including machine learning, in several domains such as business, finance, health, and smart cities. Several data analysis processes have been proposed by academia and industry, including CRISP-DM and SEMMA, to describe the phases that data analysis experts go through when solving their problems. Specifically, CRISP-DM has modeling as one of its phases, which involves selecting a modeling technique, generating a test design, building a model, and assessing the model. However, automating these data analysis modeling processes faces numerous challenges from a software engineering perspective. First, software users expect increased flexibility from the software with respect to possible variations in techniques, types of data, and parameter settings; the software is required to accommodate complex usage and deployment variations, which are difficult for non-experts. Second, variability in functionality or quality attributes increases the complexity of these systems and makes them harder to design and implement, and there is a lack of framework designs that take variability into account. Third, the lack of a more comprehensive analysis of variability makes it difficult to evaluate opportunities for automating data analysis modeling. This thesis proposes a variability-aware design approach to the data analysis modeling process. The approach involves: (i) the assessment of the variabilities inherent in CRISP-DM data analysis modeling and the provision of feature models that represent these variabilities; (ii) the definition of a preliminary framework design that captures the identified variabilities; and (iii) an evaluation of the framework design in terms of possibilities for automation. To the best of our knowledge, this work presents the first approach based on variability assessment to design data analysis modeling processes such as CRISP-DM. The approach advances the state of the art by offering a variability-aware design solution that can enhance system flexibility, and a novel software design framework to support data analysis modeling.
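    As a purely illustrative sketch of how modeling-phase variabilities can be captured in a feature model: the feature names and constraints below are assumptions for the example, not the thesis' actual feature models.

```python
# Tiny feature model for the CRISP-DM modeling phase (assumed features), with
# a simple validity check for a chosen configuration.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    mandatory: bool = False
    group: str | None = None            # "xor": exactly one child; "or": at least one
    children: list[Feature] = field(default_factory=list)

modeling = Feature("Modeling", mandatory=True, children=[
    Feature("ModelingTechnique", mandatory=True, group="xor", children=[
        Feature("DecisionTree"), Feature("NeuralNetwork"), Feature("SVM"),
    ]),
    Feature("TestDesign", mandatory=True, group="xor", children=[
        Feature("HoldOut"), Feature("CrossValidation"),
    ]),
    Feature("ModelAssessment", mandatory=True),
])

def valid(feature: Feature, selection: set[str]) -> bool:
    """Check a configuration against mandatory and group constraints."""
    chosen = [c for c in feature.children if c.name in selection]
    if any(c.mandatory and c.name not in selection for c in feature.children):
        return False
    if feature.group == "xor" and len(chosen) != 1:
        return False
    if feature.group == "or" and not chosen:
        return False
    return all(valid(c, selection) for c in chosen)

# A configuration choosing one technique and one test design is valid:
assert valid(modeling, {"ModelingTechnique", "DecisionTree",
                        "TestDesign", "CrossValidation", "ModelAssessment"})
```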

    Unified System on Chip RESTAPI Service (USOCRS)

    This thesis investigates the development of a Unified System on Chip RESTAPI Service (USOCRS) to enhance the efficiency and effectiveness of SoC verification reporting. The research aims to overcome the challenges associated with the transfer, utilization, and interpretation of SoC verification reports by creating a unified platform that integrates various tools and technologies. The research methodology follows a design science approach. A thorough literature review was conducted to explore existing approaches and technologies related to SoC verification reporting, automation, data visualization, and API development; the review revealed gaps in the current state of the field, providing a basis for further investigation. Using the insights gained from the literature review, a system design and implementation plan was developed. This plan makes use of technologies such as FastAPI, SQL and NoSQL databases, Azure Active Directory for authentication, and cloud services. The Verification Toolbox was employed to validate SoC reports based on the organization's standards. The system went through manual testing, and user satisfaction was evaluated to ensure its functionality and usability. The results of this study demonstrate the successful design and implementation of the USOCRS, offering SoC engineers a unified and secure platform for uploading, validating, storing, and retrieving verification reports. The USOCRS facilitates seamless communication between users and the API, granting easy access to vital information, including successes, failures, and test coverage, derived from submitted SoC verification reports. By automating and standardizing the SoC verification reporting process, the USOCRS eliminates manual and repetitive tasks usually done by developers, thereby enhancing productivity and establishing a robust and reliable framework for report storage and retrieval. Through the integration of diverse tools and technologies, the USOCRS presents a comprehensive solution that adheres to the required specifications of the SoC schema used within the organization. Furthermore, the USOCRS significantly improves the efficiency and effectiveness of SoC verification reporting: it facilitates the submission process, reduces latency through optimized data storage, and enables meaningful extraction and analysis of report data.
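    For illustration only, a minimal FastAPI service in the spirit of the USOCRS can be sketched as follows. The endpoint paths, the report schema, and the in-memory store are assumptions for this example, not the thesis' actual implementation; the Azure Active Directory authentication and Verification Toolbox integration mentioned above are left out.

```python
# Minimal sketch of an upload/validate/retrieve API for verification reports.
from uuid import uuid4

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="USOCRS sketch")
reports: dict[str, dict] = {}            # stand-in for the SQL/NoSQL back end

class VerificationReport(BaseModel):
    design: str
    passed: int
    failed: int
    coverage_percent: float

@app.post("/reports")
def upload_report(report: VerificationReport):
    # Validation against the organization's SoC schema would happen here.
    if not 0 <= report.coverage_percent <= 100:
        raise HTTPException(status_code=422, detail="coverage out of range")
    report_id = str(uuid4())
    reports[report_id] = report.dict()
    return {"id": report_id}

@app.get("/reports/{report_id}")
def get_report(report_id: str):
    if report_id not in reports:
        raise HTTPException(status_code=404, detail="report not found")
    return reports[report_id]
```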

    Supporting inheritance hierarchy changes in model-based regression test selection

    Models can be used to ease and manage the development, evolution, and runtime adaptation of a software system. When models are adapted, the resulting models must be rigorously tested. Apart from adding new test cases, it is also important to perform regression testing to ensure that the evolution or adaptation did not break existing functionality. Since regression testing is performed with limited resources and under time constraints, regression test selection (RTS) techniques are needed to reduce the cost of regression testing. Applying model-level RTS for model-based evolution and adaptation is more convenient than using code-level RTS because the test selection process happens at the same level of abstraction as that of evolution and adaptation. In earlier work, we proposed a model-based RTS approach called MaRTS to be used with a fine-grained model-based adaptation framework that targets applications implemented in Java. MaRTS uses UML models consisting of class and activity diagrams. It classifies test cases as obsolete, reusable, or retestable based on changes made to UML class and activity diagrams of the system being adapted. However, MaRTS did not take into account the changes made to the inheritance hierarchy in the class diagram and the impact of these changes on the selection of test cases. This paper extends MaRTS to support such changes, and demonstrates that the extended approach performs as well as or better than code-based RTS approaches in safely selecting regression test cases. While MaRTS can generally be used during any model-driven development or model-based evolution activity, we have developed it in the context of runtime adaptation. We evaluated the extended MaRTS on a set of applications, and compared the results with code-based RTS approaches that also support changes to the inheritance hierarchy. The results showed that the extended MaRTS selected all the test cases relevant to the inheritance hierarchy changes, and that the fault detection ability of the selected test cases was never lower than that of the baseline test cases. The extended MaRTS achieved comparable results to a graph-walk code-based RTS approach (DejaVu), and showed a higher reduction in the number of selected test cases when compared with a static analysis code-based RTS approach (ChEOPSJ).
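    The obsolete/reusable/retestable classification described above can be sketched in a few lines. The inputs (a map from test cases to the model elements they cover, plus the sets of deleted and modified elements, e.g. classes whose inheritance hierarchy changed) are assumptions for illustration, not MaRTS' actual data structures or algorithm.

```python
# Hypothetical three-way classification of test cases against model changes.

def classify_tests(test_coverage, deleted_elements, modified_elements):
    deleted, modified = set(deleted_elements), set(modified_elements)
    classification = {}
    for test, covered in test_coverage.items():
        covered = set(covered)
        if covered & deleted:
            classification[test] = "obsolete"     # exercises removed elements
        elif covered & modified:
            classification[test] = "retestable"   # affected by the change
        else:
            classification[test] = "reusable"     # unaffected by the change
    return classification


result = classify_tests(
    test_coverage={
        "testWithdraw": ["Account", "SavingsAccount.withdraw"],
        "testDeposit": ["Account.deposit"],
    },
    deleted_elements=[],
    modified_elements=["SavingsAccount.withdraw"],  # e.g. superclass changed
)
# result == {"testWithdraw": "retestable", "testDeposit": "reusable"}
```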