10 research outputs found

    Dynamic Slice of Aspect Oriented Programs: A Comparative Study

    Aspect-Oriented Programming (AOP) is an emerging technology for separating crosscutting concerns, which are difficult to modularize in object-oriented programming (OOP). AOP is well suited to areas where code scattering and code tangling arise. Because of AOP-specific language features such as joinpoints, pointcuts, advice and introduction, existing slicing algorithms for procedural or object-oriented programs cannot be applied to AOP directly. This paper surveys different program slicing approaches for AOP using a simple example, and also presents a new approach for computing the dynamic slice of an aspect-oriented program. The complexity of the proposed algorithm compares favourably with some existing algorithms
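The core idea of dynamic slicing can be sketched independently of the AOP-specific details: walk an execution trace backwards from the slicing criterion, collecting the statements that the criterion's value transitively depends on. A minimal illustration, where the trace format and variable names are hypothetical and this is not the paper's algorithm:

```python
# Minimal sketch of dynamic slicing over an execution trace (illustrative
# only; the paper's AOP-specific algorithm additionally handles joinpoints,
# advice, etc.). Each trace entry is (statement_id, defined_var, used_vars).

def dynamic_slice(trace, criterion_var):
    """Return the set of statement ids that the last definition of
    criterion_var (transitively) depends on."""
    slice_stmts = set()
    wanted = {criterion_var}          # variables whose definitions we still need
    for stmt, defined, used in reversed(trace):
        if defined in wanted:
            slice_stmts.add(stmt)
            wanted.discard(defined)   # found its most recent definition
            wanted.update(used)       # now need the defs of its operands
    return slice_stmts

# Trace of:  1: a = 2   2: b = 3   3: c = a + b   4: d = c * a
trace = [(1, "a", []), (2, "b", []), (3, "c", ["a", "b"]), (4, "d", ["c", "a"])]
print(sorted(dynamic_slice(trace, "d")))   # -> [1, 2, 3, 4]
```

Only data dependences are traced here; a full algorithm would also follow control dependences introduced by branches and, for AOP, by woven advice.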

    Dynamic Slicing of Object-Oriented Programs

    Software maintenance is one of the most important parts of the software development cycle. If certain regions of a program contain bugs, they cause more damage than others, so it is important to identify and debug those regions. We use a slicing criterion to obtain a static backward slice of a program to find these areas. An intermediate graphical representation of the input source program is constructed, such as the Program Dependence Graph, the Class Dependence Graph or the System Dependence Graph. Slicing is performed on the System Dependence Graph using the two-pass graph-reachability algorithm proposed by Horwitz [3], yielding a static backward slice. After obtaining the static slice, a dynamic slice is computed for the given input using an algorithm that takes a statement, a set of variables and the input values for those variables as input and produces a dynamic slice
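The slicing step itself reduces to backward reachability over dependence edges. A simplified single-procedure sketch (the full two-pass SDG algorithm of Horwitz et al. additionally distinguishes call, parameter-in and parameter-out edges; the graph below is illustrative):

```python
from collections import deque

# Simplified backward slicing by reachability over a dependence graph
# (a single-procedure PDG). `deps` maps each node to the nodes it depends
# on, via either data or control dependence edges.

def backward_slice(deps, criterion):
    """All nodes reachable from `criterion` along dependence edges."""
    seen = {criterion}
    work = deque([criterion])
    while work:
        n = work.popleft()
        for m in deps.get(n, ()):
            if m not in seen:
                seen.add(m)
                work.append(m)
    return seen

# 1: x = 1   2: y = 2   3: if x > 0:   4:   z = y
deps = {4: [3, 2], 3: [1]}
print(sorted(backward_slice(deps, 4)))  # -> [1, 2, 3, 4]
```

The two-pass refinement matters only across procedure boundaries, where naive reachability would follow infeasible call/return paths.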

    Computing static slices using reduced graph

    There is presently an enormous demand for software products, and a great deal of software is being produced to meet it. Testing must be done before software is launched. Even after completing the testing process satisfactorily, we cannot guarantee that a software product is error-free, and it is clear that not all lines of the source code are responsible for a failure at a particular point. We therefore need not consider the entire source code during testing and can concentrate only on those areas that cause the failure. To discover these high-risk areas, we construct an intermediate representation that specifies the conditions of a program precisely, and from this graph we compute the slices. A slice is an independent piece of the program. Our approach computes slices not from the original graph but from a reduced graph
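One plausible reduction, sketched here under the assumption that the reduced graph collapses mutually dependent statements: every strongly connected component (SCC) of the dependence graph can be merged into a single node, because its statements always appear in the same slices, so slicing the smaller condensed graph loses no precision. The graph data is illustrative:

```python
# Sketch: collapse strongly connected components of a dependence graph
# into single nodes, then slice the smaller condensed graph. Uses
# Kosaraju's algorithm (finish-order DFS, then DFS on the reverse graph).

def condense(edges):
    """Return (scc_of_node, condensed_edges) for a directed graph."""
    nodes = set(edges) | {m for ms in edges.values() for m in ms}
    order, seen = [], set()

    def visit(n):                     # first pass: record finish order
        seen.add(n)
        for m in edges.get(n, ()):
            if m not in seen:
                visit(m)
        order.append(n)

    for n in nodes:
        if n not in seen:
            visit(n)

    rev = {}                          # reversed edges for the second pass
    for n, ms in edges.items():
        for m in ms:
            rev.setdefault(m, []).append(n)

    scc_of, seen = {}, set()
    for root in reversed(order):      # second pass: one DFS per SCC
        if root in seen:
            continue
        stack = [root]
        seen.add(root)
        while stack:
            u = stack.pop()
            scc_of[u] = root          # label each SCC by its root node
            for v in rev.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)

    condensed = {}
    for n, ms in edges.items():
        for m in ms:
            if scc_of[n] != scc_of[m]:
                condensed.setdefault(scc_of[n], set()).add(scc_of[m])
    return scc_of, condensed

# Statements 1 and 2 depend on each other (a loop); 3 depends on the loop.
scc_of, small = condense({1: [2], 2: [1], 3: [2]})
print(scc_of[1] == scc_of[2], scc_of[3] != scc_of[1])  # -> True True
```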

    Impact analysis of database schema changes

    When database schemas require change, it is typical to predict the effects of the change, first to gauge whether the change is worth the expense, and second to determine what must be reconciled once the change has taken place. Current techniques for predicting the effects of schema changes upon applications that use the database can be expensive and error-prone, making the change process expensive and difficult. Our thesis is that an automated approach for predicting these effects, known as an impact analysis, can create a more informed schema change process, allowing stakeholders to obtain beneficial information at lower cost than current industrial practice. This is an interesting research problem because modern data-access practices make it difficult to create an automated analysis that can identify the dependencies between applications and the database schema. In this dissertation we describe a novel analysis that overcomes these difficulties. We present a novel analysis for extracting potential database queries from a program, called query analysis. This query analysis builds upon related work, satisfying the additional requirements that we identify for impact analysis. The impacts of a schema change can be predicted by analysing the results of query analysis, using a process we call impact calculation. We describe impact calculation in detail and show how it can be practically and efficiently implemented. Because of the level of accuracy required by our query analysis, the analysis can become expensive, so we describe existing and novel approaches for maintaining an efficient and computationally tractable analysis. We describe a practical and efficient prototype implementation of our schema change impact analysis, called SUITE. We describe how SUITE was used to evaluate our thesis, using a historical case study of a large commercial software project. The results of this case study show that our impact analysis is feasible for large commercial software applications, and likely to be useful in real-world software development
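The essence of impact calculation can be illustrated as a set intersection between the schema elements each extracted query may touch and the elements altered by the change. This is only a toy sketch; SUITE's actual analysis is far more sophisticated, and all names below are hypothetical:

```python
# Illustrative sketch of impact calculation: given the set of (table, column)
# pairs each extracted query may reference (the output of a query analysis),
# report the program locations potentially impacted by a schema change.

def impacted_locations(query_refs, changed):
    """query_refs: {location: {(table, column), ...}};
    changed: set of (table, column) pairs altered by the schema change."""
    return {loc for loc, refs in query_refs.items() if refs & changed}

query_refs = {
    "OrderDao.load": {("orders", "id"), ("orders", "total")},
    "UserDao.load":  {("users", "id"), ("users", "name")},
}
change = {("orders", "total")}  # e.g. the column is renamed or retyped
print(sorted(impacted_locations(query_refs, change)))  # -> ['OrderDao.load']
```

Because query analysis must over-approximate the queries a program can issue, the reported set is conservative: it may contain false positives but should not miss genuinely impacted locations.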

    Dependence Communities in Source Code

    Dependence between components in natural systems is a well-studied phenomenon in the form of biological and social networks. The concept of community structure arises from the analysis of social networks and has successfully been applied to complex networks in other fields such as biology, physics and computing. We provide empirical evidence that dependence between statements in source code gives rise to community structure. This leads to the introduction of the concept of dependence communities in software, and we provide evidence that they reflect the semantic concerns of a program. Current definitions of slice-based cohesion and coupling metrics are not defined for procedures that lack clearly defined output variables, and definitions of output variable vary from study to study. We solve these problems by introducing corresponding new, more efficient forms of slice-based metrics in terms of maximal slices. We show that there is a strong correlation between these new metrics and the old metrics computed using output variables. We also investigate dependence clusters, which are closely related to dependence communities. In an empirical study using definitions of dependence clusters from previous studies, we show that, while programs do contain large dependence clusters, over 75% of these are not ‘true’ dependence clusters. We bring together the main elements of the thesis in a study of software quality, investigating their interrelated nature. We show that procedures that are members of multiple communities have low cohesion, programs with higher coupling have larger dependence communities, programs with large dependence clusters also have large dependence communities, and programs with high modularity have low coupling. Dependence communities and maximal-slice-based metrics have many potential applications, including program comprehension, maintenance, debugging, refactoring, testing and software protection
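A maximal-slice-based cohesion metric in the spirit of "tightness" can be sketched as the fraction of a procedure's statements shared by all of its maximal slices. The thesis' precise definitions may differ; this is an illustrative approximation:

```python
# Sketch of a slice-based cohesion metric: statements common to every
# maximal slice of a procedure, as a fraction of the procedure's length.
# A high value suggests the procedure's computations are tightly related.

def tightness(proc_len, slices):
    """slices: list of statement-number sets, one per maximal slice."""
    if not slices:
        return 0.0
    common = set.intersection(*slices)
    return len(common) / proc_len

# A 6-statement procedure with two maximal slices sharing 3 statements.
s1 = {1, 2, 3, 4}
s2 = {1, 2, 3, 6}
print(tightness(6, [s1, s2]))  # -> 0.5
```

Defining the metric over maximal slices sidesteps the question of which variables count as outputs, since no output variable needs to be chosen at all.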

    Prioritizing Program Elements: A Pre-testing Effort To Improve Software Quality

    Test effort prioritization is a powerful technique that enables the tester to utilize test resources effectively by streamlining the test effort. The distribution of test effort is important to a test organization. We address prioritization-based testing strategies in order to do the best possible job with limited test resources. The proposed techniques benefit the tester when applied under looming deadlines and limited resources. Some parts of a system are more critical and sensitive to bugs than others, and thus should be tested thoroughly. The rationale behind this thesis is to estimate the criticality of various parts of a system and prioritize those parts for testing according to their estimated criticality. We propose several prioritization techniques at different phases of the Software Development Life Cycle (SDLC). Different chapters of the thesis set test priority based on various factors of the system. The purpose is to identify and focus on the critical and strategic areas and to detect the important defects as early as possible, before the product release. Focusing on the critical and strategic areas helps to improve the reliability of the system within the available resources. We present code-based and architecture-based techniques to prioritize the testing tasks. In these techniques, we analyze the criticality of a component within a system using a combination of its internal and external factors. We have conducted a set of experiments on case studies and observed that the proposed techniques are efficient and address the challenge of prioritization. We propose a novel idea of calculating the influence of a component, where influence refers to the contribution or usage of the component at every execution step. This influence value serves as a metric in test effort prioritization.
    We first calculate the influence through static analysis of the source code and then refine our work by calculating it through dynamic analysis. We have experimentally shown that decreasing the reliability of an element with a high influence value drastically increases the failure rate of the system, which is not the case for an element with a low influence value. We estimate the criticality of a component within a system by considering both its internal and external factors, such as influence value, average execution time, structural complexity, severity and business value. We prioritize the components for testing according to their estimated criticality. We have compared our approach with a related approach in which the components were prioritized on the basis of their structural complexity only. From the experimental results, we observed that our approach helps to reduce the failure rate in the operational environment. The consequences of the observed failures were also less severe compared to the related approach. Priority should be established by order of importance or urgency. As the importance of a component may vary at different points of the testing phase, we propose a multi-cycle test effort prioritization approach in which we assign different priorities to the same component at different test cycles. Test effort prioritization at the initial phase of the SDLC has a greater impact than prioritization at a later phase. As the analysis and design stage is critical compared to other stages, detecting and correcting errors at this stage is less costly than at later stages of the SDLC. Designing metrics at this stage helps the test manager make resource-allocation decisions. We propose a technique to estimate the criticality of a use case at the design level. The criticality is computed on the basis of complexity and business value. We evaluated the complexity of a use case analytically through a set of data collected at the design level.
    We experimentally observed that assigning test effort to various use cases according to their estimated criticality improves the reliability of a system under test. Test effort prioritization based on risk is a powerful technique for streamlining the test effort, as the tester can exploit the relationship between risk and testing effort. We propose a technique to estimate the risk associated with various states at the component level and with use case scenarios at the system level. The estimated risks are used to improve resource-allocation decisions. An intermediate graph called the Inter-Component State-Dependence Graph (ISDG) is introduced to obtain the complexity of a component state, which is used for risk estimation. We empirically evaluated the estimated risks and assigned test priority to the components and scenarios within a system according to those risks. A comparative experimental analysis showed that a testing team guided by our technique achieved higher test efficiency than a related approach
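The criticality-based prioritization described above can be sketched as a weighted scoring of the listed factors followed by a sort. The weights and component data below are hypothetical, not values from the thesis:

```python
# Illustrative sketch of criticality-based test prioritization: each
# component's criticality is a weighted combination of the factors the
# thesis lists (influence, average execution time, structural complexity,
# severity, business value), all normalized to [0, 1] here for simplicity.

WEIGHTS = {"influence": 0.3, "exec_time": 0.1, "complexity": 0.2,
           "severity": 0.2, "business_value": 0.2}

def criticality(component):
    return sum(WEIGHTS[f] * component[f] for f in WEIGHTS)

def prioritize(components):
    """Return component names, most critical (test first) to least."""
    return sorted(components,
                  key=lambda name: criticality(components[name]),
                  reverse=True)

components = {
    "Billing": {"influence": 0.9, "exec_time": 0.4, "complexity": 0.7,
                "severity": 0.8, "business_value": 0.9},
    "Logging": {"influence": 0.2, "exec_time": 0.1, "complexity": 0.3,
                "severity": 0.2, "business_value": 0.1},
}
print(prioritize(components))  # -> ['Billing', 'Logging']
```

Under a multi-cycle scheme, the factor values (and hence the ranking) would be recomputed at each test cycle, so the same component can receive different priorities over time.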