19 research outputs found

    Regression Testing of Object-Oriented Software based on Program Slicing

    As software undergoes evolution through a series of changes, it is necessary to validate these changes through regression testing. Regression testing becomes convenient if we can identify the program parts that are likely to be affected by the changes made to the program as part of maintenance activity. We propose a change impact analysis mechanism as an application of slicing. A new slicing method is proposed to decompose a Java program into the affected packages, classes, methods and statements identified with respect to the modification made in the program. The decomposition is based on the hierarchical structure of Java programs. We propose a suitable intermediate representation for Java programs that captures all the possible dependences among program parts. This intermediate representation is used to perform the change impact analysis with our proposed slicing technique and to identify the program parts that are possibly affected by the change. The affected packages, classes, methods, and statements are identified by traversing the intermediate graph, first in the forward direction and then in the backward direction. Based on the change impact analysis results, we propose a regression test selection approach that selects a subset of the existing test suite. The approach maps the decomposed slice (comprising the affected program parts) onto the coverage information of the existing test suite to select the appropriate test cases for regression testing. The selected test cases are well suited for regression testing of the modified program because they execute the affected program parts and thus have a high probability of revealing the associated faults. The regression test selection approach promises to reduce the size of the regression test suite. However, the selected test suite can still be large, and strict timing constraints can prevent execution of all the test cases in the reduced suite. Hence, it is essential to minimize the test suite. Under constrained time and budget, it is difficult for testers to know the minimum number of test cases to choose while still ensuring acceptable software quality. We therefore introduce novel approaches that formulate test suite minimization as an integer linear programming problem and yield optimal results. Existing research on software metrics has shown cohesion metrics to be good indicators of fault-proneness, but none of the proposed metrics is based on change impact analysis. We propose a change-based cohesion measure to compute the cohesiveness of the affected program parts. These cohesion values form the minimization criteria for minimizing the test suite. We formulate an integer linear programming model based on the cohesion values to optimize the test suite and obtain optimal results. Software testers constantly seek to increase the likelihood of fault detection. Regression test case prioritization promises to detect faults early in the retesting process. Thus, finding an optimal order of execution for the selected regression test cases maximizes the fault detection rate at lower time and cost. We propose a novel approach to identify a prioritized order of test cases in a given selected regression test suite that has a high fault-exposing capability.
Some test cases execute program parts that are more error-prone and therefore have a greater chance of detecting faults early during the testing process. We identify the fault-proneness of the affected program parts by computing their coupling values. We propose a new coupling metric for the affected program parts, named affected change coupling, based on which the test cases are prioritized. Our analysis shows that test cases executing affected program parts with high affected change coupling have a higher potential of revealing faults early than the other test cases in the test suite. Testing becomes convenient if we identify the changes that require rigorous retesting instead of focusing equally on retesting all changes. We therefore propose an approach to save retesting effort and cost by identifying and quantifying the impact of crosscutting changes on other parts of the program. We propose metrics in this regard that help testers make early decisions about what to test more and what to test less.
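
    The abstract does not give the exact integer linear programming formulation, but the underlying idea can be illustrated with a minimal set-cover-style sketch: binary variables decide which test cases to keep, every affected program part must stay covered, and an illustrative per-test weight stands in for the change-based cohesion criteria. All names, weights and coverage data below are hypothetical.

```python
# Minimal sketch of test-suite minimization as an ILP (set-cover style).
# Assumptions: `coverage` maps each test to the affected program parts it
# executes, and `weight` is an illustrative cost per test (the thesis derives
# its criteria from change-based cohesion values; the numbers here are made up).
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

coverage = {
    "t1": {"A.foo", "A.bar"},
    "t2": {"A.bar", "B.baz"},
    "t3": {"B.baz"},
}
weight = {"t1": 1.0, "t2": 1.0, "t3": 0.6}          # hypothetical per-test cost
affected_parts = set().union(*coverage.values())     # parts impacted by the change

prob = LpProblem("minimize_regression_suite", LpMinimize)
select = {t: LpVariable(f"select_{t}", cat=LpBinary) for t in coverage}

# Objective: minimize the total (weighted) number of selected tests.
prob += lpSum(weight[t] * select[t] for t in coverage)

# Constraint: every affected program part must be executed by at least one test.
for part in affected_parts:
    prob += lpSum(select[t] for t in coverage if part in coverage[t]) >= 1

prob.solve()
print({t: int(value(select[t])) for t in coverage})
```

    The same skeleton accommodates prioritization-style objectives by changing the weights, which is where a cohesion- or coupling-based criterion would plug in.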

    Data and Process Mining Applications on a Multi-Cell Factory Automation Testbed

    This paper presents applications of both data mining and process mining in a factory automation testbed. It concentrates mainly on the Manufacturing Execution System (MES) level of the production hierarchy. Unexpected failures can lead to substantial financial losses or irrecoverable damage. Predictive maintenance techniques, both active and passive, have shown high potential for preventing such losses. Condition monitoring of target equipment against defined thresholds forms the basis of the prediction. However, the monitored parameters must be independent of environmental changes; for example, the vibration of transportation equipment such as conveyor systems varies with workload. This work proposes and demonstrates an approach to identify incipient faults of transportation systems in discrete manufacturing settings. The method correlates the energy consumption of the described devices with their workloads. At runtime, machine learning is used to classify the input energy data into two pattern descriptions. Consecutive mismatches between the output of the classifier and the workloads observed in real time indicate the possibility of incipient failure at the device level. Currently, as a result of the high interaction between information systems and operational processes, and due to the increasing number of embedded heterogeneous resources, information systems generate massive amounts of unstructured events. Organizations struggle to deal with such large volumes of unstructured data. Process mining, as a new research area, has shown strong capabilities to overcome such problems. It applies both process modelling and data mining techniques to extract knowledge from data by discovering models from event logs. Although process mining is mostly regarded as a business-oriented technique complementary to Business Process Management (BPM) systems, in this paper the capabilities of process mining are exploited on a factory automation testbed. Multiple perspectives of process mining are applied to the event logs produced by deploying a Service Oriented Architecture through Web Services in a real multi-robot factory automation industrial testbed, originally used for the assembly of mobile phones.
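
    As a rough illustration of the fault-detection idea described above, the sketch below trains a classifier to recognise workload classes from energy-consumption features and raises an alarm after several consecutive mismatches between the classifier's output and the workload observed online. The feature layout, classifier choice and mismatch threshold are assumptions, not details from the paper.

```python
# Sketch: flag a possible incipient conveyor fault when the energy-based
# workload classifier repeatedly disagrees with the workload observed online.
# Feature layout, classifier choice, and the mismatch threshold are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Training data: energy-consumption features per cycle, labelled 0 = low load, 1 = high load.
X_train = rng.normal(loc=[[1.0, 0.5]] * 50 + [[2.0, 1.2]] * 50, scale=0.1)
y_train = np.array([0] * 50 + [1] * 50)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

MISMATCH_LIMIT = 3   # consecutive disagreements before raising an alarm (illustrative)
mismatches = 0

def check_cycle(energy_features, observed_workload):
    """Compare the classifier's workload estimate with the observed workload."""
    global mismatches
    predicted = clf.predict(np.asarray(energy_features).reshape(1, -1))[0]
    mismatches = mismatches + 1 if predicted != observed_workload else 0
    if mismatches >= MISMATCH_LIMIT:
        print("possible incipient fault: energy no longer matches workload")

# Example stream: a device drawing high-load energy while only lightly loaded.
for _ in range(4):
    check_cycle([2.1, 1.3], observed_workload=0)
```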

    Impact Analysis for AspectJ - A Critical Analysis with a Tool-Supported Approach

    Aspect-Oriented Programming (AOP) has been promoted as a solution for the modularization problems known in the literature as the tyranny of the dominant decomposition. However, when analyzing AOP languages it can be doubted that uncontrolled AOP is indeed a silver bullet. The contributions of the work presented in this thesis are twofold. First, we critically analyze AOP language constructs and their effects on program semantics to sensitize programmers and researchers to the resulting problems. We further demonstrate that AOP, as available in AspectJ and similar languages, can easily result in less understandable, less evolvable, and thus error-prone code, quite the opposite of its claims. Second, we examine how tools relying on both static and dynamic program analysis can help to detect problematic usage of aspect-oriented constructs. We propose to use change impact analysis techniques both to automatically determine the impact of aspects and to deal with the evolution of AOP systems. We further introduce an analysis technique to detect potential semantic issues related to undefined advice precedence. The thesis concludes with an overview of available open source AspectJ systems and an assessment of aspect-oriented programming considering both the fundamentals of software engineering and the contents of this thesis.

    Spatial pattern recognition for crop-livestock systems using multispectral data

    Within the field of pattern recognition (PR), a very active area is the clustering and classification of multispectral data, which aims to assign the correct ground-cover class to a reflectance or radiance signal. In general, the complexity of the problem stems from incorporating spatial characteristics that complement the nonlinearities of land surface process heterogeneity, remote sensing effects and multispectral features. The present research describes the application of machine learning methods to accomplish this task by inducing a relationship between the spectral response of farms' land cover and their farming system typology from a representative set of instances. Such methodologies are not traditionally used in crop-livestock studies. Nevertheless, this study shows that their application leads to simple and theoretically robust classification models. The study covered the following phases: a) geovisualization of crop-livestock systems; b) feature extraction of both multispectral and attributive data; and c) supervised farm classification. The first is a complementary methodology to represent the spatial feature intensity of farming systems in geographical space. The second belongs to the unsupervised learning field and mainly involves an appropriate description of the input data in a lower-dimensional space. The last is a method based on statistical learning theory, which has been successfully applied to supervised classification problems and to generating models described by implicit functions. In this research the performance of various kernel methods applied to the representation and classification of crop-livestock systems described by their multispectral response is studied and compared. The data from those systems include linearly and nonlinearly separable groups that were labelled using multidimensional attributive data. Geovisualization findings show the existence of two well-defined farm populations within the whole study area, and three subgroups within the Guarico section. The existence of these groups was confirmed by both hierarchical and kernel clustering methods, and crop-livestock system instances were segmented and labelled into farm typologies based on: a) milk and meat production; b) reproductive management; c) stocking rate; and d) crop-forage-forest land use. The minimum set of labelled examples needed to properly train the kernel machine was 20 instances. Models induced from training data sets using kernel machines were generally better than those from hierarchical clustering methodologies. However, the size of the training data set remains one of the main difficulties to be overcome before this technique can be applied more generally in farming system studies. These results have important implications for large-scale monitoring of crop-livestock systems, particularly for establishing balanced policy decisions, formulating intervention plans, and properly describing target typologies so that investment efforts can be focused more closely on local issues.
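
    The two learning steps described above can be sketched roughly as follows: a kernel-based projection of the multispectral features into a lower-dimensional space, followed by an RBF-kernel support vector classifier trained on a small labelled subset of farms (around the 20 instances reported as the minimum). The data, band count, kernels and typology labels below are synthetic placeholders, not values from the study.

```python
# Sketch: kernel PCA on multispectral band features, then an RBF-kernel SVM
# trained on a small labelled subset of farms. All data here are synthetic;
# band counts, kernels and the typology labels are illustrative assumptions.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# 200 farms x 6 spectral bands (placeholder reflectance values).
reflectance = rng.random((200, 6))

# Unsupervised step: non-linear feature extraction into a lower-dimensional space.
embedding = KernelPCA(n_components=3, kernel="rbf", gamma=2.0).fit_transform(reflectance)

# Supervised step: train on 20 labelled farms (the minimum the study reports).
labelled_idx = rng.choice(len(embedding), size=20, replace=False)
labels = np.array([0] * 10 + [1] * 10)        # placeholder farm typologies
clf = SVC(kernel="rbf", C=1.0).fit(embedding[labelled_idx], labels)

# Classify the remaining farms into typologies.
predicted_typology = clf.predict(embedding)
print(np.bincount(predicted_typology))
```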

    Empowering Materials Processing and Performance from Data and AI

    Third millennium engineering addresses new challenges in materials science and engineering. In particular, advances in materials engineering combined with advances in data acquisition, processing and mining as well as artificial intelligence allow for new ways of thinking in designing new materials and products. This also gives rise to new paradigms for bridging raw material data and processing to the induced properties and performance. The present topical issue is a compilation of contributions on novel ideas and concepts addressing several key challenges using data and artificial intelligence, such as:
    - proposing new techniques for data generation and data mining;
    - proposing new techniques for visualizing, classifying, modeling, extracting knowledge from, explaining and certifying data and data-driven models;
    - processing data to create data-driven models from scratch when other models are absent, too complex or too poor to make valuable predictions;
    - processing data to enhance existing physics-based models, improving the quality of their predictions and, at the same time, making the data smarter; and
    - processing data to create data-driven enrichment of existing models, within a hybrid paradigm, when physics-based models show their limits.
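
    One of the challenges listed above, the data-driven enrichment of physics-based models within a hybrid paradigm, is often approached by learning a correction on the residuals of the physics model. The sketch below illustrates that general pattern only; the physics model, data and regressor are placeholders and do not come from any contribution in the issue.

```python
# Sketch of a hybrid modelling pattern: keep a physics-based model and learn a
# data-driven correction on its residuals. The "physics" model, the data and the
# regressor are placeholders chosen only to illustrate the pattern.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def physics_model(x):
    """Placeholder physics-based prediction (e.g., a simplified constitutive law)."""
    return 2.0 * x[:, 0]

rng = np.random.default_rng(2)
X = rng.random((300, 3))
# Ground truth contains an effect the physics model misses.
y = 2.0 * X[:, 0] + 0.5 * np.sin(6.0 * X[:, 1]) + 0.01 * rng.standard_normal(300)

residuals = y - physics_model(X)
correction = GradientBoostingRegressor().fit(X, residuals)

def hybrid_predict(x):
    """Physics prediction enriched with the learned residual correction."""
    return physics_model(x) + correction.predict(x)

print(np.mean((hybrid_predict(X) - y) ** 2))   # hybrid error vs. raw physics model
```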

    Volatility modeling and limit-order book analytics with high-frequency data

    The vast amount of information characterizing today's high-frequency financial datasets poses both opportunities and challenges. Among the opportunities, existing methods can be employed to provide new insights and a better understanding of the market's complexity under different perspectives, while new methods, capable of fully exploiting all the information embedded in high-frequency datasets and of addressing new issues, can be devised. The challenges are driven by data complexity: limit-order book datasets consist of hundreds of thousands of events that interact with each other and affect the event-flow dynamics. This dissertation aims at improving our understanding of the effective applicability of machine learning methods for mid-price movement prediction, of the nature of long-range autocorrelations in financial time series, and of the econometric modeling and forecasting of volatility dynamics in high-frequency settings. Our results show that simple machine learning methods can be successfully employed for mid-price forecasting; moreover, when adopting methods that rely on the natural tensor representation of financial time series, the inter-temporal connections captured by this convenient representation are shown to be relevant for predicting future mid-price movements. Furthermore, by using ultra-high-frequency order book data over a considerably long period, a quantitative characterization of the long-range autocorrelation is achieved by extracting the so-called scaling exponent. By jointly considering duration series of both inter- and cross-events, for different stocks, and separately for the bid and ask side, long-range autocorrelations are found to be ubiquitous and qualitatively homogeneous. With respect to the scaling exponent, evidence of three cross-overs is found, and complex heterogeneous associations with a number of relevant economic variables are discussed. Lastly, the use of copulas as the main ingredient for modeling and forecasting realized measures of volatility is explored. The modeling framework resembles, but generalizes, the well-known Heterogeneous Autoregressive (HAR) model. In-sample and out-of-sample analyses, based on several performance measures, statistical tests, and robustness checks, show forecasting improvements of copula-based modeling over the HAR benchmark.
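
    For reference, the baseline HAR specification that the copula-based models generalize regresses next-day realized variance on its daily value and on its weekly (5-day) and monthly (22-day) trailing averages. The sketch below estimates that baseline by ordinary least squares on synthetic data; the copula-based extension itself is not reproduced.

```python
# Sketch of the baseline HAR regression the thesis generalizes:
# RV_{t+1} = b0 + b_d RV_t + b_w RV_t^(w) + b_m RV_t^(m) + error,
# where RV^(w) and RV^(m) are 5- and 22-day averages of realized variance.
# Data here are synthetic; the copula-based extension is not shown.
import numpy as np

rng = np.random.default_rng(3)
rv = np.abs(rng.standard_normal(1000)) * 0.01        # placeholder realized variances

def trailing_mean(x, window):
    """Trailing mean over `window` observations ending at each index."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

m = 22                                               # longest lookback (monthly)
rv_d = rv[m - 1:-1]                                  # daily component RV_t
rv_w = trailing_mean(rv, 5)[m - 5:-1]                # weekly component RV_t^(w)
rv_m = trailing_mean(rv, 22)[:-1]                    # monthly component RV_t^(m)
target = rv[m:]                                      # next-day RV_{t+1}

X = np.column_stack([np.ones_like(rv_d), rv_d, rv_w, rv_m])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(dict(zip(["b0", "b_daily", "b_weekly", "b_monthly"], beta.round(4))))
```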

    Uncertainty in Artificial Intelligence: Proceedings of the Thirty-Fourth Conference


    A Re-engineering approach for software systems complying with the utilisation of ubiquitous computing technologies.

    The evident progress of ubiquitous technologies has introduced new features that software systems can support. Several of the ubiquitous technologies available today are regarded as fundamental elements of many software applications in various domains. The utilisation of ubiquitous technologies has an apparent impact on business processes that can grant organisations a competitive advantage and improve their productivity. A change in the business processes of such organisations typically leads to a change in the underlying software systems. In addressing this need for change, the research focuses on establishing a general framework and methodology to facilitate the reengineering of software systems so that they can incorporate the new features introduced by the employment of ubiquitous technologies. Although this thesis aims to be general and not limited to a specific programming language or software development approach, the focus is on object-oriented software. The reengineering framework follows a systematic step-based approach, with greater focus on the reverse engineering aspect. The four stages of the framework are: program understanding, additional-requirement engineering, integration, and finally the testing and operation stage. In the first stage, the proposed framework treats the source code as the starting point for understanding the system, using a static-analysis-based method. The second stage is concerned with eliciting the user functional requirements that result from the introduction of ubiquitous technologies. In the third stage, the goal is to integrate the system's components and hardware handlers using a developed integration algorithm and available integration techniques. In the fourth and final stage, which is discussed only in general terms in this thesis, the reengineered system is tested and put into operation. The approach is demonstrated using a case study in Java to show that it is feasible and promising in its domain. Conclusions are drawn based on the analysis, and further research directions are discussed at the end of the study.
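
    As a toy illustration of the first stage (program understanding from source code via static analysis), the sketch below scans Java files and records which declared classes mention which others, yielding a crude dependency map. It is a regex-based simplification for illustration only and is not the analysis method developed in the thesis; the `src` directory name is assumed.

```python
# Toy sketch of the "program understanding" idea: statically scan Java sources
# and record which classes mention which other declared classes. This
# regex-based pass is only an illustration, not the thesis's analysis method.
import re
from pathlib import Path
from collections import defaultdict

CLASS_DECL = re.compile(r"\bclass\s+([A-Z]\w*)")

def build_dependency_map(source_dir):
    sources = {p: p.read_text(encoding="utf-8") for p in Path(source_dir).rglob("*.java")}
    declared = {name for text in sources.values() for name in CLASS_DECL.findall(text)}
    deps = defaultdict(set)
    for path, text in sources.items():
        for owner in CLASS_DECL.findall(text):
            # A class "depends on" any other declared class mentioned in its file.
            deps[owner] |= {c for c in declared if c != owner and re.search(rf"\b{c}\b", text)}
    return deps

if __name__ == "__main__":
    for cls, used in build_dependency_map("src").items():
        print(cls, "->", sorted(used))
```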