53 research outputs found

    Software restructuring: understanding longitudinal architectural changes and refactoring

    The complexity of software systems increases as the systems evolve. As degradation of a system's structure accumulates, maintenance effort and defect-proneness tend to increase. In addition, developers often employ sub-optimal solutions to achieve short-term goals, a phenomenon recently termed technical debt. In this context, software restructuring serves as a way to alleviate and/or prevent structural degradation. Software restructuring is usually performed at either a higher or a lower level of granularity: the former denotes broad changes to the system's architecture, while the latter denotes refactorings applied to a few localised code elements. Although tools that assist architectural change and refactoring are available, there is still no evidence that these approaches are widely adopted by practitioners. Hence, an understanding of how developers perform architectural changes and refactoring on a day-to-day basis, in the context of the software development processes they adopt, is necessary. Current software development is iterative and incremental, with short cycles of development and release, and tools and processes that enable this development model, such as continuous integration and code review, are widespread among software engineering practitioners. This thesis therefore investigates how developers perform longitudinal and incremental architectural changes and refactoring during code review, through a wide range of empirical studies that consider different moments of the development lifecycle, different approaches, different automated tools and different analysis mechanisms. Finally, the observations and conclusions drawn from these empirical investigations extend the existing knowledge of how developers restructure software systems, so that future studies can leverage this knowledge to propose new tools and approaches that better fit developers' working routines and development processes.

    A Modular design framework for Lab-On-a-Chips

    This research presents a modular design framework for designing Lab-On-a-Chip (LoC) devices. The framework helps researchers focus on their research strengths without needing to learn the details of LoC design, and allows them to reuse existing LoC designs.

    Evolutionary robotics in high altitude wind energy applications

    Recent years have seen the development of wind energy conversion systems that can exploit the superior wind resource that exists at altitudes beyond the reach of current wind turbine technology. One class of these systems incorporates a flying wing, tethered to the ground, which drives a winch at ground level. The wings often resemble sports kites, being composed of a combination of fabric and stiffening elements. Such wings are subject to load-dependent deformation, which makes them particularly difficult to model and control. Here we apply the techniques of evolutionary robotics, i.e. the evolution of neural network controllers using genetic algorithms, to the task of controlling a steerable kite. We introduce a multibody kite simulation that is used in an evolutionary process in which the kite is subject to deformation. We demonstrate how discrete-time recurrent neural networks that are evolved to maximise line tension fly the kite in repeated looping trajectories similar to those seen using other methods. We show that these controllers are robust to limited environmental variation but show poor generalisation and occasional failure, even after extended evolution. We show that continuous-time recurrent neural networks (CTRNNs) can be evolved that are capable of flying appropriate repeated trajectories even when the lengths of the flying lines are changing. We also show that CTRNNs can be evolved that stabilise kites with a wide range of physical attributes at a given position in the sky, and we systematically add noise to the simulated task in order to maximise the transferability of the behaviour to a real-world system. We demonstrate how the difficulty of the task must be increased in small increments during the evolutionary process to deal with this extreme variability. We describe the development of a real-world testing platform on which the evolved neurocontrollers can be tested.
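
    The abstract does not give the network equations or parameters used in the thesis; as a rough illustration of the CTRNN dynamics it refers to, the following is a minimal Euler-integrated sketch in Python. Network size, parameter ranges and the kite-sensor usage at the end are illustrative assumptions, not details from the thesis.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CTRNN:
    """Continuous-time recurrent neural network, integrated with Euler steps."""

    def __init__(self, n, dt=0.01, seed=None):
        rng = np.random.default_rng(seed)
        self.dt = dt
        self.y = np.zeros(n)                      # neuron states
        self.tau = rng.uniform(0.1, 2.0, n)       # illustrative time constants
        self.bias = rng.uniform(-1.0, 1.0, n)
        self.w = rng.uniform(-5.0, 5.0, (n, n))   # recurrent weights (evolved
                                                  # by a genetic algorithm in
                                                  # the approach described)

    def step(self, external_input):
        """One Euler step of tau_i dy_i/dt = -y_i + sum_j w_ij s(y_j+b_j) + I_i."""
        activation = sigmoid(self.y + self.bias)
        dy = (-self.y + self.w @ activation + external_input) / self.tau
        self.y = self.y + self.dt * dy
        return sigmoid(self.y + self.bias)        # network outputs

# Hypothetical usage: map kite sensor readings to a steering command each tick.
net = CTRNN(n=5, seed=0)
sensors = np.array([0.2, -0.4, 0.1, 0.0, 0.3])    # padded to network size
steering = net.step(sensors)[0]                   # read one neuron as the output
```

    In an evolutionary-robotics setting, a genetic algorithm would search over the weight, bias and time-constant vectors, scoring each candidate network by a fitness measure such as mean line tension over a simulated flight.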

    State-based testing - a new method for testing object-oriented programs

    State-based testing is a new method for testing object-oriented programs. The information stored in the state of an object is of two kinds: control information and data storage. Transitions in the control information are modelled as a finite-state automaton. Every operation of the class under test is considered as a mapping from starting states to finishing states, dependent upon the parameters passed. The possible parameter values are analysed for significant values which, combined with the invocation of an operation, can be used to represent stimuli applied to the object under test. State-based testing validates the expected transformations that can occur within a class. Classes are modelled using physical values assigned to the attributes of the class. The range of physical values is reduced by a technique based on equivalence partitioning. This approach has a number of advantages over conceptual modelling of a class, in particular the ease of manipulation of physical values and the independence of each operation from the other operations provided by an object. When used in conjunction with other techniques, it provides an adequate level of validation for object-oriented programs. A suite of prototype tools that automates the generation of state-based test cases is outlined. These tools are used in four case studies that are presented as an evaluation of the technique. The code coverage achieved in each case study is analysed for the factors that affect the effectiveness of the state-based test suite. Additionally, errors were seeded into two of the classes to determine the effectiveness of the technique for detecting errors on paths executed by the test suite; 92.5% of the seeded errors were detected by the state-based test suite.
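
    As a hedged illustration of the method described above (not the thesis's own tools or case studies), the sketch below applies the idea to a hypothetical bounded stack: concrete attribute values are abstracted into control states, one representative value is chosen per equivalence partition, and every (state, operation) stimulus is checked against the expected finite-state automaton.

```python
from itertools import product

CAPACITY = 2  # a capacity of 2 keeps the example automaton deterministic

def abstract(items):
    """Map concrete attribute values onto a control-information state."""
    return {0: "EMPTY", CAPACITY: "FULL"}.get(len(items), "PARTIAL")

# Expected finite-state automaton: (starting state, operation) -> finishing state.
EXPECTED = {
    ("EMPTY", "push"): "PARTIAL", ("PARTIAL", "push"): "FULL",
    ("FULL", "push"): "FULL",     # push on a full stack is rejected
    ("EMPTY", "pop"): "EMPTY",    # pop on an empty stack is rejected
    ("PARTIAL", "pop"): "EMPTY",  ("FULL", "pop"): "PARTIAL",
}

# One representative concrete value per equivalence partition.
REPRESENTATIVES = {"EMPTY": [], "PARTIAL": [1], "FULL": [1, 2]}

def apply_op(items, op):
    """The (hypothetical) class under test: a bounded stack."""
    items = list(items)
    if op == "push" and len(items) < CAPACITY:
        items.append(0)
    elif op == "pop" and items:
        items.pop()
    return items

def run_state_based_tests():
    for state, op in product(REPRESENTATIVES, ("push", "pop")):
        finish = abstract(apply_op(REPRESENTATIVES[state], op))
        assert finish == EXPECTED[(state, op)], (state, op, finish)

run_state_based_tests()   # passes silently; a wrong transition would fail
```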

    Automated model-based spreadsheet debugging

    Spreadsheets are interactive data organization and calculation programs that are developed in spreadsheet environments like Microsoft Excel or LibreOffice Calc. They are probably the most successful example of end-user-developed software and are utilized in almost all branches and at all levels of companies. Although spreadsheets often support important decision-making processes, they are, like all software, prone to error. In several cases, faults in spreadsheets have caused severe financial losses. Spreadsheet developers are usually not educated in the practices of software development. Because they are unfamiliar with quality-control methods like systematic testing or debugging, they have to be supported by the spreadsheet environment itself in searching for faults in their calculations, in order to ensure the correctness and a better overall quality of the developed spreadsheets. This thesis by publication introduces several approaches to locating faults in spreadsheets. The presented approaches are based on the principles of Model-Based Diagnosis (MBD), a technique for finding the possible reasons why a system does not behave as expected. Several new algorithmic enhancements of the general MBD approach are combined in this thesis to allow spreadsheet users to debug their spreadsheets and to efficiently find the reasons for observed unexpected output values. To ensure seamless integration into the environment that is well known to spreadsheet developers, the presented approaches are implemented as an extension for Microsoft Excel. The first part of the thesis outlines the different algorithmic approaches introduced in this thesis and summarizes the improvements achieved over the general MBD approach. The second part, the appendix, presents a selection of the author's publications. These publications comprise (a) a survey of the research in the area of spreadsheet quality assurance, (b) a work describing how to adapt the general MBD approach to spreadsheets, (c) two new algorithmic improvements of the general technique that speed up the calculation of the possible reasons for an observed fault, (d) a new concept and algorithm to efficiently determine questions that a user can be asked during debugging in order to reduce the number of possible reasons for the observed unexpected output values, and (e) a new method to find faults in a set of spreadsheets, together with a new corpus of real-world spreadsheets containing faults that can be used to evaluate the proposed debugging approaches.
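
    The thesis's algorithms are considerably more efficient than what follows; this is only a brute-force sketch of the core MBD idea as applied to a hypothetical toy spreadsheet. Candidate sets of "abnormal" formula cells are enumerated in increasing size, and a set is a diagnosis if some assignment of values to its cells reproduces the user's expected output. The sheet, formulas and value domain are invented for illustration.

```python
from itertools import combinations, product

# Hypothetical toy spreadsheet: input cells plus formula cells (as callables),
# listed in topological (evaluation) order.
INPUTS = {"A1": 2, "A2": 3}
FORMULAS = {
    "B1": lambda v: v["A1"] + v["A2"],   # buggy: should have been A1 * A2
    "B2": lambda v: v["A1"] * 2,
    "C1": lambda v: v["B1"] + v["B2"],
}
OBSERVED = {"C1": 10}   # the user expected 2*3 + 2*2 = 10; the sheet shows 9
DOMAIN = range(21)      # brute-force value domain for abnormal cells

def evaluate(abnormal, guesses):
    """Evaluate all cells, letting 'abnormal' cells take guessed values."""
    v = dict(INPUTS)
    for cell, formula in FORMULAS.items():
        v[cell] = guesses[cell] if cell in abnormal else formula(v)
    return v

def minimal_diagnoses(max_size=2):
    """Smallest sets of cells whose abnormality explains the observation."""
    for size in range(max_size + 1):
        found = []
        for abnormal in combinations(FORMULAS, size):
            for values in product(DOMAIN, repeat=size):
                v = evaluate(abnormal, dict(zip(abnormal, values)))
                if all(v[c] == x for c, x in OBSERVED.items()):
                    found.append(set(abnormal))
                    break                # one witness per candidate suffices
        if found:
            return found                 # report minimum-cardinality diagnoses
    return []

print(minimal_diagnoses())   # [{'B1'}, {'B2'}, {'C1'}]: the single-fault candidates
```

    Asking the user about intermediate cells (here, whether B1 should be 6 or B2 should be 5) is exactly what prunes this candidate list, which motivates the question-selection algorithm described in publication (d).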

    Investigations into controllers for adaptive autonomous agents based on artificial neural networks.

    This thesis reports the development and study of novel architectures for the simulation of adaptive behaviour based on artificial neural networks. There are two distinct levels of enquiry. At the primary level, the initial aim was to design and implement a unified architecture integrating sensorimotor learning and overall control, intended to overcome shortcomings of typical behaviour-based approaches in reactive control settings. This was achieved in two stages. Initially, feedforward neural networks were used at the sensorimotor level of a modular architecture and overall control was provided by an algorithm. The algorithm was then replaced by a recurrent neural network. For training, a form of reinforcement learning was used. This posed an intriguing composite of the well-known action selection and credit assignment problems. The solution was demonstrated in two sets of simulation studies involving variants of each architecture. These studies also showed, firstly, that the expected advantages over the standard behaviour-based approach were realised and, secondly, that the new integrated architecture preserved these advantages, with the added value of a unified control approach. The secondary level of enquiry addressed the more foundational question of whether the choice of processing mechanism is critical if the simulation of adaptive behaviour is to progress much beyond the reactive stage in more than a trivial sense. It proceeded by way of a critique of the standard behaviour-based approach, leading to a positive assessment of the potential for recurrent neural networks to fill such a role. The findings were used to inform further investigations at the primary level of enquiry, based on a framework for the simulation of delayed-response learning using supervised learning techniques. A further new architecture, based on a second-order recurrent neural network, was designed for this set of studies and then compared with existing architectures. Some interesting results are presented that indicate the appropriateness of the design and the potential of the approach, though limitations in the long run are not discounted.
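
    The thesis does not describe its architecture at this level of detail; as a loose sketch of the kind of unified design summarised above, the code below combines hypothetical feedforward sensorimotor modules with a recurrent control layer that gates their outputs. The random weights stand in for what reinforcement learning would shape, and the module count and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def module(n_in, n_out):
    """A feedforward sensorimotor module; random weights stand in for learned ones."""
    w = rng.normal(0.0, 0.5, (n_out, n_in))
    return lambda x: np.tanh(w @ x)

# Sensorimotor level: one feedforward module per behaviour (e.g. avoid, seek, wander).
modules = [module(4, 2) for _ in range(3)]

# Overall control: a recurrent layer whose hidden state persists across
# time steps and arbitrates among the modules (action selection).
w_in = rng.normal(0.0, 0.5, (3, 4))
w_rec = rng.normal(0.0, 0.5, (3, 3))
h = np.zeros(3)

def control_step(sensors):
    global h
    h = np.tanh(w_in @ sensors + w_rec @ h)      # recurrent controller state
    gates = np.exp(h) / np.exp(h).sum()          # softmax weighting over modules
    proposals = [m(sensors) for m in modules]
    return sum(g * p for g, p in zip(gates, proposals))   # blended motor command

motor = control_step(rng.uniform(-1.0, 1.0, 4))
```

    The persistent hidden state is what lets such a controller move beyond purely reactive behaviour, which is the crux of the thesis's secondary level of enquiry.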

    The formal generation of models for scientific simulations

    It is now commonplace for complex physical systems such as the climate system to be studied indirectly via computer simulations. Often, the equations that govern the underlying physical system are known, but detailed or high-resolution computer models of these equations (“governing models”) are not practical because of limited computational resources, so the models are simplified or “parameterised”. However, if the output of a simplified model is to lead to conclusions about a physical system, we must prove that these outputs reflect reality and are not merely artifacts of the simplifications. At present, simplifications are usually based on informal, ad hoc methods, making it difficult or impossible to provide such a proof rigorously. Here we introduce a set of formal methods for generating computer models. We present a newly developed computer program, “iGen”, which syntactically analyses the computer code of a high-resolution governing model and, without executing it, automatically produces a much faster, simplified model with provable bounds on its error relative to the governing model. These bounds allow scientists to rigorously distinguish real-world phenomena from artifacts in subsequent numerical experiments using the simplified model. Using simple physical systems as examples, we illustrate that iGen produces simplified models that typically execute orders of magnitude faster than their governing models. Finally, iGen is used to generate a model of entrainment in marine stratocumulus. The resulting simplified model is appropriate for use as part of a parameterisation of marine stratocumulus in a Global Climate Model.
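
    iGen's syntactic analysis is far more sophisticated than the following; as a minimal illustration of how a provable error bound between a governing model and its simplification can be obtained without executing the model at every input, the sketch below bounds the discrepancy of a hypothetical cubic governing model g(x) = x - x^3/6 against its linear simplification s(x) = x using interval arithmetic. The example functions and input range are invented, not from the thesis.

```python
class Interval:
    """Minimal interval arithmetic: results provably contain the true values."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        ps = [a * b for a in (self.lo, self.hi) for b in (o.lo, o.hi)]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo:.3g}, {self.hi:.3g}]"

# Bound the symbolic difference g(x) - s(x) = -x**3/6 over the input range.
# Working on the difference expression, rather than bounding g and s
# separately, avoids the interval "dependency problem" and keeps the bound tight.
x = Interval(-0.1, 0.1)                  # input range of interest
sixth = Interval(1/6, 1/6)
error = Interval(0.0, 0.0) - x * x * x * sixth
print(error)                             # about [-0.000167, 0.000167]
```

    Any conclusion drawn from the simplified model that is insensitive to perturbations within such a bound cannot be an artifact of the simplification, which is the guarantee the abstract describes.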