    A meta-analysis approach to refactoring and XP

    The mechanics of seventy-two different Java refactorings are described fully in Fowler's text. In the same text, Fowler describes seven categories of refactoring, into which each of the seventy-two refactorings can be placed. A current research problem in the refactoring and XP community is assessing the likely time and testing effort for each refactoring, since any single refactoring may use any number of other refactorings as part of its mechanics and, in turn, can be used by many other refactorings. In this paper, we draw on a dependency analysis carried out as part of our research in which we identify the 'Use' and 'Used By' relationships of refactorings in all seven categories. We offer reasons why refactorings in the 'Dealing with Generalisation' category seem to embrace two distinct refactoring sub-categories and how refactorings in the 'Moving Features between Objects' category also exhibit specific characteristics. In a wider sense, our meta-analysis provides a developer with concrete guidelines on which refactorings, due to their explicit dependencies, will prove problematic from an effort and testing perspective.
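    As a rough illustration of the kind of 'Use' / 'Used By' dependency analysis described above, the sketch below builds a small, invented dependency sample (the refactoring names come from Fowler's catalogue, but the dependencies shown are illustrative, not the paper's data) and ranks refactorings by the size of their transitive 'Uses' set as a crude effort proxy.

```python
# A minimal sketch, assuming a hand-made dependency sample rather than the
# paper's actual dataset.
from collections import defaultdict

# Hypothetical subset: refactoring -> refactorings it directly uses.
USES = {
    "Extract Method": [],
    "Move Method": ["Extract Method"],
    "Move Field": [],
    "Extract Class": ["Move Method", "Move Field"],
    "Pull Up Method": [],
    "Extract Superclass": ["Extract Class", "Pull Up Method"],
}

def transitive_uses(name, uses=USES):
    """All refactorings reachable through the 'Uses' relation."""
    seen, stack = set(), list(uses.get(name, []))
    while stack:
        current = stack.pop()
        if current not in seen:
            seen.add(current)
            stack.extend(uses.get(current, []))
    return seen

def used_by_index(uses=USES):
    """Invert the 'Uses' relation to obtain 'Used By'."""
    index = defaultdict(set)
    for user, dependencies in uses.items():
        for dep in dependencies:
            index[dep].add(user)
    return index

if __name__ == "__main__":
    # Rank by transitive 'Uses' size as a rough effort/testing proxy.
    used_by = used_by_index()
    for r in sorted(USES, key=lambda r: len(transitive_uses(r)), reverse=True):
        print(f"{r:20} uses {len(transitive_uses(r)):2}  "
              f"used by {len(used_by[r])}")
```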

    Interface refactoring in performance-constrained web services

    This paper presents the development of REF-WS, an approach that enables a Web Service provider to reliably evolve their service through the application of refactoring transformations. REF-WS is intended to aid service providers, particularly in a reliability and performance constrained domain, as it permits upgraded 'non-backwards compatible' services to be deployed into a performance constrained network where existing consumers depend on an older version of the service interface. For this to be successful, the refactoring and message mediation need to occur without affecting functional compatibility with the service's consumers, and must operate within the performance overhead expected of the original service, introducing as little latency as possible. Furthermore, compared to a manually programmed solution, the presented approach enables the service developer to apply and parameterize refactorings with a level of confidence that they will not produce an invalid or 'corrupt' transformation of messages. This is achieved through the use of preconditions for the defined refactorings.
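    The sketch below is not REF-WS itself; it only illustrates the general shape of a precondition-guarded interface refactoring combined with message mediation, using invented schemas and field names.

```python
# A minimal sketch, assuming a flat dict stands in for a message schema and
# that the only refactoring of interest is a field rename.

def precondition_rename(schema: dict, old: str, new: str) -> bool:
    """The rename is valid only if the old field exists and the new name
    does not clash with an existing field."""
    return old in schema and new not in schema

def apply_rename(schema: dict, old: str, new: str) -> dict:
    """Apply the rename only when its precondition holds."""
    if not precondition_rename(schema, old, new):
        raise ValueError(f"refactoring precondition violated: {old} -> {new}")
    upgraded = dict(schema)
    upgraded[new] = upgraded.pop(old)
    return upgraded

def mediate(old_message: dict, renames: dict) -> dict:
    """Translate a message sent against the old interface into the upgraded
    one, doing as little per-message work as possible."""
    return {renames.get(k, k): v for k, v in old_message.items()}

if __name__ == "__main__":
    v1_schema = {"custName": "string", "amount": "decimal"}
    v2_schema = apply_rename(v1_schema, "custName", "customerName")
    legacy_msg = {"custName": "Acme Ltd", "amount": "12.50"}
    print(v2_schema)
    print(mediate(legacy_msg, {"custName": "customerName"}))
```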

    Evaluation of Kermeta for Solving Graph-based Problems

    Kermeta is a meta-language for specifying the structure and behavior of graphs of interconnected objects called models. In this paper, we show that Kermeta is relatively suitable for solving three graph-based problems. First, Kermeta allows the specification of generic model transformations such as refactorings that we apply to different metamodels, including Ecore, Java, and Uml. Second, we demonstrate the extensibility of Kermeta to the formal language Alloy using an inter-language model transformation. Kermeta uses Alloy to generate recommendations for completing partially specified models. Third, we show that the Kermeta compiler achieves better execution time and memory performance compared to similar graph-based approaches using a common case study. The three solutions proposed for these graph-based problems and their evaluation with Kermeta according to the criteria of genericity, extensibility, and performance are the main contribution of the paper. Another contribution is the comparison of these solutions with those proposed by other graph-based tools.
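    Kermeta code is not reproduced here; as a loose analogy only, the sketch below shows the genericity idea in Python: a single rename refactoring written against a minimal element interface and applied to models that stand in for different metamodels. All class and model names are made up.

```python
# A minimal sketch, assuming a tree of generic Elements is enough to stand in
# for models conforming to different metamodels.
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    children: list = field(default_factory=list)

def rename_refactoring(root: Element, old: str, new: str) -> None:
    """Generic rename: works on any tree of Elements, regardless of which
    metamodel (Ecore-like, Java-like, UML-like) the tree encodes."""
    if root.name == old:
        root.name = new
    for child in root.children:
        rename_refactoring(child, old, new)

if __name__ == "__main__":
    java_like = Element("ClassDecl", [Element("getValue"), Element("setValue")])
    uml_like = Element("Classifier", [Element("getValue")])
    for model in (java_like, uml_like):
        rename_refactoring(model, "getValue", "fetchValue")
        print([c.name for c in model.children])
```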

    Educational approach of refactoring in facilitating reverse engineering

    Refactoring improves software code and design. This activity is often neglected by software developers because they need time to decide tactically where and when to refactor code. Although the concepts are theoretically instilled in the developer's mind, the activity is not easy to apply and visualize. The situation becomes more problematic when dealing with inexperienced developers. Therefore, there is a need for an educational approach to comprehending refactoring activity. In this study, the activity was applied through reverse engineering tasks. The software engineering (SE) teams were required to carry out reverse engineering in order to check the consistency between code and design. The teams were encouraged to apply the Model-View-Controller (MVC) architectural pattern in order to facilitate the activities. Findings revealed that Extreme Programming (XP) teams managed to complete the reverse engineering tasks earlier than the Formal teams. This study found that the approach is important for increasing understanding of refactoring activities in the reverse engineering process. The approach will be further applied to other SE teams to gain more insight and perceptions towards improving the SE course.

    Keeping the Cost of Process Change Low through Refactoring

    With the increasing adoption of process-aware information systems (PAIS), large process model repositories have emerged. Over time, the respective models have to be re-aligned to the real-world business processes through customization or adaptation. This bears the risk that model redundancies are introduced and complexity is increased. If no continuous investment is made in keeping models simple, changes become increasingly costly and error-prone. Although refactoring techniques are widely used in software engineering to address related problems, this does not yet constitute state of the art in business process management. Consequently, process designers either have to refactor process models by hand or cannot apply such techniques at all. In this paper we propose a set of behaviour-preserving techniques for refactoring large process repositories. The proposed refactorings enable process designers to deal effectively with model complexity by making process models easier to change, less error-prone and better understandable.
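    As a hedged illustration of one behaviour-preserving refactoring in this spirit, the sketch below extracts a duplicated activity fragment from several (invented) process models into a shared sub-process and checks that re-inlining the sub-process reproduces the original activity sequences.

```python
# A minimal sketch, assuming process models are plain activity sequences and
# that the fragment to extract is already known.

def extract_subprocess(models: dict, fragment: list, name: str) -> dict:
    """Replace every occurrence of `fragment` with a reference to a shared
    sub-process. Expanding the reference reproduces the original traces, so
    behaviour is preserved."""
    refactored = {}
    for model_name, activities in models.items():
        out, i = [], 0
        while i < len(activities):
            if activities[i:i + len(fragment)] == fragment:
                out.append(f"call:{name}")
                i += len(fragment)
            else:
                out.append(activities[i])
                i += 1
        refactored[model_name] = out
    return refactored

def expand(activities: list, name: str, fragment: list) -> list:
    """Inline the sub-process again to check behaviour preservation."""
    out = []
    for a in activities:
        out.extend(fragment if a == f"call:{name}" else [a])
    return out

if __name__ == "__main__":
    repo = {
        "order":  ["receive", "check credit", "check stock", "ship"],
        "return": ["receive", "check credit", "check stock", "refund"],
    }
    shared = ["check credit", "check stock"]
    new_repo = extract_subprocess(repo, shared, "checks")
    print(new_repo)
    assert all(expand(v, "checks", shared) == repo[k] for k, v in new_repo.items())
```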

    Evaluating the Impact of Critical Factors in Agile Continuous Delivery Process: A System Dynamics Approach

    Continuous Delivery is aimed at the frequent delivery of good-quality software in a speedy, reliable and efficient fashion, with a strong emphasis on automation and team collaboration. However, even with this new paradigm, repeatability of project outcome is still not guaranteed: project performance varies due to the various interacting and inter-related factors in the Continuous Delivery 'system'. This paper presents results from an investigation of the effect of various factors, in particular agile practices, on the quality of the software developed in the Continuous Delivery process. Results show that customer involvement and the cognitive ability of the QA have the most significant individual effects on the quality of software in Continuous Delivery.
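    The toy stock-and-flow sketch below is not the paper's system dynamics model; it only illustrates how factor weights (here, invented parameters for customer involvement and QA ability on a 0..1 scale) could drive a quality-related stock such as residual defects.

```python
# A minimal sketch, assuming made-up rates and a single 'latent defects' stock
# as a stand-in for software quality.

def simulate(customer_involvement: float, qa_ability: float,
             steps: int = 20, dt: float = 1.0) -> float:
    defects = 50.0          # stock: latent defects in the product
    injection_rate = 5.0    # flow in: defects added per iteration
    for _ in range(steps):
        # More customer involvement -> fewer misunderstood requirements.
        inflow = injection_rate * (1.0 - 0.5 * customer_involvement)
        # Better QA -> a larger fraction of defects removed each iteration.
        outflow = defects * (0.2 + 0.5 * qa_ability)
        defects += (inflow - outflow) * dt
    return max(defects, 0.0)

if __name__ == "__main__":
    for ci, qa in [(0.2, 0.2), (0.9, 0.2), (0.2, 0.9), (0.9, 0.9)]:
        print(f"involvement={ci:.1f} qa={qa:.1f} -> residual defects "
              f"{simulate(ci, qa):.1f}")
```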

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes are lacking in functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems which compares the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
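    The metric thresholds and rules below are illustrative only, not the tool's or Marinescu's exact detection strategy; they sketch the metrics-based flavour of god-class and data-class detection discussed above, assuming per-class metrics have already been computed.

```python
# A minimal sketch, assuming invented thresholds and pre-computed class
# metrics rather than the tool's actual heuristics.
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    wmc: int                # weighted methods per class (complexity)
    atfd: int               # accesses to foreign (other classes') data
    tcc: float              # tight class cohesion, 0..1
    accessor_ratio: float   # fraction of methods that are plain get/set

def is_god_class(m: ClassMetrics) -> bool:
    # Complex, uses lots of foreign data, and poorly cohesive.
    return m.wmc >= 47 and m.atfd > 5 and m.tcc < 0.33

def is_data_class(m: ClassMetrics) -> bool:
    # Barely any behaviour beyond exposing its own fields.
    return m.accessor_ratio > 0.8 and m.wmc <= 10

if __name__ == "__main__":
    classes = [
        ClassMetrics("OrderManager", wmc=62, atfd=12, tcc=0.20, accessor_ratio=0.1),
        ClassMetrics("OrderData",    wmc=6,  atfd=0,  tcc=0.50, accessor_ratio=0.9),
    ]
    for c in classes:
        label = ("god class" if is_god_class(c)
                 else "data class" if is_data_class(c) else "ok")
        print(c.name, label)
```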