
    Migrating to GraphQL: A Practical Assessment

    GraphQL is a novel query language proposed by Facebook to implement Web-based APIs. In this paper, we present a practical study on migrating API clients to this new technology. First, we conduct a grey literature review to gain an in-depth understanding of the benefits and key characteristics that practitioners normally associate with GraphQL. We then assess these benefits in practice, by migrating seven systems to use GraphQL instead of standard REST-based APIs. As our key result, we show that GraphQL can reduce the size of the JSON documents returned by REST APIs by 94% in number of fields and by 99% in number of bytes (both median results). Comment: 11 pages. Accepted at the 26th International Conference on Software Analysis, Evolution and Reengineering.
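
    To illustrate the mechanism behind this reduction, here is a minimal sketch contrasting the two styles of API call. The endpoint, schema, and field names (api.example.com, user, name, email) are hypothetical and not taken from the study; only the field-selection idea is.

```typescript
// A minimal sketch, assuming a hypothetical user API at api.example.com.
async function comparePayloads(): Promise<void> {
  // REST: the server fixes the payload shape, so the client receives every
  // field of the user resource, needed or not.
  const restUser = await fetch("https://api.example.com/users/42")
    .then((res) => res.json());
  // restUser: { id, name, email, address, avatarUrl, createdAt, ... }

  // GraphQL: the client lists exactly the fields it needs, so the returned
  // JSON document contains nothing else.
  const query = `{ user(id: 42) { name email } }`;
  const gqlUser = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  }).then((res) => res.json());
  // gqlUser: { data: { user: { name, email } } } -- two fields instead of many
  console.log(restUser, gqlUser);
}

comparePayloads();
```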

    Using Cluster Analysis to Improve the Design of Component Interfaces


    Automated Improvement of Software Design by Search-Based Refactoring

    Software maintenance cost is estimated at more than 70% of a system's total cost, owing to many factors, including new user requirements, the adoption of new technologies, and the quality of software systems. Of these factors, quality is the one we can control and continually improve to prevent degradation of performance and reduction of effectiveness (a.k.a. design decay). Moreover, to stay competitive, the software industry has shortened its release cycles to deliver new products and features faster, which puts more pressure on developer teams and accelerates the evolution of a system's design. One way to prevent design decay is the identification and correction of anti-patterns, which are indicators of poor design quality. To improve design quality and remove anti-patterns, developers perform small behavior-preserving transformations (a.k.a. refactoring).
    Manual refactoring is expensive, as it requires developers to (1) identify the code entities that need to be refactored; (2) generate refactoring operations for the classes identified in the previous step; and (3) find the correct order in which to apply the generated refactorings, so as to maximize the quality benefit and minimize conflicts. Hence, researchers and practitioners have formulated refactoring as an optimization problem and use search-based techniques to propose (semi-)automated approaches to solve it. In this dissertation, we propose several approaches that tackle major issues in existing refactoring tools, to assist developers in their maintenance and quality assurance activities. Our thesis is that automated refactoring can be improved by considering new dimensions: (1) the developer's task context, to prioritize the refactoring of relevant classes; (2) testing effort, to reduce the cost of testing after refactoring; (3) the identification of conflicts between refactoring operations, to reduce refactoring cost; and (4) energy efficiency, to improve the energy consumption of mobile applications after refactoring.
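
    To make the optimization formulation concrete, the sketch below treats a candidate solution as a sequence of refactoring operations and searches for a better one by plain hill climbing. The operation kinds, class names, and toy fitness function are invented for illustration; they are not the dissertation's actual model.

```typescript
// A candidate solution is an ordered sequence of refactoring operations.
type Kind = "moveMethod" | "extractClass" | "inlineClass";
interface Refactoring {
  kind: Kind;
  target: string;
}
type Solution = Refactoring[];

const KINDS: Kind[] = ["moveMethod", "extractClass", "inlineClass"];
const TARGETS = ["Order", "Invoice", "Customer", "Report"]; // invented classes

// Toy fitness: reward touching distinct classes, penalize conflicting
// operations on the same class. A real tool would apply the sequence and
// measure design quality (coupling, cohesion, anti-patterns removed).
function fitness(s: Solution): number {
  const distinct = new Set(s.map((r) => r.target)).size;
  const conflicts = s.length - distinct;
  return distinct - 2 * conflicts;
}

// Neighbor: re-roll one randomly chosen operation in the sequence.
function neighbor(s: Solution): Solution {
  const copy = s.slice();
  const i = Math.floor(Math.random() * copy.length);
  copy[i] = {
    kind: KINDS[Math.floor(Math.random() * KINDS.length)],
    target: TARGETS[Math.floor(Math.random() * TARGETS.length)],
  };
  return copy;
}

// Plain hill climbing: accept a neighboring solution only if it improves.
function hillClimb(initial: Solution, iterations: number): Solution {
  let best = initial;
  let bestScore = fitness(initial);
  for (let i = 0; i < iterations; i++) {
    const candidate = neighbor(best);
    const score = fitness(candidate);
    if (score > bestScore) {
      best = candidate;
      bestScore = score;
    }
  }
  return best;
}

const start: Solution = Array.from({ length: 4 }, () => ({
  kind: "moveMethod" as Kind,
  target: "Order",
}));
console.log(hillClimb(start, 1000));
```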

    Closing the gap between guidance and practice, an investigation of the relevance of design guidance to practitioners using object-oriented technologies

    This thesis investigates whether object-oriented design guidance is relevant in practice, and how this affects the software that is produced. This is achieved by surveying practitioners and by studying how constructs such as interfaces and inheritance are used in open-source systems. Surveyed practitioners framed 'good design' in terms of its impact on development and maintenance. Recognition of quality requires practitioner judgement (individually and as a group), and principles are valued over rules. Time constraints heighten sensitivity to the rework cost of poor design decisions. Examination of open-source systems highlights the use of interfaces and inheritance. There is some evidence of 'textbook' use of these structures, and much use is simple. Outliers are widespread, indicating a pragmatic approach. Design is found to reflect the pressures of practice: high-level decisions justify 'designed' structures and architecture, while uncertainty leads to deferred design decisions, i.e., simpler structures, repetition, and unconsolidated design. Sub-populations of structures can be identified which may represent common trade-offs. Useful insights are gained into practitioner attitudes to design guidance, and patterns of use and structure are identified which may aid in the assessment and comprehension of object-oriented systems.
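
    As a hypothetical illustration of what 'textbook' use of an interface looks like, the sketch below captures a capability shared by otherwise unrelated classes; all names are invented.

```typescript
// "Textbook" interface use: a capability shared by otherwise unrelated types.
interface Printable {
  print(): string;
}

class Invoice implements Printable {
  constructor(private total: number) {}
  print(): string {
    return `Invoice total: ${this.total}`;
  }
}

class Report implements Printable {
  constructor(private title: string) {}
  print(): string {
    return `Report: ${this.title}`;
  }
}

// Clients depend on the capability, not on any concrete class or a shared
// superclass, so no inheritance hierarchy is needed for reuse here.
function printAll(items: Printable[]): void {
  for (const item of items) console.log(item.print());
}

printAll([new Invoice(99.5), new Report("Q3 summary")]);
```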

    Business rules based legacy system evolution towards service-oriented architecture.

    Enterprises can be empowered to live up to the potential of becoming dynamic, agile, and real-time. Service orientation is emerging from the amalgamation of a number of key business, technology, and cultural developments. Three essential trends in particular are coming together to create a new, revolutionary breed of enterprise, the service-oriented enterprise (SOE): (1) continuous performance management of the enterprise; (2) the emergence of business process management; and (3) advances in standards-based service-oriented infrastructures. This thesis focuses on the emerging three-layered architecture that builds on a service-oriented architecture framework, with a process layer that brings technology and business together and a corporate performance layer that continually monitors and improves the performance indicators of global enterprises. This architecture provides a novel framework for the business context in which to apply the important technical idea of service orientation, moving it from an interesting tool for engineers to a vehicle for business managers to fundamentally improve their businesses.
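
    As a rough sketch of the three-layered architecture described above, the following models each layer as a small interface: services at the bottom, a process layer that composes them, and a performance layer that records indicators. The API is an assumption for illustration, not one prescribed by the thesis.

```typescript
// Layer 1: standards-based services expose discrete business capabilities.
interface Service {
  name: string;
  invoke(input: unknown): Promise<unknown>;
}

// Layer 3: corporate performance management continually monitors indicators
// produced by running processes.
interface PerformanceMonitor {
  record(indicator: string, value: number): void;
  report(): Map<string, number>;
}

class InMemoryMonitor implements PerformanceMonitor {
  private values = new Map<string, number>();
  record(indicator: string, value: number): void {
    this.values.set(indicator, value);
  }
  report(): Map<string, number> {
    return this.values;
  }
}

// Layer 2: a business process orchestrates services into an end-to-end flow
// and reports its indicators upward to the performance layer.
class SequentialProcess {
  constructor(
    private steps: Service[],
    private monitor: PerformanceMonitor,
  ) {}
  async run(input: unknown): Promise<unknown> {
    const started = Date.now();
    let value = input;
    for (const step of this.steps) value = await step.invoke(value);
    this.monitor.record("processDurationMs", Date.now() - started);
    return value;
  }
}

// Usage: one hypothetical service wired into a monitored process.
const uppercase: Service = {
  name: "uppercase",
  invoke: async (input) => String(input).toUpperCase(),
};
const monitor = new InMemoryMonitor();
new SequentialProcess([uppercase], monitor)
  .run("order received")
  .then((result) => console.log(result, monitor.report()));
```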

    Search-Based Scheduling of Experiments in Continuous Deployment

    Continuous experimentation involves practices for testing new functionality on a small fraction of the user base in production environments. Running multiple experiments in parallel requires handling user assignments (i.e., which users are part of which experiments) carefully, as experiments might overlap and influence each other. Furthermore, experiments are prone to change, may be canceled or adjusted and restarted, and new ones are added regularly. We formulate this as an optimization problem that fosters the parallel execution of experiments while ensuring that enough data is collected for every experiment and that experiments do not overlap. We propose a genetic algorithm that is capable of (re-)scheduling experiments and compare it with other search-based approaches (random sampling, local search, and simulated annealing). Our evaluation shows that our genetic algorithm outperforms the other approaches by up to 19% regarding the fitness of the solutions identified, and by up to a factor of three in execution time, in our evaluation scenarios.
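
    A minimal sketch of a genetic algorithm for this kind of assignment problem appears below: a chromosome assigns each user to at most one experiment (ruling out overlaps by construction), fitness rewards experiments that reach their required sample sizes, and evolution uses tournament selection, one-point crossover, and point mutation. The encoding, parameters, and fitness are assumptions for illustration, not the paper's implementation.

```typescript
// Chromosome: for each user, the index of the experiment they are assigned
// to, or -1 for "not in any experiment". One experiment per user rules out
// overlapping experiments by construction.
type Chromosome = number[];

const USERS = 200;
const EXPERIMENTS = [50, 40, 30]; // required sample sizes (illustrative)

function randomChromosome(): Chromosome {
  return Array.from({ length: USERS }, () =>
    Math.floor(Math.random() * (EXPERIMENTS.length + 1)) - 1);
}

// Fitness: each experiment earns credit for collecting up to its required
// number of users; assigning users beyond the requirement adds nothing.
function fitness(c: Chromosome): number {
  const counts = new Array(EXPERIMENTS.length).fill(0);
  for (const gene of c) if (gene >= 0) counts[gene]++;
  return counts.reduce((sum, n, i) => sum + Math.min(n, EXPERIMENTS[i]), 0);
}

// Tournament selection: the fitter of two random individuals survives.
function tournament(pop: Chromosome[]): Chromosome {
  const a = pop[Math.floor(Math.random() * pop.length)];
  const b = pop[Math.floor(Math.random() * pop.length)];
  return fitness(a) >= fitness(b) ? a : b;
}

// One-point crossover: splice two parents at a random cut.
function crossover(a: Chromosome, b: Chromosome): Chromosome {
  const cut = Math.floor(Math.random() * USERS);
  return a.slice(0, cut).concat(b.slice(cut));
}

// Point mutation: occasionally reassign a user at random.
function mutate(c: Chromosome, rate = 0.01): Chromosome {
  return c.map((g) =>
    Math.random() < rate
      ? Math.floor(Math.random() * (EXPERIMENTS.length + 1)) - 1
      : g);
}

function evolve(generations = 100, size = 50): Chromosome {
  let pop = Array.from({ length: size }, randomChromosome);
  for (let g = 0; g < generations; g++) {
    pop = Array.from({ length: size }, () =>
      mutate(crossover(tournament(pop), tournament(pop))));
  }
  return pop.reduce((a, b) => (fitness(a) >= fitness(b) ? a : b));
}

console.log("best fitness:", fitness(evolve()));
```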

    Autonomic behavioural framework for structural parallelism over heterogeneous multi-core systems.

    With the continuous advancement in hardware technologies, significant research has been devoted to the design and development of high-level parallel programming models that allow programmers to exploit the latest developments in heterogeneous multi-core/many-core architectures. Structured programming paradigms offer a viable solution for efficiently programming modern heterogeneous multi-core architectures equipped with one or more programmable Graphics Processing Units (GPUs). Applying structured programming paradigms, it is possible to subdivide a system into building blocks (modules, skids, or components) that can be independently created and then used in different systems to derive multiple functionalities. Exploiting such systematic divisions, it is possible to address extra-functional features such as application performance, portability, and resource utilisation at the component level in heterogeneous multi-core architectures. While the computing function of a building block can vary for different applications, the behaviour (semantics) of the block remains intact. Therefore, by understanding the behaviour of building blocks and their structural composition in parallel patterns, the process of constructing and coordinating a structured application can be automated. In this thesis we propose the Structural Composition and Interaction Protocol (SKIP) as a systematic methodology for exploiting the structured programming paradigm (in this case, the building-block approach) to construct a structured application and to extract/inject information from/to it. Using the SKIP methodology, we have designed and developed the Performance Enhancement Infrastructure (PEI), a SKIP-compliant autonomic behavioural framework that automatically coordinates structured parallel applications based on extracted extra-functional properties related to the parallel computation patterns. We used 15 different PEI-based applications (from large-scale applications with heavy input workloads that take hours to execute, down to small-scale applications that take seconds) to evaluate PEI in terms of overhead and performance improvement. The experiments were carried out on three different heterogeneous (CPU/GPU) multi-core architectures (including one cluster with four symmetric nodes and one GPU per node, and two single machines with one GPU per machine). Our results demonstrate that, with less than 3% overhead, we can achieve up to one order of magnitude speed-up when using PEI to enhance application performance.
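
    As a loose illustration of the building-block idea, the sketch below fixes the coordination behaviour of a small 'farm' pattern while letting the computing function vary per application. It is plain event-loop TypeScript, not a real heterogeneous CPU/GPU runtime, and all names are invented.

```typescript
// A building block pairs fixed coordination behaviour (the farm pattern)
// with a varying computing function supplied by the application.
type Task<In, Out> = (input: In) => Promise<Out>;

// Farm skeleton: process inputs with a bounded number of concurrent workers.
// The coordination logic never changes, whatever `fn` computes, which is
// what lets a framework tune extra-functional knobs (here, `degree`) from
// outside the application code.
async function farm<In, Out>(
  inputs: In[],
  fn: Task<In, Out>,
  degree: number,
): Promise<Out[]> {
  const results: Out[] = new Array(inputs.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < inputs.length) {
      const i = next++; // claim the next input index
      results[i] = await fn(inputs[i]);
    }
  }
  await Promise.all(Array.from({ length: degree }, worker));
  return results;
}

// Swapping `fn` derives a different application; the farm's behaviour
// (its semantics) stays intact.
farm([1, 2, 3, 4, 5], async (n) => n * n, 2)
  .then((squares) => console.log(squares)); // [1, 4, 9, 16, 25]
```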

    Proceedings of the 4th International Conference on Principles and Practices of Programming in Java

    This book contains the proceedings of the 4th International Conference on Principles and Practices of Programming in Java. The conference focuses on different aspects of the Java programming language and its applications.