
    Backtracking Incremental Continuous Integration

    Failing integration builds are show stoppers. Development activity stalls because developers have to wait to integrate new changes until the problem is fixed and a successful build has been run. We show how backtracking can be used to mitigate the impact of build failures in the context of component-based software development. This way, even in the face of failure, development may continue and a working version is always available.
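
    A minimal sketch of how such backtracking might look, assuming each component keeps a revision history, a record of its last known-good revision, and a build() check; the names and structure are illustrative assumptions, not the paper's actual tool:

        # Hypothetical sketch of backtracking integration: if a component's newest
        # revision fails to build, fall back to its last known-good revision so
        # integration of the other components can continue.

        def integrate(components, build, last_good):
            """components: name -> list of revisions (newest last).
            last_good: name -> last revision that built successfully."""
            selected = {}
            for name, revisions in components.items():
                candidate = revisions[-1]
                if build(name, candidate):
                    last_good[name] = candidate
                    selected[name] = candidate
                else:
                    # Backtrack: keep the previously successful revision instead of
                    # stalling the whole integration on the failing change.
                    selected[name] = last_good[name]
            return selected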

    Incremental compilation and deployment for OutSystems Platform

    OutSystems Platform is used to develop, deploy, and maintain enterprise web and mobile web applications. Applications are developed through a visual domain-specific language in an integrated development environment, and compiled to a standard stack of web technologies. At the platform's core, a compiler and a deployment service transform the visual model into a running web application. As applications grow, compilation and deployment times increase as well, impacting the developer's productivity. In the previous model, the full application was the only compilation and deployment unit: when the developer published an application, even after changing only a very small part of it, the application was fully compiled and deployed. Our goal is to reduce compilation and deployment times for the most common use case, in which the developer performs small changes to an application before compiling and deploying it. We modified the OutSystems Platform to support a new incremental compilation and deployment model that reuses previous computations as much as possible in order to improve performance. In our approach, the full application is broken down into smaller compilation and deployment units, increasing what can be cached and reused. We also observed that this finer-grained model would benefit from parallel execution, so we created a task-driven scheduler that executes compilation and deployment tasks in parallel. Our benchmarks show a substantial improvement in compilation and deployment times for the aforementioned development scenario.
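
    A minimal sketch of the two mechanisms the abstract describes, per-unit caching and parallel execution of independent units; the content hashing and the compile_unit() stand-in are assumptions for illustration, not the OutSystems compiler:

        # Sketch: cache compiled artifacts per unit (keyed by content hash) and
        # compile independent units in parallel, so an unchanged unit costs nothing.
        import hashlib
        from concurrent.futures import ThreadPoolExecutor

        cache = {}  # content hash -> previously produced artifact

        def compile_unit(name: str, source: str):
            key = hashlib.sha256(source.encode()).hexdigest()
            if key in cache:                      # unchanged since the last publish
                return cache[key]
            artifact = f"compiled({name})"        # stand-in for real code generation
            cache[key] = artifact
            return artifact

        def publish(units: dict[str, str]) -> dict[str, str]:
            # Independent units are submitted to a thread pool; only changed
            # units actually pay the compilation cost.
            with ThreadPoolExecutor() as pool:
                futures = {n: pool.submit(compile_unit, n, src) for n, src in units.items()}
                return {n: f.result() for n, f in futures.items()}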

    The Scalability of Multicast Communication

    Multicast is a communication method which operates on groups of applications. Having multiple instances of an application which are addressed collectively using a unique multicast address allows elegant solutions to some of the more intractable problems in distributed programming, such as providing fault tolerance. However, as multicast techniques are applied in areas such as distributed operating systems, where the operating system may span a large number of hosts, or on faster network architectures, where the problems of congestion reduce the effectiveness of the technique, the scalability of multicast must be addressed if multicast is to gain wider application. The main scalability issue was considered to be packet loss due to buffer overrun, the most common cause of which is the mismatch between packet arrival rate and packet consumption at the multicast originator, the so-called implosion problem. This issue affects positively acknowledged and transactional protocols. As these two techniques are the most common protocol designs, it was felt that an investigation into the problems of these types of protocol would be most effective. A model of implosion was developed and simulated in order to investigate the parameters of implosion. A measure of implosion was derived from the data, an index of implosion that allows the severity of implosion to be described as well as the location of the implosion in the model. This implosion index was derived by dividing the rate at which buffers were occupied by the rate at which packets were generated by the model. The value may then be used to predict the number of buffers required given the number of packets expected. A number of techniques were developed which may be used to offset implosion, either by artificially increasing the inter-packet gap, or by distributing replies so that no one host receives enough packets to cause an implosion. Of these alternatives, the latter offers the most promise, although it requires a large effort to maintain the resulting hierarchical structure in the presence of multiple failures.
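
    A small sketch of the implosion index as defined above, the buffer-occupation rate divided by the packet-generation rate, and its use to predict buffer requirements; the numeric parameters are illustrative, not taken from the thesis:

        # Sketch of the implosion index described in the abstract and its use to
        # size the reply buffers at the multicast originator.

        def implosion_index(buffer_fill_rate: float, packet_generation_rate: float) -> float:
            """Ratio of the rate at which buffers are occupied to the packet generation rate."""
            return buffer_fill_rate / packet_generation_rate

        def buffers_required(expected_packets: int, index: float) -> int:
            """Predict how many buffers are needed for an expected number of reply packets."""
            return int(round(expected_packets * index))

        # Example: replies from 100 group members arriving faster than the
        # originator consumes them (rates are made-up illustration values).
        idx = implosion_index(buffer_fill_rate=80.0, packet_generation_rate=100.0)
        print(buffers_required(expected_packets=100, index=idx))  # -> 80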

    Inverse software configuration management

    Software systems are playing an increasingly important role in almost every aspect of today's society, such that they impact on our businesses, industry, leisure, health and safety. Many of these systems are extremely large and complex and depend upon the correct interaction of many hundreds or even thousands of heterogeneous components. Commensurate with this increased reliance on software is the need for high quality products that meet customer expectations, perform reliably and which can be cost-effectively and safely maintained. Techniques such as software configuration management have proved to be invaluable during the development process to ensure that this is the case. However, there are a very large number of legacy systems which were not developed under controlled conditions, but which still need to be maintained due to the heavy investment incorporated within them. Such systems are characterised by extremely high program comprehension overheads and the probability that new errors will be introduced during the maintenance process, often with serious consequences. To address the issues concerning maintenance of legacy systems, this thesis has defined and developed a new process and associated maintenance model, Inverse Software Configuration Management (ISCM). This model centres on a layered approach to the program comprehension process through the definition of a number of software configuration abstractions. This information, together with the set of rules for reclaiming the information, is stored within an Extensible System Information Base (ESIB) via the definition of a Programming-in-the-Environment (PITE) language, the Inverse Configuration Description Language (ICDL). In order to assist the application of the ISCM process across a wide range of software applications and system architectures, the PISCES (Proforma Identification Scheme for Configurations of Existing Systems) method has been developed as a series of defined procedures and guidelines. To underpin the method and to offer a user-friendly interface to the process, a series of templates, the Proforma Increasing Complexity Series (PICS), has been developed. To enable the useful employment of these techniques on large-scale systems, the subject of automation has been addressed through the development of a flexible meta-CASE environment, the PISCES M4 (MultiMedia Maintenance Manager) system. Of particular interest within this environment is the provision of a multimedia user interface (MUI) to the maintenance process. As a means of evaluating the PISCES method and to provide feedback into the ISCM process, a number of practical applications have been modelled. In summary, this research has considered a number of concepts, some of which are innovative in themselves, others of which are used in an innovative manner. In combination, these concepts may be considered to considerably advance the knowledge and understanding of the comprehension process during the maintenance of legacy software systems. A number of publications have already resulted from the research and several more are in preparation. Additionally, a number of areas for further study have been identified, some of which are already underway as funded research and development projects.

    Purely top-down software rebuilding

    Software rebuilding is the process of deriving a deployable software system from its primitive source objects. A build tool helps maintain consistency between the derived objects and source objects by ensuring that all necessary build steps are re-executed in the correct order after a set of changes is made to the source objects. It is imperative that derived objects accurately represent the source objects from which they were supposedly constructed; otherwise, subsequent testing and quality assurance is invalidated. This thesis aims to advance the state of the art in tool support for automated software rebuilding. It surveys the body of background work, lays out a set of design considerations for build tools, and examines areas where current tools are limited. It examines the properties of a next-generation tool concept, redo, conceived by D. J. Bernstein; redo is novel because it employs a purely top-down approach to software rebuilding that promises to be simpler, more flexible, and more reliable than current approaches. The details of a redo prototype written by the author of this thesis are explained, including the central algorithms and data structures. Lastly, the redo prototype is evaluated on some sample software systems with respect to migration effort between build tools as well as size, complexity, and performance aspects of the resulting build systems.
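
    A toy sketch in the spirit of the purely top-down style: the build starts from the requested target and recursively asks whether each recorded dependency is up to date. The deps and stamps tables and the do_build() callable are hypothetical placeholders, not the author's prototype or Bernstein's design in detail:

        # Top-down rebuild sketch: visit the target's recorded prerequisites first,
        # then rebuild the target only if something it depends on has changed.
        import os

        deps = {}    # target -> prerequisites recorded during its last build
        stamps = {}  # path -> modification time observed at that build

        def stamp(path: str) -> float:
            return os.path.getmtime(path) if os.path.exists(path) else -1.0

        def redo_ifchange(target: str, do_build) -> None:
            for dep in deps.get(target, []):
                if dep in deps:                   # dependency is itself a built target
                    redo_ifchange(dep, do_build)
            out_of_date = (
                not os.path.exists(target)
                or any(stamp(d) != stamps.get(d) for d in deps.get(target, []))
            )
            if out_of_date:
                do_build(target)
                for d in deps.get(target, []):
                    stamps[d] = stamp(d)          # record what the target was built from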

    An approach to development-project-wide dependency consistency of the source code model for improving the quality of software development projects

    Today's software development projects often have to implement a large number of highly complex requirements within short release cycles. This creates particular challenges for division of labour, documentation, process reliability and quality. Development work has to be parallelized, and developers repeatedly need to familiarize themselves with the source code. Developers need fast and precise feedback on the quality of the changes they have made to the source code. Fine-grained traceability links into the source code enable improved documentation and greater process reliability. To this end, a metamodel for the source code is defined and integrated into a metamodel covering requirements management, change management, test data management and documentation. The entire model is stored in a software configuration management (SCM) repository so that all artifacts and links can be versioned. The traceability links can be created and used in a source code editor, and the history of individual source code artifacts, including their traceability links, can be displayed there. Because the source code is available as a model, fine-grained pessimistic locking of individual model artifacts also becomes possible. This allows a class or a method to be edited in parallel without having to merge source code, and the locks also prevent syntax errors in the SCM repository. The source code editor shows the locks held by other developers. Continuous integration is extended so that storing class files in the repository enables a faster product build and thus faster feedback for the developer. Test selection strategies execute only the tests relevant to the changed source code; one such strategy uses the traceability information between changed source code, requirements, test specifications and test code. In large projects the source code gives rise to very large models, which pose a challenge in terms of memory consumption and performance. This was investigated using a project with 6.5 million lines of source code. A prototype of these concepts was developed for Java on the basis of the Eclipse development environment.
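
    A minimal sketch of the traceability-based test selection described above; the link maps and example identifiers are hypothetical illustrations, not the prototype's data model:

        # Follow traceability links from changed source artifacts through
        # requirements and test specifications to the test code, and run only
        # the tests reached that way.

        def select_tests(changed_artifacts, code_to_requirement,
                         requirement_to_spec, spec_to_tests):
            """Return the set of tests reachable from the changed artifacts."""
            selected = set()
            for artifact in changed_artifacts:
                for req in code_to_requirement.get(artifact, []):
                    for spec in requirement_to_spec.get(req, []):
                        selected.update(spec_to_tests.get(spec, []))
            return selected

        # Example: a change to one method triggers only the tests traced to it.
        tests = select_tests(
            {"Order.calculateTotal"},
            {"Order.calculateTotal": ["REQ-42"]},
            {"REQ-42": ["TS-7"]},
            {"TS-7": ["OrderTest.test_total"]},
        )
        print(tests)  # {'OrderTest.test_total'}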

    The shared data-object model as a paradigm for programming distributed systems


    Design and Implementation of Parallel Make

    Make is the standard UNIX utility for maintaining programs. UNIX programmers have been using it for almost 10 years, and many UNIX programs nowadays are maintained by it. The strength of make is that it allows the user to specify how to compile program components, and that the system, after an update, is regenerated according to the specification with a minimum number of recompilations. With the appearance of multiple-processor systems, we expect that the time needed to "make" a program, or target, can be effectively reduced. Although the hardware provides parallelism, few tools are able to exploit it. The introduction of parallelism to make is the subject of this paper. We describe a parallel make and give an analysis of its performance.
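
    A compact sketch of the core scheduling idea behind a parallel make, assuming an acyclic dependency graph: targets whose prerequisites are already built can be compiled concurrently. The rules table and build_target() are illustrative placeholders, not the implementation described in the paper:

        # Build a dependency DAG level by level, running independent targets in
        # parallel; prerequisites not listed as targets are treated as source files.
        from concurrent.futures import ThreadPoolExecutor

        def parallel_make(rules: dict[str, list[str]], build_target, jobs: int = 4):
            """rules maps each target to its prerequisites."""
            remaining = dict(rules)
            done = set()
            with ThreadPoolExecutor(max_workers=jobs) as pool:
                while remaining:
                    ready = [t for t, prereqs in remaining.items()
                             if all(p in done or p not in rules for p in prereqs)]
                    if not ready:
                        raise ValueError("dependency cycle detected")
                    futures = {t: pool.submit(build_target, t) for t in ready}
                    for t, f in futures.items():
                        f.result()               # propagate build errors
                        done.add(t)
                        del remaining[t]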