
    Environments to support collaborative software engineering

    With increasing globalisation of software production, widespread use of software components, and the need to maintain software systems over long periods of time, there has been a recognition that software engineers need better support for collaborative working. In this paper, two approaches to developing improved system support for collaborative software engineering are described: GENESIS and OPHELIA. As both projects are moving towards industrial trials and eventual public releases of their systems, this exercise of comparing and contrasting our approaches has provided the basis for future collaboration between our projects, particularly in carrying out comparative studies of our approaches in practical use.

    Coordination approaches and systems - part I : a strategic perspective

    This is the first part of a two-part paper presenting a fundamental review and summary of research on design coordination and cooperation technologies. The review concentrates on research conducted within the decision-management aspect of design coordination; the focus is therefore on the strategies involved in making decisions and how these strategies are used to satisfy design requirements. The paper reviews research within collaborative and coordinated design, project and workflow management, and task and organization models. The research reviewed has attempted to identify fundamental coordination mechanisms from different domains; however, it is concluded that domain-independent mechanisms need to be augmented with domain-specific mechanisms to facilitate coordination. Part II is a review of design coordination from an operational perspective.

    Study of Tools Interoperability

    Interoperability of tools usually refers to a combination of methods and techniques that address the problem of making a collection of tools work together. In this study we survey different notions that are used in this context: interoperability, interaction and integration. We point out the relations between these notions and how they map to the interoperability problem. We narrow the problem area to tool development in academia. Tools developed in such an environment have a small basis for development, documentation and maintenance. We scrutinise some of the problems and potential solutions related to tool interoperability in such an environment. Moreover, we look at two tools developed in the Formal Methods and Tools group, and analyse the use of different integration techniques.

    Transfer Learning for Improving Model Predictions in Highly Configurable Software

    Modern software systems are built to be used in dynamic environments, using configuration capabilities to adapt to changes and external uncertainties. In a self-adaptation context, we are often interested in reasoning about the performance of the systems under different configurations. Usually, we learn a black-box model based on real measurements to predict the performance of the system given a specific configuration. However, as modern systems become more complex, there are many configuration parameters that may interact, and we end up learning an exponentially large configuration space. Naturally, this does not scale when relying on real measurements in the actual changing environment. We propose a different solution: instead of taking the measurements from the real system, we learn the model using samples from other sources, such as simulators that approximate the performance of the real system at low cost. We define a cost model that transforms the traditional view of model learning into a multi-objective problem that takes into account not only model accuracy but also measurement effort. We evaluate our cost-aware transfer learning solution using real-world configurable software including (i) a robotic system, (ii) three different stream processing applications, and (iii) a NoSQL database system. The experimental results demonstrate that our approach can achieve (a) a high prediction accuracy, as well as (b) a high model reliability. Comment: To be published in the proceedings of the 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS'17).
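    One way to read the cost model sketched in this abstract (a hedged reconstruction, not necessarily the authors' exact formulation): let D_sim be samples drawn from a simulator at per-sample cost c_sim and D_real be measurements of the real system at per-sample cost c_real, with c_sim much smaller than c_real. Learning a performance model \hat{f} then becomes a bi-objective problem,

        \min_{D_{sim}, D_{real}} \bigl( \mathrm{err}(\hat{f}; D_{sim} \cup D_{real}),\; c_{sim}|D_{sim}| + c_{real}|D_{real}| \bigr),

    or, scalarised with a trade-off weight \lambda, \min \mathrm{err} + \lambda (c_{sim}|D_{sim}| + c_{real}|D_{real}|). The symbols D_sim, D_real, c_sim, c_real and \lambda are illustrative names introduced here, not notation taken from the paper.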

    Emulating and evaluating hybrid memory for managed languages on NUMA hardware

    Non-volatile memory (NVM) has the potential to become a mainstream memory technology and challenge DRAM. Researchers evaluating the speed, endurance, and abstractions of hybrid memories with DRAM and NVM typically use simulation, making it easy to evaluate the impact of different hardware technologies and parameters. Simulation is, however, extremely slow, limiting the applications and datasets in the evaluation. Simulation also precludes critical workloads, especially those written in managed languages such as Java and C#. Good methodology embraces a variety of techniques for evaluating new ideas, expanding the experimental scope, and uncovering new insights. This paper introduces a platform to emulate hybrid memory for managed languages using commodity NUMA servers. Emulation complements simulation but offers richer software experimentation. We use a thread-local socket to emulate DRAM and a remote socket to emulate NVM. We use standard C library routines to allocate heap memory on the DRAM and NVM sockets for use with explicit memory management or garbage collection. We evaluate the emulator using various configurations of write-rationing garbage collectors that improve NVM lifetimes by limiting writes to NVM, using 15 applications and various datasets and workload configurations. We show that emulation and simulation confirm each other's trends in terms of writes to NVM for different software configurations, increasing our confidence in predicting future system effects. Emulation brings novel insights, such as the non-linear effects of multi-programmed workloads on NVM writes, and that Java applications write significantly more than their C++ equivalents. We make our software infrastructure publicly available to advance the evaluation of novel memory management schemes on hybrid memories.
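    The socket-based emulation described above can be approximated on a stock Linux NUMA server with libnuma, which exposes per-node allocation. The sketch below is only an illustration of the idea and not the paper's infrastructure; the node numbers, region size, and the choice of libnuma (rather than whichever C library routines the authors used) are assumptions. Memory on the local node stands in for DRAM, memory on a remote node for NVM.

        /* Hedged sketch: place heap regions on specific NUMA nodes with libnuma.
         * Node 0 stands in for DRAM (local socket), node 1 for NVM (remote socket).
         * Node numbers and the region size are illustrative, not values from the paper.
         * Build with: gcc emu_sketch.c -lnuma */
        #include <numa.h>
        #include <stdio.h>
        #include <string.h>

        int main(void) {
            if (numa_available() < 0) {
                fprintf(stderr, "libnuma reports no NUMA support on this machine\n");
                return 1;
            }

            size_t region = 64UL * 1024 * 1024;           /* 64 MiB per emulated region */
            void *dram = numa_alloc_onnode(region, 0);    /* local node: emulated DRAM  */
            void *nvm  = numa_alloc_onnode(region, 1);    /* remote node: emulated NVM  */
            if (dram == NULL || nvm == NULL) {
                fprintf(stderr, "per-node allocation failed\n");
                return 1;
            }

            memset(dram, 0, region);   /* touch the pages so they are actually placed */
            memset(nvm,  0, region);

            numa_free(dram, region);
            numa_free(nvm,  region);
            return 0;
        }

    A managed runtime or explicit allocator built on top of this would hand out "DRAM" and "NVM" objects from the two regions, which is the kind of placement the emulator above makes available to garbage-collection experiments.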