    C2MS: Dynamic Monitoring and Management of Cloud Infrastructures

    Server clustering is a common design principle employed by many organisations that require high availability, scalability and easier management of their infrastructure. Servers are typically clustered according to the service they provide, for example the application(s) installed, the role of the server or server accessibility. In order to optimize performance, manage load and maintain availability, servers may migrate from one cluster group to another, making it difficult for server monitoring tools to continuously monitor these dynamically changing groups. Server monitoring tools are usually statically configured, and any change of group membership requires manual reconfiguration, an unreasonable task to undertake on large-scale cloud infrastructures. In this paper we present the Cloudlet Control and Management System (C2MS), a system for monitoring and controlling dynamic groups of physical or virtual servers within cloud infrastructures. The C2MS extends Ganglia, an open source scalable system performance monitoring tool, by allowing system administrators to define, monitor and modify server groups without the need for server reconfiguration. In turn, administrators can easily monitor group and individual server metrics on large-scale dynamic cloud infrastructures where the roles of servers may change frequently. Furthermore, we complement group monitoring with a control element that allows administrator-specified actions to be performed over servers within service groups, and we introduce further customized monitoring metrics. This paper outlines the design, implementation and evaluation of the C2MS.
    Comment: Proceedings of the 5th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2013), 8 pages.
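
    The group-centric monitoring idea described above can be illustrated with a minimal sketch: a registry maps hosts to groups, so a migration only touches the registry and never the monitoring agents. The names below are hypothetical and this is not the C2MS or Ganglia API; it only shows per-group aggregation over dynamically changing membership.

        # Hypothetical sketch: dynamic group membership plus per-group metric aggregation.
        from collections import defaultdict

        class GroupRegistry:
            def __init__(self):
                self.group_of = {}                      # host -> group name

            def assign(self, host, group):
                """Move a host into a group; no agent-side reconfiguration is needed."""
                self.group_of[host] = group

            def aggregate(self, metrics):
                """Aggregate per-host metric samples (e.g. load) into per-group averages."""
                groups = defaultdict(list)
                for host, value in metrics.items():
                    groups[self.group_of.get(host, "ungrouped")].append(value)
                return {g: sum(v) / len(v) for g, v in groups.items()}

        registry = GroupRegistry()
        registry.assign("vm-01", "web")
        registry.assign("vm-02", "web")
        registry.assign("vm-03", "db")
        print(registry.aggregate({"vm-01": 0.4, "vm-02": 0.6, "vm-03": 0.9}))  # {'web': 0.5, 'db': 0.9}
        registry.assign("vm-02", "db")                  # a migration: only the registry changes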

    A Change Support Model for Distributed Collaborative Work

    Distributed collaborative software development tends to make artifacts and decisions inconsistent and uncertain. We address this problem by providing an information repository that precisely reflects the state of the work, managing both the states of artifacts/products produced through collaborative work and the states of decisions made through communication. In this paper, we propose models and a tool for constructing the artifact-related part of the information repository, and explain how to use the repository to resolve inconsistencies caused by concurrent changes to artifacts. We first present the model and the tool that generate the dependency relationships among UML model elements as the content of the information repository. Next, we present the model and the method for generating change support workflows from the information repository. These workflows show how to efficiently modify the change-related artifacts for each change request. Finally, we define inconsistency patterns that make us aware of possible inconsistency occurrences. By combining this mechanism with version control systems, changes can be made safely. Our models and tool are useful in the maintenance phase for performing changes safely and efficiently.
    Comment: 10 pages, 13 figures, 4 tables.
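
    As a rough illustration of the workflow-generation idea, the sketch below stores dependency relationships between artifacts and, for a given change request, lists the affected artifacts in an order that respects those dependencies. The repository structure and names are assumptions made for illustration, not the authors' tool.

        # Hypothetical sketch: derive a change-support ordering from artifact dependencies.
        from collections import defaultdict

        class ArtifactRepository:
            def __init__(self):
                self.dependents = defaultdict(set)      # artifact -> artifacts that depend on it

            def add_dependency(self, artifact, depends_on):
                self.dependents[depends_on].add(artifact)

            def change_workflow(self, changed):
                """Post-order walk over dependents, reversed: the changed artifact first, then its dependents."""
                order, seen = [], set()
                def visit(a):
                    if a in seen:
                        return
                    seen.add(a)
                    for d in self.dependents[a]:
                        visit(d)
                    order.append(a)
                visit(changed)
                return list(reversed(order))

        repo = ArtifactRepository()
        repo.add_dependency("SequenceDiagram", depends_on="ClassDiagram")
        repo.add_dependency("TestCase", depends_on="SequenceDiagram")
        print(repo.change_workflow("ClassDiagram"))     # ['ClassDiagram', 'SequenceDiagram', 'TestCase']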

    Source File Set Search for Clone-and-Own Reuse Analysis

    The clone-and-own approach is a natural way for software developers to reuse source code. To assess how known bugs and security vulnerabilities of a cloned component affect an application, developers and security analysts need to identify the original version of the component and understand how the cloned component differs from the original. Although developers may record the original version information in a version control system and/or directory names, such information is often either unavailable or incomplete. In this research, we propose a code search method that takes as input a set of source files and extracts all the components including similar files from a software ecosystem (i.e., a collection of existing versions of software packages). Our method employs an efficient file similarity computation using the b-bit minwise hashing technique, and uses an aggregated file similarity to rank components. To evaluate the effectiveness of the tool, we analyzed 75 cloned components in the Firefox and Android source code. The tool took about two hours to report the original components from 10 million files in Debian GNU/Linux packages. Recall of the top five components in the extracted lists is 0.907, while recall of a baseline using SHA-1 file hashes is 0.773, according to the ground truth recorded in the source code repositories.
    Comment: 14th International Conference on Mining Software Repositories.
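
    A toy sketch of the similarity machinery named in the abstract, under stated assumptions: each file is reduced to a k-entry minhash signature that keeps only the lowest b bits of each minimum hash value, Jaccard similarity is estimated from the fraction of matching entries (corrected for accidental b-bit collisions), and a component is ranked by the aggregated best match per query file. The constants, hash choice and ranking rule are illustrative, not the paper's implementation.

        # Illustrative b-bit minwise hashing for file similarity and component ranking.
        import hashlib

        K, B = 128, 4                                   # signature length, bits kept per entry
        MASK = (1 << B) - 1

        def signature(tokens):
            sig = []
            for i in range(K):
                h = min(int(hashlib.md5(f"{i}:{t}".encode()).hexdigest(), 16) for t in tokens)
                sig.append(h & MASK)                    # keep only the lowest b bits
            return sig

        def estimate_jaccard(sig_a, sig_b):
            match = sum(x == y for x, y in zip(sig_a, sig_b)) / K
            c = 2 ** -B                                 # probability of an accidental b-bit match
            return max(0.0, (match - c) / (1 - c))

        def rank_component(query_sigs, component_sigs):
            """Aggregated similarity: sum of the best per-file match for each query file."""
            return sum(max(estimate_jaccard(q, f) for f in component_sigs) for q in query_sigs)

        a = signature("int main ( ) { return 0 ; }".split())
        b = signature("int main ( ) { return 1 ; }".split())
        print(round(estimate_jaccard(a, b), 2))         # high, but below 1.0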

    Collaborative support for distributed design

    A number of large integrated projects have been funded by the European Commission within both FP5 and FP6 that have aimed to develop distributed design solutions within the shipbuilding industry. VRShips-ROPAX was funded within FP5 and aimed to develop a platform to support distributed through-life design of a ROPAX (roll-on/roll-off passenger) ferry. VIRTUE is an FP6 funded project that aims to integrate distributed virtual basins within a platform that allows a holistic Computational Fluid Dynamics (CFD) analysis of a ship to be undertaken. Finally, SAFEDOR is also an FP6 funded project that allows designers to perform distributed Risk-Based Design (RBD) and simulation of different types of vessels. The projects have a number of commonalities: the designers are either organisationally or geographically distributed; a large amount of the design and analysis work requires the use of computers; and the designers are expected to collaborate, sharing design tasks and data. In each case a Virtual Integration Platform (VIP) has been developed, building on and sharing ideas between the projects with the aim of providing collaborative support for distributed design. In each of these projects the University of Strathclyde has been primarily responsible for the development of the associated VIP. This paper describes each project in terms of its differing collaborative support requirements, and discusses the associated VIP in terms of the manner in which collaborative support has been provided.

    Simulation System for the Wendelstein 7-X Safety Control System

    The Wendelstein 7-X (W7-X) Safety Instrumented System (SIS) ensures personal safety and investment protection. The development and implementation of the SIS are based on the international safety standard for the process industry sector, IEC 61511. The SIS has a distributed, hierarchically organized architecture consisting of a central Safety System (cSS) at the top and many local Safety Systems (lSS) at the bottom. Each technical component or diagnostic system potentially hazardous for the staff or for the device is equipped with an lSS. The cSS is part of the central control system of W7-X. Whereas the lSSs are responsible for the safety of each individual component, the cSS ensures the safety of the whole W7-X device. For every operation phase of the W7-X experiment, hardware and software updates for the SIS are mandatory. New components with additional lSS functionality and additional safety signals have to be integrated. Already established safety functions must be adapted, and new safety functions have to be integrated into the cSS. Finally, the safety programs of the central and local safety systems have to be verified for every development stage and validated against the safety requirement specification. This contribution focuses on the application of a model-based simulation system for the whole SIS of W7-X. A brief introduction to the development process of the SIS and its technical realization is given, followed by a description of the design and implementation of the SIS simulation system using the SIMIT framework (Siemens). Finally, first experiences of applying this simulation system in the preparation of the SIS for the upcoming operation phase OP 1.2b of W7-X are discussed.
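
    The hierarchical cSS/lSS structure described above can be mimicked by a very small model, useful only as an illustration of the aggregation logic and in no way representative of the W7-X implementation or the SIMIT-based simulator: each local Safety System reports whether its component is safe, and the central Safety System grants device-wide operation only while every registered lSS does so.

        # Toy model of the hierarchical safety logic (illustration only).
        class LocalSafetySystem:
            def __init__(self, name):
                self.name, self.safe = name, True

            def trip(self):                             # a local safety function fires
                self.safe = False

        class CentralSafetySystem:
            def __init__(self):
                self.local_systems = []

            def register(self, lss):
                self.local_systems.append(lss)

            def operation_permitted(self):
                return all(lss.safe for lss in self.local_systems)

        css = CentralSafetySystem()
        heating, cryo = LocalSafetySystem("heating"), LocalSafetySystem("cryostat")
        css.register(heating)
        css.register(cryo)
        print(css.operation_permitted())                # True
        heating.trip()                                  # simulated hazardous condition
        print(css.operation_permitted())                # False: the cSS withdraws the permit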

    Beyond Reuse Distance Analysis: Dynamic Analysis for Characterization of Data Locality Potential

    Emerging computer architectures will feature drastically decreased flops/byte ratios (peak processing rate to memory bandwidth), as highlighted by recent studies of Exascale architectural trends. Further, flops are getting cheaper while the energy cost of data movement is increasingly dominant. Understanding and characterizing the data locality properties of computations is therefore critical for guiding efforts to enhance data locality. Reuse distance analysis of memory address traces is a valuable tool for performing data locality characterization of programs. A single reuse distance analysis can be used to estimate the number of cache misses in a fully associative LRU cache of any size, thereby providing estimates of the minimum bandwidth requirements at different levels of the memory hierarchy needed to avoid being bandwidth bound. However, such an analysis only holds for the particular execution order that produced the trace. It cannot estimate the potential improvement in data locality through dependence-preserving transformations that change the execution schedule of the operations in the computation. In this article, we develop a novel dynamic analysis approach to characterize the inherent locality properties of a computation and thereby assess the potential for data locality enhancement via dependence-preserving transformations. The execution trace of a code is analyzed to extract a computational directed acyclic graph (CDAG) of the data dependences. The CDAG is then partitioned into convex subsets, and the convex partitioning is used to reorder the operations in the execution trace to enhance data locality. The approach enables us to go beyond reuse distance analysis of a single specific execution order of the operations of a computation when characterizing its data locality properties. It can play a valuable role in identifying promising code regions for manual transformation, as well as in assessing the effectiveness of compiler transformations for data locality enhancement. We demonstrate the effectiveness of the approach on a number of benchmarks, including case studies where the potential revealed by the analysis is exploited to achieve lower data movement costs and better performance.
    Comment: Transactions on Architecture and Code Optimization (2014).
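
    The core property exploited above, that one reuse-distance histogram yields miss counts for a fully associative LRU cache of any size, is easy to see in a small sketch of the standard technique (not the article's tool): the reuse distance of a reference is the number of distinct addresses touched since the previous access to the same address, and a reference misses in a cache of capacity C exactly when its distance is at least C.

        # Minimal reuse-distance analysis over an address trace (illustrative, O(n^2)).
        def reuse_distances(trace):
            stack, dists = [], []                       # LRU stack, most recent at the end
            for addr in trace:
                if addr in stack:
                    i = stack.index(addr)
                    dists.append(len(stack) - 1 - i)    # distinct addresses since last use
                    stack.pop(i)
                else:
                    dists.append(float("inf"))          # cold miss
                stack.append(addr)                      # addr becomes most recently used
            return dists

        def lru_misses(dists, cache_size):
            """A reference misses in a fully associative LRU cache iff its distance >= size."""
            return sum(d >= cache_size for d in dists)

        dists = reuse_distances(["a", "b", "c", "a", "b", "c", "a"])
        print(lru_misses(dists, 2), lru_misses(dists, 4))   # 7 misses with 2 lines, 3 with 4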

    An artefact repository to support distributed software engineering

    The Open Source Component Artefact Repository (OSCAR) system is a component of the GENESIS platform designed to inter-operate non-invasively with work-flow management systems, development tools and existing repository systems to support a distributed software engineering team working collaboratively. Every artefact possesses a collection of associated meta-data, both standard and domain-specific, presented as an XML document. Within OSCAR, artefacts are made aware of changes to related artefacts using notifications, allowing them to actively modify their own meta-data, in contrast to other software repositories where users must perform any and all modifications, however trivial. This recording of events, including user interactions, provides a complete picture of an artefact's life from creation to (eventual) retirement, with the intention of supporting collaboration both amongst the members of the software engineering team and the agents acting on their behalf.
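
    A minimal sketch of the notification behaviour described above, with hypothetical names rather than the OSCAR/GENESIS API: when an artefact changes, related artefacts are notified and update their own meta-data and event history without user intervention.

        # Hypothetical sketch: artefacts with meta-data, event history and change notifications.
        import datetime

        class Artefact:
            def __init__(self, name):
                self.name = name
                self.metadata = {"status": "current"}   # standard + domain-specific meta-data
                self.events = []                        # life history, creation to retirement
                self.watchers = []                      # related artefacts to notify

            def relate(self, other):
                self.watchers.append(other)

            def change(self, description):
                self.events.append((datetime.datetime.now(), description))
                for w in self.watchers:
                    w.notify(self, description)

            def notify(self, source, description):
                # The artefact updates its own meta-data actively, with no user action.
                self.metadata["status"] = "possibly-stale"
                self.events.append((datetime.datetime.now(), f"{source.name} changed: {description}"))

        design, code = Artefact("design-doc.xml"), Artefact("module.c")
        design.relate(code)
        design.change("revised interface section")
        print(code.metadata["status"])                  # possibly-stale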

    Virtue integrated platform: holistic support for distributed ship hydrodynamic design

    Ship hydrodynamic design today is often still done in a sequential approach. Tools used for the different aspects of CFD (Computational Fluid Dynamics) simulation (e.g. wave resistance, cavitation, seakeeping, and manoeuvring), and even for the different levels of detail within a single aspect, are often poorly integrated. The VIRTUE (VIRtual Tank Utility in Europe) project has the objective of developing a platform that will enable various distributed CFD and design applications to be integrated so that they may operate in a unified and holistic manner. This paper presents an overview of the VIRTUE Integrated Platform (VIP): its research background, objectives, current work, user requirements, system architecture, implementation, evaluation, and current and future development.