170 research outputs found

    Vertical and horizontal elasticity for dynamic virtual machine reconfiguration

    Today, cloud computing applications are rapidly constructed from services belonging to different cloud providers and service owners. This work presents an inter-cloud elasticity framework that focuses on cloud load balancing based on dynamic virtual machine reconfiguration when variations in load or in the volume of user requests are observed. We design a dynamic reconfiguration system, called the inter-cloud load balancer (ICLB), that scales virtual resources up or down (thus providing automated elasticity) while eliminating service downtimes and communication failures. It includes an inter-cloud load balancer that distributes incoming user HTTP traffic across multiple instances of inter-cloud applications and services, and it dynamically reconfigures resources according to real-time requirements. The experimental analysis covers different topologies, showing how real-time traffic variation (using real-world workloads) affects resource utilization and achieving better resource usage in inter-cloud
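    The combination described in this abstract, round-robin HTTP dispatch plus threshold-driven scaling, can be illustrated with a small sketch. This is not the ICLB implementation; the class name, thresholds, and VM naming are all assumptions for illustration.

```python
# Hypothetical sketch of round-robin dispatch with threshold-based
# elasticity; all names and threshold values are assumptions.
import itertools

class InterCloudLoadBalancer:
    def __init__(self, instances, scale_up_at=0.8, scale_down_at=0.2):
        self.instances = list(instances)   # VM instances across clouds
        self.scale_up_at = scale_up_at     # utilization fraction triggers
        self.scale_down_at = scale_down_at
        self._rr = itertools.cycle(self.instances)

    def route(self, request):
        # Distribute incoming HTTP traffic round-robin across instances.
        return next(self._rr)

    def reconfigure(self, utilization):
        # Elasticity: add or remove a VM when load crosses a threshold.
        if utilization > self.scale_up_at:
            self.instances.append(f"vm-{len(self.instances)}")
        elif utilization < self.scale_down_at and len(self.instances) > 1:
            self.instances.pop()
        self._rr = itertools.cycle(self.instances)
```

    Recreating the round-robin iterator after each reconfiguration keeps routing consistent with the current instance pool.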

    The influence of the specific rehabilitation techniques "toe grinding" and "WIG remelting" in the case of welded structures

    Fillet welded structures subjected to fatigue must have a concave shape in cross section and a smooth transition between the weld seam and the base material, without stress concentrators. For welded structures with convex fillet welds, it is therefore necessary to apply rehabilitation techniques such as toe grinding or WIG remelting to improve behavior under fatigue loads. The paper presents the influence of these two rehabilitation techniques and the behavior of the rehabilitated welded structures under static and dynamic loads

    Data Replication Strategies for Fault Tolerance and Availability on Commodity Clusters

    Recent work has shown the advantages of using persistent memory for transaction processing. In particular, the Vista transaction system uses recoverable memory to avoid disk I/O, thus improving performance by several orders of magnitude. In such a system, however, the data is safe when a node fails, but unavailable until it recovers, because the data is kept in only one memory. In contrast, our work uses data replication to provide both reliability and data availability while still maintaining very high transaction throughput. We investigate four possible designs for a primary-backup system, using a cluster of commodity servers connected by a write-through capable system area network (SAN). We show that logging approaches outperform mirroring approaches, even when communicating more data, because of their better locality. Finally, we show that the best logging approach also scales well to small shared-memory multiprocessors
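    The locality argument in this abstract, that logging can beat mirroring even while some individual writes are duplicated, comes down to how many bytes each design ships to the backup. A toy model (page size, header size, and function names are assumptions, not the paper's design) makes the comparison concrete:

```python
# Toy byte-count model of the two primary-backup designs: mirroring ships
# each dirtied page in full, logging ships only the modified bytes plus a
# small per-record header. Sizes are illustrative assumptions.
PAGE_SIZE = 4096
LOG_HEADER = 16

def mirror_bytes(writes):
    # writes: list of (page_id, offset, length) tuples.
    # Mirroring sends every dirty page once, regardless of how little
    # of the page was actually modified.
    return len({page for page, _, _ in writes}) * PAGE_SIZE

def log_bytes(writes):
    # Logging sends one compact record per write: header + changed bytes.
    return sum(LOG_HEADER + length for _, _, length in writes)
```

    For small, scattered writes the log records are far smaller than the pages they touch, which matches the abstract's claim that logging wins "even when communicating more data" per write.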

    Distributed Versioning: Consistent Replication for Scaling Back-end Databases of Dynamic Content Sites

    Dynamic content Web sites consist of a front-end Web server, an application server and a back-end database. In this paper we introduce distributed versioning, a new method for scaling the back-end database through replication. Distributed versioning provides both the consistency guarantees of eager replication and the scaling properties of lazy replication. It does so by combining a novel concurrency control method based on explicit versions with conflict-aware query scheduling that reduces the number of lock conflicts. We evaluate distributed versioning using three dynamic content applications: the TPC-W e-commerce benchmark with its three workload mixes, an auction site benchmark, and a bulletin board benchmark. We demonstrate that distributed versioning scales better than previous methods that provide consistency. Furthermore, we demonstrate that the benefits of relaxing consistency are limited, except for the conflict-heavy TPC-W ordering mix
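    The core idea of explicit versions, as summarized in this abstract, can be sketched as a scheduler that tags each write with a table version and releases a read only once the version it depends on has been produced. This is a minimal illustration under assumed names, not the paper's actual scheduler.

```python
# Minimal sketch of explicit versioning for conflict-aware scheduling:
# each table carries a version counter; a write is assigned the next
# version, and a read tagged with a version runs only after that version
# has been completed, replacing lock conflicts with version waits.
class VersionScheduler:
    def __init__(self, tables):
        self.assigned = {t: 0 for t in tables}    # versions handed out
        self.completed = {t: 0 for t in tables}   # versions produced

    def begin_write(self, table):
        # Assign the writing transaction the table's next version.
        self.assigned[table] += 1
        return self.assigned[table]

    def finish_write(self, table, version):
        # Mark the version as produced, unblocking dependent reads.
        self.completed[table] = version

    def can_read(self, table, version):
        # A read tagged with `version` may run once it is available.
        return self.completed[table] >= version
```

    Because version assignment happens up front, the scheduler knows the exact order of conflicting operations without holding long-lived locks.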

    Compiler and Software Distributed Shared Memory Support for Irregular Applications

    We investigate the use of a software distributed shared memory (DSM) layer to support irregular computations on distributed memory machines. Software DSM supports irregular computation through demand fetching of data in response to memory access faults. With the addition of a very limited form of compiler support, namely the identification of the section of the indirection array accessed by each processor, many of these on-demand page fetches can be aggregated into a single message, and prefetched prior to the access fault. We have measured the performance of this approach for two irregular applications, moldyn and nbf, using the TreadMarks DSM system on an 8-processor IBM SP2. We find that it has similar performance to the inspector-executor method supported by the CHAOS run-time library, while requiring much simpler compile-time support. For moldyn, it is up to 23% faster than CHAOS, depending on the input problem's characteristics; and for nbf, it is no worse than 14% slower. If we include the execution time of the inspector, the software DSM-based approach is always faster than CHAOS. The advantage of this approach increases as the frequency of changes to the indirection array increases. The disadvantage of this approach is the potential for false sharing overhead when the data set is small or has poor spatial locality
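    The aggregation step this abstract describes, turning many on-demand page faults into one prefetch message, amounts to mapping the indirection-array section a processor will access onto the set of pages it touches. The sketch below uses assumed page and element sizes purely for illustration:

```python
# Sketch of compiler-assisted prefetch aggregation: given the slice of
# the indirection array a processor will access, collect the distinct
# pages those elements fall on so one aggregated fetch message can
# replace many individual page faults. Sizes are assumptions.
PAGE_SIZE = 4096
ELEM_SIZE = 8

def pages_to_prefetch(indirection_slice):
    # Map each indexed element to its page number, deduplicate, and
    # sort to form a single aggregated prefetch request.
    return sorted({(idx * ELEM_SIZE) // PAGE_SIZE for idx in indirection_slice})
```

    Duplicated and clustered indices collapse to the same page, which is exactly where the single aggregated message saves round trips over fault-by-fault fetching.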

    Choosing the optimum material for making a bicycle frame

    This paper presents the results obtained from Finite Element Method (FEM) simulations of a bicycle frame made of three different materials, Al6061, carbon fiber, and Ti6Al4V, in order to identify the optimum material for manufacturing the frame. The parts of the frames made of Al6061 and Ti6Al4V would be joined using the WIG (wolfram inert gas) method. Considering the results obtained from the experiments, the optimum material for making the bike frame is Ti6Al4V, but the main impediment to large-scale use of this material is its high cost

    Anthropometric indices of Gambian children after one or three annual rounds of mass drug administration with azithromycin for trachoma control.

    BACKGROUND: Mass drug administration (MDA) with azithromycin, carried out for the control of blinding trachoma, has been linked to reduced mortality in children. While the mechanism behind this reduction is unclear, it may be due, in part, to improved nutritional status via a potential reduction in the community burden of infectious disease. To determine whether MDA with azithromycin improves anthropometric indices at the community level, we measured the heights and weights of children aged 1 to 4 years in communities where one (single MDA arm) or three annual rounds (annual MDA arm) of azithromycin had been distributed. METHODS: Data collection took place three years after treatment in the single MDA arm and one year after the final round of treatment in the annual MDA arm. Mean height-for-age, weight-for-age and weight-for-height z scores were compared between treatment arms. RESULTS: No significant differences in mean height-for-age, weight-for-age or weight-for-height z scores were found between the annual MDA and single MDA arms, nor was there a significant reduction in prevalence of stunting, wasting or underweight between arms. CONCLUSIONS: Our data do not provide evidence that community MDA with azithromycin improved anthropometric outcomes of children in The Gambia. This may suggest reductions in mortality associated with azithromycin MDA are due to a mechanism other than improved nutritional status