    Exploring heterogeneity of unreliable machines for p2p backup

    A P2P architecture is a viable option for enterprise backup. In contrast to dedicated backup servers, today's standard solution, making backups directly on an organization's workstations should be cheaper (as existing hardware is used), more efficient (as there is no single bottleneck server) and more reliable (as the machines are geographically dispersed). We present the architecture of a p2p backup system that uses pairwise replication contracts between a data owner and a replicator. In contrast to standard p2p storage systems that use a DHT directly, the contracts allow our system to optimize replica placement according to a specific optimization strategy, and thus to take advantage of the heterogeneity of the machines and the network. Such optimization is particularly appealing in the context of backup: replicas can be geographically dispersed, the load sent over the network can be minimized, or the goal can be to minimize the backup/restore time. However, managing the contracts, keeping them consistent and adjusting them in response to a dynamically changing environment is challenging. We built a scientific prototype and ran experiments on 150 workstations in the university's computer laboratories and, separately, on 50 PlanetLab nodes. We found that the main factor affecting the quality of the system is the availability of the machines. Yet our main conclusion is that it is possible to build an efficient and reliable backup system on highly unreliable machines (our computers had just 13% average availability).
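    To make the placement idea more concrete, the minimal Python sketch below shows one possible way to pick replication partners under pairwise contracts, preferring highly available machines and spreading replicas across sites. It is an illustration only; the names (Machine, availability, site, choose_replicators) are assumptions for the example and do not come from the paper, whose actual optimization strategies may differ.

# Hypothetical sketch: availability- and locality-aware choice of replicators
# for a data owner. Class and field names are illustrative, not the paper's API.
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    availability: float  # fraction of time the machine is online (0.0 - 1.0)
    site: str            # coarse location, used to disperse replicas

def choose_replicators(owner: Machine, candidates: list[Machine], k: int) -> list[Machine]:
    """Pick k replication partners: prefer highly available machines,
    and spread replicas across sites different from the owner's."""
    ranked = sorted(
        (m for m in candidates if m.name != owner.name),
        key=lambda m: (m.site == owner.site, -m.availability),
    )
    chosen, used_sites = [], set()
    for m in ranked:
        if m.site in used_sites:   # at most one replica per site while possible
            continue
        chosen.append(m)
        used_sites.add(m.site)
        if len(chosen) == k:
            return chosen
    for m in ranked:               # fall back to the best remaining machines
        if m not in chosen:
            chosen.append(m)
            if len(chosen) == k:
                break
    return chosen

if __name__ == "__main__":
    owner = Machine("lab-a-01", 0.13, "lab-a")
    peers = [Machine(f"lab-{s}-{i:02d}", a, f"lab-{s}")
             for s, i, a in [("a", 2, 0.4), ("b", 1, 0.7), ("b", 2, 0.2), ("c", 1, 0.5)]]
    print([m.name for m in choose_replicators(owner, peers, 2)])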

    Beyond The Cloud, How Should Next Generation Utility Computing Infrastructures Be Designed?

    To accommodate the ever-increasing demand for Utility Computing (UC) resources, while taking into account both energy and economic issues, the current trend consists in building larger and larger data centers in a few strategic locations. Although such an approach makes it possible to cope with the actual demand while continuing to operate UC resources through a centralized software system, it is far from delivering sustainable and efficient UC infrastructures. We claim that a disruptive change in UC infrastructures is required: UC resources should be managed differently, considering locality as a primary concern. We propose to leverage any facilities available through the Internet in order to deliver widely distributed UC platforms that can better match the geographical dispersal of users as well as the unending demand. Critical to the emergence of such locality-based UC (LUC) platforms is the availability of appropriate operating mechanisms. In this paper, we advocate the implementation of a unified system driving the use of resources at an unprecedented scale by turning a complex and diverse infrastructure into a collection of abstracted computing facilities that is both easy to operate and reliable. By deploying and using such a LUC Operating System on network backbones, our ultimate vision is to make it possible to host and operate a large part of the Internet on its own internal structure: a scalable and nearly infinite set of resources delivered by the computing facilities that form the Internet, from the larger hubs operated by ISPs, governments and academic institutions down to any idle resources that may be provided by end-users. Unlike previous research on distributed operating systems, we propose to consider virtual machines (VMs) instead of processes as the basic element. System virtualization offers several capabilities that increase the flexibility of resource management, allowing us to investigate novel decentralized schemes.
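    As a purely illustrative sketch of locality-first management with the VM as the basic unit, the Python fragment below places a requested VM on the facility with spare capacity that is estimated to be closest to the user. All names (Facility, VMRequest, place_vm, latency_ms) are assumptions made for the example and do not come from the report.

# Hypothetical sketch, not the report's system: a toy scheduler that treats the
# VM (rather than the process) as the unit of management and places it on the
# facility nearest to the requesting user's region.
from dataclasses import dataclass, field

@dataclass
class Facility:
    name: str
    free_slots: int
    latency_ms: dict[str, float] = field(default_factory=dict)  # region -> RTT estimate

@dataclass
class VMRequest:
    vm_id: str
    user_region: str

def place_vm(req: VMRequest, facilities: list[Facility]) -> Facility | None:
    """Locality-first placement: pick the facility with capacity that has the
    lowest estimated latency to the user's region."""
    usable = [f for f in facilities if f.free_slots > 0 and req.user_region in f.latency_ms]
    if not usable:
        return None
    best = min(usable, key=lambda f: f.latency_ms[req.user_region])
    best.free_slots -= 1
    return best

if __name__ == "__main__":
    sites = [
        Facility("isp-hub-paris", 4, {"fr-west": 8.0, "de-south": 22.0}),
        Facility("campus-rennes", 1, {"fr-west": 3.0}),
    ]
    target = place_vm(VMRequest("vm-42", "fr-west"), sites)
    print(target.name if target else "no capacity")  # -> campus-rennes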

    Assuring IT Services Quality through High-Reliability Risk Management in Offshore Business Process Outsourcing

    Management of risks that emanate from offshore IT-enabled services outsourcing is a key challenge in IT management today. These new risks need to be addressed through the development of new risk management frameworks and methods. This paper proposes a risk management approach for offshore business process outsourcing based on the principles of high-reliability organizations (HROs). Relying on and using the organizing principles and mechanisms of HROs may help companies successfully combat the risks associated with offshore outsourcing. The proposed research model is to be empirically tested using data collected from offshore business process outsourcing projects of client firms in the US.