141 research outputs found

    Adaptive Redundancy Management for Durable P2P Backup

    We design and analyze the performance of a redundancy management mechanism for Peer-to-Peer backup applications. Armed with the realization that a backup system has peculiar requirements -- namely, data is read over the network only during restore processes caused by data loss -- redundancy management targets data durability rather than attempting to make each piece of information available at any time. In our approach each peer determines, in an on-line manner, an amount of redundancy sufficient to counter the effects of peer deaths, while preserving acceptable data restore times. Our experiments, based on trace-driven simulations, indicate that our mechanism can reduce the redundancy by a factor between two and three with respect to redundancy policies aiming for data availability. These results imply a corresponding increase in storage capacity and decrease in time to complete backups, at the expense of longer times required to restore data. We believe this is a very reasonable price to pay, given the nature of the application. We complete our work with a discussion of practical issues, and their solutions, related to which encoding technique is best suited to support our scheme.
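
    The abstract does not give the exact estimator each peer runs. Purely as an illustration of durability-oriented redundancy management, the Python sketch below (all parameter names, such as p_death and target, are assumptions) picks the smallest erasure-coding stretch n/k whose probability of keeping at least k of n fragments alive over one maintenance window meets a durability target, assuming independent peer deaths. Rerunning the search whenever the estimated death rate changes reflects the on-line aspect the abstract describes.

```python
from math import comb

def survival_probability(n: int, k: int, p_death: float) -> float:
    """Probability that at least k of n fragments survive one maintenance
    window, assuming independent peer deaths with probability p_death."""
    p_live = 1.0 - p_death
    return sum(
        comb(n, i) * p_live**i * p_death**(n - i)
        for i in range(k, n + 1)
    )

def choose_redundancy(k: int, p_death: float, target: float = 0.9999,
                      n_max: int = 512) -> int:
    """Smallest fragment count n >= k whose survival probability meets
    the durability target (illustrative, not the paper's estimator)."""
    for n in range(k, n_max + 1):
        if survival_probability(n, k, p_death) >= target:
            return n
    raise ValueError("durability target unreachable within n_max")

if __name__ == "__main__":
    k = 64                 # data fragments per backup object (example value)
    p_death = 0.05         # assumed peer death rate per maintenance window
    n = choose_redundancy(k, p_death)
    print(f"n = {n}, redundancy factor = {n / k:.2f}")
```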

    Economic resilience and crowdsourcing platforms

    The increased interdependence and complexity of modern societies have increased the need to involve all members of a community in solving problems. In times of great uncertainty, when communities face threats of different kinds and magnitudes, the traditional top-down approach in which government alone provides for community wellbeing is no longer plausible. Crowdsourcing has emerged as an effective means of empowering communities, with the potential to engage individuals in innovation, self-organization activities, informal learning, mutual support, and political action that can all lead to resilience. However, research on the topic remains limited. In this paper, we outline the various forms of crowdsourcing, economic and community resilience, the relationship between crowdsourcing and economic resilience, and a case study of the Nepal earthquake. This article presents an exploratory perspective on the link between crowdsourcing and economic resilience. It introduces and describes a framework that future research can use to study the impact of crowdsourcing initiatives on economic resilience. An initial set of indicators for measuring changes in the level of resilience is presented.

    Solving key design issues for massively multiplayer online games on peer-to-peer architectures

    Massively Multiplayer Online Games (MMOGs) are increasing in both popularity and scale on the Internet and are predominantly implemented by Client/Server architectures. While such a classical approach to distributed system design offers many benefits, it suffers from significant technical and commercial drawbacks, primarily reliability and scalability costs. This realisation has sparked recent research interest in adapting MMOGs to Peer-to-Peer (P2P) architectures. This thesis identifies six key design issues to be addressed by P2P MMOGs, namely interest management, event dissemination, task sharing, state persistency, cheating mitigation, and incentive mechanisms. Design alternatives for each issue are systematically compared, and their interrelationships discussed. How well representative P2P MMOG architectures fulfil the design criteria is also evaluated. It is argued that although P2P MMOG architectures are developing rapidly, their support for task sharing and incentive mechanisms still needs to be improved. The design of a novel framework for P2P MMOGs, Mediator, is presented. It employs a self-organising super-peer network over a P2P overlay infrastructure, and addresses the six design issues in an integrated system. The Mediator framework is extensible, as it supports flexible policy plug-ins and can accommodate the introduction of new super-peer roles. Key components of this framework have been implemented and evaluated with a simulated P2P MMOG. As the Mediator framework relies on super-peers for computational and administrative tasks, membership management is crucial, e.g. to allow the system to recover from super-peer failures. A new technology for this, namely Membership-Aware Multicast with Bushiness Optimisation (MAMBO), has been designed, implemented and evaluated. It reuses the communication structure of a tree-based application-level multicast to track group membership efficiently. Evaluation of a demonstration application shows that MAMBO is able to quickly detect and handle peers joining and leaving. Compared to a conventional supervision architecture, MAMBO is more scalable, yet incurs lower communication overhead. Besides MMOGs, MAMBO is suitable for other P2P applications, such as collaborative computing and multimedia streaming. This thesis also presents the design, implementation and evaluation of a novel task mapping infrastructure for heterogeneous P2P environments, Deadline-Driven Auctions (DDA). DDA is primarily designed to support NPC host allocation in P2P MMOGs, and specifically in the Mediator framework. However, it can also support the sharing of computational and interactive tasks with various deadlines in general P2P applications. Experimental and analytical results demonstrate that DDA efficiently allocates computing resources for large numbers of real-time NPC tasks in a simulated P2P MMOG with approximately 1000 players. Furthermore, DDA supports gaming interactivity by keeping the communication latency among NPC hosts and ordinary players low. It also supports flexible matchmaking policies, and can motivate application participants to contribute resources to the system.
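
    The thesis's actual bidding and matchmaking rules are not reproduced in this abstract. Purely as a toy illustration of deadline-driven task mapping (the names Bid and assign_task are made up for this sketch), the fragment below discards bids that would miss an NPC task's deadline and awards the task to the lowest-latency remaining host, reflecting the interactivity goal mentioned above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bid:
    host_id: str
    completion_time: float   # estimated seconds the host needs to finish the NPC task
    latency_ms: float        # round-trip latency to the players involved

def assign_task(deadline: float, bids: list[Bid]) -> Optional[str]:
    """Toy deadline-driven auction: drop bids that miss the deadline,
    then prefer the lowest-latency host among the feasible ones."""
    feasible = [b for b in bids if b.completion_time <= deadline]
    if not feasible:
        return None          # no host can meet the deadline; the task is rejected
    winner = min(feasible, key=lambda b: b.latency_ms)
    return winner.host_id

if __name__ == "__main__":
    bids = [Bid("peerA", 0.8, 120.0), Bid("peerB", 0.5, 45.0), Bid("peerC", 1.4, 20.0)]
    print(assign_task(deadline=1.0, bids=bids))   # -> "peerB"
```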

    Dynamic data consistency maintenance in peer-to-peer caching system

    Master's thesis (Master of Science).

    Design, Implementation, and Performance Analysis of In-Home Video based Monitoring System for Patients with Dementia

    Dementia is a major public health problem affecting 35 million people in the USA. The caregivers of dementia patients experience many types of physical and psychological stress while dealing with the patients' disruptive behaviors. These behaviors also result in frequent hospitalizations and re-admissions. In this project we design, implement, and measure the performance of an advanced video-based monitoring system to aid caregivers in managing the behavioral symptoms of dementia patients. Caregivers can easily capture and share the antecedents, consequences, and function of a behavior through a video clip, and get real-time feedback from clinical experts. Overall, the system will help reduce hospital admissions and readmissions, improve the quality of life for caregivers, and in general lower the cost of health care systems. The system is developed using Python scripts, open-source web frameworks, the FFmpeg tool chain, and a commercial off-the-shelf IP camera and mini-PC. WebRTC is used for video-based coaching of caregivers. A framework has been developed to evaluate the storage and retrieval latency of video clips to public and on-premises clouds, video streaming performance in LAN and WLAN environments, and WebRTC performance in different types of access networks. An InstaGENI rack, a GENI rack at KU, is used as the on-premises cloud infrastructure for the evaluation. OpenSSL utilities are employed for secure transport and storage of the captured video clips. We conducted trials on the Google Fiber ISP in Kansas City and compared the performance with other traditional ISPs.
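
    The project's evaluation scripts are not part of this abstract. As a minimal sketch of the kind of latency instrumentation such a framework implies, the fragment below shells out to the FFmpeg and OpenSSL command-line tools named above and times each step; the file names camera_capture.mp4, clip.mp4, and clip.key are placeholders, not paths from the paper.

```python
import subprocess
import time

def run_timed(cmd: list[str]) -> float:
    """Run a command and return its wall-clock duration in seconds."""
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    return time.monotonic() - start

# Trim a 30-second behaviour clip without re-encoding (requires FFmpeg on PATH).
trim_s = run_timed([
    "ffmpeg", "-y", "-i", "camera_capture.mp4", "-t", "30", "-c", "copy", "clip.mp4",
])

# Encrypt the clip before upload, mirroring the OpenSSL-based secure transport/storage step.
encrypt_s = run_timed([
    "openssl", "enc", "-aes-256-cbc", "-pbkdf2", "-salt",
    "-in", "clip.mp4", "-out", "clip.mp4.enc", "-pass", "file:clip.key",
])

print(f"trim: {trim_s:.2f} s, encrypt: {encrypt_s:.2f} s")
```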

    Community computation

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 2009. Includes bibliographical references (p. 171-186). In this thesis we lay the foundations for a distributed, community-based computing environment that taps the resources of a community to better perform tasks -- computationally hard, economically prohibitive, or physically inconvenient -- that one individual is unable to accomplish efficiently. We introduce community coding, where information systems meet social networks, to tackle some of the challenges in this new paradigm of community computation. We design algorithms and protocols and build system prototypes to demonstrate the power of community computation to better deal with reliability, scalability, and security issues, which are the main challenges in many emerging community-computing environments, in several application scenarios such as community storage, community sensing, and community security. For example, we develop a community storage system based upon a distributed P2P (peer-to-peer) storage paradigm, in which we take an array of small, periodically accessible, individual computers/peer nodes and create a secure, reliable, and large distributed storage system. The goal is for each node to act as if it has immediate access to a pool of information that is larger than it could hold itself, and into which it can contribute new content in a manner that is both open and secure. Such a contributory and self-scaling community storage system is particularly useful where reliable infrastructure is not readily available, since it facilitates easy ad hoc construction and easy portability. In another application scenario, we develop a novel framework for community sensing with a group of image sensors. The goal is to present a set of novel tools in which software, rather than humans, examines the collection of images sensed by a group of image sensors to determine what is happening in the field of view. We also present several design principles for community security. In one application example, we present a community-based email spam detection approach to deal with email spam more efficiently. by Fulu Li.
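
    The thesis's community storage design is only summarized above; the toy model below (the class name CommunityStore and its parameters are invented for illustration, not taken from the thesis) shows the contributory idea in miniature: fixed-size blocks are content-addressed by SHA-256 and assigned to a few peers via rendezvous-style hashing, so any contributor can add data to a pool larger than its own disk.

```python
import hashlib

class CommunityStore:
    """Toy model of a contributory storage pool: fixed-size blocks are
    content-addressed by SHA-256 and replicated on several peers."""

    def __init__(self, peers: list[str], replicas: int = 3, block_size: int = 4096):
        self.peers = peers
        self.replicas = replicas
        self.block_size = block_size
        self.placement: dict[str, list[str]] = {}   # block hash -> peer ids holding it
        self.blocks: dict[str, bytes] = {}          # local stand-in for remote peer storage

    def put(self, data: bytes) -> list[str]:
        """Split data into blocks and record which peers hold each copy."""
        digests = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            # Deterministically pick `replicas` peers, rendezvous-hashing style.
            ranked = sorted(self.peers,
                            key=lambda p: hashlib.sha256((digest + p).encode()).hexdigest())
            self.placement[digest] = ranked[:self.replicas]
            self.blocks[digest] = block
            digests.append(digest)
        return digests

    def get(self, digests: list[str]) -> bytes:
        """Reassemble the data, which survives as long as each block is held by some peer."""
        return b"".join(self.blocks[d] for d in digests)

if __name__ == "__main__":
    store = CommunityStore(peers=[f"peer{i}" for i in range(10)])
    manifest = store.put(b"community data" * 1000)
    assert store.get(manifest) == b"community data" * 1000
```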

    CORE: Augmenting Regenerating-Coding-Based Recovery for Single and Concurrent Failures in Distributed Storage Systems

    Data availability is critical in distributed storage systems, especially when node failures are prevalent in real life. A key requirement is to minimize the amount of data transferred among nodes when recovering the lost or unavailable data of failed nodes. This paper explores recovery solutions based on regenerating codes, which are shown to provide fault-tolerant storage and minimum recovery bandwidth. Existing optimal regenerating codes are designed for single node failures. We build a system called CORE, which augments existing optimal regenerating codes to support a general number of failures, including single and concurrent failures. We theoretically show that CORE achieves the minimum possible recovery bandwidth in most cases. We implement CORE and evaluate our prototype atop a Hadoop HDFS cluster testbed with up to 20 storage nodes. We demonstrate that our CORE prototype conforms to our theoretical findings and achieves recovery bandwidth savings compared to the conventional recovery approach based on erasure codes. Comment: 25 pages.
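
    CORE's code constructions are not given in the abstract. As a back-of-the-envelope reminder of why regenerating codes reduce single-failure recovery bandwidth relative to conventional erasure-coded repair, the sketch below evaluates the standard minimum-storage regenerating (MSR) repair-traffic expression d*M/(k*(d-k+1)) against downloading the whole object; the parameters (n, k, d) are examples, not those used in the paper.

```python
def conventional_repair(M: float, k: int) -> float:
    """Conventional erasure-code repair: download k fragments, i.e. the whole object."""
    return M

def msr_repair(M: float, k: int, d: int) -> float:
    """Total traffic to repair one node with a minimum-storage regenerating
    code using d helper nodes (d >= k): d * M / (k * (d - k + 1))."""
    assert d >= k
    return d * M / (k * (d - k + 1))

if __name__ == "__main__":
    M, n, k = 1.0, 10, 6          # object size (normalised) and example code parameters
    d = n - 1                     # use all surviving nodes as helpers
    print(f"erasure-code repair: {conventional_repair(M, k):.3f} x object size")
    print(f"MSR repair (d={d}):  {msr_repair(M, k, d):.3f} x object size")
```

    With these example parameters, MSR repair moves about 0.375 of the object instead of the full object, which is the flavour of bandwidth saving the paper quantifies on its testbed.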