
    Understanding collaboration in volunteer computing systems

    Volunteer computing is a paradigm in which devices participating in a distributed environment share part of their resources to help others perform their activities. The effectiveness of this paradigm depends on the collaboration attitude adopted by the participating devices. Unfortunately, it is not clear to software designers how a device can contribute local resources to the shared environment without compromising resources that the contributor may later require. Therefore, many designers adopt a conservative position when defining the collaboration strategy to embed in volunteer computing applications. This position produces an underutilization of the devices' local resources and reduces the effectiveness of these solutions. This article presents a study that helps designers understand the impact of adopting a particular collaboration attitude when contributing local resources to the distributed shared environment. The study considers five collaboration strategies, which are analyzed in computing environments with both abundance and scarcity of resources. The results indicate that collaboration strategies based on effort-based incentives work better than those using contribution-based incentives, and that effort-based incentives do not jeopardize the availability of local resources for local needs.
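    The abstract contrasts effort-based and contribution-based incentives without giving their formulas. The sketch below is a hypothetical illustration of the distinction only; the class, field names, and scoring rules are assumptions, not taken from the paper. The intuition: a contribution-based score rewards the absolute amount of resources donated, while an effort-based score rewards the donated fraction of a device's capacity, so a weak device that tries hard still earns credit.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str
        capacity: float   # total local resources (e.g., CPU-hours available)
        donated: float    # resources actually shared with the environment

    def contribution_score(d: Device) -> float:
        """Contribution-based incentive: credit for the absolute amount donated."""
        return d.donated

    def effort_score(d: Device) -> float:
        """Effort-based incentive: credit for the donated fraction of capacity."""
        return d.donated / d.capacity if d.capacity > 0 else 0.0

    # Under effort-based scoring, a weak device donating half its capacity
    # outranks a strong device donating a tiny fraction; under
    # contribution-based scoring the ranking reverses.
    devices = [Device("phone", capacity=2.0, donated=1.0),
               Device("server", capacity=100.0, donated=5.0)]
    for d in devices:
        print(f"{d.name}: contribution={contribution_score(d):.2f}, "
              f"effort={effort_score(d):.2f}")
    ```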

    TOFEC: Achieving Optimal Throughput-Delay Trade-off of Cloud Storage Using Erasure Codes

    Our paper presents solutions that combine erasure coding, parallel connections to the storage cloud, and limited chunking (i.e., dividing an object into a few smaller segments) to significantly improve the delay performance of uploading data to and downloading data from cloud storage. TOFEC is a strategy that helps a front-end proxy adapt to the level of workload by treating scalable cloud storage (e.g., Amazon S3) as a shared resource requiring admission control. Under light workloads, TOFEC creates more, smaller chunks and uses more parallel connections per file, minimizing service delay. Under heavy workloads, TOFEC automatically reduces the level of chunking (fewer chunks of increased size) and uses fewer parallel connections to reduce overhead, resulting in higher throughput and preventing queueing delay. Our trace-driven simulation results show that TOFEC's adaptation mechanism converges to an appropriate code that provides the optimal delay-throughput trade-off without reducing system capacity. Compared to a non-adaptive strategy optimized for throughput, TOFEC delivers 2.5x lower latency under light workloads; compared to a non-adaptive strategy optimized for latency, TOFEC can scale to support over 3x as many requests.
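    A minimal sketch of the workload-driven adaptation the abstract describes, under stated assumptions: the queue-length thresholds and the (n, k) code table below are invented for illustration, and the erasure coding itself is omitted; the paper derives its own adaptation mechanism. The shape of the idea: light load picks a code with many small chunks and many parallel requests, heavy load collapses toward no chunking.

    ```python
    # Hypothetical TOFEC-style adaptation: thresholds and code table are assumptions.

    def choose_code(queue_length: int) -> tuple[int, int]:
        """Pick an (n, k) erasure code: the object is split into k data chunks,
        n coded chunks are requested in parallel, and the fastest k suffice."""
        if queue_length < 4:        # light load: aggressive chunking, low delay
            return (8, 4)
        elif queue_length < 16:     # moderate load: fewer chunks, less overhead
            return (4, 2)
        else:                       # heavy load: no chunking, maximize throughput
            return (1, 1)

    def split(data: bytes, k: int) -> list[bytes]:
        """Divide an object into k roughly equal segments (coding step omitted)."""
        step = -(-len(data) // k)   # ceiling division
        return [data[i:i + step] for i in range(0, len(data), step)]

    n, k = choose_code(queue_length=2)
    chunks = split(b"x" * 1000, k)
    print(f"code=({n},{k}), {len(chunks)} chunks of ~{len(chunks[0])} bytes")
    ```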

    The Design and Implementation of a Wireless Video Surveillance System

    Internet-enabled cameras pervade daily life, generating a huge amount of data, but most of the video they generate is transmitted over wires and analyzed offline with a human in the loop. The sheer ubiquity of cameras limits the amount of video that can be sent to the cloud, especially on wireless networks where capacity is at a premium. In this paper, we present Vigil, a real-time distributed wireless surveillance system that leverages edge computing to support real-time tracking and surveillance in enterprise campuses, retail stores, and across smart cities. Vigil intelligently partitions video processing between edge computing nodes co-located with cameras and the cloud to save wireless capacity, which can then be dedicated to Wi-Fi hotspots, offsetting their cost. Novel video frame prioritization and traffic scheduling algorithms further optimize Vigil's bandwidth utilization. We have deployed Vigil across three sites, in both whitespace and Wi-Fi networks. Depending on the level of activity in the scene, experimental results show that Vigil supports a geographical area of coverage between five and 200 times greater than an approach that simply streams video over the wireless network. For a fixed region of coverage and bandwidth, Vigil outperforms the default equal-throughput allocation strategy of Wi-Fi by delivering up to 25% more objects relevant to a user's query.
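    The abstract mentions frame prioritization driven by a user's query but does not spell out the algorithm. The following is a hypothetical sketch of that idea, not Vigil's actual method: frames are scored by query-relevant objects found at the edge, then greedily admitted per byte of wireless budget. All names and the scoring rule are assumptions.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Frame:
        frame_id: int
        size_bytes: int
        relevant_objects: int   # objects matching the user's query, detected at the edge

    def prioritize(frames: list[Frame], budget_bytes: int) -> list[Frame]:
        """Greedily pick frames with the most query-relevant objects per byte
        until the wireless bandwidth budget is exhausted."""
        ranked = sorted(frames,
                        key=lambda f: f.relevant_objects / f.size_bytes,
                        reverse=True)
        chosen, used = [], 0
        for f in ranked:
            if f.relevant_objects > 0 and used + f.size_bytes <= budget_bytes:
                chosen.append(f)
                used += f.size_bytes
        return chosen

    frames = [Frame(1, 50_000, 3), Frame(2, 80_000, 0), Frame(3, 40_000, 2)]
    print([f.frame_id for f in prioritize(frames, budget_bytes=90_000)])  # [1, 3]
    ```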

    Image Transfer and Storage Cost Aware Brokering Strategies for Multiple Clouds

    Keywords: Cloud Brokering, Resource Allocation, Storage, Data Transfer, SimGrid Cloud Broker

    Nowadays, Clouds are used for hosting a large range of services, but pricing models and the prices of individual resources differ widely between Cloud Service Providers. Furthermore, hosting a service in a single Cloud is a major cause of service outages. To increase resiliency and minimize the monetary cost of running a service, it becomes necessary to span it across different Clouds. Moreover, due to the dynamicity of both the service and the Clouds, it may be necessary to migrate the service at run time. Accordingly, this ability must be integrated into the multi-Cloud resource manager, i.e., the Cloud broker. When migrating a VM to a new Cloud Service Provider, the VM disk image must be migrated too, so data storage and transfer must be taken into account when choosing whether and where an application will be migrated. In this paper, we extend a cost-optimization algorithm to take storage costs into account when approximating the optimal placement of a service. Data storage management involves two decisions: where to upload an image, and whether to keep it online for the experiment's lifetime or delete it when unused. Although the default approach may be to upload an image on demand and delete it when it is no longer used, we demonstrate that by adopting other policies the user can achieve better economic results.
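    The keep-versus-delete decision the abstract describes reduces to a break-even comparison. The sketch below is illustrative only: the rule, the function name, and the prices (a 10 GB image at roughly the S3 standard rate) are assumptions, while the paper evaluates concrete provider price models.

    ```python
    # Hypothetical break-even rule: keep the image stored if storing it through
    # the expected idle period costs less than deleting and re-uploading it.

    def keep_image_online(storage_cost_per_gb_month: float,
                          image_size_gb: float,
                          expected_idle_months: float,
                          reupload_cost: float) -> bool:
        """True if keeping the VM image online is cheaper than re-uploading it."""
        cost_to_keep = storage_cost_per_gb_month * image_size_gb * expected_idle_months
        return cost_to_keep < reupload_cost

    # Example: 10 GB at $0.023/GB-month, idle 2 months, vs. a $1.00 re-upload
    # (egress from the source plus transfer time valued in dollars).
    print(keep_image_online(0.023, 10, 2, reupload_cost=1.00))  # True: $0.46 < $1.00
    ```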

    A Resource-Aware and Time-Critical IoT Framework

    Internet of Things (IoT) systems produce a great amount of data but usually have insufficient resources to process it at the edge. Several time-critical IoT scenarios have emerged, creating the challenge of supporting low-latency applications. At the same time, cloud computing has become successful at delivering computing as a service at an affordable price, with great scalability and high reliability. We propose an intelligent resource allocation system that optimally selects the most important IoT data streams to transfer to the cloud for processing. The optimization runs on utility functions computed by predictor algorithms that forecast future events with some probabilistic confidence, based on a dynamically recalculated data model. We investigate ways of reducing specifically the upload bandwidth of IoT video streams and propose techniques to compute the corresponding utility functions. We built a prototype for a smart squash court and simulated multiple courts to measure the efficiency of dynamically allocating network and cloud resources for event detection during squash games. By continuously adapting to the observed system state and maximizing the expected quality of detection within the resource constraints, our system can save up to 70% of the resources compared to the naive solution.
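    A minimal sketch of utility-driven stream selection in the spirit of this abstract. The stream names, the utility model (predicted event probability times event value), and the greedy selection under a bandwidth cap are illustrative assumptions, not the paper's actual optimizer.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Stream:
        name: str
        bandwidth_mbps: float
        event_probability: float   # predictor's confidence an event occurs soon
        event_value: float         # importance of detecting that event

        def expected_utility(self) -> float:
            return self.event_probability * self.event_value

    def select_streams(streams: list[Stream], capacity_mbps: float) -> list[Stream]:
        """Greedily admit streams with the best expected utility per Mbps
        until the upload capacity is exhausted."""
        ranked = sorted(streams,
                        key=lambda s: s.expected_utility() / s.bandwidth_mbps,
                        reverse=True)
        chosen, used = [], 0.0
        for s in ranked:
            if used + s.bandwidth_mbps <= capacity_mbps:
                chosen.append(s)
                used += s.bandwidth_mbps
        return chosen

    courts = [Stream("court-1", 4.0, 0.9, 1.0), Stream("court-2", 4.0, 0.1, 1.0),
              Stream("court-3", 2.0, 0.5, 1.0)]
    print([s.name for s in select_streams(courts, capacity_mbps=6.0)])
    # ['court-3', 'court-1']: the quiet court-2 stream is kept at the edge.
    ```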
