
    Reliable Provisioning of Spot Instances for Compute-intensive Applications

    Cloud computing providers now offer their unused resources for leasing in the spot market, which has been considered the first step towards a full-fledged market economy for computational resources. Spot instances are virtual machines (VMs) available at lower prices than their standard on-demand counterparts. These VMs run for as long as the current price is lower than the maximum bid price users are willing to pay per hour. Spot instances have been increasingly used for executing compute-intensive applications. Despite the apparent economic advantage, the intermittent nature of biddable resources means that application execution times may be prolonged, or applications may not finish at all. This paper proposes a resource allocation strategy that addresses the problem of running compute-intensive jobs on a pool of intermittent virtual machines, while also aiming to run applications quickly and economically. To mitigate potential unavailability periods, a multifaceted fault-aware resource provisioning policy is proposed. Our solution employs price and runtime estimation mechanisms, as well as three fault tolerance techniques, namely checkpointing, task duplication, and migration. We evaluate our strategies using trace-driven simulations, which take as input real price variation traces as well as an application trace from the Parallel Workload Archive. Our results demonstrate the effectiveness of executing applications on spot instances, respecting QoS constraints, despite occasional failures. Comment: 8 pages, 4 figures.
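
    As a rough illustration of the kind of decision loop such a fault-aware policy implies (not the paper's actual algorithm), the Python sketch below simulates hourly billing of a checkpointed job on a spot instance: work done since the last checkpoint is lost whenever the price rises above the bid. The names Job, run_on_spot, and the checkpoint interval are hypothetical and for illustration only.

```python
# Hypothetical sketch: a compute-intensive job on a spot VM with periodic
# checkpointing; out-of-bid hours revoke the instance and lose unsaved progress.
from dataclasses import dataclass

@dataclass
class Job:
    total_hours: float        # total compute needed
    done_hours: float = 0.0   # progress preserved in the last checkpoint
    progress: float = 0.0     # progress since the last checkpoint

def run_on_spot(job: Job, prices: list[float], bid: float,
                checkpoint_every: float = 1.0) -> float:
    """Simulate hourly billing; return total cost paid until the job finishes."""
    cost = 0.0
    for price in prices:
        if job.done_hours + job.progress >= job.total_hours:
            break
        if price > bid:            # out-of-bid: instance revoked this hour
            job.progress = 0.0     # work since the last checkpoint is lost
            continue
        cost += price              # pay for one billed hour of useful work
        job.progress += 1.0
        if job.progress >= checkpoint_every:
            job.done_hours += job.progress   # persist a checkpoint
            job.progress = 0.0
    return cost

# Example: a 5-hour job, a fluctuating price trace, and a $0.10/h bid.
print(run_on_spot(Job(total_hours=5),
                  [0.04, 0.05, 0.12, 0.06, 0.05, 0.07, 0.05], bid=0.10))
```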

    CIC: an integrated approach to checkpointing in mobile agent systems

    Internet and Mobile Computing Lab (Department of Computing). Refereed conference paper, 2006-2007. Version of Record, published.

    Checkpoint placement algorithms for mobile agent system


    THE PERFORMANCE OF SOFT CHECKPOINTING APPROACH IN MOBILE COMPUTING SYSTEMS

    Mobile computing raises many new issues, such as lack of stable storage, low bandwidth of wireless channels, high mobility, and limited battery life. These issues make traditional checkpointing algorithms unsuitable. Coordinated checkpointing is an attractive approach for transparently adding fault tolerance to distributed applications, since it avoids domino effects and minimizes the stable storage requirement. However, it suffers from the high overhead of the checkpointing process in mobile computing systems. In the literature, two approaches have mostly been used to reduce this overhead: minimizing the number of synchronization messages and checkpoints, and making the checkpointing process nonblocking. Since mobile hosts (MHs) are prone to failure, they have to transfer a large amount of checkpoint data and control information to their local MSS, which increases bandwidth overhead. In this paper, we introduce the concept of a "soft checkpoint", which is neither a tentative checkpoint nor a permanent checkpoint, to design efficient checkpointing algorithms for mobile computing systems. Soft checkpoints can be saved anywhere, e.g., in the main memory or on the local disk of an MH. Before disconnecting from the MSS, these soft checkpoints are converted to hard checkpoints and sent to the MSS's stable storage. In this way, taking a soft checkpoint avoids the overhead of transferring large amounts of data to the stable storage at the MSS over the wireless network. We also show that our soft checkpointing scheme adapts its behaviour to the characteristics of the network.
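
    A minimal Python sketch of the soft/hard checkpoint distinction described above: soft checkpoints stay in the mobile host's local memory, and only the latest one is pushed to MSS stable storage when the host is about to disconnect. The MobileHost class and send_to_mss callback are purely illustrative, not the paper's protocol.

```python
# Illustrative soft-vs-hard checkpointing: cheap local snapshots, one wireless
# transfer to stable storage on disconnection.
import pickle

class MobileHost:
    def __init__(self, send_to_mss):
        self.send_to_mss = send_to_mss   # callable writing to MSS stable storage
        self.soft_checkpoints = []       # kept in MH main memory / local disk

    def take_soft_checkpoint(self, app_state: dict) -> None:
        # Cheap: no wireless transfer, just a local snapshot.
        self.soft_checkpoints.append(pickle.dumps(app_state))

    def prepare_disconnect(self) -> None:
        # Convert the latest soft checkpoint into a hard checkpoint by pushing
        # it over the wireless link to the MSS exactly once.
        if self.soft_checkpoints:
            self.send_to_mss(self.soft_checkpoints[-1])
            self.soft_checkpoints.clear()

# Usage: several soft checkpoints, a single transfer on disconnection.
stable_storage = []
mh = MobileHost(send_to_mss=stable_storage.append)
for step in range(3):
    mh.take_soft_checkpoint({"step": step, "state": "..."})
mh.prepare_disconnect()
print(len(stable_storage))  # 1 hard checkpoint on stable storage
```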

    System-level checkpointing of quality assurance tools

    In modern software development, many kinds of verification are performed to prevent regressions and to ensure the robustness of the software. Execution of verification tasks is usually automated with continuous delivery (CD) systems built on CD platforms. Currently available CD platforms (Jenkins, Concourse, GoCD) are essentially job schedulers based on the traditional job scheduling model: they execute tasks to completion in order of arrival. This model is known to cause user dissatisfaction due to long wait times when the variation in task execution times is high, and it also exhibits low resource utilization. This prevents the integration of new kinds of verification, reduces cost-effectiveness, and decreases developer productivity. Preemption, that is, task switching, makes scheduling much more flexible. It greatly improves the system's responsiveness by reducing wait times and solves the problem of short tasks having to wait extendedly for long tasks to complete. By enabling time-slicing of resources, it increases their utilization. The result is interactive service for developers, support for more kinds of verification in CD, and more value extracted from the available compute resources. Implementing preemption requires the ability to suspend and resume the execution of verification tools. We evaluate system-level checkpointing, a technique used for preemption in high performance computing that does not require modification of the verification tools. We selected Checkpoint and Restore in Userspace (CRIU) as the checkpointing utility to be evaluated, assessed CRIU's capability to checkpoint verification tools, and measured checkpoint creation time and checkpoint image size. We selected AFL, AddressSanitizer, Valgrind, and the Android Emulator as the tools to be tested. Our results show that CRIU is not yet capable of preempting arbitrary verification tools, as only AFL and Valgrind were checkpointable. Checkpoint creation was fast, making it feasible for interactive use in a CD system. The checkpoint image size was found to depend on the verification tool's memory size, as expected, meaning most tools would be feasible for preemption to network storage in a cluster.
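
    For context, the kind of CRIU invocation a CD scheduler could use to preempt and later resume a verification task might look like the Python wrapper below. This is a sketch under the assumption that criu is installed, run with sufficient privileges, and the target process (e.g. AFL or Valgrind) is dumpable; it is not the thesis's actual harness.

```python
# Hypothetical preemption helpers around the CRIU command-line tool.
import subprocess
from pathlib import Path

def preempt(pid: int, images_dir: str) -> None:
    """Checkpoint the process tree rooted at pid into images_dir and stop it."""
    Path(images_dir).mkdir(parents=True, exist_ok=True)
    # --shell-job: the task was started from a shell and has a terminal attached.
    # Omitting --leave-running means the task is stopped, i.e. actually preempted.
    subprocess.run(
        ["criu", "dump", "-t", str(pid), "-D", images_dir, "--shell-job"],
        check=True,
    )

def resume(images_dir: str) -> None:
    """Restore the process tree from images_dir and detach from it."""
    subprocess.run(
        ["criu", "restore", "-D", images_dir, "--shell-job", "--restore-detached"],
        check=True,
    )
```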

    Transparently Mixing Undo Logs and Software Reversibility for State Recovery in Optimistic PDES

    The rollback operation is a fundamental building block to support the correct execution of a speculative Time Warp-based Parallel Discrete Event Simulation. In the literature, several solutions to reduce the execution cost of this operation have been proposed, based either on creating checkpoints of previous simulation state images, or on executing negative copies of simulation events that are able to undo the updates on the state. In this paper, we explore the practical design and implementation of a state recoverability technique that allows a previous simulation state to be restored either by relying on checkpointing or by reverse execution of the state updates that occurred while processing events in forward mode. Unlike other proposals, we address the issue of executing backward updates in a fully transparent and event-granularity-independent way, by relying on static software instrumentation (targeting the x86 architecture and Linux systems) to generate at runtime reverse update code blocks (not to be confused with reverse events, proper to the reverse computing approach). These are able to undo the effects of a forward execution while minimizing the cost of the undo operation. We also present experimental results related to our implementation, which is released as free software and fully integrated into the open-source ROOT-Sim (ROme OpTimistic Simulator) package. The experimental data support the viability and effectiveness of our proposal.
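
    The undo-log idea can be illustrated at a much higher level than the paper's x86 instrumentation: before each forward write, record the old value, and roll back by replaying the log in reverse. The UndoLogState class below is a hypothetical Python illustration of that recoverability logic only, not the paper's implementation.

```python
# Illustrative undo log: per-event records of old values enable rollback of
# the last n processed events by reversing their logged writes.
_MISSING = object()   # marker for keys that did not exist before the write

class UndoLogState:
    def __init__(self):
        self.state = {}
        self.undo_log = []    # one list of (key, old_value) records per event
        self._current = None

    def begin_event(self):
        self._current = []
        self.undo_log.append(self._current)

    def write(self, key, value):
        # Log the previous value (or the missing marker) before the forward update.
        self._current.append((key, self.state.get(key, _MISSING)))
        self.state[key] = value

    def rollback(self, n_events: int):
        # Undo the last n events by replaying their logged writes in reverse order.
        for _ in range(n_events):
            for key, old in reversed(self.undo_log.pop()):
                if old is _MISSING:
                    del self.state[key]
                else:
                    self.state[key] = old

# Two forward events, then roll back the most recent one.
s = UndoLogState()
s.begin_event(); s.write("x", 1)
s.begin_event(); s.write("x", 2); s.write("y", 7)
s.rollback(1)
print(s.state)  # {'x': 1}
```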

    Study and Design of Global Snapshot Compilation Protocols for Rollback-Recovery in Mobile Distributed System

    A checkpoint is a designated place in a program at which normal processing is interrupted specifically to preserve the status information necessary to allow resumption of processing at a later time. A distributed system is a collection of independent entities that cooperate to solve a problem that cannot be solved individually. A mobile computing system is a distributed system in which some of the processes run on mobile hosts (MHs). The presence of mobile nodes in a distributed system introduces new issues that need proper handling when designing a checkpointing algorithm for such systems: mobility, disconnections, limited power, vulnerability to physical damage, lack of stable storage, and so on. Recently, more attention has been paid to providing checkpointing protocols for mobile systems. Minimum-process coordinated checkpointing is an attractive approach for transparently introducing fault tolerance into mobile distributed systems. This approach is domino-free, requires at most two recovery points of a process on stable storage, and forces only a minimum number of processes to take recovery points. However, it requires extra synchronization messages, blocking of the underlying computation, or the taking of some useless recovery points. In this paper, we survey the literature on minimum-process coordinated checkpointing algorithms for mobile computing systems.
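
    The "minimum-process" aspect can be sketched as a simple dependency-closure computation: only the processes on which the initiator transitively depends (via messages received since the last checkpoint) are asked to take a new checkpoint. The function below is an illustrative sketch, not any specific algorithm from the surveyed literature.

```python
# Illustrative minimum-process selection: checkpoint only the transitive
# message dependencies of the initiating process.
def processes_to_checkpoint(initiator: str, deps: dict[str, set[str]]) -> set[str]:
    """deps[p] = processes p has received messages from since the last
    consistent checkpoint; returns the minimal set that must checkpoint."""
    needed, frontier = {initiator}, [initiator]
    while frontier:
        p = frontier.pop()
        for q in deps.get(p, set()):
            if q not in needed:
                needed.add(q)
                frontier.append(q)
    return needed

# Example: P1 depends on P2, P2 on P3; P4 is independent and is not disturbed.
deps = {"P1": {"P2"}, "P2": {"P3"}, "P3": set(), "P4": set()}
print(processes_to_checkpoint("P1", deps))  # {'P1', 'P2', 'P3'}
```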

    Reliable distributed data stream management in mobile environments

    The proliferation of sensor technology, especially in the context of embedded systems, has brought forward novel types of applications that make use of streams of continuously generated sensor data. Many applications, such as telemonitoring in healthcare or roadside traffic monitoring and control, particularly require data stream management (DSM) to be provided in a distributed, yet reliable way. This is even more important when DSM applications are deployed in a failure-prone distributed setting including resource-limited mobile devices, for instance in applications which aim at remotely monitoring mobile patients. In this paper, we introduce a model for distributed and reliable DSM. The contribution of this paper is threefold. First, in analogy to the SQL isolation levels, we define levels of reliability and describe the necessary consistency constraints for distributed DSM, which specify the tolerated loss, delay, or re-ordering of data stream elements, respectively. Second, we use this model to design and analyze an algorithm for reliable distributed DSM, namely efficient coordinated operator checkpointing (ECOC). We show that ECOC provides lossless and delay-limited reliable data stream management and thus can be used in critical application domains such as healthcare, where the loss of data stream elements cannot be tolerated. Third, we present detailed performance evaluations of the ECOC algorithm running on mobile, resource-limited devices. In particular, we show that ECOC provides a high level of reliability while, at the same time, featuring good performance characteristics with moderate resource consumption.
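
    One common building block behind coordinated operator checkpointing is upstream output buffering: an operator retains emitted stream elements until a downstream checkpoint covers them, so a downstream failure loses nothing. The sketch below illustrates that idea with hypothetical Operator, ack_checkpoint, and replay_after names; it is not the ECOC algorithm itself.

```python
# Illustrative output buffering for lossless stream operator recovery.
class Operator:
    def __init__(self, fn):
        self.fn = fn
        self.out_buffer = []   # elements not yet covered by a downstream checkpoint
        self.seq = 0

    def process(self, element):
        result = self.fn(element)
        self.seq += 1
        self.out_buffer.append((self.seq, result))
        return self.seq, result

    def ack_checkpoint(self, up_to_seq: int):
        # Downstream has durably checkpointed everything up to up_to_seq.
        self.out_buffer = [(s, e) for s, e in self.out_buffer if s > up_to_seq]

    def replay_after(self, seq: int):
        # On downstream recovery, resend the still-buffered elements.
        return [e for s, e in self.out_buffer if s > seq]

# Usage: downstream checkpoints after element 2, then fails; only element 3
# has to be replayed, so the stream is lossless with no duplicates.
up = Operator(lambda x: 2 * x)
for x in (1, 2, 3):
    up.process(x)
up.ack_checkpoint(2)
print(up.replay_after(2))   # [6]
```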

    CamFlow: Managed Data-sharing for Cloud Services

    A model of cloud services is emerging whereby a few trusted providers manage the underlying hardware and communications, whereas many companies build on this infrastructure to offer higher-level, cloud-hosted PaaS services and/or SaaS applications. From the start, strong isolation between cloud tenants was seen to be of paramount importance, provided first by virtual machines (VMs) and later by containers, which share the operating system (OS) kernel. Increasingly, applications also require facilities to effect isolation and protection of the data they manage. They also require flexible data sharing with other applications, often across the traditional cloud-isolation boundaries; for example, when a government provides many related services for its citizens on a common platform. Similar considerations apply to the end-users of applications. In particular, the incorporation of cloud services within 'Internet of Things' architectures is driving the requirements for both protection and cross-application data sharing. These concerns relate to the management of data. Traditional access control is application- and principal/role-specific, applied at policy enforcement points, after which there is no subsequent control over where data flows; a crucial issue once data has left its owner's control and is handled by cloud-hosted applications and within cloud services. Information Flow Control (IFC), in addition, offers system-wide, end-to-end flow control based on the properties of the data. We discuss the potential of cloud-deployed IFC for enforcing owners' dataflow policy with regard to protection and sharing, as well as safeguarding against malicious or buggy software. In addition, the audit log associated with IFC provides transparency, giving configurable system-wide visibility over data flows. [...] Comment: 14 pages, 8 figures.
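
    At the heart of IFC is a label comparison performed on every flow: data may move from A to B only if B is at least as secret as A, and A is at least as trustworthy (in integrity terms) as B. The sketch below shows this standard secrecy/integrity check in Python; Label and can_flow are illustrative names, and this is not CamFlow's actual kernel API.

```python
# Illustrative IFC flow check over secrecy and integrity tag sets.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Label:
    secrecy: frozenset = field(default_factory=frozenset)
    integrity: frozenset = field(default_factory=frozenset)

def can_flow(src: Label, dst: Label) -> bool:
    # Allowed iff the destination holds all of the source's secrecy tags and
    # the source holds all of the destination's integrity tags.
    return src.secrecy <= dst.secrecy and dst.integrity <= src.integrity

# Example: patient data tagged "medical" may flow to the hospital app but not
# to an untagged analytics service.
patient = Label(secrecy=frozenset({"medical"}))
hospital_app = Label(secrecy=frozenset({"medical", "hospital"}))
analytics = Label()
print(can_flow(patient, hospital_app), can_flow(patient, analytics))  # True False
```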