
    On the Feasibility of Dynamic Rescheduling on the Intel Distributed Computing Platform

    This paper examines the feasibility of dynamic rescheduling techniques for effectively utilizing compute resources within a data center. Our work is motivated by practical concerns of Intel's NetBatch system, an Internet-scale, data-center-based distributed computing platform developed by Intel Corporation for massively parallel chip simulations within the company. NetBatch has been operational for many years and is currently deployed live on tens of thousands of machines globally distributed across various data centers. We analyze job execution traces collected over a one-year period from tens of thousands of NetBatch machines in 20 different pools. Our analysis shows that NetBatch currently does not make full use of all available resources. Specifically, job completion time can be severely impacted by job suspension when higher-priority jobs preempt lower-priority jobs. We then develop dynamic job rescheduling strategies that adaptively restart suspended jobs on available resources elsewhere, which better utilizes system resources and improves completion times. Our trace-driven evaluation results show that dynamic rescheduling enables NetBatch to significantly reduce system waste and the completion time of suspended jobs.
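    The abstract does not spell out the restart policy, so the following is only a minimal sketch of one plausible threshold-based variant: a job suspended by preemption for longer than a fixed threshold is restarted on an idle machine in another pool. All names, the data layout, and the threshold value are assumptions for illustration, not NetBatch's actual policy.

```python
# Hypothetical threshold-based rescheduling sketch; not NetBatch's real logic.
SUSPENSION_THRESHOLD_S = 600  # assumed tuning parameter

def pick_restart_target(job, machines, now):
    """Return an idle machine in a different pool, or None to keep waiting."""
    if job["state"] != "suspended":
        return None
    if now - job["suspended_at"] < SUSPENSION_THRESHOLD_S:
        return None  # brief suspensions often resume; restarting wastes work
    return next(
        (m for m in machines if m["idle"] and m["pool"] != job["pool"]),
        None,
    )

# Example: a job suspended 15 minutes ago is moved to an idle remote machine.
job = {"state": "suspended", "suspended_at": 0, "pool": "pool-3"}
machines = [{"idle": True, "pool": "pool-7"}]
print(pick_restart_target(job, machines, now=900))  # {'idle': True, 'pool': 'pool-7'}
```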

    Cost-Based Dynamic Job Rescheduling: A Case Study of the Intel Distributed Computing Platform

    We perform a trace-driven analysis of the Intel Distributed Computing Platform (IDCP), an Internet-scale, data-center-based distributed computing platform developed by Intel Corporation for massively parallel chip simulations within the company. IDCP has been operational for many years and is currently deployed “live” on tens of thousands of machines globally distributed across various data centers. Our analysis is performed on job execution traces collected over a one-year period from tens of thousands of IDCP machines in 20 different pools. It demonstrates that job completion time can be severely impacted by job suspension when higher-priority jobs preempt lower-priority jobs. We then develop cost-based dynamic job rescheduling strategies that adaptively restart suspended jobs, which better utilizes system resources and improves completion times. Our trace-driven evaluation results show that dynamic rescheduling enables IDCP to significantly reduce job completion times.
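    The cost-based rule can be read as a break-even comparison: restarting a suspended job discards the CPU time already invested, so it pays off only when the expected remaining suspension exceeds that loss. A hedged sketch of this comparison, with a deliberately simple delay estimator (a historical mean) standing in for whatever estimator the paper actually uses:

```python
# Sketch of a break-even restart rule; the estimator is an assumption.
def expected_resume_delay(past_suspension_lengths_s):
    """Estimate remaining suspension time from historical trace data."""
    return sum(past_suspension_lengths_s) / len(past_suspension_lengths_s)

def should_restart(cpu_seconds_used, past_suspension_lengths_s):
    """True when waiting is expected to cost more than re-executing."""
    return expected_resume_delay(past_suspension_lengths_s) > cpu_seconds_used

# A job with 20 minutes of work done, in a pool where suspensions
# historically last about an hour, is worth restarting from scratch.
print(should_restart(1200, [3000, 3600, 4200]))  # True
```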

    Research Statement

    My research centers on performance modeling, optimization, and resource management for MapReduce workflows with completion time constraints. My work is motivated by (1) the popularity of the MapReduce framework and its open-source implementation Hadoop, which provides an economically compelling alternative for efficient analytics over "Big Data" in the enterprise; and (2) the recent technological trend shift toward

    Flexible Management on BSP Process Rescheduling: Offering Migration at Middleware and Application Levels

    This article describes the rationale for developing jMigBSP, a Java programming library that offers object rescheduling. It was designed for grid computing environments and offers an interface that follows the BSP (Bulk Synchronous Parallel) style. jMigBSP's main contribution is its rescheduling facility, offered in two different ways: (i) through migration directives placed directly in the application code and (ii) through automatic load balancing at the middleware level. The second approach is feasible thanks to Java's inheritance feature, which transforms a simple jMigBSP application into a migratable one by changing a single line of code. In addition, the library makes object interaction easier by providing one-sided message-passing directives and hides network latency through asynchronous communication. Finally, we developed three BSP applications: (i) Prefix Sum; (ii) Fractal Image Compression (FIC); and (iii) Fast Fourier Transform (FFT). They show that our library is a viable solution for offering load balancing in BSP applications. In particular, the FIC results show gains of up to 37% when migration directives are applied inside the code, and the FFT tests emphasize jMigBSP's strength: it outperforms the native BSPlib library when its migration facilities are used.
    Keywords: Bulk Synchronous Parallel, rescheduling, Java, adaptation, object migration, grid computing
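    jMigBSP itself is a Java library; the Python sketch below is schematic only, illustrating the superstep structure the abstract describes: local computation, asynchronous one-sided communication, a barrier, then middleware-level migration of objects off overloaded nodes. All names and the imbalance ratio are invented.

```python
# Schematic BSP superstep with middleware-level rescheduling (invented names).
def bsp_step(objects, nodes, imbalance_ratio=1.5):
    for obj in objects:
        obj["compute"](obj)   # local computation phase; one-sided puts/gets
                              # would be issued here, overlapping the compute
    # --- barrier: every object has finished the superstep ---
    load = {n: sum(o["cost"] for o in objects if o["node"] == n) for n in nodes}
    lightest = min(nodes, key=load.get)
    for obj in objects:
        # Migration decided by the middleware: no application code changes.
        if load[obj["node"]] > imbalance_ratio * load[lightest]:
            load[obj["node"]] -= obj["cost"]
            obj["node"] = lightest
            load[lightest] += obj["cost"]

# Two heavy objects on node "a" and an empty node "b": one object migrates.
objs = [{"node": "a", "cost": 5, "compute": lambda o: None} for _ in range(2)]
bsp_step(objs, ["a", "b"])
print([o["node"] for o in objs])  # ['b', 'a'] -- load is now balanced
```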

    System-Level Checkpointing of Quality Assurance Tools

    In modern software development, many kinds of verification are performed to prevent regressions and to ensure the robustness of the software. Execution of verification tasks is usually automated with continuous delivery (CD) systems built on CD platforms. Currently available CD platforms (Jenkins, Concourse, GoCD) are essentially job schedulers based on the traditional job-scheduling model: they execute tasks to completion in order of arrival. This model is known to cause user dissatisfaction due to long wait times when the variation in task execution times is high, and to exhibit low resource utilization. This prevents the integration of new kinds of verification, reduces cost-effectiveness, and decreases developer productivity. Preemption, that is, task switching, makes scheduling much more flexible. It greatly improves the system's responsiveness by reducing wait times and solves the problem of short tasks waiting at length for long tasks to complete. By enabling time-slicing of resources, it increases their utilization. The result is an interactive service for developers, support for more kinds of verification in CD, and more value extracted from available compute resources. Implementing preemption requires the ability to suspend and resume the execution of verification tools. We evaluate system-level checkpointing, a technique used for preemption in high-performance computing that does not require modification of the verification tools. We selected Checkpoint and Restore in Userspace (CRIU) as the checkpointing utility to be evaluated, assessed its capability to checkpoint verification tools, and measured checkpoint creation time and checkpoint image size. The tools tested were AFL, AddressSanitizer, Valgrind, and Android Emulator. Our results show that CRIU is not yet capable of preempting arbitrary verification tools, as only AFL and Valgrind were checkpointable. Checkpoint creation was fast, making it feasible for interactive use in a CD system. The checkpoint image size was found to depend on the verification tool's memory footprint, as expected, meaning most tools would be feasible to preempt to network storage in a cluster.
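    `criu dump` and `criu restore` are CRIU's real entry points; a minimal sketch of how a CD scheduler might preempt and later resume a verification task with them follows. The directory layout and the surrounding scheduler glue are assumptions, and CRIU requires root privileges.

```python
# Minimal CRIU preempt/resume wrapper; paths and policy are assumptions.
import subprocess

def preempt(pid, image_dir):
    """Checkpoint the process tree rooted at pid; criu dump kills it after
    dumping unless --leave-running is given, which is what we want here."""
    subprocess.run(
        ["criu", "dump", "--tree", str(pid), "--images-dir", image_dir,
         "--shell-job"],  # --shell-job: the task was started from a shell
        check=True,
    )

def resume(image_dir):
    """Restore the previously checkpointed task and detach from it."""
    subprocess.run(
        ["criu", "restore", "--images-dir", image_dir, "--shell-job",
         "--restore-detached"],
        check=True,
    )
```

    With image sizes tracking the tool's memory footprint, as the thesis measured, the `image_dir` above could plausibly live on network storage so a task preempted on one cluster node can be resumed on another.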

    An input centric paradigm for program dynamic optimizations and lifetime evolvement

    Accurately predicting program behaviors (e.g., memory locality, method calling frequency) is fundamental for program optimizations and runtime adaptations. Despite decades of remarkable progress, prior studies have not systematically exploited the use of program inputs, a deciding factor of program behaviors, to help in dynamic program optimization. Triggered by the strong and predictive correlations between program inputs and program behaviors that recent studies have uncovered, this dissertation aims to bring program inputs into the focus of program behavior analysis and dynamic optimization, cultivating a new paradigm named input-centric program behavior analysis and dynamic optimization.

    The new optimization paradigm consists of three components, forming a three-layer pyramid. At the base is program input characterization, a component for resolving the complexity in raw program inputs and extracting important features. In the middle is input-behavior modeling, a component for recognizing and modeling the correlations between characterized input features and program behaviors. These two components constitute input-centric program behavior analysis, which (ideally) is able to predict the large-scope behaviors of a program's execution as soon as the execution starts. The top layer is input-centric adaptation, which capitalizes on the novel opportunities created by the first two components to facilitate proactive adaptation for program optimization.

    This dissertation develops the paradigm in two stages. In the first stage, we concentrate on exploring the implications of program inputs for program behaviors and dynamic optimization. We construct the basic input-centric optimization framework based on offline training to realize the basic functionalities of the three major components of the paradigm. In the second stage, we focus on making the paradigm practical by addressing multi-facet issues in handling input complexity, transparent training-data collection, and predictive-model evolution across production runs. Together, the techniques proposed in this stage cultivate a lifelong continuous optimization scheme with cross-input adaptivity.

    Fundamentally, the new optimization paradigm provides a brand-new solution for dynamic program optimization. The techniques proposed in the dissertation together resolve the adaptivity-proactivity dilemma that has been limiting the effectiveness of existing optimization techniques. Its benefits are demonstrated through proactive dynamic optimizations in Jikes RVM and version selection using the IBM XL C Compiler, yielding significant performance improvements on a set of Java and C/C++ programs, and it may open new opportunities for a broad range of runtime optimizations and adaptations. The evaluation results on both Java and C/C++ applications demonstrate that the new paradigm is promising in advancing the current state of program optimization.
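    A toy, runnable sketch of the three layers described above: characterize an input, predict a behavior from previously seen (features, behavior) runs, and proactively pick a code version before execution starts. The features, the predicted behavior, and the version names are all invented for illustration; the dissertation's actual models are far richer.

```python
# Toy three-layer input-centric pipeline; all names are hypothetical.
def characterize(raw_input):
    """Input characterization: reduce a raw input to a small feature vector."""
    return (len(raw_input),)  # real systems extract richer structural features

def predict_iterations(features, history):
    """Input-behavior modeling: nearest neighbor over past runs."""
    nearest = min(history, key=lambda rec: abs(rec[0][0] - features[0]))
    return nearest[1]

def select_version(raw_input, history):
    """Input-centric adaptation: choose an optimization proactively."""
    iters = predict_iterations(characterize(raw_input), history)
    return "unrolled_kernel" if iters > 1000 else "baseline_kernel"

# Past runs: (features, observed loop trip count). A large new input is
# predicted to be loop-heavy, so the unrolled version is chosen up front.
history = [((10,), 50), ((5000,), 40000)]
print(select_version("x" * 4000, history))  # unrolled_kernel
```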

    A framework for smart production-logistics systems based on CPS and industrial IoT

    Industrial Internet of Things (IIoT) has received increasing attention from both academia and industry. However, several challenges, including excessively long waiting times and serious energy waste, still exist in IIoT-based integration between production and logistics in job shops. To address these challenges, a framework depicting the mechanism and methodology of smart production-logistics systems is proposed to implement intelligent modeling of key manufacturing resources and investigate self-organizing configuration mechanisms. A data-driven model based on analytical target cascading is developed to implement the self-organizing configuration. A case study based on a Chinese engine manufacturer is presented to validate the feasibility and evaluate the performance of the proposed framework and the developed method. The results show that manufacturing time and energy consumption are reduced while computing time remains reasonable. This work potentially enables manufacturers to deploy IIoT-based applications and improve the efficiency of production-logistics systems.
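    Analytical target cascading (ATC) is a general hierarchical coordination scheme; as a rough illustration of its structure only (not the paper's data-driven model), the toy sketch below cascades a shared target to a production subproblem and a logistics subproblem and tightens a quadratic consistency penalty each round. All objectives, weights, and numbers are invented.

```python
# Toy ATC coordination loop; objectives and weights are invented.
from scipy.optimize import minimize_scalar

def atc_iterate(rounds=20, w=1.0):
    target = 10.0  # system-level target (e.g., a shared cycle time)
    for _ in range(rounds):
        # Production subproblem: trade local cost against target deviation.
        prod = minimize_scalar(lambda r: (r - 8) ** 2 + w * (r - target) ** 2).x
        # Logistics subproblem: same structure, different local optimum.
        logi = minimize_scalar(lambda r: (r - 12) ** 2 + w * (r - target) ** 2).x
        target = (prod + logi) / 2  # system level re-balances the target
        w *= 1.2                    # tighten the consistency penalty
    return target, prod, logi

print(atc_iterate())  # subproblem responses converge toward the shared target
```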

    Decomposition Algorithms for Stochastic Programming on a Computational Grid

    We describe algorithms for two-stage stochastic linear programming with recourse and their implementation on a grid computing platform. In particular, we examine serial and asynchronous versions of the L-shaped method and a trust-region method. The parallel platform of choice is the dynamic, heterogeneous, opportunistic platform provided by the Condor system. The algorithms are of master-worker type (with the workers being used to solve second-stage problems), and the MW runtime support library, which supports master-worker computations, is key to the implementation. Computational results are presented on large sample-average approximations of problems from the literature.
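    The paper's implementation farms the second-stage solves out to Condor workers through MW; the sketch below compresses the same serial, single-cut L-shaped loop into one process using scipy's linprog on a tiny invented instance. The "worker" step solves each scenario's dual to build an optimality cut, and relatively complete recourse is assumed (no feasibility cuts).

```python
# Serial single-cut L-shaped method on a tiny invented instance.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0])                    # first-stage cost, scalar x >= 0
q = np.array([1.0, 1.0])               # second-stage costs
W = np.array([[1.0, -1.0]])            # recourse matrix: W y = h - T x, y >= 0
T = np.array([[1.0]])
scenarios = [(0.5, np.array([2.0])), (0.5, np.array([6.0]))]  # (prob, h)

cuts = []                              # optimality cuts: theta >= e - E @ x
for it in range(20):
    # Master: min c@x + theta subject to all cuts collected so far.
    A_ub = [list(-E) + [-1.0] for E, e in cuts] or None
    b_ub = [-e for E, e in cuts] or None
    res = linprog(np.append(c, 1.0), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (-1e6, None)])
    x, theta = res.x[:-1], res.x[-1]
    # "Workers": solve each scenario dual  max p@(h - T@x) s.t. W.T@p <= q.
    E, e = np.zeros_like(c), 0.0
    for prob, h in scenarios:
        dual = linprog(-(h - T @ x), A_ub=W.T, b_ub=q,
                       bounds=[(None, None)] * W.shape[0])
        p = dual.x
        E += prob * (p @ T)
        e += prob * (p @ h)
    if theta >= e - E @ x - 1e-8:      # new cut adds nothing: converged
        break
    cuts.append((E, e))
print("x* =", x, "cost =", float(c @ x + theta))
```

    On this instance the expected recourse is 0.5|2 - x| + 0.5|6 - x|, so any x in [0, 2] is optimal with total cost 4, which the loop reaches in a couple of cuts.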