
    Workload Modeling for Computer Systems Performance Evaluation


    Understanding User Behavior: From HPC to HTC

    In this paper, we investigate the differences and similarities in user job submission behavior in High Performance Computing (HPC) and High Throughput Computing (HTC). We consider job submission behavior in terms of parallel batch-wise submissions, as well as delays and pauses in job submission. Our findings show that modeling user-based HTC job submission behavior requires knowledge of the underlying bags of tasks, which is often unavailable. Furthermore, we find evidence that subsequent job submission behavior is not influenced by the different complexities and requirements of HPC and HTC jobs.

    Workload Modeling and Scalability Analysis of Resource Usage for E-commerce Application

    Performance analysis and simulation of cloud resource management policies allow cloud providers to improve their Quality of Service (QoS). QoS denotes the level of performance, reliability, and availability offered by a web application and by the platform infrastructure that hosts it. Modeling the workload is difficult due to the unavailability of trace logs for analysis. A web application model is suggested to capture the different behaviors of user profiles. The amount of resources consumed during processing is known as resource usage. A resource usage model is created and validated to choose the better-fitting policy between the Generalized Lambda distribution and the Generalized Extreme Value distribution. The validation is carried out by simulating the same model in CloudSim. The results obtained from the Amazon Web Services environment and the CloudSim simulator are compared using the Wilcoxon method, and the accuracy is determined.
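The validation step described above can be sketched in a few lines. A minimal, hypothetical example (all parameters and data are synthetic stand-ins, not the paper's traces): fit a Generalized Extreme Value distribution to resource-usage samples with SciPy, then compare a "measured" and a "simulated" series with the Wilcoxon signed-rank test.

```python
# Hypothetical sketch: fit a Generalized Extreme Value (GEV) distribution to
# synthetic resource-usage samples and compare measured vs. simulated values
# with a Wilcoxon signed-rank test, mirroring the validation the abstract describes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Stand-in for per-request CPU usage collected from the hosting platform.
measured = stats.genextreme.rvs(c=0.1, loc=50, scale=5, size=200, random_state=rng)

# Fit GEV parameters (shape, location, scale) to the observed usage.
shape, loc, scale = stats.genextreme.fit(measured)

# Stand-in for the simulated usage of the same workload (e.g., from CloudSim).
simulated = measured + rng.normal(0, 0.5, size=measured.size)

# Paired non-parametric comparison of the two resource-usage series.
stat, p_value = stats.wilcoxon(measured, simulated)
print(f"fitted loc={loc:.1f} scale={scale:.1f}, Wilcoxon p={p_value:.3f}")
```

The same pattern works for the Generalized Lambda candidate by swapping the fitted family; the Wilcoxon test then indicates whether the measured and simulated series differ significantly.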

    Global attraction of ODE-based mean field models with hyperexponential job sizes

    Mean field modeling is a popular approach to assess the performance of large-scale computer systems. The evolution of many mean field models is characterized by a set of ordinary differential equations that have a unique fixed point. In order to prove that this unique fixed point corresponds to the limit of the stationary measures of the finite systems, the unique fixed point must be a global attractor. While global attraction was established for various systems in the case of exponential job sizes, it is often unclear whether these proof techniques can be generalized to non-exponential job sizes. In this paper we show how simple monotonicity arguments can be used to prove global attraction for a broad class of ordinary differential equations that capture the evolution of mean field models with hyperexponential job sizes. This class includes both existing and previously unstudied load balancing schemes and can be used for systems with either finite or infinite buffers. The main novelty of the approach lies in using a Coxian representation for the hyperexponential job sizes and a partial order that is stronger than the componentwise partial order used in the exponential case.
    Comment: This paper was accepted at ACM Sigmetrics 201
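The Coxian representation mentioned above has a concrete elementary instance. As an illustrative sketch (parameters are made up, not from the paper): a two-phase hyperexponential, which samples Exp(mu1) with probability p and Exp(mu2) otherwise (mu1 >= mu2), is distributed identically to a Coxian-2 that always passes through a phase of rate mu1 and then, with probability a = (1-p)(1 - mu2/mu1), adds a phase of rate mu2.

```python
# Illustrative sketch (hypothetical parameters): a 2-phase hyperexponential
# job-size distribution and its equivalent Coxian-2 representation, the kind
# of construction the paper's monotonicity argument builds on.
import numpy as np

rng = np.random.default_rng(0)
p, mu1, mu2 = 0.3, 4.0, 1.0          # branch probability and phase rates, mu1 >= mu2
n = 200_000

# Direct hyperexponential sampling: Exp(mu1) w.p. p, else Exp(mu2).
branch = rng.random(n) < p
hyper = np.where(branch, rng.exponential(1/mu1, n), rng.exponential(1/mu2, n))

# Equivalent Coxian-2: always traverse phase 1 (rate mu1), then continue
# to phase 2 (rate mu2) with probability a = (1-p)(1 - mu2/mu1).
a = (1 - p) * (1 - mu2 / mu1)
cox = rng.exponential(1/mu1, n) + (rng.random(n) < a) * rng.exponential(1/mu2, n)

# Both constructions share the analytic mean p/mu1 + (1-p)/mu2.
analytic_mean = p/mu1 + (1-p)/mu2
print(hyper.mean(), cox.mean(), analytic_mean)
```

The sample means of both constructions agree with the analytic value, consistent with the two representations describing the same distribution.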

    OStrich: Fair Scheduling for Multiple Submissions

    Campaign Scheduling is characterized by multiple job submissions issued by multiple users over time. This model suits today's systems well, since most available parallel environments have multiple users sharing a common infrastructure. When individually scheduling the jobs submitted by various users, one crucial issue is to ensure fairness. This work presents a new fair scheduling algorithm called OStrich, whose principle is to maintain a virtual time-sharing schedule in which the same share of processors is assigned to each user. The completion times in the virtual schedule determine the execution order on the physical processors. The campaigns are thereby interleaved in a fair way by OStrich. For independent sequential jobs, we show that OStrich guarantees the stretch of a campaign to be proportional to the campaign's size and the total number of users. The stretch measures by what factor a workload is slowed down relative to the time it would take on an unloaded system. The theoretical performance of our solution is assessed by simulating OStrich against the classical FCFS algorithm on synthetic workload traces generated from two different user profiles. This demonstrates how OStrich benefits both types of users, in contrast to FCFS.
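The core idea of a virtual equal-share schedule driving the real execution order can be sketched in a few lines. This is a simplified toy, not the authors' implementation (user names and job lengths are invented): each user's sequential jobs run back-to-back at an equal share of the machine in virtual time, and real jobs are dispatched in order of virtual completion.

```python
# Toy sketch of OStrich's core idea (simplified, not the authors' code):
# maintain a virtual schedule where each user gets an equal processor share,
# and dispatch real jobs in order of their virtual completion times.
import heapq

def ostrich_order(user_jobs, speed=1.0):
    """user_jobs: {user: [job_length, ...]} of sequential jobs per user.
    Returns (user, job_index) pairs in virtual-completion order."""
    share = speed / len(user_jobs)     # equal share of the (virtual) machine
    virtual_clock = {u: 0.0 for u in user_jobs}
    heap = []
    for u, lengths in user_jobs.items():
        for i, length in enumerate(lengths):
            # Each user's jobs run back-to-back at rate `share` in the
            # virtual time-sharing schedule.
            virtual_clock[u] += length / share
            heapq.heappush(heap, (virtual_clock[u], u, i))
    return [heapq.heappop(heap)[1:] for _ in range(len(heap))]

# Hypothetical users: alice submits two length-2 jobs, bob three length-1 jobs.
order = ostrich_order({"alice": [2, 2], "bob": [1, 1, 1]})
print(order)
```

Because bob's short jobs finish earlier in the virtual schedule, they are dispatched first, interleaving the two campaigns instead of serving one user's backlog to completion as FCFS would.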

    Scheduling independent stochastic tasks under deadline and budget constraints

    This paper discusses scheduling strategies for the problem of maximizing the expected number of tasks that can be executed on a cloud platform within a given budget and under a deadline constraint. The execution times of the tasks follow i.i.d. probability laws. The main questions are how many processors to enroll and whether and when to interrupt tasks that have been executing for some time. We provide complexity results and an asymptotically optimal strategy for the problem instance with discrete probability distributions and without a deadline. We extend the latter strategy to the general case with continuous distributions and a deadline, and we design an efficient heuristic which is shown to outperform standard approaches in simulations with a variety of useful distribution laws.
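The value of interrupting long-running tasks can be seen in a small Monte Carlo experiment. This is a hedged illustration, not the paper's algorithm: the task-time law, budget, and cutoff below are invented, chosen so that a small fraction of tasks is very long, which is exactly when a cutoff pays off.

```python
# Hedged Monte Carlo sketch (not the paper's strategy): with i.i.d. task
# execution times and a time budget on one processor, killing tasks that run
# past a cutoff can raise the expected number of completed tasks.
import random

def tasks_completed(budget, cutoff, rng, trials=2000):
    done = 0
    for _ in range(trials):
        t, finished = 0.0, 0
        while t < budget:
            # Task time: short w.p. 0.8, very long otherwise (hypothetical law).
            d = rng.expovariate(1.0) if rng.random() < 0.8 else 50 + rng.expovariate(0.1)
            run = min(d, cutoff)       # interrupt the task at the cutoff
            if t + run > budget:
                break                  # not enough budget left for this task
            t += run
            if d <= cutoff:
                finished += 1          # only uninterrupted tasks count
        done += finished
    return done / trials

rng = random.Random(1)
no_interrupt = tasks_completed(budget=100, cutoff=float("inf"), rng=rng)
with_cutoff = tasks_completed(budget=100, cutoff=5.0, rng=rng)
print(no_interrupt, with_cutoff)
```

With this heavy-tailed mixture, the cutoff policy completes far more tasks per unit of budget, since rare very long tasks no longer consume most of it.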

    An experimental desktop computer for development

    Creating a sequence of hardware platforms that play the role of desktop computers, of increasing complexity in the technologies used, is a useful educational project. On the other hand, such products are completely useless from a business point of view, because they are too weak compared to modern systems; the venture can therefore only rely on scientific funding. In the present project we consider a conceptual model in which a purely educational endeavor of the described type can become one with certain business advantages, which in turn improves the chances of the project being realized.

    Tailbench: a benchmark suite and evaluation methodology for latency-critical applications

    Latency-critical applications, common in datacenters, must achieve small and predictable tail (e.g., 95th or 99th percentile) latencies. Their strict performance requirements limit utilization and efficiency in current datacenters. These problems have sparked research in hardware and software techniques that target tail latency. However, research in this area is hampered by the lack of a comprehensive suite of latency-critical benchmarks. We present TailBench, a benchmark suite and evaluation methodology that makes latency-critical workloads as easy to run and characterize as conventional, throughput-oriented ones. TailBench includes eight applications that span a wide range of latency requirements and domains, and a harness that implements a robust and statistically sound load-testing methodology. The modular design of the TailBench harness facilitates multiple load-testing scenarios, ranging from multi-node configurations that capture network overheads to simplified single-node configurations that allow measuring tail latency in simulation. Validation results show that the simplified configurations are accurate for most applications. This flexibility enables rapid prototyping of hardware and software techniques for latency-critical workloads.
    Funding: National Science Foundation (U.S.) (CCF-1318384); Qatar Computing Research Institute; Google (Firm) (Google Research Award)
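The tail metrics such a harness reports can be illustrated with a minimal sketch. This is not TailBench's code; the warm-up fraction, percentile method, and synthetic latency distribution below are all illustrative assumptions.

```python
# Minimal sketch of tail-latency reporting in the spirit of a load-testing
# harness: discard warm-up requests, then report p95/p99 over steady state.
import random

def tail_latencies(samples, warmup=0.1):
    """Drop the first `warmup` fraction of requests, return (p95, p99)."""
    steady = sorted(samples[int(len(samples) * warmup):])
    def pct(p):
        # Nearest-rank percentile over the steady-state samples.
        return steady[min(len(steady) - 1, int(p * len(steady)))]
    return pct(0.95), pct(0.99)

rng = random.Random(7)
# Synthetic request latencies (ms): ~1 ms median with occasional slow outliers.
lat = [rng.lognormvariate(0, 0.3) + (20 if rng.random() < 0.05 else 0)
       for _ in range(10_000)]
p95, p99 = tail_latencies(lat)
print(f"p95={p95:.2f} ms  p99={p99:.2f} ms")
```

Even a 5% outlier rate moves the p99 (and here the p95) far above the median, which is why latency-critical work is judged on tail percentiles rather than averages.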

    LwHBench: A low-level hardware component benchmark and dataset for Single Board Computers

    In today’s computing environment, where Artificial Intelligence (AI) and data processing are moving toward the Internet of Things (IoT) and Edge computing paradigms, benchmarking resource-constrained devices is a critical task for evaluating their suitability and performance. Among the devices employed, Single-Board Computers stand out as multi-purpose and affordable systems. The literature has explored Single-Board Computer performance when running high-level benchmarks specialized for particular application scenarios, such as AI or medical applications. However, lower-level benchmarking applications and datasets are needed to enable new Edge-based AI solutions for network, system, and service management based on device and component performance, such as individual device identification. Thus, this paper presents LwHBench, a low-level hardware benchmarking application for Single-Board Computers that measures the performance of CPU, GPU, memory, and storage, taking into account the component constraints of these devices. LwHBench has been implemented for Raspberry Pi devices and run for 100 days on a set of 45 devices to generate an extensive dataset that allows the use of AI techniques in scenarios where performance data can help with device management. In addition, to demonstrate the cross-scenario capability of the dataset, a series of AI-enabled use cases on device identification and the impact of context on performance are presented as an exploration of the published data. Finally, the benchmark application has been adapted and applied to an agriculture-focused scenario in which three RockPro64 devices are present.
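The kind of low-level component sample such a dataset stores can be sketched generically. This is an illustrative micro-benchmark, not LwHBench itself; the operations, iteration counts, and field names are invented stand-ins for per-component timing probes.

```python
# Illustrative micro-benchmark sketch (not LwHBench): time fixed CPU and
# memory operations, the kind of low-level performance samples that can feed
# AI-based device identification.
import time, hashlib

def bench_cpu(iterations=20_000):
    """Seconds to repeatedly hash a small buffer (CPU/integer pressure)."""
    start = time.perf_counter()
    data = b"x" * 64
    for _ in range(iterations):
        data = hashlib.sha256(data).digest()
    return time.perf_counter() - start

def bench_memory(size=1_000_000):
    """Seconds to repeatedly copy a byte buffer (memory pressure)."""
    start = time.perf_counter()
    buf = bytearray(size)
    for _ in range(20):
        buf = bytearray(buf)          # forces a full copy each round
    return time.perf_counter() - start

# One hypothetical dataset row: timings for a fixed, deterministic workload.
sample = {"cpu_s": bench_cpu(), "mem_s": bench_memory()}
print(sample)
```

Because the workload is fixed and deterministic, small per-device timing differences in such samples are what make individual device identification feasible.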