12,172 research outputs found

    A Minimum-Cost Flow Model for Workload Optimization on Cloud Infrastructure

    Full text link
    Recent technology advancements in compute, storage and networking, along with organizations' growing need to cut costs while remaining responsive to increasing service demands, have driven the adoption of cloud computing services. Cloud services promise improved agility, resiliency, scalability and a lower Total Cost of Ownership (TCO). This research introduces a framework for minimizing cost and maximizing resource utilization by using an Integer Linear Programming (ILP) approach to optimize the assignment of workloads to servers on Amazon Web Services (AWS) cloud infrastructure. The model is based on the classical minimum-cost flow formulation known as the assignment model. Comment: 2017 IEEE 10th International Conference on Cloud Computing
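
    As background for the "classical minimum-cost flow model, known as the assignment model" mentioned above, a generic textbook assignment-model ILP can be written as follows. This is only an illustrative formulation: the symbols c_ij (cost of running workload i on server j) are generic placeholders, and the paper's actual AWS cost terms and capacity constraints are not reproduced here.

        \min \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij}\, x_{ij}
        \quad \text{s.t.} \quad
        \sum_{j=1}^{n} x_{ij} = 1 \;\; (i = 1,\dots,m), \qquad
        \sum_{i=1}^{m} x_{ij} \le 1 \;\; (j = 1,\dots,n), \qquad
        x_{ij} \in \{0,1\}.

    Because the constraint matrix of this model is totally unimodular, the ILP can equivalently be solved as a minimum-cost flow problem on a bipartite workload-server graph, which is why the two names are used interchangeably.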

    Cloud-based desktop services for thin clients

    Get PDF
    Cloud computing and ubiquitous network availability have renewed people's interest in the thin client concept. By executing applications in virtual desktops on cloud servers, users can access any application from any location with any device. For this to be a successful alternative to traditional offline applications, however, researchers must overcome important challenges. The thin client protocol must display audiovisual output fluidly, and the server executing the virtual desktop should have sufficient resources and ideally be close to the user's current location to limit network delay. From a service provider's viewpoint, cost reduction is also an important issue.

    Conduction in jammed systems of tetrahedra

    Full text link
    Control of transport processes in composite microstructures is critical to the development of high-performance functional materials for a variety of energy storage applications. The fundamental process of conduction and its control through the manipulation of granular composite attributes (e.g., grain shape) are the subject of this work. We show that athermally jammed packings of tetrahedra with ultra-short-range order exhibit fundamentally different pathways for conduction than those in dense sphere packings. Highly resistive granular constrictions and few face-face contacts between grains result in short-range distortions from the mean temperature field. As a consequence, 'granular' or differential effective medium theory predicts the conductivity of this medium within 10% at the jamming point; in contrast, strong enhancement of transport near interparticle contacts in packed-sphere composites results in conductivity divergence at the jamming onset. The results are expected to be particularly relevant to the development of nanomaterials, where nanoparticle building blocks can exhibit a variety of faceted shapes. Comment: 9 pages, 10 figures
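
    For context on the effective-medium comparison above, one standard relation of this family is the symmetric Bruggeman approximation for a two-phase composite in three dimensions, which determines the effective conductivity \sigma_e implicitly from the phase conductivities \sigma_1, \sigma_2 and the volume fraction \phi of phase 1. It is shown only as general background; the 'granular'/differential variant that the paper applies to faceted grains is not reproduced here.

        \phi\,\frac{\sigma_1 - \sigma_e}{\sigma_1 + 2\sigma_e}
        + (1 - \phi)\,\frac{\sigma_2 - \sigma_e}{\sigma_2 + 2\sigma_e} = 0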

    Computing server power modeling in a data center: survey, taxonomy and performance evaluation

    Full text link
    Data centers are large-scale, energy-hungry infrastructures serving the increasing computational demands of a world that is becoming ever more connected through smart cities. The emergence of advanced technologies such as cloud-based services, the Internet of Things (IoT) and big data analytics has augmented the growth of global data centers, leading to high energy consumption. This upsurge in the energy consumption of data centers not only incurs surging operational and maintenance costs but also has an adverse effect on the environment. Dynamic power management in a data center environment requires knowledge of the correlation between system- and hardware-level performance counters and power consumption. Power consumption modeling captures this correlation and is crucial in designing energy-efficient optimization strategies based on resource utilization. Several power models have been proposed and used in the literature. However, these power models have been evaluated using different benchmarking applications, power measurement techniques and error calculation formulas on different machines. In this work, we present a taxonomy and evaluation of 24 software-based power models using a unified environment, benchmarking applications, power measurement technique and error formula, with the aim of achieving an objective comparison. We use different server architectures to assess the impact of heterogeneity on the models' comparison. The performance analysis of these models is elaborated in the paper.
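
    A minimal sketch of one common software-based power model family, a linear regression from CPU utilization (a readily available performance counter) to measured power, is given below. The model form, variable names and sample numbers are illustrative assumptions and are not taken from the 24 models evaluated in the paper.

        import numpy as np

        # Illustrative samples of CPU utilization (fraction of a fully loaded server)
        # and measured wall power in watts; the numbers are made up for demonstration.
        util = np.array([0.05, 0.20, 0.40, 0.60, 0.80, 0.95])
        power = np.array([112.0, 128.0, 151.0, 173.0, 198.0, 214.0])

        # Fit the common linear model P(u) = P_idle + k * u by least squares.
        k, p_idle = np.polyfit(util, power, deg=1)
        print(f"P(u) ~= {p_idle:.1f} W + {k:.1f} W * u")

        # One frequently used error metric for comparing such models:
        predicted = p_idle + k * util
        mape = np.mean(np.abs((predicted - power) / power)) * 100
        print(f"Mean absolute percentage error on the fit: {mape:.2f}%")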

    An Algorithm for Network and Data-aware Placement of Multi-Tier Applications in Cloud Data Centers

    Full text link
    Today's Cloud applications are dominated by composite applications comprising multiple computing and data components with strong communication correlations among them. Although Cloud providers are deploying large numbers of computing and storage devices to address the ever-increasing demand for computing and storage resources, network resource demands are emerging as one of the key performance bottlenecks. This paper addresses network-aware placement of virtual components (computing and data) of multi-tier applications in data centers and formally defines the placement as an optimization problem. The simultaneous placement of Virtual Machines and data blocks aims at reducing the network overhead of the data center network infrastructure. A greedy heuristic is proposed for the on-demand placement of application components that localizes network traffic in the data center interconnect. This optimization helps reduce communication overhead in upper-layer network switches, which eventually reduces the overall traffic volume across the data center. This, in turn, helps reduce packet transmission delay, increase network performance, and minimize the energy consumption of network components. Experimental results demonstrate the performance superiority of the proposed algorithm over other approaches: it outperforms the state-of-the-art network-aware application placement algorithm across all performance metrics, reducing the average network cost by up to 67% and network usage at core switches by up to 84%, as well as increasing the average number of application deployments by up to 18%. Comment: Submitted for publication consideration to the Journal of Network and Computer Applications (JNCA). Total pages: 28. Number of figures: 15.
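
    To make the idea of a greedy, traffic-localizing placement concrete, here is a minimal sketch in Python. It is not the algorithm proposed in the paper; the component names, the single-scalar capacity model and the traffic matrix below are assumptions made only for illustration.

        # Greedy co-location sketch: place heavily communicating components
        # (VMs/data blocks) onto the same host when capacity allows, so that
        # traffic stays local in the data center interconnect.

        def greedy_placement(components, traffic, hosts):
            """components: {name: resource demand}; traffic: {(a, b): volume};
            hosts: {host: capacity}. Returns a {component: host} mapping."""
            placement = {}
            free = dict(hosts)
            # Consider the heaviest communicating pairs first.
            for (a, b), _volume in sorted(traffic.items(), key=lambda kv: -kv[1]):
                for comp in (a, b):
                    if comp in placement:
                        continue
                    peer = b if comp == a else a
                    peer_host = placement.get(peer)
                    # Prefer the host already holding the peer, then the emptiest hosts.
                    candidates = ([peer_host] if peer_host else []) + \
                        sorted(free, key=lambda h: -free[h])
                    for h in candidates:
                        if free[h] >= components[comp]:
                            placement[comp] = h
                            free[h] -= components[comp]
                            break
            # Components with no recorded traffic go on the emptiest feasible host.
            for comp, demand in components.items():
                if comp not in placement:
                    h = max(free, key=free.get)
                    if free[h] >= demand:
                        placement[comp] = h
                        free[h] -= demand
            return placement

        if __name__ == "__main__":
            comps = {"web": 2, "app": 4, "db": 4, "blk1": 1}
            traffic = {("app", "db"): 90, ("db", "blk1"): 70, ("web", "app"): 40}
            hosts = {"h1": 8, "h2": 8}
            print(greedy_placement(comps, traffic, hosts))

    A full network-aware placement algorithm would additionally account for the rack and switch topology (placing components under the same edge switch when a single host cannot hold them) and for multiple resource dimensions such as CPU, memory and storage.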