
    HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show that hybrid environments are the natural path to getting the best of on-premise and cloud resources: steady (and sensitive) workloads can run on on-premise resources, while peak demand can leverage remote resources in a pay-as-you-go manner. Nevertheless, there are plenty of questions to be answered in HPC cloud, ranging from how to extract the best performance from an unknown underlying platform to which services are essential to make its usage easier. Moreover, the discussion on the right pricing and contractual models to fit small and large users is relevant for the sustainability of HPC clouds. This paper brings a survey and taxonomy of efforts in HPC cloud and a vision of what we believe lies ahead, including a set of research challenges that, once tackled, can help advance businesses and scientific discoveries. This becomes particularly relevant due to the fast-increasing wave of new HPC applications coming from big data and artificial intelligence. Comment: 29 pages, 5 figures, published in ACM Computing Surveys (CSUR).
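
    The hybrid split described above (steady and sensitive workloads on-premise, peak demand bursting to pay-as-you-go cloud) can be illustrated with a minimal placement heuristic. The sketch below is illustrative only; the `place_job` helper, node counts, and cost figures are assumptions, not anything taken from the survey.

    ```python
    # Minimal sketch of a hybrid HPC placement heuristic (illustrative only).
    # Assumptions: on_prem_free counts idle on-premise nodes; cloud capacity is
    # treated as unlimited but billed per node-hour.

    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        nodes: int          # nodes requested
        sensitive: bool     # sensitive workloads must stay on-premise

    def place_job(job: Job, on_prem_free: int, cloud_rate_per_node_hour: float):
        """Return (target, estimated_hourly_cost) for a single job."""
        if job.sensitive or job.nodes <= on_prem_free:
            # Steady and sensitive workloads run on already-owned resources.
            return "on-premise", 0.0
        # Peak demand bursts to the cloud in a pay-as-you-go manner.
        return "cloud", job.nodes * cloud_rate_per_node_hour

    if __name__ == "__main__":
        print(place_job(Job("cfd-run", nodes=64, sensitive=False),
                        on_prem_free=16, cloud_rate_per_node_hour=0.9))
        # -> ('cloud', 57.6)
    ```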

    Characterizing Service Level Objectives for Cloud Services: Motivation of Short-Term Cache Allocation Performance Modeling

    Service level objectives (SLOs) stipulate performance goals for cloud applications, microservices, and infrastructure. SLOs are widely used, in part, because system managers can tailor goals to their products, companies, and workloads. Systems research intended to support strong SLOs should target realistic performance goals used by system managers in the field. Evaluations conducted with uncommon SLO goals may not translate to real systems. Some textbooks discuss the structure of SLOs, but (1) they only sketch SLO goals and (2) they use outdated examples. We mined real SLOs published on the web, extracted their goals, and characterized them. Many web documents discuss SLOs loosely, but few provide details and reflect real settings. Systematic literature review (SLR) prunes results and reduces bias by (1) modeling expected SLO structure and (2) detecting and removing outliers. We collected 75 SLOs in which response time, query percentile, and reporting period were specified. We used these SLOs to confirm and refute common perceptions. For example, we found few SLOs with response-time guarantees below 10 ms for 90% or more of queries. This reality bolsters perceptions that single-digit SLOs face fundamental research challenges. This work was funded by NSF Grants 1749501 and 1350941.
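
    The SLO structure mined here (a response-time bound, a query percentile, and a reporting period) can be checked against a latency trace in a few lines. The sketch below is a generic illustration, not the authors' tooling; the example SLO of 100 ms at the 99th percentile is made up and is not one of the 75 mined SLOs.

    ```python
    # Minimal sketch: check a latency trace against an SLO of the form
    # "p-th percentile response time <= threshold over the reporting period".
    # The example SLO (100 ms at the 99th percentile) is illustrative only.

    import math

    def percentile(samples, p):
        """Nearest-rank percentile of a list of latency samples (ms)."""
        ordered = sorted(samples)
        rank = max(1, math.ceil(p / 100.0 * len(ordered)))
        return ordered[rank - 1]

    def meets_slo(latencies_ms, threshold_ms, pct):
        return percentile(latencies_ms, pct) <= threshold_ms

    if __name__ == "__main__":
        trace = [4, 7, 9, 12, 15, 22, 35, 48, 80, 120]  # one reporting period
        print(meets_slo(trace, threshold_ms=100, pct=99))  # False: p99 is 120 ms
    ```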

    PRIORITIZED TASK SCHEDULING IN FOG COMPUTING

    Cloud computing is an environment in which virtual resources are shared among many users over a network. A user of Cloud services is billed according to the pay-per-use model associated with this environment. To keep this bill to a minimum, efficient resource allocation is of great importance. To handle the many requests sent to the Cloud by clients, tasks need to be processed according to the SLAs defined by the client. The daily increase in the usage of Cloud services has introduced delays in the transmission of requests. These delays can cause clients to wait for the response to their tasks beyond the assigned deadline. To address these concerns, Fog Computing is helpful, as it is physically placed closer to the clients. This layer sits between the client and the Cloud layer, and it greatly reduces the delay in transmitting requests, processing them, and sending the response back to the client. This paper discusses an algorithm that schedules tasks by calculating the priority of each task in the Fog layer. Tasks with higher priority are processed first so that deadlines are met, which makes the algorithm practical and efficient.
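
    A deadline-driven priority queue is one simple way to realize the behavior described above (higher-priority tasks processed first so that deadlines are met). The sketch below uses earliest-deadline-first as the priority rule; this is an assumption for illustration, not the priority formula proposed in the paper.

    ```python
    # Minimal sketch of prioritized task scheduling in a fog node (illustrative).
    # Priority here is derived from the task deadline (earliest deadline first);
    # the paper's actual priority calculation may differ.

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Task:
        priority: float
        name: str = field(compare=False)
        exec_time: float = field(compare=False)   # seconds
        deadline: float = field(compare=False)    # absolute time, seconds

    def schedule(tasks):
        """Process tasks in priority order; report which deadlines are met."""
        heap = list(tasks)
        heapq.heapify(heap)
        clock = 0.0
        while heap:
            task = heapq.heappop(heap)
            clock += task.exec_time
            status = "met" if clock <= task.deadline else "missed"
            print(f"{task.name}: finished at {clock:.1f}s, deadline {status}")

    if __name__ == "__main__":
        schedule([
            Task(priority=10.0, name="sensor-agg", exec_time=2.0, deadline=10.0),
            Task(priority=3.0,  name="alarm",      exec_time=1.0, deadline=3.0),
            Task(priority=6.0,  name="video-clip", exec_time=4.0, deadline=6.0),
        ])
    ```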

    VIRTUALIZED BASEBAND UNITS CONSOLIDATION IN ADVANCED LTE NETWORKS USING MOBILITY- AND POWER-AWARE ALGORITHMS

    Virtualization of baseband units in Advanced Long-Term Evolution networks and the rapid performance growth of general-purpose processors naturally raise interest in resource multiplexing. The concept of resource sharing and management between virtualized instances is not new and is extensively used in data centers. We adopt some of these resource management techniques to organize virtualized baseband units on a pool of hosts and investigate the behavior of the system in order to identify features that are particularly relevant to the mobile environment. Subsequently, we introduce our own resource management algorithm specifically targeted at addressing some of the peculiarities identified by the experimental results.
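
    The data-center-style consolidation that this work adapts can be sketched as a first-fit-decreasing packing of virtualized baseband unit instances onto a host pool, so that unused hosts can be powered down. The sketch below is a textbook heuristic, not the mobility- and power-aware algorithm introduced in the thesis; the load and capacity figures are assumptions.

    ```python
    # Minimal sketch: first-fit-decreasing consolidation of virtual BBU instances
    # onto a pool of hosts, so that unused hosts can be powered down. This is a
    # generic heuristic, not the thesis' mobility- and power-aware algorithm.

    def consolidate(bbu_loads, host_capacity):
        """Pack BBU CPU loads onto as few hosts as possible (first-fit-decreasing)."""
        hosts = []       # residual capacity per active host
        placement = {}   # BBU index -> host index
        for idx, load in sorted(enumerate(bbu_loads), key=lambda x: -x[1]):
            for h, free in enumerate(hosts):
                if load <= free:
                    hosts[h] -= load
                    placement[idx] = h
                    break
            else:
                hosts.append(host_capacity - load)  # power on a new host
                placement[idx] = len(hosts) - 1
        return placement, len(hosts)

    if __name__ == "__main__":
        loads = [0.6, 0.3, 0.5, 0.2, 0.4]          # per-BBU CPU demand (fraction)
        mapping, active_hosts = consolidate(loads, host_capacity=1.0)
        print(mapping, "active hosts:", active_hosts)  # 2 hosts suffice
    ```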

    OpenHuaca, a platform for easily building cloud infrastructure

    The thesis is based on the definition and implementation of an open-source platform called OpenHuaca, a private cloud platform built on LXC containers and KVM virtual machines. It is aimed at research centers, with the goal of bringing the cloud world closer to users at all levels.

    Adaptable Service Oriented Infrastructure Provisioning with Lightweight Containers Virtualization Technology

    Modern computing infrastructures should enable converged provisioning and governance operations on the virtualized computing, storage, and network resources used on behalf of users' workloads. These workloads must be ensured sufficient access to the resources to satisfy the required QoS. This calls for flexible platforms providing functionality for the construction, activation, and governance of a Runtime Infrastructure, which can be realized according to the Service Oriented Infrastructure (SOI) paradigm. Implementation of the SOI management framework requires the definition of a flexible architecture and the use of advanced software engineering and policy-based techniques. The paper presents an Adaptable SOI Provisioning Platform which supports adaptable SOI provisioning with lightweight virtualization, compliant with a structured process model suitable for the construction, activation, and governance of IT environments. The requirements, architecture, and implementation of the platform are discussed. Practical usage of the platform is presented on the basis of a complex case study of provisioning JEE middleware on top of the Solaris 10 lightweight virtualization platform.

    Energy efficient task scheduling in data center

    First of all, I am thankful to God for His blessings and for showing me the right direction. With His mercy, it has been possible for me to reach this far. Foremost, I would like to express my sincere gratitude to my advisor, Prof. Durga Prasad Mohapatra, for his continuous support of my M.Tech study and research, and for his patience, motivation, enthusiasm, and immense knowledge. I am thankful for his continual support, encouragement, and invaluable suggestions. His guidance helped me throughout the research and the writing of this thesis. I could not have imagined having a better advisor and mentor for my M.Tech study. Besides my advisor, I extend my thanks to our HOD, Prof. S. K. Rath, and to Prof. B. D. Sahoo for their valuable advice and encouragement. I express my gratitude to all the staff members of the Computer Science and Engineering Department for providing me with all the facilities required for the completion of my thesis work. I would like to thank all my friends, especially Dilip Kumar and Alok Pandey, for their support. Last but not least, I am highly grateful to all my family members for their inspiration and ever-encouraging moral support, which enables me to pursue my studies.