216 research outputs found

    Service Isolation vs. Consolidation: Implications for IaaS Cloud Application Deployment

    Service isolation, achieved by deploying components of multi-tier applications using separate virtual machines (VMs), is a common 'best' practice. Various advantages cited include simpler deployment architectures, easier resource scalability for supporting dynamic application throughput requirements, and support for component-level fault tolerance. This paper presents results from an empirical study which investigates the performance implications of component placement for deployments of multi-tier applications to Infrastructure-as-a-Service (IaaS) clouds. Relationships between performance and resource utilization (CPU, disk, network) are investigated to better understand the implications which result from how applications are deployed. All possible deployments for two variants of a multi-tier application were tested, one computationally bound by the model, the other bound by a geospatial database. The best performing deployments required as few as 2 VMs, half the number required for service isolation, demonstrating potential cost savings with service consolidation. Resource use (CPU time, disk I/O, and network I/O) varied based on component placement and VM memory allocation. Using separate VMs to host each application component resulted in performance overhead of ~1-2%. Relationships between resource utilization and performance were harnessed to build a multiple linear regression model to predict performance of component deployments. CPU time, disk sector reads, and disk sector writes are identified as the most powerful performance predictors for component deployments.
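The multiple linear regression approach this abstract describes — predicting deployment performance from CPU time, disk sector reads, and disk sector writes — can be sketched as follows. The synthetic profiling data and feature values are illustrative assumptions, not the study's dataset.

```python
import numpy as np

# Hypothetical profiling data: one row per tested component deployment.
# Columns: CPU time (s), disk sector reads, disk sector writes.
X = np.array([
    [120.0, 5.0e5, 2.0e5],
    [135.0, 7.5e5, 2.6e5],
    [150.0, 9.0e5, 3.1e5],
    [110.0, 4.2e5, 1.8e5],
    [160.0, 1.1e6, 3.5e5],
])
y = np.array([30.5, 34.2, 37.8, 28.9, 40.1])  # observed wall-clock time (s)

# Multiple linear regression via ordinary least squares:
# y ≈ b0 + b1*cpu_time + b2*sector_reads + b3*sector_writes
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(cpu_time, sector_reads, sector_writes):
    """Predict deployment performance from resource-utilization features."""
    return coef @ np.array([1.0, cpu_time, sector_reads, sector_writes])

print(round(predict(140.0, 8.0e5, 2.8e5), 2))
```

Fitting once per application lets all candidate component placements be ranked cheaply from their profiled resource use, rather than benchmarking each deployment end to end.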

    Distributed opportunistic scheduling in multihop wireless ad hoc networks

    In this paper, we introduce a framework for distributed opportunistic scheduling in multihop wireless ad hoc networks. With the proposed framework, one can take a scheduling algorithm originally designed for infrastructure-based wireless networks and adapt it to multihop ad hoc networks. The framework includes a wireless link state estimation mechanism, a medium access control (MAC) protocol, and a MAC load control mechanism. The proposed link state estimation mechanism accounts for the latest results of packet transmissions on each wireless link. To improve robustness and provide service isolation during channel errors, the MAC protocol should not make any packet retransmissions but only report the transmission result to the scheduler. We modify IEEE 802.11 to fulfill these requirements. The MAC load control mechanism improves the system robustness. With link state information and the modified IEEE 802.11 MAC, we use BGFS-EBA, an opportunistic scheduling algorithm for infrastructure-based wireless networks, as an example to demonstrate how such an algorithm is converted into its distributed version within the proposed framework. The simulation results show that our proposed method can provide robust outcome fairness in the presence of channel errors. ©2008 IEEE.
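A link-state estimator of the kind described — tracking the latest per-link transmission results reported by a MAC that performs no retransmissions — might be sketched as an exponentially weighted moving average. The class name, neutral prior, and smoothing factor are assumptions for illustration, not the paper's exact mechanism.

```python
class LinkStateEstimator:
    """Estimate per-link delivery ratio from recent MAC transmission
    outcomes, as reported by a MAC that does not retransmit."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha   # EWMA smoothing factor (assumed value)
        self.quality = {}    # link id -> estimated delivery ratio

    def report(self, link, success):
        """MAC reports one transmission outcome (True/False) on a link."""
        prev = self.quality.get(link, 0.5)  # neutral prior for new links
        sample = 1.0 if success else 0.0
        self.quality[link] = (1 - self.alpha) * prev + self.alpha * sample

    def estimate(self, link):
        return self.quality.get(link, 0.5)

est = LinkStateEstimator()
for outcome in [True, True, False, True, True]:
    est.report(("A", "B"), outcome)
print(round(est.estimate(("A", "B")), 3))
```

Because the MAC reports failures instead of hiding them behind retransmissions, the scheduler sees channel errors directly and can route around degraded links, which is what enables the service isolation the abstract mentions.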

    QUALITY OF SERVICE ARCHITECTURES APPLICABILITY IN AN INTRANET NETWORK

    The quality of service (QoS) concept, which initially appeared as a way to improve Internet users' perception of service, has taken on new dimensions as the information society has matured. At the organisation's level, the Intranet network must provide, much as the Internet does, all kinds of services useful to the organisation's users. Starting from the traditional QoS architectural models, network administrators must plan and design a QoS architecture that maps onto the organisation's requirements, drawing not only on their own network elements but also on communication services provided by other operators. The aim of this paper is to present, starting from the general QoS models, a comparative study of the main advantages and drawbacks of implementing a specific Intranet QoS architecture, taking into consideration aspects of all kinds (material, financial, human resources) which impact good Intranet QoS management. Keywords: QoS, IntServ, DiffServ, IntServ over DiffServ, VPN-MPLS, Intranet network

    TrIMS: Transparent and Isolated Model Sharing for Low Latency Deep Learning Inference in Function as a Service Environments

    Deep neural networks (DNNs) have become core computation components within low latency Function as a Service (FaaS) prediction pipelines: including image recognition, object detection, natural language processing, speech synthesis, and personalized recommendation pipelines. Cloud computing, as the de-facto backbone of modern computing infrastructure for both enterprise and consumer applications, has to be able to handle user-defined pipelines of diverse DNN inference workloads while maintaining isolation and latency guarantees, and minimizing resource waste. The current solution for guaranteeing isolation within FaaS is suboptimal -- suffering from "cold start" latency. A major cause of such inefficiency is the need to move large amounts of model data within and across servers. We propose TrIMS as a novel solution to address these issues. Our proposed solution consists of a persistent model store across the GPU, CPU, local storage, and cloud storage hierarchy, an efficient resource management layer that provides isolation, and a succinct set of application APIs and container technologies for easy and transparent integration with FaaS, Deep Learning (DL) frameworks, and user code. We demonstrate our solution by interfacing TrIMS with the Apache MXNet framework and demonstrate up to 24x speedup in latency for image classification models and up to 210x speedup for large models. We achieve up to 8x system throughput improvement.
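The core idea of the persistent model store — keeping models resident across a memory hierarchy so repeated FaaS invocations skip the cold-start reload — might be sketched as below. TrIMS's actual APIs and tiers are not given in this abstract, so the class, tier names, and promotion policy are assumptions.

```python
# Sketch of a persistent model store spanning a memory/storage hierarchy,
# so repeated inference requests for the same model avoid reloading.
# Tier names and ordering (fastest first) are illustrative assumptions.
TIERS = ["gpu", "cpu", "local_disk", "cloud"]

class ModelStore:
    def __init__(self):
        self.cache = {tier: {} for tier in TIERS}  # tier -> {name: model}
        self.loads_from_cloud = 0                  # cold-start counter

    def _fetch_from_cloud(self, name):
        self.loads_from_cloud += 1
        return f"weights-of-{name}"  # stand-in for real model weights

    def get(self, name):
        """Return a model, promoting it to the fastest tier on access."""
        for tier in TIERS[:-1]:
            if name in self.cache[tier]:
                model = self.cache[tier][name]
                break
        else:
            model = self._fetch_from_cloud(name)  # cold start
        self.cache["gpu"][name] = model           # promote for next call
        return model

store = ModelStore()
store.get("resnet50")  # cold start: fetched from cloud storage
store.get("resnet50")  # warm: served from the GPU tier, no reload
print(store.loads_from_cloud)
```

The store is shared across function invocations (unlike per-container model copies), which is where the isolation layer the abstract mentions becomes necessary: tenants share cached weights without sharing access rights or address spaces.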

    5G Wireless Network Slicing for eMBB, URLLC, and mMTC: A Communication-Theoretic View

    The grand objective of 5G wireless technology is to support three generic services with vastly heterogeneous requirements: enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable low-latency communications (URLLC). Service heterogeneity can be accommodated by network slicing, through which each service is allocated resources to provide performance guarantees and isolation from the other services. Slicing of the Radio Access Network (RAN) is typically done by means of orthogonal resource allocation among the services. This work studies the potential advantages of allowing for non-orthogonal sharing of RAN resources in uplink communications from a set of eMBB, mMTC and URLLC devices to a common base station. The approach is referred to as Heterogeneous Non-Orthogonal Multiple Access (H-NOMA), in contrast to the conventional NOMA techniques that involve users with homogeneous requirements and hence can be investigated through a standard multiple access channel. The study devises a communication-theoretic model that accounts for the heterogeneous requirements and characteristics of the three services. The concept of reliability diversity is introduced as a design principle that leverages the different reliability requirements across the services in order to ensure performance guarantees with non-orthogonal RAN slicing. This study reveals that H-NOMA can lead, in some regimes, to significant gains in terms of performance trade-offs among the three generic services as compared to orthogonal slicing.
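The orthogonal-versus-non-orthogonal trade-off can be illustrated with a toy two-service Shannon-capacity calculation using successive interference cancellation. This is the standard textbook NOMA illustration, not the paper's H-NOMA model, and the SNR values are assumptions.

```python
import math

def rate(snr):
    """Shannon rate in bits/s/Hz at a given linear SNR."""
    return math.log2(1 + snr)

snr_embb, snr_urllc = 10.0, 4.0  # assumed linear SNRs for two services

# Orthogonal slicing: each service gets half the band, so in this
# toy model each achieves half of its full-band rate.
r_orth = (0.5 * rate(snr_embb), 0.5 * rate(snr_urllc))

# Non-orthogonal sharing: both transmit over the full band. The base
# station decodes the URLLC signal treating eMBB as noise, cancels it
# (successive interference cancellation), then decodes eMBB cleanly.
r_urllc_noma = rate(snr_urllc / (1 + snr_embb))
r_embb_noma = rate(snr_embb)
r_noma = (r_embb_noma, r_urllc_noma)

print(round(sum(r_orth), 2), round(sum(r_noma), 2))
```

Even in this crude sketch the non-orthogonal sum rate exceeds the orthogonal one; the paper's contribution is showing how such gains can coexist with the heterogeneous reliability and latency guarantees the three 5G services demand.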

    SDN/NFV-enabled satellite communications networks: opportunities, scenarios and challenges

    In the context of next generation 5G networks, the satellite industry is clearly committed to revisit and revamp the role of satellite communications. As major drivers in the evolution of (terrestrial) fixed and mobile networks, Software Defined Networking (SDN) and Network Function Virtualisation (NFV) technologies are also being positioned as central technology enablers towards improved and more flexible integration of satellite and terrestrial segments, providing satellite network further service innovation and business agility by advanced network resources management techniques. Through the analysis of scenarios and use cases, this paper provides a description of the benefits that SDN/NFV technologies can bring into satellite communications towards 5G. Three scenarios are presented and analysed to delineate different potential improvement areas pursued through the introduction of SDN/NFV technologies in the satellite ground segment domain. Within each scenario, a number of use cases are developed to gain further insight into specific capabilities and to identify the technical challenges stemming from them.

    Fair Bandwidth Allocation in Optical Burst Switching Networks

    Optical burst switching (OBS) is a promising switching technology for next-generation Internet backbone networks. One of the design challenges is how to provide fair bandwidth allocation in OBS networks; the schemes proposed for general store-and-forward IP switching networks cannot be used because of the bufferless nature and incomplete bandwidth utilization of OBS networks. We propose a rate fairness preemption (RFP) scheme to achieve approximately weighted max-min fair bandwidth allocation in OBS networks. We present an analysis of the burst loss probability in RFP-based OBS networks. The analysis and simulation results show that the RFP scheme provides fair bandwidth allocation in OBS networks.
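Burst loss on a bufferless OBS link is commonly modeled with the Erlang B formula, since a burst arriving when all wavelengths are busy is simply dropped. The sketch below uses the standard stable recursion; the offered load and wavelength count are illustrative, and this is the textbook model rather than necessarily the paper's exact analysis.

```python
def erlang_b(offered_load, channels):
    """Erlang B blocking probability via the stable recursion
    B(E, 0) = 1;  B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1))."""
    b = 1.0
    for m in range(1, channels + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# Example: a bufferless OBS link with 8 wavelengths offered a load of
# 6 Erlangs (values assumed for illustration).
loss = erlang_b(6.0, 8)
print(round(loss, 4))
```

A preemption scheme like RFP changes *which* flows bear this loss: by preempting bursts of flows exceeding their fair rate, the aggregate loss is redistributed toward over-rate flows rather than reduced outright.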

    Autonomous management of cost, performance, and resource uncertainty for migration of applications to infrastructure-as-a-service (IaaS) clouds

    2014 Fall. Includes bibliographical references. Infrastructure-as-a-Service (IaaS) clouds abstract physical hardware to provide computing resources on demand as a software service. This abstraction leads to the simplistic view that computing resources are homogeneous and that infinite scaling potential exists to easily resolve all performance challenges. In practice, however, adoption of cloud computing presents many resource management challenges, forcing practitioners to balance cost and performance tradeoffs to successfully migrate applications. These challenges can be broken down into three primary concerns: determining what, where, and when infrastructure should be provisioned. In this dissertation we address these challenges, including: (1) performance variance from resource heterogeneity, virtualization overhead, and the plethora of vaguely defined resource types; (2) virtual machine (VM) placement, component composition, service isolation, provisioning variation, and resource contention for multitenancy; and (3) dynamic scaling and resource elasticity to alleviate performance bottlenecks. These resource management challenges are addressed through the development and evaluation of autonomous algorithms and methodologies that result in demonstrably better performance and lower monetary costs for application deployments to both public and private IaaS clouds. This dissertation makes three primary contributions to advance cloud infrastructure management for application hosting. First, it includes the design of resource utilization models based on step-wise multiple linear regression and artificial neural networks that support prediction of better-performing component compositions. The total number of possible compositions is governed by Bell's number, which results in a combinatorially explosive search space.
Second, it includes algorithms to improve VM placements to mitigate resource heterogeneity and contention using a load-aware VM placement scheduler, and autonomous detection of under-performing VMs to spur replacement. Third, it describes a workload cost prediction methodology that harnesses regression models and heuristics to support determination of infrastructure alternatives that reduce hosting costs. Our methodology achieves infrastructure predictions with an average mean absolute error of only 0.3125 VMs for multiple workloads.
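The combinatorial growth mentioned in the first contribution — Bell's number counting the ways n application components can be grouped onto VMs — can be computed with the Bell triangle:

```python
def bell(n):
    """Number of ways to partition n distinct components into
    non-empty groups (i.e., VM co-locations), via the Bell triangle:
    each row starts with the previous row's last entry, and each
    subsequent entry adds the entry above to the entry to its left."""
    row = [1]
    for _ in range(n - 1):
        nxt = [row[-1]]
        for v in row:
            nxt.append(nxt[-1] + v)
        row = nxt
    return row[-1]

# Growth of the deployment search space with component count:
print([bell(n) for n in range(1, 9)])
```

Even at eight components there are 4,140 candidate compositions, which is why the dissertation predicts composition performance from resource-utilization models instead of benchmarking every deployment exhaustively.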