
    Algorithms for advance bandwidth reservation in media production networks

    Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks that connect all involved actors over a large geographical area are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, an Integer Linear Programming formulation of the bandwidth scheduling problem is presented, which takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
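
    To make the idea of advance bandwidth reservation concrete, here is a minimal sketch of such a scheduling model, assuming a single shared link, discrete time slots, and invented transfer requests; it is not the paper's actual ILP formulation, only an illustration of the technique named in the abstract (requires the `pulp` package).

```python
# Illustrative advance-reservation model: each transfer must move its full data
# volume within a known time window, subject to a per-slot link capacity.
# All requests, capacities and the single-link topology are invented examples.
import pulp

# Each request: (name, earliest slot, latest slot, data volume in Gbit)
requests = [("raw_video", 0, 4, 400), ("audio_mix", 2, 6, 120)]
slots = range(8)            # discrete time slots (e.g. hours)
link_capacity = 100         # Gbit deliverable per slot on the shared link

prob = pulp.LpProblem("advance_reservation", pulp.LpMinimize)

# x[r][t] = amount of data (Gbit) granted to request r in slot t
x = {r: {t: pulp.LpVariable(f"x_{r}_{t}", lowBound=0) for t in slots}
     for r, _, _, _ in requests}

# Each transfer must complete within its admissible window.
for r, start, end, volume in requests:
    prob += pulp.lpSum(x[r][t] for t in slots if start <= t <= end) == volume
    for t in slots:
        if t < start or t > end:
            prob += x[r][t] == 0

# Shared link capacity in every slot.
for t in slots:
    prob += pulp.lpSum(x[r][t] for r, _, _, _ in requests) <= link_capacity

# Minimise the peak per-slot usage, a simple stand-in objective.
peak = pulp.LpVariable("peak", lowBound=0)
for t in slots:
    prob += pulp.lpSum(x[r][t] for r, _, _, _ in requests) <= peak
prob += peak

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for r, *_ in requests:
    print(r, [pulp.value(x[r][t]) for t in slots])
```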

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, hence load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and databases) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains in response time and cost savings under dynamic workload scenarios.
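
    As a rough illustration of the load-coordination idea described above, the following toy placement policy picks a data center for a service request based on current headroom and user-to-site latency. It is a sketch under invented numbers, not the paper's InterCloud broker or its CloudSim experiments.

```python
# Toy federated placement policy: choose the feasible site with the lowest
# latency to the requesting user region. Sites, capacities and latencies
# below are invented for the example.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    capacity_vms: int
    used_vms: int
    latency_ms: dict  # user region -> round-trip latency in ms

def place(request_region: str, vms_needed: int, sites: list) -> DataCenter:
    """Pick the feasible site closest (in latency) to the requesting region."""
    feasible = [s for s in sites if s.capacity_vms - s.used_vms >= vms_needed]
    if not feasible:
        return None  # a real broker would queue, scale out, or renegotiate QoS
    best = min(feasible, key=lambda s: s.latency_ms.get(request_region, float("inf")))
    best.used_vms += vms_needed
    return best

sites = [
    DataCenter("eu-west", 100, 90, {"EU": 20, "US": 110}),
    DataCenter("us-east", 100, 40, {"EU": 100, "US": 25}),
]
print(place("EU", 20, sites).name)  # falls back to us-east because eu-west is nearly full
```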

    Business Integration as a Service

    This paper presents Business Integration as a Service (BIaS), which enables connections between services operating in the Cloud. BIaS integrates different services and business activities to achieve a streamlined process. We illustrate this integration using two services, Return on Investment (ROI) Measurement as a Service (RMaaS) and Risk Analysis as a Service (RAaaS), in two case studies at the University of Southampton and Vodafone/Apple. The University of Southampton case study demonstrates the cost savings and the risk analysis achieved, so that the two services can work as a single service. The Vodafone/Apple case study illustrates statistical analysis and 3D visualisation of expected revenue and associated risk. These two cases confirm the benefits of BIaS adoption, including cost reduction and improvements in efficiency and risk analysis. Implementation of BIaS in other organisations is also discussed. Important data arising from the integration of RMaaS and RAaaS are useful for the management of the University of Southampton and for potential and current investors in Vodafone/Apple.
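
    A minimal sketch of what chaining an ROI measurement service with a risk analysis service could look like is given below. The metric, thresholds, and figures are invented for illustration and are not taken from the case studies.

```python
# Toy pipeline chaining an ROI measurement step (RMaaS-style) and a risk
# analysis step (RAaaS-style) so the two behave as a single service.
def measure_roi(gain: float, cost: float) -> float:
    """Return on investment as a simple ratio."""
    return (gain - cost) / cost

def assess_risk(roi: float, roi_volatility: float) -> str:
    """Classify risk from expected ROI and its volatility (invented thresholds)."""
    if roi <= 0:
        return "high risk"
    return "low risk" if roi_volatility < 0.5 * roi else "medium risk"

def business_integration(gain: float, cost: float, volatility: float) -> dict:
    """The two services exposed as one chained service."""
    roi = measure_roi(gain, cost)
    return {"roi": round(roi, 3), "risk": assess_risk(roi, volatility)}

print(business_integration(gain=1_300_000, cost=1_000_000, volatility=0.10))
```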

    A Case Study for Business Integration as a Service

    This paper presents Business Integration as a Service (BIaaS) to allow two services to work together in the Cloud to achieve a streamlined process. We illustrate this integration using two services, Return on Investment (ROI) Measurement as a Service (RMaaS) and Risk Analysis as a Service (RAaaS), in a case study at the University of Southampton. The case study demonstrates the cost savings and the risk analysis achieved, so that the two services can work as a single service. Advanced techniques are used to demonstrate statistical services and 3D visualisation services under the remit of RMaaS, and Monte Carlo Simulation as a Service behind the design of RAaaS. Computational results are presented and their implications discussed. Different types of risks associated with Cloud adoption can be calculated easily, rapidly and accurately with the use of BIaaS. This case study confirms the benefits of BIaaS adoption, including cost reduction and improvements in efficiency and risk analysis. Implementation of BIaaS in other organisations is also discussed. Important data arising from the integration of RMaaS and RAaaS are useful for the management and stakeholders of the University of Southampton.
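
    To make the Monte Carlo Simulation as a Service idea concrete, here is a minimal sketch of a Monte Carlo risk estimate of the kind such a service could run; the distribution parameters are invented for illustration and do not come from the Southampton case study.

```python
# Minimal Monte Carlo risk estimate: sample uncertain gain and cost, derive ROI,
# and report expected ROI, a downside percentile, and the probability of loss.
import random

def simulate_roi(n_runs: int = 100_000, seed: int = 42) -> dict:
    rng = random.Random(seed)
    losses = 0
    samples = []
    for _ in range(n_runs):
        gain = rng.gauss(1_300_000, 200_000)  # uncertain revenue (invented)
        cost = rng.gauss(1_000_000, 50_000)   # uncertain cost (invented)
        roi = (gain - cost) / cost
        samples.append(roi)
        if roi <= 0:
            losses += 1
    samples.sort()
    return {
        "expected_roi": sum(samples) / n_runs,
        "p5_roi": samples[int(0.05 * n_runs)],  # 5th percentile (downside case)
        "prob_of_loss": losses / n_runs,
    }

print(simulate_roi())
```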

    Towards delay-aware container-based Service Function Chaining in Fog Computing

    Recently, the fifth-generation mobile network (5G) has been receiving significant attention. Empowered by Network Function Virtualization (NFV), 5G networks aim to support diverse services coming from different business verticals (e.g. Smart Cities, Automotive, etc.). To fully leverage NFV, services must be connected in a specific order, forming a Service Function Chain (SFC). SFCs allow mobile operators to benefit from the high flexibility and low operational costs introduced by network softwarization. Additionally, Cloud computing is evolving towards a distributed paradigm called Fog Computing, which aims to provide a distributed cloud infrastructure by placing computational resources close to end-users. However, most SFC research focuses only on Multi-access Edge Computing (MEC) use cases, where mobile operators aim to deploy services close to end-users. Bi-directional communication between the Edge and the Cloud is not considered in MEC, whereas it is highly important in a Fog environment, for example in distributed anomaly detection services. Therefore, in this paper, we propose an SFC controller to optimize the placement of service chains in Fog environments, specifically tailored to Smart City use cases. Our approach has been validated on the Kubernetes platform, an open-source orchestrator for the automatic deployment of micro-services. Our SFC controller has been implemented as an extension to the scheduling features available in Kubernetes, enabling the efficient provisioning of container-based SFCs while optimizing resource allocation and reducing end-to-end (E2E) latency. Results show that the proposed approach can lower the network latency by up to 18% for the studied use case while conserving bandwidth, compared to the default scheduling mechanism.
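
    The sketch below illustrates the general flavour of delay-aware placement: a toy node-scoring function that trades off latency to the previous function in the chain against remaining CPU headroom. It is only a sketch under invented node names, latencies, and weights, not the paper's actual Kubernetes-based SFC controller.

```python
# Toy delay-aware scoring for placing one function of a service chain:
# infeasible nodes are filtered out, the rest are scored by a weighted sum of
# closeness to the previous hop and free-CPU headroom (higher is better).
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpu_millicores: int
    rtt_ms_to_prev_hop: float  # latency towards the previous function in the chain

def score(node: Node, cpu_request: int, w_latency: float = 0.7) -> float:
    if node.free_cpu_millicores < cpu_request:
        return float("-inf")  # infeasible node, effectively filtered out
    headroom = (node.free_cpu_millicores - cpu_request) / node.free_cpu_millicores
    latency_term = 1.0 / (1.0 + node.rtt_ms_to_prev_hop)
    return w_latency * latency_term + (1.0 - w_latency) * headroom

nodes = [
    Node("fog-node-a", 4000, rtt_ms_to_prev_hop=2.0),    # close to the data source
    Node("cloud-node-b", 16000, rtt_ms_to_prev_hop=35.0),
]
best = max(nodes, key=lambda n: score(n, cpu_request=500))
print(best.name)  # prefers the nearby fog node for this latency-sensitive function
```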