
    Fog Computing: A Taxonomy, Survey and Future Directions

    In recent years, the number of Internet of Things (IoT) devices/sensors has increased rapidly. To support the computational demand of real-time, latency-sensitive applications of largely geo-distributed IoT devices/sensors, a new computing paradigm named "Fog computing" has been introduced. Generally, Fog computing resides closer to the IoT devices/sensors and extends the Cloud-based computing, storage and networking facilities. In this chapter, we comprehensively analyse the challenges in Fog acting as an intermediate layer between IoT devices/sensors and Cloud datacentres, and review the current developments in this field. We present a taxonomy of Fog computing according to the identified challenges and its key features. We also map the existing works to the taxonomy in order to identify current research gaps in the area of Fog computing. Moreover, based on these observations, we propose future directions for research.
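    The Fog/Cloud split described in this abstract can be pictured with a small placement rule: latency-sensitive tasks stay on a nearby Fog node, everything else is offloaded to the Cloud datacentre. The sketch below is purely illustrative and not taken from the chapter; the class names, thresholds and numbers are hypothetical.

```python
# Toy placement rule for the Fog/Cloud split: keep latency-sensitive work on
# the nearby Fog node, send the rest to the Cloud. All values are hypothetical.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_ms: float   # end-to-end latency the application can tolerate
    cpu_demand: float    # normalised CPU units required

@dataclass
class Node:
    name: str
    rtt_ms: float        # round-trip time from the IoT device to this node
    cpu_free: float      # normalised CPU units currently available

def place(task: Task, fog: Node, cloud: Node) -> Node:
    """Prefer the nearby Fog node when the Cloud is too far away to meet the
    deadline and the Fog node still has capacity; otherwise use the Cloud."""
    if task.deadline_ms < cloud.rtt_ms and fog.cpu_free >= task.cpu_demand:
        return fog
    return cloud

fog = Node("edge-gateway", rtt_ms=5.0, cpu_free=2.0)
cloud = Node("cloud-dc", rtt_ms=80.0, cpu_free=1000.0)
print(place(Task("anomaly-detection", deadline_ms=20.0, cpu_demand=1.0), fog, cloud).name)
```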

    End-to-end resource management for federated delivery of multimedia services

    Recently, the Internet has become a popular platform for the delivery of multimedia content. Currently, multimedia services are offered either by Over-the-top (OTT) providers or by access ISPs over a managed IP network. As OTT providers offer their content across the best-effort Internet, they cannot offer any Quality of Service (QoS) guarantees to their users. On the other hand, users of managed multimedia services are limited to the relatively small selection of content offered by their own ISP. This article presents a framework that combines the advantages of both existing approaches by dynamically setting up federations between the stakeholders involved in the content delivery process. Specifically, the framework provides an automated mechanism to set up end-to-end federations for QoS-aware delivery of multimedia content across the Internet. QoS contracts are automatically negotiated between the content provider, its customers, and the intermediary network domains. Additionally, a federated resource reservation algorithm is presented, which allows the framework to identify the optimal set of stakeholders and resources to include within a federation. Its goal is to minimize delivery costs for the content provider, while satisfying customer QoS requirements. Moreover, the presented framework allows intermediary storage sites to be included in these federations, supporting on-the-fly deployment of content caches along the delivery paths. The algorithm was thoroughly evaluated in order to validate our approach and assess the merits of including intermediary storage sites. The results clearly show the benefits of our method, with delivery cost reductions of up to 80% in the evaluated scenario.
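    The core selection idea in this abstract (choose the set of stakeholders and resources that minimises delivery cost while still meeting the customer's QoS target) can be sketched very roughly as picking the cheapest feasible delivery path. The snippet below is a minimal illustration under strong assumptions (a fixed list of candidate paths, a single latency constraint); the Path structure, names and numbers are hypothetical and the paper's actual reservation algorithm is more involved.

```python
# Pick the cheapest candidate federation/path whose end-to-end latency meets
# the customer's QoS requirement. Hypothetical data, illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Path:
    stakeholders: tuple   # network domains (and optional storage site) in the federation
    cost: float           # delivery cost for the content provider
    latency_ms: float     # expected end-to-end latency

def select_federation(paths: list, max_latency_ms: float) -> Optional[Path]:
    feasible = [p for p in paths if p.latency_ms <= max_latency_ms]
    return min(feasible, key=lambda p: p.cost) if feasible else None

candidates = [
    Path(("CP", "ISP-A"), cost=10.0, latency_ms=120.0),
    Path(("CP", "ISP-A", "cache-1", "ISP-B"), cost=6.0, latency_ms=60.0),
    Path(("CP", "ISP-C"), cost=4.0, latency_ms=200.0),
]
print(select_federation(candidates, max_latency_ms=100.0))
```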

    DEPAS: A Decentralized Probabilistic Algorithm for Auto-Scaling

    The dynamic provisioning of virtualized resources offered by cloud computing infrastructures allows applications deployed in a cloud environment to automatically increase and decrease the amount of resources they use. This capability is called auto-scaling, and its main purpose is to automatically adjust the scale of the system running the application to satisfy the varying workload with minimum resource utilization. The need for auto-scaling is particularly important during workload peaks, in which applications may need to scale up to extremely large-scale systems. Both the research community and the main cloud providers have already developed auto-scaling solutions. However, most research solutions are centralized and not suitable for managing large-scale systems; moreover, cloud providers' solutions are bound to the limitations of a specific provider in terms of resource prices, availability, reliability, and connectivity. In this paper we propose DEPAS, a decentralized probabilistic auto-scaling algorithm integrated into a P2P architecture that is cloud-provider independent, thus allowing the auto-scaling of services over multiple cloud infrastructures at the same time. Our simulations, which are based on real service traces, show that our approach is capable of: (i) keeping the overall utilization of all the instantiated cloud resources in a target range, and (ii) maintaining service response times close to the ones obtained using optimal centralized auto-scaling approaches. Comment: Submitted to Springer Computing.
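    A decentralized probabilistic scaling decision in the spirit of this abstract might look like the sketch below: each node compares a (locally estimated) utilization with a target range and adds or removes capacity with a probability proportional to the deviation. This is not DEPAS itself; the rule, thresholds and function names are assumptions made for illustration.

```python
# Probabilistic local scaling decision: scale up/down with probability
# proportional to how far utilisation sits outside a target range.
# Hypothetical parameters; not the algorithm from the paper.
import random

def scaling_decision(utilisation: float,
                     target_low: float = 0.5,
                     target_high: float = 0.7) -> int:
    """Return +1 (allocate a new instance), -1 (remove this instance) or 0."""
    if utilisation > target_high:
        p_up = min(1.0, (utilisation - target_high) / target_high)
        return +1 if random.random() < p_up else 0
    if utilisation < target_low:
        p_down = min(1.0, (target_low - utilisation) / target_low)
        return -1 if random.random() < p_down else 0
    return 0

random.seed(42)
print([scaling_decision(u) for u in (0.9, 0.6, 0.2)])
```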

    SLA-Oriented Resource Provisioning for Cloud Computing: Challenges, Architecture, and Solutions

    Cloud computing systems promise to offer subscription-oriented, enterprise-quality computing services to users worldwide. With the increased demand for delivering services to a large number of users, they need to offer differentiated services to users and meet their quality expectations. Existing resource management systems in data centers are yet to support Service Level Agreement (SLA)-oriented resource allocation, and thus need to be enhanced to realize cloud computing and utility computing. In addition, no work has been done to collectively incorporate customer-driven service management, computational risk management, and autonomic resource management into a market-based resource management system to target the rapidly changing enterprise requirements of Cloud computing. This paper presents the vision, challenges, and architectural elements of SLA-oriented resource management. The proposed architecture supports integration of market-based provisioning policies and virtualisation technologies for flexible allocation of resources to applications. The performance results obtained from our working prototype system show the feasibility and effectiveness of SLA-based resource provisioning in Clouds. Comment: 10 pages, 7 figures, Conference Keynote Paper: 2011 IEEE International Conference on Cloud and Service Computing (CSC 2011, IEEE Press, USA), Hong Kong, China, December 12-14, 2011.
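    An SLA-oriented admission check of the kind implied by this abstract can be sketched as: admit a request only if there is capacity for it and its expected revenue outweighs the expected SLA-violation penalty (a crude stand-in for the customer-driven and risk-management aspects). This is a toy illustration, not the paper's architecture; all fields, formulas and numbers are hypothetical.

```python
# Toy SLA-aware admission control: check capacity, then weigh revenue against
# the expected penalty of an SLA violation. Hypothetical values throughout.
from dataclasses import dataclass

@dataclass
class Request:
    revenue: float        # price the customer pays
    penalty: float        # payout if the SLA is violated
    demand: float         # capacity units (e.g. VMs) required
    p_violation: float    # estimated risk of missing the SLA if admitted

def admit(req: Request, free_capacity: float) -> bool:
    if req.demand > free_capacity:
        return False
    expected_profit = req.revenue - req.p_violation * req.penalty
    return expected_profit > 0

print(admit(Request(revenue=5.0, penalty=20.0, demand=2.0, p_violation=0.1),
            free_capacity=8.0))
```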