
    Next Generation Cloud Computing: New Trends and Research Directions

    The landscape of cloud computing has significantly changed over the last decade. Not only have more providers and service offerings crowded the space, but also cloud infrastructure that was traditionally limited to single-provider data centers is now evolving. In this paper, we first discuss the changing cloud infrastructure and consider the use of infrastructure from multiple providers and the benefit of decentralising computing away from data centers. These trends have resulted in the need for a variety of new computing architectures that will be offered by future cloud infrastructure. These architectures are anticipated to impact areas such as connecting people and devices, data-intensive computing, the service space and self-learning systems. Finally, we lay out a roadmap of challenges that will need to be addressed for realising the potential of next generation cloud systems. Comment: Accepted to Future Generation Computer Systems, 07 September 201

    FogSpot: Spot Pricing for Application Provisioning in Edge/Fog Computing

    An increasing number of Low Latency Applications (LLAs) in the entertainment, IoT, and automotive domains require response times that challenge the traditional application provisioning using distant Data Centres. The fog computing paradigm extends cloud computing to the edge and middle-tier locations of the network, providing response times an order of magnitude smaller than those that can be achieved by the current "client-to-cloud" network model. Here, we address the challenges of provisioning heavily stateful LLAs in a setting where the fog infrastructure consists of third-party computing resources, i.e., cloudlets, which come in the form of "data centres in a box". We introduce FogSpot, a charging mechanism for on-path, on-demand application provisioning. In FogSpot, cloudlets offer their resources in the form of Virtual Machines (VMs) via markets, collocated with the cloudlets, that interact with forwarded users' application requests for VMs in real time. FogSpot associates each cloudlet with a spot price based on current application requests. The proposed mechanism's design takes into account the characteristics of cloudlets' resources, such as their limited elasticity, and LLAs' attributes, like the expected QoS gain and engagement duration. Lastly, FogSpot guarantees the truthfulness of end users' requests while focusing on maximising either each cloudlet's revenue or resource utilisation.
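    The snippet below is a minimal sketch of the demand-driven pricing idea described in the abstract, assuming a single cloudlet with fixed VM capacity whose spot price rises with utilisation and outstanding requests; the class name CloudletMarket, the linear price formula, and all parameters are illustrative assumptions, not FogSpot's actual mechanism.

```python
from dataclasses import dataclass

@dataclass
class CloudletMarket:
    """Toy cloudlet market: fixed VM capacity, demand-driven spot price (illustrative)."""
    capacity: int            # total VMs the cloudlet can host (limited elasticity)
    base_price: float = 1.0  # price charged when the cloudlet is idle
    active_vms: int = 0

    def spot_price(self, pending_requests: int) -> float:
        # Price grows with current utilisation and with the queue of pending requests,
        # so scarce edge capacity goes to the requests that value it most.
        utilisation = self.active_vms / self.capacity
        return self.base_price * (1.0 + utilisation + pending_requests / self.capacity)

    def admit(self, bid: float, pending_requests: int) -> bool:
        """Admit a VM request if capacity remains and its bid meets the current spot price."""
        if self.active_vms < self.capacity and bid >= self.spot_price(pending_requests):
            self.active_vms += 1
            return True
        return False

if __name__ == "__main__":
    market = CloudletMarket(capacity=4)
    for bid in (1.2, 1.4, 2.0, 1.1):
        print(f"bid={bid}  admitted={market.admit(bid, pending_requests=2)}")
```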

    Mobile Edge Computing Empowers Internet of Things

    In this paper, we propose a Mobile Edge Internet of Things (MEIoT) architecture by leveraging fiber-wireless access technology, the cloudlet concept, and the software defined networking framework. The MEIoT architecture brings computing and storage resources close to Internet of Things (IoT) devices in order to speed up IoT data sharing and analytics. Specifically, the IoT devices (belonging to the same user) are associated with a specific proxy Virtual Machine (VM) in the nearby cloudlet. The proxy VM stores and analyzes the IoT data (generated by its IoT devices) in real time. Moreover, we introduce the semantic and social IoT technology in the context of MEIoT to solve the interoperability and inefficient access control problems in the IoT system. In addition, we propose two dynamic proxy VM migration methods to minimize the end-to-end delay between proxy VMs and their IoT devices and to minimize the total on-grid energy consumption of the cloudlets, respectively. The performance of the proposed methods is validated via extensive simulations.
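    A minimal sketch of the delay-driven side of such a migration decision, assuming a proxy VM is simply moved to the cloudlet with the lowest mean measured delay to its owner's IoT devices; the function name, the delay matrix, and the averaging rule are assumptions for illustration and ignore the energy-aware method and any migration cost.

```python
from typing import Dict, List

def choose_cloudlet(delays: Dict[str, Dict[str, float]], devices: List[str]) -> str:
    """Return the cloudlet with the lowest mean end-to-end delay to the given IoT devices.

    delays[cloudlet][device] is the measured delay in milliseconds.
    """
    return min(delays, key=lambda c: sum(delays[c][d] for d in devices) / len(devices))

if __name__ == "__main__":
    measured = {
        "cloudlet_A": {"sensor1": 12.0, "sensor2": 30.0},
        "cloudlet_B": {"sensor1": 18.0, "sensor2": 9.0},
    }
    # Mean delay: cloudlet_A = 21.0 ms, cloudlet_B = 13.5 ms -> migrate to cloudlet_B
    print(choose_cloudlet(measured, ["sensor1", "sensor2"]))
```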

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine the efficiency of resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify some gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery and sharing objectives. As for resource types, the most well-studied resources are computation and communication resources. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to be different in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal
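    To make the four-perspective classification concrete, here is a minimal sketch of how surveyed works could be tagged and counted to reveal under-represented categories; the sample entries and field values are invented for illustration, not taken from the survey.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Work:
    resource_type: str  # e.g. computation, communication, data, storage, energy
    objective: str      # e.g. allocation, estimation, discovery, sharing
    location: str       # end device, edge device, cloud
    use: str            # what the resource is used for

# Hypothetical corpus entries tagged along the four perspectives
corpus = [
    Work("computation", "allocation", "edge device", "latency reduction"),
    Work("communication", "allocation", "edge device", "throughput"),
    Work("computation", "sharing", "end device", "latency reduction"),
]

# Counting works per resource type highlights which resources are less studied.
print(Counter(w.resource_type for w in corpus))
```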

    DEEM: Enabling microservices via DEvice edge markets

    Native applications running on handheld devices play an irreplaceable role in users' daily activities. That said, recent studies show that users download, on average, zero new applications on a monthly basis, which suggests that new apps can face discoverability issues. In this work, we aim for web-based, download/installation-free access to native application features through microservices (μServices) that are shared between user devices in a peer-to-peer (P2P) manner. Such a P2P approach is self-scalable and requires no investment for μService deployment, unlike mobile edge computing or Data Centre deployments. We introduce DEEM, a DEvice Edge Market design that makes device-hosted μServices available to end-users. In DEEM, μService-based markets act as rendezvous points between available μService instances and clients. DEEM ensures i) the assignment of instances to the users that value them the most, in terms of QoS gain, and ii) the maximisation of devices' income. Our evaluation in synthetic settings demonstrates DEEM's capability to exploit the pool of device instances to improve application QoS in terms of latency.
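    A minimal sketch of the assignment side of such a market, assuming each client reports its expected QoS gain as a valuation and each device-hosted instance serves one client; the greedy highest-valuation-first rule and all names are illustrative assumptions, not DEEM's actual pricing or truthfulness machinery.

```python
def assign_instances(valuations: dict, instances: list) -> dict:
    """Give μService instances to the clients that value them most (highest QoS gain first)."""
    assignment, free = {}, list(instances)
    for client in sorted(valuations, key=valuations.get, reverse=True):
        if not free:
            break  # no device-hosted instances left to hand out
        assignment[client] = free.pop(0)
    return assignment

if __name__ == "__main__":
    qos_gain = {"alice": 0.8, "bob": 0.3, "carol": 0.6}   # hypothetical client valuations
    print(assign_instances(qos_gain, ["dev1_svc", "dev2_svc"]))
    # {'alice': 'dev1_svc', 'carol': 'dev2_svc'}
```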

    Software-Defined Cloud Computing: Architectural Elements and Open Challenges

    The variety of existing cloud services creates a challenge for service providers to enforce reasonable Service Level Agreements (SLAs) stating the Quality of Service (QoS) and penalties in case QoS is not achieved. To avoid such penalties while the infrastructure operates with minimum energy and resource wastage, constant monitoring and adaptation of the infrastructure are needed. We refer to Software-Defined Cloud Computing, or simply Software-Defined Clouds (SDC), as an approach for automating the process of optimal cloud configuration by extending the virtualization concept to all resources in a data center. An SDC enables easy reconfiguration and adaptation of physical resources in a cloud infrastructure, to better accommodate QoS demands through software that can describe and manage the various aspects comprising the cloud environment. In this paper, we present an architecture for SDCs in data centers with an emphasis on mobile cloud applications. We present an evaluation showcasing the potential of SDC in two use cases (QoS-aware bandwidth allocation and bandwidth-aware, energy-efficient VM placement) and discuss the research challenges and opportunities in this emerging area. Comment: Keynote Paper, 3rd International Conference on Advances in Computing, Communications and Informatics (ICACCI 2014), September 24-27, 2014, Delhi, India
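    The sketch below illustrates one way a bandwidth-aware, energy-efficient placement policy could look, assuming a simple first-fit rule that packs each VM onto an already-active host with enough spare CPU and bandwidth before any new host is powered on; the Host fields and the first-fit rule are assumptions for illustration, not the exact policy evaluated in the paper.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Host:
    name: str
    cpu_free: float  # spare CPU capacity (cores)
    bw_free: float   # spare network bandwidth (Mbps)

def place_vm(active_hosts: List[Host], cpu: float, bw: float) -> Optional[str]:
    """First-fit placement on already-active hosts, so idle hosts can stay powered off."""
    for host in active_hosts:
        if host.cpu_free >= cpu and host.bw_free >= bw:
            host.cpu_free -= cpu
            host.bw_free -= bw
            return host.name
    return None  # no fit: a new host would have to be switched on

if __name__ == "__main__":
    hosts = [Host("h1", cpu_free=2.0, bw_free=100.0), Host("h2", cpu_free=8.0, bw_free=900.0)]
    print(place_vm(hosts, cpu=4.0, bw=200.0))  # h1 lacks capacity, so the VM lands on h2
```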