29 research outputs found

    Addressing the Challenges in Federating Edge Resources

    This book chapter considers how Edge deployments can be brought to bear in a global context by federating them across multiple geographic regions to create a global Edge-based fabric that decentralizes data center computation. This is currently impractical, not only because of technical challenges, but also because of social, legal, and geopolitical issues. In this chapter, we discuss two key challenges in federating Edge deployments: networking and management. Additionally, we consider the resource and modeling challenges that will need to be addressed for a federated Edge.
    Comment: Book chapter accepted to Fog and Edge Computing: Principles and Paradigms; Editors Buyya, Srirama

    The Computing Fleet: Managing Microservices-based Applications on the Computing Continuum

    In this paper we propose the concept of the "Computing Fleet" as an abstract entity representing groups of heterogeneous, distributed, and dynamic infrastructure elements across the Computing Continuum (covering the Edge-Fog-Cloud computing paradigms). When using fleets, stakeholders obtain virtual resources from the fleet, deploy software applications to it, and control the data flow, without worrying about which devices are used in the fleet, how they are connected, or when they may join and exit the fleet. We propose a three-layer reference architecture for the Computing Fleet capturing the key elements for designing and operating fleets. We discuss key aspects related to the management of microservices-based applications on the Computing Fleet and propose an approach for the deployment and orchestration of microservices-based applications on fleets. Furthermore, we present a software prototype as a preliminary evaluation of the Computing Fleet concept in a concrete Cloud-Edge scenario related to remote patient monitoring.
    acceptedVersion
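    The fleet abstraction described in this abstract — devices joining and leaving while stakeholders see only aggregate virtual resources — could be sketched roughly as follows (class names, device names, and capacities are illustrative assumptions, not taken from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    """A concrete infrastructure element (edge node, fog node, cloud VM)."""
    name: str
    cpu_cores: int

@dataclass
class Fleet:
    """Abstract entity hiding which devices currently back the fleet.

    Devices may join and leave at any time; stakeholders only see the
    aggregate virtual resources, never the individual members.
    """
    members: list = field(default_factory=list)

    def join(self, device: Device) -> None:
        self.members.append(device)

    def leave(self, name: str) -> None:
        self.members = [d for d in self.members if d.name != name]

    def virtual_cpu_cores(self) -> int:
        # Stakeholders request capacity from this aggregate view,
        # not from individual devices.
        return sum(d.cpu_cores for d in self.members)

fleet = Fleet()
fleet.join(Device("edge-cam-1", cpu_cores=2))
fleet.join(Device("fog-gw-1", cpu_cores=8))
fleet.join(Device("cloud-vm-1", cpu_cores=16))
fleet.leave("edge-cam-1")          # churn is invisible to the application
print(fleet.virtual_cpu_cores())   # 24
```

    The point of the sketch is the interface boundary: applications talk to `Fleet`, never to `Device`, so membership churn never surfaces in application code.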

    Li-Fi based on security cloud framework for future IT environment

    This study was supported by the Research Program funded by SeoulTech (Seoul National University of Science and Technology).
    Peer reviewed. Publisher PDF.

    Fog Computing Resource Optimization: A Review on Current Scenarios and Resource Management

    The huge and unpredictable volume of data now generated by smart computing devices (sensors, actuators, Wi-Fi routers) is difficult to handle and process in real time on a centralized cloud platform because of the cloud's limitations, issues, and challenges. To overcome these, Cisco introduced the Fog computing paradigm as an alternative to cloud-based computing. This recent IT trend is taking the computing experience to the next level: it is an advantageous extension of centralized cloud computing technology. In this article, we highlight the various issues that cloud computing currently faces. We present a comprehensive review of fog computing, differentiate it from cloud computing, and present various use cases of fog computing in different domains. We conclude that fog computing leads to efficient energy resource management, leveraging energy in terms of both consumption and cost. Further, we highlight its key features, challenges, issues, and resource optimization methods.

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services, using computational and storage resources close to the end devices at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine the efficiency of resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are resource-constrained to various degrees. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions.
    Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services.
    Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal
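    The survey's four classification perspectives could be captured as a small record type for tagging reviewed articles. The enumerated values below mix terms mentioned in the abstract with illustrative guesses, not the paper's exact vocabulary:

```python
from dataclasses import dataclass

# Allowed values per perspective (illustrative, partly inferred from the abstract).
RESOURCE_TYPES = {"computation", "communication", "data", "storage", "energy"}
OBJECTIVES = {"estimation", "discovery", "sharing", "allocation"}
LOCATIONS = {"end device", "edge device", "cloud"}
USES = {"functional", "non-functional"}

@dataclass(frozen=True)
class TaxonomyEntry:
    """One reviewed article, tagged along the four perspectives."""
    resource_type: str
    objective: str
    location: str
    use: str

    def __post_init__(self):
        # Reject tags outside the taxonomy's vocabulary.
        assert self.resource_type in RESOURCE_TYPES
        assert self.objective in OBJECTIVES
        assert self.location in LOCATIONS
        assert self.use in USES

entry = TaxonomyEntry("computation", "allocation", "edge device", "functional")
print(entry.resource_type)  # computation
```

    Tagging a corpus this way makes the survey's gap analysis mechanical: counting entries per value immediately shows which resource types and objectives are under-studied.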

    Simulating fog and edge computing scenarios: an overview and research challenges

    The fourth industrial revolution heralds a paradigm shift in how people, processes, things, data, and networks communicate and connect with each other. Conventional computing infrastructures are struggling to satisfy dramatic growth in demand from a deluge of connected heterogeneous endpoints located at the edge of networks while, at the same time, meeting quality of service levels. The complexity of computing at the edge makes it increasingly difficult for infrastructure providers to plan for and provision resources to meet this demand. While simulation frameworks are used extensively in the modelling of cloud computing environments to test and validate technical solutions, they are at a nascent stage of development and adoption for fog and edge computing. This paper provides an overview of the challenges posed by fog and edge computing in relation to simulation.

    Hybrid clouds for data-intensive, 5G-enabled IoT applications: an overview, key issues and relevant architecture

    Hybrid cloud multi-access edge computing (MEC) deployments have been proposed as an efficient means to support Internet of Things (IoT) applications, relying on a plethora of nodes and data. In this paper, an overview of the area of hybrid clouds is given, considering relevant research areas and providing technologies and mechanisms for the formation of such MEC deployments, as well as emphasizing several key issues that should be tackled by novel approaches, especially under the 5G paradigm. Furthermore, a decentralized hybrid cloud MEC architecture, realized as a Platform-as-a-Service (PaaS), is proposed, and its main building blocks and layers are thoroughly described. To offer a broad perspective on the business potential of such a platform, the stakeholder ecosystem is also analyzed. Finally, two use cases in the context of smart cities and mobile health are presented, showing how the proposed PaaS enables the development of the respective IoT applications.
    Peer reviewed. Postprint (published version).

    Scheduling in cloud and fog architecture: identification of limitations and suggestion of improvement perspectives

    Applications executed in cloud and fog architectures are generally heterogeneous in terms of device and application context. Scheduling these workloads on such architectures is an optimization problem with multiple constraints. Despite countless efforts, task scheduling in these architectures continues to present enticing challenges that lead us to question how tasks are routed between the different physical devices, fog nodes, and the cloud. In fog, because of the density and heterogeneity of its devices, scheduling is very complex, and few studies in the literature have addressed it; scheduling in the cloud, by contrast, has been widely studied. Nonetheless, many surveys address the issue from the perspective of service providers or of optimizing application quality of service (QoS) levels, ignoring contextual information at the level of devices and end users and their user experience. In this paper, we conduct a systematic review of the literature on task scheduling algorithms in existing cloud and fog architectures, study and discuss their limitations, and explore and suggest some perspectives for improvement.
    Calouste Gulbenkian Foundation, PhD scholarship No. 234242, 2019. info:eu-repo/semantics/publishedVersion
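    As a toy illustration of the routing question this abstract raises — how tasks flow between fog nodes and the cloud — a greedy heuristic that places each task on the lowest-latency node with spare capacity might look like this (node names, capacities, and latencies are invented for the sketch, not from the paper):

```python
# Each node advertises remaining capacity and network latency to the user.
nodes = [
    {"name": "cloud-dc", "capacity": 100, "latency_ms": 80},
    {"name": "fog-node-a", "capacity": 10, "latency_ms": 5},
    {"name": "fog-node-b", "capacity": 4, "latency_ms": 8},
]

def place(task_demand):
    """Place a task on the lowest-latency node that can still host it.

    Returns the chosen node's name, or None if no node has room.
    """
    for node in sorted(nodes, key=lambda n: n["latency_ms"]):
        if node["capacity"] >= task_demand:
            node["capacity"] -= task_demand  # reserve the capacity
            return node["name"]
    return None  # reject or queue the task

print(place(8))   # fog-node-a (lowest latency with enough room)
print(place(8))   # cloud-dc  (both fog nodes are now too full)
```

    Real schedulers in this space weigh many more constraints (energy, mobility, user context, QoS); the sketch only shows why placement decisions spill over from fog to cloud as fog capacity saturates.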

    Understanding Interdependencies among Fog System Characteristics

    Fog computing adds decentralized computing, storage, and networking capabilities, with dedicated nodes as an intermediate layer between cloud data centers and edge devices, to solve latency, bandwidth, and resilience issues. However, introducing a fog layer imposes new system design challenges. Fog systems not only exhibit a multitude of key system characteristics (e.g., security, resilience, interoperability) but are also beset with various interdependencies among those characteristics that require developers' attention. Such interdependencies can either be trade-offs, where improving the fog system on one characteristic impairs it on another, or synergies, where improving the system on one characteristic also improves it on another. As system developers face a multifaceted and complex set of potential system design measures, it is challenging for them to oversee all potentially resulting interdependencies, mitigate trade-offs, and foster synergies. Until now, the literature on fog system architecture has only analyzed such interdependencies in isolation for specific characteristics, limiting the applicability and generalizability of the proposed system designs when characteristics other than those considered are critical. We aim to fill this gap by conducting a literature review to (1) synthesize the most relevant characteristics of fog systems and the design measures to achieve them, and (2) derive interdependencies among all key characteristics. From reviewing 147 articles on fog system architectures, we reveal 11 key characteristics and 39 interdependencies. We supplement the key characteristics with a description, the reason for their relevance, and related design measures derived from the literature to deepen the understanding of a fog system's potential and clarify semantic ambiguities.
    For the interdependencies, we explain and differentiate each one as positive (a synergy) or negative (a trade-off), guiding practitioners and researchers in future design choices to avoid pitfalls and unleash the full potential of fog computing.
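    The synergy/trade-off relation among characteristics could be represented as a signed lookup over characteristic pairs. The pairs below are illustrative examples invented for the sketch, not the paper's actual 39 interdependencies:

```python
# +1 marks a synergy, -1 a trade-off, between two system characteristics.
INTERDEPENDENCIES = {
    ("security", "latency"): -1,             # e.g. encryption adds processing delay
    ("resilience", "interoperability"): +1,  # e.g. standard interfaces ease failover
    ("scalability", "resilience"): +1,
}

def relation(a, b):
    """Look up the signed relation regardless of argument order.

    Returns +1 (synergy), -1 (trade-off), or None if no interdependency
    between the two characteristics is recorded.
    """
    return INTERDEPENDENCIES.get((a, b)) or INTERDEPENDENCIES.get((b, a))

print(relation("latency", "security"))  # -1
```

    A developer weighing a design measure could then query all characteristics it touches and see at a glance which trade-offs need mitigation.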