
    Foggy clouds and cloudy fogs: a real need for coordinated management of fog-to-cloud computing systems

    The recent advances in cloud services technology are fueling a plethora of information technology innovation, including networking, storage, and computing. Today, various flavors of IoT, cloud computing, and so-called fog computing have evolved, the latter referring to the capability of edge devices and users' clients to compute, store, and exchange data among each other and with the cloud. Although the rapid pace of this evolution was not easily foreseeable, today each piece of it enables the deployment of what we commonly refer to as smart scenarios, including smart cities, smart transportation, and smart homes. As most current cloud, fog, and network services run simultaneously in each scenario, we observe that we are at the dawn of what may be the next big step in the evolution of cloud computing and networking, whereby services might be executed at the network edge, both in parallel and in a coordinated fashion, supported by the relentless evolution of the underlying technology. As edge devices become smarter and richer in functionality, embedding capabilities such as storage and processing as well as new functions such as decision making, data collection, forwarding, and sharing, a real need is emerging for coordinated management of fog-to-cloud (F2C) computing systems. This article introduces a layered F2C architecture, its benefits and strengths, and the open research challenges it raises, making the case for the coordinated management of such systems. Our architecture, the illustrative use case presented, and a comparative performance analysis, albeit conceptual, all clearly show the way forward toward a new IoT scenario in which existing and unforeseen services are provided on highly distributed and dynamic compute, storage, and networking resources, bringing together heterogeneous and commodity edge devices, emerging fogs, and conventional clouds.
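
    The layered placement idea behind F2C can be illustrated with a minimal sketch, assuming a simple capacity-based policy in which a task is served at the lowest layer (end device, fog, cloud) that still has spare capacity and escalates toward the cloud otherwise; the `Layer` class and `place_task` function below are hypothetical and not part of the article.

    ```python
    from dataclasses import dataclass

    # Minimal sketch of coordinated fog-to-cloud (F2C) placement:
    # a task is hosted at the lowest layer with spare capacity and
    # escalates toward the cloud when edge/fog resources run out.
    # Class and function names are illustrative assumptions.

    @dataclass
    class Layer:
        name: str
        cpu_capacity: float      # abstract CPU units available at this layer
        cpu_used: float = 0.0

        def can_host(self, demand: float) -> bool:
            return self.cpu_used + demand <= self.cpu_capacity

        def host(self, demand: float) -> None:
            self.cpu_used += demand

    def place_task(layers, demand):
        """Place a task on the first (lowest) layer that can host it."""
        for layer in layers:     # layers ordered end device -> fog -> cloud
            if layer.can_host(demand):
                layer.host(demand)
                return layer.name
        raise RuntimeError("no layer can host the task")

    if __name__ == "__main__":
        hierarchy = [
            Layer("end-device", cpu_capacity=2.0),
            Layer("fog-node", cpu_capacity=8.0),
            Layer("cloud", cpu_capacity=float("inf")),
        ]
        for demand in (1.5, 1.0, 4.0, 20.0):
            print(demand, "->", place_task(hierarchy, demand))
    ```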

    Vehicular Fog Computing Enabled Real-time Collision Warning via Trajectory Calibration

    Vehicular fog computing (VFC) has been envisioned as a promising paradigm for enabling a variety of emerging intelligent transportation systems (ITS). However, due to inevitable and non-negligible issues in wireless communication, including transmission latency and packet loss, implementing safety-critical applications such as real-time collision warning in vehicular networks remains challenging. In this paper, we present a vehicular fog computing architecture aimed at supporting effective and real-time collision warning by offloading computation and communication overheads to distributed fog nodes. On top of this architecture, we further propose a trajectory calibration based collision warning (TCCW) algorithm along with tailored communication protocols. Specifically, the application-layer vehicle-to-infrastructure (V2I) communication delay is fitted with a stable distribution using real-world field-testing data. Then, a packet-loss detection mechanism is designed. Finally, TCCW calibrates real-time vehicle trajectories based on the received vehicle status (GPS coordinates, velocity, acceleration, and heading direction), the estimated communication delay, and the detected packet loss. For performance evaluation, we build a simulation model and implement conventional solutions, including cloud-based warning and fog-based warning without calibration, for comparison. Real-vehicle trajectories are extracted as the input, and the simulation results demonstrate the effectiveness of TCCW, which achieves the highest precision and recall across a wide range of scenarios.
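
    The calibration step at the heart of TCCW can be sketched as follows. This is a minimal reconstruction based only on the abstract, assuming a constant-acceleration motion model and hypothetical function names (`calibrate_position`, `collision_warning`); the paper's actual delay fitting with a stable distribution and its packet-loss handling are not reproduced here.

    ```python
    import math

    # Minimal sketch of delay-compensated trajectory calibration:
    # a fog node extrapolates a reported vehicle state forward by the
    # estimated V2I communication delay using a constant-acceleration
    # kinematic model. All names and the model are assumptions.

    def calibrate_position(x, y, speed, accel, heading_deg, delay_s):
        """Extrapolate (x, y) forward by the estimated communication delay."""
        heading = math.radians(heading_deg)
        # Distance travelled during the delay under constant acceleration.
        dist = speed * delay_s + 0.5 * accel * delay_s ** 2
        return (x + dist * math.cos(heading),
                y + dist * math.sin(heading))

    def collision_warning(pos_a, pos_b, threshold_m=5.0):
        """Warn if two calibrated positions are closer than the threshold."""
        return math.dist(pos_a, pos_b) < threshold_m

    if __name__ == "__main__":
        # Vehicle A's status report arrived ~120 ms late, vehicle B's ~80 ms late.
        a = calibrate_position(0.0, 0.0, speed=15.0, accel=0.5,
                               heading_deg=0.0, delay_s=0.120)
        b = calibrate_position(6.0, 0.5, speed=14.0, accel=0.0,
                               heading_deg=180.0, delay_s=0.080)
        print("warn" if collision_warning(a, b) else "clear")
    ```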

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to use resources efficiently across all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are resource-constrained to various degrees. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we find that research on data, storage, and energy as resources is less prevalent, and that work on the estimation, discovery, and sharing objectives is less extensive. As for resource types, computation and communication are the most thoroughly studied. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.
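
    The four survey perspectives map naturally onto a simple classification structure. The sketch below, with hypothetical enum members drawn only from examples mentioned in the abstract, shows how a surveyed work could be tagged along the resource type, objective, location, and use dimensions; it illustrates the taxonomy's shape, not the paper's actual category lists.

    ```python
    from dataclasses import dataclass
    from enum import Enum, auto

    # Hypothetical encoding of the survey's four taxonomy perspectives.
    # Enum members are examples taken from the abstract, not the paper's
    # complete category lists.

    class ResourceType(Enum):
        COMPUTATION = auto()
        COMMUNICATION = auto()
        STORAGE = auto()
        DATA = auto()
        ENERGY = auto()

    class Objective(Enum):
        ESTIMATION = auto()
        DISCOVERY = auto()
        SHARING = auto()

    class Location(Enum):
        END_DEVICE = auto()
        EDGE_DEVICE = auto()
        CLOUD = auto()

    class ResourceUse(Enum):
        FUNCTIONAL = auto()
        NON_FUNCTIONAL = auto()

    @dataclass
    class SurveyedWork:
        title: str
        resource_types: set
        objectives: set
        locations: set
        uses: set

    # Classifying a single (fictional) article along the four perspectives.
    example = SurveyedWork(
        title="Energy-aware offloading at the edge",
        resource_types={ResourceType.COMPUTATION, ResourceType.ENERGY},
        objectives={Objective.SHARING},
        locations={Location.END_DEVICE, Location.EDGE_DEVICE},
        uses={ResourceUse.NON_FUNCTIONAL},
    )
    print(example)
    ```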