
    Foggy clouds and cloudy fogs: a real need for coordinated management of fog-to-cloud computing systems

    The recent advances in cloud services technology are fueling a plethora of information technology innovation, including networking, storage, and computing. Today, various flavors of IoT, cloud computing, and so-called fog computing have evolved; fog computing refers to the capability of edge devices and users' clients to compute, store, and exchange data among each other and with the cloud. Although the rapid pace of this evolution was not easily foreseeable, today each piece of it facilitates and enables the deployment of what we commonly refer to as a smart scenario, including smart cities, smart transportation, and smart homes. As most current cloud, fog, and network services run simultaneously in each scenario, we observe that we are at the dawn of what may be the next big step in the cloud computing and networking evolution, whereby services might be executed at the network edge, both in parallel and in a coordinated fashion, supported by the unstoppable evolution of technology. As edge devices become richer in functionality and smarter, embedding capabilities such as storage and processing as well as new functionalities such as decision making, data collection, forwarding, and sharing, a real need is emerging for coordinated management of fog-to-cloud (F2C) computing systems. This article introduces a layered F2C architecture, its benefits and strengths, and the open research challenges that arise, making the case for the real need for coordinated management. Our architecture, the illustrative use case presented, and a comparative performance analysis, albeit conceptual, all clearly show the way forward toward a new IoT scenario with a set of existing and unforeseen services provided on highly distributed and dynamic compute, storage, and networking resources, bringing together heterogeneous and commodity edge devices, emerging fogs, and conventional clouds. Peer Reviewed. Postprint (author's final draft).

    Understanding Interdependencies among Fog System Characteristics

    Fog computing adds decentralized computing, storage, and networking capabilities with dedicated nodes as an intermediate layer between cloud data centers and edge devices to solve latency, bandwidth, and resilience issues. However, introducing a fog layer imposes new system design challenges. Fog systems not only exhibit a multitude of key system characteristics (e.g., security, resilience, interoperability) but are also beset with various interdependencies among these key characteristics that require developers' attention. Such interdependencies can either be trade-offs, where improving the fog system on one characteristic impairs it on another, or synergies, where improving the system on one characteristic also improves it on another. As system developers face a multifaceted and complex set of potential system design measures, it is challenging for them to oversee all potentially resulting interdependencies, mitigate trade-offs, and foster synergies. Until now, existing literature on fog system architecture has analyzed such interdependencies only in isolation for specific characteristics, limiting the applicability and generalizability of the proposed system designs when characteristics other than those considered are critical. We aim to fill this gap by conducting a literature review to (1) synthesize the most relevant characteristics of fog systems and the design measures to achieve them, and (2) derive interdependencies among all key characteristics. From reviewing 147 articles on fog system architectures, we reveal 11 key characteristics and 39 interdependencies. We supplement the key characteristics with a description, the reason for their relevance, and related design measures derived from the literature to deepen the understanding of a fog system's potential and clarify semantic ambiguities.
    For the interdependencies, we explain and differentiate each one as positive (a synergy) or negative (a trade-off), guiding practitioners and researchers in future design choices to avoid pitfalls and unleash the full potential of fog computing.
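    The signed relationships the abstract describes (synergies vs. trade-offs between characteristics) can be sketched as a small signed graph. This is a minimal illustration only: the characteristic names and the particular interdependencies below are assumptions for demonstration, not the paper's actual taxonomy of 11 characteristics and 39 interdependencies.

```python
# Hypothetical sketch: fog system characteristics and their interdependencies
# modeled as a signed graph. All entries are illustrative assumptions.
SYNERGY, TRADE_OFF = +1, -1

interdependencies = {
    ("security", "resilience"): SYNERGY,          # e.g., redundancy may aid both
    ("security", "latency"): TRADE_OFF,           # e.g., encryption adds delay
    ("interoperability", "security"): TRADE_OFF,  # e.g., open interfaces widen attack surface
}

def effects_on(characteristic):
    """List the characteristics affected when improving `characteristic`."""
    out = {}
    for (a, b), sign in interdependencies.items():
        if characteristic in (a, b):
            other = b if a == characteristic else a
            out[other] = "synergy" if sign == SYNERGY else "trade-off"
    return out

print(effects_on("security"))
```

    A developer could consult such a map before applying a design measure, checking which other characteristics a change is likely to help or hurt.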

    TrustE-VC: Trustworthy Evaluation Framework for Industrial Connected Vehicles in the Cloud

    The integration between cloud computing and vehicular ad hoc networks, namely, vehicular clouds (VCs), has become a significant research area. This integration was proposed to accelerate the adoption of intelligent transportation systems. As VCs are expected to carry ever more computing capability for managing large-scale collected data, a security evaluation framework is needed that ensures data privacy protection, integrity of information, and availability of resources. To the best of our knowledge, this is the first study to propose a robust trustworthiness evaluation of vehicular clouds for security criteria evaluation and selection. This article proposes three levels of security features in order to develop effectiveness and trustworthiness in VCs. To assess and evaluate these security features, our evaluation framework consists of three main interconnected components: 1) an aggregation of the security evaluation values of the security criteria for each level; 2) a fuzzy multicriteria decision-making algorithm; and 3) simple additive weighting combined with importance-performance analysis and performance rates to visualize the framework findings. The evaluation results for the security criteria, based on the average performance rate and global weight, suggest that data residency, data privacy, and data ownership are the most pressing challenges in assessing data protection in a VC environment. Overall, this article paves the way for a secure VC using an evaluation of effective security features and underscores the directions and challenges facing the VC community.
    This article sheds light on the importance of security by design, emphasizing multiple layers of security when implementing industrial VCs. This work was supported in part by the Ministry of Education, Culture, and Sport, Government of Spain under Grant TIN2016-76373-P; in part by the Xunta de Galicia Accreditation 2016–2019 under Grant ED431G/08 and Grant ED431C 2018/2019; and in part by the European Union under the European Regional Development Fund.
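    The third framework component, simple additive weighting (SAW), is a standard multicriteria scoring step: normalize the criterion weights, then rank each alternative by the weighted sum of its per-criterion scores. A minimal sketch, with made-up weights and ratings (the criterion names echo the abstract, but the numbers are purely illustrative):

```python
# Illustrative simple additive weighting (SAW) sketch.
# Weights and ratings are invented for demonstration, not from the paper.
def saw_scores(weights, ratings):
    """weights: {criterion: weight}; ratings: {alternative: {criterion: score in [0, 1]}}."""
    total = sum(weights.values())
    norm_w = {c: w / total for c, w in weights.items()}   # normalize weights to sum to 1
    return {alt: sum(norm_w[c] * r[c] for c in norm_w) for alt, r in ratings.items()}

weights = {"data_privacy": 0.5, "data_residency": 0.3, "data_ownership": 0.2}
ratings = {
    "design_A": {"data_privacy": 0.9, "data_residency": 0.6, "data_ownership": 0.7},
    "design_B": {"data_privacy": 0.5, "data_residency": 0.9, "data_ownership": 0.8},
}
scores = saw_scores(weights, ratings)
best = max(scores, key=scores.get)
print(best, scores[best])
```

    The fuzzy multicriteria step in the actual framework would produce the weights and ratings; SAW then collapses them into a single ranking, which the importance-performance analysis visualizes.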

    Design and Implementation of Secure Location Service Using Software Engineering Approach in the Age of Industry 4.0

    Data privacy and security are major concerns in any location-based system. In the majority of location-based systems, data security is ensured via data replacement policies. Data replacement or hiding policies require additional measures to provide the security standards required for Industry 4.0. Cryptographic primitives and protocols, by contrast, are an integral part of any network and can be reused to secure users' locations in Industry 4.0 applications. In this work, an application has been designed and developed that uses the RSA encryption/decryption algorithm to ensure the confidentiality of location data. The proposed system is distributed in nature and grants access to location information only after users are authenticated and authorized. In the proposed system, a threshold-based subset mechanism is adopted for keys and their storage. The server is designed to securely store the location information of clients and provide it to the set of clients or users who can verify the sum of a subset of keys. This work elaborates the design for location-data confidentiality in a distributed client/server environment and presents the system's workings in depth with different flow diagrams. The command line and graphical user interface (GUI) based implementations show that the proposed system works with standard system requirements (an i5 processor, 4 GB of RAM, and a 64-bit operating system). In addition to location information, the system provides further useful information (including IP address, timestamp, time to access, and hop count) that enhances its overall capabilities.
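    To make the RSA-based confidentiality idea concrete, here is a textbook-RSA sketch of encrypting a coarse location record. This is not the paper's implementation: the toy primes and the integer-degree location encoding are illustrative assumptions, and a real deployment would use a vetted library (e.g., `cryptography`) with proper padding and key sizes.

```python
# Textbook RSA with toy primes -- for illustration only, never for production.
p, q = 61, 53                 # small demo primes
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)       # Euler's totient (3120)
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent via modular inverse (Python 3.8+)

def encrypt(m):
    return pow(m, e, n)       # c = m^e mod n

def decrypt(c):
    return pow(c, d, n)       # m = c^d mod n

# Hypothetical encoding: latitude/longitude as small integers, one per block.
location = [40, 73]
ciphertext = [encrypt(m) for m in location]
recovered = [decrypt(c) for c in ciphertext]
print(recovered)
```

    In the described system, the server would release such ciphertexts only to clients that pass authentication and the threshold-based subset-of-keys check.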

    Toward sustainable serverless computing

    Although serverless computing generally involves executing short-lived “functions,” the increasing migration to this computing paradigm requires careful consideration of energy and power requirements. Serverless computing is also viewed as an economically driven computational approach, often influenced by the cost of computation, as users are charged per subsecond of computational resource use rather than via the coarse-grained charging that is common with virtual machines and containers. To ensure that the startup times of serverless functions do not discourage their use, resource providers need to keep these functions hot, often by passing in synthetic data. We describe the real power consumption characteristics of serverless computing, based on execution traces reported in the literature, and describe potential strategies (some adopted from existing VM- and container-based approaches) that can be used to reduce the energy overheads of serverless execution. Our analysis is, purposefully, biased toward machine learning workloads because: (1) such workloads are increasingly used widely across different applications; and (2) functions that implement machine learning algorithms range in complexity from long-running (deep learning training) to short-running (inference only), enabling us to consider serverless execution across a variety of possible behaviors. The general findings are easily translatable to other domains. Postprint. Peer reviewed.
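    The keep-hot trade-off the abstract raises can be framed as a simple energy model: a provider pays a continuous idle power draw to keep a function hot, versus a burst of startup energy per cold start. A back-of-the-envelope sketch, where every figure (2 W idle, 30 W over a 0.5 s cold start) is an assumed, illustrative number rather than a measured value:

```python
# Toy energy model for keep-hot vs. cold-start -- all numbers are assumptions.
def keep_hot_energy(idle_watts, window_s):
    """Joules spent keeping a function instance hot over the window."""
    return idle_watts * window_s

def cold_start_energy(startup_watts, startup_s, invocations):
    """Joules spent on cold starts if the function is not kept hot."""
    return startup_watts * startup_s * invocations

WINDOW = 3600.0                                   # 1-hour window
hot = keep_hot_energy(2.0, WINDOW)                # 2 W idle draw
per_cold = cold_start_energy(30.0, 0.5, 1)        # one 0.5 s cold start at 30 W

# Invocation count at which keeping the function hot breaks even:
break_even = hot / per_cold
print(break_even)
```

    Under these assumed figures, keeping a function hot only saves energy above a fairly high invocation rate, which is why strategies like shared warm pools or predictive pre-warming (adapted from VM and container management) are attractive.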