
    Addressing the Node Discovery Problem in Fog Computing

    In recent years, the Internet of Things (IoT) has gained considerable attention because it connects various sensor devices with the cloud to enable smart applications such as smart traffic management, smart houses, and smart grids, among others. Due to the growing popularity of the IoT, the number of Internet-connected devices has increased significantly. As a result, these devices generate a huge amount of network traffic, which may lead to bottlenecks and eventually increase the communication latency to the cloud. To cope with these issues, a new computing paradigm has emerged, namely fog computing. Fog computing spans from the cloud to the edge of the network in order to distribute the computation on IoT data and to reduce communication latency. However, fog computing is still in its infancy, and several related problems remain open. In this paper, we focus on the node discovery problem, i.e., how to add new compute nodes to a fog computing system. Moreover, we discuss how addressing this problem can have a positive impact on various aspects of fog computing, such as fault tolerance, resource heterogeneity, proximity awareness, and scalability. Finally, based on experimental results produced by simulating various distributed compute nodes, we show how addressing the node discovery problem can improve the fault tolerance of a fog computing system.
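    As a minimal sketch of what node discovery can look like in such a system (this is not the paper's protocol; names such as FogRegistry and heartbeat_timeout below are illustrative assumptions), a registry to which new compute nodes announce themselves and periodically report heartbeats already hints at the fault-tolerance benefit, since unresponsive nodes can be pruned:

```python
# Minimal sketch of node discovery for a fog system (illustrative only, not the
# paper's method): nodes announce themselves to a registry and send heartbeats;
# nodes with stale heartbeats are treated as failed.
import time
from dataclasses import dataclass, field


@dataclass
class FogNode:
    node_id: str
    address: str
    last_heartbeat: float = field(default_factory=time.monotonic)


class FogRegistry:
    """Tracks compute nodes that have announced themselves to the system."""

    def __init__(self, heartbeat_timeout: float = 10.0):
        self.heartbeat_timeout = heartbeat_timeout
        self.nodes: dict[str, FogNode] = {}

    def announce(self, node_id: str, address: str) -> None:
        # A newly started compute node registers itself (node discovery).
        self.nodes[node_id] = FogNode(node_id, address)

    def heartbeat(self, node_id: str) -> None:
        # Known nodes refresh their liveness timestamp periodically.
        if node_id in self.nodes:
            self.nodes[node_id].last_heartbeat = time.monotonic()

    def alive_nodes(self) -> list[FogNode]:
        # Nodes whose heartbeat is stale are considered failed; this is where
        # discovery state can feed into fault tolerance.
        now = time.monotonic()
        return [n for n in self.nodes.values()
                if now - n.last_heartbeat < self.heartbeat_timeout]


if __name__ == "__main__":
    registry = FogRegistry(heartbeat_timeout=5.0)
    registry.announce("edge-1", "10.0.0.11:7000")
    registry.announce("edge-2", "10.0.0.12:7000")
    registry.heartbeat("edge-1")
    print([n.node_id for n in registry.alive_nodes()])
```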

    Fog Computing: A Taxonomy, Survey and Future Directions

    In recent years, the number of Internet of Things (IoT) devices/sensors has increased to a great extent. To support the computational demand of real-time, latency-sensitive applications of largely geo-distributed IoT devices/sensors, a new computing paradigm named "Fog computing" has been introduced. Generally, Fog computing resides closer to the IoT devices/sensors and extends the Cloud-based computing, storage and networking facilities. In this chapter, we comprehensively analyse the challenges in Fogs acting as an intermediate layer between IoT devices/sensors and Cloud datacentres and review the current developments in this field. We present a taxonomy of Fog computing according to the identified challenges and its key features. We also map the existing works to the taxonomy in order to identify current research gaps in the area of Fog computing. Moreover, based on the observations, we propose future directions for research.

    Optimal Resource Allocation in Ultra-low Power Fog-computing SWIPT-based Networks

    In this paper, we consider a fog computing system consisting of a multi-antenna access point (AP), an ultra-low power (ULP) single-antenna device and a fog server. The ULP device is assumed to be capable of both energy harvesting (EH) and information decoding (ID) using a time-switching simultaneous wireless information and power transfer (SWIPT) scheme. The ULP device uses the harvested energy for ID and for either local computing or offloading the computations to the fog server, depending on which strategy is more energy efficient. In this scenario, we optimize the time slots devoted to EH, ID and local computation, as well as the time slot and power required for offloading, to minimize the energy cost of the ULP device. Numerical results are provided to study the effectiveness of the optimized fog computing system and the relevant challenges.
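    The paper's exact formulation is not reproduced here; purely as an illustration of this class of time-switching SWIPT energy-minimization problems, a sketch might look as follows. All symbols (harvesting efficiency \eta, AP transmit power P_{AP}, ID circuit power P_I, CPU frequency f with C cycles per bit and power coefficient \kappa, bandwidth B, normalized channel gain \gamma, workload of L bits, deadline T) are assumptions for this sketch, not the paper's notation.

```latex
% Illustrative sketch only; the decision variables are the EH/ID/local-computation/
% offloading time slots \tau_E, \tau_I, \tau_c, \tau_o and the offloading power p_o.
\begin{aligned}
\min_{\tau_E,\,\tau_I,\,\tau_c,\,\tau_o,\,p_o}\quad
  & P_I \tau_I + \kappa f^{3} \tau_c + p_o \tau_o
  && \text{(energy consumed by the ULP device)} \\
\text{s.t.}\quad
  & P_I \tau_I + \kappa f^{3} \tau_c + p_o \tau_o \le \eta\, P_{\mathrm{AP}}\, \tau_E
  && \text{(energy causality: spend at most what is harvested)} \\
  & \frac{f \tau_c}{C} + \tau_o B \log_2\!\left(1 + \gamma p_o\right) \ge L
  && \text{(workload completed locally and/or offloaded)} \\
  & \tau_E + \tau_I + \tau_c + \tau_o \le T,
  \qquad \tau_E,\, \tau_I,\, \tau_c,\, \tau_o,\, p_o \ge 0 .
\end{aligned}
```

    A full treatment would also model the multi-antenna channel and beamforming gains in both the harvested energy and the offloading rate; the sketch only fixes which quantities are optimized and how the time-switching structure enters the constraints.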