
    Multiple Linear Regression-Based Energy-Aware Resource Allocation in the Fog Computing Environment

    Full text link
    Fog computing is a promising computing paradigm for time-sensitive Internet of Things (IoT) applications. It helps to process data close to the users, in order to deliver faster processing outcomes than the Cloud; it also helps to reduce network traffic. The computation environment in the Fog computing is highly dynamic and most of the Fog devices are battery powered hence the chances of application failure is high which leads to delaying the application outcome. On the other hand, if we rerun the application in other devices after the failure it will not comply with time-sensitiveness. To solve this problem, we need to run applications in an energy-efficient manner which is a challenging task due to the dynamic nature of Fog computing environment. It is required to schedule application in such a way that the application should not fail due to the unavailability of energy. In this paper, we propose a multiple linear, regression-based resource allocation mechanism to run applications in an energy-aware manner in the Fog computing environment to minimise failures due to energy constraint. Prior works lack of energy-aware application execution considering dynamism of Fog environment. Hence, we propose A multiple linear regression-based approach which can achieve such objectives. We present a sustainable energy-aware framework and algorithm which execute applications in Fog environment in an energy-aware manner. The trade-off between energy-efficient allocation and application execution time has been investigated and shown to have a minimum negative impact on the system for energy-aware allocation. We compared our proposed method with existing approaches. Our proposed approach minimises the delay and processing by 20%, and 17% compared with the existing one. Furthermore, SLA violation decrease by 57% for the proposed energy-aware allocation.Comment: 8 Pages, 9 Figure

    Deadline-based dynamic resource allocation and provisioning algorithms in Fog-Cloud environment

    No full text
    The Fog computing paradigm is becoming prominent in supporting time-sensitive applications related to smart Internet of Things (IoT) services, such as smart city and smart healthcare. Although Cloud computing is a promising paradigm for IoT data processing, its high latency prevents it from satisfying the requirements of time-sensitive applications. Resource allocation and provisioning in the Fog-Cloud environment, considering dynamic changes in user requirements and the limited resources available on Fog devices, is a challenging task. Among the dynamically changing parameters of user requirements, the deadline is the most important challenge in the Fog computing environment. Current works on Fog computing address resource provisioning without considering dynamic changes in users' requirements. To satisfy deadline-based dynamic user requirements, we propose resource allocation and provisioning algorithms that rank resources and provision them in a hybrid and hierarchical fashion. The proposed algorithms are evaluated in a simulation environment by extending the CloudSim toolkit to simulate a realistic Fog environment. The experimental results indicate that the proposed algorithms outperform existing algorithms in terms of overall data processing time, instance cost and network delay as the number of application submissions increases. The average processing time and cost are decreased by 12% and 15% respectively, compared with existing solutions.
    Highlights:
    • A novel resource allocation method that ranks resources based on their constraints.
    • A new resource provisioning algorithm for the Fog-Cloud environment.
    • A resource provisioning method to satisfy dynamic user requirements.
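
    As an illustration of the deadline-based ranking and hierarchical provisioning described above, the sketch below (hypothetical resource attributes and ranking rule, not the paper's algorithm) ranks fog resources by estimated completion time and falls back to the Cloud when no fog resource can meet the deadline.

```python
# Hypothetical sketch of deadline-aware ranking: fog resources are ranked by
# estimated completion time; if none can meet the deadline, the task is
# provisioned to the cloud tier. Attribute names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    mips: float            # processing capacity
    network_delay_s: float

def estimated_completion(task_length_mi, res):
    """Completion time = network delay + compute time on this resource."""
    return res.network_delay_s + task_length_mi / res.mips

def allocate(task_length_mi, deadline_s, fog_resources, cloud):
    # Rank fog resources by how quickly each could finish the task.
    ranked = sorted(fog_resources, key=lambda r: estimated_completion(task_length_mi, r))
    for res in ranked:
        if estimated_completion(task_length_mi, res) <= deadline_s:
            return res          # first fog resource that satisfies the deadline
    return cloud                # hierarchical fallback to the cloud tier

fog = [Resource("fog-1", 1000, 0.01), Resource("fog-2", 2500, 0.02)]
cloud = Resource("cloud-vm", 10000, 0.15)
print(allocate(task_length_mi=5000, deadline_s=3.0, fog_resources=fog, cloud=cloud).name)
```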

    A micro-level compensation-based cost model for resource allocation in a fog environment

    No full text
    Fog computing aims to support applications requiring low latency and high scalability by using resources at the edge level. In general, fog computing comprises several autonomous mobile or static devices that share their idle resources to run different services. The providers of these devices also need to be compensated based on their device usage. In any fog-based resource-allocation problem, both cost and performance need to be considered for generating an efficient resource-allocation plan. Estimating the cost of using fog devices prior to the resource allocation helps to minimize the cost and maximize the performance of the system. In the fog computing domain, recent research works have proposed various resource-allocation algorithms without considering the compensation to resource providers and the cost estimation of the fog resources. Moreover, the existing cost models in similar paradigms such as the cloud are not suitable for fog environments, as the scaling of different autonomous resources with heterogeneous and varied offerings is much more complicated. To fill this gap, this study first proposes a micro-level compensation cost model and then proposes a new resource-allocation method based on the cost model, which benefits both providers and users. Experimental results show that the proposed algorithm ensures better resource-allocation performance and lowers application processing costs when compared to the existing best-fit algorithm.
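
    To make the idea of micro-level compensation concrete, the sketch below (hypothetical unit rates, usage metrics and selection rule, not the paper's actual cost model) estimates a per-device cost from fine-grained resource usage and then chooses the cheapest device that can serve a request.

```python
# Hypothetical micro-level compensation sketch: each provider is compensated
# per unit of CPU time, memory and energy actually consumed. Rates, fields and
# the selection rule are illustrative assumptions, not the paper's model.
def usage_cost(cpu_seconds, mem_gb_seconds, energy_j, rates):
    """Micro-level cost: metered usage multiplied by the provider's unit rates."""
    return (cpu_seconds * rates["cpu"]
            + mem_gb_seconds * rates["mem"]
            + energy_j * rates["energy"])

def cheapest_feasible(devices, demand):
    """Pick the device with the lowest estimated cost that can serve the demand."""
    feasible = [d for d in devices if d["free_cpu"] >= demand["cpu_seconds"]]
    if not feasible:
        return None
    return min(feasible, key=lambda d: usage_cost(demand["cpu_seconds"],
                                                  demand["mem_gb_seconds"],
                                                  demand["energy_j"],
                                                  d["rates"]))

devices = [
    {"name": "fog-1", "free_cpu": 40, "rates": {"cpu": 0.002, "mem": 0.001, "energy": 0.0005}},
    {"name": "fog-2", "free_cpu": 80, "rates": {"cpu": 0.001, "mem": 0.002, "energy": 0.0004}},
]
demand = {"cpu_seconds": 30, "mem_gb_seconds": 60, "energy_j": 500}
best = cheapest_feasible(devices, demand)
print(best["name"], round(usage_cost(demand["cpu_seconds"], demand["mem_gb_seconds"],
                                     demand["energy_j"], best["rates"]), 4))
```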

    An Efficient Resource Monitoring Service for Fog Computing Environments

    No full text

    A blockchain-based framework for automatic SLA management in fog computing environments

    No full text
    Fog computing has become a prominent paradigm for providing shared resources to serve different applications near the edge. As in other computing paradigms such as cloud and grid, service-level agreements (SLAs) between fog providers and end-users are essential to guarantee quality of service (QoS). However, due to the unique characteristics of fog resources, which are highly distributed, heterogeneous and dynamic, with nonrestrictive provider participation, the SLA management techniques and frameworks available for Clouds and Grids are not directly applicable; the availability of resources in the cloud is far more controllable and predictable than in the fog. Moreover, because fog infrastructure has multiple owners and operates in an unrestricted environment, autonomous end-devices participate under different SLAs to serve applications near the edge. As a result, there is a lack of trust between the entities, and managing and enforcing SLAs according to application QoS in this environment is a complex task. SLA management must therefore be undertaken in a more trustworthy manner to ensure that agreements are honoured. To fill this gap, this paper proposes an automated SLA management framework for fog computing that utilizes smart contracts and blockchain technology to monitor and enforce SLAs in a more trustworthy manner. Results obtained from experiments conducted on a private blockchain network show that the framework ensures precise and efficient SLA enforcement in the fog and outperforms existing work in terms of transaction cost and time.
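
    As a rough, off-chain illustration of the kind of check such a framework might encode, the sketch below (hypothetical SLA fields, thresholds and penalty rule, not the paper's contract) compares monitored metrics against agreed thresholds and records violations; in the described framework this logic would be enforced by a smart contract on the blockchain rather than by a local Python object.

```python
# Hypothetical sketch of SLA violation checking. Field names, thresholds and
# the penalty rule are illustrative assumptions; in the described framework
# this logic would run inside a smart contract on a blockchain network.
from dataclasses import dataclass, field

@dataclass
class SLA:
    max_latency_ms: float
    min_availability: float        # e.g. 0.99
    penalty_per_violation: float

@dataclass
class SLAMonitor:
    sla: SLA
    violations: list = field(default_factory=list)

    def record(self, latency_ms, availability):
        """Check one monitoring sample against the SLA and log any breach."""
        breached = []
        if latency_ms > self.sla.max_latency_ms:
            breached.append(("latency", latency_ms))
        if availability < self.sla.min_availability:
            breached.append(("availability", availability))
        if breached:
            self.violations.append(breached)

    def total_penalty(self):
        return len(self.violations) * self.sla.penalty_per_violation

monitor = SLAMonitor(SLA(max_latency_ms=50, min_availability=0.99, penalty_per_violation=0.1))
monitor.record(latency_ms=42, availability=0.995)   # within the SLA
monitor.record(latency_ms=80, availability=0.97)    # breaches both terms
print(len(monitor.violations), monitor.total_penalty())
```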
