
    Multiple Linear Regression-Based Energy-Aware Resource Allocation in the Fog Computing Environment

    Fog computing is a promising computing paradigm for time-sensitive Internet of Things (IoT) applications. It processes data close to users, delivering faster outcomes than the Cloud and reducing network traffic. The Fog computing environment is highly dynamic and most Fog devices are battery powered, so the chance of application failure is high, which delays application outcomes. On the other hand, rerunning a failed application on other devices does not comply with time-sensitiveness. To solve this problem, applications must be run in an energy-efficient manner, which is challenging given the dynamic nature of the Fog environment: applications should be scheduled so that they do not fail due to unavailability of energy. In this paper, we propose a multiple linear regression-based resource allocation mechanism that runs applications in an energy-aware manner in the Fog computing environment to minimise failures due to energy constraints. Prior works lack energy-aware application execution that considers the dynamism of the Fog environment; hence, we propose a multiple linear regression-based approach that achieves this objective. We present a sustainable energy-aware framework and algorithm that execute applications in the Fog environment in an energy-aware manner. The trade-off between energy-efficient allocation and application execution time is investigated and shown to have a minimal negative impact on the system. Compared with existing approaches, our proposed method reduces delay and processing by 20% and 17%, respectively, and SLA violations decrease by 57% for the proposed energy-aware allocation. Comment: 8 pages, 9 figures.
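
    To make the regression-based allocation idea concrete, the sketch below fits a multiple linear regression that predicts a task's energy cost on a fog node from hypothetical features (CPU load, task length, network transfer; none of these are taken from the paper) and then allocates the task to a node whose battery can absorb the predicted cost. It is a minimal illustration under assumed feature and threshold choices, not the paper's actual model.

    # Minimal sketch (not the paper's exact model): predict per-task energy cost
    # on a fog node with multiple linear regression, then allocate the task to a
    # node whose predicted post-execution battery stays above a safety reserve.
    import numpy as np

    # Hypothetical training data: [cpu_load, task_size_mi, net_kb] -> energy (J)
    X = np.array([
        [0.2, 500,  40],
        [0.5, 800,  60],
        [0.7, 1200, 90],
        [0.9, 1500, 120],
    ], dtype=float)
    y = np.array([12.0, 21.0, 33.0, 45.0])

    # Fit coefficients with an intercept term via ordinary least squares.
    X1 = np.hstack([np.ones((len(X), 1)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

    def predicted_energy(cpu_load, task_size_mi, net_kb):
        return float(coef @ np.array([1.0, cpu_load, task_size_mi, net_kb]))

    def allocate(task, nodes, reserve_j=10.0):
        """Return the node expected to run the task without draining its battery."""
        feasible = []
        for node in nodes:
            cost = predicted_energy(node["cpu_load"], task["size_mi"], task["net_kb"])
            if node["battery_j"] - cost >= reserve_j:
                feasible.append((cost, node["name"]))
        return min(feasible)[1] if feasible else None  # None: offload elsewhere

    nodes = [
        {"name": "fog-1", "cpu_load": 0.4, "battery_j": 60.0},
        {"name": "fog-2", "cpu_load": 0.8, "battery_j": 30.0},
    ]
    print(allocate({"size_mi": 900, "net_kb": 70}, nodes))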

    Resource Management Techniques in Cloud-Fog for IoT and Mobile Crowdsensing Environments

    The unpredictable and huge volume of data generated nowadays by smart devices in IoT and mobile crowd sensing applications (sensors, smartphones, Wi-Fi routers) requires processing power and storage. The Cloud provides these capabilities to serve organizations and customers, but its use brings some limitations, the most important of which are resource allocation and task scheduling. Resource allocation is the mechanism that assigns virtual machines when multiple applications require various resources such as CPU, I/O, and memory, whereas scheduling is the process of determining the sequence in which tasks arrive at and depart from the resources in order to maximize efficiency. In this paper we highlight the most relevant difficulties that cloud computing currently faces and present a comprehensive review of resource allocation and scheduling techniques that address these limitations. Finally, the surveyed allocation and scheduling techniques and strategies are compared in a table together with their drawbacks.
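
    Purely as an illustration of the two concepts the review distinguishes (it is not a technique proposed in the paper), the following sketch separates a toy allocation step, which finds a VM with enough free CPU and memory, from a toy scheduling step, which orders queued tasks shortest-job-first before allocating them; all names and capacities are invented.

    # Toy separation of resource allocation (fit a task onto a VM) from task
    # scheduling (decide the order in which tasks are allocated).
    from dataclasses import dataclass

    @dataclass
    class VM:
        name: str
        cpu: int      # free vCPUs
        mem_mb: int   # free memory (MB)

    @dataclass
    class Task:
        name: str
        cpu: int
        mem_mb: int
        length: int   # estimated runtime, used only for ordering

    def allocate(task, vms):
        """Resource allocation: first VM whose free CPU and memory fit the task."""
        for vm in vms:
            if vm.cpu >= task.cpu and vm.mem_mb >= task.mem_mb:
                vm.cpu -= task.cpu
                vm.mem_mb -= task.mem_mb
                return vm.name
        return None  # no capacity left; a real system would queue or offload

    def schedule(tasks, vms):
        """Task scheduling: shortest-job-first ordering, then allocation."""
        return [(t.name, allocate(t, vms)) for t in sorted(tasks, key=lambda t: t.length)]

    vms = [VM("edge-vm1", cpu=2, mem_mb=2048), VM("edge-vm2", cpu=4, mem_mb=4096)]
    tasks = [Task("sense", 1, 512, 5), Task("aggregate", 2, 1024, 20), Task("train", 4, 4096, 60)]
    print(schedule(tasks, vms))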

    Overview of the Service Placement Problem in Fog and Edge Computing

    To support the large and varied applications generated by the Internet of Things (IoT), Fog Computing was introduced to complement Cloud Computing and offer Cloud-like services at the edge of the network with low latency and real-time responses. The large scale, geographical distribution, and heterogeneity of edge computational nodes make service placement in such an infrastructure a challenging issue. The diversity of user expectations and of IoT device characteristics further complicates the deployment problem. This paper presents a survey of current research on the Service Placement Problem (SPP) in Fog/Edge Computing. Based on a new classification scheme, current proposals are categorized, and the identified issues and challenges are discussed.
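
    As a concrete, if simplified, instance of the placement problem the survey classifies (the heuristic itself is hypothetical and not one of the surveyed proposals), the sketch below greedily maps services with latency requirements onto heterogeneous edge nodes of limited capacity, falling back to the Cloud when no edge node qualifies.

    # Hypothetical greedy service placement: tightest latency requirements first,
    # each service goes to the lowest-latency node that still has capacity.
    def place_services(services, nodes):
        placement = {}
        for svc in sorted(services, key=lambda s: s["latency_req_ms"]):
            candidates = [
                n for n in nodes
                if n["free_cpu"] >= svc["cpu"] and n["latency_ms"] <= svc["latency_req_ms"]
            ]
            if candidates:
                best = min(candidates, key=lambda n: n["latency_ms"])
                best["free_cpu"] -= svc["cpu"]
                placement[svc["name"]] = best["name"]
            else:
                placement[svc["name"]] = "cloud"  # acceptable only if the SLA tolerates it
        return placement

    nodes = [
        {"name": "edge-a", "free_cpu": 4, "latency_ms": 5},
        {"name": "edge-b", "free_cpu": 2, "latency_ms": 15},
    ]
    services = [
        {"name": "video-analytics", "cpu": 3, "latency_req_ms": 10},
        {"name": "telemetry", "cpu": 1, "latency_req_ms": 50},
        {"name": "batch-report", "cpu": 4, "latency_req_ms": 500},
    ]
    print(place_services(services, nodes))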

    Service level agreement specification for IoT application workflow activity deployment, configuration and monitoring

    PhD Thesis. Currently, we see the use of the Internet of Things (IoT) in various domains such as healthcare, smart homes, smart cars, smart-x applications, and smart cities. The number of applications based on IoT and cloud computing is projected to increase rapidly over the next few years. IoT-based services must meet guaranteed levels of quality of service (QoS) to match users' expectations. Ensuring QoS by specifying QoS constraints in service level agreements (SLAs) is crucial. Because of the potentially highly complex nature of multi-layered IoT applications, lifecycle management (deployment, dynamic reconfiguration, and monitoring) also needs to be automated; to achieve this, it is essential to be able to specify SLAs in a machine-readable format. Currently available SLA specification languages cannot accommodate the unique characteristics of the IoT domain, in particular the interdependency of its multiple layers. Therefore, in this research, we propose a grammar for the syntactical structure of an SLA specification for IoT. The grammar is based on a proposed conceptual model that covers the main concepts needed to express the requirements for the most common hardware and software components of an IoT application on an end-to-end basis. We follow the Goal Question Metric (GQM) approach to evaluate the generality and expressiveness of the proposed grammar, reviewing its concepts and their predefined vocabularies against two use-cases with a number of participants whose research interests are mainly related to IoT. The results of the analysis show that the proposed grammar achieved 91.70% of its generality goal and 93.43% of its expressiveness goal.

    To enhance the process of specifying SLA terms, we then developed a toolkit for creating SLA specifications for IoT applications, which simplifies the process of capturing the requirements of IoT applications. We demonstrate the effectiveness of the toolkit using a remote health monitoring service (RHMS) use-case and evaluate it with a questionnaire-based user experience measure. We discuss the applicability of our tool by including it as a core component of two different applications: 1) a context-aware recommender system for IoT configuration across layers; and 2) a tool that automatically translates an SLA from JSON to a smart contract and deploys it on different peer nodes representing the contractual parties, where the smart contract monitors the created SLA using Blockchain technology. These two applications are used within our proposed SLA management framework for IoT. Furthermore, we propose a greedy heuristic algorithm that decentralizes the workflow activities of an IoT application across Edge and Cloud resources to improve response time, cost, energy consumption, and network usage. We evaluated the efficiency of our proposed approach using the iFogSim simulator. The performance analysis shows that the proposed algorithm minimized cost, execution time, network usage, and Cloud energy consumption compared with Cloud-only and edge-ward placement approaches.
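
    To illustrate what an end-to-end, machine-readable IoT SLA might look like in practice, the snippet below builds a hypothetical JSON fragment with per-layer terms for an RHMS-style workflow like the use-case named in the abstract and runs a toy latency check against it. The field names and values are invented for illustration and are not the vocabulary defined by the thesis's grammar.

    # Hypothetical, illustrative SLA fragment: end-to-end terms spanning device,
    # edge, and cloud layers for a remote-health-monitoring workflow.
    import json

    sla = {
        "application": "remote-health-monitoring",
        "workflow_activities": ["ingest-vitals", "detect-anomaly", "notify-clinician"],
        "layers": {
            "device": {"sampling_rate_hz": 1, "battery_min_percent": 20},
            "edge":   {"activity": "detect-anomaly", "max_latency_ms": 200},
            "cloud":  {"activity": "notify-clinician", "availability_percent": 99.9},
        },
        "penalties": {"latency_violation": "credit_5_percent"},
    }

    def violates(sla_doc, observed_latency_ms):
        """Toy monitoring check against the edge-layer latency term."""
        return observed_latency_ms > sla_doc["layers"]["edge"]["max_latency_ms"]

    print(json.dumps(sla, indent=2))
    print("violation:", violates(sla, observed_latency_ms=350))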