26 research outputs found

    HSO: A Hybrid Swarm Optimization Algorithm for Reducing Energy Consumption in the Cloudlets

    Mobile Cloud Computing (MCC) is an emerging technology for improving mobile service quality. MCC resources are dynamically allocated to users, who pay for them according to their needs. The drawbacks of this process are that it is prone to failure and demands high energy input. Resource providers focus mainly on resource performance and utilization, with particular attention to service level agreement (SLA) constraints. Resource performance can be achieved through virtualization techniques, which facilitate the sharing of resource providers’ information between different virtual machines. To address these issues, this study presents a novel hybrid swarm optimization algorithm (HSO) for energy-efficient resource management in the cloud. The proposed method uses a cost- and runtime-effective model to find a minimum-energy configuration of the cloud compute nodes while guaranteeing that all minimum performance requirements are maintained. The cost functions cover energy, performance, and reliability concerns. With the proposed model, the performance of the hybrid swarm algorithm increased significantly when the number of tasks was optimized in simulation: power consumption was reduced by 42%. The simulation studies also showed a reduction of about 20% in the number of required calculations compared to the traditional static approach, as well as a decrease in node loss, which allowed the optimization algorithm to impose minimal overhead on cloud compute resources while still saving energy significantly. In conclusion, this study presents an energy-aware optimization model that describes the required system constraints, together with proposed techniques for determining the best overall solution.
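
    The abstract does not spell out HSO's formulation, but the core idea of searching for a minimum-energy set of active compute nodes under a capacity constraint can be illustrated with a small particle-swarm-style sketch in Python. The node capacities, power figures, penalty weight, and function names below are assumptions made for illustration, not the authors' actual model.

    # Illustrative sketch (not the paper's actual HSO): a particle-swarm-style
    # search over which compute nodes stay active, so that total capacity still
    # covers the workload while the power drawn by active nodes is minimized.
    import random

    NODES = [  # (capacity in tasks, power draw in watts); values are made up
        (40, 120), (40, 120), (60, 160), (80, 200), (100, 260),
    ]
    WORKLOAD = 150        # total tasks that must be placed
    PENALTY = 10_000      # cost added per unit of unserved demand

    def cost(active):
        """Composite cost: power of active nodes plus a penalty for unmet demand."""
        capacity = sum(c for (c, _), on in zip(NODES, active) if on)
        power = sum(p for (_, p), on in zip(NODES, active) if on)
        return power + PENALTY * max(0, WORKLOAD - capacity)

    def swarm_search(n_particles=20, iters=200):
        dim = len(NODES)
        sample = lambda pos: [random.random() < x for x in pos]
        # Each particle's position is a vector of per-node activation probabilities.
        pos = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest, pbest_cost = [p[:] for p in pos], [float("inf")] * n_particles
        gbest, best_config, best_cost = pos[0][:], sample(pos[0]), float("inf")
        for _ in range(iters):
            for i in range(n_particles):
                config = sample(pos[i])
                c = cost(config)
                if c < pbest_cost[i]:
                    pbest[i], pbest_cost[i] = pos[i][:], c
                if c < best_cost:
                    gbest, best_config, best_cost = pos[i][:], config, c
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (0.7 * vel[i][d]
                                 + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                                 + 1.5 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
        return best_config, best_cost

    active, best = swarm_search()
    print("active nodes:", active, "-> cost:", best)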

    Hybrid Workload Enabled and Secure Healthcare Monitoring Sensing Framework in Distributed Fog-Cloud Network

    Internet of Medical Things (IoMT) workflow applications have been growing rapidly in practice. These internet-based applications can run on a distributed healthcare sensing system that combines mobile computing, edge computing, and cloud computing. Offloading and scheduling are required in such a distributed network. However, security is a concern, and it is hard to run the different types of tasks in IoMT applications (e.g., security, delay-sensitive, and delay-tolerant tasks) on heterogeneous computing nodes. This work proposes a new healthcare architecture for workflow applications built on three heterogeneous computing layers: an application layer, a management layer, and a resource layer. The goal is to minimize the makespan of all applications. Based on these layers, the work proposes a secure offloading-efficient task scheduling (SEOS) algorithm framework, which includes a deadline division method, task sequencing rules, a homomorphic security scheme, initial scheduling, and a variable neighbourhood search method. The performance evaluation results show that the proposed plans outperform all existing baseline approaches for healthcare applications in terms of makespan.
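
    The abstract lists deadline division, task sequencing, and initial scheduling as components of SEOS; the sketch below shows one common way such pieces fit together (sub-deadlines proportional to each task's share of the critical path, tasks ordered by sub-deadline, each task assigned to the heterogeneous node that finishes it earliest). The workflow, node speeds, and names are assumptions for illustration, not the paper's actual algorithm.

    # Illustrative sketch (not the paper's SEOS): deadline division, task
    # sequencing, and earliest-finish-time assignment for a small workflow DAG
    # on heterogeneous nodes. All numbers and names are made up.

    WORK = {"t1": 4.0, "t2": 3.0, "t3": 2.0, "t4": 5.0}     # task workloads
    DEPS = {"t1": [], "t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}
    SPEED = {"edge": 1.0, "fog": 2.0, "cloud": 4.0}          # node speeds
    DEADLINE = 6.0                                           # workflow deadline

    def path_length(task):
        """Longest chain of work ending at `task` (on a unit-speed node)."""
        return WORK[task] + max((path_length(p) for p in DEPS[task]), default=0.0)

    # Deadline division: each task's sub-deadline is its share of the critical path.
    critical = max(path_length(t) for t in WORK)
    sub_deadline = {t: DEADLINE * path_length(t) / critical for t in WORK}

    # Task sequencing: a topologically consistent order by earliest sub-deadline.
    order = sorted(WORK, key=lambda t: sub_deadline[t])

    # Initial scheduling: place each task on the node that finishes it earliest.
    node_free = {n: 0.0 for n in SPEED}
    finish = {}
    for t in order:
        ready = max((finish[p] for p in DEPS[t]), default=0.0)
        best = min(SPEED, key=lambda n: max(node_free[n], ready) + WORK[t] / SPEED[n])
        start = max(node_free[best], ready)
        finish[t] = start + WORK[t] / SPEED[best]
        node_free[best] = finish[t]
        print(f"{t} -> {best}: start {start:.2f}, finish {finish[t]:.2f}, "
              f"sub-deadline {sub_deadline[t]:.2f}")

    print("makespan:", max(finish.values()))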

    Edge/Fog Computing Technologies for IoT Infrastructure

    The prevalence of smart devices and cloud computing has led to an explosion in the amount of data generated by IoT devices. Moreover, emerging IoT applications, such as augmented and virtual reality (AR/VR), intelligent transportation systems, and smart factories, require ultra-low latency for data communication and processing. Fog/edge computing is a new computing paradigm in which fully distributed fog/edge nodes located near end devices provide computing resources. By analyzing, filtering, and processing data at local fog/edge resources instead of transferring enormous volumes of data to centralized cloud servers, fog/edge computing can significantly reduce processing delay and network traffic. With these advantages, fog/edge computing is expected to be one of the key enabling technologies for building IoT infrastructure. To explore recent research and development on fog/edge computing technologies for building an IoT infrastructure, this book collects ten articles. The selected articles cover diverse topics such as resource management, service provisioning, task offloading and scheduling, container orchestration, and security on edge/fog computing infrastructure, and can help readers grasp recent trends as well as state-of-the-art algorithms in fog/edge computing.
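
    As a rough illustration of why processing at nearby fog/edge nodes cuts delay and backhaul traffic, the toy model below compares shipping raw sensor data to a distant cloud against filtering it at a local edge node first. All data sizes, bandwidths, and processing rates are invented for the example.

    # Toy comparison of cloud-only vs edge-assisted processing of one sensor batch.
    # All numbers (data sizes, bandwidths, processing rates) are made up.

    RAW_MB = 50.0              # raw data produced per batch
    FILTERED_MB = 5.0          # data left after filtering/aggregation at the edge
    EDGE_BW = 50.0             # device -> edge bandwidth, MB/s
    WAN_BW = 10.0              # edge/device -> cloud bandwidth, MB/s
    EDGE_PROC = 25.0           # edge processing rate, MB/s
    CLOUD_PROC = 100.0         # cloud processing rate, MB/s

    # Cloud-only: ship everything over the WAN, then process it there.
    cloud_only = RAW_MB / WAN_BW + RAW_MB / CLOUD_PROC

    # Edge-assisted: process locally, forward only the filtered summary.
    edge_assisted = RAW_MB / EDGE_BW + RAW_MB / EDGE_PROC + FILTERED_MB / WAN_BW

    print(f"cloud-only latency:    {cloud_only:.2f} s, WAN traffic: {RAW_MB} MB")
    print(f"edge-assisted latency: {edge_assisted:.2f} s, WAN traffic: {FILTERED_MB} MB")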

    An Overview of the Service Placement Problem in Fog and Edge Computing

    To support the large and varied set of applications generated by the Internet of Things (IoT), Fog Computing was introduced to complement Cloud Computing and to offer Cloud-like services at the edge of the network, with low latency and real-time responses. The large scale, geographical distribution, and heterogeneity of edge computational nodes make service placement in such an infrastructure a challenging issue. The diversity of user expectations and of IoT device characteristics further complicates the deployment problem. This paper presents a survey of current research on the Service Placement Problem (SPP) in Fog/Edge Computing. Based on a new classification scheme, current proposals are categorized, and the identified issues and challenges are discussed.
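
    The survey itself proposes no algorithm, but the Service Placement Problem it classifies can be made concrete with a small greedy sketch: each service requests resources and a latency bound, and is placed on the lowest-latency node that still has capacity, with the cloud as a fallback. Node names, capacities, and latencies are invented for illustration and are not taken from the surveyed work.

    # Illustrative greedy service placement sketch (not from the surveyed papers).
    # Each service goes to the lowest-latency node with enough free capacity that
    # also meets the service's latency bound; the cloud acts as a large fallback.

    NODES = {                       # name: (cpu capacity, latency to users in ms)
        "edge-1": (4, 5),
        "edge-2": (8, 10),
        "fog-1":  (16, 25),
        "cloud":  (10**6, 120),
    }
    SERVICES = [                    # (name, cpu demand, latency bound in ms)
        ("video-analytics", 6, 30),
        ("alarm-detection", 2, 10),
        ("ar-renderer", 4, 15),
        ("batch-report", 8, 500),
    ]

    free = {n: cap for n, (cap, _) in NODES.items()}

    def place(name, cpu, bound):
        """Pick the lowest-latency node that satisfies capacity and latency bound."""
        candidates = [n for n, (cap, lat) in NODES.items()
                      if free[n] >= cpu and lat <= bound]
        if not candidates:
            return None
        best = min(candidates, key=lambda n: NODES[n][1])
        free[best] -= cpu
        return best

    for name, cpu, bound in SERVICES:
        node = place(name, cpu, bound)
        print(f"{name:16s} -> {node or 'rejected'}")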
