    Resource Management Techniques in Cloud-Fog for IoT and Mobile Crowdsensing Environments

    The huge and unpredictable volumes of data generated nowadays by smart devices in IoT and mobile crowdsensing applications (sensors, smartphones, Wi-Fi routers) require processing power and storage. Cloud computing provides these capabilities to organizations and customers, but its use exposes several limitations, the most important of which are resource allocation and task scheduling. Resource allocation is the mechanism that assigns virtual machines when multiple applications require various resources such as CPU, I/O and memory, whereas scheduling determines the sequence in which tasks arrive at and depart from the resources in order to maximize efficiency. In this paper we highlight the most relevant difficulties that cloud computing currently faces and present a comprehensive review of resource allocation and scheduling techniques proposed to overcome these limitations. Finally, the reviewed allocation and scheduling techniques and strategies are compared in a table together with their drawbacks.
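
    The survey contrasts allocation (choosing a virtual machine for each application's tasks) with scheduling (ordering the tasks queued on each machine). The sketch below is only a minimal illustration of that distinction, not any technique from the paper; all task and VM attributes (length, mips) are assumed.

    ```python
    # Minimal sketch of allocation vs. scheduling; attributes are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        length: int                 # abstract work units

    @dataclass
    class VM:
        name: str
        mips: int                   # processing capacity per time unit
        queue: list = field(default_factory=list)

    def allocate(tasks, vms):
        """Allocation: pick a VM for each task (here: least-loaded VM)."""
        for t in tasks:
            target = min(vms, key=lambda v: sum(x.length for x in v.queue) / v.mips)
            target.queue.append(t)

    def schedule(vm):
        """Scheduling: order the tasks on one VM (here: shortest job first)."""
        vm.queue.sort(key=lambda t: t.length)
        finish, clock = {}, 0.0
        for t in vm.queue:
            clock += t.length / vm.mips
            finish[t.name] = clock
        return finish

    tasks = [Task("t1", 800), Task("t2", 200), Task("t3", 500)]
    vms = [VM("vm1", 100), VM("vm2", 50)]
    allocate(tasks, vms)
    for vm in vms:
        print(vm.name, schedule(vm))
    ```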

    BIG, MEDIUM AND LITTLE (BML) SCHEDULING IN FOG ENVIRONMENT

    Fog computing has received great attention due to its importance, especially in Internet of Things (IoT) environments where computation at the edge of the network is most desired. Due to the geographical proximity of resources, fog computing exhibits lower latency than the cloud; however, inefficient resource allocation in a fog environment can result in higher delays and degraded performance. Hence, efficient resource scheduling in fog computing is crucial to obtain the true benefits of cloud-like services close to the data generation sources. In this paper, a Big-Medium-Little (BML) scheduling technique is proposed to efficiently allocate fog and cloud resources to incoming IoT jobs. Moreover, cooperative and non-cooperative fog computing environments are also explored. Additionally, a thorough comparative study of existing scheduling techniques in the fog-cloud environment is presented. The technique is rigorously evaluated and shows promising results in terms of makespan, energy consumption, latency and throughput.
    Keywords: Cloud node, Fog node, Max-Min, Min-Min, Big, Medium, Little, Task, Resource, Cooperative and Non-Cooperative Systems
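
    The abstract does not spell out the BML algorithm itself; the sketch below is only one illustrative interpretation, in which incoming jobs are binned into big/medium/little classes by job length and matched to a resource tier. The thresholds and tier mapping are assumptions, not taken from the paper.

    ```python
    # Illustrative sketch (not the paper's algorithm): classify IoT jobs as
    # big/medium/little by length and dispatch each class to a resource tier.
    BIG_THRESHOLD = 10_000      # work units; assumed
    LITTLE_THRESHOLD = 1_000    # work units; assumed

    def classify(job_length):
        if job_length >= BIG_THRESHOLD:
            return "big"
        if job_length <= LITTLE_THRESHOLD:
            return "little"
        return "medium"

    def dispatch(jobs):
        """Assumed mapping: big -> cloud, medium -> fog, little -> edge node."""
        tiers = {"big": "cloud", "medium": "fog", "little": "edge"}
        return [(job, tiers[classify(job)]) for job in jobs]

    print(dispatch([15_000, 5_000, 300]))
    # [(15000, 'cloud'), (5000, 'fog'), (300, 'edge')]
    ```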

    Essentiality of managing the resource information in the coordinated fog-to-cloud paradigm

    This is the peer reviewed version of the following article: Sengupta, S., Garcia, J., Masip-Bruin, X. Essentiality of managing the resource information in the coordinated fog-to-cloud paradigm. Int J Commun Syst. 2019, which has been published in final form at https://doi.org/10.1002/dac.4286. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving. Fog-to-cloud (F2C) computing is an emerging computational platform. By combining the cloud, fog, and IoT, it provides an excellent framework for managing and coordinating the resources in any smart computing domain. Efficient management of these diverse resources is one of the critical tasks in an F2C system. It must also be considered that smart systems offer different types of services, so before managing these resources and enabling the various types of services, it is essential to have a comprehensive informational catalogue of resources and services. Hence, after identifying the resource and service-task taxonomy, our main aim in this paper is to find a solution for properly organizing this information over the F2C system. For that purpose, we propose a modified F2C framework in which all the information is stored in a distributed manner near the edge of the network. Finally, by presenting experimental results, we evaluate and validate the performance of the proposed framework. This work has been supported by the Spanish Ministry of Science, Innovation and Universities and by the European Regional Development Fund (FEDER) under contract RTI2018-094532-B-I00 and by the H2020 European Union mF2C project with reference 730929.
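
    As a rough illustration only of what a distributed resource/service catalogue stored near the edge might look like (the paper's actual taxonomy and schema are not reproduced here), consider the following hypothetical structure; all field names and values are assumptions.

    ```python
    # Hypothetical per-node catalogue entry for an F2C system; fields are assumed.
    catalogue = {
        "node-42": {
            "type": "fog",                         # cloud | fog | iot
            "resources": {"cpu_cores": 4, "ram_mb": 8192, "storage_gb": 64},
            "services": ["video-analytics", "sensor-aggregation"],
            "neighbours": ["node-41", "node-43"],  # for distributed lookups near the edge
        }
    }

    def find_nodes(catalogue, service):
        """Return the nodes advertising a given service."""
        return [n for n, info in catalogue.items() if service in info["services"]]

    print(find_nodes(catalogue, "sensor-aggregation"))  # ['node-42']
    ```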

    An Unsupervised and Non-Invasive Model for Predicting Network Resource Demands

    During the last decade, network providers have faced a growing problem regarding the distribution of bandwidth and computing resources. Recently, the mobile edge computing paradigm was proposed as a possible solution, mainly because it makes it possible to transfer service demands to the edge of the network. This solution relies heavily on the dynamic allocation of resources, depending on user needs and network connection, so it becomes essential to correctly predict user movements and activities. This paper proposes an unsupervised methodology to define meaningful user locations from non-invasive user information, captured by the user terminal with no computing or battery overhead. The data is analyzed through a conjoined clustering algorithm to build a stochastic Markov chain that predicts users' movements and their bandwidth demands. Such a model could be used by network operators to optimize network resource allocation. To evaluate the proposed methodology, we tested it on one of the largest publicly available labeled mobile and sensor datasets, developed by the "CrowdSignals.io" initiative, and we present positive and promising results concerning the prediction capabilities of the model.
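
    The pipeline described above (cluster the location traces, then fit a Markov chain over the discovered places) can be shown in miniature. The sketch below assumes the raw data has already been mapped to cluster labels and only illustrates how a first-order transition matrix would be estimated and used for prediction; the trace is synthetic and the paper's conjoined clustering step is not reproduced.

    ```python
    # Minimal sketch: first-order Markov chain over already-clustered locations.
    from collections import Counter, defaultdict

    trace = ["home", "work", "gym", "home", "work", "home", "work", "gym"]

    def fit_markov(trace):
        counts = defaultdict(Counter)
        for current, nxt in zip(trace, trace[1:]):
            counts[current][nxt] += 1
        # Normalize transition counts into probabilities.
        return {s: {t: c / sum(row.values()) for t, c in row.items()}
                for s, row in counts.items()}

    def predict_next(model, state):
        """Most probable next location given the current one."""
        return max(model[state], key=model[state].get)

    model = fit_markov(trace)
    print(predict_next(model, "work"))   # 'gym' for this synthetic trace
    ```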

    An Overview of the Service Placement Problem in Fog and Edge Computing

    To support the large and varied applications generated by the Internet of Things (IoT), Fog Computing was introduced to complement Cloud Computing and offer Cloud-like services at the edge of the network with low latency and real-time responses. The large scale, geographical distribution and heterogeneity of edge computational nodes make service placement in such infrastructure a challenging issue. The diversity of user expectations and of IoT device characteristics also complicates the deployment problem. This paper presents a survey of current research conducted on the Service Placement Problem (SPP) in Fog/Edge Computing. Based on a new classification scheme, a categorization of current proposals is given, and identified issues and challenges are discussed.
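
    Placement formulations vary widely across the surveyed works; purely as a toy baseline, the sketch below shows a greedy heuristic that places each service on the feasible node (enough CPU) with the lowest latency to its users. All node capacities, service demands and latencies are assumed values, not taken from any surveyed proposal.

    ```python
    # Toy greedy service-placement heuristic; all numbers are assumptions.
    nodes = {
        "edge-1":  {"cpu": 2,  "latency_ms": 5},
        "fog-1":   {"cpu": 8,  "latency_ms": 20},
        "cloud-1": {"cpu": 64, "latency_ms": 80},
    }

    services = [
        {"name": "alarm",     "cpu": 1},
        {"name": "analytics", "cpu": 6},
    ]

    def place(services, nodes):
        placement = {}
        for svc in services:
            feasible = [(n, a) for n, a in nodes.items() if a["cpu"] >= svc["cpu"]]
            node, attrs = min(feasible, key=lambda na: na[1]["latency_ms"])
            attrs["cpu"] -= svc["cpu"]          # consume capacity on the chosen node
            placement[svc["name"]] = node
        return placement

    print(place(services, nodes))  # {'alarm': 'edge-1', 'analytics': 'fog-1'}
    ```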

    Big Data Pipelines on the Computing Continuum: Tapping the Dark Data

    The computing continuum opens new opportunities for managing big data pipelines, in particular for the efficient management of heterogeneous and untrustworthy resources. We discuss the big data pipeline lifecycle on the computing continuum and its associated challenges, and we outline a future research agenda in this area.

    Improving Fog Computing Performance via Fog-2-Fog Collaboration

    In the Internet of Things (IoT) era, a large volume of data is continuously emitted from a plethora of connected devices. The current network paradigm, which relies on centralized data centers (a.k.a. cloud computing), has become inefficient at responding to the IoT latency concern. To address this concern, fog computing allows data processing and storage "close" to IoT devices. However, fog is still not efficient due to the spatial and temporal distribution of these devices, which leads to unbalanced loads across fog nodes. This paper proposes a new Fog-2-Fog (F2F) collaboration model that promotes offloading incoming requests among fog nodes, according to their load and processing capabilities, via a novel load-balancing scheme known as the Fog Resource manAgeMEnt Scheme (FRAMES). A formal mathematical model of F2F and FRAMES has been formulated, and a set of experiments has been carried out demonstrating the technical feasibility of F2F collaboration. The performance of the proposed fog load-balancing model is compared to other load-balancing models.
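
    FRAMES itself is defined by a formal model in the paper; the sketch below only illustrates the general idea of load-aware offloading between fog nodes. The load metric, overload threshold and node capacities are arbitrary assumptions.

    ```python
    # Illustrative load-aware Fog-2-Fog offloading decision (not the FRAMES model).
    OVERLOAD_THRESHOLD = 0.8   # assumed utilization above which a node offloads

    fog_nodes = {
        "fog-A": {"capacity": 100, "load": 95},
        "fog-B": {"capacity": 80,  "load": 20},
        "fog-C": {"capacity": 120, "load": 60},
    }

    def utilization(node):
        return node["load"] / node["capacity"]

    def route_request(origin, cost, nodes):
        """Keep the request locally unless the origin is overloaded; otherwise
        send it to the least-utilized peer."""
        if utilization(nodes[origin]) < OVERLOAD_THRESHOLD:
            target = origin
        else:
            peers = [n for n in nodes if n != origin]
            target = min(peers, key=lambda n: utilization(nodes[n]))
        nodes[target]["load"] += cost
        return target

    print(route_request("fog-A", cost=10, nodes=fog_nodes))  # 'fog-B'
    ```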

    Resource Allocation Framework in Fog Computing for the Internet of Things Environments

    Fog computing plays a pivotal role in the Internet of Things (IoT) ecosystem because of its ability to support delay-sensitive tasks, bringing resources from cloud servers closer to the "ground" and supporting IoT devices that are resource-constrained. Although fog computing offers benefits such as quick response to requests, geo-distributed data processing and data processing in the proximity of the IoT devices, the exponential increase in IoT devices and the large volumes of data being generated have led to a new set of challenges. One such problem is the allocation of resources to IoT tasks to match their computational needs and quality of service (QoS) requirements, whilst meeting both task deadlines and user expectations. Most solutions proposed in existing works suggest task offloading mechanisms in which IoT devices offload their tasks randomly to the fog layer or the cloud layer. This helps to minimize communication delay; however, most tasks end up missing their deadlines because many delays are experienced during offloading. This study proposes and introduces a Resource Allocation Scheduler (RAS) at the IoT-Fog gateway, whose goal is to decide where and when a task is to be offloaded, either to the fog layer or the cloud layer, based on its priority, computational needs and QoS requirements. The aim places this work directly within the communication networks domain, in the transport layer of the Open Systems Interconnection (OSI) model. As such, this study follows the four phases of the top-down approach because of its reusability characteristics. To validate and test the efficiency and effectiveness of the RAS, the fog framework was implemented and evaluated in a simulated smart home setup. The essential metrics used to check whether round-trip time was minimized are the queuing time, offloading time and throughput for QoS. The results showed that the RAS helps to reduce the round-trip time, increases throughput and leads to improved QoS. Furthermore, the approach addressed the starvation problem, a phenomenon that tends to affect low-priority tasks. Most importantly, the results provide evidence that if resource allocation and assignment are done appropriately, round-trip time can be reduced and QoS can be improved in fog computing. The significant contribution of this research is the novel framework which minimizes round-trip time, addresses the starvation problem and improves QoS. Moreover, a literature review paper, regarded by reviewers as the first of its kind concerning QoS in fog computing, was produced.
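
    The RAS decision logic is only described at a high level in the abstract; the sketch below is a hypothetical rendering of a gateway that routes each task to the fog or cloud tier based on its priority and an estimated completion time checked against its deadline. Every rate, latency and task field is assumed for illustration.

    ```python
    # Hypothetical gateway-side offloading decision in the spirit of RAS (assumed values).
    FOG = {"rate": 50, "rtt_ms": 10}       # work units per ms, round-trip latency
    CLOUD = {"rate": 500, "rtt_ms": 120}

    def estimated_completion_ms(task, tier):
        return tier["rtt_ms"] + task["length"] / tier["rate"]

    def decide(task):
        """Return 'fog' or 'cloud' for a task with 'length', 'deadline_ms', 'priority'."""
        fog_ok = estimated_completion_ms(task, FOG) <= task["deadline_ms"]
        if task["priority"] == "high" and fog_ok:
            return "fog"
        if fog_ok and estimated_completion_ms(task, FOG) <= estimated_completion_ms(task, CLOUD):
            return "fog"
        return "cloud"

    print(decide({"length": 500,    "deadline_ms": 30,  "priority": "high"}))  # 'fog'
    print(decide({"length": 50_000, "deadline_ms": 400, "priority": "low"}))   # 'cloud'
    ```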