104 research outputs found

    Single-Board-Computer Clusters for Cloudlet Computing in Internet of Things

    The number of connected sensors and devices is expected to grow to billions in the near future. However, centralised cloud-computing data centres struggle to meet the requirements inherent to Internet of Things (IoT) workloads, such as low latency, high throughput and bandwidth constraints. Edge computing is becoming the standard computing paradigm for latency-sensitive, real-time IoT workloads, since it addresses the aforementioned limitations of centralised cloud-computing models. This paradigm relies on bringing computation close to the source of data, which presents serious operational challenges for large-scale cloud-computing providers. In this work, we present an architecture composed of low-cost Single-Board-Computer clusters deployed near data sources, combined with centralised cloud-computing data centres. The proposed cost-efficient model may be employed as an alternative to fog computing to meet real-time IoT workload requirements while preserving scalability. We include an extensive empirical analysis to assess the suitability of single-board-computer clusters as cost-effective edge-computing micro data centres. Additionally, we compare the proposed architecture with traditional cloudlet and cloud architectures and evaluate them through extensive simulation. We finally show that acquisition costs can be drastically reduced while maintaining performance levels in data-intensive IoT use cases.
    Funding: Ministerio de Economía y Competitividad TIN2017-82113-C2-1-R; Ministerio de Economía y Competitividad RTI2018-098062-A-I00; European Union’s Horizon 2020 No. 754489; Science Foundation Ireland grant 13/RC/209.
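    The paper's cost argument can be illustrated with a back-of-the-envelope comparison. The sketch below contrasts the acquisition cost of a hypothetical SBC cluster with that of a single conventional cloudlet server; all prices, node counts and the cluster_cost helper are illustrative assumptions, not figures from the paper.

```python
# Illustrative acquisition-cost comparison between an SBC-based edge cluster
# and a conventional cloudlet server. All prices and node counts are
# hypothetical placeholders.

def cluster_cost(unit_price, units, extras=0.0):
    """Total acquisition cost for a homogeneous cluster plus shared extras."""
    return unit_price * units + extras

# Hypothetical Raspberry-Pi-class nodes plus switch, power supplies and cases.
sbc_cluster = cluster_cost(unit_price=60.0, units=16, extras=120.0)
# Hypothetical rack server used as a traditional cloudlet.
cloudlet_server = cluster_cost(unit_price=3500.0, units=1)

print(f"SBC cluster:     {sbc_cluster:8.2f}")
print(f"Cloudlet server: {cloudlet_server:8.2f}")
print(f"Cost ratio:      {cloudlet_server / sbc_cluster:.1f}x")
```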

    Modeling the Internet of Things: a simulation perspective

    This paper deals with the problem of properly simulating the Internet of Things (IoT). Simulating the IoT makes it possible to evaluate strategies for deploying smart services over different kinds of territories. However, the heterogeneity of scenarios seriously complicates this task and imposes the use of sophisticated modeling and simulation techniques. We discuss novel approaches for the provision of scalable simulation scenarios that enable the real-time execution of massively populated IoT environments. Attention is given to novel hybrid and multi-level simulation techniques that, when combined with agent-based, adaptive Parallel and Distributed Simulation (PADS) approaches, can provide the means to perform highly detailed simulations on demand. To support this claim, we detail a use case concerned with the simulation of vehicular transportation systems.
    Comment: Proceedings of the IEEE 2017 International Conference on High Performance Computing and Simulation (HPCS 2017).
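    The multi-level idea can be sketched as a loop that simulates every region with a cheap aggregate model and switches only "interesting" regions to a detailed, agent-style model within the same time step. The region structure, threshold and both models below are invented for illustration and are not the paper's actual simulator.

```python
# Minimal two-level simulation loop: cheap coarse model everywhere, expensive
# detailed model only where the coarse state crosses a threshold. All names,
# thresholds and dynamics are illustrative.
import random

def coarse_step(region, dt):
    # Aggregate model: drift the average vehicle density.
    region["density"] = max(0.0, region["density"] + random.uniform(-0.05, 0.05) * dt)

def detailed_step(region, dt):
    # Stand-in for an agent-based micro-simulation of individual vehicles.
    region["density"] *= 1.0 + random.uniform(-0.02, 0.02) * dt

regions = [{"name": f"district-{i}", "density": random.random()} for i in range(8)]
DETAIL_THRESHOLD = 0.7  # switch to the fine-grained model above this density

for step in range(100):
    for region in regions:
        if region["density"] > DETAIL_THRESHOLD:
            detailed_step(region, dt=1.0)   # high-fidelity path
        else:
            coarse_step(region, dt=1.0)     # cheap aggregate path
```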

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology fundamentally transforming the way computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience, delivering resources as services via the Internet. Because the cloud provides a finite pool of virtualized on-demand resources, scheduling them optimally has become an essential and rewarding topic, where a trend of using Evolutionary Computation (EC) algorithms is emerging rapidly. Through an analysis of the cloud computing architecture, this survey first presents a two-level taxonomy of cloud resource scheduling. It then paints a landscape of the scheduling problem and its solutions. According to the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are investigated, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
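    To make the EC-for-scheduling idea concrete, the sketch below evolves a task-to-VM mapping with a simple (1+1) evolution strategy that minimises makespan. It is a generic illustration of evolutionary scheduling, not an algorithm from the survey; task lengths, VM speeds and the iteration budget are arbitrary.

```python
# (1+1) evolution strategy for mapping tasks to VMs so that the makespan
# (finish time of the most loaded VM) is minimised. All values are illustrative.
import random

task_len = [random.randint(10, 100) for _ in range(40)]  # task lengths (MI)
vm_speed = [10, 20, 40]                                  # VM speeds (MIPS)

def makespan(mapping):
    load = [0.0] * len(vm_speed)
    for length, vm in zip(task_len, mapping):
        load[vm] += length / vm_speed[vm]
    return max(load)

# Start from a random schedule and keep mutations that do not worsen it.
best = [random.randrange(len(vm_speed)) for _ in task_len]
for _ in range(5000):
    child = best[:]
    child[random.randrange(len(child))] = random.randrange(len(vm_speed))
    if makespan(child) <= makespan(best):
        best = child

print("best makespan:", round(makespan(best), 2))
```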

    Simulating Energy Efficient Fog Computing

    With increasing demand for computing resources, there is a need to reduce energy consumption in order to keep computer systems sustainable. Current cloud and fog computing architectures need to be improved by designing energy-efficient scheduling and placement algorithms. This thesis describes power efficiency in fog computing and cloud computing. It shows a way to minimize power usage by designing scheduling and placement algorithms that maximize the number of idle hosts, and such algorithms are designed for cloud and fog systems. The algorithms are tested in different simulation scenarios, and the results are compared and analysed. The thesis also contains a brief overview of similar research on this topic.
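    The idle-host-maximising idea amounts to consolidation: pack tasks onto as few hosts as possible so the rest can sleep. The sketch below uses a generic first-fit-decreasing heuristic under invented capacities and demands; it is not the thesis's exact algorithm.

```python
# Consolidating placement: pack tasks onto as few hosts as possible
# (first-fit decreasing) so the remaining hosts stay idle and can sleep.
# Capacities and demands are illustrative.

def place(tasks, hosts):
    """tasks: CPU demands, hosts: CPU capacities.
    Returns (task -> host assignment, set of idle host indices)."""
    free = list(hosts)                       # remaining capacity per host
    assignment = {}
    for t_id, demand in sorted(enumerate(tasks), key=lambda x: -x[1]):
        for h_id, cap in enumerate(free):
            if cap >= demand:                # first host that still fits
                free[h_id] -= demand
                assignment[t_id] = h_id
                break
        else:
            raise RuntimeError(f"task {t_id} does not fit on any host")
    idle = {h for h, cap in enumerate(free) if cap == hosts[h]}
    return assignment, idle

tasks = [300, 150, 700, 200, 100, 400]       # MIPS demands
hosts = [1000, 1000, 1000, 1000]             # MIPS capacities
assignment, idle_hosts = place(tasks, hosts)
print("placement:", assignment)
print("hosts that can sleep:", idle_hosts)
```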

    Disaster Recovery Services in Intercloud using Genetic Algorithm Load Balancer

    The paradigm needs to shift from cloud computing to the intercloud for disaster recovery, since disasters can break out anytime and anywhere. Natural disaster response involves radically high volumes of urgent job requests demanding immediate attention. Under such imbalanced circumstances, the intercloud is the more practical and functional option. Protocols such as quality of service, service-level agreements and disaster recovery pacts need to be discussed and clarified during the initial setup to fast-track the distress scenario. Orchestrating resources in a large-scale distributed system, with multi-objective optimization of resources, minimum energy consumption, maximum throughput, load balancing and minimum carbon footprint altogether, is quite challenging. The intercloud, where resources of different clouds are aligned, plays a crucial role in resource mapping. The objective of this paper is to improve and fast-track the mapping procedures on the cloud platform and to address urgent job requests in a balanced and efficient manner. A genetic-algorithm-based resource allocation is proposed using Pareto-optimal mapping of resources to keep a high utilization rate of processors, high throughput and a low carbon footprint. Decision variables include utilization of processors, throughput, locality cost and real-time deadlines. Simulation results of load balancers using first-in-first-out and the genetic algorithm are compared under similar circumstances.
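    A minimal version of the proposed mapper can be sketched as a genetic algorithm whose fitness combines load imbalance, locality cost and deadline misses, compared against a naive FIFO round-robin baseline. Job sizes, cloud speeds, distances, weights and GA parameters below are illustrative placeholders rather than the paper's actual setup (a weighted sum also stands in here for true Pareto-optimal selection).

```python
# GA-based mapping of urgent jobs to intercloud resources with a weighted
# multi-objective fitness (load imbalance, locality cost, deadline misses),
# compared against a FIFO round-robin baseline. All data are illustrative.
import random

JOBS = [{"size": random.randint(5, 50), "deadline": random.randint(20, 120)}
        for _ in range(30)]
CLOUDS = [{"speed": 10}, {"speed": 20}, {"speed": 15}]
DIST = [[random.randint(1, 10) for _ in CLOUDS] for _ in JOBS]  # locality cost

def cost(mapping):
    finish = [0.0] * len(CLOUDS)
    locality, missed = 0.0, 0
    for j, c in enumerate(mapping):
        finish[c] += JOBS[j]["size"] / CLOUDS[c]["speed"]
        locality += DIST[j][c]
        if finish[c] > JOBS[j]["deadline"]:
            missed += 1
    imbalance = max(finish) - min(finish)
    return 1.0 * imbalance + 0.5 * locality + 10.0 * missed  # lower is better

def fifo_mapping():
    # Baseline: hand jobs to clouds in FIFO round-robin order.
    return [j % len(CLOUDS) for j in range(len(JOBS))]

def ga(pop_size=40, generations=200):
    pop = [[random.randrange(len(CLOUDS)) for _ in JOBS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(JOBS))
            child = a[:cut] + b[cut:]                        # one-point crossover
            if random.random() < 0.2:                        # mutation
                child[random.randrange(len(JOBS))] = random.randrange(len(CLOUDS))
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

print("FIFO cost:", round(cost(fifo_mapping()), 1))
print("GA   cost:", round(cost(ga()), 1))
```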

    DQN-based intelligent controller for multiple edge domains

    Advanced technologies like network function virtualization (NFV) and multi-access edge computing (MEC) have been used to build flexible, highly programmable, and autonomously manageable infrastructures close to the end-users, at the edge of the network. In this vein, the use of single-board computers (SBCs) in commodity clusters has gained attention for deploying virtual network functions (VNFs) due to their low cost, low energy consumption, and easy programmability. This paper deals with the problem of deploying VNFs in a multi-cluster system formed by this kind of node, which is characterized by limited computational and battery capacities. Additionally, existing platforms for orchestrating and managing VNFs do not consider energy levels in their placement decisions and are therefore not optimized for energy-constrained environments. In this regard, this study proposes an intelligent controller as a global allocation mechanism based on deep reinforcement learning (DRL), specifically on deep Q-network (DQN). The conceived mechanism optimizes energy consumption in SBCs by selecting the most suitable nodes across several clusters to deploy event requests, in terms of the nodes’ resources and the events’ demands. A comparison with available allocation algorithms revealed that our solution incurred 28% lower resource costs and reduced energy consumption in the clusters’ computing nodes by 35% while maintaining a high acceptance ratio.
    This work has been supported in part (50%) by the Agencia Estatal de Investigación of Ministerio de Ciencia e Innovación of Spain under projects PID2019-108713RB-C51 & PID2019-108713RB-C52 MCIN/AEI/10.13039/501100011033; and in part (50%) by AI@EDGE H2020-ICT-52-2020 under grant agreement No. 10101592.
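    The placement decision can be pictured as a reinforcement-learning loop in which the state encodes per-node free resources and battery levels plus the incoming request's demand, the action selects a node, and the reward penalises energy cost and rejections. The sketch below is a deliberately simplified, DQN-style stand-in (single PyTorch network, one-step targets, no replay buffer or target network); the environment, energy model and hyper-parameters are invented.

```python
# Simplified DQN-style controller: pick a node for each incoming VNF request
# so as to favour nodes with spare CPU and high battery. Everything here is
# an illustrative toy model, not the paper's controller.
import random
import torch
import torch.nn as nn

N_NODES = 6                      # candidate SBC nodes across all clusters
STATE_DIM = N_NODES * 2 + 1      # per-node free CPU and battery, plus demand

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_NODES))
    def forward(self, x):
        return self.net(x)

def reset_env():
    cpu = [1.0] * N_NODES                                  # normalised free CPU
    batt = [random.uniform(0.3, 1.0) for _ in range(N_NODES)]
    return cpu, batt

def step(cpu, batt, action, demand):
    if cpu[action] < demand:
        return -1.0                                        # request rejected
    cpu[action] -= demand
    energy_cost = demand * (1.5 - batt[action])            # toy energy model
    return 1.0 - energy_cost

qnet, eps = QNet(), 1.0
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

for episode in range(200):
    cpu, batt = reset_env()
    for _ in range(10):                                    # ten requests per episode
        demand = random.uniform(0.1, 0.4)
        state = torch.tensor(cpu + batt + [demand], dtype=torch.float32)
        action = random.randrange(N_NODES) if random.random() < eps \
            else int(qnet(state).argmax())
        reward = step(cpu, batt, action, demand)
        target = qnet(state).detach().clone()
        target[action] = reward                            # one-step target
        loss = nn.functional.mse_loss(qnet(state), target)
        opt.zero_grad(); loss.backward(); opt.step()
    eps = max(0.05, eps * 0.98)                            # decay exploration
```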

    Dynamic congestion management system for cloud service broker

    This is an open access article licensed under a CC-BY-SA license, https://creativecommons.org/licenses/by-sa/4.0/.
    The cloud computing model offers a shared pool of resources and services with diverse models presented to clients through the internet via an on-demand, scalable and dynamic pay-per-use model. Developers have identified the need for an automated system, a cloud service broker (CSB), that can contribute to exploiting the cloud capability, enhancing its functionality, and improving its performance. This research presents a dynamic congestion management (DCM) system that can manage the massive volume of cloud requests while considering the quality required by clients, as regulated by the service-level policy. In addition, this research introduces a forwarding policy that can be utilized to choose high-priority calls coming from cloud service requesters and pass them via the broker to suitable cloud resources. The policy makes use of one of the mechanisms used by Cisco to help manage the congestion that might take place on the broker side. Furthermore, the DCM system helps in provisioning and monitoring the work of the cloud providers throughout job operation. The proposed DCM system was implemented and evaluated using the CloudSim tool.
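    A priority-based forwarding policy of this kind can be pictured as a broker-side priority queue: requests carry a priority derived from their service-level class and the most urgent pending request is always dispatched first. The class names, priority values and helper functions below are illustrative assumptions, not the paper's implementation.

```python
# Broker-side forwarding sketch: requests are queued by service-level priority
# and dispatched highest-priority first, FIFO within a class. Class names and
# priorities are illustrative.
import heapq
import itertools

PRIORITY = {"gold": 0, "silver": 1, "bronze": 2}  # lower value = served first
_counter = itertools.count()                       # FIFO tie-break within a class
queue = []

def submit(request_id, sla_class):
    heapq.heappush(queue, (PRIORITY[sla_class], next(_counter), request_id))

def dispatch():
    """Forward the most urgent pending request to a cloud resource."""
    if not queue:
        return None
    _, _, request_id = heapq.heappop(queue)
    return request_id

submit("req-42", "bronze")
submit("req-43", "gold")
submit("req-44", "silver")
print([dispatch() for _ in range(3)])  # -> ['req-43', 'req-44', 'req-42']
```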