
    Design and Analysis of an Optimized Scheduling Approach using Decision Making over IoT (TOPSI) for Relay based Routing Protocols

    This research work focuses on QoS support over IoT, using computational models based on scheduling schemes to enable service-oriented systems. The IoT connects day-to-day physical tasks with virtual objects, creating opportunities to integrate the physical world into computer-based systems. The proposed QoS scheduling model, TOPSI, implements a top-down decision-making process across the interconnected layers, using service-supportive optimization algorithms driven by the QoS requirements demanded by applications. TOPSI adopts a Markov Decision Process (MDP) at the three layers from the transport layer to the application layer, identifies the QoS-supportive metrics for IoT, and maximizes service quality at the network layer. The connection cost over multiple sessions is stochastic in nature, as service support is governed by decision-making algorithms. TOPSI uses the QoS attributes adopted in traditional QoS mechanisms, based on the transmission of sensor data and on decision making driven by sensing ability. The model defines and measures the QoS metrics of the IoT network through an adaptive monitoring module at the transport layer for the service in use, and shows optimized throughput for variable load, number of sessions and observed delay. Before the optimal route between source and destination is selected, TOPSI performs route identification, route binding, update and deletion based on the validation of the adaptive QoS metrics. This research work surveys and analyzes the performance of the TOPSI and RBL schemes. The simulation test beds and scenario mapping are carried out using the Cooja network simulator.
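
    The abstract does not give TOPSI's actual MDP formulation, so the following is only a minimal, hypothetical sketch of how QoS-driven route selection over relay nodes could be cast as an MDP: states are relay nodes, actions are next-hop choices, and the reward mixes assumed delay and throughput metrics. The topology, weights and discount factor are illustrative placeholders, not values from the paper.

    # Hypothetical sketch of MDP-style route selection in the spirit of TOPSI.
    # States are relay nodes, actions are next-hop choices, and the reward mixes
    # assumed QoS metrics (delay, throughput); names and weights are illustrative.

    # Directed relay topology: node -> {next_hop: (expected_delay_ms, expected_throughput_kbps)}
    TOPOLOGY = {
        "S":  {"R1": (12.0, 240.0), "R2": (20.0, 310.0)},
        "R1": {"R3": (15.0, 200.0), "D":  (30.0, 180.0)},
        "R2": {"R3": (10.0, 260.0)},
        "R3": {"D":  (8.0,  300.0)},
        "D":  {},
    }

    GAMMA = 0.9          # discount factor for multi-hop reward accumulation
    DELAY_WEIGHT = 0.6   # illustrative trade-off between delay and throughput
    THROUGHPUT_WEIGHT = 0.4


    def qos_reward(delay_ms, throughput_kbps):
        """Higher reward for low delay and high throughput (illustrative scaling)."""
        return -DELAY_WEIGHT * delay_ms + THROUGHPUT_WEIGHT * (throughput_kbps / 10.0)


    def value_iteration(topology, destination, sweeps=100):
        """Compute a state value for every relay node via standard value iteration."""
        values = {node: 0.0 for node in topology}
        for _ in range(sweeps):
            for node, links in topology.items():
                if node == destination or not links:
                    continue
                values[node] = max(
                    qos_reward(d, t) + GAMMA * values[nxt]
                    for nxt, (d, t) in links.items()
                )
        return values


    def best_route(topology, source, destination):
        """Greedy next-hop selection with respect to the converged value function."""
        values = value_iteration(topology, destination)
        route, node = [source], source
        while node != destination:
            node = max(
                topology[node],
                key=lambda nxt: qos_reward(*topology[node][nxt]) + GAMMA * values[nxt],
            )
            route.append(node)
        return route


    if __name__ == "__main__":
        print(best_route(TOPOLOGY, "S", "D"))   # e.g. ['S', 'R2', 'R3', 'D']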

    Scheduling the Execution of Tasks at the Edge

    The Internet of Things provides a huge infrastructure in which numerous devices produce, collect and process data. These data are the basis for offering analytics that support novel applications. Processing huge volumes of data is demanding, thus the power of the Cloud is already being utilized. However, latency, privacy and the drawbacks of this centralized approach motivated the emergence of edge computing, where data can be processed at the edge of the network, i.e., at the IoT nodes, to deliver immediate results. Due to the limited resources of IoT nodes, it is not possible to execute a high number of demanding tasks locally to support applications. In this paper, we propose a scheme for selecting the most significant tasks to be executed at the edge while the remaining ones are transferred to the Cloud. Our distributed scheme focuses on mobile IoT nodes and provides a decision-making mechanism and an optimization module for determining the tasks that will be executed locally. We take into consideration multiple characteristics of tasks and optimize the final decision. With our mechanism, IoT nodes can adapt to possibly unknown environments by evolving their decision making. We evaluate the proposed scheme through a high number of simulations and report numerical results.
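
    As a rough illustration of the edge/Cloud split described above, the sketch below ranks tasks by a weighted significance score over a few assumed characteristics (deadline, data locality) and keeps the top-ranked tasks locally until a CPU budget is exhausted; everything else is offloaded. The fields, weights and greedy strategy are assumptions, not the paper's actual decision-making mechanism or optimization module.

    # Illustrative sketch of a local-vs-Cloud split: tasks are ranked by a
    # weighted significance score and executed locally until an assumed resource
    # budget is exhausted. Field names and weights are hypothetical.

    from dataclasses import dataclass


    @dataclass
    class Task:
        name: str
        deadline_s: float      # tighter deadlines favour local execution
        data_locality: float   # fraction of required data already on the node (0..1)
        cpu_demand: float      # abstract CPU units needed


    def significance(task: Task) -> float:
        """Weighted score over assumed task characteristics (weights are illustrative)."""
        urgency = 1.0 / max(task.deadline_s, 0.1)
        return 0.5 * urgency + 0.5 * task.data_locality


    def split_tasks(tasks, cpu_budget):
        """Greedily keep the most significant tasks locally; offload the rest."""
        local, cloud, used = [], [], 0.0
        for task in sorted(tasks, key=significance, reverse=True):
            if used + task.cpu_demand <= cpu_budget:
                local.append(task)
                used += task.cpu_demand
            else:
                cloud.append(task)
        return local, cloud


    if __name__ == "__main__":
        tasks = [
            Task("anomaly_detection", deadline_s=0.5, data_locality=0.9, cpu_demand=2.0),
            Task("daily_report", deadline_s=3600, data_locality=0.2, cpu_demand=1.0),
            Task("stream_filtering", deadline_s=1.0, data_locality=0.7, cpu_demand=1.5),
        ]
        local, cloud = split_tasks(tasks, cpu_budget=3.0)
        print("local:", [t.name for t in local])
        print("cloud:", [t.name for t in cloud])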

    In-Network Decision Making Intelligence for Task Allocation in Edge Computing

    Huge volumes of contextual data are produced by sensing and computing devices (nodes) in distributed computing environments supporting inferential/predictive analytics. Nodes locally process and execute analytics tasks over contextual data. Demanding inferential analytics are crucial for supporting local real-time applications; however, they deplete nodes' resources. We contribute a distributed methodology that pushes the task allocation decision to the network edge by intelligently scheduling and distributing analytics tasks among nodes. Each node autonomously decides whether a task is conditionally executed locally, in networked neighboring nodes, or delegated to the Cloud, based on the nodes' current context and the statistical relevance of their data. We comprehensively evaluate our methodology, demonstrating its applicability in edge computing environments.
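
    The following is a hedged sketch of the three-way allocation decision outlined above: a node checks the statistical relevance of its local data and its own load, then either executes the task locally, forwards it to the least-loaded neighboring node, or delegates it to the Cloud. Thresholds and inputs are illustrative assumptions rather than the methodology's actual criteria.

    # Hedged sketch of a three-way task allocation decision: each node checks its
    # own load and an assumed data-relevance score before deciding to run a task
    # locally, push it to a networked neighbour, or delegate it to the Cloud.
    # Thresholds and field names are assumptions, not the paper's parameters.

    def allocate(task_relevance, local_load, neighbour_loads,
                 relevance_threshold=0.6, load_threshold=0.8):
        """Return 'local', a neighbour id, or 'cloud' for a single analytics task.

        task_relevance  -- statistical relevance of the node's local data to the task (0..1)
        local_load      -- current utilisation of this node (0..1)
        neighbour_loads -- mapping of neighbour id -> utilisation (0..1)
        """
        # Relevant data plus spare capacity: execute on the spot.
        if task_relevance >= relevance_threshold and local_load < load_threshold:
            return "local"

        # Otherwise try the least-loaded neighbour that still has headroom.
        if neighbour_loads:
            best = min(neighbour_loads, key=neighbour_loads.get)
            if neighbour_loads[best] < load_threshold:
                return best

        # Fall back to the Cloud when the edge neighbourhood is saturated.
        return "cloud"


    if __name__ == "__main__":
        print(allocate(0.9, 0.4, {"n1": 0.7, "n2": 0.95}))   # -> local
        print(allocate(0.3, 0.9, {"n1": 0.5, "n2": 0.95}))   # -> n1
        print(allocate(0.3, 0.9, {"n1": 0.9, "n2": 0.95}))   # -> cloud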

    An Intelligent Edge-Centric Queries Allocation Scheme based on Ensemble Models

    The combination of the Internet of Things (IoT) and Edge Computing (EC) can assist in the delivery of novel applications that facilitate end users' activities. Data collected by the numerous devices present in the IoT infrastructure can be hosted in a set of EC nodes, becoming the subject of processing tasks for the provision of analytics. Analytics are derived as the result of various queries defined by end users or applications. Such queries can be executed in the available EC nodes to limit the latency of responses. In this paper, we propose a meta-ensemble learning scheme that supports the decision making for allocating queries to the appropriate EC nodes. Our learning model decides based on the queries' and nodes' characteristics. We describe a matching process between queries and nodes after deriving the contextual information for each characteristic adopted in our meta-ensemble scheme. We rely on widely known ensemble models, combine them, and offer an additional processing layer to increase performance. The aim is to deliver a subset of EC nodes that will host each incoming query. Apart from the description of the proposed model, we report on its evaluation and the corresponding results. Through a large set of experiments and a numerical analysis, we reveal the pros and cons of the proposed scheme.
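
    To make the meta-ensemble idea concrete, the sketch below uses scikit-learn's StackingClassifier as a stand-in: two widely known ensemble models score (query, node) feature pairs, a logistic-regression meta-learner acts as the additional processing layer, and the top-k highest-scoring EC nodes are selected to host the query. The feature layout, synthetic labels and choice of library are assumptions for illustration only.

    # Minimal stacking sketch for query-to-node matching: base ensembles score
    # (query, node) feature pairs, a meta-learner combines their outputs, and the
    # top-k highest-scoring EC nodes host the query. Feature layout, labels and
    # the use of scikit-learn are illustrative assumptions.

    import numpy as np
    from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                                  StackingClassifier)
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic training set: each row concatenates query features (complexity,
    # selectivity) with node features (load, data overlap); label = 1 if that node
    # is assumed to have served a similar query within the latency target before.
    X = rng.random((400, 4))
    y = ((X[:, 3] > 0.5) & (X[:, 2] < 0.6)).astype(int)   # toy labelling rule

    meta_ensemble = StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
            ("gb", GradientBoostingClassifier(random_state=0)),
        ],
        final_estimator=LogisticRegression(),              # extra processing layer
    )
    meta_ensemble.fit(X, y)


    def allocate_query(query_features, node_features, k=2):
        """Return the k node indices whose pairing with the query scores highest."""
        pairs = np.array([np.concatenate([query_features, nf]) for nf in node_features])
        scores = meta_ensemble.predict_proba(pairs)[:, 1]
        return np.argsort(scores)[::-1][:k].tolist()


    if __name__ == "__main__":
        query = np.array([0.7, 0.3])                             # complexity, selectivity
        nodes = np.array([[0.9, 0.2], [0.3, 0.8], [0.5, 0.6]])   # load, data overlap
        print(allocate_query(query, nodes))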
