    Real-Time IoV Task Offloading through Dynamic Assignment of SDN Controllers: Algorithmic Approaches and Performance Evaluation

    Task offloading in the Internet of Vehicles (IoV) is crucial. Widely deployed IoT applications interact frequently with the cloud, increasing the load on centralized cloud controllers, and centralized network management in cloud infrastructure cannot keep pace with current IoT trends. Decentralized, decoupled network management through Software Defined Networking (SDN) can enhance IoV services, and coupling SDN with IoV allows task offloading to be handled better in ubiquitous, dynamic IoV environments. Appropriate SDN controller assignment and allotment strategies therefore play a prominent role in IoV communication. In this study, we developed three algorithms for SDN controller assignment and allotment: 1) Next Fit Allotment and Assignment of SDN Controller in IoV (NFAAC), 2) Dynamic Bin Packing Allotment and Assignment of SDN Controller in IoV (DBPAAC), and 3) Dynamic Focused and Bidding Allotment and Assignment of SDN Controller in IoV (DFBAAC). The algorithms were simulated using OpenFlow switch controllers, modeled as Road Side Units (RSUs) that allocate bandwidth and resources to vehicles on the road. Our results show that the proposed algorithms perform SDN controller assignment and allocation efficiently, outperforming existing work by 13.5%. The operation of the proposed algorithms is verified, tested, and analytically presented in this study.
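
    To illustrate the flavour of the next-fit strategy behind NFAAC, here is a minimal sketch that assigns per-vehicle bandwidth demands to RSU-hosted controllers; the capacity model, units, and names (Controller, next_fit_assign) are illustrative assumptions, not the paper's actual formulation.

```python
from dataclasses import dataclass, field

@dataclass
class Controller:
    """An RSU-hosted SDN controller with a bandwidth budget (illustrative units)."""
    cid: int
    capacity: float
    used: float = 0.0
    vehicles: list = field(default_factory=list)

def next_fit_assign(demands, capacity):
    """Next-fit: keep one 'open' controller; if the current vehicle's demand
    does not fit, close it and open a new controller (a new bin)."""
    controllers = [Controller(cid=0, capacity=capacity)]
    for vid, demand in enumerate(demands):
        current = controllers[-1]
        if current.used + demand > current.capacity:
            current = Controller(cid=len(controllers), capacity=capacity)
            controllers.append(current)
        current.used += demand
        current.vehicles.append(vid)
    return controllers

# Example: per-vehicle bandwidth demands against controllers of capacity 10.
for c in next_fit_assign([4, 5, 3, 6, 2, 7], capacity=10.0):
    print(f"controller {c.cid}: vehicles {c.vehicles}, load {c.used}/{c.capacity}")
```

    Next-fit never revisits a closed bin, which keeps the assignment decision O(1) per vehicle; the paper's dynamic-bin-packing and bidding variants presumably trade this simplicity for tighter packing.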

    Drone-Assisted Wireless Communications

    In order to address the increasing demand for anytime/anywhere wireless connectivity, both academic and industrial researchers are actively engaged in the design of fifth-generation (5G) wireless communication networks. In contrast to traditional bottom-up or horizontal design approaches, 5G wireless networks are being co-created with various stakeholders to address connectivity requirements across various verticals (i.e., employing a top-to-bottom approach). From a communication-networks perspective, this requires resilience to various failures. In the context of cellular networks, base station (BS) failures can be caused by either natural or man-made phenomena. Natural phenomena such as earthquakes or flooding can result in destruction of communication hardware or disruption of the energy supply to BSs. In such cases, there is a dire need for a mechanism through which a capacity shortfall can be met rapidly. Drone-empowered small cellular networks, so-called "flying cellular networks", present an attractive solution as they can be swiftly deployed to provision public safety (PS) networks. While drone-empowered self-organising networks (SONs) and drone small cell networks (DSCNs) have received some attention in the recent past, the design space of such networks has not been extensively traversed. The purpose of this thesis is therefore to study the optimal deployment of drone-empowered networks in different scenarios and for different applications (i.e., in cellular post-disaster scenarios and, briefly, in assisting the backscatter Internet of Things (IoT)). To this end, we borrow well-known tools from stochastic geometry to study the performance of multiple network deployments, as stochastic geometry provides a very powerful theoretical framework that accommodates network scalability and different spatial distributions. We then investigate the design space of flying wireless networks and explore the co-existence properties of an overlaid DSCN with the operational part of existing networks. We define and study design parameters such as the optimal altitude and number of drone BSs as a function of the number of destroyed BSs, propagation conditions, etc. Next, due to capacity and back-hauling limitations on drone small cells (DSCs), we assume that each coverage hole requires a multitude of DSCs to meet the shortfall coverage at a desired quality of service (QoS). Hence, we consider the clustered deployment of DSCs around the site of the destroyed BS. Joint consideration of partially operating BSs and deployed DSCs yields a unique topology for such PS networks, so we propose a clustering mechanism that extends the traditional Matérn and Thomas cluster processes to a more general case where the cluster size depends on the size of the coverage hole. As a result, it is demonstrated that by intelligently selecting operational network parameters such as drone altitude, density, number, transmit power, and the spatial distribution of the deployment, ground-user coverage can be significantly enhanced. As another contribution of this thesis, we present a detailed analysis of the coverage and spectral efficiency of a downlink cellular network. Rather than relying on first-order statistics of the received signal-to-interference ratio (SIR), such as the coverage probability, we focus on characterizing its meta-distribution. Our new design framework reveals that traditional results which advocate lowering BS heights, or even optimal selection of BS height, do not yield a consistent service experience across users. Finally, for drone-assisted IoT sensor networks, we develop a comprehensive framework to characterize the performance of a drone-assisted backscatter-communication-based IoT sensor network. A statistical framework is developed to quantify the coverage probability that explicitly accommodates the dyadic backscatter channel, which experiences deeper fades than the one-way Rayleigh channel. We practically implement the proposed system using software-defined radio (SDR) and a custom-designed sensor node (SN) tag. Measurements of parameters such as the noise figure and tag reflection coefficient are used to parametrize the developed framework.
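
    The clustered DSC deployment lends itself to simulation. The sketch below samples drone positions as a Matérn-style cluster process whose disc radius tracks the coverage-hole size, then Monte Carlo estimates a ground user's SIR coverage under Rayleigh fading. All parameter values (altitude, path-loss exponent, thresholds) and helper names are assumptions for illustration, not the thesis's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(1)

def matern_cluster_drones(parent_sites, hole_radii, mean_per_cluster):
    """Matérn-style clustering: around each destroyed-BS site, drop a Poisson
    number of drone BSs uniformly in a disc whose radius tracks the coverage
    hole (a sketch of the hole-size-dependent generalisation)."""
    points = []
    for (x, y), r in zip(parent_sites, hole_radii):
        n = rng.poisson(mean_per_cluster)
        rad = r * np.sqrt(rng.uniform(size=n))          # uniform in disc
        ang = rng.uniform(0.0, 2 * np.pi, size=n)
        points.extend(zip(x + rad * np.cos(ang), y + rad * np.sin(ang)))
    return np.array(points)

def coverage_probability(drones, user, height, alpha, sir_threshold, trials=5000):
    """Monte Carlo SIR coverage: Rayleigh fading, nearest drone serves,
    all others interfere; path loss d^-alpha over the 3D distance."""
    d = np.sqrt(((drones - user) ** 2).sum(axis=1) + height ** 2)
    serving = int(np.argmin(d))
    covered = 0
    for _ in range(trials):
        fade = rng.exponential(size=d.size)             # Rayleigh power fading
        p = fade * d ** (-alpha)
        interference = p.sum() - p[serving]
        sir = p[serving] / interference if interference > 0 else np.inf
        covered += sir > sir_threshold
    return covered / trials

sites = [(0.0, 0.0), (800.0, 300.0)]                    # destroyed-BS locations (m)
drones = matern_cluster_drones(sites, hole_radii=[250.0, 150.0], mean_per_cluster=5)
print(coverage_probability(drones, np.array([50.0, 40.0]), height=100.0,
                           alpha=3.5, sir_threshold=1.0))
```

    Sweeping the height and cluster parameters in such a simulation is one way to reproduce the qualitative altitude/density trade-offs the thesis studies analytically.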

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, modelling and simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to afford better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. Seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Data Collection in Two-Tier IoT Networks with Radio Frequency (RF) Energy Harvesting Devices and Tags

    The Internet of Things (IoT) is expected to connect physical objects and end-users using technologies such as wireless sensor networks and radio frequency identification (RFID). In addition, it will employ a wireless multi-hop backhaul to transfer data collected by a myriad of devices to users or applications such as digital twins operating in a Metaverse. A critical issue is that the number of packets collected and transferred to the Internet is bounded by limited network resources such as bandwidth and energy. In this respect, IoT networks have adopted technologies such as time division multiple access (TDMA), successive interference cancellation (SIC), and multiple-input multiple-output (MIMO) in order to increase network capacity. Another fundamental issue is energy. To this end, researchers have exploited radio frequency (RF) energy-harvesting technologies to prolong the lifetime of energy-constrained sensors and smart devices. Specifically, devices with RF energy-harvesting capabilities can rely on ambient RF sources such as access points, television towers, and base stations. Further, an operator may deploy dedicated power beacons that serve as RF energy sources. Apart from that, to reduce energy consumption, devices can adopt ambient backscattering communication technologies; advantageously, backscattering allows devices to communicate using a negligible amount of energy by modulating ambient RF signals. To address the aforementioned issues, this thesis first considers data collection in a two-tier MIMO ambient RF energy-harvesting network. The first tier consists of routers with MIMO capability and a set of source-destination pairs/flows. The second tier consists of energy-harvesting devices that rely on RF transmissions from routers for their energy supply. The problem is to determine a minimum-length TDMA link schedule that satisfies both the traffic demand of source-destination pairs and the energy demand of energy-harvesting devices. The thesis formulates the problem as a linear program (LP) and outlines a heuristic to construct the transmission sets that are then used by the said LP. In addition, it outlines a new routing metric that considers the energy demand of energy-harvesting devices to cope with the routing requirements of IoT networks. The simulation results show that the proposed algorithm achieves schedules that are on average 31.25% shorter than those of competing schemes. In addition, the said routing metric results in link schedules that are at most 24.75% longer than those computed by the LP.
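
    A minimal sketch of the kind of LP described above, assuming the transmission sets (link subsets that can be active concurrently) are already precomputed by the heuristic: each variable is the airtime given to one set, the objective is the total schedule length, and each link must receive its demanded airtime. The instance data and names are invented for illustration, and the thesis's energy-demand constraints are omitted here.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative instance (not the thesis's data): 4 links, 3 precomputed
# transmission sets, i.e. subsets of links that can transmit concurrently.
num_links = 4
transmission_sets = [{0, 2}, {1, 3}, {0, 3}]
airtime_demand = np.array([2.0, 1.0, 3.0, 2.5])   # required airtime per link (s)

# Decision variable x_s = time allotted to transmission set s.
# Minimize sum_s x_s  subject to, for each link e:
#   sum over sets containing e of x_s >= demand_e.
A = np.zeros((num_links, len(transmission_sets)))
for s, tset in enumerate(transmission_sets):
    for e in tset:
        A[e, s] = 1.0

res = linprog(c=np.ones(len(transmission_sets)),
              A_ub=-A, b_ub=-airtime_demand,       # flip signs for >= constraints
              bounds=[(0, None)] * len(transmission_sets))
print("schedule length:", res.fun)                 # 5.5 for this instance
print("set durations:", res.x)
```

    Energy demands would enter as additional >= rows, one per harvesting device, accumulating the RF energy delivered while each transmission set is active.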

    Survey on 6G Frontiers: Trends, Applications, Requirements, Technologies and Future Research

    Emerging applications such as the Internet of Everything, holographic telepresence, collaborative robots, and space and deep-sea tourism are already highlighting the limitations of existing fifth-generation (5G) mobile networks. These limitations concern data rate, latency, reliability, availability, processing, connection density, and global coverage spanning ground, underwater, and space. Sixth-generation (6G) mobile networks are expected to burgeon in the coming decade to address these limitations. The development of the 6G vision, applications, technologies, and standards has already become a popular research theme in academia and industry. In this paper, we provide a comprehensive survey of current developments towards 6G. We highlight the societal and technological trends that initiate the drive towards 6G. Emerging applications that realize the demands raised by the 6G driving trends are discussed subsequently. We also elaborate on the requirements necessary to realize 6G applications. We then present the key enabling technologies in detail. We also outline current research projects and activities, including standardization efforts, towards the development of 6G. Finally, we summarize the lessons learned from state-of-the-art research and discuss technical challenges that shed new light on future research directions towards 6G.

    Resource management for cost-effective cloud and edge systems

    With the booming of Internet-based and cloud/edge computing applications and services, datacenters hosting these services have become ubiquitous in every sector of our economy, which creates tremendous research opportunities. Specifically, in cloud computing all data are gathered and processed in centralized cloud datacenters, whereas in edge computing the frontier of data and services is pushed away from the centralized cloud to the edge of the network. By fusing edge computing with cloud computing, Internet companies and end users can benefit from their respective merits: the abundant computation and storage resources of cloud computing, and the data-gathering potential of edge computing. However, resource management in cloud and edge systems is complicated and challenging due to the large scale of cloud datacenters, diverse interconnected resource types, unpredictable workloads, and a range of performance objectives; it necessitates systematic modeling of cloud and edge systems to achieve the desired performance objectives. This dissertation presents holistic system modeling and novel solution methodologies to effectively solve the optimization problems formulated for three cloud and edge architectures: 1) cloud computing in colocation datacenters; 2) cloud computing in geographically distributed datacenters; and 3) UAV-enabled mobile edge computing. First, we study resource management with the goal of overall cost minimization in the context of cloud computing systems. A cooperative game is formulated to model the scenario where a multi-tenant colocation datacenter collectively procures electricity in the wholesale electricity market. Then, a two-stage stochastic program is formulated to model the scenario where geographically distributed datacenters dispatch workload and procure electricity in multi-timescale electricity markets. Last, we extend our focus to joint task offloading and resource management with the goal of overall cost minimization in the context of edge computing systems, where edge nodes with computing capabilities are deployed in proximity to end users. A nonconvex optimization problem is formulated for the UAV-enabled mobile edge computing system with the goal of minimizing both the energy consumed by computation and task offloading and the system response delay. Furthermore, a novel hybrid algorithm that unifies differential evolution and successive convex approximation is proposed to solve the problem efficiently with improved performance. This dissertation addresses several fundamental issues related to resource management in cloud and edge computing systems and lays the groundwork for further in-depth investigations into cost-effective performance. The advanced modeling and efficient algorithms developed in this research enable the system operator to make optimal, strategic decisions in resource allocation and task offloading for cost savings.
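
    A rough sketch of a hybrid global/local scheme in the spirit of the dissertation's DE-plus-SCA algorithm: DE/rand/1/bin explores the nonconvex landscape, and a smooth local solver periodically refines the incumbent as a stand-in for the convexified (SCA-style) subproblem. The toy objective and every parameter value here are assumptions; the actual energy/delay objective and the SCA update are specific to the UAV-MEC formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def cost(p):
    """Toy nonconvex stand-in for the energy-plus-delay objective:
    a multimodal function of a 2-D UAV position (illustrative only)."""
    x, y = p
    return (x ** 2 + y ** 2) / 50.0 + np.sin(3 * x) * np.cos(2 * y)

def de_with_local_polish(bounds, pop_size=20, gens=60, F=0.7, CR=0.9):
    """DE/rand/1/bin for global exploration; every few generations the best
    member is refined with a smooth local solver (the 'SCA-like' step)."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    fit = np.array([cost(p) for p in pop])
    for g in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.uniform(size=2) < CR
            cross[rng.integers(2)] = True       # keep >=1 mutated coordinate
            trial = np.where(cross, mutant, pop[i])
            f = cost(trial)
            if f < fit[i]:                      # greedy selection
                pop[i], fit[i] = trial, f
        if g % 10 == 0:                         # local refinement of incumbent
            k = int(np.argmin(fit))
            res = minimize(cost, pop[k], method="L-BFGS-B",
                           bounds=[(lo, hi)] * 2)
            if res.fun < fit[k]:
                pop[k], fit[k] = res.x, res.fun
    k = int(np.argmin(fit))
    return pop[k], fit[k]

best_x, best_f = de_with_local_polish(bounds=(-5.0, 5.0))
print("best position:", best_x, "cost:", best_f)
```

    The appeal of such hybrids is that DE supplies gradient-free global coverage while the local step inherits the fast convergence of convex methods near a good basin.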

    Enabling Computational Intelligence for Green Internet of Things: Data-Driven Adaptation in LPWA Networking

    With the exponential expansion in the number of Internet of Things (IoT) devices, many state-of-the-art communication technologies are being developed to make use of low-power but extensively deployed devices. Due to the limits of pure channel characteristics, most protocols cannot allow an IoT network to be simultaneously large-scale and energy-efficient, especially in hybrid architectures. However, in contrast to the original pursuit of faster and broader connectivity, the daily operation of IoT devices requires only stable, low-cost links. Our design goal is therefore to develop a comprehensive solution for intelligent green IoT networking that satisfies these modern requirements through a data-driven mechanism, so that IoT networks use computational intelligence to realize self-regulation of composition, size minimization, and throughput optimization. To the best of our knowledge, this study is the first to use the green protocols LoRa and ZigBee to establish an ad hoc network and address the problem of energy efficiency. First, we propose a unique initialization mechanism that automatically schedules node clustering and throughput optimization. Then, each device executes a procedure to manage its own energy consumption and optimize switching in and out of sleep mode, relying on AI-controlled prediction of service usage habits to learn future usage trends. Finally, our new theory is corroborated through real-world deployment and numerical comparisons. We believe that our new type of network organization and control system could improve the performance of all green-oriented IoT services and even change human lifestyle habits.
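
    As a loose illustration of habit-driven duty cycling, the sketch below substitutes a simple exponentially weighted per-hour activity estimate for the paper's AI usage predictor and puts the device to sleep whenever predicted activity falls below a threshold. The class, parameter names, and threshold value are all invented for illustration.

```python
import numpy as np

class SleepScheduler:
    """Minimal habit-driven duty cycling: an EWMA per-hour usage estimate
    stands in for an AI-based service-usage predictor (illustrative only)."""
    def __init__(self, wake_threshold=0.2, alpha=0.1):
        self.usage = np.zeros(24)        # estimated activity per hour of day
        self.alpha = alpha               # EWMA smoothing factor
        self.wake_threshold = wake_threshold

    def observe(self, hour, active):
        """Update the habit model with one observation (active in {0, 1})."""
        self.usage[hour] = (1 - self.alpha) * self.usage[hour] + self.alpha * active

    def should_sleep(self, hour):
        """Sleep whenever predicted activity for this hour is low."""
        return self.usage[hour] < self.wake_threshold

sched = SleepScheduler()
for day in range(30):                    # synthetic month: traffic from 8h to 20h
    for hour in range(24):
        sched.observe(hour, active=int(8 <= hour < 20))
print([h for h in range(24) if sched.should_sleep(h)])   # proposed sleep hours
```

    A richer predictor (e.g., a small sequence model) would slot into should_sleep unchanged, which is the appeal of separating the habit model from the sleep policy.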

    5GAuRA. D3.3: RAN Analytics Mechanisms and Performance Benchmarking of Video, Time Critical, and Social Applications

    5GAuRA deliverable D3.3. This is the final deliverable of Work Package 3 (WP3) of the 5GAuRA project, providing a report on the project's developments on the topics of Radio Access Network (RAN) analytics and application performance benchmarking. The focus of this deliverable is to extend and deepen the methods and results provided in 5GAuRA deliverable D3.2 in the context of specific use scenarios of video, time-critical, and social applications. In this respect, four major topics of WP3 of 5GAuRA are put forward, namely the edge-cloud enhanced RAN architecture, a machine-learning-assisted Random Access Channel (RACH) approach, Multi-access Edge Computing (MEC) content caching, and active queue management. Specifically, this document provides a detailed discussion of the service level agreement between tenant and service provider in the context of network slicing in fifth-generation (5G) communication networks. Network slicing is considered a key enabler of the 5G communication system. Legacy telecommunication networks have provided various services to all kinds of customers through a single network infrastructure. In contrast, by deploying network slicing, operators are now able to partition one network into individual slices, each with its own configuration and Quality of Service (QoS) requirements. Many applications across industry open new business opportunities with new business models, and every application instance requires an independent slice with its own network functions and features, whereby every single slice needs an individual Service Level Agreement (SLA). In D3.3, we propose a comprehensive end-to-end structure for the SLA between the tenant and the service provider of a sliced 5G network, which balances the interests of both sides. The proposed SLA defines the reliability, availability, and performance of delivered telecommunication services in order to ensure that the right information is delivered to the right destination at the right time, safely and securely. We also discuss the metrics of a slice-based network SLA, such as throughput, penalty, cost, revenue, profit, and QoS-related metrics, which are, in the view of 5GAuRA, critical features of the agreement.
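
    To make the slice-based SLA metrics concrete, here is a minimal settlement sketch assuming a linear price and a linear shortfall penalty; the field names, units, and penalty model are illustrative assumptions rather than the deliverable's actual SLA structure.

```python
from dataclasses import dataclass

@dataclass
class SliceSLA:
    """Illustrative slice-level SLA terms (units and linear penalty assumed)."""
    agreed_throughput_mbps: float
    price_per_mbps: float        # revenue per delivered Mbps
    opex_cost: float             # provider cost of running the slice
    penalty_per_mbps: float      # refund per Mbps of shortfall

def settle(sla: SliceSLA, measured_mbps: float):
    """Compute revenue, penalty, and provider profit for one billing period."""
    delivered = min(measured_mbps, sla.agreed_throughput_mbps)
    shortfall = max(sla.agreed_throughput_mbps - measured_mbps, 0.0)
    revenue = delivered * sla.price_per_mbps
    penalty = shortfall * sla.penalty_per_mbps
    return {"revenue": revenue, "penalty": penalty,
            "profit": revenue - penalty - sla.opex_cost}

video_slice = SliceSLA(agreed_throughput_mbps=100, price_per_mbps=2.0,
                       opex_cost=120.0, penalty_per_mbps=3.0)
print(settle(video_slice, measured_mbps=85.0))    # shortfall triggers a penalty
```

    Balancing the tenant's and provider's interests then amounts to choosing the price and penalty slopes so that neither over-provisioning nor chronic shortfall is the provider's cheapest option.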

    Smart Resource Allocation in Internet-of-Things: Perspectives of Network, Security, and Economics

    Emerging from years of research and development, the Internet of Things (IoT) has finally paved its way into our daily lives. From the smart home to Industry 4.0, IoT has been fundamentally transforming numerous domains with its unique superpower of interconnecting devices worldwide. However, the capability of IoT is largely constrained by the limited resources it can employ in various application scenarios, including computing power, network resources, dedicated hardware, etc. The situation is further exacerbated by the stringent quality-of-service (QoS) requirements of many IoT applications, such as delay, bandwidth, security, and reliability. This mismatch between resources and demands has greatly hindered the deployment and utilization of IoT services in many resource-intense and QoS-sensitive scenarios like autonomous driving and virtual reality. I believe that the resource issue in IoT will persist in the near future due to technological, economic, and environmental factors. In this dissertation, I seek to address this issue by means of smart resource allocation. I propose mathematical models to formally describe the various resource constraints and application scenarios in IoT. Based on these, I design smart resource allocation algorithms and protocols to maximize system performance in the face of resource restrictions. Different aspects are tackled, including the networking, security, and economics of the entire IoT ecosystem. For different problems, different algorithmic solutions are devised, including optimal algorithms, provable approximation algorithms, and distributed protocols. The solutions are validated with rigorous theoretical analysis and/or extensive simulation experiments.