
    Deep Reinforcement Scheduling for Mobile Crowdsensing in Fog Computing

    Mobile crowdsensing has become a promising technology for emerging Internet of Things (IoT) applications in smart environments. Fog computing is enabling a new breed of IoT services, which also creates new opportunities for mobile crowdsensing. In this article, we therefore introduce a framework that enables mobile crowdsensing in fog environments with a hierarchical scheduling strategy. We first introduce the crowdsensing framework, which has a hierarchical structure for organizing different resources. Since the positions and performance of fog nodes influence the quality of service (QoS) of IoT applications, we formulate a scheduling problem in the hierarchical fog structure and solve it with a deep reinforcement learning-based strategy. Extensive simulation results show that our solution outperforms other scheduling solutions for mobile crowdsensing in the given fog computing environment.
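
    The abstract does not detail the scheduler's state, action, or reward design. The sketch below is a minimal stand-in that uses tabular Q-learning instead of the paper's deep reinforcement learning model: per-node queue lengths serve as the state, the chosen fog node is the action, and negative task completion time is the reward; node speeds and queue dynamics are assumptions made for illustration.

```python
# Minimal RL task-to-fog-node scheduling sketch (illustrative assumptions only).
import random
from collections import defaultdict

NUM_NODES = 3                      # hypothetical fog nodes
NODE_SPEED = [1.0, 2.0, 4.0]       # assumed relative processing speeds

def step(queues, action, task_size=1.0):
    """Assign a task to a fog node; reward is negative completion time."""
    queues = list(queues)
    queues[action] += task_size
    completion_time = queues[action] / NODE_SPEED[action]
    # each node drains a little between arrivals (assumed dynamics)
    next_state = tuple(max(0.0, q - NODE_SPEED[i] * 0.5) for i, q in enumerate(queues))
    return next_state, -completion_time

def discretize(state):
    return tuple(int(q) for q in state)

q_table = defaultdict(lambda: [0.0] * NUM_NODES)
alpha, gamma, epsilon = 0.1, 0.9, 0.2

state = (0.0,) * NUM_NODES
for _ in range(5000):
    s = discretize(state)
    if random.random() < epsilon:
        action = random.randrange(NUM_NODES)      # explore
    else:
        action = max(range(NUM_NODES), key=lambda a: q_table[s][a])  # exploit
    state, reward = step(state, action)
    s_next = discretize(state)
    q_table[s][action] += alpha * (reward + gamma * max(q_table[s_next]) - q_table[s][action])

print("learned preference in empty-queue state:", q_table[(0, 0, 0)])
```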

    BeneWinD: An Adaptive Benefit Win–Win Platform with Distributed Virtual Emotion Foundation

    In recent decades, online platforms that use Web 3.0 have tremendously expanded their goods, services, and values across numerous applications thanks to inherent advantages such as convenience, service speed, and connectivity. Although online commerce and other related platforms have clear merits, offline-based commerce and payments are indispensable and should be continuously supported, because offline systems have intrinsic value for people. With the theme of benefiting all humankind, we propose a new adaptive benefit platform, called BeneWinD, which combines the strengths of online and offline platforms. Furthermore, a new currency for integrated benefits, the win–win digital currency, is used in the proposed platform. Essentially, the proposed platform with a distributed virtual emotion foundation aims to provide a wide scope of benefits to both parties, the seller and the consumer, in online and offline settings. We first introduce the features, applicable scenarios, and services of the proposed platform. Different from previous systems and perspectives, BeneWinD can be combined with Web 3.0 because it operates on a decentralized or distributed virtual emotion foundation, and the virtual emotion feature and the anonymized virtual emotion information it detects are open to everyone who wants to participate in the platform. It follows that the BeneWinD platform can be connected to the linked virtual emotion data block or to the win–win digital currency. Finally, crucial research challenges and issues are addressed to guide further development of the platform.

    SimTune: bridging the simulator reality gap for resource management in edge-cloud computing

    Industries and services are undergoing an Internet of Things-centric transformation globally, giving rise to an explosion of multi-modal data generated each second. This, together with the requirement of low-latency result delivery, has led to the ubiquitous adoption of edge and cloud computing paradigms. Edge computing follows the data gravity principle, wherein computational devices move closer to the end users to minimize data transfer and communication times. However, large-scale computation has exacerbated the problem of efficient resource management in hybrid edge-cloud platforms. In this regard, data-driven models such as deep neural networks (DNNs) have gained popularity, giving rise to the notion of edge intelligence. However, DNNs face significant data-saturation problems when fed volatile data: data saturation occurs when providing more data does not translate into improved performance. To address this issue, prior work has leveraged coupled simulators that, akin to digital twins, generate out-of-distribution training data, alleviating the data-saturation problem. However, simulators face the reality-gap problem, i.e., inaccuracy in the emulation of real computational infrastructure due to the abstractions in such simulators. To combat this, we develop a framework, SimTune, that tackles this challenge by leveraging a low-fidelity surrogate model of the high-fidelity simulator to update the parameters of the latter, so as to increase simulation accuracy. This further helps co-simulated methods generalize to edge-cloud configurations for which human-encoded parameters are not known a priori. Experiments comparing SimTune against state-of-the-art data-driven resource management solutions on a real edge-cloud platform demonstrate that simulator tuning can improve quality-of-service metrics such as energy consumption and response time by up to 14.7% and 7.6%, respectively.
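
    The SimTune implementation itself is not reproduced here. The sketch below only illustrates the surrogate idea under stated assumptions: a cheap quadratic surrogate approximates how a single, hypothetical simulator parameter (network_delay) affects the gap between simulated and measured response times, and the parameter value minimizing that surrogate is fed back into the simulator.

```python
# Surrogate-driven simulator tuning sketch; simulator, traces, and the single
# tunable parameter are stand-ins, not the real SimTune components.
import numpy as np

rng = np.random.default_rng(0)

def measured_response_time():
    """Stand-in for response times observed on the real edge-cloud platform."""
    return 120.0 + rng.normal(0, 2)

def simulate_response_time(network_delay):
    """Stand-in high-fidelity simulator with one tunable parameter (ms)."""
    return 80.0 + 0.8 * network_delay + rng.normal(0, 2)

# 1. Sample the simulator at a few candidate parameter values.
candidates = np.linspace(0, 100, 11)
gaps = [abs(simulate_response_time(p) - measured_response_time()) for p in candidates]

# 2. Fit a low-fidelity surrogate (quadratic) to the reality gap.
surrogate = np.poly1d(np.polyfit(candidates, gaps, deg=2))

# 3. Pick the parameter value the surrogate predicts to minimize the gap.
fine_grid = np.linspace(0, 100, 1001)
best = fine_grid[np.argmin(surrogate(fine_grid))]
print(f"tuned network_delay ~ {best:.1f} ms, predicted gap ~ {surrogate(best):.2f} ms")
```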

    An Adaptive Task Scheduling in Fog Computing

    Internet applications generate massive amounts of data. To be processed, the data is transmitted to the cloud, yet time-sensitive applications require faster access. A limitation of the cloud, however, is its connectivity with end devices. Fog computing was developed by Cisco to overcome this limitation: fog has better connectivity with end devices, albeit with some resource limitations, and works as an intermediate layer between the end devices and the cloud. When providing quality of service to end users, scheduling plays an important role, and scheduling tasks according to end users' requirements is challenging. In this paper, we propose a cloud-fog task scheduling model that provides quality of service to end devices with proper security.
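
    The abstract does not describe the scheduling policy itself. As a rough, assumption-laden illustration of QoS-driven cloud-fog scheduling, the sketch below keeps a task at the fog layer when its estimated fog completion time meets the deadline and offloads it to the cloud otherwise; the latency and capacity figures are invented for the example.

```python
# Deadline-aware cloud-fog placement sketch (all numbers are assumptions).
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cycles: float        # work in mega-cycles
    deadline_ms: float

FOG_RATE = 500.0         # mega-cycles/s (assumed fog capacity)
CLOUD_RATE = 5000.0      # mega-cycles/s (assumed cloud capacity)
FOG_RTT_MS = 5.0         # assumed round-trip time to fog
CLOUD_RTT_MS = 80.0      # assumed round-trip time to cloud

def schedule(task: Task) -> str:
    fog_latency = FOG_RTT_MS + task.cycles / FOG_RATE * 1000
    cloud_latency = CLOUD_RTT_MS + task.cycles / CLOUD_RATE * 1000
    if fog_latency <= task.deadline_ms:
        return f"{task.name}: fog ({fog_latency:.1f} ms)"
    return f"{task.name}: cloud ({cloud_latency:.1f} ms)"

for t in [Task("sensor-alert", cycles=10, deadline_ms=30),
          Task("video-analytics", cycles=2000, deadline_ms=500)]:
    print(schedule(t))
```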

    Machine learning methods for service placement : a systematic review

    With the growth of real-time and latency-sensitive applications in the Internet of Everything (IoE), service placement cannot rely on cloud computing alone. In response to this need, several computing paradigms, such as Mobile Edge Computing (MEC), Ultra-dense Edge Computing (UDEC), and Fog Computing (FC), have emerged. These paradigms aim to bring computing resources closer to the end user, reducing delay and wasted backhaul bandwidth. Among the major challenges of these new paradigms are the limited edge resources and the dependencies between different service parts. Some solutions, such as the microservice architecture, allow different parts of an application to be processed simultaneously. However, due to the ever-increasing number of devices and incoming tasks, the service placement problem can no longer be solved by rule-based deterministic solutions; in such a dynamic and complex environment, many factors can influence the solution. Optimization and Machine Learning (ML) are the two tools most widely used for service placement. Both typically rely on a cost function, usually defined as the difference between the predicted and actual value, which the method seeks to minimize. In simpler terms, instead of relying on explicit rules, ML aims to minimize the gap between prediction and reality based on historical data. Due to the NP-hard nature of the service placement problem, classical optimization methods are not sufficient; instead, metaheuristic and heuristic methods are widely used. In addition, the ever-changing big data in IoE environments requires specific ML methods. In this systematic review, we present a taxonomy of ML methods for the service placement problem. Our findings show that 96% of applications use a distributed microservice architecture, 51% of the studies are based on on-demand resource estimation methods, and 81% are multi-objective. This article also outlines open questions and future research trends. Our literature review shows that one of the most important trends in ML is reinforcement learning, with a 56% share of the research.
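
    To make the cost-function view above concrete, the sketch below fits a tiny linear model to assumed historical placement data by gradient descent on the mean-squared gap between predicted and observed latency, then uses the learned model to pick the cheapest candidate node. It only illustrates the prediction-versus-reality idea; the surveyed studies use far richer models, reinforcement learning in particular.

```python
# Cost-function-driven placement sketch; features and data are assumed.
import numpy as np

rng = np.random.default_rng(1)

# Historical data: [node_load, hops_to_user] -> observed latency (ms)
X = rng.uniform([0.0, 1.0], [1.0, 5.0], size=(200, 2))
y = 20 * X[:, 0] + 8 * X[:, 1] + 5 + rng.normal(0, 1, 200)

w, b = np.zeros(2), 0.0
lr = 0.05
for _ in range(2000):                       # gradient descent on the MSE cost
    pred = X @ w + b
    err = pred - y                          # gap between prediction and reality
    w -= lr * (X.T @ err) / len(y)
    b -= lr * err.mean()

# Use the learned cost model to place a service on the cheapest candidate node.
candidates = np.array([[0.9, 1.0],          # busy but close
                       [0.2, 4.0]])         # idle but far
print("predicted latencies:", candidates @ w + b)
print("chosen node:", int(np.argmin(candidates @ w + b)))
```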

    Cellular, Wide-Area, and Non-Terrestrial IoT: A Survey on 5G Advances and the Road Towards 6G

    The next wave of wireless technologies is proliferating in connecting things among themselves as well as to humans. In the era of the Internet of things (IoT), billions of sensors, machines, vehicles, drones, and robots will be connected, making the world around us smarter. The IoT will encompass devices that must wirelessly communicate a diverse set of data gathered from the environment for myriad new applications. The ultimate goal is to extract insights from this data and develop solutions that improve quality of life and generate new revenue. Providing large-scale, long-lasting, reliable, and near real-time connectivity is the major challenge in enabling a smart connected world. This paper provides a comprehensive survey on existing and emerging communication solutions for serving IoT applications in the context of cellular, wide-area, as well as non-terrestrial networks. Specifically, wireless technology enhancements for providing IoT access in fifth-generation (5G) and beyond cellular networks, and communication networks over the unlicensed spectrum are presented. Aligned with the main key performance indicators of 5G and beyond 5G networks, we investigate solutions and standards that enable energy efficiency, reliability, low latency, and scalability (connection density) of current and future IoT networks. The solutions include grant-free access and channel coding for short-packet communications, non-orthogonal multiple access, and on-device intelligence. Further, a vision of new paradigm shifts in communication networks in the 2030s is provided, and the integration of the associated new technologies like artificial intelligence, non-terrestrial networks, and new spectra is elaborated. Finally, future research directions toward beyond 5G IoT networks are pointed out.
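
    Among the enablers listed in the abstract, non-orthogonal multiple access (NOMA) lends itself to a compact numerical illustration. The sketch below works through two-user power-domain NOMA with successive interference cancellation; the channel gains, noise level, and power split are assumed values, not figures from the survey.

```python
# Two-user power-domain NOMA with SIC (illustrative numbers only).
import math

P = 1.0                   # total transmit power (assumed, normalized)
N0 = 0.01                 # noise power (assumed)
g_near, g_far = 1.0, 0.1  # channel gains: near (strong) and far (weak) user
a_far, a_near = 0.8, 0.2  # more power allocated to the weak user (assumed split)

# Far user decodes its own signal, treating the near user's signal as noise.
sinr_far = a_far * P * g_far / (a_near * P * g_far + N0)
rate_far = math.log2(1 + sinr_far)

# Near user first decodes and cancels the far user's signal (SIC),
# then decodes its own signal interference-free.
sinr_near = a_near * P * g_near / N0
rate_near = math.log2(1 + sinr_near)

print(f"far user rate  ~ {rate_far:.2f} bit/s/Hz")
print(f"near user rate ~ {rate_near:.2f} bit/s/Hz")
```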

    Intelligent Multi-Dimensional Resource Management in MEC-Assisted Vehicular Networks

    Benefiting from advances in the automobile industry and wireless communication technologies, vehicular networks have emerged as a key enabler of intelligent transportation services. By allowing real-time information exchange between vehicles and everything around them, traffic safety and efficiency are significantly enhanced, and ubiquitous Internet access is enabled to support new data services and applications. However, with more and more services and applications, the mobile data traffic generated by vehicles keeps increasing and the problem of overloaded computing tasks is getting worse. Because of limited spectrum and limited on-board computing and caching resources in vehicles, it is challenging to advance vehicular networking technologies to support emerging services and applications, especially those that are delay-sensitive and require diverse resources. To overcome these challenges, this thesis proposes a new vehicular network architecture and designs efficient resource management schemes to support emerging applications and services with different levels of quality-of-service (QoS) guarantees.

    Firstly, we propose a multi-access edge computing (MEC)-assisted vehicular network (MVNET) architecture that integrates the concepts of software-defined networking (SDN) and network function virtualization (NFV). With MEC, the interworking of multiple wireless access technologies can be realized to exploit the diversity gain over a wide range of radio spectrum, while vehicles' computing/caching tasks can be offloaded to and processed by the MEC servers. By enabling NFV in MEC, different functions can be programmed on the server to support diversified vehicular applications, thus enhancing the server's flexibility. Moreover, by applying SDN concepts in MEC, a unified control-plane interface and global information can be provided, which in turn enables intelligent traffic steering and efficient resource management.

    Secondly, under the proposed MVNET architecture, we propose a dynamic spectrum management framework to improve spectrum utilization while guaranteeing the QoS requirements of different applications, in which spectrum slicing, spectrum allocation, and transmit power control are jointly considered. Accordingly, three non-convex network utility maximization problems are formulated to slice spectrum among base stations (BSs), allocate spectrum among vehicles associated with the same BS, and control the transmit powers of BSs, respectively. Via linear programming relaxation and first-order Taylor series approximation, these problems are transformed into tractable forms and then jointly solved by a proposed alternate concave search algorithm. As a result, the optimal spectrum slicing ratios among BSs, the optimal BS-vehicle association patterns, the optimal fractions of spectrum resources allocated to vehicles, and the optimal transmit powers of BSs are obtained. Our simulations show that the proposed spectrum management scheme achieves a higher aggregate network utility than two existing schemes.

    Thirdly, we study the joint allocation of spectrum, computing, and caching resources in MVNETs. To support different vehicular applications, we consider two typical MVNET architectures and formulate the corresponding multi-dimensional resource optimization problems, which usually have high computational complexity and long solution times. Thus, we exploit reinforcement learning to transform the two formulated problems and solve them by leveraging the deep deterministic policy gradient (DDPG) and hierarchical learning architectures. Via offline training, the network dynamics can be learned automatically and appropriate resource allocation decisions can be obtained rapidly to satisfy the QoS requirements of vehicular applications. Simulation results show that the proposed resource management schemes achieve high delay/QoS satisfaction ratios.

    Fourthly, we extend the proposed MVNET architecture to an unmanned aerial vehicle (UAV)-assisted MVNET and investigate multi-dimensional resource management for it. To efficiently provide on-demand resource access, the macro eNodeB and the UAV, both mounted with MEC servers, cooperatively make association decisions and allocate proper amounts of resources to vehicles. Since there is no central controller, we formulate the resource allocation at the MEC servers as a distributed optimization problem that maximizes the number of offloaded tasks while satisfying their heterogeneous QoS requirements, and solve it with a multi-agent DDPG (MADDPG)-based method. By centrally training the MADDPG model offline, the MEC servers, acting as learning agents, can then rapidly make vehicle association and resource allocation decisions during the online execution stage. Our simulation results show that the MADDPG-based method achieves a convergence rate comparable to, and higher delay/QoS satisfaction ratios than, the benchmarks.

    In summary, this thesis proposes an MEC-assisted vehicular network architecture and investigates spectrum slicing and allocation as well as multi-dimensional resource allocation in MEC- and/or UAV-assisted vehicular networks. The proposed architecture and schemes should provide useful guidelines for future research on multi-dimensional resource management design and resource utilization enhancement in highly dynamic wireless networks with diversified data services and applications.
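
    The utility functions behind the spectrum allocation results are not given in the abstract. As a hedged illustration of how optimal per-vehicle spectrum fractions can emerge from a concave utility maximization, the sketch below assumes a weighted logarithmic (proportional-fair) utility, sum_i w_i * log(f_i * r_i) subject to sum_i f_i <= B, whose maximizer has the closed form f_i = B * w_i / sum_j w_j; the thesis's DDPG/MADDPG policies target the harder joint spectrum, computing, and caching problems that lack such closed forms.

```python
# Closed-form spectrum split under an assumed weighted-log (proportional-fair)
# utility; weights and bandwidth are illustrative, not values from the thesis.

def allocate_spectrum(weights, bandwidth_mhz):
    """Return per-vehicle bandwidth shares f_i = B * w_i / sum_j w_j."""
    total = sum(weights)
    return [bandwidth_mhz * w / total for w in weights]

# Three vehicles with QoS weights reflecting, e.g., delay sensitivity (assumed).
weights = [3.0, 1.0, 1.0]
for i, share in enumerate(allocate_spectrum(weights, bandwidth_mhz=20.0)):
    print(f"vehicle {i}: {share:.1f} MHz")
```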