
    Deep Learning Techniques for Mobility Prediction and Management in Mobile Networks

    Trajectory prediction is an important research topic in modern mobile networks (e.g., 5G and beyond-5G): it enhances the network quality of service by accurately predicting the future locations of mobile users, such as pedestrians and vehicles, based on their past mobility patterns. A trajectory is defined as the sequence of locations the user visits over time. The primary objective of this thesis is to improve the modeling of mobility data and to establish personalized, scalable, collective-intelligent, distributed, and strategic trajectory prediction techniques that can effectively adapt to the dynamics of urban environments in order to facilitate the optimal delivery of mobility-aware network services. Our proposed approaches aim to increase the accuracy of trajectory prediction while minimizing communication and computational costs, leading to more efficient mobile networks. The thesis begins by introducing a personalized trajectory prediction technique using deep learning and reinforcement learning, which adapts the neural network architecture to capture the distinct characteristics of each mobile user’s data. Furthermore, it introduces advanced anticipatory handover management and dynamic service migration techniques that optimize network management using our high-performance trajectory predictor; this ensures seamless connectivity and proactively migrates network services, enhancing the quality of service in dense wireless networks. The second contribution introduces cluster-level prediction to extend the reinforcement learning-based trajectory prediction and address scalability challenges in large-scale networks. Cluster-level trajectory prediction leverages users’ similarities within clusters to train only a few representatives, which enables efficient transfer learning of pre-trained mobility models and reduces computational overhead, enhancing network scalability. The third contribution proposes a collaborative, social-aware multi-agent trajectory prediction technique that accounts for the interactions between multiple intra-cluster agents in a dynamic urban environment, increasing the prediction accuracy while decreasing the algorithm complexity and computational resource usage. The fourth contribution proposes a federated learning-driven multi-agent trajectory prediction technique that leverages the collaborative power of multiple local data sources in a decentralized manner to enhance user privacy and improve the accuracy of trajectory prediction while jointly minimizing computational and communication costs. The fifth contribution proposes a game-theoretic, non-cooperative multi-agent prediction technique that considers the strategic behaviors among competitive inter-cluster mobile users. The proposed approaches are evaluated on small-scale and large-scale location-based mobility datasets, where locations can be GPS coordinates or cellular base station IDs. Our experiments demonstrate that the proposed approaches outperform state-of-the-art trajectory prediction methods, making significant contributions to the field of mobile networks.
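
    The core task described here is next-location prediction over a sequence of visited locations. As a rough illustration only (not the thesis's actual architecture), the sketch below shows a minimal recurrent next-location predictor in PyTorch that treats locations as discrete cell or base-station IDs; the class name, layer sizes, and toy data are illustrative assumptions.

```python
# Minimal sketch of a next-location predictor over discrete location IDs.
# Illustrative assumption only, not the architecture used in the thesis.
import torch
import torch.nn as nn

class NextLocationPredictor(nn.Module):
    def __init__(self, num_locations: int, emb_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(num_locations, emb_dim)   # location ID -> vector
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_locations)    # scores over next location

    def forward(self, location_ids: torch.Tensor) -> torch.Tensor:
        # location_ids: (batch, seq_len) integer tensor of past visited locations
        x = self.embed(location_ids)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])                      # logits for the next location

# Toy usage: 1000 candidate cells, a batch of two length-5 trajectories.
model = NextLocationPredictor(num_locations=1000)
past = torch.randint(0, 1000, (2, 5))
predicted_next = model(past).argmax(dim=-1)
```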

    A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning

    Automatic decision-making approaches, such as reinforcement learning (RL), have been applied to (partially) solve the resource allocation problem adaptively in cloud computing systems. However, a complete cloud resource allocation framework exhibits high dimensionality in its state and action spaces, which limits the usefulness of traditional RL techniques. In addition, high power consumption has become one of the critical concerns in the design and control of cloud computing systems, as it degrades system reliability and increases cooling cost. An effective dynamic power management (DPM) policy should minimize power consumption while keeping performance degradation within an acceptable level. Thus, a joint virtual machine (VM) resource allocation and power management framework is critical to the overall cloud computing system, and a novel solution framework is necessary to address the even higher dimensionality of the combined state and action spaces. In this paper, we propose a novel hierarchical framework for solving the overall resource allocation and power management problem in cloud computing systems. The proposed framework comprises a global tier for VM resource allocation to the servers and a local tier for distributed power management of local servers. The emerging deep reinforcement learning (DRL) technique, which can deal with complicated control problems with large state spaces, is adopted to solve the global-tier problem; an autoencoder and a novel weight-sharing structure are adopted to handle the high-dimensional state space and accelerate convergence. The local tier of distributed server power management comprises an LSTM-based workload predictor and a model-free RL-based power manager, operating in a distributed manner.
    Comment: Accepted by the 37th IEEE International Conference on Distributed Computing Systems (ICDCS 2017).
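
    As a structural illustration of the two-tier decomposition described above, the sketch below separates a global allocator that assigns each arriving VM request to a server from per-server local power managers. The greedy and threshold rules stand in for the paper's DRL policy, LSTM workload predictor, and model-free RL power manager; all names and numbers are assumptions.

```python
# Structural sketch of the two-tier idea: a global allocator assigns each arriving
# VM request to a server, and each server runs its own local power manager.
# The decision logic here is a placeholder for the DRL/RL policies in the paper.
import random

class GlobalAllocator:
    """Global tier: chooses a target server for each VM request."""
    def __init__(self, num_servers: int):
        self.load = [0.0] * num_servers          # simplified state: per-server load

    def allocate(self, vm_demand: float) -> int:
        # Placeholder policy: least-loaded server (the paper uses a DRL agent here).
        server = min(range(len(self.load)), key=lambda s: self.load[s])
        self.load[server] += vm_demand
        return server

class LocalPowerManager:
    """Local tier: per-server power manager driven by a workload estimate."""
    def __init__(self, sleep_threshold: float = 0.05):
        self.sleep_threshold = sleep_threshold

    def decide(self, predicted_load: float) -> str:
        # Placeholder for the LSTM workload predictor + model-free RL power manager.
        return "sleep" if predicted_load < self.sleep_threshold else "active"

allocator = GlobalAllocator(num_servers=4)
managers = [LocalPowerManager() for _ in range(4)]
for _ in range(5):
    target = allocator.allocate(vm_demand=random.uniform(0.1, 0.3))
    print(target, managers[target].decide(allocator.load[target]))
```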

    Bayesian model predictive control: Efficient model exploration and regret bounds using posterior sampling

    Tight performance specifications in combination with operational constraints make model predictive control (MPC) the method of choice in various industries. As the performance of an MPC controller depends on a sufficiently accurate objective and prediction model of the process, a significant effort in the MPC design procedure is dedicated to modeling and identification. Driven by the increasing amount of available system data and advances in the field of machine learning, data-driven MPC techniques have been developed to facilitate the MPC controller design. While these methods are able to leverage available data, they typically do not provide principled mechanisms to automatically trade off exploitation of available data and exploration to improve and update the objective and prediction model. To this end, we present a learning-based MPC formulation using posterior sampling techniques, which provides finite-time regret bounds on the learning performance while being simple to implement using off-the-shelf MPC software and algorithms. The performance analysis of the method is based on posterior sampling theory, and its practical efficiency is illustrated using a numerical example of a highly nonlinear dynamical car-trailer system.
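
    The exploration mechanism described above is posterior (Thompson) sampling: sample one model from the current posterior over the unknown dynamics, design the controller against that sample, then update the posterior with the collected data. The toy sketch below illustrates this loop for a scalar linear system, with a one-step control law standing in for the MPC solve; the system, prior, and constants are illustrative assumptions, not the paper's setup.

```python
# Toy sketch of the posterior-sampling loop: sample a model, control against it,
# update the posterior. Not the paper's formulation; a scalar stand-in only.
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true, noise_std = 0.9, 0.5, 0.05   # unknown dynamics: x+ = a*x + b*u + w

# Gaussian (Bayesian linear regression) posterior over theta = [a, b],
# stored as precision matrix and precision-weighted mean.
Lambda = np.eye(2)
b_vec = np.zeros(2)

x = 1.0
for episode in range(20):
    mu = np.linalg.solve(Lambda, b_vec)
    cov = np.linalg.inv(Lambda)
    a_hat, b_hat = rng.multivariate_normal(mu, cov)   # sample one model (exploration)
    for t in range(10):
        # Stand-in for the MPC solve: one-step control computed for the sampled model.
        u = -a_hat * x / b_hat if abs(b_hat) > 1e-3 else 0.0
        x_next = a_true * x + b_true * u + noise_std * rng.standard_normal()
        phi = np.array([x, u])                         # regression features
        Lambda += np.outer(phi, phi) / noise_std**2    # posterior precision update
        b_vec += phi * x_next / noise_std**2           # posterior mean (weighted) update
        x = float(np.clip(x_next, -1e3, 1e3))          # keep the toy simulation bounded
print("posterior mean for [a, b]:", np.linalg.solve(Lambda, b_vec))
```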

    Vision-Based Lane-Changing Behavior Detection Using Deep Residual Neural Network

    Accurate lane localization and lane-change detection are crucial in advanced driver assistance systems and autonomous driving systems for safer and more efficient trajectory planning. Conventional localization devices such as the Global Positioning System (GPS) only provide road-level resolution for car navigation, which is insufficient for lane-level decision making. The state-of-the-art technique for lane localization is to use Light Detection and Ranging (LiDAR) sensors to correct the global localization error and achieve centimeter-level accuracy, but real-time implementation and wide adoption of LiDAR are still limited by its computational burden and current cost. As a cost-effective alternative, vision-based lane-change detection has been highly regarded for affordable autonomous vehicles to support lane-level localization. A deep learning-based computer vision system is developed to detect lane-change behavior using images captured by a front-view camera mounted on the vehicle and data from the inertial measurement unit during highway driving. Testing results on real-world driving data show that the proposed method is robust, runs in real time, and achieves around 87% lane-change detection accuracy. Compared to the average human reaction to visual stimuli, the proposed computer vision system works 9 times faster, which makes it capable of helping make life-saving decisions in time.
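
    As an illustration of the kind of residual-network classifier such a system might use, the sketch below wires a small ResNet backbone to a three-way lane-change decision (keep lane, change left, change right) over front-camera frames; the backbone choice, label set, and input size are assumptions, and the IMU fusion mentioned in the abstract is omitted.

```python
# Illustrative residual-network lane-change classifier over front-camera frames.
# Assumed backbone and label set; not the paper's exact model.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # keep lane, change left, change right (assumed label set)

model = models.resnet18(weights=None)                  # residual backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

frames = torch.randn(4, 3, 224, 224)                   # a batch of 4 RGB frames
logits = model(frames)
predictions = logits.argmax(dim=1)                     # per-frame lane-change decision
```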

    The State-of-the-art of Coordinated Ramp Control with Mixed Traffic Conditions

    Ramp metering, a traditional traffic control strategy for conventional vehicles, has been widely deployed around the world since the 1960s. On the other hand, the last decade has witnessed significant advances in connected and automated vehicle (CAV) technology and its great potential for improving safety, mobility, and environmental sustainability. Therefore, a large amount of research has been conducted on cooperative ramp merging for CAVs only. However, the phase of mixed traffic, namely the coexistence of both human-driven vehicles and CAVs, is expected to last for a long time. Since there is little research on system-wide ramp control under mixed traffic conditions, this paper aims to close the gap by proposing an innovative system architecture and reviewing the state-of-the-art studies on the key components of the proposed system. These components include traffic state estimation, ramp metering, driving behavior modeling, and coordination of CAVs. Together, the reviewed literature plots an extensive landscape for the proposed system-wide coordinated ramp control with mixed traffic conditions.
    Comment: 8 pages, 1 figure, IEEE Intelligent Transportation Systems Conference (ITSC) 2019.
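
    For context on the ramp metering component reviewed above, the sketch below implements the classical ALINEA feedback metering law, which updates the metering rate from downstream occupancy measurements; the gain, target occupancy, rate bounds, and occupancy trace are illustrative assumptions, and this is not the coordinated controller proposed in the paper.

```python
# Minimal sketch of the classical ALINEA feedback ramp-metering law, one of the
# traditional ramp metering strategies such a survey covers. Constants are illustrative.
def alinea_rate(prev_rate: float, occ_measured: float, occ_target: float = 0.18,
                gain: float = 70.0, r_min: float = 200.0, r_max: float = 1800.0) -> float:
    """Update the metering rate (veh/h) from downstream occupancy feedback."""
    rate = prev_rate + gain * (occ_target - occ_measured)
    return max(r_min, min(r_max, rate))            # keep within the allowed metering range

rate = 900.0
for occ in [0.12, 0.16, 0.20, 0.24, 0.22]:         # example downstream occupancy readings
    rate = alinea_rate(rate, occ)
    print(round(rate, 1))
```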