19 research outputs found

    Centralized and Distributed Machine Learning-Based QoT Estimation for Sliceable Optical Networks

    Dynamic network slicing has emerged as a promising and fundamental framework for meeting 5G's diverse use cases. As machine learning (ML) is expected to play a pivotal role in the efficient control and management of these networks, in this work we examine the ML-based Quality-of-Transmission (QoT) estimation problem in the dynamic network slicing context, where each slice has to meet a different QoT requirement. We examine ML-based QoT frameworks with the aim of finding QoT models that are fine-tuned to the diverse QoT requirements. Centralized and distributed frameworks are examined and compared according to their accuracy and training time. We show that the distributed QoT models outperform the centralized QoT model, especially as the number of diverse QoT requirements increases. Comment: accepted for presentation at the IEEE GLOBECOM 201
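
    As a toy illustration of the centralized-versus-distributed comparison above, the sketch below fits one pooled threshold "model" over all slices and one model per slice. All names, data, and the trivial threshold learner are illustrative assumptions, not the paper's models.

    ```python
    # Hypothetical sketch: per-slice (distributed) vs shared (centralized) QoT models.

    def train_threshold_model(samples):
        """Fit a trivial 1-D threshold 'model': midpoint between class means."""
        pos = [x for x, y in samples if y == 1]
        neg = [x for x, y in samples if y == 0]
        return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

    def predict(threshold, x):
        return 1 if x >= threshold else 0

    # Two slices with different QoT requirements (synthetic 1-D features).
    slice_data = {
        "slice_a": [(0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1)],
        "slice_b": [(0.5, 0), (0.7, 0), (0.9, 1), (1.1, 1)],
    }

    # Centralized: one model over all samples; distributed: one model per slice.
    central = train_threshold_model([s for d in slice_data.values() for s in d])
    distributed = {k: train_threshold_model(v) for k, v in slice_data.items()}

    # The per-slice model reflects slice_b's stricter requirement,
    # rejecting a sample that the pooled model would accept.
    print(predict(distributed["slice_b"], 0.75), predict(central, 0.75))
    ```

    With more slices and more diverse requirements, a single pooled threshold is pulled toward an average that fits none of the slices well, which mirrors the reported gap between the two frameworks.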

    A Multi-task Learning Framework for Drone State Identification and Trajectory Prediction

    The rise of unmanned aerial vehicle (UAV) operations, as well as the vulnerability of the UAVs' sensors, has led to the need for proper monitoring systems for detecting any abnormal behavior of the UAV. This work addresses this problem by proposing an innovative multi-task learning framework (MLF-ST) for UAV state identification and trajectory prediction, which aims to optimize the performance of both tasks simultaneously. A deep neural network with shared layers is employed to extract features from the input data, utilizing drone sensor measurements and historical trajectory information. Moreover, a novel loss function is proposed that combines the two objectives, encouraging the network to jointly learn the features that are most useful for both tasks. The proposed MLF-ST framework is evaluated on a large dataset of UAV flights, illustrating that it is able to outperform various state-of-the-art baseline techniques in terms of both state identification and trajectory prediction. The evaluation of the proposed framework, using real-world data, demonstrates that it can enable applications such as UAV-based surveillance and monitoring, while also improving the safety and efficiency of UAV operations.
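
    The joint objective described above can be sketched as a weighted sum of a state-classification loss and a trajectory-regression loss. The weight `alpha` and the loss forms below are assumptions for illustration, not the paper's exact loss.

    ```python
    import math

    # Illustrative multi-task objective in the spirit of MLF-ST (assumed form).

    def cross_entropy(probs, label):
        """Classification loss for the state-identification head."""
        return -math.log(probs[label])

    def mse(pred, target):
        """Regression loss for the trajectory-prediction head."""
        return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

    def multi_task_loss(state_probs, state_label, traj_pred, traj_true, alpha=0.5):
        """L = alpha * L_state + (1 - alpha) * L_trajectory."""
        return alpha * cross_entropy(state_probs, state_label) + \
            (1 - alpha) * mse(traj_pred, traj_true)

    loss = multi_task_loss([0.1, 0.8, 0.1], 1, [1.0, 2.0], [1.5, 2.5], alpha=0.5)
    print(round(loss, 4))  # -> 0.2366
    ```

    Minimizing this single scalar drives the shared layers to learn features useful to both heads, which is the mechanism the abstract credits for the joint gains.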

    Adaptive Autopilot: Constrained DRL for Diverse Driving Behaviors

    In pursuit of autonomous vehicles, achieving human-like driving behavior is vital. This study introduces adaptive autopilot (AA), a unique framework utilizing constrained deep reinforcement learning (C-DRL). AA aims to safely emulate human driving to reduce the necessity for driver intervention. Focusing on the car-following scenario, the process involves: (1) extracting data from the highD natural driving study and categorizing it into three driving styles using a rule-based classifier; (2) employing deep neural network (DNN) regressors to predict human-like acceleration across styles; (3) using C-DRL, specifically the soft actor-critic Lagrangian technique, to learn human-like safe driving policies. Results indicate effectiveness in each step, with the rule-based classifier distinguishing driving styles, the regressor model accurately predicting acceleration and outperforming traditional car-following models, and the C-DRL agents learning optimal policies for human-like driving across styles.
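
    The soft actor-critic Lagrangian technique mentioned above relies on a dual variable that prices constraint violations: it grows while the expected constraint cost exceeds its budget and shrinks otherwise. The costs and learning rate in this minimal sketch are illustrative assumptions.

    ```python
    # Dual (Lagrange multiplier) update used in constrained RL (sketch).

    def update_lagrange_multiplier(lmbda, avg_cost, budget, lr=0.1):
        """Gradient ascent on lambda for L = reward - lambda * (cost - budget);
        lambda is kept non-negative by projection."""
        return max(0.0, lmbda + lr * (avg_cost - budget))

    lmbda, budget = 0.0, 1.0
    for avg_cost in [2.0, 1.8, 1.5, 1.2, 0.9]:  # constraint violations shrinking
        lmbda = update_lagrange_multiplier(lmbda, avg_cost, budget)
    print(round(lmbda, 2))  # -> 0.24
    ```

    As training reduces violations, the multiplier stops growing, letting the policy trade safety pressure back for reward, which is how such agents settle on safe yet human-like behavior.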

    Edge Learning of Vehicular Trajectories at Regulated Intersections

    Trajectory prediction is crucial in assisting both human-driven and autonomous vehicles. Most of the existing approaches, however, focus on straight stretches of road and do not address trajectory prediction at intersections. This work aims to fill this gap by proposing a solution that copes with the higher complexity exhibited by the intersection scenario, leveraging the 5G-MEC capabilities. In particular, the reduced latency and edge computational power are exploited to centrally collect and process measurements from both vehicles (e.g., odometry) and road infrastructure (e.g., traffic light phases). Based on such a holistic system view, we develop a Long Short-Term Memory (LSTM) recurrent neural network which, as shown through simulations using a real-world dataset, provides high-accuracy trajectory predictions. The encountered challenges and advantages of the presented approach are analyzed in detail, paving the way for a new vehicle trajectory prediction methodology.
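
    The recurrent update at the heart of an LSTM predictor can be sketched with a single hand-weighted cell step over scalar inputs; the weights are illustrative constants, not trained values from the paper.

    ```python
    import math

    # One LSTM cell step in plain Python (scalar input and state, for clarity).

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def lstm_step(x, h_prev, c_prev, w):
        """One LSTM step; w maps gate name -> (input weight, hidden weight, bias)."""
        def gate(name, act):
            wx, wh, b = w[name]
            return act(wx * x + wh * h_prev + b)
        f = gate("forget", sigmoid)   # how much of the old cell state to keep
        i = gate("input", sigmoid)    # how much new information to write
        g = gate("cand", math.tanh)   # candidate cell update
        o = gate("output", sigmoid)   # how much of the cell state to expose
        c = f * c_prev + i * g
        h = o * math.tanh(c)
        return h, c

    w = {"forget": (0.5, 0.5, 0.0), "input": (0.5, 0.5, 0.0),
         "cand": (1.0, 0.5, 0.0), "output": (0.5, 0.5, 0.0)}

    h, c = 0.0, 0.0
    for x in [0.1, 0.2, 0.3]:  # e.g. successive positions along a trajectory
        h, c = lstm_step(x, h, c, w)
    print(round(h, 3))
    ```

    In the full model the hidden state `h` feeds an output layer that emits the predicted next positions; the cell state `c` is what lets the network carry intersection context (such as a red-light phase) across time steps.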

    Charging policies for PHEVs used for service delivery: a reinforcement learning approach

    This work examines a cost optimization problem for plug-in hybrid electric vehicles (PHEVs) used for service delivery, in the presence of energy consumption uncertainty. For the cost optimization problem, an optimal policy is found that dynamically decides, as the vehicle moves, at which charging station the vehicle should be charged, in order to minimize the service fuel cost. The problem is formulated as a Partially Observable Markov Decision Process (POMDP) and is solved by applying reinforcement learning (RL). The RL charging policy (RLCP), found after solving the POMDP, is compared to two benchmark policies and it is shown that RLCP outperforms both. Most importantly, RLCP can automatically adjust to significant variations in the vehicle's energy consumption behavior by continuously training the RLCP model according to the most recent information obtained from the vehicle's environment.
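
    A toy tabular Q-learning version of the charge-or-skip decision can illustrate the idea. The stations, prices, and fuel penalty below are assumptions, and the paper's actual formulation is a POMDP over energy-consumption uncertainty, not this small fully observable MDP.

    ```python
    import random

    random.seed(0)
    PRICES = [0.5, 0.2, 0.9]   # assumed per-station charging prices
    FUEL_PENALTY = 1.5         # assumed cost of finishing the route on fuel
    ACTIONS = ("skip", "charge")

    # State = (station index, already charged?); we learn cost-to-go values.
    q = {(s, c, a): 0.0
         for s in range(len(PRICES)) for c in (False, True) for a in ACTIONS}

    def step(s, charged, a):
        """Return (immediate cost, next charged flag) for action a at station s."""
        if a == "charge" and not charged:
            return PRICES[s], True
        return 0.0, charged

    for _ in range(2000):
        charged = False
        for s in range(len(PRICES)):
            # Epsilon-greedy over estimated costs (we minimize, not maximize).
            a = random.choice(ACTIONS) if random.random() < 0.3 else \
                min(ACTIONS, key=lambda x: q[(s, charged, x)])
            cost, nxt_charged = step(s, charged, a)
            if s + 1 < len(PRICES):
                target = cost + min(q[(s + 1, nxt_charged, b)] for b in ACTIONS)
            else:
                target = cost + (0.0 if nxt_charged else FUEL_PENALTY)
            q[(s, charged, a)] += 0.1 * (target - q[(s, charged, a)])
            charged = nxt_charged

    # Policy while still uncharged: waits for the cheapest station (index 1).
    policy = [min(ACTIONS, key=lambda a: q[(s, False, a)]) for s in range(len(PRICES))]
    print(policy)
    ```

    The paper's contribution lies in handling partial observability and drifting consumption behavior; the continuous retraining the abstract mentions corresponds to keeping updates like the one above running on fresh trip data.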

    On the Fair-Efficient Charging Scheduling of Electric Vehicles in Parking Structures

    This work examines the off-line electric vehicle (EV) scheduling problem for cloud-based parking operators that a priori accept parking reservations for EVs requesting charging services during their stay. Specifically, it examines the fair EV charging scheduling problem, where fairness refers to the achievable charging levels of EVs contending for energy utilities within a planning horizon. To find fair utility allocations, the α-fairness approach, inspired by welfare economics, is used; it is formulated as an integer linear program (ILP) and as an ant colony optimization (ACO), considering both the system's and the EV owners' constraints and requirements. It is shown that with this approach the operator is able to control the fairness-efficiency trade-off (with system efficiency affecting the operator's revenue) by appropriately selecting the inequality aversion parameter α to best meet targeted performance metrics. Further, it is shown that ACO, deriving near-optimal allocations, significantly outperforms the ILP-based algorithm in terms of processing time (up to 99%), thus it is a promising approach when optimal ILP allocations cannot be derived fast enough for a practical implementation.
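
    The α-fair welfare function underlying the approach can be sketched directly: α = 0 ranks allocations by total efficiency, while larger α increasingly favours equal shares. The candidate allocations below are illustrative, not the paper's instances.

    ```python
    import math

    # Alpha-fair welfare: sum of U(x) = x^(1-alpha) / (1-alpha), alpha >= 0.

    def alpha_fair_welfare(allocations, alpha):
        """alpha = 0: total throughput; alpha = 1: proportional fairness (log);
        alpha -> infinity approaches max-min fairness."""
        if alpha == 1.0:
            return sum(math.log(x) for x in allocations)
        return sum(x ** (1 - alpha) / (1 - alpha) for x in allocations)

    efficient = [9.0, 2.0]   # higher total energy delivered, but unequal
    fair = [5.0, 5.0]        # equal split, slightly lower total

    # alpha = 0 prefers the efficient split; alpha = 2 prefers the equal one.
    print(alpha_fair_welfare(efficient, 0.0), alpha_fair_welfare(fair, 0.0))
    print(round(alpha_fair_welfare(efficient, 2.0), 3),
          round(alpha_fair_welfare(fair, 2.0), 3))
    ```

    Dialing α is exactly the operator's fairness-efficiency knob described in the abstract: the ILP or ACO then searches for the charging schedule maximizing this welfare.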

    A probabilistic approach for failure localization

    This work considers the problem of fault localization in transparent optical networks. The aim is to localize single-link failures by utilizing statistical machine learning techniques trained on data that describe the network state upon current and past failure incidents. In particular, a Gaussian Process (GP) classifier is trained on historical data extracted from the examined network, with the goal of modeling and predicting the failure probability of each link therein. To limit the set of suspect links for every failure incident, the proposed approach is complemented with a Graph-Based Correlation heuristic. The proposed approach is tested on a dataset generated for an OFDM-based optical network, demonstrating that it achieves a high localization accuracy. The proposed scheme can be used by service providers to reduce the Mean Time To Repair of a failure.
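
    The Graph-Based Correlation heuristic can be sketched as narrowing the suspect set to links shared by all affected connections and used by no healthy one; the topology and connection paths below are assumed for illustration.

    ```python
    # Suspect-set narrowing for a single-link failure (illustrative topology).

    paths = {
        "conn1": {"A-B", "B-C", "C-D"},
        "conn2": {"E-B", "B-C", "C-F"},
        "conn3": {"A-B", "B-G"},
    }
    affected = {"conn1", "conn2"}   # connections that reported a failure

    # A single failed link must lie on every affected connection's path...
    suspects = set.intersection(*(paths[c] for c in affected))
    # ...and cannot lie on the path of any connection that is still healthy.
    for c in set(paths) - affected:
        suspects -= paths[c]
    print(sorted(suspects))  # -> ['B-C']
    ```

    In the paper's scheme a GP classifier would then rank any remaining suspects by their learned failure probability, rather than leaving ties unresolved as this sketch does.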

    Performance analysis of a data-driven quality-of-transmission decision approach on a dynamic multicast-capable metro optical network

    The performance of a data-driven quality-of-transmission (QoT) model is investigated on a dynamic metro optical network capable of supporting both unicast and multicast connections. The data-driven QoT technique analyzes data of previous connection requests and, through a training procedure performed on a neural network, returns a data-driven QoT model that near-accurately decides the QoT of newly arriving requests. The advantages of the data-driven QoT approach over existing Q-factor techniques are that it is self-adaptive, it is a function of data that are independent of the physical layer impairments (PLIs), eliminating the requirement for specific measurement equipment, and it does not assume the existence of a system with extensive processing and storage capabilities. Further, it is fast in processing new data and fast in finding a near-accurate QoT model, provided that such a model exists. On the contrary, existing Q-factor models lack self-adaptiveness; they are a function of the PLIs, and their evaluation requires time-consuming simulations, lab experiments, specific measurement equipment, and considerable human effort. It is shown that the data-driven QoT model exhibits a high accuracy (close to 92%-95%) in determining, during the provisioning phase, whether a connection to be established has sufficient (or insufficient) QoT, when compared with the QoT decisions performed by the Q-factor model. It is also shown that, when sufficient wavelength capacity is available in the network, network performance is not significantly affected when the data-driven QoT model is used for the dynamic system instead of the Q-factor model, an indicator that the proposed approach can efficiently replace the existing Q-factor model.
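
    A minimal sketch of such a data-driven QoT decision model, assuming synthetic features (path length, hop count) and a plain logistic classifier in place of the paper's neural network:

    ```python
    import math

    # Logistic classifier deciding sufficient (1) vs insufficient (0) QoT
    # from features of past connections; data and features are synthetic.

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # (length in 100 km, hop count) -> QoT label from past provisioning.
    data = [((1.0, 2), 1), ((2.0, 3), 1), ((5.0, 6), 0), ((6.0, 8), 0),
            ((1.5, 2), 1), ((5.5, 7), 0)]

    w, b, lr = [0.0, 0.0], 0.0, 0.1
    for _ in range(2000):                      # plain stochastic gradient descent
        for (x1, x2), y in data:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            g = p - y                          # gradient of the log loss
            w[0] -= lr * g * x1
            w[1] -= lr * g * x2
            b -= lr * g

    def decide_qot(length, hops):
        """Provisioning-time decision for a newly arriving request."""
        return sigmoid(w[0] * length + w[1] * hops + b) >= 0.5

    print(decide_qot(1.2, 2), decide_qot(5.8, 7))  # -> True False
    ```

    The appeal noted in the abstract is visible even here: the model is retrained from observed outcomes alone, with no PLI measurements or Q-factor simulation in the loop.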