
    Heterogeneous V2V Communications in Multi-Link and Multi-RAT Vehicular Networks

    Connected and automated vehicles will enable advanced traffic safety and efficiency applications thanks to the dynamic exchange of information between vehicles, and between vehicles and infrastructure nodes. Connected vehicles can utilize IEEE 802.11p for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. However, a widespread deployment of connected vehicles and the introduction of connected automated driving applications will notably increase the bandwidth and scalability requirements of vehicular networks. This paper proposes to address these challenges through the adoption of heterogeneous V2V communications in multi-link and multi-RAT vehicular networks. In particular, the paper proposes the first distributed (and decentralized) context-aware heterogeneous V2V communications algorithm that is technology and application agnostic, and that allows each vehicle to autonomously and dynamically select its communication technology taking into account its application requirements and the communication context conditions. This study demonstrates the potential of heterogeneous V2V communications, and the capability of the proposed algorithm to satisfy the vehicles' application requirements while approaching the estimated upper-bound network capacity.
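
As a rough illustration of what such a decentralized, technology- and application-agnostic selection could look like, the sketch below lets a single vehicle choose among locally observed radio access technologies based on its application requirements and locally estimated context conditions. The RAT names, the requirement and context fields, and the greedy least-load rule are illustrative assumptions, not the algorithm evaluated in the paper.

```python
# Hedged sketch: per-vehicle, context-aware selection of a V2V radio access
# technology (RAT). The RAT names, context metrics, requirement fields, and
# the greedy least-load rule are illustrative assumptions, not the algorithm
# evaluated in the paper.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class AppRequirements:
    max_latency_ms: float    # latency the application can tolerate
    min_reliability: float   # required packet delivery ratio (0..1)


@dataclass
class RATContext:
    name: str
    est_latency_ms: float    # latency estimated from locally observed conditions
    est_reliability: float   # delivery ratio estimated from local observations
    est_channel_load: float  # fraction of channel capacity already in use (0..1)


def select_rat(app: AppRequirements, observed: list[RATContext]) -> str | None:
    """Run locally by each vehicle, with no coordination: keep the RATs whose
    observed context satisfies the application requirements, then prefer the
    least-loaded one so that the aggregate load approaches network capacity."""
    feasible = [
        rat for rat in observed
        if rat.est_latency_ms <= app.max_latency_ms
        and rat.est_reliability >= app.min_reliability
    ]
    if not feasible:
        return None  # no technology currently meets the requirements
    return min(feasible, key=lambda rat: rat.est_channel_load).name


if __name__ == "__main__":
    cam = AppRequirements(max_latency_ms=100.0, min_reliability=0.9)
    context = [
        RATContext("IEEE 802.11p", est_latency_ms=20.0,
                   est_reliability=0.92, est_channel_load=0.7),
        RATContext("C-V2X PC5", est_latency_ms=50.0,
                   est_reliability=0.95, est_channel_load=0.4),
    ]
    print(select_rat(cam, context))  # -> C-V2X PC5 under these example numbers
```

Because every vehicle runs the same rule on its own observations, no central coordinator is required; how the context estimates are obtained and how ties or infeasible cases are resolved would follow the paper's actual design.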

    Automating Vehicles by Deep Reinforcement Learning using Task Separation with Hill Climbing

    Within the context of autonomous driving, a model-based reinforcement learning algorithm is proposed for the design of neural-network-parameterized controllers. Classical model-based control methods, which include sampling- and lattice-based algorithms and model predictive control, suffer from a trade-off between model complexity and the computational burden of solving expensive optimization or search problems online at every short sampling time. To circumvent this trade-off, a two-step procedure is motivated: a controller is first learned offline, during training based on an arbitrarily complicated mathematical system model, and is then evaluated online as a fast feedforward policy. The contribution of this paper is a simple gradient-free, model-based algorithm for deep reinforcement learning using task separation with hill climbing (TSHC). In particular, (i) simultaneous training on separate deterministic tasks with the purpose of encoding many motion primitives in a neural network, and (ii) the use of maximally sparse rewards in combination with virtual velocity constraints (VVCs) in setpoint proximity are advocated.
    Comment: 10 pages, 6 figures, 1 table
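
To make the gradient-free, model-based idea concrete, the sketch below runs a plain hill climb over the weights of a small feedforward controller, accepting a random perturbation only if it improves the summed return over several deterministic tasks. The environment interface, network size, perturbation scale, and sparse reward are assumptions for illustration; the paper's exact reward shaping and virtual velocity constraints are not reproduced here.

```python
# Hedged sketch of gradient-free hill climbing over the parameters of a small
# neural-network controller, trained jointly on several deterministic tasks.
# The environment interface, network size, perturbation scale, and sparse
# reward below are illustrative assumptions, not the TSHC paper's exact setup.
import numpy as np


def init_params(rng, n_in=4, n_hidden=16, n_out=2):
    """Flat parameter vector for a one-hidden-layer tanh controller."""
    sizes = [(n_in, n_hidden), (n_hidden,), (n_hidden, n_out), (n_out,)]
    flat = np.concatenate([rng.normal(0.0, 0.1, size=int(np.prod(s))) for s in sizes])
    return flat, sizes


def controller(params, sizes, obs):
    """Feedforward evaluation of the trained controller (fast at run time)."""
    mats, i = [], 0
    for s in sizes:
        n = int(np.prod(s))
        mats.append(params[i:i + n].reshape(s))
        i += n
    w1, b1, w2, b2 = mats
    return np.tanh(obs @ w1 + b1) @ w2 + b2


def rollout_return(params, sizes, task, horizon=100):
    """Run one deterministic task. The task object is a placeholder assumed to
    provide reset()/step() and a sparse (mostly zero) reward signal."""
    obs = task.reset()
    total = 0.0
    for _ in range(horizon):
        obs, reward, done = task.step(controller(params, sizes, obs))
        total += reward          # mostly zero when rewards are maximally sparse
        if done:
            break
    return total


def hill_climb(tasks, iterations=1000, sigma=0.05, seed=0):
    """Keep a parameter perturbation only if it improves the summed return over
    all tasks (simultaneous training on separate deterministic tasks)."""
    rng = np.random.default_rng(seed)
    params, sizes = init_params(rng)
    best = sum(rollout_return(params, sizes, t) for t in tasks)
    for _ in range(iterations):
        candidate = params + sigma * rng.normal(size=params.shape)
        score = sum(rollout_return(candidate, sizes, t) for t in tasks)
        if score > best:          # greedy hill-climbing acceptance rule
            params, best = candidate, score
    return params, sizes
```

Because only total returns are compared, no gradients of the system model or reward are needed, which is what allows an arbitrarily complicated model to be used during offline training while the deployed controller remains a cheap feedforward evaluation.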