    A predefined channel coefficients library for vehicle-to-vehicle communications

    It is noticeable that most VANET communication tests are assessed through simulation, and in a majority of simulation results the physical layer suffers from an apparent lack of realism. The vehicular channel model has therefore become a critical issue in the field of intelligent transport systems (ITS), and a more robust channel model is needed to reflect reality. This paper provides an open-access, predefined channel coefficients library. The library is based on 2x2 and 4x4 Multiple-Input Multiple-Output (MIMO) systems in V2V communications, using the Spatial Channel Model Extended (SCME), which helps to reduce the overall simulation time. In addition, it provides a more realistic channel model for V2V communications, considering ranges of speeds and distances, multipath and sub-path signals, different angles of arrival and departure, and both line-of-sight and non-line-of-sight conditions. An intensive evaluation process has taken place to validate the library, and acceptable results are produced. An open-access predefined library enables researchers in the relevant communities to test and evaluate complicated vehicular communication scenarios more broadly, with less time and effort.
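
    As a hedged illustration of how such a precomputed coefficients library might be consumed, the Python sketch below loads a set of stored complex channel matrices and applies one snapshot to a transmit vector (y = Hx + n). The file name, array layout and noise model are assumptions made for illustration, not part of the published library.

        import numpy as np

        # Assumed layout: one complex channel matrix per snapshot,
        # shape (num_snapshots, num_rx, num_tx), e.g. 4x4 MIMO (hypothetical file name).
        coeffs = np.load("v2v_scme_4x4_los.npy")

        def apply_channel(tx_symbols, snapshot, noise_std=0.01):
            """Pass a transmit vector through one stored channel snapshot: y = H x + n."""
            H = coeffs[snapshot]                                   # (num_rx, num_tx)
            noise = noise_std * (np.random.randn(H.shape[0])
                                 + 1j * np.random.randn(H.shape[0])) / np.sqrt(2)
            return H @ tx_symbols + noise

        rx = apply_channel(np.ones(coeffs.shape[2], dtype=complex), snapshot=0)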

    Trace-driven simulation for LoRaWan 868 MHz propagation in an urban scenario

    Reinforcement Learning for UAV Attitude Control

    Autopilot systems are typically composed of an "inner loop" providing stability and control, while an "outer loop" is responsible for mission-level objectives, e.g. waypoint navigation. Autopilot systems for UAVs are predominantly implemented using Proportional-Integral-Derivative (PID) control systems, which have demonstrated exceptional performance in stable environments. However, more sophisticated control is required to operate in unpredictable and harsh environments. Intelligent flight control systems are an active area of research addressing the limitations of PID control, most recently through the use of reinforcement learning (RL), which has had success in other applications such as robotics. However, previous work has focused primarily on using RL for the mission-level controller. In this work, we investigate the performance and accuracy of the inner control loop providing attitude control when using intelligent flight control systems trained with the state-of-the-art RL algorithms Deep Deterministic Policy Gradient (DDPG), Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO). To investigate these unknowns, we first developed an open-source, high-fidelity simulation environment to train a flight controller for attitude control of a quadrotor through RL. We then use our environment to compare their performance to that of a PID controller, to identify whether using RL is appropriate in high-precision, time-critical flight control.
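
    For context, the sketch below shows a minimal PID inner-loop attitude (rate) controller of the kind the abstract contrasts with RL-trained policies. The gains, output limits and per-axis layout are illustrative assumptions, not values from the paper.

        class PID:
            """Single-axis PID controller with clamped output."""
            def __init__(self, kp, ki, kd, out_limit=1.0):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.out_limit = out_limit
                self.integral = 0.0
                self.prev_error = 0.0

            def step(self, setpoint, measurement, dt):
                error = setpoint - measurement
                self.integral += error * dt
                derivative = (error - self.prev_error) / dt
                self.prev_error = error
                out = self.kp * error + self.ki * self.integral + self.kd * derivative
                return max(-self.out_limit, min(self.out_limit, out))

        # One PID per body axis (roll, pitch, yaw) tracking angular-rate setpoints.
        rate_pids = [PID(0.06, 0.03, 0.002) for _ in range(3)]

        def attitude_control(rate_setpoints, gyro_rates, dt=0.002):
            """Map desired vs measured body rates to normalized actuator commands."""
            return [pid.step(sp, meas, dt)
                    for pid, sp, meas in zip(rate_pids, rate_setpoints, gyro_rates)]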

    Automating Vehicles by Deep Reinforcement Learning using Task Separation with Hill Climbing

    Within the context of autonomous driving, a model-based reinforcement learning algorithm is proposed for the design of neural-network-parameterized controllers. Classical model-based control methods, which include sampling- and lattice-based algorithms and model predictive control, suffer from a trade-off between model complexity and the computational burden required for the online solution of expensive optimization or search problems at every short sampling time. To circumvent this trade-off, a two-step procedure is motivated: a controller is first learned during offline training based on an arbitrarily complicated mathematical system model, and then evaluated online as a fast feedforward mapping. The contribution of this paper is a simple, gradient-free, model-based algorithm for deep reinforcement learning using task separation with hill climbing (TSHC). In particular, (i) simultaneous training on separate deterministic tasks, with the purpose of encoding many motion primitives in a neural network, and (ii) the use of maximally sparse rewards in combination with virtual velocity constraints (VVCs) in setpoint proximity are advocated.
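
    The gradient-free idea named in the abstract can be sketched as simple hill climbing over a flattened controller parameter vector: perturb the parameters and keep the perturbation only when the summed task return improves. The rollout interface, noise scale and acceptance rule below are illustrative assumptions, not the paper's exact TSHC procedure.

        import numpy as np

        def hill_climb(evaluate_return, num_params, iterations=1000,
                       noise_std=0.05, seed=0):
            """evaluate_return(theta) -> summed (sparse) return over all training tasks."""
            rng = np.random.default_rng(seed)
            theta = rng.normal(scale=0.1, size=num_params)    # flattened NN controller weights
            best_return = evaluate_return(theta)
            for _ in range(iterations):
                candidate = theta + rng.normal(scale=noise_std, size=num_params)
                candidate_return = evaluate_return(candidate)
                if candidate_return > best_return:            # greedy, gradient-free acceptance
                    theta, best_return = candidate, candidate_return
            return theta, best_return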

    Node Density Estimation in VANETs Using Received Signal Power

    Accurately estimating node density in Vehicular Ad hoc Networks (VANETs) is a challenging and crucial task. Various approaches exist, yet none takes advantage of physical-layer parameters in a distributed fashion. This paper describes a framework that allows individual nodes to estimate the node density of their surrounding network independently of beacon messages and other infrastructure-based information. The proposal relies on three factors: 1) a discrete event simulator to estimate the average number of nodes transmitting simultaneously; 2) a realistic channel model for the VANET environment; and 3) a node density estimation technique. This work provides every vehicle on the road with two equations relating 1) received signal strength to the number of simultaneously transmitting nodes, and 2) the number of simultaneously transmitting nodes to node density. Access to these equations enables individual nodes to estimate their real-time surrounding node density. The system is designed to work in the most complicated scenarios, where nodes have no information about the network topology; the results indicate that the system is reasonably reliable and accurate. The outcome of this work has various applications and can be used for any protocol that is affected by node density.
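
    The two-equation chain described above can be illustrated with the short sketch below: a measured received signal power is first mapped to an estimated number of simultaneous transmitters, which is then mapped to a node density. The functional forms and coefficients are placeholders, not the fitted equations from the paper.

        def transmitters_from_rss(rss_dbm, a=0.8, b=70.0):
            """Placeholder fitted relation: simultaneous transmitters as a function of RSS (dBm)."""
            return max(0.0, a * (rss_dbm + b))

        def density_from_transmitters(n_tx, c=12.5):
            """Placeholder fitted relation: node density (vehicles per km) from transmitter count."""
            return c * n_tx

        def estimate_density(rss_dbm):
            """Each vehicle runs this locally on its own RSS measurements."""
            return density_from_transmitters(transmitters_from_rss(rss_dbm))

        print(estimate_density(-62.0))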