    Control of autonomous multibody vehicles using artificial intelligence

    The field of autonomous driving has evolved rapidly in the last few years, and a lot of research has been dedicated to the control of autonomous vehicles, especially car-like ones. Due to the recent successes of artificial intelligence techniques, even more complex problems can be solved, such as the control of autonomous multibody vehicles. Multibody vehicles can accomplish transportation tasks faster and more cheaply than multiple individual mobile vehicles or robots. But even for a human, driving a truck-trailer is a challenging task, because of the complex structure of the vehicle and the maneuvers it has to perform, such as reverse parking at a loading dock. In addition, a detailed technical solution for an autonomous truck is challenging: even though many single-domain solutions are available, e.g. for path planning, no holistic framework exists. From the control point of view, designing such a controller is also a highly complex problem, which makes it a widely used benchmark. In this thesis, a concept covering a plurality of tasks is presented. In contrast to most of the existing literature, a holistic approach is developed which combines many stand-alone systems into one framework. The framework consists of a plurality of modules, such as modeling, path planning, training of neural networks, control, jackknife avoidance, direction switching, simulation, visualization and testing. There are model-based and model-free control approaches, and the system comprises various path-planning methods and target types. It also accounts for noisy sensors and the simulation of whole environments. To achieve superior performance, several modules had to be developed, redesigned and interlinked with each other. A path-planning module with multiple available methods optimizes the desired position and also provides an efficient implementation for trajectory following. Classical approaches, such as optimal control (LQR) and model predictive control (MPC), can safely control a truck with a given model. Machine learning based approaches, such as deep reinforcement learning, are designed, implemented, trained and tested successfully. Furthermore, switching of the driving direction is enabled by continuous analysis of a cost function to avoid collisions and improve driving behavior. This thesis introduces a working system of all integrated modules. The proposed system can complete complex scenarios, including situations with buildings and partial trajectories. In thousands of simulations, the system using the LQR controller or the reinforcement learning agent had a success rate of >95 % in steering a truck with one trailer, even with added noise. For the development of autonomous vehicles, the implementation of AI at scale is important, which is why a digital twin of the truck-trailer is used to simulate the full system at a much higher speed than data could be collected in real life.
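
As a rough illustration of the model-based branch of such a framework, the sketch below computes LQR feedback gains for a kinematic single-trailer model linearized about straight-line reversing. The state definition, parameter values and cost weights are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch: LQR gains for a linearized kinematic truck-trailer model
# (illustrative assumptions, not the thesis' actual model or tuning).
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed parameters: reversing speed v (m/s, negative = backward),
# tractor wheelbase L1 and hitch-to-trailer-axle length L2 (m).
v, L1, L2 = -1.0, 3.5, 8.0

# State x = [trailer lateral offset, trailer yaw, hitch angle],
# input u = [steering angle]; small-angle linearization about straight backing.
A = np.array([[0.0, v,   0.0],
              [0.0, 0.0, v / L2],
              [0.0, 0.0, -v / L2]])
B = np.array([[0.0],
              [0.0],
              [v / L1]])

# Quadratic cost weights (illustrative): penalize offset and hitch angle.
Q = np.diag([1.0, 0.5, 2.0])
R = np.array([[0.1]])

# Solve the continuous-time algebraic Riccati equation and form the gain K.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # control law: u = -K @ x
print("LQR gain K:", K)
```

With a given model, the resulting gain stabilizes the hitch angle and lateral error during reversing; the thesis' model-free (reinforcement learning) controllers would replace this fixed gain with a learned policy.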

    Comparison of Modern Controls and Reinforcement Learning for Robust Control of Autonomously Backing Up Tractor-Trailers to Loading Docks

    The performance of two controllers is assessed for generalization in the path-following task of autonomously backing up a tractor-trailer. Starting from random locations and orientations, paths are generated to loading docks with arbitrary pose using Dubins curves. The combination vehicles can be varied in wheelbase, hitch length, weight distribution, and tire cornering stiffness. The closed-form calculation of the gains for the Linear Quadratic Regulator (LQR) relies heavily on having an accurate model of the plant. However, real-world applications cannot expect to have an updated model for each new trailer. Finding alternative robust controllers for when the trailer model is changed was the motivation of this research. Reinforcement learning, with neural networks as its function approximators, can allow for generalized control from learned experience that is characterized by a scalar reward value. The Linear Quadratic Regulator and the Deep Deterministic Policy Gradient (DDPG) are compared for robust control when the trailer is changed. This investigation quantifies the capabilities and limitations of both controllers in simulation using a kinematic model. The controllers are evaluated for generalization by altering the kinematic model's trailer wheelbase, hitch length, and velocity from the nominal case. To close the gap between simulation and reality, the control methods are also assessed with sensor noise and various controller frequencies. The root mean squared and maximum errors from the path are used as metrics, along with the number of times the controllers cause the vehicle to jackknife or reach the goal. Considering the runs where the LQR did not cause the trailer to jackknife, the LQR tended to have slightly better precision. DDPG, however, controlled the trailer successfully on the paths where the LQR jackknifed. Reinforcement learning was found to sacrifice a short-term reward, such as precision, to maximize the expected future reward, such as reaching the loading dock. The reinforcement learning agent learned a policy that imposed nonlinear constraints such that it never jackknifed, even when it was not the trailer it was trained on.
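
For context, a minimal kinematic backing simulation of the kind both controllers could be evaluated on might look like the sketch below; the model equations, jackknife threshold and metrics (RMS and maximum cross-track error) are illustrative assumptions, not the study's actual implementation.

```python
# Minimal sketch of a kinematic tractor-trailer backing step with a
# jackknife check and path-error metrics (illustrative assumptions only).
import math

L1, L2 = 3.5, 8.0                      # assumed wheelbase and hitch-to-axle length (m)
JACKKNIFE_LIMIT = math.radians(60)     # assumed hitch-angle limit

def step(state, steer, v=-1.0, dt=0.05):
    """Advance the trailer pose (x, y, trailer yaw, hitch angle) one time step."""
    x, y, yaw_t, gamma = state
    yaw_rate_tractor = v * math.tan(steer) / L1
    yaw_rate_trailer = v * math.sin(gamma) / L2
    x += v * math.cos(gamma) * math.cos(yaw_t) * dt
    y += v * math.cos(gamma) * math.sin(yaw_t) * dt
    yaw_t += yaw_rate_trailer * dt
    gamma += (yaw_rate_tractor - yaw_rate_trailer) * dt
    return (x, y, yaw_t, gamma)

def jackknifed(state):
    """True if the hitch angle exceeds the assumed safe limit."""
    return abs(state[3]) > JACKKNIFE_LIMIT

def path_errors(cross_track_errors):
    """Root mean squared and maximum deviation from the reference path."""
    rms = math.sqrt(sum(e * e for e in cross_track_errors) / len(cross_track_errors))
    return rms, max(abs(e) for e in cross_track_errors)
```

A controller, whether LQR gains or a trained DDPG actor, would supply `steer` at each step; a run would then be scored by these error metrics together with whether `jackknifed` was ever triggered or the dock was reached.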

    Deep Learning Based Methods for Outdoor Robot Localization and Navigation

    The number of elderly people is increasing around the globe. To support the ageing society, mobile robots are one viable option for assisting the elderly in their daily activities. These activities take place anywhere, indoors or outdoors. Although outdoor activities benefit the elderly in many ways, outdoor environments pose difficulties due to their unpredictable nature. Mobile robots supporting humans in outdoor environments must automatically traverse the various difficulties in these environments using suitable navigation systems. Core components of mobile robots always include the navigation subsystem: the navigation system guides the robot to its destination, where it can perform its designated tasks. Various tools can be chosen for navigation systems, and outdoor environments are mostly open to conventional navigation tools such as Global Positioning System (GPS) devices. In this thesis, three systems for localization and navigation of mobile robots based on visual data and deep learning algorithms are proposed. The first localization system is based on landmark detection. The Faster Regional-Convolutional Neural Network (Faster R-CNN) detects landmarks and signs in the captured image. A Feed-Forward Neural Network (FFNN) is trained to determine the robot's location coordinates and compass orientation from the detected landmarks. The dataset consists of images, geolocation data and labeled bounding boxes to train and test the two proposed localization methods. Results are reported as absolute errors between the localization results and the reference geolocation data in the dataset. The second system is a navigation system based on visual data and a deep reinforcement learning algorithm called Deep Q-Network (DQN). The DQN automatically guides the mobile robot using visual data in the form of images received from the single Universal Serial Bus (USB) camera attached to the robot. The DQN consists of a convolutional neural network (CNN) and a reinforcement learning algorithm named Q-Learning. It can make decisions with visual data as input, using experience gained from the consequences of trial-and-error attempts. Our DQN agents are trained in simulation environments provided by a platform based on the First-Person Shooter (FPS) game ViZDoom. Training is carried out in simulation to avoid any possible damage to the real robot during the trial-and-error process. The perspective in the simulation is the same as if a camera were attached to the front of the mobile robot. There are many differences between the simulation and the real world; we therefore applied a marker-based Augmented Reality (AR) algorithm to reduce these differences by altering the visual data from the camera with resources from the simulation. The second system assigns the robot a simple navigation task in which the starting location is fixed but the goal location is random within a designated zone. The robot must be able to detect and track the goal object using a USB camera as its only sensor. Once started, the robot must move from its starting location to the designated goal object. Our DQN navigation method is tested in the simulation and on the real robot. The performance of our DQN is measured quantitatively via average total scores and the number of successful navigation attempts.
The results show that our DQN can effectively guide a mobile robot to the goal object in the simple navigation task, both in the simulation and in the real world. The third system employs a Transfer Learning (TL) strategy to reduce the training time and resources required for training DQN agents on newly added tasks. The new task is to reach the goal while also avoiding obstacles, and the starting and goal locations are both random within specified areas. The employed transfer learning strategy uses the whole network of the DQN agent trained for the first, simple navigation task as the base for training the DQN agent for the second task. The training in our TL strategy decreases the exploration factor, which causes the agent to rely on the existing knowledge in the base network more than on randomly selected actions during training. This results in decreased training time, as optimal solutions can be found faster than when training from scratch. We evaluate the performance of our TL strategy by comparing DQN agents trained with our TL at different exploration factor values against a DQN agent trained from scratch. Additionally, the TL agents are trained with a decreased number of episodes to demonstrate their performance more extensively. All DQN agents for the second navigation task are tested in the simulation to avoid any possible and uncontrollable damage from the obstacles. Performance is measured through successful attempts and average total scores, as in the first navigation task. The results show that DQN agents trained via the TL strategy can greatly outperform the agent trained from scratch, despite the lower number of training episodes.
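
As an illustration of the transfer-learning idea described above, the sketch below copies the weights of a base DQN into a new agent and lowers the exploration factor so that it leans on the inherited policy. The network architecture, action count and epsilon values are illustrative assumptions, not the thesis' actual settings.

```python
# Minimal sketch: transfer a trained DQN to a new task with reduced
# exploration (architecture and hyperparameters are assumptions).
import random
import torch
import torch.nn as nn

N_ACTIONS = 4  # assumed discrete action set (e.g. forward, left, right, stop)

class QNetwork(nn.Module):
    """Small CNN mapping an 84x84 grayscale frame to Q-values."""
    def __init__(self, n_actions=N_ACTIONS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
                                  nn.Linear(256, n_actions))

    def forward(self, x):
        return self.head(self.features(x))

def make_transfer_agent(base_net, epsilon=0.1):
    """Start the new task from the base network's weights with low exploration."""
    new_net = QNetwork()
    new_net.load_state_dict(base_net.state_dict())  # reuse learned knowledge

    def act(frame):  # epsilon-greedy policy over the transferred Q-network
        if random.random() < epsilon:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            return int(new_net(frame).argmax(dim=1).item())

    return new_net, act

# An agent trained from scratch would instead start from random weights and
# an exploration factor near 1.0, which is what makes transfer faster here.
```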

    Applications

    Volume 3 describes how resource-aware machine learning methods and techniques are used to successfully solve real-world problems. The book provides numerous specific application examples: in health and medicine for risk modelling, diagnosis, and treatment selection for diseases; in electronics, steel production and milling for quality control during manufacturing processes; in traffic and logistics for smart cities; and for mobile communications.

    Technology 2003: The Fourth National Technology Transfer Conference and Exposition, volume 2

    Proceedings from symposia of the Technology 2003 Conference and Exposition, Dec. 7-9, 1993, Anaheim, CA, are presented. Volume 2 features papers on artificial intelligence, CAD&E, computer hardware, computer software, information management, photonics, robotics, test and measurement, video and imaging, and virtual reality/simulation

    Mobile Robots Navigation

    Mobile robot navigation includes different interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation using the sensory information perceived; (iv) localization, the strategy to estimate the robot's position within the spatial map; (v) path planning, the strategy to find a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors from all over the world. Research cases are documented in 32 chapters organized into the 7 categories described next, and a structural sketch of how the activities fit together follows below.
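
The skeleton below is one hedged way the six activities listed above could be composed into a single navigation loop; the class and method names are illustrative assumptions, not code from the book.

```python
# Illustrative skeleton of a navigation loop built from the six activities
# above (interfaces and names are assumptions, not the book's code).
class NavigationStack:
    def __init__(self, perception, exploration, mapping, localization,
                 planner, executor):
        self.perception = perception      # interpret sensor readings
        self.exploration = exploration    # choose where to go next
        self.mapping = mapping            # build the spatial representation
        self.localization = localization  # estimate the robot's pose
        self.planner = planner            # find a path to the goal
        self.executor = executor          # turn the path into motor actions

    def tick(self, sensor_data, goal=None):
        observation = self.perception.interpret(sensor_data)
        self.mapping.update(observation)
        pose = self.localization.estimate(observation, self.mapping.map())
        target = goal if goal is not None else self.exploration.next_target(pose)
        path = self.planner.plan(pose, target, self.mapping.map())
        return self.executor.follow(path, pose)
```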

    Product Development within Artificial Intelligence, Ethics and Legal Risk

    This open-access book synthesizes a supportive developer checklist that considers sustainable team and agile project management in the face of the challenges of Artificial Intelligence and the limits of image recognition. The study is based on technical, ethical, and legal requirements, with examples concerning autonomous vehicles. As the first of its kind, it analyzes all reported car accidents statewide (1.28 million) over a 10-year period. The integration of highly sensitive international court rulings and growing consumer expectations makes this book a helpful guide for product and team development from initial concept until market launch.

    Feasibility study on the automated driving of overhead-line hybrid trucks (HO-LKWs) in the Murgtal - accompanying research for the eWayBW project

    This study, produced as part of the accompanying research for the eWayBW project, investigates the feasibility of automated driving with hybrid overhead-line trucks (HO-LKWs) in the lower Murgtal. Starting from the state of research on automated vehicles, the particularities of electric operation under a catenary as well as those of the investigated route between Kuppenheim and Hilpertsau are analysed, and possible solutions are developed. The operational management of automated and, in particular, driverless traffic, as well as the economic implications, are presented as far as is possible at the current state of development. The results show that the technical issues for automation arising from driving the HO-LKWs under a catenary can be solved with little additional effort, and that automating HO-LKWs is therefore no more difficult than automating other trucks. For the application scenario considered in the lower Murgtal, the spots that are particularly difficult to automate, and which thus represent the critical points of a technical realisation, were identified. These are, in particular, the shunting manoeuvres in the factory yards, turning onto heavily trafficked roads, and merging and weaving at on-ramps onto motorway-like roads. From an operational point of view, it was established that automating the truck traffic in the lower Murgtal can only be expected to yield notable economic benefits once the truck traffic can be operated driverlessly, either entirely or at least over the greater part of the route.
