
    A Hierarchical Planning Framework for AUV Mission Management in a Spatio-Temporal Varying Ocean

    The purpose of this paper is to provide a hierarchical dynamic mission planning framework that enables a single autonomous underwater vehicle (AUV) to accomplish the task-assignment process within a limited time interval while operating in an uncertain undersea environment, where the spatio-temporal variability of the operating field is taken into account. To this end, a high-level reactive mission planner and a low-level motion planning system are constructed. The high-level system is responsible for task priority assignment and for guiding the vehicle toward a target of interest while ensuring on-time termination of the mission. The lower layer is in charge of generating optimal trajectories based on the sequence of tasks and the dynamics of the operating terrain. The mission planner is able to reactively re-arrange the tasks based on mission/terrain updates, while the low-level planner is capable of coping with unexpected changes in the terrain by correcting the old path and generating a new trajectory. As a result, the vehicle is able to undertake the maximum number of tasks with a certain degree of maneuverability while maintaining situational awareness of the operating field. The computational engine of this framework is the biogeography-based optimization (BBO) algorithm, which is capable of providing efficient solutions. To evaluate the performance of the proposed framework, a realistic model of the undersea environment is first constructed from real map data, and several scenarios, treated as real experiments, are then designed in a simulation study. Additionally, to show the robustness and reliability of the framework, Monte Carlo simulations are carried out and statistical analysis is performed. The simulation results indicate the significant potential of the two-level hierarchical mission planning system for mission success and its applicability to real-time implementation.
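    The abstract names biogeography-based optimization (BBO) as the computational engine but gives no detail. As a rough illustration of how such an evolutionary loop operates, here is a minimal BBO sketch; the cost function, bounds, and rate schedules are generic placeholders, not the paper's mission-planning formulation.

```python
import random

def bbo_minimize(cost, dim, bounds, pop_size=20, generations=100, p_mutate=0.05):
    """Minimal biogeography-based optimization (BBO) loop.

    cost:   objective to minimize, e.g. mission time or path cost
    bounds: (low, high) applied to every decision variable
    Illustrative skeleton only, not the paper's planner.
    """
    low, high = bounds
    # Linear migration rates by rank: good habitats emigrate, poor ones immigrate.
    lam = [(i + 1) / pop_size for i in range(pop_size)]   # immigration rate
    mu = [1.0 - l for l in lam]                           # emigration rate
    pop = [[random.uniform(low, high) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                                # best (lowest cost) first
        new_pop = [pop[0][:]]                             # elitism: keep the best
        for i in range(1, pop_size):
            habitat = pop[i][:]
            for d in range(dim):
                if random.random() < lam[i]:
                    # Immigrate feature d from a habitat chosen by emigration rate.
                    src = random.choices(range(pop_size), weights=mu)[0]
                    habitat[d] = pop[src][d]
                if random.random() < p_mutate:
                    habitat[d] = random.uniform(low, high)
            new_pop.append(habitat)
        pop = new_pop
    return min(pop, key=cost)
```

    A mission planner in this spirit would encode a candidate task ordering or trajectory as the habitat vector and use mission cost (time, energy, risk) as the objective.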

    Behavioural strategy for indoor mobile robot navigation in dynamic environments

    PhD thesis. The development of behavioural strategies for indoor mobile navigation has become a challenging and practical issue in cluttered indoor environments, such as hospitals or factories, where there are many static and moving objects, including humans and other robots, all of which are trying to complete their own specific tasks; some objects may be moving in a similar direction to the robot, whereas others may be moving in the opposite direction. The key requirement for any mobile robot is to avoid colliding with any object which may prevent it from reaching its goal or, as a consequence, bring harm to any individual within its workspace. This challenge is further complicated by unobserved objects suddenly appearing in the robot's path, particularly when the robot crosses a corridor or an open doorway. The mobile robot must therefore be able to anticipate such scenarios and manoeuvre quickly to avoid collisions. In this project, a hybrid control architecture has been designed for navigation within dynamic environments. The control system includes three levels, namely deliberative, intermediate, and reactive, which work together to achieve short, fast, and safe navigation. The deliberative level creates a short and safe path from the current position of the mobile robot to its goal using the wavefront algorithm, estimates the current location of the mobile robot, and extracts the regions from which unobserved objects may appear. The intermediate level links the deliberative and reactive levels; the reactive level includes several behaviours for implementing the global path in a way that avoids any collision. To avoid dynamic obstacles, the controller has to identify and extract obstacles from the sensor data, estimate their speeds, and then regulate its own speed and direction to minimize the collision risk and maximize the speed towards the goal. The velocity obstacle (VO) approach is considered an easy and simple method for avoiding dynamic obstacles, whilst the collision cone principle is used to detect the collision situation between two circular-shaped objects. However, the VO approach faces two challenges when applied in indoor environments. The first challenge is the extraction of collision cones of non-circular objects from sensor data, where applying circle-fitting methods generally produces large and inaccurate collision cones, especially for line-shaped obstacles such as walls. The second challenge is that the mobile robot sometimes cannot move towards its goal because all of its velocities towards the goal lie within collision cones. In this project, a method has been demonstrated to extract the collision cones of circular and non-circular objects using a laser sensor, where the obstacle size and the collision time are used to weight the robot's velocities. In addition, the principle of the virtual obstacle was proposed to minimize the collision risk with unobserved moving obstacles. Simulations and experiments using the proposed control system on a Pioneer mobile robot showed that it can successfully avoid static and dynamic obstacles. Furthermore, the mobile robot was able to reach its target within an indoor environment without causing any collision or missing the target.
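    For readers unfamiliar with the VO method, the collision-cone test between two circular objects mentioned above can be sketched as follows. The geometry is the standard textbook formulation; the function name and interface are illustrative, not taken from the thesis.

```python
import math

def in_collision_cone(p_r, v_r, p_o, v_o, r_r, r_o):
    """Collision-cone test between two circular objects (VO principle sketch).

    p_r, v_r: robot position and velocity; p_o, v_o: obstacle position and
    velocity; r_r, r_o: radii. Returns True if the robot's velocity relative
    to the obstacle points inside the cone of headings that lead to collision.
    """
    dx, dy = p_o[0] - p_r[0], p_o[1] - p_r[1]          # line of sight: robot -> obstacle
    rvx, rvy = v_r[0] - v_o[0], v_r[1] - v_o[1]        # relative velocity
    dist = math.hypot(dx, dy)
    r_sum = r_r + r_o
    if dist <= r_sum:
        return True                                    # already overlapping
    half_angle = math.asin(r_sum / dist)               # cone half-angle
    speed = math.hypot(rvx, rvy)
    if speed == 0.0:
        return False                                   # no relative motion
    # Angle between the relative velocity and the line of sight.
    cos_between = (rvx * dx + rvy * dy) / (speed * dist)
    cos_between = max(-1.0, min(1.0, cos_between))
    return math.acos(cos_between) < half_angle
```

    For example, a robot at the origin moving at (1, 0) toward an obstacle at (5, 0.5) with combined radius 1 is on a collision course, so a VO-style controller would sample velocities outside the cone. The thesis' contribution extends this test to non-circular obstacles extracted from laser data and to unobserved "virtual" obstacles.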

    Active Object Classification from 3D Range Data with Mobile Robots

    This thesis addresses the problem of how to improve the acquisition of 3D range data with a mobile robot for the task of object classification. Establishing the identities of objects in unknown environments is fundamental for robotic systems and helps enable many abilities such as grasping, manipulation, or semantic mapping. Objects are recognised from data obtained by sensor observations; however, the data are highly dependent on viewpoint, and the variation in position and orientation of the sensor relative to an object can result in large variation in perception quality. Additionally, cluttered environments present a further challenge because key data may be missing. These issues are not always solved by traditional passive systems, where data are collected by a fixed navigation process and then fed into a perception pipeline. This thesis considers an active approach to data collection, deciding where it is most appropriate to make observations for the perception task. The core contributions of this thesis are a non-myopic planning strategy to collect data efficiently under resource constraints, and supporting viewpoint prediction and evaluation methods for object classification. Our approach to planning uses Monte Carlo methods coupled with a classifier based on non-parametric Bayesian regression. We present a novel anytime and non-myopic planning algorithm, Monte Carlo active perception, that extends Monte Carlo tree search to partially observable environments and the active perception problem. This is combined with a particle-based estimation process and a learned observation likelihood model that uses Gaussian process regression. To support planning, we present 3D point cloud prediction algorithms and utility functions that measure the quality of viewpoints by their discriminatory ability and effectiveness under occlusion. The utility of viewpoints is quantified by information-theoretic metrics, such as mutual information, and an alternative utility function that exploits learned data is developed for special cases. The algorithms in this thesis are demonstrated in a variety of scenarios. We extensively test our online planning and classification methods in simulation as well as with indoor and outdoor datasets. Furthermore, we perform hardware experiments with different mobile platforms equipped with different types of sensors. Most significantly, our hardware experiments with an outdoor robot are, to our knowledge, the first demonstrations of online active perception in a real outdoor environment. Active perception has broad significance in many applications. This thesis emphasises the advantages of an active approach to object classification and presents its assimilation with a wide range of robotic systems, sensors, and perception algorithms. By demonstrating performance enhancements and diversity, our hope is that the concept of considering perception and planning in an integrated manner will be of benefit in improving current systems that rely on passive data collection.
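    As an illustration of the information-theoretic utility mentioned above, the following sketch scores a candidate viewpoint by the mutual information between the object class and a discretized observation, where the per-viewpoint likelihood could come from a learned observation model such as the Gaussian-process regressor described. The discretization and interface are assumptions made for the example, not the thesis' implementation.

```python
import numpy as np

def viewpoint_mutual_information(prior, likelihood):
    """Mutual information I(C; Z) between object class C and an observation Z
    taken from a candidate viewpoint.

    prior:      shape (n_classes,), current class belief p(c)
    likelihood: shape (n_classes, n_obs), p(z | c) for that viewpoint,
                e.g. predicted by a learned observation model.
    """
    prior = np.asarray(prior, dtype=float)
    lik = np.asarray(likelihood, dtype=float)
    joint = prior[:, None] * lik                  # p(c, z)
    p_z = joint.sum(axis=0)                       # marginal p(z)
    # I(C;Z) = sum_{c,z} p(c,z) * log( p(c,z) / (p(c) p(z)) )
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = joint / (prior[:, None] * p_z[None, :])
        terms = np.where(joint > 0, joint * np.log(ratio), 0.0)
    return terms.sum()
```

    A planner in this spirit would evaluate such a utility for each reachable viewpoint and, greedily or within the Monte Carlo tree search, prefer high-information views.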

    Lidar-based scene understanding for autonomous driving using deep learning

    With over 1.35 million fatalities related to traffic accidents worldwide, autonomous driving was foreseen at the beginning of this century as a feasible solution to improve safety on our roads. Moreover, it is expected to disrupt our transportation paradigm, reducing congestion, pollution, and costs while increasing the accessibility, efficiency, and reliability of transportation for both people and goods. Although some advances have gradually been transferred into commercial vehicles in the form of Advanced Driver Assistance Systems (ADAS), such as adaptive cruise control, blind spot detection, or automatic parking, the technology is far from mature. A full understanding of the scene is needed so that vehicles are aware of their surroundings, knowing the existing elements of the scene as well as their motion, intentions, and interactions. In this PhD dissertation, we explore new approaches for understanding driving scenes from 3D LiDAR point clouds using deep learning methods. To this end, in Part I we analyze the scene from a static perspective, using independent frames to detect neighboring vehicles. Next, in Part II we develop new ways of understanding the dynamics of the scene. Finally, in Part III we apply all the developed methods to accomplish higher-level challenges such as segmenting moving obstacles while obtaining their rigid motion vector over the ground. More specifically, in Chapter 2 we develop a 3D vehicle detection pipeline based on a multi-branch deep learning architecture and propose a Front view (FR-V) and a Bird's Eye view (BE-V) as 2D representations of the 3D point cloud to serve as input for training our models. Later on, in Chapter 3 we apply and further test this method on two real use cases: pre-filtering moving obstacles while creating maps, so as to localize more accurately on subsequent days, and vehicle tracking. From the dynamic perspective, in Chapter 4 we learn from the 3D point cloud a novel dynamic feature that resembles optical flow from RGB images. For that, we develop a new approach that leverages RGB optical flow as pseudo ground truth for training while allowing the use of only 3D LiDAR data at inference time. Additionally, in Chapter 5 we explore the benefits of combining classification and regression learning problems to tackle the optical flow estimation task in a joint coarse-and-fine manner. Lastly, in Chapter 6 we gather the previous methods and demonstrate that with these independent tasks we can guide the learning of more challenging problems, such as the segmentation and motion estimation of moving vehicles from our own moving perspective.
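    To make the Bird's Eye view (BE-V) representation from Chapter 2 concrete, here is a minimal sketch that rasterizes a LiDAR point cloud into a two-channel BE-V grid; the ranges, cell size, and channel choices are illustrative assumptions, not the dissertation's exact configuration.

```python
import numpy as np

def birds_eye_view(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.1):
    """Rasterize a LiDAR point cloud of shape (N, 3) into a 2-channel BE-V grid.

    Channels: max height and (log) point density per cell.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[keep], y[keep], z[keep]
    cols = ((x - x_range[0]) / cell).astype(int)      # x -> grid column
    rows = ((y - y_range[0]) / cell).astype(int)      # y -> grid row
    ny = int((y_range[1] - y_range[0]) / cell)
    nx = int((x_range[1] - x_range[0]) / cell)
    bev = np.zeros((2, ny, nx), dtype=np.float32)
    # Max height per cell (assumes z is shifted so the ground sits near 0).
    np.maximum.at(bev[0], (rows, cols), z)
    np.add.at(bev[1], (rows, cols), 1.0)              # point count per cell
    bev[1] = np.log1p(bev[1])                         # compress density range
    return bev
```

    A detection network would then consume this grid (optionally alongside an FR-V projection) like an ordinary 2D image.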

    Navigational Path Analysis of Mobile Robot in Various Environments

    This dissertation describes work in the area of autonomous mobile robots. The objective is the navigation of a mobile robot in a real-world dynamic environment, avoiding structured and unstructured obstacles, whether static or dynamic. The shapes and positions of the obstacles are not known to the robot prior to navigation. The mobile robot has sensory recognition of specific objects in the environment. This sensory information provides local information about the robot's immediate surroundings to its controllers. The robot handles this information intelligently to reach the global objective (the target). The navigational path, as well as the time taken during navigation, can be expressed as an optimisation problem and thus analysed and solved using AI techniques. The optimisation of path and time is based on the kinematic stability and the intelligence of the robot controller. A successful way of structuring the navigation task deals with the issues of individual behaviour design and action coordination among the behaviours. The navigation objective is addressed using fuzzy logic, neural networks, an adaptive neuro-fuzzy inference system, and other AI techniques. The research also addresses distributed autonomous systems using multiple robots.
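    To give a flavour of the fuzzy-logic behaviours mentioned above, here is a minimal reactive steering sketch; the membership functions and rule base are illustrative assumptions, not the dissertation's controller.

```python
def fuzzy_steer(left_dist, front_dist, right_dist, near=0.5, far=2.0):
    """Minimal fuzzy-logic steering for behaviour-based obstacle avoidance.

    Inputs are range readings (metres) to the left, front, and right.
    Returns a steering command in [-1, 1] (negative = turn left).
    """
    def mu_near(d):
        # Linear membership: fully "near" below `near`, fully "far" above `far`.
        if d <= near:
            return 1.0
        if d >= far:
            return 0.0
        return (far - d) / (far - near)

    # Rule base: obstacle near on one side -> steer to the other side;
    # obstacle near in front -> steer toward the clearer side.
    turn_right = mu_near(left_dist)
    turn_left = mu_near(right_dist)
    front = mu_near(front_dist)
    if right_dist >= left_dist:
        turn_right = max(turn_right, front)
    else:
        turn_left = max(turn_left, front)
    # Defuzzify as a weighted average over the singleton actions {-1, +1}.
    total = turn_left + turn_right
    if total == 0.0:
        return 0.0                                   # path clear: go straight
    return (turn_right - turn_left) / total
```

    A full behavioural controller would blend such reactive rules with goal-seeking behaviours, which is where neural and neuro-fuzzy tuning of the memberships becomes useful.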