
    Intervention AUVs: The Next Challenge

    While commercially available AUVs are routinely used in survey missions, a new set of applications clearly demands intervention capabilities: the maintenance of permanent underwater observatories, submerged oil wells, cabled sensor networks and pipes, and the deployment and recovery of benthic stations, to name a few. These tasks are currently addressed using manned submersibles or work-class ROVs equipped with teleoperated arms under human supervision. Although researchers have recently opened the door to future I-AUVs, a long path remains before autonomous underwater interventions are achieved. This paper reviews the evolution timeline of autonomous underwater intervention systems. Milestone projects in the state of the art are reviewed, highlighting their principal contributions to the field. To the best of the authors' knowledge, only three vehicles have demonstrated some autonomous intervention capabilities so far: ALIVE, SAUVIM and GIRONA 500, the last being the lightest. In this paper the GIRONA 500 I-AUV is presented and its software architecture discussed. Recent results in different scenarios are reported: 1) valve turning and connector plugging/unplugging while docked to a subsea panel, 2) free-floating valve turning using learning by demonstration, and 3) multipurpose free-floating object recovery. The paper ends by discussing the lessons learned so far.

    Sonar attentive underwater navigation in structured environment

    One of the fundamental requirements of a persistently autonomous underwater vehicle (AUV) is a robust navigation system. The success of most complex robotic tasks depends on the accuracy of a vehicle's navigation system. In its basic form, an AUV estimates its position from on-board navigation sensors through dead reckoning (DR). However, DR navigation systems tend to drift in the long run due to accumulated measurement errors. One way of mitigating this problem is Simultaneous Localization and Mapping (SLAM), which concurrently maps external environment features. The performance of a SLAM navigation system depends on the availability of enough good features in the environment. A typical structured underwater environment (harbour, pier or oilfield), however, has a limited number of sonar features in a limited set of locations, so exploiting good features is key to effective underwater SLAM. This thesis develops a novel attentive sonar line-feature-based SLAM framework that improves navigation performance by steering a multibeam sonar sensor, mounted on a pan-and-tilt unit, towards feature-rich regions of the environment. A sonar salience map is generated at each vehicle pose to identify highly informative and stable regions of the environment. Results from a simulated test and a real AUV experiment show that attentive SLAM performs better than its passive counterpart by repeatedly revisiting good sonar landmarks.
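    The dead-reckoning drift described above can be illustrated with a minimal sketch; the velocity and noise values below are invented for illustration, not taken from the thesis:

```python
import random

def dead_reckon(true_velocity, steps, noise_std, seed=0):
    """Integrate noisy 1-D velocity measurements; the position error
    accumulates over time because each measurement error is summed."""
    rng = random.Random(seed)
    est = 0.0   # dead-reckoned position
    true = 0.0  # ground-truth position
    errors = []
    for _ in range(steps):
        true += true_velocity
        # Each velocity measurement carries a small random error.
        est += true_velocity + rng.gauss(0.0, noise_std)
        errors.append(abs(est - true))
    return errors

errors = dead_reckon(true_velocity=1.0, steps=1000, noise_std=0.05)
```

Because the per-step errors are summed rather than averaged away, the expected error grows with mission length, which is exactly why SLAM-style corrections against external features become necessary.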

    A Comprehensive Review on Autonomous Navigation

    The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community in survey papers is vital to keeping track of the current state of the art and the challenges that must be tackled in the future. This paper provides a comprehensive review of autonomous mobile robots, covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for this survey is twofold. First, the field of autonomous navigation evolves fast, so writing survey papers regularly is crucial to keep the research community well aware of its current status. Second, deep learning methods have revolutionized many fields, including autonomous navigation, so this paper also gives an appropriate treatment of the role of deep learning in autonomous navigation. Future work and research gaps are also discussed.

    A Stable Nonlinear Switched System for Landmark-aided Motion Planning

    To guarantee navigation accuracy, robotic applications utilize landmarks. This paper proposes a novel nonlinear switched system for a fundamental motion-planning problem in autonomous mobile robot navigation: the generation of continuous collision-free paths to a goal configuration via numerous landmarks (waypoints) in a cluttered environment. The proposed system leverages the Lyapunov-based control scheme (LbCS) and constructs Lyapunov-like functions for the system's subsystems. These functions guide a planar point-mass object, representing an autonomous robotic agent, towards its goal by utilizing artificial landmarks. Extracting a set of nonlinear, time-invariant, continuous and stabilizing switched velocity controllers from these Lyapunov-like functions, the system invokes the controllers according to a switching rule, enabling hierarchical landmark navigation in complex environments. The stability of the proposed system is established using Branicky's well-known stability criteria for switched systems based on multiple Lyapunov functions. A new method to extract action landmarks from multiple landmarks is also introduced. The control laws are then used to control the motion of a nonholonomic car-like vehicle governed by its kinematic equations. Numerical examples with simulations illustrate the effectiveness of the Lyapunov-based control laws. The proposed control laws can automate various processes where the transportation of goods or workers between different sections is required.
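    The switched-controller idea can be sketched for a planar point mass: each subsystem steers the agent toward its active landmark by descending a Lyapunov-like quadratic function, and a simple rule switches to the next landmark on arrival. The gains, radii and landmark positions below are illustrative assumptions, not values from the paper:

```python
import math

def navigate(start, landmarks, gain=0.5, switch_radius=0.05, dt=0.1, max_steps=2000):
    """Switched velocity control: for the active landmark l, use the
    Lyapunov-like function V = 0.5*||x - l||^2 and the stabilizing
    velocity law v = -gain * grad V = -gain * (x - l); switch to the
    next landmark once within switch_radius."""
    x, y = start
    path = [(x, y)]
    idx = 0  # index of the active landmark (subsystem)
    for _ in range(max_steps):
        lx, ly = landmarks[idx]
        vx, vy = -gain * (x - lx), -gain * (y - ly)
        x, y = x + dt * vx, y + dt * vy
        path.append((x, y))
        if math.hypot(x - lx, y - ly) < switch_radius:
            if idx == len(landmarks) - 1:
                break          # final landmark (goal) reached
            idx += 1           # switching rule: activate next landmark
    return path

path = navigate((0.0, 0.0), [(1.0, 0.0), (1.0, 1.0)])
```

Each subsystem's V decreases monotonically along its trajectory, which is the property the multiple-Lyapunov-function stability argument builds on.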

    Underwater Vehicles

    Over the last twenty to thirty years, a significant number of AUVs have been created to solve a wide spectrum of scientific and applied tasks in ocean development and research. In this short period, AUVs have demonstrated their efficiency in complex search and inspection work and opened a number of important new applications. Initially, information about AUVs was mainly of a review-and-advertising character, but now more attention is paid to practical achievements, problems and systems technologies. AUVs are losing their prototype status and have become fully operational, reliable and effective tools; modern multi-purpose AUVs represent a new class of underwater robotic objects with their own tasks and practical applications, particular features of technology, system structure and functional properties.

    An intelligent multi-floor mobile robot transportation system in life science laboratories

    In this dissertation, a new intelligent multi-floor transportation system based on a mobile robot is presented to connect distributed laboratories in a multi-floor environment. Within the system, new indoor mapping and localization methods are introduced, a hybrid path-planning approach is proposed, and an automated door-management system is described. In addition, a hybrid strategy with innovative floor estimation to handle elevator operations is implemented. Finally, the presented system controls the working processes of the related sub-systems. Experiments prove the efficiency of the presented system.

    Search and restore: a study of cooperative multi-robot systems

    Swarm intelligence is the study of natural biological systems with the ability to transform simple local interactions into complex global behaviours. Swarm robotics takes these principles and applies them to multi-robot systems with the aim of achieving the same level of complex behaviour, which can result in more robust, scalable and flexible robotic solutions than single-robot systems. This research concerns how cooperative multi-robot systems can be utilised to solve real-world challenges and outperform existing techniques. The majority of this research focuses on an emergency ship-hull repair scenario in which a ship has taken damage and sea water is flowing into the hull, decreasing the stability of the ship. A bespoke team of simulated robots using novel algorithms performs a coordinated ship-hull inspection, locating the damage faster than a similarly sized uncoordinated team of robots. Following this investigation, a method is presented by which the same team of robots can use self-assembly to form a structure, using their own bodies as material, to cover and repair the hole in the ship hull, halting the ingress of sea water. Results from a collaborative nature-inspired scenario are also presented, in which a swarm of simple robots is tasked with foraging within an initially unexplored bounded arena. Many of the behaviours implemented in swarm robotics are inspired by biological swarms, including goals such as optimal distribution within environments. In this scenario, there are multiple items of varying quality which can be collected from different sources in the area and returned to a central depot. The aim of this study is to imbue the robot swarm with a behaviour that allows it to achieve an optimal foraging strategy similar to those observed in more complex biological systems such as ants.
The author's main contribution to this study is the implementation of an obstacle avoidance behaviour which allows the swarm of robots to behave more like systems of higher complexity.
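The flavour of reactive obstacle avoidance used in swarm robots can be sketched as a differential-drive rule that slows the wheel opposite the nearer obstacle; this is a generic illustration, not the thesis's specific algorithm, and all parameter values are assumptions:

```python
def avoid(left_dist, right_dist, base_speed=1.0, safe_dist=0.5, gain=2.0):
    """Reactive avoidance for a hypothetical differential-drive robot.
    left_dist/right_dist are range readings; an obstacle on one side
    slows the opposite wheel, turning the robot away from it."""
    # Obstacle on the right (small right_dist) -> slow left wheel -> turn left.
    left_wheel = base_speed - gain * max(0.0, safe_dist - right_dist)
    # Obstacle on the left (small left_dist) -> slow right wheel -> turn right.
    right_wheel = base_speed - gain * max(0.0, safe_dist - left_dist)
    return left_wheel, right_wheel

# Obstacle detected on the left: the robot veers right.
l, r = avoid(left_dist=0.2, right_dist=1.0)
```

Rules of this kind need only local sensing, which is what makes them scale from a single robot to a swarm without any central coordination.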

    Mobile robot navigation using a vision based approach

    PhD Thesis. This study addresses the issue of vision-based mobile robot navigation in a partially cluttered indoor environment using a mapless navigation strategy. The work focuses on two key problems, namely vision-based obstacle avoidance and a vision-based reactive navigation strategy. The estimation of optical flow plays a key role in vision-based obstacle avoidance; however, the current view is that this technique is too sensitive to noise and distortion under real conditions, so practical applications in real-time robotics remain scarce. This dissertation presents a novel methodology for vision-based obstacle avoidance using a hybrid architecture, which integrates an appearance-based obstacle detection method into an optical flow architecture based upon a behavioural control strategy that includes a new arbitration module. This enhances the overall performance of conventional optical-flow-based navigation systems, enabling a robot to move around without experiencing collisions. Behaviour-based approaches have become the dominant methodology for designing control strategies for robot navigation. Two different behaviour-based navigation architectures are proposed for the second problem, using monocular vision as the primary sensor together with a 2-D range finder. Both utilize an accelerated version of the Scale Invariant Feature Transform (SIFT) algorithm. The first architecture employs a qualitative control algorithm to steer the robot towards a goal whilst avoiding obstacles, whereas the second employs an intelligent control framework. This allows components of soft computing to be integrated into the proposed SIFT-based navigation architecture, conserving the same set of behaviours and system structure as the previously defined architecture. The intelligent framework incorporates a novel distance estimation technique using the scale parameters obtained from the SIFT algorithm.
The technique employs scale parameters and a corresponding zooming factor as inputs to train a neural network which determines physical distance. Furthermore, a fuzzy controller is designed and integrated into this framework to estimate linear velocity, and a neural-network-based solution is adopted to estimate the steering direction of the robot. As a result, this intelligent approach allows the robot to complete its task in a smooth and robust manner without experiencing collision. Microsoft Robotics Studio software was used to simulate the systems, and a modified Pioneer 3-DX mobile robot was used for real-time implementation. Several realistic scenarios were developed and comprehensive experiments conducted to evaluate the performance of the proposed navigation systems. KEY WORDS: Mobile robot navigation using vision, Mapless navigation, Mobile robot architecture, Distance estimation, Vision for obstacle avoidance, Scale Invariant Feature Transforms, Intelligent framework
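    The geometric intuition behind scale-based distance estimation can be sketched as follows. Under a pinhole camera model, a landmark's apparent scale is inversely proportional to its distance; the function below is a hand-written stand-in for the neural-network estimator described in the thesis, with illustrative numbers:

```python
def estimate_distance(ref_scale, ref_distance, observed_scale):
    """Pinhole-camera intuition behind scale-based ranging: if a SIFT
    feature had scale ref_scale at a known distance ref_distance, then
    an observation at observed_scale implies
        d = ref_distance * (ref_scale / observed_scale)."""
    return ref_distance * (ref_scale / observed_scale)

# A feature seen at half its reference scale is roughly twice as far away.
d = estimate_distance(ref_scale=8.0, ref_distance=1.0, observed_scale=4.0)
# d == 2.0
```

A learned estimator can improve on this idealised relation by absorbing lens distortion and scale-quantisation effects that the simple formula ignores.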

    Autonomous navigation and mapping of mobile robots based on 2D/3D cameras combination

    Presently, intelligent autonomous systems are in great demand owing to the increasing need for systems that support everyday human life. Autonomous systems are used in houses, offices and museums as well as in factories. They can perform several kinds of tasks, such as cleaning, household assistance, transportation, security, education and shop assistance, because they can be used to control processing time and to provide precise, reliable output. One research field of autonomous systems is mobile robot navigation and map generation: the mobile robot should work autonomously while generating a map of its environment in order to navigate. The main issue is that the mobile robot has to explore an unknown environment and generate a three-dimensional map of it without any further reference information. The mobile robot has to estimate its position and pose, and it must be able to find distinguishable objects; the selected sensors and the registration algorithm therefore play a significant role. Sensors that can provide both depth and image data are still deficient. A new 3D sensor, the Photonic Mixer Device (PMD), captures the surrounding scenario in real time at a high frame rate, delivering depth and grey-scale data. However, higher-quality three-dimensional exploration requires details and textures of surfaces, which can only be obtained from a high-resolution CCD camera. This work therefore presents mobile robot exploration using the combination of a CCD and a PMD camera in order to create a three-dimensional map of the environment. In addition, a high-performance algorithm for 3D mapping and pose estimation in real time, using the Simultaneous Localization and Mapping (SLAM) technique, is proposed. The autonomous mobile robot should also handle tasks such as the recognition of objects in its environment in order to achieve various practical missions. Visual input from the CCD camera not only delivers high-resolution texture data for the depth data, but is also used for object recognition. The Iterative Closest Point (ICP) algorithm uses two point clouds to determine the translation and rotation between two scans. Finally, the evaluation of the correspondences and the reconstruction of the map to resemble the real environment are included in this thesis.
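    A minimal point-to-point ICP sketch in the spirit of the algorithm mentioned above, assuming 2-D scans and brute-force nearest-neighbour matching for clarity (the thesis's implementation details may differ):

```python
import numpy as np

def icp(source, target, iterations=20):
    """Point-to-point ICP: repeatedly match each source point to its
    nearest target point, then solve for the best rigid transform via
    SVD (Kabsch/Procrustes), accumulating the total rotation R and
    translation t that map the source onto the target."""
    src = source.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iterations):
        # Brute-force nearest-neighbour correspondences.
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[np.argmin(d, axis=1)]
        # Best rigid transform aligning src to its matched points.
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Recover a known small rotation/translation between two 2-D "scans".
rng = np.random.default_rng(0)
scan = rng.random((30, 2)) - 0.5
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
moved = scan @ R_true.T + np.array([0.03, -0.02])
R_est, t_est = icp(scan, moved)
```

Brute-force matching is O(n²) per iteration; real systems replace it with a k-d tree, but the alignment step is the same.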

    Multimodal headpose estimation and applications

    This thesis presents new research into human headpose estimation and its applications in multi-modal data. We develop new methods for headpose estimation spanning RGB-D Human-Computer Interaction (HCI) to far-away "in the wild" surveillance-quality data. We present a state-of-the-art solution to both head detection and headpose estimation through a new end-to-end Convolutional Neural Network architecture that reuses all of the computation for detection and pose estimation. In contrast to prior work, our method successfully spans close-up HCI to low-resolution surveillance data and is cross-modality, operating on both RGB and RGB-D data. We further address the problems of a limited amount of standard data and varying annotation quality through semi-supervised learning and novel data augmentation. (This latter contribution also finds application in the domain of the life sciences.) We report the highest accuracy by a large margin, a 60% improvement, and demonstrate leading performance on multiple standardized datasets. In HCI we reduce the angular error by 40% relative to the previously reported literature. Furthermore, by defining a probabilistic spatial gaze model from the head pose, we show applications in human-human and human-scene interaction understanding. We present state-of-the-art results on the standard interaction datasets. A new metric to model "social mimicry" through the temporal correlation of the headpose signal is contributed and shown to be valid qualitatively and intuitively. As an application in surveillance, it is shown that with the robust headpose signal as a prior, state-of-the-art results in tracking under occlusion can be achieved using a Kalman filter. This model is named the Intentional Tracker, and it improves visual tracking metrics by up to 15%. We also apply the ALICE loss, which was developed for end-to-end detection and classification, to dense classification of underwater coral reef imagery.
    The objective of this work is to solve the challenging task of recognizing and segmenting underwater coral imagery in the wild with sparse point-based ground-truth labelling. To achieve this, we propose an integrated Fully Convolutional Neural Network (FCNN) and fully connected Conditional Random Field (CRF) based classification and segmentation algorithm. Our contributions lie in four major areas. First, we show that multi-scale crop-based training is useful for learning the initial weights in the canonical one-class classification problem. Second, we propose a modified ALICE loss for training the FCNN on sparse labels with class imbalance and establish its significance empirically. Third, we show that by artificially enhancing the point labels to small regions based on a class distance transform, we can improve the classification accuracy further. Fourth, we improve the segmentation results with fully connected CRFs by using a bilateral message-passing prior. We improve upon state-of-the-art results on all publicly available datasets by a significant margin.
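    The idea of growing sparse point labels into small regions can be sketched as follows; the simple disk-growing rule below is a stand-in for the class-distance-transform enhancement described in the abstract, with an invented radius:

```python
import numpy as np

def expand_point_labels(shape, points, radius=2):
    """Grow sparse point annotations into small disks so that a dense
    segmentation loss sees more supervised pixels. `points` maps a
    (row, col) pixel to its class id; 0 means unlabelled."""
    labels = np.zeros(shape, dtype=int)
    rows, cols = np.indices(shape)
    for (r, c), cls in points.items():
        # All pixels within `radius` of the annotated point get its class.
        mask = (rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2
        labels[mask] = cls
    return labels

# One annotated pixel of class 3 becomes a 13-pixel radius-2 disk.
labels = expand_point_labels((8, 8), {(4, 4): 3}, radius=2)
```

Expanding labels this way trades a little label noise near class boundaries for far more gradient signal per image, which is why it helps when annotations are sparse points.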