
    From Optimal Synthesis to Optimal Visual Servoing for Autonomous Vehicles

    This thesis focuses on the characterization of optimal (shortest) paths to a desired position for a robot with unicycle kinematics and an on-board camera with a limited Field-Of-View (FOV), which must keep a given feature in sight. In particular, I provide a complete optimal synthesis for the problem, i.e., a language of optimal control words and a global partition of the motion plane induced by shortest paths, such that a word in the optimal language is uniquely associated with a region and completely describes the shortest path from any starting point in that region to the goal point. Moreover, I provide a generalization to the case of arbitrary FOVs, including the cases in which the direction of motion is not an axis of symmetry of the FOV, or is not even contained in the FOV. Finally, based on the available shortest-path synthesis, feedback control laws are defined for any point on the motion plane by exploiting geometric properties of the synthesis itself. Using a slightly generalized stability setting, namely stability on a manifold, a proof of stability is given for the controlled system. Simulation results are then reported to demonstrate the effectiveness of the proposed technique.
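    The FOV constraint at the heart of the problem above can be illustrated with a minimal sketch for a unicycle observing a fixed feature at the origin (the 30° half-angle and the symmetric-FOV convention are assumed values for illustration, not the thesis's parameters):

    ```python
    import numpy as np

    def feature_in_fov(x, y, theta, half_angle=np.radians(30.0)):
        """Check whether a feature at the origin lies inside a symmetric
        field of view centred on the heading of a unicycle at (x, y) with
        orientation theta. The 30-degree half-angle is an assumed value."""
        bearing = np.arctan2(-y, -x) - theta               # feature bearing in the body frame
        bearing = (bearing + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
        return abs(bearing) <= half_angle

    # A robot at (1, 0) facing the origin keeps the feature in view;
    # the same robot facing away from the origin loses it.
    print(feature_in_fov(1.0, 0.0, np.pi))  # True
    print(feature_in_fov(1.0, 0.0, 0.0))   # False
    ```

    A shortest-path synthesis then amounts to characterizing, for every start pose, the optimal concatenation of motion primitives that reaches the goal without ever violating this constraint.
    
    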

    Visual Navigation in Unknown Environments

    Navigation in mobile robotics involves two tasks: keeping track of the robot's position and moving according to a control strategy. When no prior knowledge of the environment is available, the problem is even more difficult, as the robot also has to build a map of its surroundings as it moves. These three problems ought to be solved in conjunction, since they depend on each other. This thesis is about simultaneously controlling an autonomous vehicle, estimating its location and building the map of the environment. The main objective is to analyse the problem from a control-theoretical perspective based on the EKF-SLAM implementation. The contribution of this thesis is the analysis of the system's properties, such as observability, controllability and stability, which allows us to propose an appropriate navigation scheme that produces well-behaved estimators and controllers, and consequently a well-behaved system as a whole. We present a steady-state analysis of the SLAM problem, identifying the conditions that lead to partial observability. It is shown that the effects of partial observability appear even in the ideal linear Gaussian case. This indicates that linearisation is not the only cause of SLAM inconsistency, and that observability must be achieved as a prerequisite to tackling the effects of linearisation. Additionally, full observability is also shown to be necessary for diagonalisation of the covariance matrix, an approach often used to reduce the computational complexity of the SLAM algorithm, and one which, as we show in this work, leads to full controllability. Focusing specifically on the case of a system with a single monocular camera, we present an observability analysis using the nullspace basis of the stripped observability matrix. The aim is to gain a better understanding of the well-known intuitive behaviour of this type of system, such as the need to triangulate features from different positions in order to obtain accurate relative pose estimates between vehicle and camera. By characterising the unobservable directions in monocular SLAM, we are able to identify the vehicle motions required to maximise the number of observable states in the system. When the control loop of the SLAM system is closed, both the feedback controller and the estimator are shown to be asymptotically stable. Furthermore, we show that the tracking error does not influence the estimation performance of a fully observable system and, vice versa, that control is not affected by the estimation. Because of this, a higher-level motion strategy is required to enhance estimation, which is especially needed when performing SLAM with a single camera. Considering a real-time application, we propose a control strategy that optimises both the localisation of the vehicle and the feature map by computing the most appropriate control actions or movements. The actions are chosen so as to maximise an information-theoretic metric. Simulations and real-time experiments are performed to demonstrate the feasibility of the proposed control strategy.
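    The partial-observability argument summarised above can be reproduced numerically for a linearised system by stacking the observability matrix and checking its rank. The toy 1-D SLAM model below (a static landmark observed only through a relative measurement; all matrices are illustrative assumptions, not the thesis's model) is rank-deficient, reflecting that absolute positions are unobservable:

    ```python
    import numpy as np

    def observability_rank(F, H):
        """Rank of the observability matrix O = [H; HF; ...; HF^(n-1)]
        for a linearised system x_{k+1} = F x_k, z_k = H x_k."""
        n = F.shape[0]
        blocks = [H]
        for _ in range(n - 1):
            blocks.append(blocks[-1] @ F)
        O = np.vstack(blocks)
        return np.linalg.matrix_rank(O)

    # Toy 1-D SLAM: state = [robot position, landmark position];
    # the robot observes only the relative offset landmark - robot.
    F = np.eye(2)                    # static landmark, odometry-driven robot
    H = np.array([[-1.0, 1.0]])      # purely relative measurement
    print(observability_rank(F, H))  # 1 < 2: absolute positions unobservable
    ```

    Adding a single absolute measurement (e.g. an anchored landmark) makes the stacked matrix full rank, which is the flavour of condition the steady-state analysis identifies.
    
    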

    Navigation without localisation: reliable teach and repeat based on the convergence theorem

    We present a novel concept for teach-and-repeat visual navigation. The proposed concept is based on a mathematical model which indicates that in teach-and-repeat navigation scenarios, mobile robots do not need to perform explicit localisation. Instead, a mobile robot which repeats a previously taught path can simply 'replay' the learned velocities, while using its camera information only to correct its heading relative to the intended path. To support our claim, we establish a position error model of a robot which traverses a taught path by only correcting its heading. Then, we outline a mathematical proof showing that this position error does not diverge over time. Based on the insights from the model, we present a simple monocular teach-and-repeat navigation method. The method is computationally efficient, does not require camera calibration, and can learn and autonomously traverse arbitrarily-shaped paths. In a series of experiments, we demonstrate that the method can reliably guide mobile robots in realistic indoor and outdoor conditions, and can cope with imperfect odometry, landmark deficiency, illumination variations and naturally-occurring environment changes. Furthermore, we provide the navigation system and the gathered datasets at http://www.github.com/gestom/stroll_bearnav. Comment: the paper will be presented at IROS 2018 in Madrid.
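    The control idea — replay the taught forward velocity and use vision only for heading correction — can be sketched in a few lines (the gain and the pixel-offset sign convention below are assumptions for illustration, not the paper's actual values):

    ```python
    def repeat_step(v_taught, w_taught, pixel_offset, k=0.002):
        """One step of camera-only teach-and-repeat: replay the learned
        velocities and steer against the horizontal displacement of the
        currently matched features from their taught image positions.
        The gain k = 0.002 rad/s per pixel is a made-up value."""
        v = v_taught                     # forward speed replayed verbatim
        w = w_taught - k * pixel_offset  # heading correction from vision only
        return v, w

    # Matched features drifted 50 px to the right of their taught positions:
    v, w = repeat_step(0.5, 0.0, 50.0)
    print(v, w)  # 0.5 -0.1
    ```

    The convergence theorem referenced in the title is what justifies that this heading-only feedback, with no position estimate at all, keeps the accumulated position error bounded along the repeated path.
    
    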

    Perception Based Navigation for Underactuated Robots

    Robot autonomous navigation is a very active field of robotics. In this thesis we propose a hierarchical approach for a class of underactuated robots, composing a collection of local controllers with well-understood domains of attraction. We start by addressing the problem of robot navigation with nonholonomic motion constraints and perceptual cues arising from onboard visual servoing in partially engineered environments. We propose a general hybrid procedure that adapts to the constrained-motion setting the standard feedback controller arising from a navigation function in the fully actuated case. This is accomplished by switching back and forth between moving "down" and "across" the associated gradient field toward the stable manifold it induces in the constrained dynamics. The new procedure is guaranteed to avoid obstacles in all cases, and we provide conditions under which it brings initial configurations to within an arbitrarily small neighborhood of the goal. We summarize with simulation results on a sample of visual servoing problems with a few different perceptual models. We document the empirical effectiveness of the proposed algorithm by reporting the results of its application to outdoor autonomous visual registration experiments with the robot RHex guided by engineered beacons. Next, we explore the possibility of adapting the resulting first-order hybrid feedback controller to its dynamical counterpart by introducing tunable damping terms in the control law. Just as gradient controllers for standard quasi-static mechanical systems give rise to generalized "PD-style" controllers for dynamical versions of those systems, we show that it is possible to construct similar "lifts" in the presence of nonholonomic constraints, notwithstanding the necessary absence of point attractors. Simulation results corroborate the proposed lift. Finally, we present an implementation of a fully autonomous navigation application for a legged robot. The robot adapts its leg trajectory parameters by recourse to a discrete gradient descent algorithm, while managing its experiments and outcome measurements autonomously via the navigation and visual servoing algorithms proposed in this thesis. Ph.D. Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/58412/1/glopes_1.pd
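    The "moving down the gradient field" machinery referenced above can be illustrated with a generic Koditschek-Rimon-style navigation function on a sphere world and a numerical gradient descent (the obstacle radii, tuning parameter k and step size are invented for the sketch; the thesis's actual construction additionally handles the nonholonomic constraint, which this fragment does not):

    ```python
    import numpy as np

    def nav_function(q, goal, obstacles, k=3.0):
        """Koditschek-Rimon style navigation function on a sphere world:
        0 at the goal, approaching 1 at obstacle boundaries. The tuning
        parameter k = 3.0 is an assumed value."""
        gamma = np.sum((q - goal) ** 2)                  # goal attraction term
        beta = np.prod([np.sum((q - c) ** 2) - r ** 2    # obstacle clearance product
                        for c, r in obstacles])
        return gamma / (gamma ** k + beta) ** (1.0 / k)

    def gradient_step(q, goal, obstacles, step=0.01, eps=1e-6):
        """Descend the navigation function via a central-difference gradient."""
        g = np.zeros_like(q)
        for i in range(len(q)):
            dq = np.zeros_like(q)
            dq[i] = eps
            g[i] = (nav_function(q + dq, goal, obstacles)
                    - nav_function(q - dq, goal, obstacles)) / (2 * eps)
        return q - step * g

    # Descending from (1, 0.5) toward the origin, away from a disc obstacle:
    q = np.array([1.0, 0.5])
    goal = np.zeros(2)
    obstacles = [(np.array([2.0, 2.0]), 0.5)]
    for _ in range(500):
        q = gradient_step(q, goal, obstacles)
    print(np.linalg.norm(q))  # close to 0: the descent reaches the goal
    ```

    In the fully actuated case this descent is the standard controller; the hybrid procedure in the thesis recovers its guarantees when the unicycle-like constraint forbids following the gradient directly.
    
    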

    Cooperative Material Handling by Human and Robotic Agents: Module Development and System Synthesis

    In this paper we present the results of a collaborative effort to design and implement a system for cooperative material handling by a small team of human and robotic agents in an unstructured indoor environment. Our approach makes fundamental use of human agents' expertise for aspects of task planning, task monitoring, and error recovery. Our system is neither fully autonomous nor fully teleoperated. It is designed to make effective use of human abilities within the present state of the art of autonomous systems, and to allow for and promote cooperative interaction between distributed agents with various capabilities and resources. Our robotic agents refer to systems which are each equipped with at least one sensing modality and which possess some capability for self-orientation and/or mobility. Our robotic agents are not required to be homogeneous with respect to either capabilities or function. Our research stresses both paradigms and testbed experimentation. Theory issues include the requisite coordination principles and techniques which are fundamental to the basic functioning of such a cooperative multi-agent system. We have constructed a testbed facility for experimenting with distributed multi-agent architectures. The required modular components of this testbed are currently operational and have been tested individually. Our current research focuses on the integration of agents in a scenario for cooperative material handling.

    Lidar-based teach-and-repeat of mobile robot trajectories

    Automation of logistics tasks for small lot sizes and flexible production processes requires the development of intuitive and easy-to-use systems that allow non-expert shop-floor workers to naturally instruct transportation systems in changing environments. To this end, we present a novel laser-based scheme for teach-and-repeat of mobile robot trajectories that relies on scan matching to localize the robot relative to a taught trajectory, which is represented by a sequence of raw odometry and 2D laser data. This approach has two advantages. First, it does not require building a globally consistent metrical map of the environment, which reduces setup time. Second, the direct use of raw sensor data avoids additional errors that might be introduced by the fact that grid maps only provide an approximation of the environment. Real-world experiments carried out with a holonomic and a differential-drive platform demonstrate that our approach repeats trajectories with an accuracy of a few millimeters. A comparison with a standard Monte Carlo localization approach on grid maps furthermore reveals that our method yields lower tracking errors for teach-and-repeat tasks.
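    The core step — localising against a stored taught scan by matching raw points rather than a global map — can be sketched with a toy translation-only point matcher (a real system like the one described also estimates rotation, seeds the match with odometry, and rejects outliers; everything below is an illustrative assumption):

    ```python
    import numpy as np

    def match_translation(taught_scan, current_scan, iters=10):
        """Estimate the 2D translation aligning the current scan with the
        taught scan by iterated nearest-neighbour matching (translation-only
        ICP). Scans are (N, 2) arrays of points in a common frame."""
        t = np.zeros(2)
        for _ in range(iters):
            shifted = current_scan + t
            # pair each current point with its nearest taught point
            d = np.linalg.norm(shifted[:, None, :] - taught_scan[None, :, :], axis=2)
            nn = taught_scan[np.argmin(d, axis=1)]
            t += (nn - shifted).mean(axis=0)  # shift current scan onto its matches
        return t

    # A scan displaced by (0.1, -0.05) from the taught one is recovered:
    taught = np.array([[i, j] for i in range(4) for j in range(4)], dtype=float)
    offset = np.array([0.1, -0.05])
    print(match_translation(taught, taught - offset))  # approx. [0.1, -0.05]
    ```

    The recovered offset plays the role of the tracking error fed back to the trajectory controller, with no occupancy grid built at any point.
    
    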

    Underwater Robots Part II: Existing Solutions and Open Issues

    This paper constitutes the second part of a general overview of underwater robotics. The first part is titled Underwater Robots Part I: current systems and problem pose. The works referenced as (Name*, year) have already been cited in the first part, and the details of these references can be found in section 7 of that paper. The mathematical notation used in this paper is defined in section 4 of Underwater Robots Part I: current systems and problem pose.