48 research outputs found

    Visual servoing of a car-like vehicle - an application of omnidirectional vision

    Get PDF
    In this paper, we extend the switching controller presented by Lee et al. for the pose control of a car-like vehicle to allow the use of an omnidirectional vision sensor. To this end we incorporate an extension to a hypothesis on the navigation behaviour of the desert ant, Cataglyphis bicolor, which leads to a correspondence-free, landmark-based vision technique. The method we present allows positioning to a learnt location based on feature bearing-angle and range discrepancies between the robot's current view of the environment and its view at the learnt location. We present simulations and experimental results, the latter obtained using our outdoor mobile platform.
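The bearing-and-range discrepancy idea can be sketched in a few lines. This is a minimal illustration only: it assumes known landmark correspondences (which the paper's method specifically avoids) and is not the authors' controller.

```python
import numpy as np

def homing_vector(cur_bearings, cur_ranges, home_bearings, home_ranges):
    """Crude snapshot-style homing: sum per-landmark discrepancy vectors.

    Bearings in radians, ranges in metres; landmark i in the current view
    is assumed (for illustration) to correspond to landmark i at home.
    """
    v = np.zeros(2)
    for bc, rc, bh, rh in zip(cur_bearings, cur_ranges,
                              home_bearings, home_ranges):
        # Cartesian landmark position relative to each viewpoint
        pc = rc * np.array([np.cos(bc), np.sin(bc)])
        ph = rh * np.array([np.cos(bh), np.sin(bh)])
        # Moving along (pc - ph) reduces this landmark's discrepancy
        v += pc - ph
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

For a robot at the origin, home at (1, 0), and one landmark at (2, 0), the current view reports the landmark at bearing 0 and range 2, the home view at bearing 0 and range 1, and the resulting vector points toward home.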

    Technical report on Optimization-Based Bearing-Only Visual Homing with Applications to a 2-D Unicycle Model

    Full text link
    We consider the problem of bearing-based visual homing: given a mobile robot which can measure bearing directions with respect to known landmarks, the goal is to guide the robot toward a desired "home" location. We propose a control law based on the gradient field of a Lyapunov function, and give sufficient conditions for global convergence. We show that the well-known Average Landmark Vector method (for which no convergence proof was known) can be obtained as a particular case of our framework. We then derive a sliding mode control law for a unicycle model which follows this gradient field. Neither controller depends on range information. Finally, we also show how our framework can be used to characterize the sensitivity of a home location with respect to noise in the specified bearings. This is an extended version of the conference paper [1] (R. Tron and K. Daniilidis, "An optimization approach to bearing-only visual homing with applications to a 2-D unicycle model," in IEEE International Conference on Robotics and Automation, 2014), containing additional proofs.
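The Average Landmark Vector rule mentioned in the abstract is simple to state in code. A minimal sketch of the classical ALV model (compass-aligned agent; landmark and robot positions are used here only to simulate the bearing measurements a real robot would observe):

```python
import numpy as np

def alv(landmarks, pos):
    """Average Landmark Vector: mean of unit bearing vectors to landmarks."""
    units = []
    for lm in landmarks:
        d = np.asarray(lm, float) - np.asarray(pos, float)
        units.append(d / np.linalg.norm(d))
    return np.mean(units, axis=0)

def alv_home_direction(landmarks, pos, home):
    """Homing direction under the ALV model: ALV(current) - ALV(home)."""
    h = alv(landmarks, pos) - alv(landmarks, home)
    return h / np.linalg.norm(h)
```

Note that only bearing (unit) vectors enter the rule, consistent with the paper's point that the controllers do not depend on range information.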

    An optimization approach to bearing-only visual homing with applications to a 2-D unicycle model

    Get PDF
    We consider the problem of bearing-based visual homing: given a mobile robot which can measure bearing directions corresponding to known landmarks, the goal is to guide the robot toward a desired "home" location. We propose a control law based on the gradient field of a Lyapunov function, and give sufficient conditions for global convergence. We show that the well-known Average Landmark Vector method (for which no convergence proof was known) can be obtained as a particular case of our framework. We then derive a sliding mode control law for a unicycle model which follows this gradient field. Neither controller depends on range information. Finally, we also show how our framework can be used to characterize the sensitivity of a home location with respect to noise in the specified bearings.
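A unicycle following such a direction field can be simulated in a few lines. The saturated heading controller below is only an illustrative stand-in for the paper's sliding-mode law; gains and speeds are arbitrary.

```python
import numpy as np

def unicycle_step(state, desired_dir, v=0.5, k=2.0, dt=0.05):
    """One Euler step of a unicycle steered toward a desired direction.

    state = (x, y, theta). A saturated proportional heading controller
    drives theta toward the bearing of desired_dir while the vehicle
    moves at constant forward speed v.
    """
    x, y, th = state
    th_des = np.arctan2(desired_dir[1], desired_dir[0])
    # wrap the heading error to (-pi, pi]
    err = np.arctan2(np.sin(th_des - th), np.cos(th_des - th))
    w = k * np.clip(err, -1.0, 1.0)  # saturated turn-rate command
    return (x + v * np.cos(th) * dt,
            y + v * np.sin(th) * dt,
            th + w * dt)
```

Iterating this step with the desired direction recomputed from the gradient field at each pose gives a simple closed-loop homing simulation.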

    An Incremental Navigation Localization Methodology for Application to Semi-Autonomous Mobile Robotic Platforms to Assist Individuals Having Severe Motor Disabilities.

    Get PDF
    In the present work, the author explores the issues surrounding the design and development of an intelligent wheelchair platform incorporating the semi-autonomous system paradigm, to meet the needs of individuals with severe motor disabilities. The author presents a discussion of the problems of navigation that must be solved before any system of this type can be instantiated, and enumerates the general design issues that must be addressed by the designers of such systems. This discussion includes reviews of various methodologies that have been proposed as solutions to the problems considered. Next, the author introduces a new navigation method, called Incremental Signature Recognition (ISR), for use by semi-autonomous systems in structured environments. This method is based on the recognition, recording, and tracking of environmental discontinuities: sensor-reported anomalies in measured environmental parameters. The author then proposes a robust, redundant, dynamic, self-diagnosing sensing methodology for detecting and compensating for hidden failures of single sensors and for sensor idiosyncrasies. This technique is optimized for the detection of spatial discontinuity anomalies. Finally, the author gives details of an effort to realize a prototype ISR-based system, along with insights into the various implementation choices made.
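The notion of a spatial discontinuity can be illustrated with a toy detector over a range scan. This is not the ISR implementation; the threshold and data are illustrative only.

```python
import numpy as np

def spatial_discontinuities(scan, jump=0.5):
    """Flag indices where consecutive range readings jump by more than
    `jump` metres -- a toy version of the environmental-discontinuity
    signatures described above (threshold value is illustrative)."""
    scan = np.asarray(scan, float)
    return np.flatnonzero(np.abs(np.diff(scan)) > jump) + 1
```

For example, a scan sweeping past a doorway shows a large jump in range at the opening, which such a detector would flag as a discontinuity.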

    Mobile Robots Navigation

    Get PDF
    Mobile robot navigation includes different interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors around the world. Research cases are documented in 32 chapters organized into 7 categories, described next.

    Lidar-based teach-and-repeat of mobile robot trajectories

    Full text link

    GPS-denied multi-agent localization and terrain classification for autonomous parafoil systems

    Full text link
    Guided airdrop parafoil systems depend on GPS for localization and landing. In some scenarios, GPS may be unreliable (jammed, spoofed, or disabled), or unavailable (indoor, or extraterrestrial environments). In the context of guided parafoils, landing locations for each system must be pre-programmed manually with global coordinates, which may be inaccurate or outdated, and offer no in-flight adaptability. Parafoil systems in particular have constrained motion, communication, and on-board computation and storage capabilities, and must operate in harsh conditions. These constraints necessitate a comprehensive approach to address the fundamental limitations of these systems when GPS cannot be used reliably. A novel and minimalist approach to visual navigation and multi-agent communication using semantic machine learning classification and geometric constraints is introduced. This approach enables localization and landing site identification for multiple communicating parafoil systems deployed in GPS-denied environments

    Visual Navigation in Unknown Environments

    Get PDF
    Navigation in mobile robotics involves two tasks: keeping track of the robot's position and moving according to a control strategy. When no prior knowledge of the environment is available, the problem is harder still, as the robot must also build a map of its surroundings as it moves. These three problems ought to be solved in conjunction, since they depend on each other. This thesis is about simultaneously controlling an autonomous vehicle, estimating its location and building the map of the environment. The main objective is to analyse the problem from a control-theoretical perspective based on the EKF-SLAM implementation. The contribution of this thesis is the analysis of system properties such as observability, controllability and stability, which allows us to propose an appropriate navigation scheme that produces well-behaved estimators and controllers, and consequently a well-behaved system as a whole. We present a steady-state analysis of the SLAM problem, identifying the conditions that lead to partial observability. It is shown that the effects of partial observability appear even in the ideal linear Gaussian case. This indicates that linearisation is not the only cause of SLAM inconsistency, and that observability must be achieved as a prerequisite to tackling the effects of linearisation. Additionally, full observability is shown to be necessary for diagonalisation of the covariance matrix, an approach often used to reduce the computational complexity of the SLAM algorithm, and one that leads to full controllability, as we show in this work.

    Focusing specifically on a system with a single monocular camera, we present an observability analysis using the nullspace basis of the stripped observability matrix. The aim is to better understand the well-known intuitive behaviour of such systems, such as the need to triangulate features from different positions in order to obtain accurate relative pose estimates between vehicle and camera. By characterising the unobservable directions in monocular SLAM, we are able to identify the vehicle motions required to maximise the number of observable states in the system. When closing the control loop of the SLAM system, both the feedback controller and the estimator are shown to be asymptotically stable. Furthermore, we show that the tracking error does not influence the estimation performance of a fully observable system and, vice versa, that control is not affected by the estimation. Because of this, a higher-level motion strategy is required to enhance estimation, which is especially needed when performing SLAM with a single camera. Considering a real-time application, we propose a control strategy that optimises both the localisation of the vehicle and the feature map by computing the most appropriate control actions or movements. The actions are chosen to maximise an information-theoretic metric. Simulations and real-time experiments demonstrate the feasibility of the proposed control strategy.
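The claim that partial observability arises even in the ideal linear Gaussian case can be checked with a toy example: 1-D SLAM with a static state and relative (landmark minus robot) measurements. This is a minimal sketch, not the thesis's formulation.

```python
import numpy as np

def observability_rank(A, C):
    """Rank of the Kalman observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    return np.linalg.matrix_rank(np.vstack(blocks))

# Ideal linear 1-D SLAM: state = [robot, landmark1, landmark2],
# static dynamics (A = I), purely relative-position measurements.
A = np.eye(3)
C = np.array([[-1.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0]])
print(observability_rank(A, C))  # prints 2 < 3: partially observable
```

The rank deficit corresponds to the unobservable global translation: shifting the robot and both landmarks by the same amount leaves every relative measurement unchanged. Adding one absolute measurement (an extra row [1, 0, 0] in C) restores full rank.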