
    From Optimal Synthesis to Optimal Visual Servoing for Autonomous Vehicles

    This thesis focuses on the characterization of optimal (shortest) paths to a desired position for a robot with unicycle kinematics and an on-board camera with a limited field of view (FOV), which must keep a given feature in sight. In particular, I provide a complete optimal synthesis for the problem, i.e., a language of optimal control words and a global partition of the motion plane induced by the shortest paths, such that each word in the optimal language is uniquely associated with a region and completely describes the shortest path from any starting point in that region to the goal. Moreover, I provide a generalization to arbitrary FOVs, including cases in which the direction of motion is not an axis of symmetry of the FOV, or is not even contained in it. Finally, based on the available shortest-path synthesis, feedback control laws are defined for every point of the motion plane by exploiting geometric properties of the synthesis itself. Furthermore, using a slightly generalized stability analysis setting, namely stability on a manifold, a proof of stability is given for the controlled system. Simulation results are then reported to demonstrate the effectiveness of the proposed technique.
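The setting described above can be sketched with a few lines of code: a unicycle model integrated forward in time, plus a check that the tracked feature stays inside the camera's horizontal FOV. This is a minimal illustrative sketch, not the thesis's synthesis; the function names and the assumption that the FOV is centered on the heading direction are mine.

```python
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """Integrate unicycle kinematics: x' = v cos(theta), y' = v sin(theta), theta' = omega."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

def feature_in_fov(x, y, theta, fx, fy, half_fov):
    """Check whether a fixed feature at (fx, fy) lies within the camera's
    horizontal field of view, assumed centered on the robot's heading."""
    bearing = math.atan2(fy - y, fx - x) - theta
    # Wrap the bearing to [-pi, pi] before comparing against the half-FOV.
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    return abs(bearing) <= half_fov
```

A shortest-path planner under this constraint must choose (v, omega) so that `feature_in_fov` holds along the entire trajectory, which is what induces the partition of the motion plane into regions with distinct optimal control words.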

    Isn't Appearance Enough? - Nonlinear Observability and Observers for Appearance Localization, Mapping, Motion Reconstruction and Servoing Problems and their application to Vehicle Navigation

    In this thesis we investigate how monocular image measurements can be used as the single source of information for a vehicle to sense and navigate through its surroundings. First, we investigate which subset of vehicle location, environment map, and vehicle motion can be retrieved from images alone. In particular, the results apply to the case where neither a model of the vehicle nor odometry or acceleration measurements are available. Then, we investigate the use of the information extracted from images in visual servoing tasks and define a servoing approach, named Appearance Servoing, that explicitly imposes the existing control constraints in the navigation of an appearance map. Finally, we present an experimental case study in which a sequence of images is used to construct a simple topological map of an office environment and then navigate a robot within it.
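The core step of appearance-based localization as summarized above is matching the current view against a stored image map. A minimal sketch, assuming each map image is summarized by a global descriptor vector (the descriptor choice and function name are illustrative, not from the thesis):

```python
import numpy as np

def localize_by_appearance(query_desc, map_descs):
    """Return the index of the map image whose global descriptor is
    closest (Euclidean distance) to the query image's descriptor."""
    dists = np.linalg.norm(map_descs - query_desc, axis=1)
    return int(np.argmin(dists))
```

In a topological appearance map, the matched index identifies the robot's place node; navigation then proceeds by servoing toward the next stored view along the map's connectivity.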

    Recovering Scale in Relative Pose and Target Model Estimation Using Monocular Vision

    A combined relative pose and target object model estimation framework using a monocular camera as the primary feedback sensor has been designed and validated in a simulated robotic environment. The monocular camera is mounted on the end-effector of a robot manipulator and measures the image plane coordinates of a set of point features on a target workpiece object. Using this information, the relative position and orientation, as well as the geometry, of the target object are recovered recursively by a Kalman filter process. The Kalman filter facilitates the fusion of supplemental measurements from range sensors with those gathered by the camera. This process keeps the estimated system state accurate and recovers the true environment scale. Current approaches in the research areas of visual servoing control and mobile robotics are studied for the case where the target object feature point geometry is known prior to the beginning of the estimation. In this case, only the relative pose of the target object is estimated over a sequence of images from a single monocular camera. An observability analysis was carried out to identify the physical configurations of camera and target object for which the relative pose cannot be recovered by measuring only the camera image plane coordinates of the object point features. A popular extension is to estimate the target object model concurrently with the relative pose of the camera frame, a process known as Simultaneous Localization and Mapping (SLAM). The recursive framework was augmented to facilitate this larger estimation problem. The scale of the recovered solution is ambiguous when using measurements from a single camera. A second observability analysis highlights further configurations for which the relative pose and target object model are unrecoverable from camera measurements alone. Instead, measurements which contain the global scale are required to obtain an accurate solution. A set of additional sensors is detailed, including range finders and additional cameras. Measurement models for each are given, which facilitate the fusion of this supplemental data with the original monocular camera image measurements. A complete framework is then derived to combine such sensor measurements and recover an accurate relative pose and target object model estimate. The proposed framework is tested in a simulation environment with a virtual robot manipulator tracking a target workpiece through a relative trajectory. All of the detailed estimation schemes are executed: the single monocular camera cases in which the target object geometry is known and unknown, respectively; a two-camera system in which the measurements are fused within the Kalman filter to recover the scale of the environment; a camera and point range sensor combination which provides a single range measurement at each system time step; and a laser pointer and camera hybrid which concurrently measures the feature point images and a single range metric. The performance of the individual test cases is compared to determine which set of sensors provides robust and reliable estimates for use in real-world robotic applications. Finally, conclusions on the performance of the estimators are drawn and directions for future work are suggested. The camera and range finder combination is shown to accurately recover the proper scale of the estimate and warrants further investigation. Furthermore, early results from the multiple monocular camera setup show superior performance to the other sensor combinations, and interesting possibilities exist for wide field-of-view super-sensors with high frame rates, built from many inexpensive devices.
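The sensor-fusion step described above rests on the standard Kalman measurement update: a range reading, which carries absolute scale, corrects a state estimate whose scale is unobservable from a single camera. A minimal sketch under illustrative assumptions (a scalar state representing the unknown metric scale factor, observed directly by a range finder; the specific numbers are mine, not the thesis's):

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update: fuse measurement z into state x."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)            # corrected state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P   # corrected covariance
    return x_new, P_new

# Illustrative scalar case: the state is the unknown scale factor,
# and a range finder observes it directly (H = [[1]]).
x = np.array([0.5])        # scale guess from monocular measurements alone
P = np.array([[1.0]])      # large uncertainty: scale is unobservable from one camera
z = np.array([2.0])        # range reading carrying absolute scale
x, P = kf_update(x, P, z, np.array([[1.0]]), np.array([[0.1]]))
```

After one such update the estimate moves most of the way toward the metric range reading and its covariance collapses, which is the mechanism by which the supplemental sensors resolve the monocular scale ambiguity.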

    Visual Servoing in the Large

    In this paper we consider the problem of maneuvering an autonomous robot in complex unknown environments using vision. The goal is to accurately servo a wheeled vehicle to a desired posture using only feedback from an on-board camera, taking into account the nonholonomic nature of the vehicle kinematics and the limited field of view of the camera. With respect to existing visual servoing schemes, which achieve similar goals only locally (i.e., when the desired and actual camera views are sufficiently similar), we propose a method to visually navigate the robot through an extended visual map before eventually reaching the desired goal. The map comprises a set of images, previously stored in an exploratory phase, that convey both topological and metric information regarding, respectively, the connectivity through feasible robot paths and the geometry of the environment. Experimental results on a laboratory setup are reported, showing the practicality of the proposed approach.
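The topological side of such a visual map reduces to graph search: nodes are stored images and edges connect views reachable by a feasible robot path, so a route to the goal view can be found by breadth-first search. A minimal sketch (the graph representation and function name are illustrative assumptions):

```python
from collections import deque

def plan_image_path(adjacency, start, goal):
    """Breadth-first search over the image graph: nodes are stored images,
    edges connect views reachable by a feasible robot path.
    Returns the list of image indices from start to goal, or None."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:   # walk predecessors back to the start
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adjacency.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None
```

Each edge along the returned path is then executed by a local visual servoing step toward the next stored view, which is what lets the scheme work "in the large" rather than only near the goal image.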