
    Positional estimation techniques for an autonomous mobile robot

    Techniques for positional estimation of a mobile robot navigating in an indoor environment are described. A comprehensive review of the positional estimation techniques reported in the literature is presented first. The techniques are divided into four types, and each is discussed briefly. Two kinds of environment are considered for positional estimation: mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with a single visual camera that can be panned and tilted, and a 3-D description (world model) of the environment is given. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.
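    As a rough illustration of this class of technique, the sketch below recovers a camera (and hence robot) pose from correspondences between 3-D world-model landmarks and their image projections using OpenCV's solvePnP. The landmark coordinates, intrinsics, and pixel positions are invented for the example; the paper's own methods are not reproduced here.

```python
import numpy as np
import cv2  # OpenCV: solvePnP recovers camera pose from 2D-3D matches

# Hypothetical world-model landmarks (metres) and their detected image
# projections (pixels); in practice these would come from the 3-D
# building description and from feature matching in the camera image.
world_pts = np.array([[0, 0, 0], [4, 0, 0], [4, 0, 3],
                      [0, 0, 3], [2, 5, 1.5], [0, 5, 0]], dtype=np.float64)
image_pts = np.array([[310, 420], [510, 415], [505, 180],
                      [312, 185], [400, 300], [330, 350]], dtype=np.float64)

# Assumed pinhole intrinsics (focal length 800 px, 640 x 480 image).
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0,   0,   1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)              # rotation: world -> camera frame
robot_position = (-R.T @ tvec).ravel()  # camera centre in world frame
print("estimated robot position:", robot_position)
```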

    A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever

    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to retrieve tools, equipment, or other objects that have become detached from the spacecraft, but it will also be able to rescue a crew member who has become inadvertently de-tethered. Later goals include cooperative operations between a crew member and the Retriever, such as fetching a tool required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high-level task planner. Typical commands the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using the lens zooming capability, and/or requesting that the task planner reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner that realizes these capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high-level task planner. In addition, typical plans generated to achieve visual goals under various scenarios are discussed. Specific topics addressed include object search strategies, repositioning of the EVAR to improve the quality of information obtained from the sensors, and complementary usage of the sensors and redundant capabilities.
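    The abstract does not specify the VSP's data structures, but a toy sketch of the objective-to-action dispatch it describes might look like the following; every name and rule here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Objective:
    kind: str          # "recognize", "locate", or "detect_obstacles"
    target: str        # object of interest, e.g. "torque_wrench"
    distance_m: float  # rough range estimate from the task planner

def plan_actions(obj: Objective) -> list[str]:
    """Toy model-based planner: map a visual objective to an ordered
    sequence of sensor and processing actions."""
    actions = []
    # Sensor configuration: a distant target needs the zoom capability.
    if obj.distance_m > 10.0:
        actions.append("zoom_lens(narrow_fov)")
    else:
        actions.append("zoom_lens(wide_fov)")
    actions.append(f"point_camera_at(predicted_position('{obj.target}'))")
    # Algorithm selection depends on the objective type.
    if obj.kind == "recognize":
        actions.append(f"run(model_matcher, '{obj.target}')")
    elif obj.kind == "locate":
        actions.append(f"run(stereo_ranging, '{obj.target}')")
    else:
        actions.append("run(obstacle_map_update)")
    # Fall-back: ask the task planner to reposition the EVAR
    # if the first viewpoint fails.
    actions.append("on_failure(request_reposition)")
    return actions

print(plan_actions(Objective("locate", "torque_wrench", 15.0)))
```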

    Pose estimation for objects with planar surfaces using eigenimage and range data analysis

    In this paper we present a novel method for estimating the pose of 3D objects with well-defined planar surfaces. Specifically, we investigate the feasibility of estimating the object pose using an approach that combines the standard eigenspace analysis technique with range data analysis. Eigenspace analysis is employed to constrain one object rotation and to reject surfaces that are not compatible with a model object. The remaining two object rotations are estimated by computing the normal to the surface from the range data. The proposed pose estimation scheme has been successfully applied to scenes containing polyhedral objects, and experimental results are reported.
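    A minimal sketch of the range-data half of this scheme, assuming the surface normal comes from a least-squares plane fit via SVD (the abstract does not state the fitting method, and the sample points below are synthetic):

```python
import numpy as np

def surface_normal(range_pts: np.ndarray) -> np.ndarray:
    """Least-squares plane normal of an N x 3 block of range data:
    the right singular vector with the smallest singular value of
    the mean-centred points."""
    centred = range_pts - range_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    n = vt[-1]
    return n if n[2] >= 0 else -n  # orient toward the sensor

# Synthetic range samples from one planar face of a polyhedral object.
rng = np.random.default_rng(0)
xy = rng.uniform(-0.1, 0.1, size=(200, 2))
pts = np.c_[xy, 0.5 * xy[:, 0] + 0.2 * xy[:, 1] + 1.0]  # tilted plane

n = surface_normal(pts)
# Two of the three object rotations follow from the normal direction;
# the rotation about the normal itself is the one the eigenimage
# matching constrains (not reproduced here).
tilt = np.degrees(np.arccos(n[2]))            # slant away from the z-axis
azimuth = np.degrees(np.arctan2(n[1], n[0]))  # direction of the slant
print(f"normal={n}, tilt={tilt:.1f} deg, azimuth={azimuth:.1f} deg")
```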

    A Flexible Image Processing Framework for Vision-based Navigation Using Monocular Image Sensors

    On-Orbit Servicing (OOS) encompasses all operations related to servicing satellites and performing other work on-orbit, such as reduction of space debris. Servicing satellites includes repairs, refueling, attitude control, and other tasks that may be needed to put a failed satellite back into working condition. A servicing satellite requires accurate position and orientation (pose) information about the target spacecraft. A wide range of sensor families is available to meet this need. However, when it comes to minimizing the mass, volume, and power required by a sensor system, monocular imaging sensors generally perform very well. A disadvantage, compared to LIDAR sensors, is that costly computations are needed to process the sensor data. The method presented in this paper addresses these problems through three design principles. First, keep the computational burden as low as possible. Second, utilize different algorithms and choose among them, depending on the situation, to obtain the most stable results. Third, stay modular and flexible. The software is designed primarily for On-Orbit Servicing tasks in which, for example, a servicer spacecraft approaches an uncooperative client spacecraft that cannot aid in the process in any way, as it is assumed to be completely passive. Image processing is used for navigating to the client spacecraft. In this scenario, it is vital to obtain accurate distance and bearing information until, in the last few meters, all six degrees of freedom need to be known: the smaller the distance between the spacecraft, the more accurate the pose estimates must be. The algorithms used here are tested and optimized on a sophisticated rendezvous and docking simulation facility, the second-generation European Proximity Operations Simulator (EPOS 2.0), located at the German Space Operations Center (GSOC) in Weßling, Germany. This simulation environment is real-time capable and provides an interface for testing sensor hardware in a closed-loop configuration. The results of these tests are summarized in the paper as well. Finally, an outlook on future work is given, with the intention of providing long-term goals, as the paper presents a snapshot of ongoing work that is far from complete. It also serves as an overview of additions that could further improve the presented method.
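    As a hedged illustration of the monocular distance-and-bearing problem the paper describes (the paper's actual algorithms are not reproduced here), a pinhole-model estimate from a target of known physical size might look like this; all parameter names and values are invented.

```python
import math

def range_and_bearing(fx_px: float, fy_px: float,
                      cx_px: float, cy_px: float,
                      target_width_m: float,
                      bbox: tuple[float, float, float, float]):
    """Pinhole-model range and bearing of a target of known physical
    width from its bounding box (x, y, w, h) in a monocular image.
    Illustrative interface only, not the paper's."""
    x, y, w, h = bbox
    distance = fx_px * target_width_m / w      # similar triangles
    u, v = x + w / 2, y + h / 2                # bbox centre (pixels)
    bearing_az = math.atan2(u - cx_px, fx_px)  # horizontal angle
    bearing_el = math.atan2(v - cy_px, fy_px)  # vertical angle
    return distance, bearing_az, bearing_el

d, az, el = range_and_bearing(1200.0, 1200.0, 640.0, 512.0,
                              target_width_m=2.5,
                              bbox=(580.0, 470.0, 60.0, 58.0))
print(f"range={d:.1f} m, azimuth={math.degrees(az):.2f} deg, "
      f"elevation={math.degrees(el):.2f} deg")
```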

    Trifocal Relative Pose from Lines at Points and its Efficient Solution

    We present a new minimal problem for relative pose estimation mixing point features with lines incident at points observed in three views, together with an efficient homotopy continuation solver. We demonstrate the generality of the approach by analyzing and solving an additional problem with mixed point and line correspondences in three views. The minimal problems include correspondences of (i) three points and one line and (ii) three points and two lines through two of the points, which is reported and analyzed here for the first time. These problems are difficult to solve, as they have 216 and, as shown here, 312 solutions, but they cover important practical situations in which line and point features appear together, e.g., in urban scenes or when observing curves. We demonstrate that even such difficult problems can be solved robustly using a suitable homotopy continuation technique, and we provide an implementation optimized for minimal problems that can be integrated into engineering applications. Our simulated and real experiments demonstrate our solvers in the camera geometry computation task in structure from motion. We show that the new solvers allow reconstructing challenging scenes where the standard two-view initialization of structure from motion fails.
    Comment: This material is based upon work supported by the National Science Foundation under Grant No. DMS-1439786 while most authors were in residence at Brown University's Institute for Computational and Experimental Research in Mathematics (ICERM) in Providence, RI.
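    To make the homotopy continuation idea concrete, here is a toy single-variable tracker, nothing like the optimized multivariate solver the paper provides: a start system with known roots is continuously deformed into the target system while the roots are followed numerically.

```python
import numpy as np

# Track the roots of g(x) = x^3 - 1 (cube roots of unity) to the roots
# of f(x) = x^3 - 2x + 2 along H(x, t) = (1-t)*gamma*g(x) + t*f(x),
# t: 0 -> 1, with an Euler predictor and Newton corrector. The random
# complex gamma ("gamma trick") avoids singular paths. Minimal-problem
# solvers do the same for large multivariate systems.
f, df = lambda x: x**3 - 2*x + 2, lambda x: 3*x**2 - 2
g, dg = lambda x: x**3 - 1,       lambda x: 3*x**2
gamma = 0.6 + 0.8j

def track(x, steps=200):
    for k in range(steps):
        t0, t1 = k / steps, (k + 1) / steps
        # Euler predictor: dx/dt = -(dH/dt) / (dH/dx).
        dHdt = f(x) - gamma * g(x)
        dHdx = (1 - t0) * gamma * dg(x) + t0 * df(x)
        x -= (t1 - t0) * dHdt / dHdx
        # Newton corrector at t = t1.
        for _ in range(3):
            H  = (1 - t1) * gamma * g(x) + t1 * f(x)
            Hx = (1 - t1) * gamma * dg(x) + t1 * df(x)
            x -= H / Hx
    return x

starts = [np.exp(2j * np.pi * k / 3) for k in range(3)]  # roots of g
for r in (track(s) for s in starts):
    print(f"root ~= {complex(r):.6f}   |f(root)| = {abs(f(r)):.2e}")
```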