
    Robot Acting on Moving Bodies (RAMBO): Interaction with tumbling objects

    Interaction with tumbling objects will become more common as human activities in space expand. When attempting to interact with a large, complex object translating and rotating in space, a human operator relying only on visual and mental capacities may be unable to estimate the object's motion, plan actions, or control those actions. RAMBO, a robot system equipped with a camera that, given a sequence of simple tasks, can perform these tasks on a tumbling object, is being developed. RAMBO is given a complete geometric model of the object. A low-level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to relative locations near the object sufficient for achieving the tasks. More specifically, low-level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. Object pose estimation is a Hough transform method that accumulates position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows the use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Trajectories are then created using dynamic interpolation between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.
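
    The pose-voting step can be illustrated with a minimal sketch (not RAMBO's actual implementation): every hypothetical match between a model corner triple and an image corner triple yields a pose estimate, and the hypotheses are accumulated in a coarse Hough accumulator whose strongest bin is kept. Corner sets are assumed to be (N, 2) NumPy arrays, and estimate_pose is a hypothetical stand-in for the paper's closed-form rotation/scaled-orthographic decomposition.

```python
import itertools
import numpy as np

def estimate_pose(model_triple, image_triple):
    """Hypothetical stand-in for the paper's closed-form pose recovery
    (rotations times a scaled orthographic projection, via 2-D lookup
    tables in the original). Returns a rough (x, y, theta) hypothesis."""
    mc, ic = model_triple.mean(axis=0), image_triple.mean(axis=0)
    # Align the first corner's direction about each centroid.
    theta = (np.arctan2(*(image_triple[0] - ic)[::-1])
             - np.arctan2(*(model_triple[0] - mc)[::-1]))
    return np.array([ic[0] - mc[0], ic[1] - mc[1], theta])

def hough_pose_vote(model_corners, image_corners, bins=(5.0, 5.0, 0.1)):
    """Vote every model-triple / image-triple match into a coarse pose
    accumulator and return the most supported pose bin."""
    acc = {}
    for m in itertools.combinations(range(len(model_corners)), 3):
        for i in itertools.permutations(range(len(image_corners)), 3):
            pose = estimate_pose(model_corners[list(m)], image_corners[list(i)])
            key = tuple(np.round(pose / bins).astype(int))
            acc[key] = acc.get(key, 0) + 1
    best = max(acc, key=acc.get)
    return np.array(best) * bins, acc[best]
```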

    Robot acting on moving bodies (RAMBO): Preliminary results

    A robot system called RAMBO, equipped with a camera, is being developed; given a sequence of simple tasks, it can perform these tasks on a moving object. RAMBO is given a complete geometric model of the object. A low-level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to relative locations near the object sufficient for achieving the tasks. More specifically, low-level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. Object pose estimation is a Hough transform method that accumulates position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows the use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Trajectories are then created using parametric cubic splines between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.
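
    As a small illustration of the spline step, the sketch below interpolates one tool trajectory segment between an initial and a goal state using per-coordinate cubic polynomials; the abstract does not specify the exact parameterization, so the cubic Hermite endpoint-velocity formulation and all values here are assumptions.

```python
import numpy as np

def cubic_segment(p0, v0, p1, v1, n=50):
    """Per-coordinate parametric cubic from state (p0, v0) to (p1, v1),
    sampled at n points over normalized time t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2*t**3 - 3*t**2 + 1          # cubic Hermite basis
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p0 + h10*v0 + h01*p1 + h11*v1

# Example: blend a 3-D tool position toward a point on the moving object.
p_init, v_init = np.array([0.0, 0.0, 0.5]), np.array([0.1, 0.0, 0.0])
p_goal, v_goal = np.array([1.0, 0.8, 0.2]), np.array([0.0, 0.2, 0.0])
trajectory = cubic_segment(p_init, v_init, p_goal, v_goal)  # shape (50, 3)
```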

    A Goal Oriented Navigation System Using Vision

    This paper addresses a goal-oriented navigation framework for autonomous systems, designed in a behavior-based manner. The framework is built on a behavioral architecture and relies on a monocular camera to obtain the location of the goal. It employs a virtual-physics-based method to steer the robot toward the goal while avoiding unknown obstacles located along its path. Simulation results validate the performance of the proposed framework.
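
    The virtual-physics idea is commonly realized as an artificial potential field: an attractive force pulls the robot toward the goal while repulsive forces push it away from obstacles inside an influence radius. The sketch below is generic, not the paper's implementation; the gains and radius are illustrative assumptions.

```python
import numpy as np

K_ATT, K_REP, INFLUENCE_R = 1.0, 0.5, 2.0   # illustrative gains (assumed)

def steering_force(robot, goal, obstacles):
    """Attractive pull toward the goal plus repulsive push from each
    obstacle closer than the influence radius."""
    force = K_ATT * (goal - robot)                       # attraction
    for obs in obstacles:
        d_vec = robot - obs
        d = np.linalg.norm(d_vec)
        if 1e-9 < d < INFLUENCE_R:
            # Repulsion grows sharply as the robot nears the obstacle.
            force += K_REP * (1.0 / d - 1.0 / INFLUENCE_R) * d_vec / d**3
    return force

# One control step: advance a short distance along the resulting force.
robot, goal = np.array([0.0, 0.0]), np.array([5.0, 3.0])
obstacles = [np.array([2.0, 1.5])]
robot = robot + 0.05 * steering_force(robot, goal, obstacles)
```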

    Dimensional Measurement of Objects in Single Images Independent from Restrictive Camera Parameters

    Recent advances in microelectronics have produced new generations of digital cameras with variable focal lengths and pixel sizes which facilitate automatic and high-quality imaging. However, without knowing the values of these critical camera parameters, it is difficult to measure objects in images using existing algorithms. This work investigates this important problem, aiming at dimensional measurements (e.g., diameter, length, width, and height) of regularly shaped physical objects in a single 2-D image, free from restrictive camera parameters. Traditionally, such measurements require determining the pose of a certain reference feature, i.e., the location and orientation of the feature relative to the camera, in order to establish a geometric model for the dimensional calculation. Points or lines associated with certain shapes (including triangles and rectangles) are often used as reference features for the pose estimation. However, with only a single image as the input, these methods assume the availability of 3-D spatial relationships of the points or lines, which limits their application to practical problems where this knowledge is unavailable or difficult to estimate, such as image-based food portion size estimation in dietary assessment. In addition to points and lines, the circle has also been used as a reference feature because it has a single elliptic perspective projection in images. However, almost all existing approaches treat the focal length and pixel size as necessary prior information. Here, we propose a new approach to dimensional estimation based on single-image input using the circular reference feature and a pinhole model without considering camera distortion. Without knowing the focal length and pixel size, our approach provides a closed-form solution for the orientation estimation of the circular feature. With additional information provided, such as the size of the circular reference feature, analytical solutions are provided for physical length estimation between an arbitrary pair of points on the reference plane. Studies using both synthetic and actual objects have been conducted to evaluate this new method, which exhibited satisfactory results. This method has also been applied to the measurement of food dimensions based on digital pictures of foods on circular dining plates.
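
    A much-simplified sketch of the underlying geometry, assuming scaled orthographic projection rather than the paper's full perspective solution: the known-diameter circle projects to an ellipse whose major axis preserves the true diameter (fixing the image scale) and whose axis ratio gives the plane's tilt, so a distance between two points on the reference plane can be recovered by undoing the foreshortening along the minor-axis direction. All names and parameters are illustrative.

```python
import numpy as np

def plane_distance(p1, p2, ellipse, circle_diameter):
    """Metric distance between two image points on the plane of a
    circular reference of known diameter (orthographic approximation).

    ellipse: (major_px, minor_px, angle) of the fitted ellipse, where
    angle is the image-plane orientation of the major axis."""
    major_px, minor_px, angle = ellipse
    scale = circle_diameter / major_px        # metric units per pixel
    tilt = np.arccos(minor_px / major_px)     # plane tilt from axis ratio

    # Express the displacement in the ellipse's axis frame.
    d = np.asarray(p2, float) - np.asarray(p1, float)
    c, s = np.cos(angle), np.sin(angle)
    along_major = c * d[0] + s * d[1]
    along_minor = -s * d[0] + c * d[1]

    # Undo foreshortening along the minor axis, then convert to metric.
    along_minor /= np.cos(tilt)
    return scale * np.hypot(along_major, along_minor)
```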

    Map-Based Localization for Unmanned Aerial Vehicle Navigation

    Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied / GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined, with many moving obstacles. There are many solutions for localization in GNSS-denied environments, and many different technologies are used. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for the sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; thus the amount of data and the amount of time spent collecting data are reduced, as there is no need to re-observe the same areas multiple times. This dissertation proposes a solution for fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of UAV navigation in structured environments. Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments. An assessment was performed on tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost, and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation proposes to increase the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show sub-metre positional accuracies were achieved, while reducing the number of tracking losses throughout the image sequence. It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure to improve runtime performance.
    Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose estimation accuracies over ViSP. The novelty of this algorithm is an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, and untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from an unattainable result using ViSP to 2 cm positional accuracies in large indoor environments. The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame, and initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method uses vertical line matching to register the reference model views with a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%. Further, the number of incorrect matches was reduced by 80%.
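
    Geometric hashing, used in the third contribution for automatic initialization, can be sketched generically. The dissertation matches vertical lines between model views and input images; the hypothetical variant below uses 2-D point features to keep it short. Offline, every model feature is hashed by its coordinates in a frame defined by each ordered feature pair; online, scene features expressed in a trial scene basis vote for compatible model bases.

```python
import itertools
from collections import defaultdict
import numpy as np

def basis_coords(p, b0, b1):
    """Coordinates of p in the similarity-invariant frame that places
    b0 at the origin and b1 at (1, 0)."""
    v, u = b1 - b0, p - b0
    return np.array([u @ v, v[0]*u[1] - v[1]*u[0]]) / (v @ v)

def build_table(model_points, q=0.25):
    """Offline phase: hash every model point in every ordered basis."""
    table = defaultdict(list)
    for i, j in itertools.permutations(range(len(model_points)), 2):
        for k, p in enumerate(model_points):
            if k in (i, j):
                continue
            coords = basis_coords(p, model_points[i], model_points[j])
            table[tuple(np.round(coords / q).astype(int))].append((i, j))
    return table

def recognize(table, scene_points, q=0.25):
    """Online phase: express scene points in one trial basis (real
    systems try several) and vote for the model basis they match."""
    votes = defaultdict(int)
    b0, b1 = scene_points[0], scene_points[1]
    for p in scene_points[2:]:
        coords = basis_coords(p, b0, b1)
        for model_basis in table[tuple(np.round(coords / q).astype(int))]:
            votes[model_basis] += 1
    return max(votes, key=votes.get) if votes else None
```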

    Localization and Control of an Autonomous Mobile Robot


    Environmental map generation using LIDAR - 3D perception system

    Advisor: Pablo Siqueira Meirelles. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica. Abstract: This thesis presents the study and development of a perception system based on LIDAR telemetric sensors. An LMS-3D three-dimensional laser-scanning platform was built to support autonomous robot navigation. The navigable area is obtained from telemetric maps, characterized with occupancy grid (OG) algorithms (in two dimensions with the third dimension encoded as color, and in 3D) and with the calculation of vector gradients. Two types of navigable areas are characterized: (i) the primary navigation area, the free area inside the OG; and (ii) the continuous navigation area, the sum of the continuous areas and the gradients classified against a given threshold. This threshold indicates whether an area is navigable given the robot's characteristics. The approach was evaluated experimentally in a real environment, covering obstacle detection and the identification of discontinuities. Doctoral program: Solid Mechanics and Mechanical Design; degree: Doctor of Mechanical Engineering.
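
    The navigable-area test can be sketched as follows, assuming the LIDAR point cloud has already been registered in a common frame; the cell size, slope threshold, and function names are illustrative assumptions rather than the thesis's actual values.

```python
import numpy as np

CELL = 0.1        # occupancy-grid cell size in metres (assumed)
MAX_SLOPE = 0.3   # maximum traversable gradient for this robot (assumed)

def navigable_mask(points, shape=(200, 200)):
    """Rasterize LIDAR points (x, y, z) into a 2-D elevation grid, then
    mark the cells whose local gradient magnitude stays under the
    robot-specific threshold."""
    elev = np.zeros(shape)
    ix = np.clip((points[:, 0] / CELL).astype(int), 0, shape[0] - 1)
    iy = np.clip((points[:, 1] / CELL).astype(int), 0, shape[1] - 1)
    elev[ix, iy] = points[:, 2]        # later returns overwrite earlier ones

    gx, gy = np.gradient(elev, CELL)   # vector gradient of the elevation map
    slope = np.hypot(gx, gy)
    return slope < MAX_SLOPE           # True where the area is navigable
```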

    Service Robots for Hospitals: Key Technical Issues


    The 1989 Goddard Conference on Space Applications of Artificial Intelligence

    The following topics are addressed: mission operations support; planning and scheduling; fault isolation/diagnosis; image processing and machine vision; data management; and modeling and simulation.