
    3D Spectral Domain Registration-Based Visual Servoing

    This paper presents a spectral domain registration-based visual servoing scheme that works on 3D point clouds. Specifically, we propose a 3D model/point cloud alignment method, which works by finding a global transformation between reference and target point clouds using spectral analysis. A 3D Fast Fourier Transform (FFT) in R^3 is used for the translation estimation, and the real spherical harmonics in SO(3) are used for the rotation estimation. Such an approach allows us to derive a decoupled 6 degrees of freedom (DoF) controller, where we use gradient ascent optimisation to minimise translation and rotational costs. We then show how this methodology can be used to regulate a robot arm to perform a positioning task. In contrast to existing state-of-the-art depth-based visual servoing methods, which require either dense depth maps or dense point clouds, our method works well with partial point clouds and can effectively handle larger transformations between the reference and target positions. Furthermore, the use of spectral data (instead of spatial data) for transformation estimation makes our method robust to sensor-induced noise and partial occlusions. We validate our approach through experiments using point clouds acquired by a robot-mounted depth camera. The obtained results demonstrate the effectiveness of our visual servoing approach.
    Comment: Accepted to the 2023 IEEE International Conference on Robotics and Automation (ICRA'23).
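    The translation estimation step can be illustrated with standard 3D phase correlation. The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation; the voxelize helper, grid size, extent, and synthetic point clouds are assumptions made for the example.

import numpy as np

def voxelize(points, grid_size=64, extent=2.0):
    """Rasterise an (N, 3) point cloud (metres) into a cubic occupancy grid."""
    idx = np.clip(((points / extent + 0.5) * grid_size).astype(int), 0, grid_size - 1)
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

def estimate_translation(ref_grid, tgt_grid):
    """Phase correlation via a 3D FFT: the peak of the inverse FFT of the
    normalised cross-power spectrum is the voxel shift taking ref to tgt."""
    cross = np.conj(np.fft.fftn(ref_grid)) * np.fft.fftn(tgt_grid)
    corr = np.fft.ifftn(cross / (np.abs(cross) + 1e-9)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    dims = np.array(corr.shape)
    peak[peak > dims / 2] -= dims[peak > dims / 2]   # wrap to signed offsets
    return peak                                      # shift in voxels

rng = np.random.default_rng(0)
ref_pts = rng.uniform(-0.5, 0.5, size=(500, 3))
tgt_pts = ref_pts + np.array([0.25, -0.125, 0.0])    # known shift in metres
shift = estimate_translation(voxelize(ref_pts), voxelize(tgt_pts))
print("estimated translation [m]:", shift * (2.0 / 64))

    Rotation estimation in the paper is handled separately in the spherical harmonic domain, which is what makes the resulting 6-DoF controller decoupled.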

    Analysis and Observations from the First Amazon Picking Challenge

    This paper presents an overview of the inaugural Amazon Picking Challenge, along with a summary of a survey conducted among the 26 participating teams. The goal of the challenge was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team's background, mechanism design, perception apparatus, and planning and control approach. We identify trends in these data, correlate them with each team's success in the competition, and discuss observations and lessons learned based on the survey results and the authors' personal experiences during the challenge.

    Human Supervised Semi-Autonomous Approach for the DARPA Robotics Challenge Door Task

    As the field of autonomous robots continues to advance, there is still tremendous benefit in researching human-supervised robot systems for fielding them in practical applications. The DARPA Robotics Challenge (DRC), inspired by the Fukushima nuclear power plant disaster, has been a major research and development program for the past three years, advancing the field of human-supervised control of robots for responding to natural and man-made disasters. The overall goal of the research presented in this thesis is to realise a new approach for semi-autonomous control of the Atlas humanoid robot under discrete commands from the human operator. A combination of autonomous and semi-autonomous perception and manipulation techniques to accomplish the task of detecting, opening and walking through a door is presented. The methods are validated in various scenarios relevant to the DRC door task.

    Visual 3-D SLAM from UAVs

    The aim of this paper is to present, test and discuss the implementation of Visual SLAM techniques on images taken from Unmanned Aerial Vehicles (UAVs) outdoors, in partially structured environments. Each stage of the whole process is discussed in order to obtain more accurate localization and mapping from UAV flights. Firstly, the issues related to the visual features of objects in the scene, their distance to the UAV, and the associated image acquisition system and its calibration are evaluated for improving the whole process. Other important issues considered relate to the image processing techniques, such as interest point detection, the matching procedure and the scaling factor. The whole system has been tested using the COLIBRI mini UAV in partially structured environments. The localization results, tested against the GPS information of the flights, show that Visual SLAM delivers reliable localization and mapping, making it suitable for some outdoor applications when flying UAVs.
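    The interest point detection and matching steps mentioned above can be sketched with off-the-shelf tools. The snippet below is a minimal illustration using OpenCV's ORB detector and a brute-force matcher with a ratio test; the synthetic frames, the choice of ORB, and the 0.75 ratio threshold are assumptions for the example, and the paper's actual detector and matching procedure may differ.

import numpy as np
import cv2

# Synthetic stand-ins for two consecutive UAV frames (real frames would come
# from the onboard camera): a textured image and a copy shifted by a few pixels.
rng = np.random.default_rng(1)
img1 = rng.integers(0, 256, size=(480, 640)).astype(np.uint8)
img2 = np.roll(img1, shift=(5, 12), axis=(0, 1))

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with a ratio test to drop ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# The surviving correspondences (together with the scaling-factor handling the
# paper discusses) would feed the SLAM filter that estimates pose and map.
print(f"{len(good)} putative correspondences between consecutive frames")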

    Map-Based Localization for Unmanned Aerial Vehicle Navigation

    Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied / GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined and contain many moving obstacles. There are many solutions for localization in GNSS-denied environments, based on many different technologies. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for the sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; thus the amount of data and the time spent collecting data are reduced, as there is no need to re-observe the same areas multiple times. This dissertation proposes a solution to the task of fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of UAV navigation in structured environments.
    Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture, because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments. An assessment was performed on tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm.
    The first contribution of this dissertation is to increase the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved, while reducing the number of tracking losses throughout the image sequence. It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure, improving runtime performance.
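    As a concrete illustration of the quantity a model-based tracker recovers, the sketch below estimates the camera pose from correspondences between 3D model points (for example, building corners) and their 2D image projections using OpenCV's solvePnP. The model points, intrinsics, and simulated detections are placeholders; the dissertation itself uses ViSP's edge-based model tracker rather than point-based PnP, so this is only an analogy for the pose computation, not the proposed method.

import numpy as np
import cv2

# Hypothetical 3D model points on a building facade (metres, model frame).
model_pts = np.array([[0, 0, 0], [4, 0, 0], [4, 0, 3], [0, 0, 3],
                      [0, 2, 0], [4, 2, 0]], dtype=np.float64)

# Assumed pinhole intrinsics from camera calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Simulate the 2D detections from a known ground-truth pose; on a real system
# these would come from matching model features in the UAV image.
rvec_true = np.array([[0.1], [0.2], [0.0]])
tvec_true = np.array([[-2.0], [-1.0], [8.0]])
image_pts, _ = cv2.projectPoints(model_pts, rvec_true, tvec_true, K, None)

# Recover the camera pose from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)                # rotation: model frame -> camera frame
cam_pos = (-R.T @ tvec).ravel()           # camera position in the model frame
print("recovered camera position [m]:", cam_pos)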
    Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose estimation accuracies relative to ViSP. The novelty of this algorithm is the implementation of an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, and untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from an unattainable result using ViSP to 2 cm positional accuracies in large indoor environments.
    The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame, and initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method uses vertical line matching to register the reference model views against a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%. Further, the number of incorrect matches was reduced by 80%.
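    The geometric hashing idea behind the absolute localization step can be sketched in a few lines. The example below is reduced to 2D point features for brevity (the dissertation hashes vertical-line features from reference model views); the quantisation step, toy coordinates, and function names are illustrative assumptions. Offline, every model feature is stored under every ordered basis pair in similarity-invariant coordinates; online, one scene basis is chosen and votes are cast for the model bases that explain the scene features.

import numpy as np
from collections import defaultdict
from itertools import permutations

def hash_key(p, step=0.25):
    return (round(p[0] / step), round(p[1] / step))

def to_basis(points, i, j):
    """Express all points in similarity-invariant coordinates of basis (i, j)."""
    origin, axis = points[i], points[j] - points[i]
    scale = np.linalg.norm(axis)
    rot = np.array([[axis[0], axis[1]], [-axis[1], axis[0]]]) / scale
    return (points - origin) @ rot.T / scale

def build_table(model):
    """Offline: hash every model point under every ordered basis pair."""
    table = defaultdict(list)
    for i, j in permutations(range(len(model)), 2):
        for p in to_basis(model, i, j):
            table[hash_key(p)].append((i, j))
    return table

def recognise(table, scene):
    """Online: pick one scene basis and vote for the model bases it agrees with."""
    votes = defaultdict(int)
    for p in to_basis(scene, 0, 1):
        for basis in table.get(hash_key(p), []):
            votes[basis] += 1
    return max(votes, key=votes.get) if votes else None

model = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0.5, 1.5]])
scene = model[[2, 3, 4, 0, 1]] * 2.0 + np.array([3.0, -1.0])  # scaled, shifted copy
print("best matching model basis:", recognise(build_table(model), scene))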

    Ultra high frequency (UHF) radio-frequency identification (RFID) for robot perception and mobile manipulation

    Personal robots with autonomy, mobility, and manipulation capabilities have the potential to dramatically improve quality of life for various user populations, such as older adults and individuals with motor impairments. Unfortunately, unstructured environments present many challenges that hinder robot deployment in ordinary homes. This thesis seeks to address some of these challenges through a new robotic sensing modality that leverages a small amount of environmental augmentation in the form of Ultra High Frequency (UHF) Radio-Frequency Identification (RFID) tags. Previous research has demonstrated the utility of infrastructure tags (affixed to walls) for robot localization; in this thesis, we specifically focus on tagging objects. Owing to their low cost and passive (battery-free) operation, users can apply UHF RFID tags to hundreds of objects throughout their homes. The tags provide two valuable properties for robots: a unique identifier and a received signal strength indicator (RSSI, the strength of a tag's response). This thesis explores robot behaviors and radio frequency perception techniques using robot-mounted UHF RFID readers that enable a robot to efficiently discover, locate, and interact with UHF RFID tags applied to objects and people of interest. The behaviors and algorithms explicitly rely on the robot's mobility and manipulation capabilities to provide multiple opportunistic views of the complex electromagnetic landscape inside a home environment.
    The electromagnetic properties of RFID tags change when they are applied to common household objects. Objects can have varied material properties, can be placed in diverse orientations, and can be relocated to completely new environments. We present a new class of optimization-based techniques for RFID sensing that are robust to the variation in tag performance caused by these complexities. We discuss a hybrid global-local search algorithm in which a robot employing long-range directional antennas searches for tagged objects by maximizing expected RSSI measurements; that is, the robot attempts to position itself (1) near a desired tagged object and (2) oriented towards it. The robot first performs a sparse, global RFID search to locate a pose in the neighborhood of the tagged object, followed by a series of local search behaviors (bearing estimation and RFID servoing) to refine the robot's state within the local basin of attraction. We report on RFID search experiments performed in Georgia Tech's Aware Home (a real home). Our optimization-based approach yields superior performance compared to state-of-the-art tag localization algorithms, does not require RF sensor models, is easy to implement, and generalizes to other short-range RFID sensor systems embedded in a robot's end effector. We demonstrate proof-of-concept applications, such as medication delivery and multi-sensor fusion, using these techniques. Through our experimental results, we show that UHF RFID is a complementary sensing modality that can assist robots in unstructured human environments.
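    The local search behaviours mentioned above (bearing estimation followed by RFID servoing) can be sketched as follows. This is a minimal illustration, not the thesis implementation: the robot pans its directional antenna, records the RSSI of each read of the target tag, and turns towards the pan angle with the strongest smoothed response. The read_rssi stand-in uses a crude cosine gain model only so the example runs; the actual system works from measured tag reads and does not require an RF sensor model.

import numpy as np

def read_rssi(pan_angle, tag_bearing=0.6):
    """Stand-in for a robot-mounted UHF RFID reader query (hypothetical):
    returns the target tag's RSSI in dBm, or None if the tag was not read."""
    gain = np.cos(pan_angle - tag_bearing)
    if gain <= 0.0:
        return None                        # tag outside the antenna's forward lobe
    return -55.0 + 20.0 * gain + np.random.normal(0.0, 2.0)

def estimate_bearing(n_steps=24, miss_rssi=-90.0):
    """Sweep the antenna pan axis and return the angle with the strongest
    smoothed RSSI; the robot would then rotate towards it (RFID servoing)."""
    angles = np.linspace(-np.pi, np.pi, n_steps, endpoint=False)
    rssi = []
    for a in angles:
        r = read_rssi(a)
        rssi.append(r if r is not None else miss_rssi)   # missed read -> weak value
    # Average neighbouring pan angles to damp multipath fades before peak-picking.
    smoothed = np.convolve(rssi, [0.25, 0.5, 0.25], mode="same")
    return angles[int(np.argmax(smoothed))]

print("estimated tag bearing [deg]:", np.degrees(estimate_bearing()))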