
    Monte Carlo Localization in Hand-Drawn Maps

    Robot localization is one of the most important problems in robotics. Most existing approaches assume that a map of the environment is available beforehand and focus on accurate metric localization. In this paper, we address the localization problem when no map of the environment is available beforehand and the robot instead relies on a hand-drawn map from a non-expert user. We address this problem by expressing the robot pose in the pixel coordinates of the hand-drawn map and simultaneously estimating a local deformation of the map. Experiments show that we are able to localize the robot in the correct room with a robustness of up to 80%.
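
    The abstract only outlines the method, so the following is a minimal, hypothetical sketch of Monte Carlo localization over pixel coordinates, with a per-particle scale factor standing in for the local deformation of the hand-drawn map. The function names, noise levels, and the expected_range_px callback are illustrative assumptions, not the authors' implementation.

```python
# Minimal Monte Carlo localization sketch (hypothetical, not the paper's code).
# Each particle carries a pose in pixel coordinates of the hand-drawn map plus
# a per-particle scale factor standing in for the local map deformation.
import numpy as np

rng = np.random.default_rng(0)

N = 500                                    # number of particles
# state per particle: [x_px, y_px, heading_rad, metres_per_pixel]
particles = np.column_stack([
    rng.uniform(0, 400, N),                # x in map pixels
    rng.uniform(0, 300, N),                # y in map pixels
    rng.uniform(-np.pi, np.pi, N),         # heading
    rng.normal(1.0, 0.1, N),               # local deformation (scale)
])
weights = np.full(N, 1.0 / N)

def predict(particles, v, w, dt=0.1):
    """Propagate particles with a noisy unicycle motion model; metric
    velocity v is converted to pixels via each particle's scale."""
    x, y, th, s = particles.T
    v_px = v / np.clip(s, 1e-3, None)      # metres -> pixels
    th = th + (w + rng.normal(0, 0.02, len(th))) * dt
    x = x + (v_px + rng.normal(0, 1.0, len(x))) * np.cos(th) * dt
    y = y + (v_px + rng.normal(0, 1.0, len(y))) * np.sin(th) * dt
    s = s + rng.normal(0, 0.005, len(s))   # deformation drifts slowly
    return np.column_stack([x, y, th, s])

def update(particles, weights, z, expected_range_px, sigma=5.0):
    """Re-weight particles by how well a measured range (converted to pixels
    via each particle's scale) matches the range expected from the sketch."""
    err = z / particles[:, 3] - expected_range_px(particles)
    weights = weights * np.exp(-0.5 * (err / sigma) ** 2)
    weights = weights + 1e-300
    return weights / weights.sum()

def resample(particles, weights):
    """Standard importance resampling back to uniform weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One illustrative filter step with a dummy measurement model.
particles = predict(particles, v=0.5, w=0.1)
weights = update(particles, weights, z=2.0,
                 expected_range_px=lambda p: np.full(len(p), 40.0))
particles, weights = resample(particles, weights)
```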

    Extended LTLvis Motion Planning interface (Extended Technical Report)

    This paper introduces an extended version of the Linear Temporal Logic (LTL) graphical interface. It is a sketch-based interface built on the Android platform, which makes the LTL control interface more straightforward and friendly to non-expert users. By predefining a set of areas of interest, the interface can quickly and efficiently create plans that satisfy extended plan goals in LTL. It also allows users to customize the paths for a plan by sketching a set of reference trajectories. Given the custom paths drawn by the user, the LTL specification, and the environment, the interface generates a plan that balances the customized paths against the LTL specification. We also show experimental results with the implemented interface. Comment: 8 pages, 15 figures; a technical report for the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2016).
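
    As a rough illustration of the kind of specification such an interface might produce, the sketch below builds a co-safe LTL formula from a sequence of predefined areas of interest and scores a candidate plan against a user-sketched reference trajectory. The formula template, region names, and deviation cost are assumptions for illustration, not the interface's actual output format.

```python
# Hypothetical sketch: turn a sequence of user-selected areas of interest
# into a co-safe LTL formula, and score a plan against a sketched reference
# trajectory. Not the paper's implementation.
import numpy as np

def sequential_visit_formula(regions, avoid=()):
    """Nested 'eventually' over regions, e.g. ['a', 'b'] -> F(a & F(b)),
    conjoined with G(!x) for every region to avoid."""
    phi = regions[-1]
    for r in reversed(regions[:-1]):
        phi = f"{r} & F({phi})"
    phi = f"F({phi})"
    for x in avoid:
        phi += f" & G(!{x})"
    return phi

def path_deviation_cost(plan_xy, reference_xy):
    """Average distance from each plan waypoint to its nearest point on the
    sketched reference trajectory (both given as Nx2 arrays)."""
    d = np.linalg.norm(plan_xy[:, None, :] - reference_xy[None, :, :], axis=2)
    return d.min(axis=1).mean()

spec = sequential_visit_formula(["kitchen", "lab"], avoid=["stairs"])
print(spec)  # F(kitchen & F(lab)) & G(!stairs)

plan = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
sketch = np.array([[0.0, 0.2], [1.0, 0.6], [2.0, 0.9]])
print(path_deviation_cost(plan, sketch))
```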

    IndoorSim-to-OutdoorReal: Learning to Navigate Outdoors without any Outdoor Experience

    We present IndoorSim-to-OutdoorReal (I2O), an end-to-end learned visual navigation approach, trained solely in simulated short-range indoor environments, that demonstrates zero-shot sim-to-real transfer to the outdoors for long-range navigation on the Spot robot. Our method uses zero real-world experience (indoor or outdoor) and does not require the simulator to model predominantly outdoor phenomena (sloped ground, sidewalks, etc.). The key to I2O transfer is providing the robot with additional context about the environment (e.g., a satellite map or a rough sketch of a map drawn by a human) to guide its navigation in the real world. The provided context-maps do not need to be accurate or complete -- real-world obstacles (e.g., trees, bushes, pedestrians) are not drawn on the map, and openings are not aligned with where they are in the real world. Crucially, these inaccurate context-maps provide a hint to the robot about a route to take to the goal. We find that our method, leveraging Context-Maps, is able to successfully navigate hundreds of meters in novel environments, avoiding novel obstacles on its path, to a distant goal without a single collision or human intervention. In comparison, policies without the additional context fail completely. Lastly, we test the robustness of the Context-Map policy by adding varying degrees of noise to the map in simulation. We find that the Context-Map policy is surprisingly robust to noise in the provided context-map. In the presence of significantly inaccurate maps (corrupted with 50% noise, or entirely blank maps), the policy gracefully regresses to the behavior of a policy with no context. Videos are available at https://www.joannetruong.com/projects/i2o.htm
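
    The noise-robustness test described above can be pictured with a short, hypothetical sketch: corrupt a fraction of the cells of a binary context-map (or blank it entirely) before handing it to the policy. The map representation, shapes, and corruption scheme below are assumptions, not the authors' evaluation code.

```python
# Illustrative context-map corruption of the kind described in the abstract:
# flip a given fraction of map cells, or blank the map entirely, before the
# (not shown) navigation policy consumes it. Names and shapes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def corrupt_context_map(context_map, noise_fraction):
    """Return a copy of a binary context map with `noise_fraction` of its
    cells flipped; noise_fraction >= 1.0 is treated as an entirely blank map."""
    noisy = context_map.copy()
    if noise_fraction >= 1.0:
        return np.zeros_like(noisy)
    n_cells = noisy.size
    idx = rng.choice(n_cells, size=int(noise_fraction * n_cells), replace=False)
    flat = noisy.reshape(-1)
    flat[idx] = 1 - flat[idx]            # flip the selected cells in place
    return flat.reshape(context_map.shape)

# Hypothetical evaluation loop: a real policy would consume egocentric
# observations plus the (possibly corrupted) context-map at every step.
context_map = (rng.random((128, 128)) > 0.7).astype(np.int8)
for noise in [0.0, 0.25, 0.5, 1.0]:
    noisy_map = corrupt_context_map(context_map, noise)
    changed = (noisy_map != context_map).mean()
    print(f"noise={noise:.2f}: {changed:.0%} of cells differ")
```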

    A pipeline framework for robot maze navigation using computer vision, path planning and communication protocols.

    Maze navigation is a recurring challenge in robotics competitions, where the aim is to design a strategy for one or several entities to traverse the optimal path in a fast and efficient way. Numerous alternatives exist to do so, relying on different sensing systems. Recently, camera-based approaches have become increasingly popular for this scenario due to their reliability and the possibility of migrating the resulting technologies to other application areas, mostly related to human-robot interaction. The aim of this paper is to present a pipeline methodology for enabling a robot to solve a maze autonomously by means of computer vision and path planning. Afterwards, the robot is capable of communicating the learned experience to a second robot, which then solves the same challenge while accounting for its own mechanical characteristics, which may differ from those of the first robot. The pipeline is divided into four steps: (1) camera calibration, (2) maze mapping, (3) path planning, and (4) communication. Experimental validation shows the efficiency of each step towards building this pipeline.
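
    To make the path-planning step concrete, here is a minimal sketch of a generic breadth-first search over an occupancy grid, assuming the maze-mapping step has already produced a grid where 0 marks free cells and 1 marks walls. It is an illustrative stand-in, not the planner used in the paper.

```python
# Minimal sketch of the path-planning step only (step 3 of the pipeline),
# over a 4-connected occupancy grid: 0 = free cell, 1 = wall.
from collections import deque

def bfs_shortest_path(grid, start, goal):
    """Breadth-first search; returns the list of (row, col) cells from
    start to goal, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}               # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk parents back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

maze = [
    [0, 0, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
print(bfs_shortest_path(maze, (0, 0), (3, 3)))
```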