
    On the Calibration of Active Binocular and RGBD Vision Systems for Dual-Arm Robots

    This paper describes a camera and hand-eye calibration methodology for integrating an active binocular robot head within a dual-arm robot. For this purpose, we derive the forward kinematic model of our active robot head and describe our methodology for calibrating and integrating it. This rigid calibration provides a closed-form hand-to-eye solution. We then present an approach for dynamically updating the cameras' extrinsic parameters for optimal 3D reconstruction, which is the foundation for robotic tasks such as grasping and manipulating rigid and deformable objects. Experimental results show that our robot head achieves sub-millimetre accuracy, with errors below 0.3 millimetres, while recovering the 3D structure of a scene. In addition, we report a comparative study between current RGBD cameras and our active stereo head within two dual-arm robotic testbeds that demonstrates the accuracy and portability of our proposed methodology.
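
    As an illustration of the kind of closed-form hand-eye solution the abstract refers to, the sketch below recovers a camera-to-gripper transform with OpenCV's generic cv2.calibrateHandEye solver from synthetic pose pairs. The data-generation loop, pose count and frame names are assumptions made for the example; they do not reproduce the authors' binocular-head setup.

        import numpy as np
        import cv2

        rng = np.random.default_rng(0)

        def random_pose():
            # Random rigid transform (R, t), used only to fabricate example data.
            R, _ = cv2.Rodrigues(rng.normal(size=(3, 1)))
            return R, rng.normal(size=(3, 1))

        # Ground-truth camera-to-gripper transform that the solver should recover.
        R_cg_true, t_cg_true = random_pose()
        R_tb, t_tb = random_pose()  # fixed calibration-target pose in the base frame

        R_gripper2base, t_gripper2base, R_target2cam, t_target2cam = [], [], [], []
        for _ in range(10):
            R_gb, t_gb = random_pose()  # robot pose (gripper -> base), normally from forward kinematics
            # target -> cam = inv(cam -> gripper) * inv(gripper -> base) * (target -> base)
            R_tc = R_cg_true.T @ R_gb.T @ R_tb
            t_tc = R_cg_true.T @ (R_gb.T @ (t_tb - t_gb) - t_cg_true)
            R_gripper2base.append(R_gb); t_gripper2base.append(t_gb)
            R_target2cam.append(R_tc);   t_target2cam.append(t_tc)

        # Closed-form / least-squares hand-eye estimate (Tsai-Lenz method by default).
        R_cg, t_cg = cv2.calibrateHandEye(R_gripper2base, t_gripper2base,
                                          R_target2cam, t_target2cam)
        print("rotation error:", np.linalg.norm(R_cg - R_cg_true))
        print("translation error:", np.linalg.norm(t_cg - t_cg_true))

    In a real setup the gripper poses would come from the robot's forward kinematics and the target poses from detecting a calibration board with the camera (e.g. via solvePnP), rather than from the synthetic loop above.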

    3D environment mapping using the Kinect V2 and path planning based on RRT algorithms

    This paper describes a 3D path planning system that provides a solution trajectory for the automatic control of a robot. The proposed system uses a point cloud of the robot workspace, obtained with a Kinect V2 sensor, to identify the regions of interest and the obstacles in the environment. Our proposal includes a collision-free path planner based on the Rapidly-exploring Random Tree variant RRT*, for safe and optimal navigation of robots in 3D spaces. Results on RGB-D segmentation and recognition, point cloud processing, and comparisons between different RRT* algorithms are presented.
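
    For context on the planner family this abstract builds on, the following is a minimal 2D RRT* sketch. The circular obstacles, workspace bounds and tuning constants are assumptions chosen for illustration; the paper itself plans in 3D over a segmented Kinect V2 point cloud.

        import numpy as np

        rng = np.random.default_rng(1)
        OBSTACLES = [((4.0, 4.0), 1.5)]            # circular obstacles: (centre, radius)
        START, GOAL = np.array([0.5, 0.5]), np.array([9.0, 9.0])
        STEP, RADIUS, N_ITER = 0.8, 2.0, 1500      # extension step, rewiring radius, iterations

        def collision_free(p, q):
            # Check the segment p-q against the circular obstacles by dense sampling.
            for s in np.linspace(0.0, 1.0, 20):
                x = p + s * (q - p)
                for c, r in OBSTACLES:
                    if np.linalg.norm(x - np.asarray(c)) <= r:
                        return False
            return True

        nodes, parent, cost = [START], {0: None}, {0: 0.0}

        for _ in range(N_ITER):
            sample = rng.uniform(0.0, 10.0, size=2)
            near = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - sample))
            direction = sample - nodes[near]
            new = nodes[near] + STEP * direction / (np.linalg.norm(direction) + 1e-9)
            if not collision_free(nodes[near], new):
                continue
            # RRT* step 1: choose the lowest-cost parent among nearby reachable nodes.
            neighbours = [i for i in range(len(nodes))
                          if np.linalg.norm(nodes[i] - new) < RADIUS
                          and collision_free(nodes[i], new)]
            best = min(neighbours or [near],
                       key=lambda i: cost[i] + np.linalg.norm(nodes[i] - new))
            idx = len(nodes)
            nodes.append(new)
            parent[idx] = best
            cost[idx] = cost[best] + np.linalg.norm(nodes[best] - new)
            # RRT* step 2: rewire neighbours through the new node when that is cheaper.
            for i in neighbours:
                c = cost[idx] + np.linalg.norm(nodes[i] - new)
                if c < cost[i]:
                    parent[i], cost[i] = idx, c

        # Trace the tree back from the node closest to the goal.
        goal_idx = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - GOAL))
        path, i = [], goal_idx
        while i is not None:
            path.append(nodes[i]); i = parent[i]
        print("path nodes:", len(path), "cost to goal region:", round(cost[goal_idx], 2))

    The choose-parent and rewire steps are what distinguish RRT* from plain RRT and give it asymptotic optimality; a 3D version over a point cloud replaces the circular-obstacle check with a collision test against occupied voxels or clusters.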

    Comparative Study of Indoor Navigation Systems for Autonomous Flight

    Unmanned Aerial Vehicles (UAVs) have recently attracted broad interest from society and researchers owing to their ability to operate in economic, scientific and emergency scenarios, and they are now employed in a large number of applications, particularly in hostile environments. They can operate autonomously both indoors and outdoors in applications including search and rescue, manufacturing, forest fire tracking and remote sensing. In both settings, precise localization is critical for achieving high-performance flight and for interacting with surrounding objects. However, in indoor areas where Global Navigation Satellite System (GNSS) signals are degraded or denied, controlling a UAV autonomously becomes challenging, especially when obstacles are not known in advance. A large number of techniques based on various technologies have been proposed to overcome these limitations. This paper compares the existing solutions and technologies available for this purpose, outlining their strengths and limitations. It also summarizes the current research status, unresolved issues and opportunities, providing research directions for researchers with similar interests.

    Challenges and solutions for autonomous ground robot scene understanding and navigation in unstructured outdoor environments: A review

    The capabilities of autonomous mobile robotic systems have been steadily improving due to recent advancements in computer science, engineering, and related disciplines such as cognitive science. In controlled environments, robots have achieved relatively high levels of autonomy. In more unstructured environments, however, the development of fully autonomous mobile robots remains challenging due to the complexity of understanding these environments. Many autonomous mobile robots use classical, learning-based or hybrid approaches for navigation. More recent learning-based methods may replace the complete navigation pipeline or selected stages of the classical approach. For effective deployment, autonomous robots must understand their external environments at a sophisticated level according to their intended applications. Therefore, in addition to robot perception, scene analysis and higher-level scene understanding (e.g., traversable versus non-traversable, rough or smooth terrain) are required for autonomous robot navigation in unstructured outdoor environments. This paper provides a comprehensive review and critical analysis of these methods in the context of their applications to the problems of robot perception and scene understanding in unstructured environments and the related problems of localisation, environment mapping and path planning. State-of-the-art sensor fusion methods and multimodal scene understanding approaches are also discussed and evaluated within this context. The paper concludes with an in-depth discussion of the current state of autonomous ground robot navigation in unstructured outdoor environments and the most promising future research directions for overcoming its challenges.