
    Hand-eye calibration, constraints and source synchronisation for robotic-assisted minimally invasive surgery

    In robotic-assisted minimally invasive surgery (RMIS), the robotic system allows surgeons to remotely control articulated instruments to perform surgical interventions, and it introduces the potential to implement computer-assisted interventions (CAI). However, information in the camera frame must be correctly transformed into the robot coordinate frame, since instrument movement is controlled through the robot kinematics. Determining the rigid transformation connecting the two coordinate frames is therefore necessary; this process is called hand-eye calibration. One of the challenges in solving the hand-eye problem in the RMIS setup is data asynchronicity, which occurs when tracking equipment is integrated into a robotic system and creates temporal misalignment. For the calibration itself, noise in the robot and camera motions propagates into the calibrated result, and because of the limited motion range this error cannot be fully suppressed. Finally, the calibration procedure must be adaptive and simple, so that disruption to the surgical workflow is minimal, since any change in the setup may require another calibration. We propose solutions to deal with the asynchronicity, the noise sensitivity, and the limited motion range. We also propose using a surgical instrument as the calibration target to reduce the complexity of the calibration procedure. The proposed algorithms are validated through extensive experiments with synthetic and real data from the da Vinci Research Kit and KUKA robot arms, and their calibration performance compares favourably with existing hand-eye algorithms. Although calibration using a surgical instrument as the target still requires further development, the results indicate that the proposed methods improve calibration performance and contribute to finding an optimal solution to the hand-eye problem in robotic surgery.
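
    For context, the hand-eye problem referred to above is conventionally written as AX = XB, where A and B are paired relative motions of the robot hand and the camera, and X is the unknown hand-eye transformation. The sketch below is a minimal NumPy illustration of one classical least-squares solution (in the style of Park and Martin); it is not the thesis's own algorithm, and the input motion pairs are assumed to be given as 4x4 homogeneous matrices.

        import numpy as np
        from scipy.spatial.transform import Rotation

        def solve_ax_xb(As, Bs):
            # Solve A_i X = X B_i for the 4x4 hand-eye transform X.
            # Rotation part: alpha_i = R_X beta_i, with alpha/beta the
            # rotation log-vectors of A_i and B_i, solved as an
            # orthogonal Procrustes problem via SVD.
            H = np.zeros((3, 3))
            for A, B in zip(As, Bs):
                alpha = Rotation.from_matrix(A[:3, :3]).as_rotvec()
                beta = Rotation.from_matrix(B[:3, :3]).as_rotvec()
                H += np.outer(beta, alpha)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R_X = Vt.T @ D @ U.T
            # Translation part: (R_Ai - I) t_X = R_X t_Bi - t_Ai,
            # stacked over all motion pairs and solved by least squares.
            C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
            d = np.concatenate([R_X @ B[:3, 3] - A[:3, 3]
                                for A, B in zip(As, Bs)])
            t_X, *_ = np.linalg.lstsq(C, d, rcond=None)
            X = np.eye(4)
            X[:3, :3], X[:3, 3] = R_X, t_X
            return X

    Noise in the input motions A and B passes directly through both least-squares steps, which is exactly the sensitivity, compounded by a limited motion range, that the thesis sets out to address.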

    2D position system for a mobile robot in unstructured environments

    Nowadays, several sensors and mechanisms are available to estimate a mobile robot's trajectory and location with respect to its surroundings. Absolute positioning mechanisms are usually the most accurate, but they are also the most expensive and require pre-installed equipment in the environment. Therefore, a system capable of measuring its own motion and location within the environment (relative positioning) has been a research goal since the beginning of autonomous vehicles. With increasing computational performance, computer vision has become fast enough to incorporate into a mobile robot. In feature-based visual odometry approaches, accurate motion estimation requires the absence of outliers among the feature associations. Outlier rejection is a delicate process, as there is always a trade-off between the speed and the reliability of the system. This dissertation proposes an indoor 2D positioning system using visual odometry. The mobile robot has a camera pointed at the ceiling for image analysis; as requirements, the ceiling and the floor (where the robot moves) must be planar. In the literature, RANSAC is a widely used method for outlier rejection, but it can be slow in critical circumstances. A new algorithm is therefore proposed that accelerates RANSAC while maintaining its reliability. The algorithm, called FMBF, consists of comparing image texture patterns between pictures and preserving the most similar ones. There are several types of comparison, with different computational costs and reliabilities; FMBF manages these comparisons so as to optimize the trade-off between speed and reliability.
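
    To make that trade-off concrete, here is a minimal sketch of the vanilla RANSAC loop that FMBF accelerates, applied to estimating a 2D rigid motion (rotation plus translation) from matched ceiling-image features. The sample size, iteration count, and inlier threshold are illustrative assumptions; the dissertation's FMBF texture comparisons are not reproduced here.

        import numpy as np

        def estimate_rigid_2d(src, dst):
            # Closed-form 2D rotation + translation between matched
            # point sets (Procrustes on the centred coordinates).
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:        # enforce a proper rotation
                R = Vt.T @ np.diag([1.0, -1.0]) @ U.T
            return R, cd - R @ cs

        def ransac_motion(src, dst, iters=200, thresh=2.0, seed=0):
            # Vanilla RANSAC: fit on minimal 2-point samples, keep the
            # model with the most inliers, refit on the full inlier set.
            rng = np.random.default_rng(seed)
            best = np.zeros(len(src), dtype=bool)
            for _ in range(iters):
                idx = rng.choice(len(src), size=2, replace=False)
                R, t = estimate_rigid_2d(src[idx], dst[idx])
                residuals = np.linalg.norm(src @ R.T + t - dst, axis=1)
                inliers = residuals < thresh
                if inliers.sum() > best.sum():
                    best = inliers
            return estimate_rigid_2d(src[best], dst[best])

    The cost of the loop is dominated by how many random hypotheses must be drawn before a clean sample is found, which is what pre-filtering associations by texture similarity aims to reduce.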

    Aspects of an open architecture robot controller and its integration with a stereo vision sensor.

    The work presented in this thesis attempts to improve the performance of industrial robot systems in a flexible manufacturing environment by addressing a number of issues related to external sensory feedback and sensor integration, robot kinematic positioning accuracy, and robot dynamic control performance. To provide a powerful control algorithm environment and support for external sensor integration, a transputer-based open architecture robot controller is developed. It features high computational power, user accessibility at various robot control levels, and external sensor integration capability. Additionally, an on-line trajectory adaptation scheme is devised and implemented in the open architecture robot controller, enabling real-time alteration of the robot's motion trajectory in response to external sensory feedback. An in-depth discussion is presented on integrating a stereo vision sensor with the robot controller to perform sensor-guided robot operations. Key issues for such a vision-based robot system are precise synchronisation between the vision system and the robot controller, and correct target position prediction to counteract the inherent time delay in image processing. These were successfully addressed in a demonstrator system based on a Puma robot. Efforts have also been made to improve the Puma robot's kinematic and dynamic performance. A simple, effective, on-line algorithm is developed for solving the inverse kinematics problem of a calibrated industrial robot, improving robot positioning accuracy. On the dynamic control side, a robust adaptive robot tracking control algorithm is derived that outperforms a conventional PID controller while exhibiting relatively modest computational complexity. Experiments have been carried out to validate the open architecture robot controller and demonstrate the performance of the inverse kinematics algorithm, the adaptive servo control algorithm, and the on-line trajectory generation. By integrating the open architecture robot controller with the stereo vision sensor system, robot visual guidance has been achieved, with experimental results showing that the integrated system is capable of detecting, tracking, and intercepting random objects moving along 3D trajectories at velocities of up to 40 mm/s.
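
    The abstract does not detail the thesis's on-line inverse kinematics algorithm; for orientation only, the sketch below shows the standard damped least-squares update that Jacobian-based on-line IK schemes commonly build on. The forward_kinematics and jacobian callables are hypothetical placeholders for the calibrated robot model.

        import numpy as np

        def dls_ik_step(q, x_target, forward_kinematics, jacobian,
                        damping=0.05):
            # One damped least-squares update:
            #   q <- q + J^T (J J^T + lambda^2 I)^{-1} (x_target - f(q))
            # The damping term keeps the step bounded near kinematic
            # singularities, at the cost of a small tracking bias.
            e = x_target - forward_kinematics(q)   # task-space error
            J = jacobian(q)                        # m x n Jacobian at q
            JJt = J @ J.T + damping**2 * np.eye(J.shape[0])
            return q + J.T @ np.linalg.solve(JJt, e)

    Iterating this step until the task-space error falls below a tolerance yields a simple on-line IK solver; the thesis's own algorithm may differ.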

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
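
    The de facto standard formulation referred to above is maximum a posteriori estimation over a factor graph, which under Gaussian noise assumptions reduces to a nonlinear least-squares problem:

        \mathcal{X}^{\star}
          = \arg\max_{\mathcal{X}} \prod_{k=1}^{m} p(z_{k} \mid \mathcal{X}_{k})
          = \arg\min_{\mathcal{X}} \sum_{k=1}^{m}
            \left\| h_{k}(\mathcal{X}_{k}) - z_{k} \right\|_{\Omega_{k}}^{2}

    where $\mathcal{X}$ collects the robot poses and landmark positions, $z_{k}$ are the measurements, $h_{k}$ the corresponding measurement models, and $\Omega_{k}$ the measurement information matrices.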

    Vision Based Control of Model Helicopters


    Nonlinear Deterministic Observer for Inertial Navigation using Ultra-wideband and IMU Sensor Fusion

    Navigation in Global Positioning System (GPS)-denied environments requires robust estimators, reliant on the fusion of inertial sensors, that can estimate a rigid body's orientation, position, and linear velocity. Ultra-wideband (UWB) and Inertial Measurement Unit (IMU) sensors are low-cost measurement technologies that can be utilized for successful inertial navigation. This paper presents a nonlinear deterministic navigation observer, in continuous form, that directly employs UWB and IMU measurements. The estimator is developed on the extended Special Euclidean Group $\mathbb{SE}_{2}(3)$ and ensures exponential convergence of the closed-loop error signals starting from almost any initial condition. The discrete version of the proposed observer is tested using a publicly available real-world dataset of a drone flight. Keywords: Ultra-wideband, Inertial measurement unit, Sensor fusion, Positioning system, GPS-denied navigation. Comment: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
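
    For reference, an element of the extended Special Euclidean Group $\mathbb{SE}_{2}(3)$ is commonly represented as a single 5x5 matrix packing the three estimated quantities together:

        X = \begin{bmatrix} R & v & p \\ 0_{1\times 3} & 1 & 0 \\ 0_{1\times 3} & 0 & 1 \end{bmatrix}
            \in \mathbb{SE}_{2}(3), \qquad
        R \in \mathbb{SO}(3), \quad v, p \in \mathbb{R}^{3}

    with $R$ the orientation, $v$ the linear velocity, and $p$ the position, so a single group-valued observer can estimate all three states jointly rather than through separate filters.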

    Portable and Scalable In-vehicle Laboratory Instrumentation for the Design of i-ADAS

    According to the WHO (World Health Organization), worldwide deaths from injuries are projected to rise from 5.1 million in 1990 to 8.4 million in 2020, with traffic-related incidents the major cause of this increase. Intelligent Advanced Driving Assistance Systems (i-ADAS) provide a number of solutions to these safety challenges. We developed a scalable in-vehicle mobile i-ADAS research platform for traffic context analysis and behavioral prediction, designed for understanding fundamental issues in intelligent vehicles. We outline our approach and describe the in-vehicle instrumentation.