
    Extrinsic Parameter Calibration for Line Scanning Cameras on Ground Vehicles with Navigation Systems Using a Calibration Pattern

    Line scanning cameras, which capture only a single line of pixels, are increasingly used on ground-based mobile and robotic platforms. In applications where it is advantageous to directly georeference the camera data to world coordinates, an accurate estimate of the camera's 6D pose is required. This paper focuses on the common case where a mobile platform is equipped with a rigidly mounted line scanning camera, whose pose is unknown, and a navigation system providing vehicle body pose estimates. We propose a novel method that estimates the camera's pose relative to the navigation system. The approach involves imaging and manually labelling a calibration pattern with distinctly identifiable points, triangulating these points from camera and navigation system data, and reprojecting them in order to compute a likelihood, which is maximised to estimate the 6D camera pose. Additionally, a Markov Chain Monte Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset. Tested on two different platforms, the method was able to estimate the pose to within 0.06 m / 1.05° and 0.18 m / 2.39°. We also propose several approaches to displaying and interpreting the 6D results in a human-readable way. Comment: Published in MDPI Sensors, 30 October 201
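    The reprojection-based likelihood described in this abstract can be illustrated with a small numeric sketch. The model below assumes a 1D pinhole line-sensor with focal length `f` in pixels, a ZYX Euler-angle rotation parameterisation, and Gaussian pixel noise; these are illustrative choices on my part, not the paper's exact formulation.

```python
import numpy as np

def euler_to_R(rx, ry, rz):
    """ZYX Euler angles to a rotation matrix (illustrative parameterisation)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def log_likelihood(pose, pts_world, obs_px, f=1000.0, sigma=0.5):
    """Gaussian log-likelihood of observed line-sensor pixels given a 6D pose.

    pose: [tx, ty, tz, rx, ry, rz] of the camera relative to the world frame.
    """
    t, R = pose[:3], euler_to_R(*pose[3:])
    pc = (pts_world - t) @ R          # transform points into the camera frame
    u = f * pc[:, 0] / pc[:, 2]       # 1D pinhole projection along the scan line
    r = u - obs_px                    # reprojection residuals in pixels
    return -0.5 * np.sum((r / sigma) ** 2) - r.size * np.log(sigma * np.sqrt(2 * np.pi))

# Synthetic calibration points and a "true" camera pose.
pts = np.array([[x, 0.0, 5.0] for x in np.linspace(-1.0, 1.0, 9)])
true_pose = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.0])
obs = 1000.0 * (pts[:, 0] - 0.1) / pts[:, 2]  # noiseless observations

better = log_likelihood(true_pose, pts, obs)
worse = log_likelihood(true_pose + np.array([0.2, 0, 0, 0, 0, 0]), pts, obs)
assert better > worse  # the true pose scores a higher likelihood
```

    Maximising this likelihood over the six pose parameters (and sampling it with MCMC for the uncertainty estimate) follows the pattern the abstract describes.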

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.

    Automated pick-up of suturing needles for robotic surgical assistance

    Robot-assisted laparoscopic prostatectomy (RALP) is a treatment for prostate cancer that involves complete or nerve-sparing removal of the prostate tissue that contains cancer. After removal, the bladder neck is subsequently sutured directly to the urethra. This procedure, called urethrovesical anastomosis, is one of the most dexterity-demanding tasks during RALP. Two suturing instruments and a pair of needles are used in combination to perform a running stitch during urethrovesical anastomosis. While robotic instruments provide enhanced dexterity to perform the anastomosis, it is still highly challenging and difficult to learn. In this paper, we present a vision-guided needle grasping method for automatically grasping the needle that has been inserted into the patient prior to anastomosis. We aim to automatically grasp the suturing needle in a position that avoids hand-offs and immediately enables the start of suturing. The full grasping process can be broken down into: a needle detection algorithm; an approach phase, where the surgical tool moves closer to the needle based on visual feedback; and a grasping phase, through path planning based on observed surgical practice. Our experimental results show examples of successful autonomous grasping that have the potential to simplify and decrease the operational time in RALP by assisting with a small component of urethrovesical anastomosis.
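    The approach phase, where the tool closes in on the detected needle under visual feedback, resembles a proportional visual-servoing loop. The sketch below is a hypothetical, heavily simplified stand-in (3D positions only, a fixed gain, and perfectly observed needle position), not the paper's actual controller.

```python
import numpy as np

def approach_needle(tool_pos, needle_pos, gain=0.3, tol=1e-3, max_steps=100):
    """Move the tool a fraction of the remaining error each step until within tol.

    A proportional servo loop: each iteration uses the detected needle
    position (assumed perfectly observed here) as the visual feedback signal.
    """
    tool = np.asarray(tool_pos, dtype=float)
    target = np.asarray(needle_pos, dtype=float)
    for step in range(max_steps):
        error = target - tool
        if np.linalg.norm(error) < tol:
            return tool, step          # converged: ready for the grasping phase
        tool = tool + gain * error     # proportional step toward the needle
    return tool, max_steps

final, steps = approach_needle([0.05, 0.02, 0.10], [0.00, 0.00, 0.12])
assert np.linalg.norm(final - np.array([0.0, 0.0, 0.12])) < 1e-3
```

    The residual error shrinks by a factor of (1 - gain) per step, so the loop converges geometrically; a real system would additionally enforce joint limits and collision constraints during the final path-planned grasp.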

    Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation

    A survey of systems capable of model deformation measurements was conducted. The survey included stereo cameras, scanners, and digitizers; Moiré, holographic, and heterodyne interferometry techniques were also examined. Stereo cameras with passive or active targets are currently deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanners and digitizers can meet the model deformation requirements. Commercial stereo cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.

    3D Perception-based Collision-Free Robotic Leaf Probing for Automated Indoor Plant Phenotyping

    Various instrumentation devices for plant physiology study, such as spectrometers, chlorophyll fluorimeters, and Raman spectroscopy sensors, require accurate placement of their sensor probes toward the leaf surface to meet specific requirements of probe-to-target distance and orientation. In this work, a Kinect V2 sensor, a high-precision 2D laser profilometer, and a six-axis robotic manipulator were used to automate the leaf probing task. The relatively wide field of view and high resolution of the Kinect V2 allowed rapid capture of the full 3D environment in front of the robot. The location and size of each plant were estimated by k-means clustering, where "k" was the user-defined number of plants. A real-time collision-free motion planning framework based on Probabilistic Roadmaps was adapted to maneuver the robotic manipulator without colliding with the plants. Each plant was scanned from the top with the short-range profilometer to obtain high-precision 3D point cloud data. Potential leaf clusters were extracted by a 3D region growing segmentation scheme. Each leaf segment was further partitioned into small patches by a Voxel Cloud Connectivity Segmentation method. Only the patches with low root mean square errors of plane fitting were used to compute leaf probing poses of the robot. Experiments conducted inside a growth chamber mock-up showed that the developed robotic leaf probing system achieved an average motion planning time of 0.4 seconds with an average end-effector travel distance of 1.0 meter. To examine the probing accuracy, a square surface was scanned at different angles, and its centroid was probed perpendicularly. The average absolute probing errors of distance and angle were 1.5 mm and 0.84 degrees, respectively. These results demonstrate the utility of the proposed robotic leaf probing system for automated non-contact deployment of spectroscopic sensor probes for indoor plant phenotyping under controlled environmental conditions.
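    The patch selection step described here (keeping only patches with a low plane-fitting RMSE, then probing perpendicular to the fitted plane) can be sketched with a least-squares plane fit. The SVD-based fit below is a common generic technique and an assumption on my part, not necessarily the paper's exact implementation.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit via SVD: returns centroid, unit normal, and RMSE."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                    # direction of least variance
    rmse = np.sqrt(np.mean(((points - centroid) @ normal) ** 2))
    return centroid, normal, rmse

# Synthetic leaf patch: a slightly tilted plane with small measurement noise.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 0.02, 10), np.linspace(0, 0.02, 10))
z = 0.1 * x + 0.05 * y + rng.normal(0.0, 1e-5, x.shape)
patch = np.column_stack([x.ravel(), y.ravel(), z.ravel()])

centroid, normal, rmse = fit_plane(patch)
accept = rmse < 1e-4                    # keep only near-planar patches
true_normal = np.array([-0.1, -0.05, 1.0])
true_normal /= np.linalg.norm(true_normal)
assert accept and abs(normal @ true_normal) > 0.999
```

    A probing pose for an accepted patch would then place the sensor probe along the fitted normal through the centroid, at the required probe-to-target distance.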