
    Multi-view monocular pose estimation for spacecraft relative navigation

    This paper presents a method of estimating the pose of a non-cooperative target for spacecraft rendezvous applications, employing exclusively a monocular camera and a three-dimensional model of the target. This model is used to build an offline database of pre-rendered keyframes with known poses. An online stage solves the model-to-image registration problem by matching two-dimensional point and edge features from the camera to the database. We apply our method to retrieve the motion of the now-inoperative satellite ENVISAT. The combination of both feature types is shown to produce a robust pose solution even for large displacements relative to the keyframes, and does not rely on real-time rendering, making it attractive for autonomous systems applications.
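    The keyframe-retrieval step described above can be sketched as follows. This is a minimal illustration under assumed data shapes (descriptor arrays and a list of pose/descriptor pairs), not the paper's implementation; `nearest_keyframe` and its inputs are hypothetical names.

```python
import numpy as np

def nearest_keyframe(query_desc, keyframes):
    """Pick the database keyframe whose descriptors best match the query.

    query_desc: (N, D) array of 2D-feature descriptors from the camera image.
    keyframes:  list of (pose, desc) pairs, desc shaped (M, D), standing in
    for the offline database of pre-rendered keyframes with known poses.
    Returns the pose of the best keyframe and the number of mutual
    nearest-neighbour matches supporting it.
    """
    best_pose, best_matches = None, -1
    for pose, desc in keyframes:
        # pairwise distances between query and keyframe descriptors
        d = np.linalg.norm(query_desc[:, None, :] - desc[None, :, :], axis=2)
        fwd = d.argmin(axis=1)  # query -> keyframe nearest neighbour
        bwd = d.argmin(axis=0)  # keyframe -> query nearest neighbour
        # only mutual nearest neighbours count as matches
        matches = int(np.sum(bwd[fwd] == np.arange(len(query_desc))))
        if matches > best_matches:
            best_pose, best_matches = pose, matches
    return best_pose, best_matches
```

    A full pipeline would then refine the retrieved keyframe pose by solving the 2D-3D registration (e.g. PnP) with the matched point and edge features.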

    Structured Light-Based 3D Reconstruction System for Plants

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection, and less than a 13-mm error for plant size, leaf size and internode distance.
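    The point cloud registration mentioned above must align partial clouds captured from different viewing angles. A common building block for this is least-squares rigid alignment of corresponding points (the Kabsch algorithm); the sketch below illustrates that building block under the assumption of known correspondences, and is not the paper's full registration pipeline.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid registration (Kabsch algorithm).

    Finds the rotation R and translation t minimising ||R @ src_i + t - dst_i||
    over corresponding point sets src, dst of shape (N, 3), e.g. two
    per-view stereo point clouds being merged into one plant model.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # sign correction guarantees a proper rotation (det(R) = +1)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

    In practice correspondences are unknown, so such a solver is typically iterated inside ICP or seeded from feature matches.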

    Online 4D ultrasound guidance for real-time motion compensation by MLC tracking

    PURPOSE: With the trend in radiotherapy moving toward dose escalation and hypofractionation, the need for highly accurate targeting increases. While MLC tracking is already being used successfully for motion compensation of moving targets in the prostate, current real-time target localization methods rely on repeated x-ray imaging and implanted fiducial markers or electromagnetic transponders rather than direct target visualization. In contrast, ultrasound imaging can yield volumetric data in real time (3D + time = 4D) without ionizing radiation. The authors report the first results of combining these promising techniques, online 4D ultrasound guidance and MLC tracking, in a phantom. METHODS: A software framework for real-time target localization was installed directly on a 4D ultrasound station and used to detect a 2 mm spherical lead marker inside a water tank. The lead marker was rigidly attached to a motion stage programmed to reproduce nine characteristic tumor trajectories chosen from large databases (five prostate, four lung). The 3D marker position detected by ultrasound was transferred to a computer program for MLC tracking at a rate of 21.3 Hz and used for real-time MLC aperture adaptation on a conventional linear accelerator. The tracking system latency was measured using sinusoidal trajectories and compensated for by applying a kernel density prediction algorithm for the lung traces. To measure geometric accuracy, static anterior and lateral conformal fields as well as a 358° arc with a 10 cm circular aperture were delivered for each trajectory. The two-dimensional (2D) geometric tracking error was measured as the difference between marker position and MLC aperture center in continuously acquired portal images. For dosimetric evaluation, VMAT treatment plans with high and low modulation were delivered to a biplanar diode array dosimeter using the same trajectories.
Dose measurements with and without MLC tracking were compared to a static reference dose using 3%/3 mm and 2%/2 mm γ-tests. RESULTS: The overall tracking system latency was 172 ms. The mean 2D root-mean-square tracking error was 1.03 mm (0.80 mm prostate, 1.31 mm lung). MLC tracking improved the dose delivery in all cases with an overall reduction in the γ-failure rate of 91.2% (3%/3 mm) and 89.9% (2%/2 mm) compared to no motion compensation. Low modulation VMAT plans had no (3%/3 mm) or minimal (2%/2 mm) residual γ-failures, while tracking reduced the γ-failure rate from 17.4% to 2.8% (3%/3 mm) and from 33.9% to 6.5% (2%/2 mm) for plans with high modulation. CONCLUSIONS: Real-time 4D ultrasound tracking was successfully integrated with online MLC tracking for the first time. The developed framework showed an accuracy and latency comparable with other MLC tracking methods while holding the potential to measure and adapt to target motion, including rotation and deformation, noninvasively.
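    The latency compensation described above uses a kernel density prediction algorithm; as a simpler stand-in, the idea of steering the MLC to where the target will be one system latency (172 ms) ahead can be illustrated with linear extrapolation. All names below are hypothetical.

```python
import numpy as np

def predict_ahead(times, positions, latency):
    """Extrapolate the target position `latency` seconds into the future.

    times, positions: arrays of timestamped 1D target samples, e.g. one
    axis of the ultrasound-detected marker position arriving at ~21.3 Hz.
    This linear extrapolation from the last two samples is a simplified
    stand-in for the kernel density prediction used in the study.
    """
    v = (positions[-1] - positions[-2]) / (times[-1] - times[-2])  # velocity
    return positions[-1] + v * latency
```

    Any such predictor trades off noise amplification against lag: extrapolating further ahead compensates more latency but magnifies measurement noise.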

    Accuracy assessment of Tri-plane B-mode ultrasound for non-invasive 3D kinematic analysis of knee joints

    BACKGROUND Currently, the clinical standard for measuring the motion of the bones in knee joints with sufficient precision involves implanting tantalum beads into the bones. These beads appear as high-intensity features in radiographs and can be used for precise kinematic measurements. This procedure imposes a strong coupling between accuracy and invasiveness. In this paper, a tri-plane B-mode ultrasound (US) based non-invasive approach is proposed for use in kinematic analysis of knee joints in 3D space. METHODS The 3D analysis is performed using image processing procedures on the 2D US slices. The novelty of the proposed procedure and its applicability to the unconstrained 3D kinematic analysis of knee joints is outlined. An error analysis for establishing the method's feasibility is included for different artificial compositions of a knee joint phantom. In-vivo and in-vitro scans are presented to demonstrate that US imaging reveals sufficient anatomical detail, further supporting the experimental setup based on knee bone phantoms. RESULTS The error between the displacements measured by registration of the US image slices and the true displacements of the respective slices, measured using the precision mechanical stages of the experimental apparatus, is evaluated for translation and rotation in two simulated environments. The means and standard deviations of the errors are shown in tabular form. The method provides an average measurement precision of better than 0.1 mm in translation and 0.1 degrees in rotation. CONCLUSION In this paper, we have presented a novel non-invasive approach to measuring the motion of the bones in a knee using tri-plane B-mode ultrasound and image registration. In our study, the image registration method determines the position of bony landmarks relative to a B-mode ultrasound sensor array with sub-pixel accuracy.
The advantages of our proposed system over previous techniques are that it is non-invasive, does not require ionizing radiation, and can be used conveniently if miniaturized. This work was supported by the School of Engineering & IT, UNSW Canberra, under a Research Publication Fellowship.
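    Registration of 2D image slices of the kind described above is often bootstrapped by phase correlation, which recovers the translation between two images from their cross-power spectrum; a sub-pixel method like the paper's would then refine this integer-pixel estimate. The sketch below illustrates the general technique, not the authors' code.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer-pixel translation taking image a to image b.

    For b = a shifted by (dy, dx), the normalized cross-power spectrum
    is a pure phase ramp whose inverse FFT is a delta at (dy, dx).
    """
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12                      # keep only the phase
    corr = np.abs(np.fft.ifft2(F))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image into negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

    Because the estimate comes from a single FFT pair, it is fast enough to serve as the coarse stage of a slice-to-slice registration loop.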

    Stereo vision-based tracking of soft tissue motion with application to online ablation control in laser microsurgery

    Recent research has revealed that image-based methods can enhance accuracy and safety in laser microsurgery. In this study, non-rigid tracking using surgical stereo imaging and its application to laser ablation is discussed. A recently developed motion estimation framework based on piecewise affine deformation modeling is extended by a mesh refinement step and by the use of texture information. This compensates for tracking inaccuracies potentially caused by inconsistent feature matches or drift. To facilitate online application of the method, the computational load is reduced by concurrent processing and affine-invariant fusion of the tracking and refinement results. The residual latency-dependent tracking error is further minimized by Kalman filter-based upsampling, considering a motion model in disparity space. Accuracy is assessed in laparoscopic, beating-heart, and laryngeal sequences with challenging conditions, such as partial occlusions and significant deformation. Performance is compared with that of state-of-the-art methods. In addition, the online capability of the method is evaluated by tracking two motion patterns performed by a high-precision parallel-kinematic platform. Related experiments are discussed for a tissue substitute and porcine soft tissue in order to compare performance in an ideal scenario and in a setup mimicking clinical conditions. In the soft tissue trial, the tracking error is significantly reduced from 0.72 mm to below 0.05 mm with mesh refinement. To demonstrate online laser path adaptation during ablation, the non-rigid tracking framework is integrated into a setup consisting of a surgical Er:YAG laser, a three-axis scanning unit, and a low-noise stereo camera. Despite error sources such as laser-to-camera registration, camera calibration, image-based tracking, and scanning latency, the ablation root-mean-square error is kept below 0.21 mm when the sample moves according to the aforementioned patterns.
Final experiments on motion-compensated laser ablation of structurally deforming tissue highlight the potential of the method for vision-guided laser surgery. EU/FP/-ICT/28866
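    The Kalman filter-based upsampling mentioned above can be illustrated with a one-dimensional constant-velocity filter: because the filter carries a velocity state, the controller can query a predicted position between camera frames. This is a hedged sketch of the general technique, not the paper's implementation; all names and tuning values are assumptions.

```python
import numpy as np

def kalman_cv_track(times, measurements, q=1e-2, r=1e-2):
    """1D constant-velocity Kalman filter over timestamped positions.

    times, measurements: tracked positions of one tissue point at camera
    frame times.  q is the process-noise intensity, r the measurement
    variance.  Returns the filtered positions at the measurement times;
    predicting with F at intermediate times would give the upsampled track.
    """
    x = np.array([measurements[0], 0.0])        # state: [position, velocity]
    P = np.eye(2)                               # state covariance
    H = np.array([[1.0, 0.0]])                  # we observe position only
    out = [x[0]]
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
        Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                          [dt**2 / 2, dt]])     # discretized process noise
        x = F @ x                               # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                     # innovation covariance
        K = P @ H.T / S                         # Kalman gain
        x = x + (K * (measurements[k] - H @ x)).ravel()  # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

    For a real stereo setup, the same filter would be run on disparity-space coordinates, matching the motion model mentioned in the abstract.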