
    System for detecting and tracking moving objects

    This paper considers the construction of a system for detecting and tracking moving objects. It is proposed to pre-process each frame with digital image stabilization algorithms based on optical flow. Objects are then detected from the longest optical flow vectors remaining after stabilization, and tracking is implemented with several classical algorithms supported by a prefetch mechanism built on classification neural networks.
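    As a rough sketch of the pipeline described above (the specific stabilization, detection, and tracking algorithms are not named in the abstract; the OpenCV Farneback flow, the sparse-feature affine stabilization, the percentile threshold, and the helper name detect_moving_objects are illustrative assumptions):

```python
import cv2
import numpy as np

def detect_moving_objects(prev_gray, curr_gray, mag_percentile=99.0):
    """Stabilize curr_gray against prev_gray, then flag pixels whose
    residual optical flow vectors are among the longest."""
    # 1. Estimate global (camera) motion from sparse feature flow.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_prev = pts[status.ravel() == 1]
    good_next = nxt[status.ravel() == 1]
    affine, _ = cv2.estimateAffinePartial2D(good_prev, good_next)

    # 2. Digital image stabilization: warp the current frame back so that
    #    the estimated global motion is cancelled.
    h, w = prev_gray.shape
    stabilized = cv2.warpAffine(curr_gray, cv2.invertAffineTransform(affine), (w, h))

    # 3. Dense residual flow between the previous and stabilized frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, stabilized, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)

    # 4. The longest residual flow vectors are moving-object candidates.
    return (mag > np.percentile(mag, mag_percentile)).astype(np.uint8) * 255
```

    The connected components of the returned mask would then be handed to the tracking stage (classical trackers with a neural-network prefetch), which is omitted here.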

    UV Exposed Optical Fibers with Frequency Domain Reflectometry for Device Tracking in Intra-Arterial Procedures

    Shape tracking of medical devices using strain-sensing properties of optical fibers has seen increased attention in recent years. In this paper, we propose a novel guidance system for intra-arterial procedures using a distributed strain-sensing device based on optical frequency domain reflectometry (OFDR) to track the shape of a catheter. Tracking enhancement is provided by exposing a fiber triplet to a focused ultraviolet beam, producing high scattering properties. In contrast to typical quasi-distributed strain sensors, we propose a truly distributed strain-sensing approach, which allows the fiber triplet to be reconstructed in real time. A 3D roadmap of the hepatic anatomy integrated with a 4D MR imaging sequence allows the catheter to be navigated within the pre-interventional anatomy and the blood flow velocities in the arterial tree to be mapped. We employed Riemannian anisotropic heat kernels to map the sensed data to the pre-interventional model. Experiments in synthetic phantoms and an in vivo model are presented. Results show that the tracking accuracy is suitable for interventional tracking applications, with a mean 3D shape reconstruction error of 1.6 +/- 0.3 mm. This study demonstrates the promising potential of MR-compatible UV-exposed OFDR optical fibers for non-ionizing device guidance in intra-arterial procedures.
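    The abstract does not detail the reconstruction pipeline; the sketch below shows the standard way distributed strain from a fiber triplet is converted to curvature and bending direction and then integrated into a 3D shape. The 120-degree core geometry, the core offset radius, the strain-projection formula, and the simple frame integration are assumptions for illustration, not the authors' method:

```python
import numpy as np

def reconstruct_shape(strains, ds=1e-3, r=50e-6):
    """Reconstruct a 3D fiber shape from distributed strain of a triplet.

    strains : (N, 3) array of strain along the fiber for three cores assumed
              to sit at 0, 120 and 240 degrees, at radial offset r [m].
    ds      : spacing between strain samples along the fiber [m].
    """
    phi = np.deg2rad([0.0, 120.0, 240.0])
    centred = strains - strains.mean(axis=1, keepdims=True)  # remove common axial strain

    # Project onto cos/sin to recover the curvature vector in the cross-section.
    kx = (2.0 / 3.0) * (centred @ np.cos(phi)) / r   # kappa * cos(theta)
    ky = (2.0 / 3.0) * (centred @ np.sin(phi)) / r   # kappa * sin(theta)
    kappa = np.hypot(kx, ky)                         # curvature magnitude
    theta = np.arctan2(ky, kx)                       # bending direction

    # Integrate: rotate a local frame by the curvature at each step.
    pos = np.zeros((len(kappa) + 1, 3))
    frame = np.eye(3)  # columns: tangent and two cross-section directions
    for i in range(len(kappa)):
        # Rotation axis = tangent x bending direction, expressed in the frame.
        axis = np.cos(theta[i]) * frame[:, 2] - np.sin(theta[i]) * frame[:, 1]
        frame = _rodrigues(axis, kappa[i] * ds) @ frame
        pos[i + 1] = pos[i] + frame[:, 0] * ds
    return pos

def _rodrigues(axis, angle):
    """Rotation matrix for a rotation of `angle` about unit vector `axis`."""
    ax = axis / (np.linalg.norm(axis) + 1e-12)
    K = np.array([[0, -ax[2], ax[1]],
                  [ax[2], 0, -ax[0]],
                  [-ax[1], ax[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
```

    Registering the reconstructed curve to the pre-interventional 3D roadmap (the Riemannian anisotropic heat-kernel mapping) is a separate step not shown here.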

    Hybrid tracking approach using optical flow and pose estimation

    This paper proposes a hybrid approach to estimating the 3D pose of an object. The integration of texture information based on image intensities into a more classical non-linear edge-based pose estimation scheme has been shown to greatly increase the reliability of the tracker. In this work we propose to exploit the data provided by an optical flow algorithm for a similar purpose. The advantage of using optical flow is that it does not require any a priori knowledge of the object's appearance. The registration of 2D and 3D cues for monocular tracking is performed by a non-linear minimization. The results obtained show that using optical flow enables robust 3D hybrid tracking even without any texture model.
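    A rough illustration of how optical flow cues can feed a non-linear pose estimate: flow tracks the 2D projections of known 3D model points, and the 6-DoF pose is refined by Levenberg-Marquardt minimization of the reprojection error. The edge-based term of the hybrid tracker is omitted, and the function name and camera model are assumptions, not the authors' formulation:

```python
import cv2
import numpy as np

def refine_pose_with_flow(prev_gray, curr_gray, model_pts_3d,
                          rvec, tvec, K, dist=None):
    """Track projections of 3D model points with sparse optical flow, then
    refine the pose by non-linear minimization of the reprojection error."""
    dist = np.zeros(5) if dist is None else dist

    # Project the model points with the previous pose estimate.
    proj, _ = cv2.projectPoints(model_pts_3d, rvec, tvec, K, dist)
    proj = proj.astype(np.float32)

    # Optical flow gives their new 2D positions without requiring any
    # appearance model of the object.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, proj, None)
    ok = status.ravel() == 1
    obj = model_pts_3d[ok]
    img = nxt[ok]

    # Non-linear (Levenberg-Marquardt) pose refinement from 2D/3D correspondences.
    rvec, tvec = cv2.solvePnPRefineLM(obj, img, K, dist, rvec, tvec)
    return rvec, tvec
```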

    Deep Retinal Optical Flow: From Synthetic Dataset Generation to Framework Creation and Evaluation

    Sustained delivery of regenerative retinal therapies by robotic systems requires intra-operative tracking of the retinal fundus. This thesis presents a supervised convolutional neural network that densely predicts optical flow of the retinal fundus, using semantic segmentation as an auxiliary task. Retinal flow information missing due to occlusion by surgical tools or other effects is implicitly inpainted, allowing for robust tracking of surgical targets. As manual annotation of optical flow is infeasible, a flexible algorithm for the generation of large synthetic training datasets on the basis of given intra-operative retinal images and tool templates is developed. The compositing of synthetic images is approached as a layer-wise operation implementing a number of transforms at every level which can be extended as required, mimicking the various phenomena visible in real data. Optical flow ground truth is calculated from the motion transforms with the help of oflib, an open-source optical flow library available from the Python Package Index, which enables the user to manipulate, evaluate, and combine flow fields. The PyTorch version of oflib is fully differentiable and therefore suitable for use in deep learning methods requiring back-propagation. The optical flow estimation from the network trained on synthetic data is evaluated using three performance metrics obtained from tracking a grid and sparsely annotated ground truth points. The evaluation benchmark consists of a series of challenging real intra-operative clips obtained from an extensive internally acquired dataset encompassing representative surgical cases. The deep learning approach clearly outperforms variational baseline methods and is shown to generalise well to real data featuring scenarios routinely observed during vitreoretinal procedures. This indicates that complex synthetic training datasets can be used to specifically guide optical flow estimation, laying the foundation for a robust system that can assist with intra-operative tracking of moving surgical targets even when occluded.
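    The idea of deriving flow ground truth from the motion transforms used to composite each synthetic layer can be sketched as follows for a single rigid (rotation plus translation) transform. oflib wraps this kind of operation; the hand-rolled helper flow_from_similarity below is an illustration under that assumption, not the library's API:

```python
import numpy as np

def flow_from_similarity(shape, angle_deg, tx, ty, center=None):
    """Dense flow field (H, W, 2) describing where each pixel of an image of
    size `shape` moves under a rotation about `center` followed by a
    translation (tx, ty); exact ground truth for the warped image."""
    h, w = shape
    cy, cx = (h / 2.0, w / 2.0) if center is None else center
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)

    a = np.deg2rad(angle_deg)
    cos_a, sin_a = np.cos(a), np.sin(a)

    # Rotate each pixel coordinate about the center, then translate.
    x_new = cos_a * (xs - cx) - sin_a * (ys - cy) + cx + tx
    y_new = sin_a * (xs - cx) + cos_a * (ys - cy) + cy + ty

    # Flow is the per-pixel displacement (u, v) = (x' - x, y' - y).
    return np.stack([x_new - xs, y_new - ys], axis=-1)

# Example: ground-truth flow for a 5 degree rotation with a small shift.
gt_flow = flow_from_similarity((512, 512), angle_deg=5.0, tx=3.0, ty=-2.0)
```

    Per-layer flow fields of this kind would then be combined and masked according to the tool and background layers to form the final training target.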

    Computational localization microscopy with extended axial range

    A new single-aperture 3D particle-localization and tracking technique is presented that demonstrates an increase in depth range by more than an order of magnitude without compromising optical resolution and throughput. We exploit the extended depth range and depth-dependent translation of an Airy-beam PSF for 3D localization over an extended volume in a single snapshot. The technique is applicable to all bright-field and fluorescence modalities for particle localization and tracking, ranging from super-resolution microscopy through to the tracking of fluorescent beads and endogenous particles within cells. We demonstrate and validate its application to real-time 3D velocity imaging of fluid flow in capillaries using fluorescent tracer beads. An axial localization precision of 50 nm was obtained over a depth range of 120 μm using a 0.4 NA, 20× microscope objective. We believe this to be the highest ratio of axial range to precision reported to date.
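    The depth-encoding principle (axial position read out from the depth-dependent lateral translation of the Airy-beam PSF) can be sketched as below. The centroid localization, the polynomial calibration mapping displacement to depth, and the function name localize_3d are illustrative assumptions rather than the authors' processing chain:

```python
import numpy as np

def localize_3d(spot, roi_origin, calib_coeffs, ref_xy, direction=(1.0, 1.0)):
    """Estimate (x, y, z) of a single emitter from a cropped PSF image.

    spot         : 2D ROI containing one Airy-beam PSF.
    roi_origin   : (x0, y0) of the ROI in the full camera frame [pixels].
    calib_coeffs : polynomial coefficients (np.polyfit order) mapping signed
                   lateral PSF displacement [pixels] to depth z, measured
                   beforehand with a bead scanned in z.
    ref_xy       : PSF position at the focal plane (z = 0) [pixels].
    direction    : lateral direction along which the Airy PSF translates
                   with defocus, from the same calibration.
    """
    spot = spot - spot.min()
    ys, xs = np.mgrid[0:spot.shape[0], 0:spot.shape[1]]
    total = spot.sum()

    # Intensity-weighted centroid as a simple lateral localization.
    cx = (xs * spot).sum() / total + roi_origin[0]
    cy = (ys * spot).sum() / total + roi_origin[1]

    # Signed displacement along the calibrated translation axis encodes depth.
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    disp = (cx - ref_xy[0]) * d[0] + (cy - ref_xy[1]) * d[1]
    z = np.polyval(calib_coeffs, disp)
    return cx, cy, z
```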