
    Graphics processing unit accelerating compressed sensing photoacoustic computed tomography with total variation

    Photoacoustic computed tomography with compressed sensing (CS-PACT) is a commonly used imaging strategy for sparse-sampling PACT. However, it is very time-consuming because of the iterative process involved in the image reconstruction. In this paper, we present a graphics processing unit (GPU)-based parallel computation framework for total-variation-based CS-PACT and adapt it to a custom-made PACT system. Specifically, five compute-intensive operators are extracted from the iterative algorithm and redesigned for parallel execution on a GPU. We achieved image reconstruction 24–31 times faster than on the CPU. We performed in vivo experiments on human hands to verify the feasibility of the developed method.
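    A minimal sketch of the idea, not the paper's implementation: total-variation-regularised reconstruction by gradient descent with every array operation on the GPU. CuPy is an assumption (the abstract does not name a library), and the forward operator A, its adjoint At, and the function names are illustrative stand-ins for the PACT system model.

        import cupy as cp

        def tv_grad(x, eps=1e-8):
            # Gradient of a smoothed isotropic total variation of a 2D image.
            dx = cp.diff(x, axis=1, append=x[:, -1:])
            dy = cp.diff(x, axis=0, append=x[-1:, :])
            mag = cp.sqrt(dx ** 2 + dy ** 2 + eps)
            px, py = dx / mag, dy / mag
            # The TV gradient is the negative divergence of the normalised gradient field.
            return -(cp.diff(px, axis=1, prepend=px[:, :1])
                     + cp.diff(py, axis=0, prepend=py[:1, :]))

        def reconstruct(A, At, b, shape, lam=0.01, step=1e-3, n_iter=100):
            # Minimise ||A(x) - b||^2 + lam * TV(x) by gradient descent;
            # A and At are hypothetical forward/adjoint operators on GPU arrays.
            x = cp.zeros(shape, dtype=cp.float32)
            for _ in range(n_iter):
                x -= step * (At(A(x) - b) + lam * tv_grad(x))
            return x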

    EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging without External Trackers

    Ultrasound (US) is the most widely used fetal imaging technique. However, US images have a limited capture range and suffer from view-dependent artefacts such as acoustic shadows. Compounding overlapping 3D US acquisitions into a high-resolution volume can extend the field of view and remove image artefacts, which is useful for retrospective analysis, including population-based studies. However, such volume reconstructions require the relative transformations between the probe positions from which the individual volumes were acquired. In prenatal US scans, the fetus can move independently of the mother, so external trackers such as electromagnetic or optical systems cannot capture the relative motion between the probe and the moving fetus. We provide a novel methodology for image-based tracking and volume reconstruction by combining recent advances in deep learning and simultaneous localisation and mapping (SLAM). Tracking semantics are established with a Residual 3D U-Net, and the output is fed to the SLAM algorithm. As a proof of concept, experiments are conducted on US volumes taken from a whole-body fetal phantom and from the heads of real fetuses. For fetal head segmentation, we also introduce a novel weak annotation approach to minimise the manual effort required for ground-truth annotation. We evaluate our method qualitatively, and quantitatively with respect to tissue discrimination accuracy and tracking robustness.
    Comment: MICCAI Workshop on Perinatal, Preterm and Paediatric Image Analysis (PIPPI), 201
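    As a rough illustration of how segmentation output can drive tracking (this is not the authors' pipeline), the sketch below converts binary fetal-head masks, standing in for Residual 3D U-Net predictions, into point sets and estimates the rigid transform between two acquisitions with the Kabsch algorithm. mask_to_points and rigid_transform are hypothetical names, and point correspondence is assumed to be given (e.g. from a nearest-neighbour ICP step).

        import numpy as np

        def mask_to_points(mask, spacing):
            # World-space coordinates of the foreground voxels of a 3D mask.
            return np.argwhere(mask) * np.asarray(spacing, dtype=float)

        def rigid_transform(src, dst):
            # Least-squares rotation R and translation t with dst ~ R @ src + t
            # for paired point sets (Kabsch algorithm).
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            H = (src - cs).T @ (dst - cd)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, cd - R @ cs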

    Towards segmentation and spatial alignment of the human embryonic brain using deep learning for atlas-based registration

    We propose an unsupervised deep learning method for atlas-based registration that achieves segmentation and spatial alignment of the embryonic brain in a single framework. Our approach consists of two sequential networks with a specifically designed loss function to address the challenges of 3D first-trimester ultrasound. The first network learns the affine transformation and the second learns the voxelwise nonrigid deformation between the target image and the atlas. We trained this network end-to-end and validated it against a ground truth on synthetic datasets designed to resemble the challenges present in 3D first-trimester ultrasound. The method was tested on a dataset of human embryonic ultrasound volumes acquired at 9 weeks gestational age, which showed alignment of the brain in some cases and gave insight into the open challenges for the proposed method. We conclude that our method is a promising approach towards fully automated spatial alignment and segmentation of embryonic brains in 3D ultrasound.
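    A minimal PyTorch sketch of the two-stage design, making no claim about the paper's actual architectures: one network regresses a 3D affine matrix and a second predicts a dense displacement field, and both warp the moving image with differentiable resampling so the pipeline can be trained end-to-end. The convolutional bodies are placeholders.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class AffineNet(nn.Module):
            # Stage 1: regress a 3x4 affine matrix from the image pair.
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(2, 8, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                )
                self.fc = nn.Linear(8, 12)
                # Initialise to the identity transform.
                self.fc.weight.data.zero_()
                self.fc.bias.data = torch.eye(3, 4).flatten()

            def forward(self, moving, fixed):
                theta = self.fc(self.features(torch.cat([moving, fixed], 1)))
                grid = F.affine_grid(theta.view(-1, 3, 4), moving.shape,
                                     align_corners=False)
                return F.grid_sample(moving, grid, align_corners=False), grid

        class DeformNet(nn.Module):
            # Stage 2: predict a voxelwise displacement on top of the affine grid.
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(16, 3, 3, padding=1),
                )

            def forward(self, moving, fixed, grid):
                flow = self.net(torch.cat([moving, fixed], 1))    # (N, 3, D, H, W)
                warped_grid = grid + flow.permute(0, 2, 3, 4, 1)  # add displacement
                return F.grid_sample(moving, warped_grid, align_corners=False)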

    Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation

    In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning large tissue surfaces in the presence of deformation is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes, to facilitate the capture of good-quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or to deform with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning that can deal with free-form tissue deformation. The 3D structure of the surgical scene is recovered, and a feature-based method is proposed to estimate the motion of the tissue in real time. A desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require learning the tissue motion prior to scanning and can deal with free-form deformation. We deployed this framework on the da Vinci surgical robot using the da Vinci Research Kit (dVRK) for ultrasound tissue scanning. Since the framework does not rely on information from the ultrasound data, it can easily be extended to other probe-based imaging modalities.
    Comment: 7 pages, 5 figures, ICRA 202
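    The trajectory update can be illustrated with a 2D simplification of the projective-geometry step (the paper works with recovered 3D structure): features matched between the reference frame and the current frame yield a homography that re-projects the manually defined waypoints. OpenCV is an assumption, and update_trajectory is a hypothetical name.

        import cv2
        import numpy as np

        def update_trajectory(ref_img, cur_img, trajectory):
            # Warp scan waypoints (N x 2 pixel coords in ref_img) into cur_img.
            orb = cv2.ORB_create(1000)
            k1, d1 = orb.detectAndCompute(ref_img, None)
            k2, d2 = orb.detectAndCompute(cur_img, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            src = np.float32([k1[m.queryIdx].pt for m in matches])
            dst = np.float32([k2[m.trainIdx].pt for m in matches])
            # Robust homography estimation; needs at least four good matches.
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            pts = trajectory.reshape(-1, 1, 2).astype(np.float32)
            return cv2.perspectiveTransform(pts, H).reshape(-1, 2)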

    An open environment CT-US fusion for tissue segmentation during interventional guidance.

    Therapeutic ultrasound (US) can be noninvasively focused to activate drugs, ablate tumors and deliver drugs beyond the blood-brain barrier. However, well-controlled guidance of US therapy requires fusion with a navigational modality, such as magnetic resonance imaging (MRI) or X-ray computed tomography (CT). Here, we developed and validated tissue characterization using a fusion between US and CT. The performance of the CT/US fusion was quantified by the calibration error, target registration error and fiducial registration error. Met-1 tumors in the fat pads of 12 female FVB mice provided a model of developing breast cancer with which to evaluate CT-based tissue segmentation. Hounsfield units (HU) within the tumor and surrounding fat pad were quantified, validated with histology and segmented for parametric analysis (fat: -300 to 0 HU, protein-rich: 1 to 300 HU, and bone: HU > 300). Our open-source CT/US fusion system differentiated soft tissue, bone and fat with a spatial accuracy of ~1 mm. Region of interest (ROI) analysis of the tumor and surrounding fat pad using a 1 mm² ROI resulted in mean HU of 68 ± 44 within the tumor and -97 ± 52 within the fat pad adjacent to the tumor (p < 0.005). The tumor area measured by CT and histology was correlated (r² = 0.92), while the area designated as fat decreased with increasing tumor size (r² = 0.51). Analysis of CT and histology images of the tumor and surrounding fat pad revealed an average percentage of fat of 65.3% vs. 75.2%, 36.5% vs. 48.4%, and 31.6% vs. 38.5% for tumors <75 mm³, 75-150 mm³ and >150 mm³, respectively. Further, CT mapped bone-soft tissue interfaces near the acoustic beam during real-time imaging. Combined CT/US is a feasible method for guiding interventions by tracking the acoustic focus within a pre-acquired CT image volume and characterizing tissues proximal to and surrounding the acoustic focus.
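    The HU windows quoted above translate directly into a simple labelling rule. A minimal NumPy sketch, assuming a CT volume already calibrated in Hounsfield units (segment_hu is a hypothetical name; the open-source system itself handles the registration and display):

        import numpy as np

        def segment_hu(ct, fat=(-300, 0), protein=(1, 300), bone=300):
            # Label a CT volume: 1 = fat, 2 = protein-rich soft tissue, 3 = bone.
            labels = np.zeros(ct.shape, dtype=np.uint8)
            labels[(ct >= fat[0]) & (ct <= fat[1])] = 1
            labels[(ct >= protein[0]) & (ct <= protein[1])] = 2
            labels[ct > bone] = 3
            return labels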