21 research outputs found

    Landmark Tracking in Liver US images Using Cascade Convolutional Neural Networks with Long Short-Term Memory

    This study proposed a deep learning-based tracking method for ultrasound (US) image-guided radiation therapy. The proposed cascade deep learning model is composed of an attention network, a mask region-based convolutional neural network (Mask R-CNN), and a long short-term memory (LSTM) network. The attention network learns a mapping from a US image to a suspected area of landmark motion in order to reduce the search region. The Mask R-CNN then produces multiple region-of-interest (ROI) proposals in the reduced region and identifies the proposed landmark via three network heads: bounding-box regression, proposal classification, and landmark segmentation. The LSTM network models the temporal relationship among successive image frames for bounding-box regression and proposal classification. To consolidate the final proposal, a selection method is designed according to the similarities between sequential frames. The proposed method was tested on the liver US tracking datasets used in the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2015 challenge, where the landmarks were annotated by three experienced observers to obtain their mean positions. Five-fold cross-validation on the 24 given US sequences with ground truths shows that the mean tracking error over all landmarks is 0.65 ± 0.56 mm and that the errors of all landmarks are within 2 mm. We further tested the proposed model on 69 landmarks from the testing dataset, which has an image pattern similar to that of the training data, resulting in a mean tracking error of 0.94 ± 0.83 mm. Our experimental results demonstrate the feasibility and accuracy of the proposed method in tracking liver anatomic landmarks in US images, providing a potential solution for real-time liver tracking for active motion management during radiation therapy.
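    The tracking-error statistics reported above (mean ± standard deviation of the per-frame Euclidean distance between predicted and annotated landmark positions, in mm) can be sketched as follows. This is an illustrative computation, not code from the paper; the coordinates and the `pixel_spacing_mm` value are hypothetical.

```python
import numpy as np

def tracking_error_mm(pred, gt, pixel_spacing_mm=1.0):
    """Per-frame Euclidean tracking error between predicted and
    ground-truth landmark positions (N x 2 arrays, pixel coordinates),
    converted to mm via the image pixel spacing."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    err = np.linalg.norm(pred - gt, axis=1) * pixel_spacing_mm
    return err.mean(), err.std()

# Toy example with made-up coordinates (not CLUST data):
pred = [[10.0, 20.0], [11.0, 21.0], [12.5, 22.0]]
gt   = [[10.5, 20.0], [11.0, 20.5], [12.0, 22.5]]
mean_err, std_err = tracking_error_mm(pred, gt, pixel_spacing_mm=0.4)
```

    A result such as "0.65 ± 0.56 mm" is simply `mean_err` ± `std_err` aggregated over all frames and landmarks.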

    Ultrasound-guided needle tracking with deep learning: A novel approach with photoacoustic ground truth

    Accurate needle guidance is crucial for safe and effective clinical diagnosis and treatment procedures. Conventional ultrasound (US)-guided needle insertion often encounters challenges in consistently and precisely visualizing the needle, necessitating the development of reliable methods to track it. As a powerful tool in image processing, deep learning has shown promise for enhancing needle visibility in US images, although its dependence on manual annotation or simulated data as ground truth can introduce bias or hinder generalization to real US images. Photoacoustic (PA) imaging has demonstrated its capability for high-contrast needle visualization. In this study, we explore the potential of PA imaging as a reliable ground truth for deep learning network training without the need for expert annotation. Our network (UIU-Net), trained on ex vivo tissue image datasets, has shown remarkable precision in localizing needles within US images. The evaluation of needle segmentation performance extends across previously unseen ex vivo data and in vivo human data (collected from an open-source data repository). Specifically, for human data, the Modified Hausdorff Distance (MHD) value stands at approximately 3.73, and the targeting error is around 2.03, indicating strong similarity and small orientation deviation between the predicted and actual needle locations. A key advantage of our method is its applicability beyond US images captured from specific imaging systems, extending to images from other US imaging systems. This article is published as Hui, Xie, Praveenbalaji Rajendran, Tong Ling, Xianjin Dai, Lei Xing, and Manojit Pramanik. "Ultrasound-guided needle tracking with deep learning: A novel approach with photoacoustic ground truth." Photoacoustics 34 (2023): 100575. doi: https://doi.org/10.1016/j.pacs.2023.100575. © 2023 The Author(s).
This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/)
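    The Modified Hausdorff Distance used to score needle segmentation is the standard point-set metric of Dubuisson and Jain (1994): the larger of the two mean nearest-neighbour distances between the predicted and actual needle point sets. A minimal sketch, with hypothetical pixel coordinates (not data from the paper):

```python
import numpy as np

def modified_hausdorff(a, b):
    """Modified Hausdorff Distance between two point sets a (M x 2)
    and b (N x 2), e.g. pixels of a predicted vs. an actual needle."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    # Pairwise Euclidean distance matrix, shape (M, N).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1).mean()  # mean nearest-neighbour distance a -> b
    d_ba = d.min(axis=0).mean()  # mean nearest-neighbour distance b -> a
    return max(d_ab, d_ba)

# Two short hypothetical needle centrelines (pixel coordinates):
pred = [[0, 0], [1, 1], [2, 2]]
true = [[0, 1], [1, 2], [2, 3]]
mhd = modified_hausdorff(pred, true)
```

    A reported MHD of ~3.73 would correspond to this quantity evaluated between the predicted and reference needle point sets.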

    Targeted Molecular Imaging of Pancreatic Cancer with a Miniature Endoscope

    It is highly desirable to develop novel approaches that improve the survival rate of pancreatic cancer patients through early detection. Here, we present such an approach based on photoacoustic and fluorescence molecular imaging of pancreatic tumors using a miniature multimodal endoscope in combination with targeted multifunctional iron oxide nanoparticles (IONPs). A novel fan-shaped scanning mechanism was developed to minimize the invasiveness of endoscopic imaging of pancreatic tumors. The results show that the enhancements in photoacoustic and fluorescence signals using amino-terminal fragment (ATF)-targeted IONPs were approximately four to six times higher than those using non-targeted IONPs. Our study indicates the potential of combining multimodal photoacoustic-fluorescence endoscopy with targeted multifunctional nanoparticles as an efficient tool to provide improved specificity and sensitivity for pancreatic cancer detection.
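    The "four to six times" figure is a fold-enhancement of the mean ROI signal with targeted versus non-targeted nanoparticles. A minimal sketch of such a ratio, under the assumption of a common background level to subtract; all numbers below are hypothetical, not measurements from the study:

```python
import numpy as np

def enhancement_ratio(targeted_roi, nontargeted_roi, background=0.0):
    """Fold-enhancement of mean ROI signal for targeted vs.
    non-targeted contrast agent, after background subtraction."""
    t = np.mean(targeted_roi) - background
    n = np.mean(nontargeted_roi) - background
    return t / n

# Hypothetical mean photoacoustic amplitudes (arbitrary units):
targeted = [120.0, 130.0, 125.0]
nontargeted = [25.0, 30.0, 27.5]
ratio = enhancement_ratio(targeted, nontargeted, background=5.0)
```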

    FMTPen: A Miniaturized Handheld Fluorescence Molecular Tomography Probe for Image-Guided Cancer Surgery

    We describe a novel handheld device (termed FMTPen) for three-dimensional (3D) fluorescence molecular tomography (FMT). The FMTPen is characterized by its bendable structure and miniaturized size (10 mm in diameter), which allow it to potentially be used as an intraoperative tool for the detection of tumor margins and for image-guided surgery. Several phantom experiments based on indocyanine green (ICG), an FDA-approved near-infrared (NIR) fluorescent dye, were conducted to evaluate the imaging ability of this device. Two tumor-bearing mice were systemically injected with tumor-targeted NIR fluorescent probes (NIR-830-ATF68-IONP and NIR-830-ZHER2:343-IONP, respectively) and were then imaged to further demonstrate the ability of this FMT probe to image small animals.

    Miniature multimodal endoscopic probe based on double-clad fiber

    Optical coherence tomography (OCT) can capture light-scattering properties at high resolution, while photoacoustic imaging (PAI) is ideal for mapping optical absorbers in biological tissues, and ultrasound (US) can penetrate deep into tissue to provide structural information. It is attractive and challenging to integrate these three imaging modalities into a miniature probe through which both the optical absorption and scattering properties of tissue, as well as deep-tissue structure, can be obtained. Here, we present a novel side-view probe integrating PAI, OCT, and US imaging based on a double-clad fiber, which serves as a common optical path for PAI (light delivery) and OCT (light delivery/detection), together with a 40 MHz unfocused ultrasound transducer for PAI (photoacoustic detection) and US (ultrasound transmission/receiving), all within an overall diameter of 1.0 mm. Experiments were conducted to demonstrate the capabilities of the integrated multimodal imaging probe, which is suitable for endoscopic imaging and intravascular imaging.

    Miniature Endoscope for Multimodal Imaging

    A single miniature endoscope capable of concurrently probing multiple tissue contrast mechanisms at high resolution is highly attractive, as it can provide complementary, more complete tissue information on internal organs that are hard to access. Here we describe such a miniature endoscope, only 1 mm in diameter, that integrates photoacoustic imaging (PAI), optical coherence tomography (OCT), and ultrasound (US). The integration of PAI/OCT/US allows for high-resolution imaging of three tissue contrasts: optical absorption (PAI), optical scattering (OCT), and acoustic properties (US). We demonstrate the capabilities of this trimodal endoscope using a mouse ear, a human hand, and human arteries with atherosclerotic plaques. This 1-mm-diameter trimodal endoscope has the potential to be used for imaging internal organs such as arteries, the GI tract, the esophagus, and the prostate in both humans and animals.