9 research outputs found

    Deep Learning-based Synthetic High-Resolution In-Depth Imaging Using an Attachable Dual-element Endoscopic Ultrasound Probe

    Endoscopic ultrasound (EUS) imaging involves a trade-off between resolution and penetration depth. Given the in-vivo characteristics of human organs, clinicians must be provided with appropriate hardware specifications for precise diagnosis. Recently, super-resolution (SR) ultrasound imaging studies, including deep learning-based SR, have been reported for enhancing ultrasound images. However, most of these studies did not account for the nature of ultrasound imaging; rather, they applied conventional SR techniques based on downsampling of ultrasound images. In this study, we propose a novel deep learning-based high-resolution in-depth imaging probe capable of acquiring low- and high-frequency ultrasound image pairs. We developed an attachable dual-element EUS probe with customized low- and high-frequency ultrasound transducers under tight hardware constraints, and designed a special geared structure so that both transducers share the same image plane. The proposed system was evaluated with a wire phantom and a tissue-mimicking phantom. After the evaluation, 442 ultrasound image pairs were acquired from the tissue-mimicking phantom. We then applied several deep learning models to obtain synthetic high-resolution in-depth images, demonstrating the feasibility of our approach for unmet clinical needs. Furthermore, we analyzed the results quantitatively and qualitatively to find a deep learning model suitable for our task. The results demonstrate that the proposed dual-element EUS probe, combined with an image-to-image translation network, has the potential to provide synthetic high-frequency ultrasound images deep inside tissues.
    Comment: 10 pages, 9 figures
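As a concrete example of the quantitative analysis mentioned above, image-to-image translation outputs are commonly scored against reference high-frequency images with metrics such as PSNR. A minimal NumPy sketch (the metric choice here is illustrative, not the paper's exact evaluation protocol):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy pair: a synthetic "high-frequency" reference and a noisy degraded copy.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 255.0, size=(64, 64))
noisy = np.clip(ref + rng.normal(0.0, 5.0, size=ref.shape), 0.0, 255.0)
print(psnr(ref, noisy))  # roughly 34 dB for sigma = 5 noise
```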

    Intelligent smartphone-based multimode imaging otoscope for the mobile diagnosis of otitis media

    Otitis media (OM) is one of the most common ear diseases in children and a common reason for outpatient visits to medical doctors in primary care practices. Adhesive OM (AdOM) is recognized as a sequela of OM with effusion (OME) and often requires surgical intervention. OME and AdOM exhibit similar symptoms, and it is difficult to distinguish between them using a conventional otoscope in a primary care unit. The accuracy of the diagnosis is highly dependent on the experience of the examiner, so an advanced otoscope whose diagnostic accuracy varies less with the examiner is crucial for more accurate diagnosis. Thus, we developed an intelligent smartphone-based multimode imaging otoscope for better diagnosis of OM, even in mobile environments. The system offers spectral and autofluorescence imaging of the tympanic membrane using a smartphone attached to the developed multimode imaging module. Moreover, it is capable of intelligent analysis for distinguishing between normal, OME, and AdOM ears using a machine learning algorithm. Using the developed system, we examined the ears of 69 patients to assess its performance in distinguishing between normal, OME, and AdOM ears. In the classification of ear diseases, the multimode system based on machine learning analysis achieved higher accuracy and F1 scores than single RGB image analysis, RGB/fluorescence image analysis, and analysis of spectral image cubes alone. These results demonstrate that the intelligent multimode diagnostic capability of an otoscope would be beneficial for better diagnosis and management of OM. © 2021 OSA - The Optical Society. All rights reserved.

    Smart Xero Algorithm-Based Ultrasonic Blood Flowmeter System for Monitoring an Extracorporeal Membrane Oxygenation Device

    Keywords: Xero, ultrasonic blood flowmeter, zero-crossing, cross-correlation, extracorporeal membrane oxygenation device
    Contents: 1 Introduction; 2 Methods and Materials (2.1 Development of a UFM Sensor Module; 2.2 Experimental Setup; 2.3 Descriptions of the Proposed Xero Algorithm; 2.4 Experimental Procedures); 3 Results (3.1 Flowrate Estimation; 3.2 Robustness According to the Fluid Temperature; 3.3 Accuracy via Number of Cycles; 3.4 Continuous Monitoring Performance of the UFM System); 4 Discussion; 5 Conclusions; Bibliography; Abstract in Korean

    Deep Learning-Based Framework for Fast and Accurate Acoustic Hologram Generation

    Acoustic holography has been gaining attention for various applications such as non-contact particle manipulation, non-invasive neuromodulation, and medical imaging. However, only a few studies on how to generate acoustic holograms have been conducted, and even conventional acoustic hologram algorithms show limited performance in fast and accurate hologram generation, hindering the development of novel applications. We propose a deep learning-based framework for fast and accurate acoustic hologram generation. The framework has an autoencoder-like architecture, so unsupervised training is realized without any ground truth. Within this framework, we demonstrate a newly developed hologram generator network, the Holographic Ultrasound generation Network (HU-Net), which is suitable for unsupervised learning of hologram generation, and a novel loss function devised for energy-efficient holograms. Furthermore, to accommodate various hologram devices (i.e., ultrasound transducers), we propose a physical constraint layer. Simulation and experimental studies were carried out for two different hologram devices: a 3D-printed lens attached to a single-element transducer, and a 2D ultrasound array. The proposed framework was compared with the iterative angular spectrum approach (IASA) and a state-of-the-art iterative optimization method, Diff-PAT. In the simulation study, our framework achieved a few hundred times faster generation speed, with comparable or even better reconstruction quality, than IASA and Diff-PAT. In the experimental study, the framework was validated with 3D-printed lenses fabricated by different methods, and the physical effect of the lenses on the reconstruction quality was discussed. The outcomes of the proposed framework in various cases (i.e., hologram generator networks, loss functions, hologram devices) suggest that it may become a useful alternative for existing acoustic hologram applications and enable novel medical applications.
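The iterative angular spectrum approach mentioned above is built on angular spectrum propagation of a complex pressure field between parallel planes. A minimal NumPy sketch of one forward propagation step (the parameters below, e.g. a 1 MHz source in water, are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, z, f=1.0e6, c=1500.0):
    """Propagate a complex pressure field p0 (N x N, sample pitch dx [m])
    a distance z [m] using the angular spectrum method."""
    n = p0.shape[0]
    k = 2 * np.pi * f / c                      # medium wavenumber
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # spatial angular frequencies
    kxg, kyg = np.meshgrid(kx, kx)
    kz2 = k**2 - kxg**2 - kyg**2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    # Phase-shift propagating components; drop evanescent ones.
    H = np.where(kz2 > 0, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(p0) * H)

# Example: propagate a uniform circular source plane forward by 10 mm.
n, dx = 64, 0.5e-3
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
src = ((x * dx) ** 2 + (y * dx) ** 2 < (8e-3) ** 2).astype(complex)
field = angular_spectrum_propagate(src, dx, 10e-3)
print(np.abs(field).max())
```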

    CSS-Net: Classification and Substitution for Segmentation of Rotator Cuff Tear

    Magnetic resonance imaging (MRI) is widely used to diagnose orthopedic injuries because it offers high spatial resolution in a non-invasive manner. Since a rotator cuff tear (RCT) is a tear of the supraspinatus tendon (ST), a precise understanding of both is required to diagnose the tear. However, previous deep learning studies have been insufficient in capturing the correlations between the ST and RCT effectively and accurately. Therefore, in this paper, we propose a new method, substitution learning, which improves RCT diagnosis from MRI images via knowledge transfer. Substitution learning mainly aims at segmenting the RCT from MRI images using the transferred knowledge while learning the correlations between RCT and ST. In substitution learning, the knowledge of these correlations is acquired by substituting the segmentation target (RCT) with the other target (ST), which has similar properties. To this end, we designed a novel deep learning model based on multi-task learning, incorporating the newly developed substitution learning, with three parallel pipelines: (1) segmentation of RCT and ST regions, (2) classification of the existence of an RCT, and (3) substitution of the ruptured ST regions (i.e., the RCTs) with recovered ST regions. We validated the model through experiments using 889 multi-categorical MRI images. The results show that the proposed deep learning model outperforms other segmentation models in diagnosing RCT, improving IoU values by 6-8%. Remarkably, the ablation study shows that substitution learning ensured more effective knowledge transfer.
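The IoU figures quoted above refer to the standard intersection-over-union segmentation metric. A minimal sketch of how it is computed on binary masks (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union of two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return float(np.logical_and(pred, target).sum() / union)

# Toy masks: two 4x4 squares overlapping in a 2x2 corner.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1   # 16 px
b = np.zeros((8, 8), dtype=int); b[4:8, 4:8] = 1   # 16 px, overlap 4 px
print(iou(a, b))  # 4 / (16 + 16 - 4) = 1/7
```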

    Speckle Reduction via Deep Content-Aware Image Prior for Precise Breast Tumor Segmentation in an Ultrasound Image

    The performance of computer-aided diagnosis (CAD) systems based on ultrasound imaging has been enhanced owing to advances in deep learning. However, because of the inherent speckle noise in ultrasound images, the ambiguous boundaries of lesions deteriorate and are difficult to distinguish, degrading CAD performance. Although several methods have been proposed over the decades to reduce speckle noise, this task remains a challenge that must be addressed to enhance CAD performance. In this paper, we propose a deep content-aware image prior with a content-aware attention module for superior despeckling of ultrasound images without clean images. For the image prior, we developed a content-aware attention module to exploit the content information in an input image. In this module, super-pixel pooling is used to direct attention to salient regions in an ultrasound image; it can therefore provide more content information about the input image than other attention modules. The deep content-aware image prior consists of deep learning networks based on this attention module. We validated it by applying it as a preprocessing step for breast tumor segmentation in ultrasound images, one of the tasks in CAD. Our method improved segmentation performance by 15.89% in terms of the area under the precision-recall curve. The results demonstrate that our method enhances the quality of ultrasound images by effectively reducing speckle noise while preserving important information in the image, which is promising for the design of superior CAD systems.
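The super-pixel pooling step described above can be illustrated as average pooling within each super-pixel region, broadcast back to the pixels of that region. A minimal NumPy sketch (the rectangular "super-pixels" here are an illustrative assumption; real super-pixel labels would come from an algorithm such as SLIC):

```python
import numpy as np

def superpixel_pool(feature_map: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Average-pool a feature map within each super-pixel region and
    broadcast the pooled value back to every pixel of that region."""
    pooled = np.zeros_like(feature_map, dtype=np.float64)
    for lab in np.unique(labels):
        mask = labels == lab
        pooled[mask] = feature_map[mask].mean()
    return pooled

# Toy example: a 4x4 map split into two "super-pixels" (left/right halves).
fm = np.arange(16, dtype=float).reshape(4, 4)
labels = np.zeros((4, 4), dtype=int); labels[:, 2:] = 1
print(superpixel_pool(fm, labels))  # left half -> 6.5, right half -> 8.5
```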

    Ultrasonic blood flowmeter with a novel Xero algorithm for a mechanical circulatory support system

    Mechanical circulatory support systems (MCSSs) are crucial devices for transplant candidates with heart failure. Blood flowing through an MCSS can recirculate or even stagnate in the event of critical blood flow issues. To avoid emergencies due to abnormal changes in the flow, continuous changes in the flowrate should be measured with high accuracy and robustness. Better flowrate measurement requires a more advanced ultrasonic blood flowmeter (UFM), a noninvasive measurement tool. In this paper, we propose a novel UFM sensor module using a new algorithm (Xero) that exploits the advantages of both the conventional cross-correlation (Xcorr) and zero-crossing (Zero) algorithms while relying only on zero-crossing-based computation. To verify the capability of our own developed and optimized ultrasonic sensor module for MCSSs, the accuracy, robustness, and continuous monitoring performance of the proposed algorithm were compared with those of the conventional algorithms on the developed sensor module. The results show that Xero is superior to the other algorithms for flowrate measurements under different environments, offering an error rate as low as 0.92%, higher robustness to changing fluid temperatures than the conventional algorithms, and sensitive responses to sudden changes in flowrate. Thus, the proposed UFM system with Xero has great potential for flowrate measurement in MCSSs. © 2021
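The cross-correlation (Xcorr) side of the comparison above estimates the transit-time difference between ultrasound signals as the lag at the peak of their cross-correlation. A minimal NumPy sketch of integer-lag delay estimation (the tone burst is an illustrative stand-in for real UFM echoes; this is not the Xero algorithm itself):

```python
import numpy as np

def xcorr_delay(sig_a: np.ndarray, sig_b: np.ndarray) -> int:
    """Estimate the sample delay of sig_b relative to sig_a via the
    peak of their full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr) - (len(sig_a) - 1))

# Toy transit-time pair: a tone burst and a copy delayed by 7 samples.
n = np.arange(256)
burst = np.sin(2 * np.pi * 0.05 * n) * np.exp(-(((n - 60) / 20.0) ** 2))
delayed = np.roll(burst, 7)
print(xcorr_delay(burst, delayed))  # 7
```

Sub-sample precision, as needed for real flowmetry, would require interpolating around the correlation peak; the zero-crossing family of algorithms instead tracks the timing of sign changes in the received waveform.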

    Multi-Task and Few-Shot Learning-Based Fully Automatic Deep Learning Platform for Mobile Diagnosis of Skin Diseases

    Fluorescence imaging-based diagnostic systems have been widely used to diagnose skin diseases because, compared with conventional RGB imaging, they provide detailed information related to the molecular composition of the skin. In addition, recent advances in smartphones have made them suitable for biomedical imaging, and various smartphone-based optical imaging systems have been developed for mobile healthcare. However, an advanced analysis algorithm is required to improve the diagnosis of skin diseases. Various deep learning-based algorithms have recently been developed for this purpose, but algorithms using only white-light reflectance RGB images have exhibited limited diagnostic performance. In this study, we developed an auxiliary deep learning network, the fluorescence-aided amplifying network (FAA-Net), to diagnose skin diseases using a custom multi-modal smartphone imaging system that offers RGB and fluorescence images. FAA-Net is equipped with a meta-learning-based algorithm to mitigate problems caused by the limited number of images acquired by the developed system. In addition, we devised a new attention-based module that learns the locations of skin diseases by itself and emphasizes potential disease regions, and incorporated it into FAA-Net. We conducted a clinical trial in a hospital to evaluate the performance of FAA-Net and to compare evaluation metrics of our model and other state-of-the-art models for diagnosing skin diseases with our multi-modal system. Experimental results demonstrated that our model achieved an 8.61% and 9.83% improvement in mean accuracy and area under the curve in classifying skin diseases, respectively, compared with other advanced models.

    Forward-Looking Multimodal Endoscopic System Based on Optical Multispectral and High-Frequency Ultrasound Imaging Techniques for Tumor Detection

    We developed a forward-looking (FL) multimodal endoscopic system that offers color, spectral classified, high-frequency ultrasound (HFUS) B-mode, and integrated backscattering coefficient (IBC) images for tumor detection in situ. Examining tumor distributions from the surface of the colon to deeper tissue is essential for determining a cancer treatment plan. For example, the submucosal invasion depth of a tumor, in addition to the tumor distribution on the colon surface, is used as an indicator of whether endoscopic dissection should be performed. We therefore devised the FL multimodal endoscopic system to provide information on the tumor distribution from the surface to deep tissue with high accuracy. The system was evaluated with bilayer gelatin phantoms whose layers have different properties along the lateral direction. After the phantom evaluation, the system was employed to characterize forty human colon tissues excised from cancer patients. The proposed system allowed us to obtain highly resolved chemical, anatomical, and macro-molecular information on the excised colon tissues, including tumors, thus enhancing the detection of tumor distributions from the surface to deep tissue. These results suggest that the FL multimodal endoscopic system could be an innovative screening instrument for quantitative tumor characterization. © 1982-2012 IEEE.
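The integrated backscattering coefficient (IBC) images mentioned above are typically formed from the backscattered power spectrum of tissue RF data normalized by a reference spectrum and averaged over the analysis bandwidth. A minimal NumPy sketch of such an estimate (the sampling rate, band, and normalization scheme are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

def integrated_backscatter(rf: np.ndarray, ref: np.ndarray, fs: float,
                           band=(20e6, 40e6)) -> float:
    """Mean dB ratio of a tissue RF power spectrum to a reference power
    spectrum over the analysis band [Hz] -- a simple IBC-style estimate."""
    freqs = np.fft.rfftfreq(len(rf), d=1.0 / fs)
    p_rf = np.abs(np.fft.rfft(rf)) ** 2
    p_ref = np.abs(np.fft.rfft(ref)) ** 2
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.mean(10.0 * np.log10(p_rf[sel] / p_ref[sel])))

# Sanity check: an echo exactly twice the reference amplitude is +6.02 dB.
rng = np.random.default_rng(1)
ref = rng.normal(size=1024)   # broadband reference segment (illustrative)
fs = 100e6                    # 100 MHz sampling rate (assumed)
print(round(integrated_backscatter(2.0 * ref, ref, fs), 2))  # 6.02
```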