
    Deep neural networks for non-linear model-based ultrasound reconstruction

Ultrasound reflection tomography is widely used to image large, complex specimens that are accessible from only a single side, such as well systems and nuclear power plant containment walls. Typical methods for inverting the measurements rely on delay-and-sum algorithms, which produce reconstructions rapidly but with significant artifacts. Recently, model-based reconstruction approaches using a linear forward model have been shown to improve image quality significantly compared to the conventional approach. However, even these techniques produce artifacts for complex objects because of the inherent non-linearity of the ultrasound forward model. In this paper, we propose a non-iterative model-based reconstruction method for inverting measurements based on non-linear forward models for ultrasound imaging. Our approach obtains an approximate estimate of the reconstruction using a simple linear back-projection and trains a deep neural network to refine this estimate into the final reconstruction. We apply our method to simulated ultrasound data and demonstrate dramatic improvements in image quality over both the delay-and-sum approach and the linear model-based reconstruction approach.
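To make the two-stage idea concrete, here is a minimal sketch in Python: a cheap linear back-projection (delay-and-sum over a pixel grid) followed by a small residual CNN that refines it. All shapes, names, and layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (illustrative, not the authors' code): a linear
# back-projection via delay-and-sum, then a CNN that refines it.
import numpy as np
import torch
import torch.nn as nn

def delay_and_sum(rf, element_x, grid_x, grid_z, c=1540.0, fs=40e6):
    """Back-project RF channel data (n_elements, n_samples) onto an image grid."""
    image = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # Two-way travel time from each element to the pixel and back.
            dist = np.sqrt((element_x - x) ** 2 + z ** 2)
            idx = np.round(2.0 * dist / c * fs).astype(int)
            valid = idx < rf.shape[1]
            image[iz, ix] = rf[valid, idx[valid]].sum()
    return image

class Refiner(nn.Module):
    """Small CNN that maps the blurry back-projection to a refined image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x) + x  # residual: learn the correction, not the image
```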

    A Deep Learning Framework for Single-Sided Sound Speed Inversion in Medical Ultrasound

Objective: Ultrasound elastography is gaining traction as an accessible and useful diagnostic tool for applications such as cancer detection and differentiation and thyroid disease diagnostics. Unfortunately, the state-of-the-art shear wave imaging techniques essential to this goal are limited to high-end ultrasound hardware because of their high power requirements, are extremely sensitive to patient and sonographer motion, and generally suffer from low frame rates. Motivated by research and theory showing that longitudinal wave sound speed carries diagnostic ability similar to shear wave imaging, we present an alternative approach using single-sided pressure-wave sound speed measurements from channel data. Methods: We present a single-sided sound speed inversion solution using a fully convolutional deep neural network. We use simulations for training, allowing the generation of limitless ground truth data. Results: We show that it is possible to invert for longitudinal sound speed in soft tissue at high frame rates. We validate the method on simulated data and present highly encouraging results on limited real data. Conclusion: Sound speed inversion on channel data has significant potential, made possible in real time with deep learning technologies. Significance: Specialized shear wave ultrasound systems remain inaccessible in many locations. Longitudinal sound speed and deep learning technologies enable an alternative approach to diagnosis based on tissue elasticity, and high frame rates are possible.
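A minimal sketch of what such a network could look like: a fully convolutional encoder-decoder that maps stacked channel data to a sound-speed map. The input layout (one channel per transmit event) and all layer sizes are our assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SoundSpeedFCN(nn.Module):
    """Illustrative fully convolutional net: raw channel data in,
    sound-speed map out. One input channel per transmit event (assumed)."""
    def __init__(self, n_transmits=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(n_transmits, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        # Output is a relative sound-speed map; scale into m/s downstream.
        return self.decoder(self.encoder(x))

# Trained purely on simulated (channel data, ground-truth sound speed) pairs.
net = SoundSpeedFCN()
dummy = torch.randn(1, 4, 128, 128)   # (batch, transmits, depth, lateral)
print(net(dummy).shape)               # torch.Size([1, 1, 128, 128])
```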

    Efficient B-mode Ultrasound Image Reconstruction from Sub-sampled RF Data using Deep Learning

In portable, three-dimensional, and ultra-fast ultrasound imaging systems, there is an increasing demand to reconstruct high-quality images from a limited number of radio-frequency (RF) measurements due to receiver (Rx) or transmit (Xmit) event sub-sampling. However, because of the side lobe artifacts introduced by RF sub-sampling, the standard beamformer often produces blurry images with reduced contrast that are unsuitable for diagnostic purposes. Existing compressed sensing approaches often require either hardware changes or computationally expensive algorithms, yet their quality improvements are limited. To address this problem, we propose a novel deep learning approach that directly interpolates the missing RF data by exploiting the redundancy in the Rx-Xmit plane. Our extensive experimental results using sub-sampled RF data from a multi-line acquisition B-mode system confirm that the proposed method can effectively reduce the data rate without sacrificing image quality. Comment: The title has been changed. This version will appear in IEEE Trans. on Medical Imaging.
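The core idea, interpolating missing RF lines from the measured ones, can be sketched as a masked-input CNN. The mask-as-extra-channel trick and the layer sizes below are illustrative choices on our part, not the paper's design.

```python
import torch
import torch.nn as nn

def subsample_rx(rf, keep_every=4):
    """Zero out receive channels to mimic Rx sub-sampling; return data and mask.
    rf: (batch, 1, n_rx, n_samples)."""
    mask = torch.zeros_like(rf)
    mask[:, :, ::keep_every, :] = 1.0
    return rf * mask, mask

class RFInterpolator(nn.Module):
    """Illustrative CNN that inpaints missing RF lines from measured ones."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )
    def forward(self, rf_sub, mask):
        # Feed the mask as a second channel so the net knows which lines exist.
        out = self.net(torch.cat([rf_sub, mask], dim=1))
        # Keep measured samples, predict only the missing ones.
        return mask * rf_sub + (1 - mask) * out
```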

    Deep Learning-based Universal Beamformer for Ultrasound Imaging

In ultrasound (US) imaging, individual channel RF measurements are back-propagated and accumulated to form an image after applying specific delays. While this time reversal is usually implemented using a hardware- or software-based delay-and-sum (DAS) beamformer, the performance of DAS degrades rapidly when data acquisition is not ideal. Here, for the first time, we demonstrate that a single data-driven adaptive beamformer designed as a deep neural network can robustly generate high-quality images across various detector channel configurations and subsampling rates. The proposed deep beamformer is evaluated for two distinct acquisition schemes: focused ultrasound imaging and plane-wave imaging. Experimental results show that the proposed deep beamformer exhibits significant performance gains for both schemes in terms of contrast-to-noise ratio and structural similarity. Comment: Accepted for MICCAI 2019. arXiv admin note: substantial text overlap with arXiv:1901.0170
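One way to picture a learned, channel-configuration-agnostic beamformer: replace the fixed DAS summation over time-aligned channels with a learned per-pixel channel combination. The 1x1-convolution design below is a simplified stand-in for the paper's network, with all sizes assumed.

```python
import torch
import torch.nn as nn

class DeepBeamformer(nn.Module):
    """Illustrative data-driven beamformer: instead of summing time-aligned
    channels with fixed weights (DAS), learn the channel combination."""
    def __init__(self, n_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            # 1x1 convs act per pixel across channels: a learned, spatially
            # shared alternative to fixed DAS apodization weights.
            nn.Conv2d(n_channels, 128, 1), nn.ReLU(),
            nn.Conv2d(128, 128, 1), nn.ReLU(),
            nn.Conv2d(128, 1, 1),
        )
    def forward(self, delayed):
        # delayed: (batch, n_channels, depth, lateral), already time-aligned.
        return self.net(delayed)

# Missing channels can be zeroed at the input, letting one network handle
# several subsampling patterns (loosely mimicking the "universal" aspect).
```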

    Deep Learning Convolutional Networks for Multiphoton Microscopy Vasculature Segmentation

Recently there has been an increasing trend to use deep learning frameworks for both 2D consumer images and 3D medical images. However, there has been little effort to apply deep frameworks to volumetric vascular segmentation. We wanted to address this by providing a freely available dataset of 12 annotated two-photon vasculature microscopy stacks. We demonstrated the use of a deep learning framework consisting of both 2D and 3D convolutional filters (ConvNet). Our hybrid 2D-3D architecture produced promising segmentation results. We derived the architectures from Lee et al., who used the ZNN framework initially designed for electron microscopy image segmentation. We hope that by sharing our volumetric vasculature datasets, we will inspire other researchers to experiment with vasculature data and improve on the network architectures used. Comment: 23 pages, 10 figures
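A hybrid 2D-3D stack can be sketched with 3D convolutions whose kernels are flat in the slice dimension for the 2D stage. The sketch below is loosely in the spirit of the described architecture; the specific kernels and widths are assumptions.

```python
import torch
import torch.nn as nn

class Hybrid2D3D(nn.Module):
    """Illustrative hybrid: cheap slice-wise 2D convs, then 3D convs to fuse
    context across the stack. Layer sizes here are arbitrary assumptions."""
    def __init__(self):
        super().__init__()
        # Kernel (1, 3, 3): convolves within each slice only ("2D" stage).
        self.conv2d_like = nn.Conv3d(1, 16, (1, 3, 3), padding=(0, 1, 1))
        # Kernel (3, 3, 3): mixes information across neighbouring slices.
        self.conv3d = nn.Conv3d(16, 16, 3, padding=1)
        self.head = nn.Conv3d(16, 1, 1)
        self.act = nn.ReLU()
    def forward(self, vol):  # (batch, 1, depth, height, width)
        x = self.act(self.conv2d_like(vol))
        x = self.act(self.conv3d(x))
        return torch.sigmoid(self.head(x))  # per-voxel vessel probability

print(Hybrid2D3D()(torch.randn(1, 1, 8, 64, 64)).shape)  # (1, 1, 8, 64, 64)
```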

    BIRADS Features-Oriented Semi-supervised Deep Learning for Breast Ultrasound Computer-Aided Diagnosis

Breast ultrasound (US) is an effective imaging modality for breast cancer detection and diagnosis. US computer-aided diagnosis (CAD) systems have been developed for decades and have employed either conventional hand-crafted features or modern automatically learned deep features, the former relying on clinical experience and the latter demanding large datasets. In this paper, we have developed a novel BIRADS-SDL network that integrates clinically approved breast lesion characteristics (BIRADS features) into semi-supervised deep learning (SDL) to achieve accurate diagnoses with a small training dataset. Breast US images are converted to BIRADS-oriented feature maps (BFMs) using a distance transformation coupled with a Gaussian filter. The converted BFMs are then used as the input of an SDL network, which performs unsupervised stacked convolutional auto-encoder (SCAE) image reconstruction guided by lesion classification. We trained the BIRADS-SDL network with an alternating learning strategy, balancing reconstruction error against classification label prediction error. We compared the performance of the BIRADS-SDL network with conventional SCAE and SDL methods that use the original images as inputs, as well as with an SCAE that uses BFMs as inputs. Experimental results on two breast US datasets show that BIRADS-SDL ranked best among the four networks, with classification accuracies of around 92.00% and 83.90% on the two datasets. These findings indicate that BIRADS-SDL could be promising for effective breast US lesion CAD with small datasets.
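A plausible reading of the BFM construction, sketched with SciPy: a signed distance transform of the lesion region smoothed by a Gaussian. The exact recipe (signed distances, the sigma value) is our assumption; only "distance transformation coupled with a Gaussian filter" comes from the abstract.

```python
import numpy as np
from scipy import ndimage

def birads_feature_map(lesion_mask, sigma=4.0):
    """Illustrative BFM: a signed distance transform of the lesion region,
    smoothed by a Gaussian (the precise recipe is assumed, not the authors')."""
    inside = ndimage.distance_transform_edt(lesion_mask)       # distance to boundary, inside
    outside = ndimage.distance_transform_edt(1 - lesion_mask)  # distance to lesion, outside
    signed = inside - outside  # positive inside the lesion, negative outside
    return ndimage.gaussian_filter(signed, sigma=sigma)

mask = np.zeros((64, 64)); mask[20:40, 25:45] = 1
bfm = birads_feature_map(mask)

# The semi-supervised objective then balances the two error terms, e.g.:
#   loss = recon_weight * reconstruction_error + cls_weight * classification_error
```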

    Autoencoder-Based Articulatory-to-Acoustic Mapping for Ultrasound Silent Speech Interfaces

When using ultrasound video as input, Deep Neural Network-based Silent Speech Interfaces usually rely on the whole image to estimate the spectral parameters required for the speech synthesis step. Although this approach is quite straightforward and permits the synthesis of understandable speech, it has several disadvantages. Besides its inability to capture the relations between close regions (i.e. pixels) of the image, this pixel-by-pixel representation is also quite uneconomical: a significant part of the image is irrelevant to the spectral parameter estimation task, the information stored by neighbouring pixels is redundant, and the neural network becomes quite large due to the large number of input features. To resolve these issues, in this study we train an autoencoder neural network on the ultrasound images; the spectral speech parameters are then estimated by a second DNN, using the activations of the autoencoder's bottleneck layer as features. In our experiments, the proposed method proved more efficient than the standard approach: the measured normalized mean squared error scores were lower, while the correlation values were higher in each case. Based on the results of a listening test, the synthesized utterances also sounded more natural to native speakers. A further advantage of our approach is that, thanks to the relatively small size of the bottleneck layer, we can utilize several consecutive ultrasound images during estimation without a significant increase in network size, while significantly increasing the accuracy of parameter estimation. Comment: 8 pages, 6 figures. Accepted to IJCNN 2019.
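The two-stage feature pipeline can be sketched as follows: an autoencoder trained on ultrasound frames, whose bottleneck activations become compact inputs for a second regression DNN. Layer widths, the bottleneck size, and the 5-frame context are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UltrasoundAE(nn.Module):
    """Illustrative autoencoder; the bottleneck activations, not the raw
    pixels, become the features for a second spectral-parameter DNN."""
    def __init__(self, n_pixels=64 * 128, n_bottleneck=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_pixels, 512), nn.ReLU(),
                                 nn.Linear(512, n_bottleneck))
        self.dec = nn.Sequential(nn.Linear(n_bottleneck, 512), nn.ReLU(),
                                 nn.Linear(512, n_pixels))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z  # reconstruction for AE training, z as features

ae = UltrasoundAE()
frames = torch.randn(5, 64 * 128)   # 5 consecutive ultrasound frames
_, z = ae(frames)
# Stacking several frames' bottlenecks stays small (5 x 128 here), which is
# the advantage the abstract highlights over feeding 5 raw images to the DNN.
context = z.flatten()               # input to the second (regression) DNN
print(context.shape)                # torch.Size([640])
```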

    Accelerating MR Imaging via Deep Chambolle-Pock Network

Compressed sensing (CS) has been introduced to accelerate data acquisition in MR imaging. However, CS-MRI methods suffer from detail loss at large acceleration factors and from complicated parameter selection. To address these limitations, we propose a model-driven MR reconstruction that trains a deep network, named CP-net, derived from the Chambolle-Pock algorithm, to reconstruct in vivo MR images of human brains from highly undersampled complex k-space data acquired on different types of MR scanners. The proposed deep network can learn the proximal operators and parameters of the Chambolle-Pock algorithm. All experiments show that the proposed CP-net achieves more accurate MR reconstructions, outperforming state-of-the-art methods across various quantitative metrics. Comment: 4 pages, 5 figures, 1 table. Accepted at the 2019 IEEE 41st Engineering in Medicine and Biology Conference (EMBC 2019).
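An unrolled Chambolle-Pock network replaces the primal and dual proximal operators at each stage with small CNNs and learns the step sizes. The sketch below uses an identity forward operator and real-valued images for brevity; in CS-MRI the operator would be the undersampled Fourier transform acting on complex k-space, and the prox architectures here are our assumptions.

```python
import torch
import torch.nn as nn

def small_cnn():
    return nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 1, 3, padding=1))

class CPNetSketch(nn.Module):
    """Illustrative unrolled Chambolle-Pock: each stage replaces the two
    proximal operators with small CNNs and learns its own step sizes."""
    def __init__(self, n_iter=5):
        super().__init__()
        self.prox_dual = nn.ModuleList(small_cnn() for _ in range(n_iter))
        self.prox_primal = nn.ModuleList(small_cnn() for _ in range(n_iter))
        self.sigma = nn.Parameter(torch.full((n_iter,), 0.5))
        self.tau = nn.Parameter(torch.full((n_iter,), 0.5))
    def forward(self, x0):
        x, x_bar, y = x0, x0.clone(), torch.zeros_like(x0)
        for pd, pp, s, t in zip(self.prox_dual, self.prox_primal,
                                self.sigma, self.tau):
            y = pd(y + s * x_bar)     # learned dual proximal step
            x_new = pp(x - t * y)     # learned primal proximal step
            x_bar = 2 * x_new - x     # over-relaxation with theta = 1
            x = x_new
        return x

print(CPNetSketch()(torch.randn(1, 1, 64, 64)).shape)  # (1, 1, 64, 64)
```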

    Learning beamforming in ultrasound imaging

Medical ultrasound (US) is a widespread imaging modality owing its popularity to cost efficiency, portability, speed, and lack of harmful ionizing radiation. In this paper, we demonstrate that replacing the traditional ultrasound processing pipeline with a data-driven, learnable counterpart leads to significant improvements in image quality. Moreover, we demonstrate that even greater improvement can be achieved by learning the transmitted beam patterns jointly with the image reconstruction pipeline. We evaluate our method on an in vivo first-harmonic cardiac ultrasound dataset acquired from volunteers and demonstrate the impact of the learned pipeline and transmit beam patterns on image quality compared with standard transmit and receive beamformers used in high-frame-rate US imaging. We believe the presented methodology provides a fundamentally different perspective on the classical problem of ultrasound beam pattern design.
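Joint transmit/receive learning can be sketched by making the transmit parameters themselves learnable and back-propagating through a differentiable acquisition model. Everything below (the simulate stand-in, parameter choices, the reconstruction net) is an illustrative assumption, not the authors' pipeline.

```python
import torch
import torch.nn as nn

class JointTxRx(nn.Module):
    """Illustrative joint design: per-element transmit apodization and delays
    as learnable parameters, optimized end-to-end with a reconstruction net."""
    def __init__(self, n_elements=64):
        super().__init__()
        self.tx_apod = nn.Parameter(torch.ones(n_elements))    # learned weights
        self.tx_delay = nn.Parameter(torch.zeros(n_elements))  # learned delays
        self.recon = nn.Sequential(
            nn.Conv2d(n_elements, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )
    def forward(self, simulate):
        # `simulate` is any differentiable forward model mapping the transmit
        # parameters to received channel data (batch, n_elements, depth, lateral).
        # Gradients from the image loss then flow into tx_apod and tx_delay.
        channels = simulate(self.tx_apod, self.tx_delay)
        return self.recon(channels)
```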

    Improving learnability of neural networks: adding supplementary axes to disentangle data representation

Over-parameterized deep neural networks have proven able to learn an arbitrary dataset with 100% training accuracy. Because of the risk of overfitting and the computational cost, we cannot simply increase the number of network nodes to achieve better training results on medical images. Previous deep learning research shows that the training ability of a neural network improves dramatically (for the same number of training epochs) when a few nodes carrying supplementary information are added to the network. These few informative nodes allow the network to learn features that are otherwise difficult to learn, by inducing a disentangled data representation. This paper analyzes how concatenating additional information as supplementary axes affects the training of neural networks. The analysis was conducted for a simple multilayer perceptron (MLP) classification model with rectified linear units (ReLU) on two-dimensional training data. We compared networks with and without concatenated supplementary information to support our analysis. The model with concatenation showed more robust and accurate training results than the model without it. We also confirmed that our findings hold for deeper convolutional neural networks (CNNs) using ultrasound images and for a conditional generative adversarial network (cGAN) using the MNIST dataset.
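The concatenation trick reduces to appending a few informative coordinates to the network input. Below is a toy version for the 2D MLP setting described above; the choice of supplementary feature is ours, purely for illustration.

```python
import torch
import torch.nn as nn

class MLPWithSideInfo(nn.Module):
    """Illustrative MLP on 2D points with k extra 'supplementary axes'
    concatenated to the input (the described trick, simplified)."""
    def __init__(self, n_side=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + n_side, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 2),  # two-class logits
        )
    def forward(self, xy, side):
        # side: a few informative features (e.g. a radius or coarse cluster id)
        # appended as extra coordinates rather than learned from scratch.
        return self.net(torch.cat([xy, side], dim=1))

xy = torch.randn(8, 2)
side = xy.norm(dim=1, keepdim=True).repeat(1, 2)  # toy supplementary feature
print(MLPWithSideInfo()(xy, side).shape)          # torch.Size([8, 2])
```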