
    Exploiting flow dynamics for super-resolution in contrast-enhanced ultrasound

    Ultrasound localization microscopy offers new radiation-free diagnostic tools for vascular imaging deep within the tissue. Sequential localization of echoes returned from inert microbubbles flowing at low concentration within the bloodstream reveals the vasculature with capillary resolution. Despite its high spatial resolution, low microbubble concentrations dictate the acquisition of tens of thousands of images, over the course of several to tens of seconds, to produce a single super-resolved image, since each echo must be well separated from adjacent microbubbles. Such long acquisition times and stringent constraints on microbubble concentration are undesirable in many clinical scenarios. To address these restrictions, sparsity-based approaches have recently been developed. These methods reduce the total acquisition time dramatically, while maintaining good spatial resolution in settings with considerable microbubble overlap. Yet, none of the reported methods exploit the fact that microbubbles actually flow within the bloodstream. Here, we further improve sparsity-based super-resolution ultrasound imaging by exploiting the inherent flow of microbubbles and utilizing their motion kinematics, while also providing quantitative measurements of microbubble velocities. Our method relies on simultaneous tracking and super-localization of individual microbubbles in a frame-by-frame manner, and as such may be suitable for real-time implementation. We demonstrate the effectiveness of the proposed approach on both simulations and in-vivo contrast-enhanced human prostate scans, acquired with a clinically approved scanner.
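The frame-by-frame tracking-plus-localization idea can be sketched in a few lines. This is a minimal illustration only, not the paper's sparsity-based recovery: the functions `localize` and `track`, the intensity threshold, and the displacement gate are hypothetical choices made for the sketch.

```python
import numpy as np

def localize(frame, thresh=0.5):
    """Crude peak localization: (row, col) of local maxima above thresh."""
    peaks = []
    for r in range(1, frame.shape[0] - 1):
        for c in range(1, frame.shape[1] - 1):
            v = frame[r, c]
            # A pixel is a peak if it equals the max of its 3x3 neighborhood
            if v > thresh and v == frame[r - 1:r + 2, c - 1:c + 2].max():
                peaks.append((r, c))
    return np.array(peaks, dtype=float)

def track(prev_pts, cur_pts, max_disp=3.0):
    """Greedy nearest-neighbor association between consecutive frames.

    Returns per-bubble displacement vectors (a velocity estimate once
    divided by the frame interval)."""
    if len(cur_pts) == 0:
        return np.empty((0, 2))
    vels = []
    for p in prev_pts:
        d = np.linalg.norm(cur_pts - p, axis=1)
        j = d.argmin()
        if d[j] <= max_disp:          # gate out implausible jumps
            vels.append(cur_pts[j] - p)
    return np.array(vels)
```

A real implementation would replace the greedy association with a motion-model-aware tracker, but the localize-then-associate structure is the same.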

    Learning Sampling and Model-Based Signal Recovery for Compressed Sensing MRI

    Compressed sensing (CS) MRI relies on adequate undersampling of the k-space to accelerate the acquisition without compromising image quality. Consequently, the design of optimal sampling patterns for these k-space coefficients has received significant attention, with many CS MRI methods exploiting variable-density probability distributions. Realizing that an optimal sampling pattern may depend on the downstream task (e.g. image reconstruction, segmentation, or classification), here we propose joint learning of both task-adaptive k-space sampling and a subsequent model-based proximal-gradient recovery network. The former is enabled through a probabilistic generative model that leverages the Gumbel-softmax relaxation to sample across trainable beliefs while maintaining differentiability. The proposed combination of a highly flexible sampling model and a model-based (sampling-adaptive) image reconstruction network facilitates exploration and efficient training, yielding improved MR image quality compared to other sampling baselines.
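As a rough illustration of the Gumbel-softmax relaxation underlying trainable sampling beliefs, here is a plain-NumPy version of a single relaxed categorical draw. This omits the autograd machinery the actual method requires; the function name and temperature value are illustrative.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Relaxed sample from the categorical distribution defined by `logits`.

    Adding Gumbel noise and applying a temperature-controlled softmax yields
    a probability vector that approaches a one-hot sample as tau -> 0 while
    remaining differentiable in the logits (the reparameterization trick).
    """
    rng = np.random.default_rng(rng)
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))          # standard Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())          # numerically stable softmax
    return y / y.sum()
```

In the paper's setting, such relaxed samples would select k-space coefficients during training; here the function merely demonstrates the mechanism.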

    HKF: Hierarchical Kalman Filtering with Online Learned Evolution Priors for Adaptive ECG Denoising

    Electrocardiography (ECG) signals play a pivotal role in many healthcare applications, especially in at-home monitoring of vital signs. Wearable technologies, which these applications often depend upon, frequently produce low-quality ECG signals. While several methods exist for ECG denoising to enhance signal quality and aid clinical interpretation, they often underperform with ECG data from wearable technology due to limited noise tolerance or inadequate flexibility in capturing ECG dynamics. This paper introduces HKF, a hierarchical and adaptive Kalman filter, which uses a proprietary state space model to effectively capture both intra- and inter-heartbeat dynamics for ECG signal denoising. HKF learns a patient-specific structured prior for the ECG signal's intra-heartbeat dynamics in an online manner, resulting in a filter that adapts to the specific ECG signal characteristics of each patient. In an empirical study, HKF demonstrated superior denoising performance (reduced mean-squared error) while preserving the unique properties of the waveform. In a comparative analysis, HKF outperformed previously proposed methods for ECG denoising, such as the model-based Kalman filter and data-driven autoencoders. This makes it a suitable candidate for applications in extramural healthcare settings.
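For readers unfamiliar with the underlying machinery, a minimal (non-hierarchical) scalar Kalman filter with a random-walk state model is sketched below. HKF's learned intra-/inter-heartbeat priors replace this generic model, so treat the noise parameters `q` and `r` as purely illustrative.

```python
import numpy as np

def kalman_denoise(z, q=1e-3, r=0.1):
    """1-D Kalman filter assuming a random-walk state: x_k = x_{k-1} + w_k.

    z : noisy measurements; q : process-noise variance; r : measurement-noise
    variance. Returns the filtered (denoised) sequence."""
    x, p = z[0], 1.0                 # state estimate and its variance
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p += q                       # predict: variance grows by q
        gain = p / (p + r)           # Kalman gain
        x += gain * (zk - x)         # update with the innovation
        p *= (1.0 - gain)            # posterior variance
        out[k] = x
    return out
```

The trade-off between `q` and `r` sets how aggressively the filter smooths; HKF effectively makes this prior structured and patient-specific.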

    Ultrasound Signal Processing: From Models to Deep Learning

    Medical ultrasound imaging relies heavily on high-quality signal processing algorithms to provide reliable and interpretable image reconstructions. Hand-crafted reconstruction methods, often based on approximations of the underlying measurement model, are useful in practice, but notoriously fall behind in terms of image quality. More sophisticated solutions, based on statistical modelling, careful parameter tuning, or increased model complexity, can be sensitive to different environments. Recently, deep learning based methods have gained popularity, which are optimized in a data-driven fashion. These model-agnostic methods often rely on generic model structures, and require vast training data to converge to a robust solution. A relatively new paradigm combines the power of the two: leveraging data-driven deep learning, as well as exploiting domain knowledge. These model-based solutions yield high robustness, and require fewer trainable parameters and less training data than conventional neural networks. In this work we provide an overview of these methods from the recent literature, and discuss a wide variety of ultrasound applications. We aim to inspire the reader to further research in this area, and to address the opportunities within the field of ultrasound signal processing. We conclude with a future perspective on these model-based deep learning techniques for medical ultrasound applications.

    SubspaceNet: Deep Learning-Aided Subspace Methods for DoA Estimation

    Direction of arrival (DoA) estimation is a fundamental task in array processing. A popular family of DoA estimation algorithms are subspace methods, which operate by dividing the measurements into distinct signal and noise subspaces. Subspace methods, such as Multiple Signal Classification (MUSIC) and Root-MUSIC, rely on several restrictive assumptions, including narrowband non-coherent sources and fully calibrated arrays, and their performance is considerably degraded when these do not hold. In this work we propose SubspaceNet: a data-driven DoA estimator which learns how to divide the observations into distinguishable subspaces. This is achieved by a dedicated deep neural network that learns the empirical autocorrelation of the input. The network is trained as part of the Root-MUSIC method, leveraging the inherent differentiability of this specific DoA estimator, while removing the need to provide a ground-truth decomposable autocorrelation matrix. Once trained, the resulting SubspaceNet serves as a universal surrogate covariance estimator that can be applied in combination with any subspace-based DoA estimation method, allowing its successful application in challenging setups. SubspaceNet is shown to enable various DoA estimation algorithms to cope with coherent sources, wideband signals, low SNR, array mismatches, and limited snapshots, while preserving the interpretability and the suitability of classic subspace methods.
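The classical Root-MUSIC pipeline that SubspaceNet differentiates through can be sketched as follows. This is the textbook version with a sample covariance for a half-wavelength uniform linear array; in SubspaceNet the covariance estimate would be produced by the learned network instead.

```python
import numpy as np

def root_music(X, k):
    """Root-MUSIC DoA estimation for a half-wavelength ULA.

    X : (sensors, snapshots) complex array data; k : number of sources.
    Returns k DoA estimates in radians."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    _, vecs = np.linalg.eigh(R)                # eigenvalues ascending
    En = vecs[:, : m - k]                      # noise subspace
    C = En @ En.conj().T
    # Polynomial coefficients are the sums along the diagonals of C,
    # ordered from highest to lowest degree for np.roots.
    coeffs = np.array([np.trace(C, offset=l) for l in range(m - 1, -m, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1]           # keep roots inside unit circle
    roots = roots[np.argsort(np.abs(np.abs(roots) - 1))][:k]  # nearest circle
    return np.arcsin(np.angle(roots) / np.pi)  # z = exp(j*pi*sin(theta))
```

With a well-conditioned covariance this recovers the source angles; SubspaceNet's contribution is making the covariance estimate robust when the classical assumptions fail.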

    Learning Sub-Sampling and Signal Recovery with Applications in Ultrasound Imaging

    Limitations on bandwidth and power consumption impose strict bounds on data rates of diagnostic imaging systems. Consequently, the design of suitable (i.e. task- and data-aware) compression and reconstruction techniques has attracted considerable attention in recent years. Compressed sensing emerged as a popular framework for sparse signal reconstruction from a small set of compressed measurements. However, typical compressed sensing designs measure a (non)linearly weighted combination of all input signal elements, which poses practical challenges. These designs are also not necessarily task-optimal. In addition, real-time recovery is hampered by the iterative and time-consuming nature of sparse recovery algorithms. Recently, deep learning methods have shown promise for fast recovery from compressed measurements, but the design of adequate and practical sensing strategies remains a challenge. Here, we propose a deep learning solution termed Deep Probabilistic Sub-sampling (DPS) that learns a task-driven sub-sampling pattern while jointly training a subsequent task model. Once learned, the task-based sub-sampling patterns are fixed and straightforwardly implementable, e.g. by non-uniform analog-to-digital conversion, sparse array design, or slow-time ultrasound pulsing schemes. The effectiveness of our framework is demonstrated in-silico for sparse signal recovery from partial Fourier measurements, and in-vivo for both anatomical image and tissue-motion (Doppler) reconstruction from sub-sampled medical ultrasound imaging data.
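A hard, fixed-budget variant of such learned sub-sampling can be illustrated with Gumbel top-k selection, i.e. drawing k indices without replacement from trainable logits. Note that DPS itself uses a softmax relaxation during training to stay differentiable; this forward-sampling sketch (with an illustrative function name) only shows how a k-hot mask would be drawn from learned beliefs.

```python
import numpy as np

def gumbel_topk_mask(logits, k, rng=None):
    """Draw a k-hot sub-sampling mask from `logits`.

    Perturbing the logits with Gumbel noise and keeping the top-k entries
    samples k distinct indices, with higher-logit entries favored."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))                  # Gumbel(0, 1) perturbation
    idx = np.argsort(logits + g)[-k:]        # indices of the k largest
    mask = np.zeros_like(logits)
    mask[idx] = 1.0
    return mask
```

Once training has converged, the learned logits are frozen and the resulting fixed mask is what gets implemented in hardware.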

    Investigating and Improving Latent Density Segmentation Models for Aleatoric Uncertainty Quantification in Medical Imaging

    Data uncertainties, such as sensor noise or occlusions, can introduce irreducible ambiguities in images, which result in varying, yet plausible, semantic hypotheses. In Machine Learning, this ambiguity is commonly referred to as aleatoric uncertainty. Latent density models can be utilized to address this problem in image segmentation. The most popular approach is the Probabilistic U-Net (PU-Net), which uses latent Normal densities to optimize the conditional data log-likelihood Evidence Lower Bound. In this work, we demonstrate that the PU-Net latent space is severely inhomogeneous. As a result, the effectiveness of gradient descent is inhibited and the model becomes extremely sensitive to the localization of the latent space samples, resulting in defective predictions. To address this, we present the Sinkhorn PU-Net (SPU-Net), which uses the Sinkhorn Divergence to promote homogeneity across all latent dimensions, effectively improving gradient-descent updates and model robustness. Our results show that, when applied to public datasets of various clinical segmentation problems, the SPU-Net achieves up to 11% performance gains compared to preceding latent-variable models for probabilistic segmentation on the Hungarian-Matched metric. The results indicate that by encouraging a homogeneous latent space, one can significantly improve latent density modeling for medical image segmentation.
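The fixed-point iteration at the heart of the Sinkhorn Divergence can be sketched in a few lines. The SPU-Net applies the (debiased) divergence to latent samples during training; this standalone version just computes an entropy-regularized transport plan between two histograms, with `eps` and the iteration count chosen for illustration.

```python
import numpy as np

def sinkhorn_plan(a, b, cost, eps=0.1, iters=1000):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    a, b  : probability vectors (the two marginals)
    cost  : pairwise cost matrix of shape (len(a), len(b))
    Returns the transport plan P, whose marginals approach a and b."""
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):           # alternating marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]
```

The transport cost `(P * cost).sum()` then serves as the (biased) OT loss; the Sinkhorn Divergence debiases it by subtracting the self-transport terms.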

    Blood flow pattern estimation in the left ventricle with low-rate 2D and 3D dynamic contrast-enhanced ultrasound

    Background and Objective: Left ventricle (LV) dysfunction already occurs at early heart-failure stages, producing variations in the LV flow patterns. Cardiac diagnostics may therefore benefit from flow-pattern analysis. Several visualization tools have been proposed that require ultrafast ultrasound acquisitions. However, ultrafast ultrasound is not standard in clinical scanners, and techniques that can handle low frame rates are still lacking. As a result, the clinical translation of these techniques remains limited, especially for 3D acquisitions, where the volume rates are intrinsically low. Methods: To overcome these limitations, we propose a novel technique for the estimation of LV blood velocity and relative-pressure fields from dynamic contrast-enhanced ultrasound (DCE-US) at low frame rates. Different from other methods, ours is based on the time delays between time-intensity curves measured at neighboring pixels in the DCE-US loops. Using the Navier-Stokes equation, we regularize the obtained velocity fields and derive relative-pressure estimates. Blood flow patterns were characterized with regard to their vorticity, relative-pressure changes (dp/dt) in the LV outflow tract, and viscous energy loss, as these reflect the ejection efficiency. Results: We evaluated the proposed method on 18 patients (9 responders and 9 non-responders) who underwent cardiac resynchronization therapy (CRT). After CRT, the responder group evidenced a significant (p < 0.05) increase in vorticity and peak dp/dt, and a non-significant decrease in viscous energy loss. No significant difference was found in the non-responder group. The relative feature variation before and after CRT showed a significant difference (p < 0.05) between responders and non-responders for vorticity and peak dp/dt. Finally, the feasibility of the method is also shown with 3D DCE-US. Conclusions: Using the proposed method, adequate visualization and quantification of blood flow patterns are successfully enabled based on low-rate DCE-US of the LV, facilitating clinical adoption using standard ultrasound scanners. The clinical value of the method in the context of CRT is also shown.
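The core time-delay step, upstream of the Navier-Stokes regularization, amounts to locating the cross-correlation peak between two time-intensity curves. A minimal sketch, with the function name and sign convention chosen for illustration:

```python
import numpy as np

def estimate_delay(tic_a, tic_b, dt):
    """Estimate the time shift of tic_b relative to tic_a.

    Locates the peak of the cross-correlation of the mean-removed curves;
    a positive result means tic_b lags tic_a. dt is the frame interval."""
    a = tic_a - tic_a.mean()
    b = tic_b - tic_b.mean()
    xcorr = np.correlate(b, a, mode="full")
    lag = np.argmax(xcorr) - (len(a) - 1)   # index 0 maps to lag -(len-1)
    return lag * dt
```

Dividing the inter-pixel distance by such a delay gives a local velocity estimate; sub-sample accuracy would require interpolating around the correlation peak.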

    Ultrasonic Array Doppler Sensing for Human Movement Classification
