6 research outputs found

    Analytical Estimation of Beamforming Speed-of-Sound Using Transmission Geometry

    Most ultrasound imaging techniques necessitate the fundamental step of converting temporal signals received from transducer elements into a spatial echogenicity map. This beamforming (BF) step requires knowledge of the speed-of-sound (SoS) value in the imaged medium. An incorrect assumption of BF SoS leads to aberration artifacts, not only deteriorating the quality and resolution of conventional brightness-mode (B-mode) images, hence limiting their clinical usability, but also impairing other ultrasound modalities such as elastography and spatial SoS reconstructions, which rely on faithfully beamformed images as their input. In this work, we propose an analytical method for estimating BF SoS. We show that pixel-wise relative shifts between frames beamformed with an assumed SoS are a function of the geometric disparities of the transmission paths and the error in that SoS assumption. Using this relation, we devise an analytical model, the closed-form solution of which yields the difference between the assumed and the true SoS in the medium. Based on this, we correct the BF SoS; this correction can also be applied iteratively. Both in simulations and experiments, lateral B-mode resolution is shown to improve by ≈25% compared to that with an initial SoS assumption error of 3.3% (50 m/s), while localization artifacts from beamforming are also corrected. After 5 iterations, our method achieves BF SoS errors of under 0.6 m/s in simulations and under 1 m/s in experimental phantom data. Residual time-delay errors in beamforming 32 numerical phantoms are shown to reduce down to 0.07 μs, with average improvements of up to 21-fold compared to the initial inaccurate assumptions. We additionally show the utility of the proposed method in imaging local SoS maps, where our correction method reduces reconstruction root-mean-square errors substantially, down to their lower bound obtained with the actual BF SoS.
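
    A minimal sketch of the iterative correction loop described above (not the paper's closed-form derivation): it assumes user-supplied beamform(channel_data, sos) and pixel_shifts(frame_a, frame_b) callables and a hypothetical linear sensitivity mapping the mean residual shift to the SoS error, standing in for the analytical geometry-based solution.

```python
import numpy as np

def correct_bf_sos(channel_data_pair, beamform, pixel_shifts,
                   sos_init=1540.0, sensitivity=1.0, n_iters=5):
    """Illustrative iterative refinement of the beamforming speed-of-sound.

    channel_data_pair : channel data of two transmits with disparate geometry
    beamform          : callable (channel_data, sos) -> beamformed frame
    pixel_shifts      : callable (frame_a, frame_b) -> pixel-wise shift map
    sensitivity       : assumed linear gain from mean residual shift to SoS
                        error; the paper derives this analytically from the
                        transmission geometry.
    """
    sos = sos_init
    for _ in range(n_iters):
        frame_a = beamform(channel_data_pair[0], sos)
        frame_b = beamform(channel_data_pair[1], sos)
        # With the correct SoS, shifts between the two frames should vanish.
        shifts = pixel_shifts(frame_a, frame_b)
        # Hypothetical linear stand-in for the paper's closed-form solution.
        sos -= sensitivity * float(np.mean(shifts))
    return sos
```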

    Learning the Imaging Model of Speed-of-Sound Reconstruction via a Convolutional Formulation

    Speed-of-sound (SoS) is an emerging ultrasound contrast modality, where pulse-echo techniques using conventional transducers offer multiple benefits. For estimating tissue SoS distributions, spatial-domain reconstruction from relative speckle shifts between different beamforming sequences is a promising approach. It operates on a forward model that relates the sought local SoS values to the observed speckle shifts, for which the associated image-reconstruction inverse problem is solved. The reconstruction accuracy thus depends highly on the hand-crafted forward imaging model. In this work, we propose to learn the SoS imaging model from data. We introduce a convolutional formulation of the pulse-echo SoS imaging problem such that the entire field of view requires a single unified kernel, the learning of which is then tractable and robust. We present least-squares estimation of such a convolutional kernel, which can further be constrained and regularized for numerical stability. In experiments, we show that a forward model learned from k-Wave simulations improves the median contrast of SoS reconstructions by 63%, compared to a conventional hand-crafted line-based wave-path model. This simulation-learned model generalizes successfully to acquired phantom data, nearly doubling the SoS contrast compared to the conventional hand-crafted alternative. We demonstrate equipment-specific and small-data-regime feasibility by learning a forward model from a single phantom image, where our learned model quadruples the SoS contrast compared to the conventional hand-crafted model. On in-vivo data, the simulation- and phantom-learned models exhibit 7-fold and 10-fold contrast improvements, respectively, over the conventional model.
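
    The least-squares kernel estimation can be pictured with the following sketch, assuming circular convolution and Fourier-domain Tikhonov regularization; the variable names, discretization, and random usage data are illustrative, not the paper's implementation.

```python
import numpy as np

def learn_conv_kernel(sos_maps, shift_maps, reg=1e-3):
    """Least-squares estimate of one unified kernel k with shift ≈ k * sos.

    Solved in the Fourier domain under a circular-convolution assumption,
    with Tikhonov regularization `reg` for numerical stability.
    """
    shape = sos_maps[0].shape
    num = np.zeros(shape, dtype=complex)   # accumulates X^H y
    den = np.zeros(shape)                  # accumulates |X|^2
    for x, y in zip(sos_maps, shift_maps):
        X, Y = np.fft.fft2(x), np.fft.fft2(y)
        num += np.conj(X) * Y
        den += np.abs(X) ** 2
    K = num / (den + reg)                  # regularized normal-equation solution
    return np.real(np.fft.ifft2(K))        # spatial-domain kernel

# Shape-only usage example with random data (not physically meaningful):
rng = np.random.default_rng(0)
xs = [rng.standard_normal((64, 64)) for _ in range(4)]
ys = [rng.standard_normal((64, 64)) for _ in range(4)]
kernel = learn_conv_kernel(xs, ys)
```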

    High-resolution Multi-spectral Imaging with Diffractive Lenses and Learned Reconstruction

    Spectral imaging is a fundamental diagnostic technique with widespread application. Conventional spectral imaging approaches have intrinsic limitations on spatial and spectral resolution due to the physical components they rely on. To overcome these physical limitations, in this paper, we develop a novel multi-spectral imaging modality that enables higher spatial and spectral resolutions. In the developed computational imaging modality, we exploit a diffractive lens, such as a photon sieve, for both dispersing and focusing the optical field, and achieve measurement diversity by changing the focusing behavior of this lens. Because the focal length of a diffractive lens is wavelength-dependent, each measurement is a superposition of differently blurred spectral components. To reconstruct the individual spectral images from these superimposed and blurred measurements, fast model-based reconstruction algorithms are developed with deep and analytical priors using alternating minimization and unrolling. Finally, the effectiveness and performance of the developed technique are illustrated for an application in astrophysical imaging under various observation scenarios in the extreme ultraviolet (EUV) regime. The results demonstrate that the technique provides not only diffraction-limited high spatial resolution, as enabled by diffractive lenses, but also the capability of resolving close-by spectral sources that is not otherwise possible with existing techniques. This work enables high-resolution multi-spectral imaging with low-cost designs for a variety of applications and spectral regimes. Comment: accepted for publication in IEEE Transactions on Computational Imaging; see DOI below.
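
    Two ingredients of the forward model described above can be sketched as follows: the wavelength dependence of a diffractive lens' focal length, and each measurement being a superposition of differently blurred spectral bands. The Gaussian PSF below is only a stand-in for the true defocused photon-sieve PSF, and all parameter names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def focal_length(wavelength, f0, lambda0):
    """Focal length of a diffractive lens scales inversely with wavelength."""
    return f0 * lambda0 / wavelength

def forward_measurements(spectral_cube, blur_sigmas):
    """Each measurement is a superposition of differently blurred spectral bands.

    spectral_cube : sequence of 2-D spectral images x_k
    blur_sigmas   : blur_sigmas[m][k] gives the blur width of band k in
                    measurement m (Gaussian blur assumed here for simplicity).
    """
    measurements = []
    for sigmas in blur_sigmas:
        y = sum(gaussian_filter(x, s) for x, s in zip(spectral_cube, sigmas))
        measurements.append(y)
    return measurements
```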

    Deep Learning-Based Reconstruction Methods for Computational Imaging

    Computational imaging is the process of forming images from indirect measurements using computation. In this thesis, we develop deep learning-based unrolled reconstruction methods for various computational imaging modalities. Firstly, we develop two deep learning-based reconstruction methods for diffractive multi-spectral imaging. The first approach is based on plug-and-play regularization with deep denoisers, whereas the second is an end-to-end learned reconstruction based on unrolling. Secondly, we consider general multidimensional imaging systems whose measurements involve convolution and superposition. We formulate a realistic truncated image formation model and develop a deep learning-based unrolled reconstruction method to solve the associated inverse problem. The performance is illustrated for single-frame deconvolution and diffractive multi-spectral imaging applications. Thirdly, we extend the deep learning-based unrolled reconstruction methods to a compressive spectral imaging modality by considering both the conventional and truncated image formation models. The performance of all the developed methods is comparatively evaluated for sample applications using different convolutional neural network (CNN) architectures, including U-Net. Results illustrate the superior performance of the developed unrolled learned reconstruction methods over purely analytical methods for all simulation settings. We also demonstrate the generalization capability of the developed unrolled reconstruction methods through extensive simulations and perform model mismatch analyses to illustrate the improvement gained with the truncated image formation model. (M.S. - Master of Science)
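
    A compact sketch of an unrolled reconstruction of the kind described above, with learned step sizes and a small CNN standing in for a U-Net-style regularizer; the forward operator A and its adjoint At are passed in as callables, and all names here are illustrative rather than the thesis' exact architecture.

```python
import torch
import torch.nn as nn

class UnrolledReconstructor(nn.Module):
    """End-to-end learned unrolled reconstruction (illustrative sketch).

    Each stage applies a gradient step on the data-fidelity term ||Ax - y||^2
    followed by a small CNN acting as a learned refinement operator.
    """
    def __init__(self, num_stages=5, channels=1):
        super().__init__()
        self.step_sizes = nn.Parameter(torch.full((num_stages,), 0.1))
        self.cnns = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, channels, 3, padding=1),
            )
            for _ in range(num_stages)
        ])

    def forward(self, y, A, At):
        x = At(y)                              # adjoint as the initial estimate
        for alpha, cnn in zip(self.step_sizes, self.cnns):
            x = x - alpha * At(A(x) - y)       # data-consistency gradient step
            x = x + cnn(x)                     # learned residual refinement
        return x
```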

    Image Formation Based on a Deep CNN Prior for Multi-Spectral Imaging

    Spectral imaging is a widely used diagnostic technique in various fields such as physics, chemistry, biology, medicine, astronomy, and remote sensing. In this work, we focus on a multi-spectral imaging technique with a diffractive lens, which relies on computational imaging, and we develop a novel image reconstruction method that exploits convolutional neural networks. To reconstruct the spectral images from the measurements of the imaging system, the inverse problem is first formulated as a regularized optimization problem. This optimization problem is then divided into subproblems using the alternating direction method of multipliers, and the subproblem corresponding to denoising is solved with a learning-based convolutional denoising neural network instead of an analytical method. The obtained results illustrate that the proposed method can achieve promising reconstruction performance.
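
    The splitting described above can be pictured as plug-and-play ADMM, where the denoising subproblem is handed to a learned denoiser. In this sketch the data subproblem is approximated by a few gradient steps for brevity (an assumption; a closed-form or conjugate-gradient solve is also common), and all names are illustrative.

```python
import numpy as np

def admm_pnp(y, A, At, denoiser, rho=1.0, n_iters=30):
    """Plug-and-play ADMM sketch with a learned denoiser as one subproblem.

    A, At    : forward operator and its adjoint (callables on numpy arrays)
    denoiser : callable approximating a denoising step, e.g. a trained CNN
    """
    x = At(y)
    z = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iters):
        # x-update: approximately minimize ||Ax - y||^2 + rho*||x - (z - u)||^2
        for _ in range(5):
            grad = At(A(x) - y) + rho * (x - (z - u))
            x = x - 0.1 * grad                 # fixed step size, illustrative
        # z-update: denoising subproblem handled by the learned denoiser
        z = denoiser(x + u)
        # dual variable update
        u = u + x - z
    return x
```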

    Ballistocardiographic Coupling of Triboelectric Charges into Capacitive ECG

    © 2019 IEEE. Capacitive ECG (cECG), a technique more than 50 years old, can replace the gold-standard ECG only in certain applications where unobtrusiveness and conformity are sought at the expense of reduced signal quality. Triboelectric surface charges, motion artifacts, and the resulting time-variant coupling capacitances are among the reasons for the signal deformations in cECG. In this paper, the mechanical vibrations of the human body are proposed as the source of the time-variant coupling capacitances causing motion artifacts, which calls into question the applicability of adaptive filtering approaches to the problem of time-variant coupling capacitance. Ballistocardiogram (BCG) measurements on cECG electrodes are recorded and analyzed to investigate how these mechanical vibrations reflect on a differential biopotential measurement. Furthermore, using signals measured in a human experiment, numerical and test-bench simulations were conducted to replicate how triboelectric surface charges might deform cECG signals.
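
    A toy parallel-plate model (an assumption for illustration, not the paper's measurement setup) conveys the proposed mechanism: a fixed triboelectric charge on a coupling capacitance whose gap is modulated by body vibrations produces a voltage deformation in the cECG. All numeric values below are arbitrary placeholders.

```python
import numpy as np

eps0 = 8.854e-12             # vacuum permittivity, F/m
area = 1e-3                  # electrode area, m^2 (assumed)
q_tribo = 1e-9               # triboelectric surface charge, C (assumed)

t = np.linspace(0, 5, 5000)                        # 5 s of samples
d = 0.5e-3 + 20e-6 * np.sin(2 * np.pi * 1.2 * t)   # gap modulated by BCG-like motion
c = eps0 * area / d                                # time-variant coupling capacitance
v_artifact = q_tribo / c                           # resulting voltage deformation
print(v_artifact.max() - v_artifact.min(), "V peak-to-peak artifact")
```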