
    Photoacoustic fluctuation imaging: theory and application to blood flow imaging

    Photoacoustic fluctuation imaging, which exploits randomness in photoacoustic generation, provides images with enhanced resolution and visibility compared to conventional photoacoustic images. While a few experimental demonstrations of photoacoustic fluctuation imaging have been reported, it has to date not been described theoretically. In the first part of this work, we propose a theory relevant to fluctuations induced either by random illumination patterns or by random distributions of absorbing particles. The theoretical predictions are validated by Monte Carlo finite-difference time-domain simulations of photoacoustic generation in random particle media. We provide physical insight into why visibility artefacts are absent from second-order fluctuation images. In the second part, we demonstrate experimentally that harnessing the randomness induced by the flow of red blood cells produces photoacoustic fluctuation images free of visibility artefacts. As a first proof of concept, we obtain two-dimensional images of blood vessel phantoms. Photoacoustic fluctuation imaging is finally applied in vivo to obtain 3D images of the vascularization in a chicken embryo.
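    As a rough illustration of the second-order fluctuation principle described above, the sketch below computes a conventional (mean) image and a pixelwise standard-deviation (fluctuation) image from a stack of frames; the function name and the synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fluctuation_image(frames):
    """Second-order fluctuation image: pixelwise standard deviation
    across a stack of photoacoustic frames of shape (n_frames, ny, nx).

    Frame-to-frame fluctuations arise from randomness such as flowing
    red blood cells or changing speckle illumination patterns.
    """
    frames = np.asarray(frames, dtype=float)
    mean_img = frames.mean(axis=0)           # conventional (mean) image
    fluct_img = frames.std(axis=0, ddof=1)   # second-order fluctuation image
    return mean_img, fluct_img

# Example with synthetic data: 200 frames of a 128x128 region
rng = np.random.default_rng(0)
frames = rng.normal(loc=1.0, scale=0.2, size=(200, 128, 128))
mean_img, fluct_img = fluctuation_image(frames)
```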

    Phase Aberration Correction: A Deep Learning-Based Aberration to Aberration Approach

    One of the primary sources of suboptimal image quality in ultrasound imaging is phase aberration. It is caused by spatial variations in sound speed across a heterogeneous medium, which distort the transmitted waves and prevent coherent summation of echo signals. Obtaining non-aberrated ground truths in real-world scenarios can be extremely challenging, if not impossible; this hinders the performance of deep learning-based techniques because of the domain shift between simulated and experimental data. Here, for the first time, we propose a deep learning-based method that does not require ground truth to correct phase aberration and can therefore be trained directly on real data. We train a network in which both the input and the target output are randomly aberrated radio frequency (RF) data. Moreover, we demonstrate that a conventional loss function such as mean square error is inadequate for training such a network to achieve optimal performance. Instead, we propose an adaptive mixed loss function that employs both B-mode and RF data, resulting in more efficient convergence and enhanced performance. Finally, we publicly release our dataset, including 161,701 single plane-wave images (RF data), to mitigate the data scarcity problem in the development of deep learning-based techniques for phase aberration correction.
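    A minimal sketch of the idea behind a mixed RF/B-mode loss is given below, assuming RF data arranged with the axial dimension first; the envelope detection, the fixed weighting alpha, and all names are illustrative simplifications, not the paper's adaptive formulation.

```python
import numpy as np
from scipy.signal import hilbert

def bmode(rf, dynamic_range_db=60.0):
    """Envelope-detect RF data and log-compress it into a B-mode image."""
    env = np.abs(hilbert(rf, axis=0))        # envelope along the axial axis
    env = env / (env.max() + 1e-12)
    db = 20.0 * np.log10(env + 1e-12)
    return np.clip(db, -dynamic_range_db, 0.0)

def mixed_loss(rf_pred, rf_target, alpha=0.5):
    """Weighted sum of an RF-domain MSE and a B-mode-domain MSE.

    alpha balances the two terms; the paper adapts this balance during
    training, whereas here it is simply a fixed hyperparameter.
    """
    loss_rf = np.mean((rf_pred - rf_target) ** 2)
    loss_bm = np.mean((bmode(rf_pred) - bmode(rf_target)) ** 2)
    return alpha * loss_rf + (1.0 - alpha) * loss_bm
```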

    Image processing in medical ultrasound

    This Ph.D. project addresses image processing in medical ultrasound and pursues two major scientific goals: first, to develop an understanding of the most significant factors influencing image quality in medical ultrasound, and second, to use this knowledge to develop image processing methods for enhancing the diagnostic value of medical ultrasound. The project is an industrial Ph.D. project co-sponsored by BK Medical ApS, with the commercial goal of improving the image quality of BK Medical's scanners. Currently, BK Medical employs a simple conventional delay-and-sum beamformer to generate B-mode images. This is a simple and well-understood method that allows dynamic receive focusing for improved resolution; the drawback is that optimal focus is only achieved at the transmit focal point. Synthetic aperture techniques can overcome this drawback, but at the cost of increased system complexity and computational demands. The development goal of this project is to implement Synthetic Aperture Sequential Beamforming (SASB), a new synthetic aperture (SA) beamforming method. The benefit of SASB is an improved image quality compared to conventional beamforming and a reduced system complexity compared to conventional synthetic aperture techniques. The implementation is evaluated using both simulations and measurements for technical and clinical evaluations. During the course of the project, three sub-projects were conducted. The first project was the development and implementation of a real-time data acquisition system. The system was implemented using the commercially available 2202 ProFocus BK Medical ultrasound scanner equipped with a research interface and a standard PC. The main feature of the system is the ability to acquire several seconds of interleaved data, switching between multiple imaging setups. This makes the system well suited for the development of new processing methods and for clinical evaluations, where acquiring the exact same scan location with multiple methods is important. The second project addressed the implementation, development, and evaluation of SASB using a convex array transducer. The evaluation was performed as a three-phase clinical trial. In the first phase, the prototype phase, the technical performance of SASB was evaluated using the ultrasound simulation software Field II and the Beamformation Toolbox III (BFT3), and subsequently evaluated using phantom and in-vivo measurements. The technical performance was compared to conventional beamforming and gave motivation to continue to phase two. The second phase evaluated the clinical performance of abdominal imaging in a pre-clinical trial in comparison with conventional imaging, conducted as a double-blinded study. The results of the pre-clinical trial motivated a larger-scale clinical trial. Each of the two clinical trials was performed in collaboration with Copenhagen University Hospital, Rigshospitalet, and Copenhagen University, Department of Biostatistics. Evaluations were performed by medical doctors and experts in ultrasound, using the developed Image Quality assessment program (IQap). The study concludes that the image quality in terms of spatial resolution, contrast, and unwanted artifacts is statistically better with SASB imaging than with conventional imaging. The third and final project concerned simulation of the acoustic field for high-quality imaging systems. During the simulation study of SASB, it was noted that the simulated results did not predict the measured responses with sufficient confidence for simulated system-performance evaluation. Closer inspection of the measured transducer characteristics showed a severe time-of-flight phase error, sensitivity deviations, and deviating frequency responses between elements. Simulations combined with experimentally determined element pulse-echo wavelets showed that conventional simulation using identical pulse-echo wavelets for all elements is too simplistic to capture the true performance of the imaging system, and that the simulations can be improved by including individual pulse-echo wavelets for each element. Using the improved model, the accuracy of the simulated response is improved significantly and is useful for simulated system evaluation. It was further shown that conventional imaging is less sensitive to phase and sensitivity errors than SASB imaging. This shows that, for simulated performance evaluation, a realistic simulation model is important for a reliable evaluation of new high-quality imaging systems.
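    For context on the conventional delay-and-sum baseline discussed above, the sketch below shows a bare-bones plane-wave delay-and-sum beamformer; it is not the SASB method or BK Medical's implementation, and the transmit geometry, sampling parameters, and names are assumptions for illustration.

```python
import numpy as np

def delay_and_sum(channel_data, element_x, image_x, image_z, c=1540.0, fs=40e6):
    """Simple delay-and-sum beamformer for a single plane-wave transmission.

    channel_data : (n_samples, n_elements) receive RF data
    element_x    : (n_elements,) lateral element positions [m]
    image_x/z    : 1-D grids of image point coordinates [m]
    Returns a (len(image_z), len(image_x)) beamformed image.
    """
    n_samples, _ = channel_data.shape
    image = np.zeros((len(image_z), len(image_x)))
    for iz, z in enumerate(image_z):
        for ix, x in enumerate(image_x):
            # two-way delay: plane-wave transmit (normal incidence) + receive path
            t = (z + np.sqrt(z**2 + (x - element_x) ** 2)) / c
            idx = np.round(t * fs).astype(int)
            valid = idx < n_samples
            image[iz, ix] = channel_data[idx[valid], np.where(valid)[0]].sum()
    return image
```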

    Exploiting Temporal Image Information in Minimally Invasive Surgery

    Minimally invasive procedures rely on medical imaging instead of the surgeon's direct vision. While preoperative images can be used for surgical planning and navigation, once the surgeon arrives at the target site, real-time intraoperative imaging is needed. However, acquiring and interpreting these images can be challenging, and much of the rich temporal information present in them is not directly visible. The goal of this thesis is to improve image guidance for minimally invasive surgery in two main areas: first, by showing how high-quality ultrasound video can be obtained by integrating an ultrasound transducer directly into delivery devices for beating-heart valve surgery; and second, by extracting hidden temporal information through video processing methods to help the surgeon localize important anatomical structures. Prototypes of delivery tools with integrated ultrasound imaging were developed for both transcatheter aortic valve implantation and mitral valve repair. These tools provided an on-site view that shows the tool-tissue interactions during valve repair. Additionally, augmented reality environments were used to add more anatomical context that aids in navigation and in interpreting the on-site video. Other procedures can be improved by extracting hidden temporal information from the intraoperative video. In ultrasound-guided epidural injections, dural pulsation provides a cue for finding a clear trajectory to the epidural space. By processing the video using extended Kalman filtering, subtle pulsations were automatically detected and visualized in real time. A statistical framework for analyzing periodicity was developed based on dynamic linear modelling. In addition to detecting dural pulsation in lumbar spine ultrasound, this approach was used to image tissue perfusion in natural video and to generate ventilation maps from free-breathing magnetic resonance imaging. A second statistical method, based on spectral analysis of pixel intensity values, allowed blood flow to be detected directly from high-frequency B-mode ultrasound video. Finally, pulsatile cues in endoscopic video were enhanced through Eulerian video magnification to help localize critical vasculature. This approach shows particular promise in identifying the basilar artery in endoscopic third ventriculostomy and the prostatic artery in nerve-sparing prostatectomy. A real-time implementation was developed which processed full-resolution stereoscopic video on the da Vinci Surgical System.
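    The sketch below illustrates one simple way of mapping periodic intensity fluctuations by spectral analysis of per-pixel time series, in the spirit of the pulsation and perfusion detection described above; the band limits, names, and band-power criterion are illustrative assumptions, not the thesis's Kalman-filter or dynamic-linear-model framework.

```python
import numpy as np

def pulsatility_map(video, fs, f_low=0.8, f_high=2.5):
    """Map of periodic intensity fluctuations within a frequency band.

    video : (n_frames, ny, nx) grayscale frames
    fs    : frame rate [Hz]
    Returns the fraction of per-pixel signal power falling in
    [f_low, f_high] Hz (e.g. a plausible cardiac band).
    """
    video = np.asarray(video, dtype=float)
    video = video - video.mean(axis=0)               # remove the static component
    spectrum = np.abs(np.fft.rfft(video, axis=0)) ** 2
    freqs = np.fft.rfftfreq(video.shape[0], d=1.0 / fs)
    band = (freqs >= f_low) & (freqs <= f_high)
    total_power = spectrum.sum(axis=0) + 1e-12
    return spectrum[band].sum(axis=0) / total_power
```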

    Phase Aberration Correction for in vivo Ultrasound Localization Microscopy Using a Spatiotemporal Complex-Valued Neural Network

    Ultrasound Localization Microscopy (ULM) can map microvessels at a resolution of a few micrometers (µm). Transcranial ULM remains challenging in the presence of aberrations caused by the skull, which lead to localization errors. Herein, we propose a deep learning approach based on recently introduced complex-valued convolutional neural networks (CV-CNNs) to retrieve the aberration function, which can then be used to form enhanced images using standard delay-and-sum beamforming. Complex-valued convolutional networks were selected as they can apply time delays through multiplication with in-phase/quadrature (IQ) input data. Predicting the aberration function rather than corrected images also confers enhanced explainability to the network. In addition, 3D spatiotemporal convolutions were used so the network can leverage entire microbubble tracks. For training and validation, we used an anatomically and hemodynamically realistic mouse brain microvascular network model to simulate the flow of microbubbles in the presence of aberration. We then confirmed the capability of our network to generalize to transcranial in vivo data in the mouse brain (n=2). Qualitatively, vascular reconstructions using a pixel-wise predicted aberration function included additional and sharper vessels. The spatial resolution was evaluated using the Fourier ring correlation (FRC). After correction, we measured a resolution of 16.7 µm in vivo, representing an improvement of up to 27.5%. This work opens up further applications for complex-valued convolutions in biomedical imaging and strategies to perform transcranial ULM.
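    The remark that complex-valued networks can apply time delays through multiplication with IQ data can be illustrated as below, under a narrowband approximation; the demodulation frequency, delay value, and function name are illustrative assumptions, not the paper's network.

```python
import numpy as np

def apply_delay_iq(iq_signal, delay_s, f_demod):
    """Apply a small time delay to baseband IQ data.

    For narrowband IQ data demodulated at f_demod, a delay is well
    approximated by a complex rotation exp(-2j*pi*f_demod*delay),
    which is exactly the kind of factor a complex-valued weight can learn.
    """
    phase = np.exp(-2j * np.pi * f_demod * delay_s)
    return iq_signal * phase

# Illustration: delay 5 MHz-demodulated IQ samples by 20 ns
iq = np.array([1.0 + 0.5j, 0.3 - 0.2j])
delayed = apply_delay_iq(iq, delay_s=20e-9, f_demod=5e6)
```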

    Ultrafast Ultrasound Imaging

    Among medical imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), ultrasound imaging stands out due to its temporal resolution. Owing to the nature of medical ultrasound imaging, it has been used not only for observation of the morphology of living organs but also for functional imaging, such as blood flow imaging and evaluation of cardiac function. Ultrafast ultrasound imaging, which has recently become widely available, significantly increases the opportunities for medical functional imaging. Ultrafast ultrasound imaging typically enables frame rates of up to ten thousand frames per second (fps). This extremely high temporal resolution enables visualization of rapid dynamic responses of biological tissues that cannot be observed and analyzed by conventional ultrasound imaging. This Special Issue includes various studies of improvements to the performance of ultrafast ultrasound imaging.
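    As a back-of-the-envelope check of the frame rates quoted above, assuming one transmit event (e.g. a single plane wave) per frame and an illustrative 5 cm imaging depth:

```python
# Upper bound on frame rate for single-transmit imaging:
# each frame must wait for the round trip to the maximum imaging depth.
c = 1540.0      # speed of sound in soft tissue [m/s]
depth = 0.05    # imaging depth [m] (5 cm, illustrative value)
max_frame_rate = c / (2 * depth)
print(f"{max_frame_rate:.0f} frames per second")   # ~15,400 fps
```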

    Roadmap on signal processing for next generation measurement systems

    Signal processing is a fundamental component of almost any sensor-enabled system, with a wide range of applications across different scientific disciplines. Time series data, images, and video sequences comprise representative forms of signals that can be enhanced and analysed for information extraction and quantification. Recent advances in artificial intelligence and machine learning are shifting research attention towards intelligent, data-driven signal processing. This roadmap presents a critical overview of state-of-the-art methods and applications, aiming to highlight future challenges and research opportunities towards next generation measurement systems. It covers a broad spectrum of topics ranging from basic to industrial research, organized in concise thematic sections that reflect the trends and the impacts of current and future developments per research field. Furthermore, it offers guidance to researchers and funding agencies in identifying new prospects.

    Hemodynamic Quantifications By Contrast-Enhanced Ultrasound: From In-Vitro Modelling To Clinical Validation
