Ultrasound Elastography using Machine Learning
This thesis aims to solve two main problems in ultrasound elastography: fast strain estimation and radio-frequency (RF) frame selection. We rely on machine learning concepts such as principal component analysis (PCA), the multi-layer perceptron (MLP), and convolutional neural networks (CNNs) to build three models that are trained on both phantom and in vivo data. In our first work, we developed a method to estimate the initial displacement between two ultrasound RF frames using PCA. We first compute an initial displacement estimate at around 1% of the samples, and then decompose the displacement into a linear combination of principal components (obtained offline during the training step). Our method assumes that the initial displacement of the whole image can also be described by this linear combination of principal components. This yields the same result that we would have obtained by running dynamic programming (DP), but with the advantage that PCA computes the same initial displacement image more than 10 times faster than DP. We then pass the result to GLobal Ultrasound Elastography (GLUE) for fine-tuning, so we call the method PCA-GLUE.
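The reconstruction step described above can be sketched as follows, assuming the mean displacement and principal components have already been learned offline; all function and variable names here are illustrative, not taken from the thesis code:

```python
import numpy as np

def pca_displacement(sparse_disp, sparse_idx, mean_disp, components):
    """Estimate a dense displacement field from sparse estimates.

    sparse_disp: displacement measured at ~1% of the sample locations
    sparse_idx:  flat indices of those sampled locations
    mean_disp:   mean displacement field (flattened), learned offline
    components:  (n_samples, n_components) principal components, learned offline
    """
    # Solve for the weights that best explain the sparse estimates
    B = components[sparse_idx]  # rows of the basis at the sampled points
    w, *_ = np.linalg.lstsq(B, sparse_disp - mean_disp[sparse_idx], rcond=None)
    # Dense field = mean + linear combination of principal components
    return mean_disp + components @ w
```

Because only a small least-squares system is solved instead of a per-sample search, the dense field is obtained at a fraction of the cost of running DP over the whole image.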
In our second work, we developed a novel method to address the problem of RF frame selection in ultrasound elastography. Intuitively, we would like a classifier that assigns a binary label of 1 to RF frame pairs that yield high-quality strain images. We build on our previous work, in which we decompose the initial displacement between two RF frames into a weight vector multiplied by principal components, and use this weight vector as the input feature vector to an MLP model. Given two RF frames I1 and I2, the MLP model predicts the normalized cross-correlation (NCC) between I1 and I2′, where I2′ is I2 after being displaced according to the displacement field estimated by GLUE or PCA-GLUE. Our final contribution in this line of research is a CNN-based method for RF frame selection. First, we changed the architecture from an MLP to a CNN that takes the two RF frames as two input channels; the CNN outperforms the MLP model because it has access to richer features. Second, we improved the automatic labelling of the data by requiring several physical conditions to be satisfied together before a pair is considered a suitable pair of RF frames.
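The regression target the MLP learns is the NCC between the first frame and the displaced second frame. A minimal sketch of that target (the function name is my own):

```python
import numpy as np

def ncc(i1, i2):
    """Normalized cross-correlation between two equal-sized RF frames.
    Values near 1 indicate a well-compensated, high-quality frame pair."""
    a = i1 - i1.mean()
    b = i2 - i2.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

In this framing, a pair (I1, I2) could be labelled suitable when, for example, ncc(I1, I2′) exceeds a threshold, with I2′ obtained by warping I2 with the PCA-GLUE displacement.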
Developing Ultrasound-Guided Intervention Technologies Enabled by Sensing Active Acoustic and Photoacoustic Point Sources
Image-guided therapy is a central part of modern medicine. By incorporating
medical imaging into the planning, surgical, and evaluation process, image-guided therapy has helped surgeons perform less invasive and more precise procedures. Of
the most commonly used medical imaging modalities, ultrasound imaging offers a unique combination of cost-effectiveness, safety, and mobility. Advanced ultrasound-guided interventional systems often require calibration and tracking technologies to enable all of their capabilities. Many of these technologies rely on localizing point-based fiducials to accomplish their task.
In this thesis, I investigate how sensing and localizing active acoustic and photoacoustic point sources can have a substantial impact on intraoperative ultrasound. The
goals of these methods are (1) to improve localization and visualization for point targets that are not easily distinguished under conventional ultrasound and (2) to track
and register ultrasound sensors with the use of active point sources as non-physical fiducials or markers.
We applied these methods to three main research topics. The first is an ultrasound calibration framework that utilizes an active acoustic source as the phantom to aid in in-plane segmentation as well as out-of-plane estimation. The second is an interventional photoacoustic surgical system that utilizes the photoacoustic effect to create markers for tracking ultrasound transducers. We demonstrate variations of this idea
to track a wide range of ultrasound transducers (three-dimensional, two-dimensional, bi-planar). The third is a set of interventional tool-tracking methods that combine acoustic elements embedded in the tool with the use of photoacoustic markers.
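As one concrete illustration of sensing an active point source, the source position can be recovered from times of flight at known receiver positions by linear least squares. This is a standard multilateration sketch under an assumed speed of sound, not the thesis implementation:

```python
import numpy as np

def localize_point_source(sensors, tof, c=1540.0):
    """Least-squares localization of an acoustic point source.

    sensors: (n, 3) known receiver positions in meters
    tof:     (n,) one-way times of flight in seconds
    c:       assumed speed of sound in tissue (m/s)
    """
    d = c * tof  # ranges from the source to each receiver
    # Linearize by subtracting the first range equation from the others:
    # |x - s_i|^2 - |x - s_0|^2 = d_i^2 - d_0^2
    A = 2.0 * (sensors[0] - sensors[1:])
    b = (d[1:] ** 2 - d[0] ** 2
         + (sensors[0] ** 2).sum() - (sensors[1:] ** 2).sum(axis=1))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

The sketch assumes at least four non-coplanar receivers and synchronized one-way timing; in practice the speed of sound and trigger delays would themselves need calibration.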
Medical Ultrasound Imaging and Interventional Component (MUSiiC) Framework for Advanced Ultrasound Image-guided Therapy
Medical ultrasound (US) imaging is a popular and convenient medical imaging
modality thanks to its mobility, non-ionizing radiation, ease-of-use, and real-time data
acquisition. Conventional US brightness mode (B-Mode) is one type of diagnostic
medical imaging modality that represents tissue morphology by collecting and displaying
the intensity information of a reflected acoustic wave. Moreover, US B-Mode imaging is
frequently integrated with tracking systems and robotic systems in image-guided therapy
(IGT) systems. Recently, these systems have also begun to incorporate advanced US
imaging such as US elasticity imaging, photoacoustic imaging, and thermal imaging.
Several software frameworks and toolkits have been developed for US imaging research
and the integration of US data acquisition, processing and display with existing IGT
systems. However, there is no software framework or toolkit that supports advanced US
imaging research and advanced US IGT systems by providing low-level US data (channel
data or radio-frequency (RF) data) essential for advanced US imaging.
In this dissertation, we propose a new medical US imaging and interventional
component framework for advanced US image-guided therapy based on network-distributed
modularity, real-time computation and communication, and open-interface
design specifications. Consequently, the framework can provide a modular research
environment by supporting communication interfaces between heterogeneous systems to
allow for flexible interventional US imaging research, and easy reconfiguration of an
entire interventional US imaging system by adding or removing devices or equipment
specific to each therapy. In addition, our proposed framework offers real-time
synchronization between data from multiple data acquisition devices for advanced
interventional US imaging research and integration of the US imaging system with other
IGT systems. Moreover, we can easily implement and test new advanced ultrasound
imaging techniques inside the proposed framework in real-time because our software
framework is designed and optimized for advanced ultrasound research. The system’s
flexibility, real-time performance, and open-interface are demonstrated and evaluated
through experimental tests for several applications.
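One small piece of such a framework, synchronizing data from multiple acquisition devices, can be sketched as nearest-timestamp pairing between two streams (illustrative only; MUSiiC's actual interfaces are not shown here):

```python
import bisect

def pair_nearest(ts_a, ts_b, tol):
    """Pair each timestamp in stream A with the nearest timestamp in
    stream B (both sorted, in seconds); drop pairs farther apart than tol."""
    pairs = []
    for i, t in enumerate(ts_a):
        j = bisect.bisect_left(ts_b, t)
        # The nearest neighbor is either just before or just after t
        best = min((c for c in (j - 1, j) if 0 <= c < len(ts_b)),
                   key=lambda c: abs(ts_b[c] - t))
        if abs(ts_b[best] - t) <= tol:
            pairs.append((i, best))
    return pairs
```

For example, pairing a tracker stream against an RF-frame stream this way yields only the frame/pose pairs that fall within the chosen tolerance, which is the kind of alignment a real-time IGT pipeline needs before fusing the data.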
Real-Time Elastography Systems
Ultrasound elastography is a technique often used to detect cancerous tumors and monitor ablation therapy by detecting changes in the stiffness of the underlying tissue. The technique is computationally expensive due to the extensive searching between two raw ultrasound images, called radio-frequency (RF) images. This thesis explores various methods to accelerate the computation required for elastography so that it can be used during surgery.
This thesis is divided into three parts. We begin by exploring acceleration techniques, including multithreading, asynchronous computing, and acceleration on the graphics processing unit (GPU). Elastography algorithms are often affected by out-of-plane motion caused by several external factors, such as hand tremor and incorrect palpation motion, among others. In this thesis, we implemented an end-to-end system that integrates an external tracking system to detect the in-plane motion between two radio-frequency (RF) data slices. This in-plane detection helps to reject de-correlated RF slices and produces a consistent elastography output. We also explore the integration of a da Vinci Surgical Robot to provide stable palpation motion during surgery.
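The multithreading idea can be sketched by farming RF lines out to a thread pool, with a toy correlation-based kernel standing in for the real elastography algorithm (all names here are illustrative):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def strain_per_line(pre_line, post_line, win=64, step=32):
    """Toy per-RF-line strain: gradient of the window-wise lag found by
    cross-correlation (a stand-in for a real elastography kernel)."""
    lags = []
    for start in range(0, len(pre_line) - win, step):
        p = pre_line[start:start + win]
        q = post_line[start:start + win]
        xc = np.correlate(q - q.mean(), p - p.mean(), mode="full")
        lags.append(xc.argmax() - (win - 1))  # lag of post relative to pre
    return np.gradient(np.asarray(lags, dtype=float))

def strain_image(pre, post, workers=4):
    """Process the RF lines (columns) of a frame pair in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        cols = pool.map(strain_per_line, pre.T, post.T)
    return np.stack(list(cols), axis=1)
```

Because each RF line is independent, the per-line work distributes cleanly across threads; a GPU version would apply the same decomposition with one thread block per line.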
The external tracking system suffers from interference: an electromagnetic tracker is disturbed by ferromagnetic materials present in the operating theater, while optical and camera-based tracking systems are limited by people or objects obstructing the line of sight and by complete or partial occlusion of the tracking sensors. Additionally, these systems must be calibrated to give the position of the tracked objects with respect to the trackers. Although calibration and trackers are helpful for inter-modality registration, we focus on a tracker-less method to determine the in-plane motion between two RF slices. Our technique divides the two input RF images into regions of interest and performs elastography on the RF lines that encapsulate those regions of interest.
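The tracker-less check can be illustrated with per-region correlation: divide the frames into a grid of regions of interest, correlate each region, and use the median as an in-plane quality score. This is a simplified stand-in for the thesis method, not its actual implementation:

```python
import numpy as np

def in_plane_score(i1, i2, grid=(4, 4)):
    """Median per-ROI normalized cross-correlation between two RF frames.
    Out-of-plane motion decorrelates the frames and drives this score down."""
    rows = np.array_split(np.arange(i1.shape[0]), grid[0])
    cols = np.array_split(np.arange(i1.shape[1]), grid[1])
    scores = []
    for r in rows:
        for c in cols:
            a = i1[np.ix_(r, c)].ravel(); a = a - a.mean()
            b = i2[np.ix_(r, c)].ravel(); b = b - b.mean()
            scores.append((a @ b) / np.sqrt((a @ a) * (b @ b)))
    return float(np.median(scores))
```

Frame pairs whose score falls below a chosen threshold would be discarded as de-correlated, without requiring any external tracker.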
Finally, we implemented the world's first known five-dimensional ultrasound system, built by combining a 3D B-mode volume and a 3D elastography volume visualized over time. A user-controlled multi-dimensional transfer function is used to differentiate between the 3D B-mode and the 3D elastography volumes.
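The multi-dimensional transfer function can be sketched as a mapping from (B-mode intensity, strain) to color and opacity. In this toy version the threshold and colors are my own choices, not the thesis parameters; stiff, low-strain voxels are tinted red and made more opaque so they stand out in the rendered volume:

```python
import numpy as np

def transfer_function_2d(bmode, strain, strain_cut=0.5):
    """Toy 2-D transfer function mapping (B-mode, strain) to RGBA.
    Both inputs are assumed normalized to [0, 1]."""
    stiff = strain < strain_cut                       # low strain = stiff tissue
    rgba = np.zeros(bmode.shape + (4,))
    rgba[..., 0] = np.where(stiff, 1.0, bmode)        # red channel flags stiffness
    rgba[..., 1] = np.where(stiff, 0.0, bmode)
    rgba[..., 2] = np.where(stiff, 0.0, bmode)
    rgba[..., 3] = np.where(stiff, 0.9, 0.3 * bmode)  # stiff voxels more opaque
    return rgba
```

Applied per time step to the paired B-mode and elastography volumes, a lookup of this kind lets a single rendering show soft anatomy in grayscale while highlighting stiff regions.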