
    Doctor of Philosophy

    Congenital heart defects are classes of birth defects that affect the structure and function of the heart. These defects are attributed to the abnormal or incomplete development of the fetal heart during the first few weeks following conception. The overall detection rate of congenital heart defects during routine prenatal examination is low, largely because of the insufficient number of trained personnel in many local health centers, where many cases of congenital heart defects go undetected. This dissertation presents a system to identify congenital heart defects in order to improve pregnancy outcomes and increase detection rates. The system was developed and its performance assessed in identifying the presence of ventricular defects (congenital heart defects that affect the size of the ventricles) using four-dimensional fetal echocardiographic images. The designed system consists of three components: 1) a fetal heart location estimation component, 2) a fetal heart chamber segmentation component, and 3) a detection component that detects congenital heart defects from the segmented chambers. The location estimation component is used to isolate the fetal heart in any four-dimensional fetal echocardiographic image. It uses a hybrid region-of-interest extraction method that is robust to the speckle noise degradation inherent in all ultrasound images. The location estimation method's performance was analyzed on 130 four-dimensional fetal echocardiographic images by comparison with manually identified fetal heart regions of interest. The method showed good agreement with the manually identified standard on four quantitative indexes: the Jaccard index, the Sørensen-Dice index, the sensitivity index, and the specificity index, with average values of 80.70%, 89.19%, 91.04%, and 99.17%, respectively. The fetal heart chamber segmentation component uses velocity vector field estimates computed on the frames of a four-dimensional image to identify the fetal heart chambers. The velocity vector fields are computed using a histogram-based optical flow technique formulated on local image characteristics to reduce the effect of speckle noise and nonuniform echogenicity on the estimates. Features based on the velocity vector field estimates, voxel brightness/intensity values, and voxel Cartesian coordinate positions were extracted and used with a kernel k-means algorithm to identify the individual chambers. The segmentation method's performance was evaluated on 130 images from 31 patients by comparing the segmentation results with manually identified fetal heart chambers. Evaluation was based on the Sørensen-Dice index, the absolute volume difference, and the Hausdorff distance, with per-patient average values of 69.92%, 22.08%, and 2.82 mm, respectively. The detection component uses the volumes of the identified fetal heart chambers to flag the possible occurrence of hypoplastic left heart syndrome, a type of congenital heart defect. An empirical volume threshold defined on the relative ratio of adjacent fetal heart chamber volumes, obtained manually, is used in the detection process. The performance of the detection procedure was assessed by comparison with a set of images with a confirmed diagnosis of hypoplastic left heart syndrome and a control group of normal fetal hearts. Of the 130 images considered, 18 of 20 (90%) fetal hearts were correctly detected as having hypoplastic left heart syndrome, and 84 of 110 (76.36%) fetal hearts in the control group were correctly detected as normal. The results show that the detection system performs better than the overall detection rate for congenital heart defects, which is reported to be between 30% and 60%.
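
    As an illustration of the four agreement indexes and the chamber-volume flag described above, a minimal sketch in Python follows; the function names are ours, and the 0.5 ratio in flag_hlhs is purely illustrative, since the dissertation derives its threshold empirically from manual measurements.

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Agreement between a predicted and a manually identified binary
    region of interest, both boolean NumPy arrays of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = (pred & truth).sum()     # true positives
    fp = (pred & ~truth).sum()    # false positives
    fn = (~pred & truth).sum()    # false negatives
    tn = (~pred & ~truth).sum()   # true negatives
    return {
        "jaccard":     tp / (tp + fp + fn),
        "dice":        2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

def flag_hlhs(left_vol, right_vol, threshold=0.5):
    """Flag a possible hypoplastic left heart when the left chamber is
    much smaller than its neighbour. The 0.5 ratio is a placeholder;
    the dissertation uses an empirically derived threshold."""
    return left_vol / right_vol < threshold
```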

    Basic Science to Clinical Research: Segmentation of Ultrasound and Modelling in Clinical Informatics

    The world of basic science is a world of minutiae; it boils down to improving even a fraction of a percent over the baseline standard. It is a domain of peer-reviewed fractions of seconds, and of squeezing every last ounce of efficiency from a processor, a storage medium, or an algorithm. The field of health data is based on extracting knowledge from segments of data that may improve some clinical process or practice guideline, improving the time and quality of care. Clinical informatics and knowledge translation provide this information in order to reveal insights that improve patient treatments, regimens, and overall outcomes. In my world of minutiae, or basic science, the movement of blood served an integral role. The novel detection of sound reverberations maps out the landscape of my research. I have applied my algorithms to the various anatomical structures of the heart and arterial system. This serves as a basis for segmentation, active contouring, and shape priors. The algorithms presented leverage novel applications in segmentation by using anatomical features of the heart as shape priors and by integrating optical flow models to improve tracking. The presented techniques show improvements over traditional methods in the estimation of left ventricular size and function, along with plaque estimation in the carotid artery. In my clinical world of data understanding, I have endeavoured to decipher trends in Alzheimer’s disease, sepsis in hospital patients, and the burden of melanoma using mathematical modelling methods. The use of decision trees, Markov models, and various clustering techniques provides insights into data sets that are otherwise hidden. Finally, I demonstrate how efficient data capture from providers can achieve rapid results and actionable information on patient medical records. This culminated in studies on the burden of illness and its associated costs. A selection of published works from my research, spanning basic science to clinical informatics, has been included in this thesis to detail my transition. This is my journey from one contented realm to a turbulent one.
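
    To make the modelling side concrete, a minimal Markov cohort model of the kind used for burden-of-illness and cost estimates might look like the following sketch; the states, transition probabilities, costs, and discount rate are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Hypothetical three-state cohort model: Well, Sick, Dead.
# All probabilities and costs are illustrative, not from the thesis.
P = np.array([[0.90, 0.08, 0.02],   # annual transition probabilities
              [0.10, 0.75, 0.15],
              [0.00, 0.00, 1.00]])
annual_cost = np.array([500.0, 12000.0, 0.0])  # cost per person-year per state
discount = 0.03                                # yearly discount rate

state = np.array([1.0, 0.0, 0.0])  # cohort starts in the Well state
total_cost = 0.0
for year in range(20):             # 20-year horizon
    total_cost += (state @ annual_cost) / (1 + discount) ** year
    state = state @ P              # advance the cohort one cycle

print(f"Expected discounted 20-year cost per patient: {total_cost:,.0f}")
```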

    Quantitative Analysis of Ultrasound Images of the Preterm Brain

    In this PhD, new algorithms are proposed to better understand and diagnose white matter damage in the preterm brain. Since ultrasound imaging is the modality best suited to inspecting brain pathologies in very low birth weight infants, we propose multiple techniques to assist in what is called computer-aided diagnosis. As a main result, we are able to increase detectability of white matter damage from 70% with qualitative diagnosis to 98% with quantitative analysis.

    Foetal echocardiographic segmentation

    Congenital heart disease affects just under one percent of all live births [1]. Defects that manifest as changes to the cardiac chamber volumes are the motivation for the research presented in this thesis. Blood volume measurements in vivo require delineation of the cardiac chambers, and manual tracing of foetal cardiac chambers is very time consuming and operator dependent. This thesis presents a multi-region level-set snake deformable model, applied in both 2D and 3D, which can automatically adapt, to some extent, to ultrasound noise such as attenuation, speckle, and partial occlusion artefacts. The algorithm presented is named Mumford Shah Sarti Collision Detection (MSSCD). The level-set methods presented in this thesis have an optional shape prior term for constraining the segmentation by a template registered to the image in the presence of shadowing and heavy noise. When applied to real data in the absence of the template, the MSSCD algorithm is initialised from seed primitives placed at the centre of each cardiac chamber. The voxel statistics inside each chamber are determined before evolution. The MSSCD stops at open boundaries between two chambers as the two approaching level-set fronts meet. This is significant when determining volumes for all cardiac compartments, since cardiac indices assume that each chamber is treated in isolation. Comparison of the segmentation results from the implemented snakes, including a previous level-set method from the foetal cardiac literature, shows that in both 2D and 3D, on both real and synthetic data, the MSSCD formulation is better suited to these types of data. All the algorithms tested in this thesis are within 2 mm error of manually traced segmentations of the foetal cardiac datasets, corresponding to less than 10% of the length of a foetal heart. In addition to comparison with manual tracings, all the amorphous deformable model segmentations in this thesis are validated using a physical phantom. The volume estimated from the MSSCD segmentation of the phantom is within 13% of the physically determined volume.
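
    MSSCD itself is a multi-region level-set formulation; as a rough sketch of its seeded initialisation, pre-computed chamber statistics, and collision stopping, the following toy Python replaces the level-set evolution with simple morphological front growth. All names, the tolerance, and the growth rule are our assumptions, not the thesis's method.

```python
import numpy as np
from scipy import ndimage

def grow_chambers(image, seeds, n_iter=200, tol=2.0):
    """Toy multi-seed front propagation with collision stopping.

    image : 2D/3D intensity array; seeds : list of boolean masks, one
    per chamber. Each front expands into voxels whose intensity lies
    within `tol` standard deviations of the statistics sampled inside
    its seed, and may not enter voxels another front has claimed --
    a crude stand-in for the MSSCD collision test at open boundaries.
    """
    masks = [np.asarray(s, dtype=bool).copy() for s in seeds]
    # Voxel statistics inside each chamber, fixed before evolution.
    stats = [(image[m].mean(), image[m].std() + 1e-6) for m in masks]
    struct = ndimage.generate_binary_structure(image.ndim, 1)
    for _ in range(n_iter):
        claimed = np.logical_or.reduce(masks)
        changed = False
        for m, (mu, sigma) in zip(masks, stats):
            ring = ndimage.binary_dilation(m, struct) & ~claimed
            accept = ring & (np.abs(image - mu) < tol * sigma)
            if accept.any():
                m |= accept          # grow this chamber's front in place
                claimed |= accept    # block the other fronts
                changed = True
        if not changed:              # all fronts have met or stopped
            break
    return masks
```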

    EFFECTS OF SCATTERING AND ABSORPTION ON LASER SPECKLE CONTRAST IMAGING

    Laser Speckle Contrast Imaging (LSCI) is a real-time, non-invasive method used to investigate blood flow and perfusion in biological tissues with high temporal and spatial resolution. A reduction in speckle contrast due to particle motion is the primary contrast mechanism in LSCI. Motion causes speckle fluctuations in time and reduces the contrast over a given camera integration period. A variety of parameters besides motion affect contrast; the optical properties of the scattering medium are among them. Changes in blood hematocrit levels manifest as changes in optical properties. In this work, we explore the effects of different hematocrit levels on LSCI contrast values using fluid phantoms with varying optical properties. The combined effects of the scattering and absorption coefficients on LSCI values are investigated using fluid phantoms designed to mimic the scattering and absorbing properties of blood with varying levels of hematocrit. The flow phantoms in our experiments contained different concentrations of glass microspheres (brand name Luxil) and India ink mixed with DI water. The different numbers of scatterers and absorbers in the phantoms mimic the scattering and absorption behaviors of blood with different numbers of red blood cells. An LSCI setup combined with a simple flow system was used to investigate the combined scattering and absorption coefficients of 121 samples with different concentrations of Luxil microspheres and India ink. The fluid phantoms were pumped through 2 mm glass tubing on top of a plastic block using a mini peristaltic pump, and the flow was imaged with a CCD camera. A MATLAB GUI controlled the pump and camera to provide near real-time contrast images of the flow. An 11x11 matrix of phantoms was created: the scattering coefficient was varied across the columns and the absorption coefficient across the rows, such that the first element of the matrix is water and the last element contains the phantom with the maximum number of scatterers and absorbers. One hundred raw speckle images were recorded for each phantom experiment using the described optical setup, and each experiment was conducted 3 times per element of the matrix. The 11x11 results matrix displayed the average speckle image over all 300 raw speckle images per element. Additionally, the matrix was filled with the contrast images, where contrast is defined as the standard deviation of intensity over the mean intensity. To compare the results numerically, we calculated the ratio of the contrast in a window over the moving portion to the contrast in a same-sized window over the static portion of the phantoms. According to the LSCI experiments, an increase in the scattering and absorption coefficients led to a reduction in the contrast values of LSCI images. Increasing the number of scatterers and absorbers (equivalent to changing the hematocrit level) increased the optical properties (scattering and absorption coefficients), which reduced the contrast value in the moving area. A linear curve with negative slope describes the relationship between contrast and the scattering coefficient, and between contrast and the absorption coefficient.
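
    The contrast computation described above (standard deviation of intensity over mean intensity in a sliding window) can be sketched as follows; the 7x7 window is a common choice in the LSCI literature, not a value taken from this abstract, and the region slices are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, window=7):
    """Local speckle contrast K = sigma / mean over a sliding window."""
    raw = raw.astype(np.float64)
    mean = uniform_filter(raw, window)
    mean_sq = uniform_filter(raw * raw, window)
    var = np.maximum(mean_sq - mean * mean, 0.0)  # guard tiny negatives
    return np.sqrt(var) / (mean + 1e-12)

def contrast_ratio(K, moving_slice, static_slice):
    """Ratio of mean contrast over same-sized windows placed on the
    moving (flow) and static portions of the phantom."""
    return K[moving_slice].mean() / K[static_slice].mean()
```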

    Segmentation of 3D Carotid Ultrasound Images Using Weak Geometric Priors

    Vascular diseases are among the leading causes of death in Canada and around the globe. A major underlying cause of most such medical conditions is atherosclerosis, a gradual accumulation of plaque on the walls of blood vessels. Particularly vulnerable to atherosclerosis is the carotid artery, which carries blood to the brain. Dangerous narrowing of the carotid artery can lead to embolism, a dislodgement of plaque fragments which travel to the brain and are the cause of most strokes. If this pathology can be detected early, such a deadly scenario can potentially be prevented through treatment or surgery. This not only improves the patient's prognosis, but also dramatically lowers the overall cost of treatment. Medical imaging is an indispensable tool for early detection of atherosclerosis, in particular since the exact location and shape of the plaque need to be known for accurate diagnosis. This can be achieved by locating the plaque inside the artery and measuring its volume or texture, a process greatly aided by image segmentation. The use of ultrasound imaging is desirable because it is a cost-effective and safe modality. However, ultrasonic images depict the sound-reflecting properties of tissue, and thus suffer from a number of artifacts not present in other medical images, such as acoustic shadowing, speckle noise, and discontinuous tissue boundaries. A robust ultrasound image segmentation technique must take these properties into account. Prior to segmentation, an important pre-processing step is the extraction of a series of features from the image via various transforms and non-linear filters. A number of such features are explored and evaluated, many of them resulting in piecewise-smooth images. It is also proposed to decompose the ultrasound image into several statistically distinct components; these components can then be used as features directly, or other features can be computed from them instead of from the original image. The decomposition scheme is derived using a maximum-a-posteriori estimation framework and is efficiently computable. Furthermore, this work presents and evaluates an algorithm for segmenting the carotid artery from surrounding tissues in 3D ultrasound images. The algorithm incorporates information from different sources using an energy minimization framework. Using the ultrasound image itself, statistical differences between the region of interest and its background are exploited, and maximal overlap with strong image edges is encouraged. To aid convergence to anatomically accurate shapes, and to deal with the above-mentioned artifacts, prior knowledge is incorporated into the algorithm through weak geometric priors. The performance of the algorithm is tested on a number of available 3D images, and encouraging results are obtained and discussed.
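
    A toy version of the energy-minimisation idea, combining a region-statistics term, an edge-overlap term, and a weak geometric prior, might look like the sketch below; the weights, the prior template, and all names are our illustrative assumptions, not the thesis's formulation.

```python
import numpy as np
from scipy import ndimage

def segmentation_energy(mask, image, edge_map, prior_mask,
                        w_region=1.0, w_edge=0.5, w_prior=0.25):
    """Toy energy for a candidate binary segmentation `mask`.

    Combines (1) a piecewise-constant region term (intensity variance
    inside and outside the region), (2) a term rewarding overlap of the
    mask boundary with strong image edges, and (3) a weak geometric
    prior penalising disagreement with a rough shape template
    `prior_mask` (e.g. an ellipse approximating the vessel
    cross-section). Lower energy is better; weights are illustrative.
    """
    region = image[mask].var() + image[~mask].var()

    # Mask boundary: voxels in the mask adjacent to the background.
    boundary = mask & ~ndimage.binary_erosion(mask)
    edge = -edge_map[boundary].mean() if boundary.any() else 0.0

    # Weak prior: small weight, so data terms dominate.
    prior = np.logical_xor(mask, prior_mask).mean()
    return w_region * region + w_edge * edge + w_prior * prior
```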

    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound acquisition is widespread in the biomedical field, due to its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allows the physician to monitor the evolution of the patient's disease and supports diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that change quickly over time. Denoising and super-resolution of US signals are relevant to improving both the physician's visual evaluation and the performance and accuracy of processing methods such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of US 2D images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features. We extend the framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, by incorporating denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition. While previous denoising work trades off computational cost against effectiveness, the proposed framework matches the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. We then introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method, with a deep learning model refining the interpolation to match the target (high-resolution) image; the design of the network architecture and the loss function improves the accuracy of the predicted scan lines. In the context of signal approximation, we introduce a kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to US 2D and 3D images; it improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, at a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
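
    The low-rank building block can be sketched as classical truncated-SVD denoising; note that the thesis learns the truncation thresholds with a deep model, whereas the rank selection below is a hand-tuned stand-in for illustration only.

```python
import numpy as np

def svd_denoise(image, k=None, energy=0.95):
    """Low-rank denoising by truncating the Singular Value Decomposition.

    The rank is either fixed (`k`) or chosen to retain a fraction
    `energy` of the spectral energy; the thesis instead *predicts* the
    optimal thresholds with a learned model.
    """
    U, s, Vt = np.linalg.svd(image.astype(np.float64), full_matrices=False)
    if k is None:
        cum = np.cumsum(s ** 2) / np.sum(s ** 2)
        k = int(np.searchsorted(cum, energy)) + 1  # smallest rank reaching `energy`
    return (U[:, :k] * s[:k]) @ Vt[:k]
```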