32 research outputs found

    3D/4D ultrasound registration of bone

    This paper presents a method to reduce the invasiveness of Computer Assisted Orthopaedic Surgery (CAOS) using ultrasound. To this end, we need to develop a method for 3D/4D ultrasound registration. The preliminary results of this study suggest that the development of a robust and "real-time" 3D/4D ultrasound registration method is feasible.

    Robust and Fast 3D Scan Alignment using Mutual Information

    This paper presents a mutual information (MI) based algorithm for the estimation of the full 6-degree-of-freedom (DOF) rigid body transformation between two overlapping point clouds. We first divide the scene into a 3D voxel grid and define simple-to-compute features for each voxel in the scan. The two scans to be aligned are treated as collections of these features, and the MI between the voxelized features is maximized to obtain the correct alignment. We have implemented our method with various simple point cloud features (such as the number of points in a voxel and the variance of z-height in a voxel) and compared its performance with existing point-to-point and point-to-distribution registration methods. We show that our approach has an efficient and fast parallel implementation on the GPU, and evaluate the robustness and speed of the proposed algorithm on two real-world datasets containing a variety of dynamic scenes from different environments.
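
    As an illustrative sketch (not the authors' implementation), the MI objective over voxel features can be computed from a joint histogram; the voxelization and the 6-DOF transform search are assumed to happen elsewhere, and all names and data below are hypothetical:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Estimate MI (in nats) between two feature arrays via their joint histogram."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()              # joint probability table
    px = pxy.sum(axis=1, keepdims=True)    # marginal of a
    py = pxy.sum(axis=0, keepdims=True)    # marginal of b
    nz = pxy > 0                           # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Correctly aligned scans share feature statistics, so their MI is high;
# independent features give MI near zero.
rng = np.random.default_rng(0)
f = rng.normal(size=5000)                  # e.g. per-voxel z-height variance
mi_same = mutual_information(f, f)
mi_indep = mutual_information(f, rng.normal(size=5000))
```

    A full registration would evaluate this objective at candidate transforms and keep the one maximizing MI.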

    Determining spatial patterns in gene expression using in situ hybridization and RNA sequencing data

    Determining spatial patterns in gene expression is crucial for understanding physiological function. Image analysis and machine learning play an important role in deriving these patterns from biological data. We first focus on the analysis of single-molecule fluorescence in situ hybridization (smFISH) data obtained from the Human Cell Atlas project. Image registration is an important step in data analysis pipelines that take in image data and output spatially resolved gene expression. We demonstrate an efficient method to register smFISH images by using a parametric representation of images based on finite-rate-of-innovation sampling and by optimizing empirical multivariate information measures. We then focus on the analysis of single-cell RNA-seq data. When these data are collected, precise spatial information for cells is lost. We compare different approaches to reconstructing the spatial location of cells using RNA-seq data and a reference gene expression atlas. We first compare the predictions obtained using polynomial regression and a multilayer perceptron regressor. Using polynomial regression we obtain R2 scores of over 0.99 for the prediction of x, y, and z coordinates; using our multilayer perceptron regressor we obtain R2 scores of 0.96-0.98. We then preselect subsets of informative genes from our original dataset and test the accuracy of our multilayer perceptron regressor using these smaller inputs. If we select a subset of 60 genes from our original set of 84 genes, the perceptron can predict location with only a slight loss of precision.
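
    A minimal sketch of the polynomial-regression idea, using synthetic data in place of the real expression atlas (the data, gene count, and coordinate model below are invented for illustration, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in data: expression of 5 genes for 200 cells, with one
# spatial coordinate that happens to be a quadratic function of expression.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.5 * X[:, 0] ** 2          # e.g. the x-coordinate of each cell

def quadratic_features(X):
    """Degree-2 polynomial features: bias, linear terms, and squared terms."""
    return np.hstack([np.ones((len(X), 1)), X, X ** 2])

# Least-squares fit of the polynomial model, then the R2 score of its predictions.
P = quadratic_features(X)
w, *_ = np.linalg.lstsq(P, y, rcond=None)
pred = P @ w
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

    When the coordinate really is a low-order polynomial of expression, R2 scores near 1 follow directly, which is consistent with the >0.99 reported above.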

    Automatic PET-CT Image Registration Method Based on Mutual Information and Genetic Algorithms

    Hybrid PET/CT scanners can simultaneously visualize coronary artery disease, as revealed by computed tomography (CT), and myocardial perfusion, as measured by positron emission tomography (PET). Manual registration is usually required in clinical practice to compensate for the spatial mismatch between datasets. In this paper, we present a registration algorithm that is able to automatically align PET/CT cardiac images. The algorithm is based on mutual information (MI) as the registration metric and on a genetic algorithm as the optimization method. A multiresolution approach was used to optimize the processing time. The algorithm was tested on computerized models of volumetric PET/CT cardiac data and on real PET/CT datasets. The proposed automatic registration algorithm smooths the pattern of the MI and allows it to reach the global maximum of the similarity function. The implemented method also allows the definition of the correct spatial transformation that matches both synthetic and real PET and CT volumetric datasets.
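
    The MI-plus-genetic-algorithm combination can be sketched on a toy 1-D alignment problem; this is an illustrative simplification of the paper's 3-D method, with all signals, population sizes, and parameters invented for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

def mi(a, b, bins=8):
    """Mutual information (nats) between two sample arrays via a joint histogram."""
    j, _, _ = np.histogram2d(a, b, bins=bins)
    p = j / j.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

# Hypothetical 1-D intensity profile and a copy shifted by 7 samples.
ref = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.05 * rng.normal(size=200)
true_shift = 7
mov = np.roll(ref, true_shift)

def fitness(s):
    """MI between the reference and the moving profile un-shifted by s."""
    return mi(ref, np.roll(mov, -s))

# A tiny elitist genetic algorithm over the integer shift parameter.
pop = rng.integers(-20, 21, size=24)
for _ in range(40):
    scores = np.array([fitness(int(s)) for s in pop])
    parents = pop[np.argsort(scores)[-12:]]                  # keep the fittest half
    children = np.clip(parents + rng.integers(-3, 4, size=12), -20, 20)  # mutate
    pop = np.concatenate([parents, children])

best = int(pop[np.argmax([fitness(int(s)) for s in pop])])
```

    The real method searches a rigid 3-D transform instead of one shift, and adds a multiresolution pyramid so early generations work on coarse, cheap volumes.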

    Enhanced phase congruency feature-based image registration for multimodal remote sensing imagery

    Multimodal image registration is an essential image processing task in remote sensing. It searches for the optimal alignment between images of the same scene captured by different sensors, in order to provide better visualization and more informative images. Manual image registration is tedious and labour-intensive, so developing automated image registration is crucial for a faster and more reliable solution. However, image registration faces many challenges arising from the nature of remote sensing images, the environment, and the technical shortcomings of current methods, which cause three issues: intensive processing requirements, local intensity variation, and rotational distortion. Since not all image details are significant, relying on salient features is more efficient in terms of processing power; thus, a feature-based registration method was adopted to avoid intensive processing. The proposed method resolves the rotational distortion issue using Oriented FAST and Rotated BRIEF (ORB) to produce rotation-invariant features. However, since ORB is not intensity invariant, it cannot support multimodal data on its own. To overcome the intensity variation issue, Phase Congruency (PC) was integrated with ORB to form the ORB-PC feature extractor, which generates features invariant to rotational distortion and local intensity variation. This is still not a complete solution, since the ORB-PC matching rate falls below expectations. An enhanced ORB-PC was therefore proposed to solve the matching issue by modifying the feature descriptor. While better feature matches were achieved, the high number of outliers in multimodal data makes common outlier removal methods unsuccessful. Therefore, Normalized Barycentric Coordinate System (NBCS) outlier removal was utilized to find precise matches even with a high number of outliers. Experiments were conducted to verify the registration qualitatively and quantitatively. The qualitative experiments show that the proposed method has a broader and better feature distribution, while the quantitative evaluation indicates an 18% improvement in registration accuracy compared to related works.
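
    The descriptor-matching step that the enhanced ORB-PC targets can be illustrated with a generic binary-descriptor matcher using Hamming distance and a nearest-neighbour ratio test; this is a simplified stand-in with invented descriptors, not the paper's actual ORB-PC descriptor:

```python
import numpy as np

rng = np.random.default_rng(3)

def hamming(d1, d2):
    """Hamming distance between two bit-packed binary descriptors."""
    return int(np.unpackbits(d1 ^ d2).sum())

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with a ratio test to reject ambiguous matches."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = sorted((hamming(d, e), j) for j, e in enumerate(desc_b))
        (best, j), (second, _) = dists[0], dists[1]
        if best < ratio * second:      # clearly better than the runner-up
            matches.append((i, j))
    return matches

# Hypothetical 256-bit descriptors: the first ten rows of B are A with ~2% of
# bits flipped (the same features seen in another modality); the rest are clutter.
A = rng.integers(0, 256, size=(10, 32), dtype=np.uint8)
flips = (rng.random((10, 256)) < 0.02).astype(np.uint8)
B_true = np.packbits(np.unpackbits(A, axis=1) ^ flips, axis=1)
B = np.vstack([B_true, rng.integers(0, 256, size=(10, 32), dtype=np.uint8)])

matches = match_descriptors(A, B)
```

    When modality differences flip many more bits than the 2% assumed here, the ratio test alone admits many outliers, which is why a geometric outlier-removal stage such as NBCS is still needed afterwards.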

    Ultrasound and computed tomography cardiac image registration

    As the trend in medical intervention moves towards becoming minimally invasive, the role of medical imaging has grown increasingly important. Medical images acquired from a variety of imaging modalities require image preprocessing, information extraction and data analysis algorithms in order for the potentially useful information to be delivered to clinicians, so as to facilitate better diagnosis, treatment planning and surgical intervention. This thesis investigates the use of an affine registration method to register pre-operative Computed Tomography (CT) and intra-operative Ultrasound cardiac images. The main benefit of registering Ultrasound and CT cardiac images is to compensate for the weaknesses and combine the advantages of both modalities. However, multimodal registration is a complex and challenging task, since there is no specific relationship between the intensity values of corresponding pixels. Image preprocessing methods such as image denoising, edge detection and contour delineation are applied to obtain salient and significant features before the registration process. The feature-based Scale Invariant Feature Transform (SIFT) method and a homography transformation are then applied to find the transformation that aligns the floating image to the reference image. The registration results for three different patient datasets are assessed using objective performance measures to ensure that clinically meaningful results are obtained. Furthermore, the relationship between the pre-operative CT image and the transformed intra-operative Ultrasound image is evaluated using the joint histogram, MI and NMI. Although the proposed framework falls slightly short of achieving perfect compensation of cardiac movements and deformation, it can legitimately be used as an initialisation step for further studies in dynamic and deformable cardiac registration.
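
    The joint-histogram, MI and NMI evaluation mentioned above can be sketched as follows. Synthetic images stand in for the CT/Ultrasound pair, and the NMI here is the Studholme formulation (H(A)+H(B))/H(A,B), which is an assumption since the thesis does not specify the variant:

```python
import numpy as np

def joint_histogram(a, b, bins=32):
    """Joint 2-D histogram of corresponding pixel intensities in two images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    return h

def mi_nmi(a, b, bins=32):
    """MI and NMI from the joint histogram. This NMI is 1 for independent
    images and rises towards 2 as the images become deterministically related."""
    p = joint_histogram(a, b, bins)
    p = p / p.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return hx + hy - hxy, (hx + hy) / hxy

# A CT-like intensity ramp and a nonlinear remapping of it standing in for the
# Ultrasound image: MI stays high even though intensities are not linearly related.
ct = np.linspace(0.0, 1.0, 10000).reshape(100, 100)
us = np.sqrt(ct)
mi_aligned, nmi_aligned = mi_nmi(ct, us)
mi_shuffled, _ = mi_nmi(ct, np.random.default_rng(0).permutation(us.ravel()).reshape(100, 100))
```

    This insensitivity to the intensity mapping is exactly why MI-family measures suit multimodal pairs where pixel intensities have no direct correspondence.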

    Segmentation and Fracture Detection in CT Images for Traumatic Pelvic Injuries

    In recent decades, more types and quantities of medical data have been collected thanks to advanced technology. A large amount of significant and critical information is contained in these data. Highly efficient, automated computational methods are urgently needed to process and analyze all available medical data in order to provide physicians with recommendations and predictions for diagnostic decisions and treatment planning. Traumatic pelvic injury is a severe yet common injury in the United States, often caused by motor vehicle accidents or falls. The information contained in pelvic Computed Tomography (CT) images is very important for assessing the severity and prognosis of traumatic pelvic injuries. Each pelvic CT scan includes a large number of slices, and each slice contains a large quantity of data that cannot be thoroughly and accurately analyzed via simple visual inspection at the desired accuracy and speed. Hence, a computer-assisted pelvic trauma decision-making system is needed to assist physicians in making accurate diagnostic decisions and determining treatment plans in a short period of time. Pelvic bone segmentation is a vital step in analyzing pelvic CT images and assisting physicians with diagnostic decisions in traumatic pelvic injuries. In this study, a new hierarchical segmentation algorithm is proposed to automatically extract multiple-level bone structures using a combination of anatomical knowledge and computational techniques. First, morphological operations, image enhancement, and edge detection are performed for preliminary bone segmentation. The proposed algorithm then uses a template-based best-shape-matching method that provides an entirely automated segmentation process. This is followed by the proposed Registered Active Shape Model (RASM) algorithm, which extracts pelvic bone tissues using more robust training models than the standard ASM algorithm.
    In addition, a novel hierarchical initialization process for RASM is proposed to address the main shortcoming of the standard ASM, namely its high sensitivity to initialization. Two suitable measures, Mean Distance and Mis-segmented Area, are defined to quantify segmentation accuracy. Successful segmentation results indicate the effectiveness and robustness of the proposed algorithm. Segmentation performance is also compared between the proposed method and the Snake method, and a cross-validation process is designed to demonstrate the effectiveness of the training models. 3D pelvic bone models are built after the pelvic bone structures are segmented from consecutive 2D CT slices. Automatic and accurate detection of fractures from the segmented bones in traumatic pelvic injuries can help physicians assess the severity of a patient's injuries. The extraction of fracture features (such as the presence and location of fractures), as well as the measurement of fracture displacement, is vital for assisting physicians in making faster and more accurate decisions. In this project, after bone segmentation, fracture detection is performed using a hierarchical algorithm based on wavelet transformation, adaptive windowing, boundary tracing and masking. A quantitative measure of fracture severity based on pelvic CT scans is also defined and explored. The results are promising, demonstrating that the proposed method is not only capable of automatically detecting both major and minor fractures, but also has potential for clinical applications.
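
    A Mean Distance measure of the kind mentioned above can be sketched as the average distance from each boundary point of one contour to the nearest point of the other; the exact definition used in the thesis may differ, so treat this as an illustrative version with invented contours:

```python
import numpy as np

def mean_distance(boundary_a, boundary_b):
    """Mean, over points of contour A, of the distance to the nearest point of
    contour B (in pixels). A symmetric variant averages both directions."""
    a = np.asarray(boundary_a, dtype=float)
    b = np.asarray(boundary_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    return float(d.min(axis=1).mean())

# Hypothetical example: a circular bone boundary and the same boundary
# segmented one pixel off; the score should be small but nonzero.
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = np.stack([50 + 20 * np.cos(t), 50 + 20 * np.sin(t)], axis=1)
shifted = circle + np.array([1.0, 0.0])
md = mean_distance(circle, shifted)
```

    A perfect segmentation scores 0, and the score grows with boundary error, which is what makes it usable alongside Mis-segmented Area for comparing RASM against the Snake method.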

    Validation of an in vivo model for monitoring trabecular bone quality changes using finite element analysis.

    A combination of three techniques – high-resolution micro computed tomography (micro CT) scanning, Archimedes-based volume fraction measurement, and serial sectioning or milling – was used to determine the volume fraction, trabecular thickness, trabecular separation, and trabecular number, while micro finite element analysis (micro FEA) combined with mechanical testing was used to determine the apparent stiffness and tissue modulus, in order to quantify bone quality in rabbit distal femur trabecular bone. The objectives of this dissertation were two-fold: first, to develop the capabilities of micro CT scanning and micro CT image segmentation based on a slice-by-slice global thresholding technique to investigate trabecular microstructural changes in vivo and in vitro; and second, to develop the capability of translating micro CT scans into three-dimensional finite element models based on a direct voxel conversion technique. These results were validated between the in vivo and in vitro scans, and against the Archimedes-based volume fraction measurements and the serial sectioning or milling experiments. The micro FE models were run as linear analyses, and the same bone cubes were mechanically tested in compression to determine the correct tissue modulus of the bone specimens. The apparent stiffness of these micro FE models was then recalculated using the average tissue modulus. A total of six six-month-old New Zealand white rabbits were used in this study. Three rabbits were scanned twice in vivo seven days apart (T1 and T7) and three rabbits were scanned only once in vivo. All of the femurs were scanned in vitro. All micro CT images were obtained at 28 μm (in vivo) or 14 μm (in vitro) nominal resolution. Specimens from six left and right rabbit distal femurs (medial and lateral) were measured based on Archimedes' principle and serial milling.
    The volume fractions for the lateral condyles between the two in vivo scans T1 (0.401±0.015) and T7 (0.397±0.021), between in vitro micro CT (0.352±0.035) and Archimedes (0.365±0.031), and between in vitro micro CT (0.352±0.035) and serial milling (0.369±0.031) were not significantly different. The medial condyles were also not significantly different: T1 (0.513±0.010), T7 (0.515±0.011), in vitro micro CT (0.454±0.049), Archimedes (0.460±0.060) and serial milling (0.467±0.505). Specimens from another six left and right distal femurs (medial and lateral) were mechanically tested along the anterior-posterior direction. The tissue modulus of each specimen was determined by scaling it so that the apparent stiffness calculated from FEA equalled that measured by mechanical compressive testing (MTS). Based on this constant tissue modulus, the recalculated FEA apparent stiffness (1.77E9±6.45E8) and MTS apparent stiffness (1.76E9±7.37E8) were linearly correlated (r = 0.8721). These findings suggest that slice-by-slice global thresholding and direct voxel conversion are sensitive, reliable and consistent for studying trabecular bone microstructural changes in vivo using high-resolution (< 28 μm) micro CT scanning and micro FEA.
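
    Because the micro FE models are linear, the predicted apparent stiffness scales proportionally with the assumed tissue modulus, so the calibrated modulus is a simple rescaling; a sketch with invented numbers (the study's actual values are not reproduced here):

```python
def calibrate_tissue_modulus(e_initial, k_fea_initial, k_mts):
    """Linear FE: apparent stiffness is proportional to the tissue modulus, so
    the modulus that reproduces the measured stiffness is a rescaling."""
    return e_initial * (k_mts / k_fea_initial)

# Hypothetical specimen: a model run at an assumed 10 GPa tissue modulus
# predicts 2.0e9 N/m apparent stiffness; compression testing measured 1.76e9 N/m.
e0 = 10e9
e_tissue = calibrate_tissue_modulus(e0, k_fea_initial=2.0e9, k_mts=1.76e9)

# Rerunning the linear FE model at e_tissue would reproduce the measured value:
k_rescaled = 2.0e9 * (e_tissue / e0)
```

    Averaging the per-specimen calibrated moduli and recalculating every model with that single value is what produces the paired FEA/MTS stiffnesses whose correlation is reported above.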