
    Combined Mutual Information of Intensity and Gradient for Multi-modal Medical Image Registration

    In this thesis, registration methods for multi-modal medical images are reviewed, with mutual information-based methods discussed in detail. Since it was proposed, mutual information has attracted intensive research and become very popular; however, its robustness is questionable and it may fail in some cases. A possible reason is that it does not consider the spatial information in the image pair. To improve this measure, the thesis proposes the combined mutual information of intensity and gradient for multi-modal medical image registration. The proposed measure utilizes both the intensity and gradient information of an image pair, and maximizing it is assumed to correctly register the pair. Optimization of the registration measure in a multi-dimensional space is another major issue in multi-modal medical image registration. The thesis first briefly reviews commonly used optimization techniques and then discusses in detail Powell's conjugate direction set method, which is implemented to find the maximum of the combined mutual information of an image pair. In the experiments, we first register slice images scanned in a single patient in the same or different scanning sessions using the proposed method. Then 20 pairs of co-registered CT and PET slice images at three different resolutions are used to study the performance of the proposed measure and four other measures discussed in this thesis. Experimental results indicate that the proposed combined measure produces reliable registrations and outperforms the intensity- and gradient-based measures at all three resolutions.
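    The combined measure described above can be sketched as a weighted sum of the mutual information of the intensities and the mutual information of the gradient magnitudes. The weighting `alpha` and the histogram bin count are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information estimated from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def gradient_magnitude(img):
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def combined_mi(a, b, alpha=0.5):
    """Weighted combination of intensity MI and gradient-magnitude MI."""
    return alpha * mutual_information(a, b) + (1 - alpha) * mutual_information(
        gradient_magnitude(a), gradient_magnitude(b))
```

A correct registration should maximize `combined_mi`; in the thesis this maximum is sought with Powell's conjugate direction set method over the transformation parameters.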

    Improvements in the registration of multimodal medical imaging : application to intensity inhomogeneity and partial volume corrections

    Alignment or registration of medical images plays a relevant role in clinical diagnostic and treatment decisions as well as in research settings. With the advent of new technologies for multimodal imaging, robust registration of functional and anatomical information is still a challenge, particularly in small-animal imaging, given the lesser structural content of certain anatomical parts, such as the brain, compared to humans. Besides, patient-dependent and acquisition artefacts affecting the information content of the images further complicate registration, as is the case for intensity inhomogeneities (IIH) appearing in MRI and the partial volume effect (PVE) attached to PET imaging. Reference methods exist for accurate image registration, but their performance deteriorates severely in situations involving little image overlap. While several approaches to IIH and PVE correction exist, these methods either do not guarantee robust registration or require it as input. This thesis focuses on overcoming current limitations of registration to enable novel IIH and PVE correction methods.

    Shape/image registration for medical imaging : novel algorithms and applications.

    This dissertation looks at two different categories of registration approaches, shape registration and image registration, and considers their applications in the medical imaging field. Shape registration is an important problem in computer vision, computer graphics, and medical imaging. It has been handled in different manners in many applications like shape-based segmentation, shape recognition, and tracking. Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. Many image processing applications like remote sensing, fusion of medical images, and computer-aided surgery need image registration. This study deals with two different applications in the field of medical image analysis. The first is shape-based segmentation of the human vertebral bodies (VBs). The vertebra consists of the VB, spinous process, and other anatomical regions. The spinous process, pedicles, and ribs should not be included in bone mineral density (BMD) measurements. VB segmentation is not an easy task since the ribs have similar gray-level information. This dissertation investigates two different segmentation approaches, both following variational shape-based segmentation frameworks. The first approach deals with the two-dimensional (2D) case. It starts by obtaining an initial segmentation using intensity/spatial interaction models; then a shape model is registered to the image domain; finally, the optimal segmentation is obtained by optimizing an energy functional that integrates the shape model with the intensity information. The second is a 3D simultaneous segmentation and registration approach. The intensity information is handled by embedding a Willmore flow into the level set segmentation framework.
    Then the shape variations are estimated using a new probabilistic distance model. The experimental results show that the segmentation accuracy of the framework is much higher than that of other alternatives. Applications to BMD measurements of the vertebral body are given to illustrate the accuracy of the proposed segmentation approach. The second application is in the field of computer-aided surgery, specifically ankle fusion surgery. The long-term goal of this work is to apply this technique to ankle fusion surgery to determine the proper size and orientation of the screws used to fuse the bones together. In addition, we try to localize the best bone region in which to fix these screws. To achieve these goals, 2D-3D registration is introduced. The role of 2D-3D registration is to enhance the quality of the surgical procedure in terms of time and accuracy, and it would greatly reduce the need for repeated surgeries, thus saving the patients time, expense, and trauma.
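    The dissertation's registration is variational; as a minimal illustration of the point-set alignment at the core of shape registration (not the author's algorithm), here is the closed-form least-squares rigid alignment of two corresponding point sets (the Kabsch/SVD solution):

```python
import numpy as np

def procrustes_align(src, dst):
    """Rigid (rotation + translation) alignment of two (n, d) point
    sets with known correspondences, via the SVD (Kabsch) solution."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Richer shape models, such as the variational and probabilistic ones in the dissertation, build deformation and statistical variation on top of such a rigid pre-alignment.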

    Multimodal breast imaging: Registration, visualization, and image synthesis

    The benefit of registration and fusion of functional images with anatomical images is well appreciated with the advent of combined positron emission tomography and x-ray computed tomography (PET/CT) scanners. This is especially true in breast cancer imaging, where modalities such as high-resolution and dynamic contrast-enhanced magnetic resonance imaging (MRI) and F-18-FDG positron emission tomography (PET) have steadily gained acceptance in addition to x-ray mammography, the primary detection tool. The increased interest in combined PET/MRI images has raised the demand for appropriate registration and fusion algorithms. A new approach to MRI-to-PET non-rigid breast image registration was developed and evaluated, based on the locations of a small number of fiducial skin markers (FSMs) visible in both modalities. The observed FSM displacement vectors between MRI and PET, distributed piecewise-linearly over the breast volume, produce a deformed finite-element mesh that reasonably approximates the non-rigid deformation of the breast tissue between the MRI and PET scans. The method does not require a biomechanical breast tissue model, and is robust and fast. It was evaluated both qualitatively and quantitatively on patients and on a deformable breast phantom, and yields quality images with average target registration error (TRE) below 4 mm. The importance of appropriately jointly displaying (i.e. fusing) the registered images has often been neglected and underestimated. A combined MRI/PET image has the benefit of directly showing the spatial relationships between the two modalities, increasing the sensitivity, specificity, and accuracy of diagnosis.
    Additional information on the morphology and dynamic behavior of a suspicious lesion can be provided, allowing more accurate lesion localization, including mapping of hyper- and hypo-metabolic regions as well as better lesion-boundary definition, improving accuracy when grading the breast cancer and assessing the need for biopsy. Eight promising fusion-for-visualization techniques were evaluated by radiologists from University Hospital in Syracuse, NY. Preliminary results indicate that the radiologists were better able to perform a series of tasks when reading fused PET/MRI data sets using color tables generated by a newly developed genetic algorithm, as compared to other commonly used schemes. The lack of a known ground truth hinders the development and evaluation of new algorithms for tasks such as registration and classification. A preliminary mesh-based breast phantom containing 12 distinct tissue classes, along with the tissue properties necessary for the simulation of dynamic positron emission tomography scans, was created. The phantom contains multiple components that can be separately manipulated, using geometric transformations, to represent populations or a single individual imaged in multiple positions. This phantom will support future multimodal breast imaging work.
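    The piecewise-linear interpolation of FSM displacements over a finite-element mesh can be illustrated for a single triangular element: the barycentric coordinates of a point blend the displacement vectors attached to the element's vertices. This is only a one-element sketch; the thesis builds a full deformed mesh over the breast volume:

```python
import numpy as np

def barycentric_displacement(p, tri, disp):
    """Piecewise-linear displacement inside one triangular element.

    tri  : (3, 2) vertex coordinates
    disp : (3, 2) displacement vector at each vertex
    p    : (2,)  query point inside the triangle
    """
    a, b, c = tri
    T = np.column_stack([b - a, c - a])   # edge basis
    l1, l2 = np.linalg.solve(T, p - a)    # barycentric coords of p
    w = np.array([1.0 - l1 - l2, l1, l2]) # weights sum to 1
    return w @ disp                       # linear blend of vertex vectors
```

At a vertex the interpolated displacement equals that vertex's marker displacement, and it varies linearly in between, which is exactly the continuity a conforming linear FE mesh provides.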

    Multispectral Imaging For Face Recognition Over Varying Illumination

    This dissertation addresses the advantage of using multispectral narrow-band images over conventional broad-band images for improved face recognition under varying illumination. To verify the effectiveness of multispectral images for improving face recognition performance, three sequential procedures are carried out: multispectral face image acquisition, image fusion of the multispectral bands, and spectral band selection to remove information redundancy. Several efficient image fusion algorithms are proposed and evaluated on spectral narrow-band face images in comparison to conventional images. Physics-based weighted fusion and illumination-adjustment fusion make good use of the spectral information in the multispectral imaging process. The results demonstrate that fused narrow-band images outperform conventional broad-band images under varying illumination. In the case where multispectral images are acquired over severe changes in daylight, the fused images outperform conventional broad-band images by up to 78%. The success of fusing multispectral images lies in the fact that multispectral images can separate the illumination information from the reflectance of objects, which is impossible for conventional broad-band images. To reduce the information redundancy among multispectral images and simplify the imaging system, distance-based band selection is proposed, in which a quantitative evaluation metric is defined to evaluate and differentiate the performance of multispectral narrow-band images. This method proves exceptionally robust to parameter changes. Furthermore, complexity-guided distance-based band selection is proposed, using a model selection criterion for automatic selection. The performance of the selected bands exceeds that of conventional images by up to 15%.
    From the significant performance improvements achieved via distance-based band selection and complexity-guided distance-based band selection, we show that specific facial information carried in certain narrow-band spectral images can enhance face recognition performance compared to broad-band images. In addition, both algorithms are shown to be independent of the recognition engine. Significant performance improvement is achieved by the proposed image fusion and band selection algorithms under varying illumination, including outdoor daylight conditions. The proposed imaging system and image processing algorithms open a new avenue toward automatic face recognition systems with better recognition performance than conventional systems under varying illumination.
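    The general shape of distance-based band selection can be sketched as a greedy max-min-distance procedure over per-band feature vectors. The dissertation defines its own evaluation metric; the Euclidean distance and the first-band seed below are placeholders for illustration only:

```python
import numpy as np

def greedy_band_selection(bands, k):
    """Greedily pick k bands, each time choosing the band whose
    minimum distance to the already-selected set is largest.

    bands : (n_bands, n_features) array of per-band descriptors
    """
    bands = np.asarray(bands, dtype=float)
    chosen = [0]                      # arbitrary seed band
    while len(chosen) < k:
        # distance of every band to its nearest selected band
        d = np.min(
            np.linalg.norm(bands[:, None] - bands[chosen][None], axis=-1),
            axis=1)
        d[chosen] = -1.0              # never re-pick a selected band
        chosen.append(int(np.argmax(d)))
    return chosen
```

A complexity-guided variant would additionally use a model selection criterion to decide `k` automatically rather than fixing it in advance.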

    Parallel Computation of Nonrigid Image Registration

    Automatic intensity-based nonrigid image registration has significant impact in medical applications such as multimodality image fusion, serial comparison for monitoring disease progression or regression, and minimally invasive image-guided interventions. However, due to the memory- and compute-intensive nature of the operations, intensity-based image registration has remained too slow to be practical for clinical adoption, with its use limited primarily to use as a pre-operative tool. Efficient registration methods can open new possibilities for improved and interactive intraoperative tools and capabilities. In this thesis, we propose an efficient parallel implementation of intensity-based three-dimensional nonrigid image registration on a commodity graphics processing unit. Optimization techniques are developed to accelerate the compute-intensive mutual information computation. The study is performed on the hierarchical volume subdivision-based algorithm, which is inherently faster than other nonrigid registration algorithms and structurally well suited to data-parallel computation platforms. The proposed implementation achieves a more than 50-fold runtime improvement over a standard CPU implementation; the execution time of nonrigid image registration is reduced from hours to minutes while retaining the same level of registration accuracy.
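    The core of the mutual information computation is a joint histogram, which reduces to a per-voxel bin-index computation followed by a scatter-add, the same gather/scatter pattern a GPU kernel parallelizes over voxels. A CPU-side NumPy sketch of that data-parallel formulation (assuming intensities already normalized to [0, 1), which is an assumption of this sketch, not a statement about the thesis implementation):

```python
import numpy as np

def joint_histogram(fixed, moving, bins=32):
    """Joint histogram via one flattened scatter-add.

    Each voxel contributes to bin (f, m); flattening the 2-D bin
    index to f * bins + m turns the whole histogram into a single
    bincount, i.e. one parallel scatter-add over all voxels.
    """
    f = np.clip((fixed  * bins).astype(int), 0, bins - 1)
    m = np.clip((moving * bins).astype(int), 0, bins - 1)
    return np.bincount((f * bins + m).ravel(),
                       minlength=bins * bins).reshape(bins, bins)
```

On a GPU the scatter-add becomes atomic increments (or per-block partial histograms that are reduced afterwards), which is where most of the optimization effort in such implementations goes.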

    Deep learning in medical image registration: introduction and survey

    Image registration (IR) is the process of deforming images to align them with respect to a reference space, making it easier for medical practitioners to examine various medical images in a standardized reference frame, for example with the same rotation and scale. This survey introduces image registration using a simple numeric example and provides a definition of image registration along with a space-oriented symbolic representation. It covers various classes of image transformations, including affine, deformable, invertible, and bidirectional transformations, as well as medical image registration algorithms such as VoxelMorph, Demons, SyN, Iterative Closest Point, and SynthMorph. It also explores atlas-based registration and multistage image registration techniques, including coarse-to-fine and pyramid approaches. Furthermore, the survey discusses medical image registration taxonomies, datasets, and evaluation measures, such as correlation-based metrics, segmentation-based metrics, processing time, and model size. It also explores applications in image-guided surgery, motion tracking, and tumor diagnosis. Finally, it addresses future research directions, including the further development of transformers.
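    The affine and invertible transformations surveyed above are conveniently handled in homogeneous coordinates, where composing or inverting a transform is plain matrix algebra. A minimal 2-D sketch:

```python
import numpy as np

def make_affine(A, t):
    """3x3 homogeneous matrix for the 2-D affine map x' = A x + t."""
    M = np.eye(3)
    M[:2, :2], M[:2, 2] = A, t
    return M

def apply_affine(M, pts):
    """Apply a 3x3 homogeneous affine to an (n, 2) array of points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])  # append w = 1
    return (ph @ M.T)[:, :2]
```

Because the transform is a single invertible matrix, warping with `M` and then with `np.linalg.inv(M)` returns the original points, which is precisely the invertibility property the survey distinguishes from general deformable transformations.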

    Stabilization of thermal breast image time series (Rinnan lämpökuvien aikasarjojen stabilointi)

    Dynamic infrared imaging (DIRI) is an emerging technology for the early detection of breast cancer. In this method, a time series of thermal breast images is obtained. Patient motion during the time series can distort the DIRI analysis to the point that detection of breast cancer becomes impossible. Image registration can be used to eliminate patient motion from the time-series data. In this thesis, two different registration algorithms were tested: Thirion's demons algorithm and an algorithm based on an affine transformation. Furthermore, a combined method, in which the affine method is used as a pre-registration step for the demons method, was tested. The algorithms were implemented in Matlab, and their performance in registering a time series of thermal breast images was evaluated using four different performance metrics. The registration algorithms were applied to time-series data from 20 healthy subjects (no malignant lesions). The demons method outperformed the affine method and is recommended as a suitable tool for time-series registration of thermal breast images. The combined method achieved slightly better results than the demons method, but with significantly increased computation time.
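    The demons algorithm tested above iterates a simple per-pixel force. Its classic Thirion update, u = (m − f) ∇f / (|∇f|² + (m − f)²), can be sketched in NumPy as one step (the thesis's Matlab implementation details, smoothing, and iteration schedule are not reproduced here):

```python
import numpy as np

def demons_step(fixed, moving):
    """One Thirion demons update: per-pixel displacement
    u = (m - f) * grad(f) / (|grad f|^2 + (m - f)^2),
    set to zero wherever the denominator vanishes."""
    diff = moving - fixed
    gy, gx = np.gradient(fixed.astype(float))
    denom = gx**2 + gy**2 + diff**2
    with np.errstate(divide="ignore", invalid="ignore"):
        ux = np.where(denom > 0, diff * gx / denom, 0.0)
        uy = np.where(denom > 0, diff * gy / denom, 0.0)
    return ux, uy
```

In a full registration loop the displacement field is accumulated, smoothed with a Gaussian (which regularizes the deformation), and used to resample the moving frame before the next iteration.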