3 research outputs found

    System Optimization and Patient Translational Motion Correction for Reduction of Artifacts in a Fan-Beam CT Scanner

    In computed tomography (CT) systems, many different artifacts may be present in the reconstructed image, and these artifacts can greatly reduce image quality. For our laboratory prototype CT system, a fan-beam/cone-beam focal high-resolution computed tomography (fHRCT) scanner, the major artifacts affecting image quality are distortions due to errors in the reconstruction algorithm's geometric parameters, ring artifacts caused by uncalibrated detectors, cupping and streaking created by beam hardening, and patient-based motion artifacts. System optimization was required to reduce the effects of the first three artifact types, and an algorithm for correcting translational motion was developed for the last. The system optimization occurred in three parts. First, a multi-step process was developed to determine the geometric parameters of the scanner; the ability of the source-detector gantry to translate allowed a precise method to be created for calculating these parameters. Second, a general flat-field correction was used to linearize the detectors and reduce the ring artifacts. Lastly, beam-hardening artifacts were decreased by a preprocessing technique that assumes linear proportionality between the thickness of the calibration material, aluminum, and the experimental measurement of ln(N0/N), where N0 is the total number of photons entering the material and N is the number of photons exiting it. In addition to the system optimization, an algorithm for correcting translational motion was developed and implemented. In this method, the integral mass and the center of mass at each projection angle were observed to follow sinusoidal or near-sinusoidal curves. Fits to the motion-encoded sinograms determined both curves and, consequently, the magnitude and direction of the motion. Each projection was then individually adjusted to compensate for this motion: it was widened or narrowed according to the ratio of the actual and calculated ideal projection integrals, and shifted so that its centroid matched the calculated ideal location. A custom imaging phantom with an outer diameter of approximately 16 mm was used to test the motion-correction algorithm in both simulated and experimental cases. A baseline fractional error of 0.16 was established for motion-free images measured on the scanner. Various motion patterns were tested, varying the distance of the motion, the angle at which it occurred, and the fraction of the sinogram corrupted by motion. Experimental testing showed a maximum error increase of 2.7% over the baseline for motion-corrected images with 4 mm of motion. The overall optimization provided acceptable results for the reconstructed image and good-quality projections for use in the motion-correction algorithm. Distortion and ring artifacts were almost completely removed, and the beam-hardening artifacts were greatly reduced. The motion-correction algorithm implemented in this thesis reduces the error due to translational motion and provides a foundation for future corrections of more complex motions.
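    The two algorithmic pieces above translate naturally into code. The sketch below is a minimal illustration under stated assumptions, not the thesis implementation: the sinogram is an (angles × detector-bins) NumPy array of ln(N0/N) values, the calibration arrays come from scans of known aluminum thicknesses, and every function name is illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def linearize_beam_hardening(p, cal_thickness, cal_p):
    """Sketch of the preprocessing step: map measured ln(N0/N) values to
    equivalent aluminum thickness via a polynomial calibration fit, then
    rescale so the corrected value is linearly proportional to thickness.
    Needs at least 4 calibration points; the slope choice is an assumption."""
    to_thickness = np.polyfit(cal_p, cal_thickness, deg=3)
    slope = cal_p[-1] / cal_thickness[-1]   # assumed linear relation
    return slope * np.polyval(to_thickness, p)

def _sinusoid(theta, amp, phase, offset):
    return amp * np.sin(theta + phase) + offset

def correct_translational_motion(sino, angles):
    """Sketch of the motion correction: fit sinusoidal models to the
    per-projection integral mass and center of mass, then rescale and
    shift each projection to match the fitted ideal curves."""
    n_angles, n_det = sino.shape
    x = np.arange(n_det, dtype=float)

    mass = sino.sum(axis=1)                      # integral mass
    com = (sino * x).sum(axis=1) / mass          # center of mass

    pm, _ = curve_fit(_sinusoid, angles, mass,
                      p0=[np.ptp(mass) / 2, 0.0, mass.mean()])
    pc, _ = curve_fit(_sinusoid, angles, com,
                      p0=[np.ptp(com) / 2, 0.0, com.mean()])
    ideal_mass = _sinusoid(angles, *pm)
    ideal_com = _sinusoid(angles, *pc)

    out = np.empty_like(sino)
    for i in range(n_angles):
        s = ideal_mass[i] / mass[i]   # widen/narrow by the mass ratio
        # Resampling p((x - ideal_com)/s + com) stretches the abscissa
        # by s (scaling the integral by s) and moves the centroid from
        # the measured position to the fitted ideal one.
        out[i] = np.interp((x - ideal_com[i]) / s + com[i], x, sino[i],
                           left=0.0, right=0.0)
    return out
```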

    Medical image registration by neural networks: a regression-based registration approach

    This thesis focuses on the development and evaluation of a registration-by-regression approach for the 3D/2D registration of coronary Computed Tomography Angiography (CTA) and X-ray angiography. This regression-based method relates image features of 2D projection images to the transformation parameters of the 3D image by nonlinear regression. It treats registration as a regression problem, as an alternative to the traditional iterative approach, which often comes with high computational cost and limited capture range. First, we presented a survey of regression-based registration methods for medical applications, with a summary of their main characteristics (Chapter 2). Second, we studied the registration methodology, addressing the input features and the choice of regression model (Chapters 3 and 4). For that purpose, we evaluated different options using simulated X-ray images generated from coronary artery tree models derived from 3D CTA scans, and we compared the registration-by-regression results with a method based on iterative optimization. Different image features of the 2D projections and seven regression techniques were considered. For simulated X-rays, the regression approach was slightly less accurate but much more robust than the iterative optimization approach; neural networks obtained accurate results and proved robust to large initial misalignments. Third, we evaluated the registration-by-regression method on clinical data, integrating the preoperative 3D CTA of the coronary arteries with intraoperative 2D X-ray angiography images (Chapter 5). For the evaluation of the image registration, a gold-standard registration was established using an exhaustive search followed by a multi-observer visual scoring procedure. The influence of preprocessing options for the simulated images and the real X-rays was studied, and several image features were compared. The coronary registration-by-regression results were not satisfactory, resembling manual initialization accuracy; in its current configuration, the proposed method is therefore not sufficiently accurate for clinical practice on this problem. The framework developed enables us to better understand the method's dependence on the differences between simulated and real images. The main difficulty lies in the substantial differences in appearance between the images used for training (simulated X-rays from 3D coronary models) and the actual images obtained during the intervention (real X-ray angiography). We suggest alternative solutions and recommend evaluating the registration-by-regression approach in other applications where training data is available with an appearance similar to the eventual test data.
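    A minimal sketch of the registration-by-regression idea follows, assuming scikit-learn, a toy projection renderer standing in for the DRR generation from the 3D coronary model, and simple image-moment features; none of these names or choices are the features or code evaluated in the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

def render_toy_projection(pose, size=64):
    """Toy stand-in for rendering a simulated X-ray at a given pose:
    a Gaussian blob whose position and width depend on the parameters."""
    tx, ty, s = pose
    yy, xx = np.mgrid[0:size, 0:size]
    return np.exp(-(((xx - size / 2 - tx) ** 2 + (yy - size / 2 - ty) ** 2)
                    / (2 * (5 + s) ** 2)))

def features(img):
    """Placeholder global 2D features: weighted centroid and second moments."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    w = img.sum()
    cx, cy = (img * xx).sum() / w, (img * yy).sum() / w
    return np.array([cx, cy,
                     (img * (xx - cx) ** 2).sum() / w,
                     (img * (yy - cy) ** 2).sum() / w,
                     (img * (xx - cx) * (yy - cy)).sum() / w])

# Training set: projections rendered at known transformation parameters.
poses = rng.uniform(-10, 10, size=(2000, 3))
X = np.stack([features(render_toy_projection(p)) for p in poses])

# Nonlinear regression from 2D features to transformation parameters.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64),
                                   max_iter=3000, random_state=0))
model.fit(X, poses)

# At test time, one feature extraction plus one forward pass replaces the
# iterative optimization loop, giving low cost and a large capture range.
test_pose = rng.uniform(-10, 10, size=3)
estimate = model.predict(features(render_toy_projection(test_pose))[None])
```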

    Learning to extract features for 2D – 3D multimodal registration

    The ability to capture depth information from a scene has greatly increased in recent years. 3D sensors, traditionally high-cost, low-resolution devices, are being democratized, and 3D scans of indoor and outdoor scenes are becoming more and more common. However, there is still a large gap between the number of captures performed with 2D and with 3D sensors. Although 3D sensors provide more information about the scene, 2D sensors remain more accessible and more widely used. This trade-off between availability and information leads to a multimodal scenario of mixed 2D and 3D data. This thesis explores the fundamental building block of this multimodal scenario: the registration between a single 2D image and a single unorganized point cloud. An unorganized 3D point cloud is the basic representation of a 3D capture; the surveyed points are represented only by their real-world coordinates and, optionally, by their colour information. This simplistic representation brings multiple challenges to the registration, since most state-of-the-art works rely on metadata about the scene or on prior knowledge. Two different techniques are explored to perform the registration: a keypoint-based technique and an edge-based technique. The keypoint-based technique estimates the transformation by means of correspondences detected using Deep Learning, while the edge-based technique refines a transformation using multimodal edge detection to establish anchor points for the estimation. An extensive evaluation of the proposed methodologies is performed. Although further research is needed to achieve adequate performance, the obtained results show the potential of deep learning techniques for learning 2D and 3D similarities. The results also show the good performance of the proposed 2D-3D iterative refinement, on par with the state of the art in 3D-3D registration.
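    Both techniques ultimately have to turn 2D-3D correspondences into a rigid transformation. The sketch below shows one standard way to do that step, assuming known camera intrinsics and correspondences already produced by the learned keypoint matcher or the multimodal edge anchors; OpenCV's RANSAC PnP solver is a common choice for this, not necessarily the one used in the thesis.

```python
import numpy as np
import cv2

def estimate_pose(pts2d, pts3d, K):
    """pts2d: (N, 2) pixel coordinates; pts3d: (N, 3) point-cloud
    coordinates; K: 3x3 camera intrinsics. Returns the 4x4
    cloud-to-camera transform and the RANSAC inlier indices."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, None,
        reprojectionError=3.0, iterationsCount=1000)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)          # axis-angle -> rotation matrix
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T, inliers
```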
