    Novel high performance techniques for high definition computer aided tomography

    Medical image processing is an interdisciplinary field involving multiple research areas: image acquisition, scanner design, image reconstruction algorithms, visualization, etc. X-Ray Computed Tomography (CT) is a medical imaging modality based on the attenuation X-rays undergo as they pass through the body. Intrinsic differences in the attenuation properties of bone, air, and soft tissue result in high-contrast images of anatomical structures. The main objective of CT is to obtain tomographic images from radiographs acquired with X-ray scanners; the process of building a 3D image or volume from the 2D radiographs is known as reconstruction. One of the latest trends in CT is reducing the radiation dose delivered to patients by decreasing the amount of acquired data. This reduction produces artefacts in the final images if conventional reconstruction methods are used, making it advisable to employ iterative reconstruction algorithms. Among the many reconstruction algorithms available, two types stand out: traditional algorithms, which are fast but cannot produce high-quality images when data are limited; and iterative algorithms, which are slower but more reliable when traditional methods fail to meet quality requirements. One of the priorities of reconstruction is obtaining the final images in near real time, in order to shorten diagnosis. Accomplishing this requires new high-performance techniques and methods for accelerating these types of algorithms.

    This thesis addresses the challenges of both traditional and iterative reconstruction algorithms, regarding acceleration and image quality. One common approach to accelerating these algorithms is the use of shared-memory and heterogeneous architectures. We propose a novel simulation/reconstruction framework, FUX-Sim. This framework follows the hypothesis that the development of new flexible X-ray systems can benefit from computer simulations, which also allow performance to be checked before expensive real systems are implemented. Its modular design abstracts the complexities of programming for accelerated devices, facilitating the development and evaluation of the different configurations and geometries available. To obtain near-real-time execution, low-level optimizations of the framework's main components are provided for Graphics Processing Unit (GPU) architectures. Another alternative tackled in this thesis is the acceleration of iterative reconstruction algorithms using distributed-memory architectures. We present a novel architecture that unifies the two most important paradigms in scientific computing today: High Performance Computing (HPC) and Big Data. The proposed architecture combines Big Data frameworks with the advantages of accelerated computing. The methods presented in this thesis enable more flexible scanner configurations while offering an accelerated solution. Regarding performance, our approach is as competitive as the solutions found in the literature. Additionally, we demonstrate that our solution scales with the size of the problem, enabling the reconstruction of high-resolution images.
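    As standard textbook background for the traditional-versus-iterative distinction above (not taken from the thesis; the symbols A, x, b, W, and λ are our own notation): X-ray attenuation follows the Beer-Lambert law, discretization turns reconstruction into a linear system, and iterative methods refine the volume through repeated correction steps.

```latex
% Beer-Lambert attenuation along a ray path L through the object:
I = I_0 \exp\!\Big( -\int_L \mu(s)\,\mathrm{d}s \Big)
% Taking b_i = \ln(I_0 / I_i) and discretizing \mu into a voxel vector x
% turns reconstruction into the linear system
A\,x \approx b
% which iterative methods solve by repeated updates, e.g. a SIRT-type step
% with relaxation \lambda and a diagonal weighting W:
x^{(k+1)} = x^{(k)} + \lambda\, A^{\mathsf{T}} W \big( b - A x^{(k)} \big)
```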
    This work has been mainly funded by an FPU fellowship (FPU14/03875) from the Spanish Ministry of Education. It has also been partially supported by other grants:
    • DPI2016-79075-R, "Nuevos escenarios de tomografía por rayos X", from the Spanish Ministry of Economy and Competitiveness.
    • TIN2016-79637-P, "Towards Unification of HPC and Big Data Paradigms", from the Spanish Ministry of Economy and Competitiveness.
    • A Short-Term Scientific Mission (STSM) grant from the NESUS COST Action IC1305.
    • TIN2013-41350-P, "Scalable Data Management Techniques for High-End Computing Systems", from the Spanish Ministry of Economy and Competitiveness.
    • RTC-2014-3028-1, NECRA, "Nuevos escenarios clínicos con radiología avanzada", from the Spanish Ministry of Economy and Competitiveness.

    Doctoral thesis with International Mention, Programa Oficial de Doctorado en Ciencia y Tecnología Informática. Committee: José Daniel García Sánchez (Chair), Katzlin Olcoz Herrero (Secretary), Domenico Tali (Member)

    Multi-GPU Acceleration of Iterative X-ray CT Image Reconstruction

    X-ray computed tomography is a widely used medical imaging modality for screening and diagnosing diseases and for image-guided radiation therapy treatment planning. Statistical iterative reconstruction (SIR) algorithms have the potential to significantly reduce image artifacts by minimizing a cost function that models the physics and statistics of the data acquisition process in X-ray CT. SIR algorithms outperform traditional analytical reconstructions for a wide range of applications, including nonstandard geometries arising from irregular sampling, limited angular range, missing data, and low-dose CT. The main hurdle to the widespread adoption of SIR algorithms in multislice X-ray CT reconstruction problems is their slow convergence rate and the associated computational time. We seek to design and develop fast parallel SIR algorithms for clinical X-ray CT scanners. Each of the following approaches is applied to real clinical helical CT data acquired from a Siemens Sensation 16 scanner and compared to a straightforward implementation of the Alternating Minimization (AM) algorithm of O'Sullivan and Benac [1]. We parallelize the computationally expensive projection and backprojection operations by exploiting the massively parallel hardware of three NVIDIA TITAN X Graphics Processing Unit (GPU) devices with CUDA programming tools, achieving an average speedup of 72X over a straightforward CPU implementation. We implement a multi-GPU, voxel-driven, multislice analytical reconstruction algorithm, Feldkamp-Davis-Kress (FDK) [2], and achieve an average overall speedup of 1382X over the baseline CPU implementation using three TITAN X GPUs. Moreover, we propose a novel adaptive surrogate-function based optimization scheme for the AM algorithm, resulting in more aggressive update steps in every iteration. On average, we double the convergence rate of our baseline AM algorithm and also improve image quality by using the adaptive surrogate function. We extend the multi-GPU and adaptive surrogate-function based acceleration techniques to dual-energy reconstruction problems as well. Furthermore, we design and develop a GPU-based deep Convolutional Neural Network (CNN) to denoise simulated low-dose X-ray CT images. Our experiments show significant improvements in image quality with the proposed deep CNN-based algorithm over widely used denoising techniques, including Block Matching 3-D (BM3D) and Weighted Nuclear Norm Minimization (WNNM). Overall, we have developed novel, fast, parallel, computationally efficient methods to perform multislice statistical reconstruction and image-based denoising on clinically sized datasets.
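    To make the projection/backprojection bottleneck concrete, here is a deliberately simplified parallel-beam sketch in NumPy/SciPy; it is illustrative only (the thesis targets clinical helical cone-beam geometry in CUDA, and all names here are our own). The angle loop is the natural axis to split across multiple GPUs.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    # Parallel-beam forward projection: rotate the image so each view's
    # rays run along axis 0, then integrate (sum) along that axis.
    return np.stack([rotate(image, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def back_project(sinogram, angles_deg, shape):
    # Adjoint-style backprojection: smear each 1-D view across the image
    # and rotate it back. In a multi-GPU setting each device would handle
    # a subset of the views and the partial volumes would be summed.
    recon = np.zeros(shape)
    for view, a in zip(sinogram, angles_deg):
        recon += rotate(np.tile(view, (shape[0], 1)), a,
                        reshape=False, order=1)
    return recon / len(angles_deg)

# Toy usage: one relaxed correction step, x += s * A^T (b - A x).
x = np.zeros((64, 64))
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
b = forward_project(np.pad(np.ones((32, 32)), 16), angles)  # box phantom
x += 0.1 * back_project(b - forward_project(x, angles), angles, x.shape)
```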

    Geometrical Calibration and Filter Optimization for Cone-Beam Computed Tomography

    This thesis discusses the requirements of a software library for tomography and derives a framework that can be used to realize various applications in cone-beam computed tomography (CBCT). The presented framework is self-contained and is realized in the MATLAB environment in combination with native low-level technologies (C/C++ and CUDA) to improve its computational performance, while providing accessibility and extensibility through the use of a scripting-language environment. On top of this framework, the realization of Katsevich's algorithm on multicore hardware is explained and the resulting implementation is compared to the Feldkamp, Davis and Kress (FDK) algorithm. It is also shown that this helical reconstruction method has the potential to reduce measurement uncertainty. However, misalignment artifacts appear more severe in helical reconstructions from real data than in circular ones. Especially for helical CBCT (H-CBCT), this suggests that a precise calibration of the computed tomography (CT) system is indispensable. Consequently, a self-calibration method is designed that can estimate the misalignment parameters from the cone-beam projection data without the need for any additional measurements. The presented method employs a multi-resolution 2D-3D registration technique and a novel volume update scheme in combination with a stochastic reprojection strategy to achieve reasonable runtime performance. The presented results show that this method reaches sub-voxel accuracy and can compete with current state-of-the-art online- and offline-calibration approaches. Additionally, for the construction of filters in the area of limited-angle tomography, a general scheme is derived that uses the Approximate Inverse (AI) to compute an optimized set of 2D angle-dependent projection filters. Optimal sets of filters are precomputed for two angular-range setups and reused to perform various evaluations on multiple datasets with a filtered backprojection (FBP)-type method. This approach is compared to the standard FDK algorithm and to the simultaneous iterative reconstruction technique (SIRT). The results of the study show that the introduced filter optimization produces results comparable to those of SIRT with respect to the reduction of reconstruction artifacts, while its runtime is comparable to that of the FDK algorithm.
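    For context on where filter optimization enters an FBP-type method: in standard FDK/FBP every projection row is filtered with a fixed ramp kernel before backprojection, and the Approximate Inverse approach described above replaces that fixed kernel with precomputed angle-dependent filters. A minimal sketch of the standard row-wise ramp filtering (our own illustration, not the thesis code):

```python
import numpy as np

def ramp_filter_rows(projections):
    # Apply the |f| ramp filter to each detector row in the frequency
    # domain, as in FDK/FBP. An optimized angle-dependent filter bank
    # would replace np.abs(freqs) below with a per-view kernel.
    n = projections.shape[-1]
    freqs = np.fft.fftfreq(n)
    spectrum = np.fft.fft(projections, axis=-1)
    return np.real(np.fft.ifft(spectrum * np.abs(freqs), axis=-1))
```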

    Towards clinical implementation of ultrafast combined kV-MV cone-beam CT for IGRT of lung tumors within breath-hold: evaluation of dosimetry and registration accuracy based on phantom studies

    Combined ultrafast 90°+90° kV-MV cone-beam computed tomography (CBCT) within a breath-hold of 15 s is a promising approach to accelerate imaging for patients with lung tumors treated with deep inspiration breath-hold (DIBH). To judge the clinical feasibility of kV-MV CBCT, two main properties have to be fulfilled: (1) image quality has to be sufficient for registration within 1 mm accuracy, and (2) dose exposure has to be small compared to the prescribed dose. The aim of this thesis was to develop concepts to test these properties of kV-MV CBCT through a comparison study against clinically established CBCT methods. The main aspects were accomplished as follows. Dosimetric properties: for a reliable measurement of the absorbed dose in the imaging process, accurate dose calibration was performed for kV and MV energies. Extensive work was done to determine beam quality for both energy ranges. For a direct comparison of MV and kV dose output, the relative biological effectiveness was taken into account. To simulate the patient situation, measurements were performed at various representative locations of an inhomogeneous thorax phantom. Furthermore, the CT dose index (CTDI) was determined for future quality-assurance purposes. A measured dose of 20.5 mGE in the target region was comparable to the widely used clinical imaging technique, whereas kV-MV spared healthy tissue and reduced the dose to 6.6 mGE (30%). These results show that, from the dosimetric point of view, kV-MV CBCT is suitable for hypofractionated DIBH. Registration accuracy: a detailed phantom registration study was performed with different tumor-mimicking shapes in an inhomogeneous thorax phantom. Ten randomly pre-selected isocenter shifts were applied using optical tracking with a high accuracy of 0.05 mm. Registration was performed with three methods: (1) manual registration, (2) the automatic software provided by the manufacturer, and (3) a self-developed automatic registration framework. An objective evaluation was achieved with the self-developed method by automatically determining an identical region of interest around the tumor shapes for all imaging techniques. Registration accuracy was on average maintained below 1 mm, with maximum outliers still below 1.5 mm. In summary, the comparison studies conceptualized and carried out in this thesis demonstrate that kV-MV CBCT is feasible for imminent clinical implementation.
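    The registration-accuracy evaluation described above reduces to comparing the recovered isocenter shifts against the known applied ones; a minimal sketch of that bookkeeping (array names and values are hypothetical, not from the thesis software):

```python
import numpy as np

def registration_errors(applied, recovered):
    # Per-trial 3-D error between the isocenter shifts applied via
    # optical tracking and those recovered by image registration.
    errs = np.linalg.norm(np.asarray(applied) - np.asarray(recovered), axis=1)
    return errs.mean(), errs.max()

# Example: 10 random shifts (mm), recovered with small simulated noise.
applied = np.random.uniform(-5.0, 5.0, size=(10, 3))
recovered = applied + np.random.normal(0.0, 0.3, size=(10, 3))
mean_err, max_err = registration_errors(applied, recovered)  # want mean < 1 mm
```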

    Optimization of Operation Sequencing in CAPP Using Hybrid Genetic Algorithm and Simulated Annealing Approach

    In any CAPP system, one of the most important process-planning functions is the selection of operations and corresponding machines in order to generate the optimal operation sequence. In this paper, a hybrid GA-SA algorithm is used to solve this NP (non-deterministic polynomial) combinatorial optimization problem. A network representation is adopted to describe operation and sequencing flexibility in process planning, and the mathematical model for process planning is formulated with the objective of minimizing production time. Experimental results show the effectiveness of the hybrid algorithm, which, in comparison with the standalone GA and SA algorithms, yields the optimal operation sequence in less computational time and with fewer iterations.
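    A minimal sketch of the hybrid scheme on a toy permutation encoding (our own simplification: it uses generic order crossover, swap mutation, and Metropolis acceptance, and ignores the precedence constraints that the paper's network representation encodes):

```python
import math
import random

def order_crossover(p1, p2):
    # OX crossover: copy a random slice from p1, fill the remaining
    # positions with the missing genes in the order they appear in p2.
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    fill = iter(g for g in p2 if g not in p1[i:j])
    return [g if g is not None else next(fill) for g in child]

def hybrid_ga_sa(cost, n_ops, pop_size=30, generations=200,
                 t0=1.0, cooling=0.98, p_mut=0.2):
    # GA population of candidate operation sequences (permutations),
    # with SA-style acceptance deciding whether a child replaces a parent.
    pop = [random.sample(range(n_ops), n_ops) for _ in range(pop_size)]
    best, temp = min(pop, key=cost), t0
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            child = order_crossover(a, b)
            if random.random() < p_mut:  # swap mutation
                i, j = random.sample(range(n_ops), 2)
                child[i], child[j] = child[j], child[i]
            parent = min((a, b), key=cost)
            delta = cost(child) - cost(parent)
            # Metropolis rule: worse children survive with probability
            # exp(-delta / T), which shrinks as the temperature cools.
            ok = delta <= 0 or random.random() < math.exp(-delta / temp)
            new_pop.append(child if ok else parent)
        pop, temp = new_pop, temp * cooling
        best = min(pop + [best], key=cost)
    return best

# Toy usage: random setup-time matrix; cost is the total transition time.
T = [[random.random() for _ in range(8)] for _ in range(8)]
seq = hybrid_ga_sa(lambda s: sum(T[s[k]][s[k + 1]] for k in range(7)), 8)
```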

    Autonomous Navigation of Automated Guided Vehicle Using Monocular Camera

    This paper presents a hybrid control algorithm for an Automated Guided Vehicle (AGV) consisting of two independent control loops: Position Based Control (PBC) for global navigation within the manufacturing environment, and Image Based Visual Servoing (IBVS) for the fine motions needed to steer accurately towards the loading/unloading point. The proposed hybrid control separates the transportation task into global navigation towards the goal point and fine motion from the goal point to the loading/unloading point. In this manner, the need for artificial landmarks or an accurate map of the environment is bypassed. Initial experimental results show the usefulness of the proposed approach.
    COBISS.SR-ID 27383808
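    A minimal sketch of the switching logic between the two loops (a toy controller with our own names and gains; the feature extraction and interaction-matrix computation of the real system are not shown):

```python
import numpy as np

def hybrid_control_step(pose_xy, goal_xy, feature_error, interaction_pinv,
                        switch_radius=0.5, k_pbc=0.8, k_ibvs=0.5):
    # Far from the goal: Position Based Control drives towards the goal
    # point, so no artificial landmarks or accurate map are required.
    to_goal = np.asarray(goal_xy) - np.asarray(pose_xy)
    if np.linalg.norm(to_goal) > switch_radius:
        return k_pbc * to_goal  # velocity command in the world frame
    # Near the goal: Image Based Visual Servoing reduces the image-feature
    # error directly, v = -k * L^+ e, for fine docking motion.
    return -k_ibvs * interaction_pinv @ np.asarray(feature_error)
```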
