Precise and Automated Tomographic Reconstruction with a Limited Number of Projections
This thesis proposes a parameter-optimized iterative reconstruction method (optimized-CGTV) for tomographic reconstruction from a limited number of projections, subject to minimization of the total variation (TV). The reconstruction problem is solved with a parameter optimization based on a discrete L-curve. The optimized-CGTV method is incorporated into an automatic framework for parallel 3D reconstruction on a computer cluster to achieve a rapid reconstruction process.
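The abstract names the discrete L-curve but gives no construction. A minimal sketch of one standard reading, picking the regularization parameter at the point of maximum curvature of the log-log (residual norm, TV norm) curve, could look like this (function and argument names are ours, not the thesis's):

```python
import numpy as np

def lcurve_corner(residual_norms, tv_norms):
    """Index of the L-curve corner: the point of maximum curvature of the
    log-log curve (||A x_i - b||_2, TV(x_i)) over candidate parameters."""
    x = np.log(np.asarray(residual_norms, dtype=float))
    y = np.log(np.asarray(tv_norms, dtype=float))
    dx, dy = np.gradient(x), np.gradient(y)       # first derivatives
    ddx, ddy = np.gradient(dx), np.gradient(dy)   # second derivatives
    curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2 + 1e-12) ** 1.5
    return int(np.argmax(curvature))

# Usage: reconstruct once per candidate lambda, record both norms, then
# keep the reconstruction at index lcurve_corner(res_norms, tv_norms).
```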
Accelerated iterative image reconstruction for cone-beam computed tomography through Big Data frameworks
One of the latest trends in Computed Tomography (CT) is the reduction of the radiation dose delivered to patients by decreasing the amount of acquired data. This reduction results in artifacts in the final images if conventional reconstruction methods are used, making it advisable to employ iterative algorithms to enhance image quality. Most approaches are built around two main operators, backprojection and projection, which are computationally expensive. In this work, we present an implementation of those operators for iterative reconstruction methods that exploits the Big Data paradigm. We define an architecture based on Apache Spark that supports both Graphics Processing Unit (GPU) and CPU-based architectures. The two operators are parallelized using a partitioning scheme based on the division of the volume, together with irregular data structures, in order to reduce the cost of communication and of computing the final images. Our solution accelerates the execution of the two most computationally expensive components with Apache Spark, improving the programming experience for new iterative reconstruction algorithms and the maintainability of the source code by raising the level of abstraction for programmers without high-performance-computing experience. Through an experimental evaluation, we show that we can obtain results up to 10× faster for projection and 21× faster for backprojection when using a GPU-based cluster compared to a traditional multi-core version. Although a linear speedup was not reached, the proposed approach can be a good alternative for porting existing medical image reconstruction applications already implemented in C/C++ or even with the CUDA or OpenCL programming models. Our solution enables the automatic detection of GPU devices and the execution of CPU and GPU tasks at the same time under the same system, using all the available resources.
This work was supported by the NIH under Grant R01-HL-098686 and Grant U01 EB018753; the Spanish Ministerio de Economía y Competitividad (projects TEC2013-47270-R, RTC-2014-3028 and TIN2016-79637-P); the Spanish Ministerio de Educación (grant FPU14/03875); and the Spanish Ministerio de Ciencia, Innovación y Universidades (Instituto de Salud Carlos III, project DTS17/00122; Agencia Estatal de Investigación, project DPI2016-79075-R-AEI/FEDER, UE), co-funded by the European Regional Development Fund (ERDF), "A way of making Europe". The CNIC is supported by the Ministerio de Ciencia, Innovación y Universidades and the Pro CNIC Foundation, and is a Severo Ochoa Center of Excellence (SEV-2015-0505). Finally, this research was partially supported by the Madrid Regional Government under the grant "Convergencia Big data-Hpc: de los sensores a las Aplicaciones" (CABAHLA-CM), Ref: S2018/TCS-4423.
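The paper's actual partitioning code is not reproduced here. A minimal sketch of how backprojection work could be distributed over sub-volumes with Apache Spark, assuming a hypothetical `backproject_block` kernel and a z-slab partitioning (the paper only states that the volume is divided), might look like this:

```python
# Minimal sketch of volume-partitioned backprojection on Spark; the
# kernel `backproject_block` and the slab size are our assumptions.
import numpy as np
from pyspark import SparkContext

def backproject_block(z_range, projections, geometry):
    """Stand-in kernel: reconstruct the slab of slices in z_range from
    the full projection set (dispatched to a GPU if one is available)."""
    z0, z1 = z_range
    return z_range, np.zeros((z1 - z0, 512, 512), dtype=np.float32)

sc = SparkContext(appName="spark-backprojection")

projections = np.zeros((360, 512, 512), dtype=np.float32)  # dummy radiographs
geometry = {"sdd_mm": 1000.0}                               # dummy scanner info
bc_proj = sc.broadcast(projections)   # ship read-only inputs to workers once
bc_geom = sc.broadcast(geometry)

# Partition the volume along z: each Spark task rebuilds one 32-slice slab.
slabs = [(z, z + 32) for z in range(0, 512, 32)]
blocks = (sc.parallelize(slabs, numSlices=len(slabs))
            .map(lambda zr: backproject_block(zr, bc_proj.value, bc_geom.value))
            .collect())   # gather (z_range, slab) pairs on the driver
```

Broadcasting the projections once and collecting only per-slab results keeps communication cost low, which is the stated motivation for the partitioning scheme.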
TV-based conjugate gradient method and discrete L-curve for few-view CT reconstruction of X-ray in vivo data
High-resolution, three-dimensional (3D) imaging of soft tissues requires the solution of two inverse problems: phase retrieval and the reconstruction of the 3D image from a tomographic stack of two-dimensional (2D) projections. The number of projections per stack should be small to accommodate fast tomography of rapid processes and to constrain the X-ray radiation dose to optimal levels, either to increase the duration of in vivo time-lapse series at a given target spatial resolution and/or to preserve structure under X-ray irradiation. In pursuing the 3D reconstruction problem in the sense of compressive sampling theory, we propose to reduce the number of projections by applying an advanced algebraic technique subject to the minimisation of the total variation (TV) in the reconstructed slice. This problem is formulated in Lagrange multiplier fashion, with the parameter value determined by appealing to a discrete L-curve in conjunction with a conjugate gradient method. The usefulness of this reconstruction modality is demonstrated for simulated and in vivo data, the latter acquired in parallel-beam imaging experiments using synchrotron radiation.
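The abstract describes but does not display the optimization problem. Written out in the standard TV-regularized, Lagrange-multiplier form it sketches (a reconstruction from the description above, with A the projection operator, b the measured projections, f the slice to recover, and λ the L-curve-selected parameter), it reads:

```latex
\min_{f}\;\tfrac{1}{2}\,\lVert A f - b \rVert_2^2 + \lambda\,\mathrm{TV}(f),
\qquad
\mathrm{TV}(f) = \sum_{i,j}\sqrt{(\nabla_x f)_{i,j}^{2} + (\nabla_y f)_{i,j}^{2}}
```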
Helmholtz Portfolio Theme Large-Scale Data Management and Analysis (LSDMA)
The Helmholtz Association funded the "Large-Scale Data Management and Analysis" portfolio theme from 2012 to 2016. Four Helmholtz centres, six universities, and another research institution in Germany joined forces to enable data-intensive science by optimising data life cycles in selected scientific communities. In the Data Life Cycle Labs, data experts performed joint R&D together with the scientific communities. The Data Services Integration Team focused on generic solutions applied by several communities.
Novel high performance techniques for high definition computer aided tomography
Doctoral degree with International Mention. Medical image processing is an interdisciplinary field in which multiple research areas are involved:
image acquisition, scanner design, image reconstruction algorithms, visualization, etc.
X-Ray Computed Tomography (CT) is a medical imaging modality based on the attenuation
suffered by the X-rays as they pass through the body. Intrinsic differences in attenuation properties
of bone, air, and soft tissue result in high-contrast images of anatomical structures. The
main objective of CT is to obtain tomographic images from radiographs acquired using X-Ray
scanners. The process of building a 3D image or volume from the 2D radiographs is known as
reconstruction. One of the latest trends in CT is the reduction of the radiation dose delivered
to patients through the decrease of the amount of acquired data. This reduction results in artefacts
in the final images if conventional reconstruction methods are used, making it advisable to
employ iterative reconstruction algorithms.
There are numerous reconstruction algorithms available, among which two types stand out: traditional algorithms, which are fast but cannot produce high-quality images when data are limited; and iterative algorithms, which are slower but more reliable when traditional methods do not meet the quality requirements. One of the priorities of reconstruction is to obtain the final images in near real time, in order to reduce the time spent on diagnosis. Accomplishing this objective requires new high-performance techniques and methods for accelerating these types of algorithms. This thesis addresses the challenges of both traditional and iterative reconstruction algorithms, regarding acceleration and image quality. One common approach for accelerating these algorithms is the use of shared-memory and heterogeneous architectures. In this thesis, we propose a novel simulation/reconstruction framework, namely FUX-Sim. This framework follows the hypothesis that the development of new flexible X-ray systems can benefit from computer simulations, which also enable performance to be checked before expensive real systems are built. Its modular design abstracts the complexities of programming for accelerated devices in order to facilitate the development and evaluation of the different configurations and geometries available. To achieve near-real-time execution, low-level optimizations of the main components of the framework are provided for Graphics Processing Unit (GPU) architectures.
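FUX-Sim's source is not reproduced in this abstract. Purely as an illustration of the kind of modular backend abstraction it describes (all names below are invented; the real components are low-level GPU implementations), the shape of such a design could be:

```python
# Invented-name sketch of a modular simulator/reconstructor backend
# abstraction in the spirit of the design described above.
from dataclasses import dataclass
from typing import Protocol
import numpy as np

@dataclass
class Geometry:
    """Scanner configuration: projection angles, distances, detector size."""
    angles_deg: np.ndarray
    sdd_mm: float = 1000.0

class Backend(Protocol):
    """Interface shared by CPU, CUDA, OpenCL, ... implementations."""
    def project(self, volume: np.ndarray, geo: Geometry) -> np.ndarray: ...
    def backproject(self, projs: np.ndarray, geo: Geometry) -> np.ndarray: ...

class CpuBackend:
    def project(self, volume, geo):
        # Placeholder physics: one axis-sum "radiograph" per angle.
        return np.stack([volume.sum(axis=0) for _ in geo.angles_deg])

    def backproject(self, projs, geo):
        # Placeholder physics: smear the averaged projection into a volume.
        mean_proj = projs.mean(axis=0)
        return np.repeat(mean_proj[None], mean_proj.shape[0], axis=0)

def simulate_and_reconstruct(volume, geo, backend: Backend):
    # Driver code stays identical whichever accelerated backend is plugged in.
    projs = backend.project(volume, geo)
    return backend.backproject(projs, geo)
```

The point of the abstraction is that swapping `CpuBackend` for a GPU implementation changes no driver code, which is how a modular design can hide accelerator programming complexity.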
Another alternative tackled in this thesis is the acceleration of iterative reconstruction algorithms by using distributed-memory architectures. We present a novel architecture that unifies the two most important paradigms in scientific computing nowadays: High Performance Computing (HPC) and Big Data. The proposed architecture combines Big Data frameworks with the advantages of accelerated computing.
The methods presented in this thesis provide more flexible scanner configurations and offer an accelerated solution. Regarding performance, our approach is as competitive as the solutions found in the literature. Additionally, we demonstrate that our solution scales with the size of the problem, enabling the reconstruction of high-resolution images.
This work has been mainly funded thanks to an FPU fellowship (FPU14/03875) from the Spanish Ministry of Education.
It has also been partially supported by other grants:
• DPI2016-79075-R, "Nuevos escenarios de tomografía por rayos X", from the Spanish Ministry of Economy and Competitiveness.
• TIN2016-79637-P, "Towards Unification of HPC and Big Data Paradigms", from the Spanish Ministry of Economy and Competitiveness.
• A Short-Term Scientific Mission (STSM) grant from the NESUS COST Action IC1305.
• TIN2013-41350-P, "Scalable Data Management Techniques for High-End Computing Systems", from the Spanish Ministry of Economy and Competitiveness.
• RTC-2014-3028-1, NECRA, "Nuevos escenarios clínicos con radiología avanzada", from the Spanish Ministry of Economy and Competitiveness.
Programa Oficial de Doctorado en Ciencia y Tecnología Informática. Thesis committee: President, José Daniel García Sánchez; Secretary, Katzlin Olcoz Herrero; Member, Domenico Tali
Konzept zur interaktiven Auswertung multidimensionaler biologischer Bilddaten (A Concept for the Interactive Analysis of Multidimensional Biological Image Data)
This thesis develops a concept for the analysis of large multidimensional biological image datasets. Building blocks for filtering, segmentation, and feature extraction are created, and software packages are provided. Throughout, a balance is struck between quality, quantity, and computation time.
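The abstract names the three building blocks without detail. An illustrative filtering → segmentation → feature-extraction chain using generic scikit-image calls (not the thesis's own software packages) would be:

```python
# Illustrative analysis chain: filtering -> segmentation -> features.
import numpy as np
from skimage import filters, measure

image = np.random.rand(256, 256)                     # stand-in for one 2D frame

smoothed = filters.gaussian(image, sigma=2.0)        # filtering
mask = smoothed > filters.threshold_otsu(smoothed)   # segmentation
labels = measure.label(mask)                         # connected components

# Feature extraction: one (label, area, centroid) record per object.
features = [(r.label, r.area, r.centroid)
            for r in measure.regionprops(labels)]
```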
New Methods to Improve Large-Scale Microscopy Image Analysis with Prior Knowledge and Uncertainty
Multidimensional imaging techniques provide powerful ways to examine various kinds of scientific questions. The routinely produced datasets in the terabyte range, however, can hardly be analyzed manually and require extensive use of automated image analysis. The present work introduces a new concept for the estimation and propagation of the uncertainty involved in image analysis operators, together with new segmentation algorithms that are suitable for terabyte-scale analyses of 3D+t microscopy images.
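The abstract does not specify how uncertainty is represented or propagated. One simple, generic way to thread a per-pixel uncertainty estimate through an arbitrary analysis operator, by Monte Carlo resampling (purely illustrative, with invented names; not the thesis's actual scheme), is:

```python
# Purely illustrative Monte Carlo propagation of per-pixel uncertainty
# through an image-analysis operator; names and approach are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def propagate_uncertainty(image, sigma_map, operator, n_samples=50, seed=0):
    """Estimate the operator output and its per-pixel standard deviation
    by resampling the input within its stated uncertainty."""
    rng = np.random.default_rng(seed)
    outputs = [operator(image + rng.normal(0.0, 1.0, image.shape) * sigma_map)
               for _ in range(n_samples)]
    stack = np.stack(outputs)
    return stack.mean(axis=0), stack.std(axis=0)

# Example: output uncertainty of a smoothed image given 5% pixel noise.
img = np.random.rand(128, 128)
mean_out, std_out = propagate_uncertainty(
    img, 0.05 * np.ones_like(img), lambda x: gaussian_filter(x, 2.0))
```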