
    Multi-GPU Acceleration of Iterative X-ray CT Image Reconstruction

    X-ray computed tomography is a widely used medical imaging modality for screening and diagnosing diseases and for image-guided radiation therapy treatment planning. Statistical iterative reconstruction (SIR) algorithms have the potential to significantly reduce image artifacts by minimizing a cost function that models the physics and statistics of the data acquisition process in X-ray CT. SIR algorithms have superior performance compared to traditional analytical reconstructions for a wide range of applications, including nonstandard geometries arising from irregular sampling, limited angular range, missing data, and low-dose CT. The main hurdle for the widespread adoption of SIR algorithms in multislice X-ray CT reconstruction problems is their slow convergence rate and associated computational time. We seek to design and develop fast parallel SIR algorithms for clinical X-ray CT scanners. Each of the following approaches is implemented on real clinical helical CT data acquired from a Siemens Sensation 16 scanner and compared to the straightforward implementation of the Alternating Minimization (AM) algorithm of O’Sullivan and Benac [1]. We parallelize the computationally expensive projection and backprojection operations by exploiting the massively parallel hardware architecture of three NVIDIA TITAN X Graphics Processing Unit (GPU) devices with CUDA programming tools and achieve an average speedup of 72X over a straightforward CPU implementation. We implement a multi-GPU based voxel-driven multislice analytical reconstruction algorithm called Feldkamp-Davis-Kress (FDK) [2] and achieve an average overall speedup of 1382X over the baseline CPU implementation by using three TITAN X GPUs. Moreover, we propose a novel adaptive surrogate-function based optimization scheme for the AM algorithm, resulting in more aggressive update steps in every iteration. On average, we double the convergence rate of our baseline AM algorithm and also improve image quality by using the adaptive surrogate function. We extend the multi-GPU and adaptive surrogate-function based acceleration techniques to dual-energy reconstruction problems as well. Furthermore, we design and develop a GPU-based deep Convolutional Neural Network (CNN) to denoise simulated low-dose X-ray CT images. Our experiments show significant improvements in image quality with our proposed deep CNN-based algorithm against some widely used denoising techniques, including Block Matching 3-D (BM3D) and Weighted Nuclear Norm Minimization (WNNM). Overall, we have developed novel fast, parallel, computationally efficient methods to perform multislice statistical reconstruction and image-based denoising on clinically-sized datasets.
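    To make the view-partitioning idea concrete, here is a minimal sketch of the pattern described above: the projection views are split into contiguous chunks, each chunk's forward projection and backprojection run on a separate worker (standing in for one of the three GPUs), and the partial results drive a multiplicative MLEM-style update standing in for the AM step. All sizes and names are illustrative; this is not the thesis's CUDA implementation.

```python
# Minimal sketch: view-partitioned projection/backprojection with a
# multiplicative update. Assumptions: a toy dense system matrix replaces
# the helical CT projector, and threads stand in for the three GPUs.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
n_vox, n_views, n_gpus = 64, 90, 3           # toy problem sizes
A = rng.random((n_views, n_vox))             # toy system matrix (one row per view)
x_true = rng.random(n_vox)
y = A @ x_true                               # noiseless toy sinogram

chunks = np.array_split(np.arange(n_views), n_gpus)  # static view partition

def partial_backprojection(x, chunk):
    """Work assigned to one 'device': forward-project its views,
    form the measured/estimated ratio, and backproject it."""
    Ax = A[chunk] @ x
    return A[chunk].T @ (y[chunk] / np.maximum(Ax, 1e-12))

x = np.ones(n_vox)
norm = A.sum(axis=0)                         # sensitivity (column sums)
with ThreadPoolExecutor(max_workers=n_gpus) as pool:
    for _ in range(50):
        parts = pool.map(partial_backprojection, [x] * n_gpus, chunks)
        x *= sum(parts) / norm               # MLEM-style multiplicative update

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```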

    Development of methods for time efficient scatter correction and improved attenuation correction in time-of-flight PET/MR

    The present work addresses two persistent issues of image reconstruction for time-of-flight (TOF) PET: acceleration of TOF scatter correction and improvement of emission-based attenuation correction. Since photon attenuation cannot be measured directly, improving attenuation correction by joint reconstruction of the activity and attenuation coefficient distributions using the MLAA technique is of special relevance for PET/MR, while accelerating TOF scatter correction is equally important for TOF-capable PET/CT systems. To achieve these goals, in a first step the high-resolution PET image reconstruction THOR, previously developed in our group, was adapted to take advantage of the TOF information delivered by state-of-the-art PET systems. TOF-aware image reconstruction reduces image noise and improves convergence rate, both of which are highly desirable. Based on these adaptations, this thesis describes new developments for the improvement of TOF scatter correction and MLAA reconstruction, and reports results obtained with the new algorithms on the Philips Ingenuity PET/MR jointly operated by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and the University Hospital. A crucial requirement for quantitative TOF image reconstruction is a scatter correction that incorporates the TOF information. The currently accepted reference method, the TOF extension of the single scatter simulation approach (TOF-SSS), was implemented as part of the TOF-related modifications of THOR. The major drawback of TOF-SSS is a 3–7-fold increase in the computation time required for the scatter estimation compared to regular SSS, which leads to a considerable image reconstruction slowdown. This problem was addressed by the development and implementation of a novel accelerated TOF scatter correction algorithm called ISA. This new algorithm proved to be a viable alternative to TOF-SSS, speeding up scatter correction by a factor of up to five. Images reconstructed using ISA are in excellent quantitative agreement with those obtained using TOF-SSS, while overall reconstruction time is reduced by a factor of two in whole-body investigations. This can be considered a major achievement, especially with regard to the use of advanced image reconstruction in a clinical context.
    The second major topic of this thesis is a contribution to improved attenuation correction in PET/MR through the use of MLAA reconstruction. First of all, knowledge of the actual time resolution operational in the considered PET scan is mandatory for a viable MLAA implementation. Since vendor-provided figures regarding the time resolution are not necessarily reliable and do not cover count-rate dependent effects at all, a new algorithm was developed and implemented to determine the time resolution as a function of count rate. This algorithm (MLRES) is based on the maximum likelihood principle and makes it possible to determine the functional dependency of the time resolution of the Philips Ingenuity PET/MR on the given count rate and to integrate this information into THOR. Notably, the present work shows that the time resolution of the Ingenuity PET/MR can degrade by more than 250 ps over the clinically relevant range of count rates, in comparison to the vendor-provided figure of 550 ps, which is only realized in the limit of extremely low count rates. Based on the previously described developments, MLAA could be integrated into THOR. The resulting list-mode MLAA implementation is capable of deriving realistic, patient-specific attenuation maps. In particular, correct identification of osseous structures and air cavities could be demonstrated, which is very difficult or even impossible with MR-based approaches to attenuation correction. Moreover, we have confirmed that MLAA is capable of reducing metal-induced artifacts, which are otherwise present in MR-based attenuation maps. However, detailed analysis of the obtained MLAA results revealed remaining problems regarding the stability of global scaling as well as local cross-talk between the activity and attenuation estimates. Therefore, further work beyond the scope of this thesis will be necessary to address these remaining issues.
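    The alternating structure behind MLAA can be sketched in a few lines. The toy below makes heavy assumptions: a 1-D non-TOF Poisson model, and a plain likelihood-gradient step in place of a proper transmission-ML attenuation update; the list-mode TOF implementation in THOR is far richer. It alternates one MLEM activity update with attenuation factors held fixed and one gradient step on the attenuation map, and it exhibits the global scaling and cross-talk ambiguity noted above.

```python
# Minimal sketch of MLAA's alternating updates (toy geometry, non-TOF).
import numpy as np

rng = np.random.default_rng(1)
n_lor, n_vox = 200, 32
L = rng.random((n_lor, n_vox))            # hypothetical intersection lengths
lam_true = 50.0 * rng.random(n_vox)       # unknown activity
mu_true = 0.02 * rng.random(n_vox)        # unknown attenuation coefficients
y = rng.poisson(np.exp(-L @ mu_true) * (L @ lam_true)).astype(float)

lam, mu = np.ones(n_vox), np.zeros(n_vox)
for _ in range(500):
    a = np.exp(-L @ mu)                   # attenuation factors per LOR
    # activity step: one MLEM iteration with attenuation held fixed
    lam *= (L.T @ (y / np.maximum(L @ lam, 1e-9))) / np.maximum(L.T @ a, 1e-9)
    # attenuation step: crude gradient ascent on the Poisson log-likelihood
    # (a real implementation would use an MLTR-type update instead)
    ybar = a * (L @ lam)
    mu = np.maximum(mu + 1e-6 * (L.T @ (ybar - y)), 0.0)

# lam and mu are only determined up to compensating errors, illustrating
# the global scaling / cross-talk problem discussed in the abstract
print("activity correlation:", np.corrcoef(lam, lam_true)[0, 1])
```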

    Investigation of the Effects of Image Signal-to-Noise Ratio on TSPO PET Quantification of Neuroinflammation

    Neuroinflammation may be imaged using positron emission tomography (PET) and the tracer [11C]-PK11195. Accurate and precise quantification of 18 kilodalton Translocator Protein (TSPO) binding parameters in the brain has proven difficult with this tracer, due to an unfavourable combination of low target concentration in tissue, low brain uptake of the tracer and relatively high non-specific binding, all of which leads to higher levels of relative image noise. To address these limitations, research into new radioligands for the TSPO, with higher brain uptake and lower non-specific binding relative to [11C]-PK11195, is being conducted world-wide. However, factors other than radioligand properties are known to influence signal-to-noise ratio in quantitative PET studies, including the scanner sensitivity, image reconstruction algorithms and data analysis methodology. The aim of this thesis was to investigate and validate computational tools for predicting image noise in dynamic TSPO PET studies, and to employ those tools to investigate the factors that affect image SNR and the reliability of TSPO quantification in the human brain. The feasibility of performing multiple (n≥40) independent Monte Carlo simulations for each dynamic [11C]-PK11195 frame, with realistic modelling of the radioactivity source, attenuation and PET tomograph geometries, was investigated. A Beowulf-type high performance computing cluster, constructed from commodity components, was found to be well suited to this task. Timing tests on a single desktop computer system indicated that a computer cluster capable of simulating an hour-long dynamic [11C]-PK11195 PET scan, with 40 independent repeats, and with a total simulation time of less than 6 weeks, could be constructed for less than 10,000 Australian dollars. A computer cluster containing 44 computing cores was therefore assembled, and a peak simulation rate of 2.84×10^5 photon pairs per second was achieved using the GEANT4 Application for Tomographic Emission (GATE) Monte Carlo simulation software. A simulated PET tomograph was developed in GATE that closely modelled the performance characteristics of several real-world clinical PET systems in terms of spatial resolution, sensitivity, scatter fraction and counting rate performance. The simulated PET system was validated using adaptations of the National Electrical Manufacturers Association (NEMA) quality assurance procedures within GATE. Image noise in dynamic TSPO PET scans was estimated by performing n=40 independent Monte Carlo simulations of an hour-long [11C]-PK11195 scan, and of an hour-long dynamic scan for a hypothetical TSPO ligand with double the brain activity concentration of [11C]-PK11195. From these data an analytical noise model was developed that allowed image noise to be predicted for any combination of brain tissue activity concentration and scan duration. The noise model was validated for the purpose of determining the precision of kinetic parameter estimates for TSPO PET. An investigation was made into the effects of activity concentration in tissue, radionuclide half-life, injected dose and compartmental model complexity on the reproducibility of kinetic parameters. Injecting 555 MBq of a carbon-11 labelled TSPO tracer produced binding parameter precision similar to 185 MBq of fluorine-18, and a moderate (20%) reduction in precision was observed for the reduced carbon-11 dose of 370 MBq.
    Results indicated that a factor of 2 increase in frame count level (relative to [11C]-PK11195, due for example to higher ligand uptake, injected dose or absolute scanner sensitivity) is required to obtain reliable binding parameter estimates for small regions of interest when fitting a two-tissue compartment, four-parameter compartmental model. However, compartmental model complexity had a similarly large effect: reducing model complexity from the two-tissue compartment, four-parameter model to a one-tissue compartment, two-parameter model produced a 78% reduction in the coefficient of variation of the binding parameter estimates at each tissue activity level and region size studied. In summary, this thesis describes the development and validation of Monte Carlo methods for estimating image noise in dynamic TSPO PET scans, and analytical methods for predicting relative image noise for a wide range of tissue activity concentrations and acquisition durations. The findings of this research suggest that a broader consideration of the kinetic properties of novel TSPO radioligands, with a view to selecting ligands that are potentially amenable to analysis with a simple one-tissue compartment model, is at least as important as efforts directed towards reducing image noise, such as higher brain uptake, in the search for the next generation of TSPO PET tracers.
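    The dose and half-life comparison above follows from simple count bookkeeping: if relative image noise scales roughly as one over the square root of detected counts, and counts are proportional to injected dose times the decay-weighted scan duration, the two radionuclides can be traded off directly. The sketch below illustrates only this back-of-envelope reasoning; the thesis's Monte Carlo noise model accounts for sensitivity, scatter, randoms and dead time, all of which are ignored here.

```python
# Back-of-envelope count model: noise ~ 1/sqrt(counts), counts ~ dose
# integrated over the scan with radioactive decay. Sensitivity, uptake
# and dead-time effects are deliberately ignored.
import numpy as np

def frame_counts(dose_mbq, half_life_min, t0, t1):
    """Relative counts collected in a frame [t0, t1] minutes post-injection."""
    lam = np.log(2) / half_life_min
    return dose_mbq / lam * (np.exp(-lam * t0) - np.exp(-lam * t1))

# hour-long scan, single frame for simplicity
c11 = frame_counts(555.0, 20.4, 0.0, 60.0)   # carbon-11, 555 MBq
f18 = frame_counts(185.0, 109.8, 0.0, 60.0)  # fluorine-18, 185 MBq
print(f"C-11 vs F-18 count ratio:            {c11 / f18:.2f}")
print(f"relative noise ratio (1/sqrt counts): {np.sqrt(f18 / c11):.2f}")
```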

    Accelerating Permutation Testing in Voxel-wise Analysis through Subspace Tracking: A new plugin for SnPM

    Permutation testing is a non-parametric method for obtaining the max null distribution used to compute corrected p-values that provide strong control of false positives. In neuroimaging, however, the computational burden of running such an algorithm can be significant. We find that by viewing the permutation testing procedure as the construction of a very large permutation testing matrix, T, one can exploit structural properties derived from the data and the test statistics to reduce the runtime under certain conditions. In particular, we see that T is low-rank plus a low-variance residual. This makes T a good candidate for low-rank matrix completion, where only a very small number of entries of T (~0.35% of all entries in our experiments) have to be computed to obtain a good estimate. Based on this observation, we present RapidPT, an algorithm that efficiently recovers the max null distribution commonly obtained through regular permutation testing in voxel-wise analysis. We present an extensive validation on a synthetic dataset and four varying-sized datasets against two baselines: Statistical NonParametric Mapping (SnPM13) and a standard permutation testing implementation (referred to as NaivePT). We find that RapidPT achieves its best runtime performance on medium-sized datasets (50 ≤ n ≤ 200), with speedups of 1.5x-38x (vs. SnPM13) and 20x-1000x (vs. NaivePT). For larger datasets (n ≥ 200) RapidPT outperforms NaivePT (6x-200x) on all datasets, and provides large speedups over SnPM13 (2x-15x) when more than 10000 permutations are needed. The implementation is a standalone toolbox and is also integrated within SnPM13, able to leverage multi-core architectures when available.
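    The low-rank completion idea can be illustrated with a toy example. The sketch below builds the full permutation testing matrix T for a simple mean-difference statistic, observes a small fraction of its entries, completes the rest with a plain rank-r SVD projection, and compares the recovered max null distribution to the exact one. The sampling fraction here is far above the paper's 0.35% because the toy matrix is small, and the completion step is deliberately naive; RapidPT's actual recovery pipeline is more refined.

```python
# Toy illustration of low-rank completion of a permutation testing matrix.
import numpy as np

rng = np.random.default_rng(2)
n_perm, n_vox, n_sub = 300, 500, 40
X = rng.standard_normal((n_sub, n_vox)) + np.linspace(0, 1, n_vox)  # toy data

# full permutation testing matrix T: one row of statistics per relabeling
labels = np.array([1] * (n_sub // 2) + [-1] * (n_sub - n_sub // 2))
T = np.empty((n_perm, n_vox))
for i in range(n_perm):
    T[i] = rng.permutation(labels) @ X / n_sub   # toy mean-difference statistic

# observe only a small fraction of T's entries
mask = rng.random(T.shape) < 0.05
T_hat = np.where(mask, T, 0.0)

# naive completion: alternate rank-r SVD projection with data consistency
rank = 10
for _ in range(100):
    U, s, Vt = np.linalg.svd(T_hat, full_matrices=False)
    T_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    T_hat[mask] = T[mask]                        # keep computed entries fixed

print("95th percentile of max null (exact vs completed):",
      np.percentile(T.max(axis=1), 95), np.percentile(T_hat.max(axis=1), 95))
```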

    PERICLES Deliverable 4.3: Content Semantics and Use Context Analysis Techniques

    The current deliverable summarises the work conducted within task T4.3 of WP4, focusing on the extraction and the subsequent analysis of semantic information from digital content, which is imperative for its preservability. More specifically, the deliverable defines content semantic information from a visual and textual perspective, explains how this information can be exploited in long-term digital preservation and proposes novel approaches for extracting this information in a scalable manner. Additionally, the deliverable discusses novel techniques for retrieving and analysing the context of use of digital objects. Although this topic has not been extensively studied in the existing literature, we believe use context is vital in augmenting the semantic information and maintaining the usability and preservability of digital objects, as well as their ability to be accurately interpreted as initially intended.

    Real-time tomographic reconstruction

    With tomography it is possible to reconstruct the interior of an object without destroying it. It is an important technique for many applications in, e.g., science, industry, and medicine. The runtime of conventional reconstruction algorithms is typically much longer than the time it takes to perform the tomographic experiment, and this prohibits the real-time reconstruction and visualization of the imaged object. The research in this dissertation introduces various techniques, such as new parallelization schemes, data partitioning methods, and a quasi-3D reconstruction framework, that significantly reduce the time it takes to run conventional tomographic reconstruction algorithms without affecting image quality. The resulting methods and software implementations put reconstruction times in the same ballpark as the time it takes to do a tomographic scan, so that we can speak of real-time tomographic reconstruction.
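    The quasi-3D idea rests on the observation that, for many geometries, z-slices can be reconstructed independently and therefore in parallel. The sketch below is illustrative only: a toy unfiltered backprojection stands in for the real filtered reconstruction, and a process pool stands in for the dissertation's parallelization schemes. It partitions a stack of sinograms across workers and stacks the per-slice results into a volume.

```python
# Slice-parallel toy reconstruction: each z-slice is handled independently.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def backproject_slice(args):
    """Reconstruct one z-slice from its own sinogram (unfiltered, toy)."""
    sino, thetas = args                      # sino: (n_angles, n_det)
    n = sino.shape[1]
    xs = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(xs, xs)
    rec = np.zeros((n, n))
    for sino_row, th in zip(sino, thetas):
        t = X * np.cos(th) + Y * np.sin(th)  # detector coordinate per pixel
        idx = np.clip(((t + 1) / 2 * (n - 1)).astype(int), 0, n - 1)
        rec += sino_row[idx]
    return rec / len(thetas)

if __name__ == "__main__":
    n_slices, n_angles, n_det = 8, 60, 64
    thetas = np.linspace(0, np.pi, n_angles, endpoint=False)
    sinos = np.random.default_rng(3).random((n_slices, n_angles, n_det))
    with ProcessPoolExecutor() as pool:      # one worker per core/GPU in practice
        slices = list(pool.map(backproject_slice, [(s, thetas) for s in sinos]))
    volume = np.stack(slices)                # quasi-3D: a stack of 2-D results
    print(volume.shape)
```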

    Innovative techniques to devise 3D-printed anatomical brain phantoms for morpho-functional medical imaging

    Introduction. The Ph.D. thesis addresses the development of innovative techniques to create 3D-printed anatomical brain phantoms, which can be used for quantitative technical assessments of morpho-functional imaging devices, providing simulation accuracy not obtainable with currently available phantoms. 3D printing (3DP) technology is paving the way for advanced anatomical modelling in biomedical applications. Despite the potential already expressed by 3DP in this field, it is still little used for the realization of anthropomorphic phantoms of human organs with complex internal structures. Making an anthropomorphic phantom is very different from making a simple anatomical model, and 3DP is still far from being plug-and-print. Hence the need to develop ad-hoc techniques providing innovative solutions for the realization of anatomical phantoms with unique characteristics and greater ease-of-use. Aim. The thesis explores the entire workflow (brain MRI image segmentation, 3D modelling and materialization) developed to prototype a new complex anthropomorphic brain phantom, which can simulate three brain compartments simultaneously: grey matter (GM), white matter (WM) and striatum (caudate nucleus and putamen, known to show high uptake in nuclear medicine studies). The three separate chambers of the phantom will be filled with tissue-appropriate solutions characterized by different concentrations of radioisotope for PET/SPECT, para-/ferro-magnetic metals for MRI, and iodine for CT imaging. Methods. First, to design a 3D model of the brain phantom, it is necessary to segment the MRI images and to extract an error-free STL (Standard Tessellation Language) description. Then it is possible to materialize the prototype and test its functionality. - Image segmentation. Segmentation is one of the most critical steps in modelling. To this end, after demonstrating the proof-of-concept, a multi-parametric segmentation approach based on brain relaxometry was proposed. It includes a pre-processing step to estimate relaxation parameter maps (R1 = longitudinal relaxation rate, R2 = transverse relaxation rate, PD = proton density) from the signal intensities provided by MRI sequences of routine clinical protocols (3D-GrE T1-weighted, FLAIR and fast-T2-weighted sequences with ≤ 3 mm slice thickness). In the past, maps of R1, R2, and PD were obtained from Conventional Spin Echo (CSE) sequences, which are no longer suitable for clinical practice due to long acquisition times. Rehabilitating multi-parametric segmentation based on relaxometry, the estimation of pseudo-relaxation maps allowed the development of an innovative method for the simultaneous automatic segmentation of most of the brain structures (GM, WM, cerebrospinal fluid, thalamus, caudate nucleus, putamen, pallidus, nigra, red nucleus and dentate). This method allows the segmentation of higher resolution brain images for future brain phantom enhancements. - STL extraction. After segmentation, the 3D model of the phantom is described in STL format, which represents shapes as an approximating manifold mesh (i.e., a collection of triangles that is continuous, without holes, and with a strictly positive volume). For this purpose, we developed an automatic procedure to extract a single voxelized surface, tracing the anatomical interface between the phantom's compartments directly on the segmented images. Two tubes were designed for each compartment (one for filling and the other to facilitate the escape of air).
The procedure automatically checks the continuity of the surface, ensuring that the 3D model can be exported in STL format, without errors, using common image-to-STL conversion software. Threaded junctions were added to the phantom (for hermetic closure) using mesh processing software. The phantom's 3D model proved correct and ready for 3DP. - Prototyping. Finally, the most suitable 3DP technology was identified for the materialization. We investigated the material extrusion technology known as Fused Deposition Modeling (FDM) and the material jetting technology known as PolyJet. FDM proved the best candidate for our purposes. It allowed the phantom's hollow compartments to be materialized in a single print, without having to print them in several parts to be reassembled later. FDM's soluble internal support structures were completely removable after the materialization, unlike PolyJet supports. A critical aspect, which required considerable effort to optimize the printing parameters, was the submillimetre thickness of the phantom walls, necessary to avoid distorting the imaging simulation. However, 3D printer manufacturers recommend maintaining a uniform wall thickness of at least 1 mm. Optimization of the printing path made it possible to obtain strong, but not completely waterproof, walls approximately 0.5 mm thick. A sophisticated technique based on a polyvinyl-acetate solution was developed to waterproof the internal and external phantom walls (a necessary requirement for filling). A filling system was also designed to minimize residual air bubbles, which could result in unwanted hypo-intense (dark) areas in phantom-based imaging simulations. Discussion and conclusions. The phantom prototype was scanned with CT and PET/CT to evaluate the realism of the brain simulation. None of the state-of-the-art brain phantoms allows such anatomical rendering of three brain compartments. Some represent only GM and WM, others only the striatum. Moreover, they typically have poor anatomical yield, showing reduced depth of the sulci and a not very faithful reproduction of the cerebral convolutions. The ability to simulate the three brain compartments simultaneously with greater accuracy, as well as the possibility of carrying out multimodality studies (PET/CT, PET/MRI), which represent the frontier of diagnostic imaging, give this device cutting-edge prospective characteristics. The effort to further customize 3DP technology for these applications is expected to increase significantly in the coming years.
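    The segmentation-to-STL step can be illustrated with off-the-shelf tools. The sketch below assumes the scikit-image and numpy-stl packages and uses a synthetic sphere standing in for a segmented compartment; the thesis instead traces a voxelized interface directly on the segmented images and converts it with dedicated software. The sketch extracts a closed triangle mesh from a binary volume with marching cubes and writes it as STL.

```python
# Binary segmentation -> triangle mesh -> STL, using common open-source tools.
import numpy as np
from skimage import measure   # pip install scikit-image
from stl import mesh          # pip install numpy-stl

# toy binary "compartment": a sphere standing in for a segmented structure
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
compartment = (x**2 + y**2 + z**2 < 24**2).astype(np.float32)

# extract a closed triangle mesh on the 0/1 interface
verts, faces, _, _ = measure.marching_cubes(compartment, level=0.5)

# pack the triangles into an STL container and save
shell = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
shell.vectors[:] = verts[faces]   # (n_faces, 3 vertices, 3 coordinates)
shell.save("compartment.stl")
print(f"{len(faces)} triangles written")
```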