Applications in GNSS water vapor tomography
Algebraic reconstruction algorithms are iterative algorithms used in many areas, including medicine, seismology, and meteorology. These algorithms are known to be highly computationally intensive, which may be especially troublesome for real-time applications or when they are processed on conventional low-cost personal computers. One of these real-time applications
is the reconstruction of water vapor images from Global Navigation Satellite System (GNSS) observations. The parallelization of algebraic reconstruction algorithms has the potential to reduce the required resources significantly, permitting valid solutions to be obtained in time to be used in nowcasting and forecasting weather models.
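As a rough illustration of the kind of iterative scheme being parallelized, below is a minimal sequential Kaczmarz (ART) sketch in Python; the system matrix, measurements, and sizes are illustrative toy values, not taken from the dissertation:

```python
import numpy as np

def kaczmarz(A, b, sweeps=50, relax=1.0):
    """Kaczmarz / ART: cycle through the rows of A, projecting the
    current estimate onto each hyperplane <a_i, x> = b_i in turn."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * (residual / row_norms[i]) * A[i]
    return x

# Toy consistent system: three "rays" through two unknowns
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
b = A @ x_true
x = kaczmarz(A, b)
```

The inner loop is inherently sequential (each row update depends on the previous one), which is why parallel variants such as SIRT or block-ART are usually preferred on GPUs.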
The main objective of this dissertation was to present and analyse diverse shared-memory
libraries and techniques for algebraic reconstruction algorithms on the CPU and the GPU. It was concluded that parallelization pays off over sequential implementations. Overall, the GPU implementations were found to be only slightly faster than the CPU implementations, depending on the size of the problem being studied.
A secondary objective was to develop software to perform GNSS water vapor reconstruction using the implemented parallel algorithms. This software was developed successfully, and diverse tests were carried out, namely with synthetic and real data; the preliminary results proved satisfactory.
This dissertation was written at the Space & Earth Geodetic Analysis Laboratory (SEGAL) and was carried out in the framework of the Structure of Moist convection in high-resolution GNSS observations and models (SMOG) project (PTDC/CTE-ATM/119922/2010), funded by FCT (Fundação para a Ciência e a Tecnologia).
Comparison of different image reconstruction algorithms for Digital Breast Tomosynthesis and assessment of their potential to reduce radiation dose
Master's thesis, Engenharia Física, 2022, Universidade de Lisboa, Faculdade de Ciências.
Digital Breast Tomosynthesis is a three-dimensional medical imaging technique that allows the
view of sectional parts of the breast. Obtaining multiple slices of the breast is an advantage over the conventional mammography examination, in view of the increased potential for breast
cancer detectability. Conventional mammography, despite being a screening success, suffers from undesirable
specificity and sensitivity and from high recall rates owing to the overlapping of tissues. Although this new
technique promises better diagnostic results, the acquisition methods and image reconstruction
algorithms are still under research.
Several articles suggest the use of analytic algorithms. However, more recent articles highlight the
potential of iterative algorithms to increase image quality compared with the former. The scope
of this dissertation was to test the hypothesis that iterative algorithms can achieve higher-quality
images from acquisitions at lower doses than those used with analytic algorithms.
In a first stage, the open-source Tomographic Iterative GPU-based Reconstruction (TIGRE)
Toolbox for fast and accurate 3D x-ray image reconstruction was used to reconstruct the images
acquired using an acrylic phantom. The algorithms used from the toolbox were the Feldkamp, Davis,
and Kress, the Simultaneous Algebraic Reconstruction Technique, and the Maximum Likelihood
Expectation Maximization algorithm.
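To make the last of these concrete, here is a minimal sketch of the Maximum Likelihood Expectation Maximization update for a nonnegative linear model; it uses a toy dense system matrix rather than the TIGRE toolbox's projection operators:

```python
import numpy as np

def mlem(A, b, iters=2000):
    """Maximum Likelihood Expectation Maximization for a nonnegative
    linear model b ~ A x under a Poisson noise assumption: multiplicative
    updates that keep the estimate nonnegative."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)              # sensitivity image: A^T 1
    for _ in range(iters):
        proj = A @ x                  # forward projection
        ratio = b / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / sens     # backproject the ratio, normalize
    return x

# Toy 3-ray, 2-pixel system with noise-free, consistent data
A = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.7, 0.7]])
x_true = np.array([4.0, 2.0])
b = A @ x_true
x = mlem(A, b)
```

Because every step is a ratio of forward and back projections, MLEM parallelizes naturally on GPUs, which is what toolboxes like TIGRE exploit.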
In a second and final stage, the possibility of further reducing the radiation dose using image
postprocessing tools was evaluated. A Total Variation Minimization filter was applied to the images
reconstructed with the TIGRE toolbox algorithm that provided the best image quality. These were then
compared to the images of the commercial unit used for the image acquisitions.
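As an illustration of the idea behind such a filter, the sketch below implements a simplified Total Variation Minimization (ROF-style) denoiser by gradient descent on a smoothed TV term; the parameters and test image are illustrative and not those used in the study:

```python
import numpy as np

def tv_denoise(img, weight=0.1, step=0.05, iters=400, eps=1e-3):
    """Gradient descent on the (smoothed) ROF objective
    0.5 * ||u - img||^2 + weight * sum sqrt(|grad u|^2 + eps),
    a simplified stand-in for a Total Variation Minimization filter."""
    u = img.copy()
    for _ in range(iters):
        gx = np.diff(u, axis=1, append=u[:, -1:])  # forward differences,
        gy = np.diff(u, axis=0, append=u[-1:, :])  # Neumann boundaries
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag                # normalized gradient field
        div = (np.diff(px, axis=1, prepend=0)      # discrete divergence
               + np.diff(py, axis=0, prepend=0))   # (negative adjoint of grad)
        u -= step * ((u - img) - weight * div)
    return u

# Noisy step edge: TV filtering should suppress noise but keep the edge.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

TV minimization penalizes the total gradient magnitude, so it flattens noise in homogeneous regions while leaving sharp edges largely intact, which is what makes it attractive for low-dose reconstructions.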
Using image quality parameters, it was found that the Maximum Likelihood Expectation
Maximization algorithm performed best of the three at lower radiation doses, especially
with the filter. In sum, the results showed the algorithm's potential for obtaining quality images
at low doses.
System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging
In the past decade, many new X-ray based imaging technologies have emerged for different diagnostic purposes or imaging tasks. However, each faces one or more specific problems that prevent it from being effectively or efficiently employed. In this dissertation, four novel X-ray based imaging technologies are discussed: propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). For these imaging modalities, system characteristics are analyzed or optimized reconstruction methods are proposed. In the first part, we investigated the unique properties of the propagation-based phase-contrast imaging technique when combined with X-ray tomosynthesis. The Fourier slice theorem implies that the high-frequency components collected in the tomosynthesis data can be reconstructed more reliably. It is observed that the fringes or boundary enhancement introduced by the phase-contrast effects can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from a few-view D-XPCT data set. By introducing a proper mask, the high-frequency contents of the image can be theoretically preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being oversmoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the third part, we proposed a practical method to improve the quantitative accuracy of projection-based dual-energy material decomposition.
It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements can achieve a stabilized numerical solution of the decomposition problem, thus overcoming the disadvantage of the conventional approach, which is extremely sensitive to noise corruption. In the final part, we described the modified filtered backprojection and iterative image reconstruction algorithms developed specifically for TBCT. Special parallelization strategies were designed to facilitate GPU computing, demonstrating the capability to produce high-quality reconstructed volumetric images at very high computational speed. For all the investigations mentioned above, both simulation and experimental studies were conducted to demonstrate the feasibility and effectiveness of the proposed methodologies.
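The stabilizing effect of a total-projection-length constraint can be sketched as follows: with two basis materials and two energies, substituting t2 = L - t1 reduces the decomposition to a well-posed one-dimensional least-squares problem. The attenuation coefficients below are illustrative values, not taken from the dissertation:

```python
import numpy as np

# Assumed linear attenuation coefficients (1/cm) for two basis
# materials at a low and a high energy -- illustrative values only.
MU = np.array([[0.35, 0.20],    # material 1: [low, high]
               [0.25, 0.18]])   # material 2: [low, high]

def decompose(m_low, m_high, L):
    """Dual-energy decomposition with a total-projection-length
    constraint t1 + t2 = L: substituting t2 = L - t1 turns the 2x2
    system into a 1-D least-squares fit over the two energies."""
    m = np.array([m_low, m_high])
    d = MU[0] - MU[1]            # coefficient difference per energy
    rhs = m - MU[1] * L          # measurements with material 2 baseline removed
    t1 = (d @ rhs) / (d @ d)     # closed-form 1-D least squares
    return t1, L - t1

# Forward-simulate measurements for known thicknesses, then invert.
t_true = (2.0, 3.0)             # cm of each material
m_low  = MU[0, 0] * t_true[0] + MU[1, 0] * t_true[1]
m_high = MU[0, 1] * t_true[0] + MU[1, 1] * t_true[1]
t1, t2 = decompose(m_low, m_high, L=5.0)
```

The unconstrained 2x2 system is nearly singular when the two materials have similar spectral behaviour, so noise is strongly amplified; the length constraint removes one degree of freedom and stabilizes the solution, which is the effect the dissertation describes.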
Sensing and Signal Processing in Smart Healthcare
In the last decade, we have witnessed the rapid development of electronic technologies that are transforming our daily lives. Such technologies are often integrated with various sensors that facilitate the collection of human motion and physiological data and are equipped with wireless communication modules such as Bluetooth, radio frequency identification, and near-field communication. In smart healthcare applications, designing ergonomic and intuitive human–computer interfaces is crucial because a system that is not easy to use will create a huge obstacle to adoption and may significantly reduce the efficacy of the solution. Signal and data processing is another important consideration in smart healthcare applications because it must ensure high accuracy with a high level of confidence in order for the applications to be useful for clinicians in making diagnosis and treatment decisions. This Special Issue is a collection of 10 articles selected from a total of 26 contributions. These contributions span the areas of signal processing and smart healthcare systems, mostly contributed by authors from Europe, including Italy, Spain, France, Portugal, Romania, Sweden, and the Netherlands. Authors from China, Korea, Taiwan, Indonesia, and Ecuador are also included.
Pattern recognition systems design on parallel GPU architectures for breast lesions characterisation employing multimodality images
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
The aim of this research was to address the computational complexity in designing multimodality Computer-Aided Diagnosis (CAD) systems for characterising breast lesions, by harnessing the general purpose computational potential of consumer-level Graphics Processing Units (GPUs) through parallel programming methods. The complexity in designing such systems lies on the increased dimensionality of the problem, due to the multiple imaging modalities involved, on the inherent complexity of optimal design methods for securing high precision, and on assessing the performance of the design prior to deployment in a clinical environment, employing unbiased system evaluation methods. For the purposes of this research, a Pattern Recognition (PR)-system was designed to provide highest possible precision by programming in parallel the multiprocessors of the NVIDIA’s GPU-cards, GeForce 8800GT or 580GTX, and using the CUDA programming framework and C++. The PR-system was built around the Probabilistic Neural Network classifier and its performance was evaluated by a re-substitution method, for estimating the system’s highest accuracy, and by the external cross validation method, for assessing the PR-system’s unbiased accuracy to new, “unseen” by the system, data. Data comprised images of patients with histologically verified (benign or malignant) breast lesions, who underwent both ultrasound (US) and digital mammography (DM). Lesions were outlined on the images by an experienced radiologist, and textural features were calculated. Regarding breast lesion classification, the accuracies for discriminating malignant from benign lesions were, 85.5% using US-features alone, 82.3% employing DM-features alone, and 93.5% combining US and DM features. Mean accuracy to new “unseen” data for the combined US and DM features was 81%.
Those classification accuracies were about 10% higher than the accuracies achieved on a single CPU using sequential programming methods, and the design was 150-fold faster. In addition, benign lesions were found to be smoother, more homogeneous, and to contain larger structures. Additionally, the PR-system design was adapted to tackle other medical problems, as a proof of its generalisation. These included classification of rare brain tumours (achieving 78.6% overall accuracy (OA) and 73.8% estimated generalisation accuracy (GA), and accelerating system design 267 times), discrimination of patients with micro-ischemic and multiple sclerosis lesions (90.2% OA and 80% GA, with 32-fold design acceleration), classification of normal and pathological knee cartilages (93.2% OA and 89% GA, with 257-fold design acceleration), and separation of low- from high-grade laryngeal cancer cases (93.2% OA and 89% GA, with 130-fold design acceleration). The proposed PR-system improves breast-lesion discrimination accuracy, may be redesigned on site when new verified data are incorporated in its depository, and may serve as a second-opinion tool in a clinical environment.
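A minimal sketch of the Probabilistic Neural Network idea around which the PR-system was built (a Gaussian Parzen-window density estimate per class), shown here on synthetic two-dimensional features; this is a toy illustration, not the thesis's CUDA implementation:

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Probabilistic Neural Network: estimate a class-conditional density
    with a Gaussian kernel per training sample, then assign each test
    point to the class with the highest average kernel response."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = ((X_train - x) ** 2).sum(axis=1)          # squared distances
        k = np.exp(-d2 / (2.0 * sigma ** 2))           # Gaussian kernels
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# Two well-separated synthetic "benign"/"malignant" feature clusters
rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 0.3, size=(20, 2))
X1 = rng.normal(3.0, 0.3, size=(20, 2))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 20 + [1] * 20)
pred = pnn_predict(X_train, y_train, np.array([[0.1, 0.0], [2.9, 3.1]]))
```

The kernel evaluations for each test point are independent of one another, which is why the PNN maps well onto GPU multiprocessors.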
Media gateway using a GPU
Master's degree in Computer and Telematics Engineering
Optimizing Image Reconstruction in Electrical Impedance Tomography
The thesis presents, analyzes, and discusses the optimization of algorithms that reconstruct images of unknown specific conductivity from data acquired via electrical impedance tomography.
In this context, the author provides a brief mathematical description of the forward and inverse tasks solved by using diverse approaches, characterizes relevant measurement techniques and data acquisition procedures, and discusses available numerical tools. Procedurally, the initial working stages involved analyzing the methods for optimizing those parameters of the model that influence the reconstruction accuracy; demonstrating approaches to the parallel processing of the algorithms; and outlining a survey of available instruments to acquire the tomographic data. The obtained knowledge then yielded a process for optimizing the parameters of the mathematical model, thus allowing the model to be designed precisely, based on the measured data; such a precondition eventually reduced the uncertainty in reconstructing the specific conductivity distribution. When forming the numerical model, the author investigated the possibilities and overall impact of combining the open and closed domains with various regularization methods and mesh element scales, considering both the character of the conductivity reconstruction error and the computational intensity. A complementary task resolved within the broader scheme outlined above lay in parallelizing the reconstruction subalgorithms by using a multi-core graphics card. The results of the thesis are directly reflected in a reduced reconstruction uncertainty (through an optimization of the initial conductivity value, the placement of the electrodes, the shape deformation of the domains, the regularization methods, and the domain types) and accelerated computation via parallelized algorithms. The actual research benefited from an in-house designed automated tomography unit.
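A linearized, Tikhonov-regularized update of the kind commonly used for such ill-posed inverse problems can be sketched as follows; the sensitivity matrix and residual below are synthetic stand-ins, not the thesis's model:

```python
import numpy as np

def tikhonov_step(J, dv, lam=1e-3):
    """One linearized (Gauss-Newton style) update for an EIT-type inverse
    problem: solve (J^T J + lam*I) dsigma = J^T dv, where J is the
    sensitivity (Jacobian) matrix and dv the voltage residual. The
    regularization parameter lam trades data fit against stability."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dv)

# Ill-conditioned toy sensitivity matrix (rapidly decaying column scales,
# mimicking the poor sensitivity of EIT to interior conductivity changes)
rng = np.random.default_rng(2)
J = rng.standard_normal((40, 10)) @ np.diag(np.logspace(0, -4, 10))
dsigma_true = rng.standard_normal(10)
dv = J @ dsigma_true + 1e-3 * rng.standard_normal(40)  # noisy residual
ds = tikhonov_step(J, dv)
```

Without the `lam*I` term the small singular values of `J` amplify the measurement noise; regularization suppresses that amplification at the cost of biasing poorly observed components toward zero, which is exactly the reconstruction-uncertainty trade-off the thesis optimizes.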