9 research outputs found

    Comparison of different image reconstruction algorithms for Digital Breast Tomosynthesis and assessment of their potential to reduce radiation dose

    Master's thesis in Engineering Physics, 2022, Universidade de Lisboa, Faculdade de Ciências. Digital Breast Tomosynthesis is a three-dimensional medical imaging technique that provides sectional views of the breast. Obtaining multiple slices of the breast is an advantage over conventional mammography because it increases the potential for breast cancer detection. Conventional mammography, despite its success as a screening tool, suffers from limited specificity and sensitivity and from high recall rates owing to overlapping tissue. Although this newer technique promises better diagnostic results, its acquisition methods and image reconstruction algorithms are still under active research. Several articles recommend analytic algorithms, but more recent work highlights the potential of iterative algorithms to increase image quality compared with analytic ones. The scope of this dissertation was to test the hypothesis that iterative algorithms can achieve higher-quality images at lower doses than analytic algorithms. In a first stage, the open-source Tomographic Iterative GPU-based Reconstruction (TIGRE) Toolbox for fast and accurate 3D x-ray image reconstruction was used to reconstruct images acquired with an acrylic phantom. The algorithms used from the toolbox were the Feldkamp-Davis-Kress algorithm, the Simultaneous Algebraic Reconstruction Technique, and the Maximum Likelihood Expectation Maximization algorithm. In a second and final stage, the possibility of further reducing the radiation dose using image post-processing tools was evaluated: a Total Variation Minimization filter was applied to the images reconstructed with the TIGRE algorithm that provided the best image quality, and these were compared with images from the commercial unit used for the acquisitions. Based on image quality parameters, the Maximum Likelihood Expectation Maximization algorithm performed best of the three at lower radiation doses, especially with the filter. In sum, the results showed the potential of this algorithm for obtaining images of adequate quality at low doses.
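
    The thesis itself used the TIGRE toolbox's GPU implementations; purely as a minimal illustration of the two ingredients singled out above, the sketch below runs the MLEM multiplicative update and a post-reconstruction Total Variation filter on a toy NumPy problem. The dense matrix A, the problem sizes and the TV weight are placeholders, not the thesis's geometry or settings.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle  # TV minimisation filter

        rng = np.random.default_rng(0)
        n_pix, n_rays = 64, 128                   # toy sizes, not a real DBT geometry
        A = rng.random((n_rays, n_pix))           # stand-in for the forward projector
        x_true = rng.random(n_pix)
        y = rng.poisson(A @ x_true * 50) / 50.0   # noisy simulated projections

        x = np.ones(n_pix)                        # MLEM needs a positive initial estimate
        sens = A.T @ np.ones(n_rays)              # sensitivity image A^T 1
        for _ in range(20):                       # MLEM: x <- x / (A^T 1) * A^T (y / (A x))
            ratio = y / np.clip(A @ x, 1e-12, None)
            x *= (A.T @ ratio) / np.clip(sens, 1e-12, None)

        x_slice = x.reshape(8, 8)                               # treat the result as an 8x8 slice
        x_filtered = denoise_tv_chambolle(x_slice, weight=0.1)  # TV post-processing step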

    CoreDiff: Contextual Error-Modulated Generalized Diffusion Model for Low-Dose CT Denoising and Generalization

    Low-dose computed tomography (CT) images suffer from noise and artifacts due to photon starvation and electronic noise. Recently, some works have attempted to use diffusion models to address the over-smoothness and training instability encountered by previous deep-learning-based denoising models. However, diffusion models suffer from long inference times due to the large number of sampling steps involved. Very recently, the cold diffusion model generalized classical diffusion models and offers greater flexibility. Inspired by cold diffusion, this paper presents a novel COntextual eRror-modulated gEneralized Diffusion model for low-dose CT (LDCT) denoising, termed CoreDiff. First, CoreDiff utilizes LDCT images to displace the random Gaussian noise and employs a novel mean-preserving degradation operator to mimic the physical process of CT degradation, significantly reducing the number of sampling steps thanks to the informative LDCT image serving as the starting point of the sampling process. Second, to alleviate the error accumulation caused by the imperfect restoration operator in the sampling process, we propose a novel ContextuaL Error-modulAted Restoration Network (CLEAR-Net), which leverages contextual information to keep the sampling process from introducing structural distortion and modulates time-step embedding features for better alignment with the input at the next time step. Third, to generalize rapidly to a new, unseen dose level with as few resources as possible, we devise a one-shot learning framework that makes CoreDiff generalize faster and better using only a single LDCT image (un)paired with a normal-dose CT (NDCT) image. Extensive experimental results on two datasets demonstrate that CoreDiff outperforms competing methods in denoising and generalization performance, with a clinically acceptable inference time. Source code is made available at https://github.com/qgao21/CoreDiff.
    Comment: IEEE Transactions on Medical Imaging, 202
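
    As a rough sketch of the mean-preserving degradation described above (my reading of the abstract, not the authors' code; the linked repository has the actual implementation), the forward process can be pictured as a convex interpolation from the clean NDCT image toward the measured LDCT image, so the weights sum to one at every step and the informative LDCT image, rather than Gaussian noise, is the end point from which sampling starts.

        import numpy as np

        def degrade(x_ndct, x_ldct, t, T):
            """Mean-preserving degradation: the two weights sum to 1 for every step t."""
            alpha = t / T
            return (1.0 - alpha) * x_ndct + alpha * x_ldct

        T = 10                                              # far fewer steps than classical diffusion
        x_ndct = np.random.rand(64, 64)                     # stand-in normal-dose slice
        x_ldct = x_ndct + 0.1 * np.random.randn(64, 64)     # stand-in low-dose slice
        x_t = degrade(x_ndct, x_ldct, t=5, T=T)             # halfway along the degradation chain
        # Sampling would start at x_ldct (t = T) and apply a learned restoration network
        # step by step back toward t = 0; CLEAR-Net additionally conditions each step on
        # contextual information to suppress structural distortion.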

    Improving Image Reconstruction for Digital Breast Tomosynthesis

    Digital breast tomosynthesis (DBT) has been developed to reduce the issue of overlapping tissue in conventional 2-D mammography for breast cancer screening and diagnosis. In the DBT procedure, the patient's breast is compressed with a paddle and a sequence of x-ray projections is taken within a small angular range. Tomographic reconstruction algorithms are then applied to these projections, generating tomosynthesized image slices of the breast so that radiologists can read the breast slice by slice. Studies have shown that DBT can reduce both false-negative diagnoses of breast cancer and false-positive recalls compared to mammography alone. This dissertation focuses on improving image quality for DBT reconstruction. Chapter I briefly introduces the concept of DBT and the motivation for my study. Chapter II covers the background of my research, including the concept of image reconstruction, the geometry of our experimental DBT system, and figures of merit for image quality. Chapter III introduces our study of the segmented separable footprint (SG) projector. By taking into account the finite size of each detector element, the SG projector improves the accuracy of forward projections in iterative image reconstruction. Owing to more efficient memory access, the SG projector is also faster than the traditional ray-tracing (RT) projector. We applied the SG projector to regular and subpixel reconstructions and demonstrated its effectiveness. Chapter IV introduces a new DBT reconstruction method with detector blur and correlated noise modeling, called the SQS-DBCN algorithm. The SQS-DBCN algorithm is able to significantly enhance microcalcifications (MC) in DBT while preserving the appearance of the soft tissue and mass margins. Comparisons between the SQS-DBCN algorithm and several modified versions of it indicate the importance of modeling the different components of the system physics at the same time. Chapter V investigates truncated projection artifact (TPA) removal algorithms. Among the three algorithms we proposed, the pre-reconstruction-based projection view (PV) extrapolation method provides the best performance; possible improvements of the other two TPA removal algorithms are discussed. Chapter VI examines the effect of source blur on DBT reconstruction. Our analytical calculation demonstrates that the point spread function (PSF) of source blur is highly shift-variant. We used CatSim to simulate digital phantoms, and analysis of the reconstructed images demonstrates that a typical finite-sized focal spot (~0.3 mm) will not affect image quality if the x-ray tube is stationary during data acquisition. For DBT systems with continuous-motion data acquisition, the motion of the x-ray tube is the main cause of the effective source blur and will cause a loss of object contrast. Therefore, modeling the source blur for these DBT systems could potentially improve the reconstructed image quality. The final chapter discusses a few future studies inspired by my PhD research. PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144059/1/jiabei_1.pd
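
    As a toy illustration of why modelling detector blur in the forward model (the central idea of Chapter IV as summarised above) matters, the sketch below projects a small high-contrast object onto a 1-D detector and then applies an assumed Gaussian detector blur; the trivial projector, the blur width and the object are placeholders, not the dissertation's SQS-DBCN model or geometry.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def forward(slice_2d, detector_blur_sigma):
            """Project a 2-D slice onto a 1-D detector, then apply the detector blur."""
            projection = slice_2d.sum(axis=0)               # ideal line integrals
            return gaussian_filter1d(projection, detector_blur_sigma)

        slice_2d = np.zeros((128, 128))
        slice_2d[60:68, 60:68] = 1.0                        # small bright object, e.g. an MC cluster
        y_ideal = slice_2d.sum(axis=0)                      # what a blur-free detector would record
        y_blurred = forward(slice_2d, detector_blur_sigma=1.5)
        # A reconstruction whose forward model accounts for this blur can partly recover
        # the lost contrast, consistent with the MC enhancement reported above.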

    Applications of computational methods in biomedical breast cancer imaging diagnostics: A review

    With the exponential increase in new cases coupled with an increased mortality rate, cancer ranks as the second most prevalent cause of death in the world. Early detection is paramount for suitable diagnosis and effective treatment of different kinds of cancer, but it is limited by the accuracy and sensitivity of available diagnostic imaging methods. Breast cancer is the most widely diagnosed cancer among women across the globe and accounts for a high percentage of total cancer deaths, requiring an intensive, accurate, and sensitive imaging approach; indeed, it is treatable when detected at an early stage.

    Medical image reconstruction/processing with GPU in tomosynthesis

    Dissertation for the degree of Master in Biomedical Engineering. Digital Breast Tomosynthesis (DBT) is a recent three-dimensional medical imaging technique, based on digital mammography, that allows better observation of overlapping tissues, especially in dense breasts. The technique consists of obtaining multiple images (slices) of the volume to be reconstructed, thereby enabling a more effective diagnosis, since the various tissues are not superimposed as in a 2D image. The image reconstruction algorithms used in DBT are quite similar to those used in Computed Tomography (CT). There are two classes of image reconstruction algorithms: analytic and iterative. In this work, two iterative reconstruction algorithms were implemented: Maximum Likelihood – Expectation Maximization (ML-EM) and Ordered Subsets – Expectation Maximization (OS-EM). Iterative algorithms yield better results but are computationally very heavy, which is why analytic algorithms have been preferred in clinical practice. With technological advances in computing, it is now possible to considerably reduce the time it takes to reconstruct an image with an iterative algorithm. The algorithms were implemented using General-Purpose computing on Graphics Processing Units (GPGPU). This technique allows a graphics card (GPU – Graphics Processing Unit) to process tasks usually assigned to a computer's processor (CPU – Central Processing Unit), instead of the graphics-processing tasks GPUs are normally associated with. For this project, an NVIDIA® GPU was used, with the Compute Unified Device Architecture (CUDA™) employed to code the reconstruction algorithms. The results showed that the GPU implementation reduced the reconstruction time by approximately a factor of 6.2 relative to the CPU. Regarding image quality, the GPU achieved a level of detail similar to the CPU images, apart from minor differences.
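
    A minimal NumPy sketch of the OS-EM update described above follows; it only illustrates the subset logic (one ML-EM-style multiplicative update per subset), with a dense random matrix standing in for the projector, whereas the dissertation's implementation runs the projection operators as CUDA kernels on the GPU.

        import numpy as np

        def osem(A, y, n_subsets=4, n_iters=5):
            """OS-EM: ML-EM-style multiplicative updates applied subset by subset."""
            n_rays, n_pix = A.shape
            subsets = np.array_split(np.arange(n_rays), n_subsets)
            x = np.ones(n_pix)                                # positive initial estimate
            for _ in range(n_iters):
                for idx in subsets:                           # one sub-iteration per subset
                    As, ys = A[idx], y[idx]
                    ratio = ys / np.clip(As @ x, 1e-12, None)
                    x *= (As.T @ ratio) / np.clip(As.T @ np.ones(len(idx)), 1e-12, None)
            return x

        rng = np.random.default_rng(1)
        A = rng.random((160, 64))                             # toy projector, not a DBT geometry
        y = A @ rng.random(64)                                # noiseless simulated projections
        x_rec = osem(A, y)   # roughly n_subsets times fewer full passes than ML-EM for similar quality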

    On-belt Tomosynthesis: 3D Imaging of Baggage for Security Inspection

    This thesis describes the design, testing and evaluation of `On-belt Tomosynthesis' (ObT): a cost-effective baggage screening system based on limited-angle digital x-ray tomosynthesis and close-range photogrammetry. It is designed to be retrofitted to existing airport conveyor-belt systems and to overcome the limitations of current systems, creating a pseudo-3D imaging system by combining x-ray and optical imaging to form digital tomograms. The ObT design and set-up consist of a configuration of two x-ray sources illuminating 12 strip detectors around a conveyor-belt curve forming a 180° arc. The noise sources and distortions in the acquired ObT x-ray images were investigated, and improvements were demonstrated using the developed image correction methods; as a result, an increase of 45% in image uniformity was shown in the post-correction images. Simulated image reconstruction of objects with lower attenuation coefficients showed the potential of ObT to clearly distinguish between them. Reconstruction of real data showed that objects with larger attenuation differences (copper versus perspex, rather than air versus perspex) could be observed better. The main conclusion from the reconstruction results was that the current imaging method needed further refinement regarding the geometry registration and the image reconstruction. The simulation results confirmed that advancing the experimental method could produce better results than those currently achievable. For the current state of ObT, a standard deviation of 2 mm in (a) the source coordinates and 2° in (b) the detector angles does not affect the image reconstruction results. Therefore, a low-cost single-camera coordination and tracking solution was developed to replace the previously used manual measurements. Results obtained with the developed solution showed that the necessary prerequisites for ObT image reconstruction could be met: the resulting standard deviations averaged 0.4 mm and 1 degree for (a) and (b) respectively.
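
    The abstract does not say which image correction methods were developed; purely as an example of the kind of per-pixel correction that improves strip-detector uniformity, the sketch below applies a standard dark-field/flat-field (offset/gain) correction and a simple uniformity figure. All names and numbers are illustrative assumptions, not the thesis's method.

        import numpy as np

        def flat_field_correct(raw, dark, flat, eps=1e-6):
            """Remove per-pixel offset (dark) and gain (flat) variation from a raw frame."""
            return (raw - dark) / np.clip(flat - dark, eps, None)

        def uniformity(img):
            """Crude uniformity figure for a flat exposure: 1 - coefficient of variation."""
            return 1.0 - img.std() / img.mean()

        rng = np.random.default_rng(2)
        dark = 5.0 + rng.random((12, 256))                          # per-pixel offset, one row per strip
        gain = 1.0 + 0.2 * rng.random((12, 256))                    # per-pixel gain spread
        raw = dark + gain * 100.0 + rng.normal(0, 1, (12, 256))     # flat-field exposure
        flat = dark + gain * 100.0 + rng.normal(0, 1, (12, 256))    # separately acquired flat frame
        print(uniformity(raw - dark), uniformity(flat_field_correct(raw, dark, flat)))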

    Task-based performance analysis of FBP, SART and ML for digital breast tomosynthesis using signal CNR and Channelised Hotelling Observers.

    We assess the performance of filtered backprojection (FBP), the simultaneous algebraic reconstruction technique (SART) and the maximum likelihood (ML) algorithm for digital breast tomosynthesis (DBT) under variations in key imaging parameters, including the number of iterations, number of projections, angular range, initial guess, and radiation dose. This is the first study to compare these algorithms for the application of DBT. We present a methodology for the evaluation of DBT reconstructions and use it to conduct preliminary experiments investigating trade-offs between the selected imaging parameters. This investigation includes trade-offs not previously considered in the DBT literature, such as the use of a stationary detector versus a C-arm imaging geometry. A real breast CT volume serves as a ground-truth digital phantom from which X-ray projections are simulated under the various acquisition parameters. The reconstructed image quality is measured using task-based metrics, namely signal CNR and the AUC of a Channelised Hotelling Observer with Laguerre-Gauss basis functions; the task at hand is the detection of a simulated mass inserted into the breast CT volume. We find that the image quality in limited-view tomography is highly dependent on the particular acquisition and reconstruction parameters used. In particular, we draw the following conclusions. First, we find that optimising the FBP filter design and the SART relaxation parameter yields significant improvements in reconstruction quality from the same projection data. Second, we show that the convergence rate of the maximum likelihood algorithm, optimised with paraboloidal surrogates and conjugate gradient ascent (ML-PSCG), can be greatly accelerated using view-by-view updates. Third, we find that the optimal initial guess is algorithm dependent: we obtained the best results with a zero initial guess for SART and an FBP initial guess for ML-PSCG. Fourth, when the exposure per view is constant, increasing the total number of views within a given angular range improves the reconstruction quality, albeit with diminishing returns; when the total dose of all views combined is constant, there is a trade-off between increased sampling from a larger number of views and increased quantum noise in each view. Fifth, we do not observe significant differences when testing various access ordering schemes, presumably due to the limited angular range of DBT. Sixth, we find that adjusting the z-resolution of the reconstruction can improve image quality, but that this resolution is best adjusted by post-reconstruction binning rather than by declaring lower-resolution voxels. Seventh, we find that the C-arm configuration yields higher image quality than a stationary detector geometry, the difference being most pronounced for the FBP algorithm. Lastly, we find that not all prototype systems found in the literature are currently being run under the best possible system or algorithm configurations. In other words, the present study demonstrates the critical importance (and reward) of using optimisation methodologies such as the one presented here to maximise the DBT reconstruction quality from a single scan of the patient.
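
    For reference, the two task-based metrics used above can be sketched as follows: signal CNR on paired ROIs, and a Channelised Hotelling Observer built from Laguerre-Gauss channel images whose AUC is estimated with a rank statistic. The channel width, number of channels, toy ROIs and the re-use of the same ROIs for training and scoring are simplifying assumptions, not the paper's configuration.

        import numpy as np
        from scipy.special import eval_laguerre

        def signal_cnr(roi_signal, roi_background):
            return abs(roi_signal.mean() - roi_background.mean()) / roi_background.std()

        def lg_channels(size, n_channels=6, a=10.0):
            """Laguerre-Gauss channel images as a (size*size, n_channels) matrix."""
            yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
            r2 = xx**2 + yy**2
            chans = [np.sqrt(2.0 / (np.pi * a**2)) * np.exp(-np.pi * r2 / a**2)
                     * eval_laguerre(j, 2.0 * np.pi * r2 / a**2) for j in range(n_channels)]
            return np.stack([c.ravel() for c in chans], axis=1)

        def cho_auc(signal_rois, background_rois, U):
            """Channelised Hotelling template and AUC (trained and scored on the same ROIs)."""
            vs = signal_rois.reshape(len(signal_rois), -1) @ U      # channel outputs, signal present
            vb = background_rois.reshape(len(background_rois), -1) @ U
            S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vb, rowvar=False))
            w = np.linalg.solve(S, vs.mean(axis=0) - vb.mean(axis=0))   # Hotelling template
            ts, tb = vs @ w, vb @ w
            return (ts[:, None] > tb[None, :]).mean()               # Mann-Whitney AUC estimate

        rng = np.random.default_rng(3)
        grid = np.mgrid[:32, :32]
        mass = np.exp(-((grid[0] - 15.5)**2 + (grid[1] - 15.5)**2) / 20.0)  # simulated mass profile
        background = rng.normal(0.0, 1.0, (100, 32, 32))                   # signal-absent ROIs
        signal_present = background + 0.5 * mass                            # signal-present ROIs
        U = lg_channels(32)
        print(signal_cnr(signal_present[0], background[0]), cho_auc(signal_present, background, U))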

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and understanding the underlying biological process. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis

    Implementing decision tree-based algorithms in medical diagnostic decision support systems

    As a branch of healthcare, medical diagnosis can be defined as identifying a disease based on a patient's signs and symptoms. To this end, the required information is gathered from different sources such as physical examination, medical history and general patient information. The development of smart classification models for medical diagnosis is of great interest among researchers, mainly because machine learning and data mining algorithms are capable of detecting hidden trends among the features of a database. Hence, classifying medical datasets using smart techniques paves the way to designing more efficient medical diagnostic decision support systems. Several databases have been provided in the literature to investigate different aspects of diseases. As an alternative to the available diagnosis tools/methods, this research uses the machine learning algorithms Classification and Regression Tree (CART), Random Forest (RF) and Extremely Randomized Trees or Extra Trees (ET) to develop classification models that can be implemented in computer-aided diagnosis systems. As a decision tree (DT), CART is fast to build and applies to both quantitative and qualitative data. For classification problems, RF and ET employ a number of weak learners such as CART to develop models for classification tasks. We employed the Wisconsin Breast Cancer Database (WBCD), the Z-Alizadeh Sani dataset for coronary artery disease (CAD), and the databases gathered at Ghaem Hospital's dermatology clinic on the response of patients with common and/or plantar warts to cryotherapy and/or immunotherapy. To classify the breast cancer type based on the WBCD, the RF and ET methods were employed; the developed RF and ET models predicted the WBCD class with 100% accuracy in all cases. To choose the proper treatment approach for warts, as well as for CAD diagnosis, the CART methodology was employed. The error analysis revealed that the proposed CART models attain the highest precision for the applications of interest, unmatched by models reported in the literature. The outcome of this study supports the idea that methods like CART, RF and ET not only improve diagnostic precision but also reduce the time and expense needed to reach a diagnosis. However, since these strategies are highly sensitive to the quality and quantity of the input data, more extensive databases with a greater number of independent parameters might be required before the developed models can be applied more widely in practice.
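
    A short scikit-learn sketch of the classifier comparison described above follows; scikit-learn's bundled Wisconsin (diagnostic) breast cancer data is used as a stand-in for the WBCD, and the cross-validation setup is an assumption rather than the study's actual protocol (the Z-Alizadeh Sani and wart-treatment databases are not reproduced here).

        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier  # CART-style decision tree

        X, y = load_breast_cancer(return_X_y=True)
        models = {
            "CART": DecisionTreeClassifier(random_state=0),
            "RF": RandomForestClassifier(n_estimators=200, random_state=0),
            "ET": ExtraTreesClassifier(n_estimators=200, random_state=0),
        }
        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
            print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")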