QR-Factorization Algorithm for Computed Tomography (CT): Comparison With FDK and Conjugate Gradient (CG) Algorithms
Even though QR-factorization of the system matrix for tomographic devices has already been used in medical imaging, to date no satisfactory solution has been found for solving large linear systems such as those arising in computed tomography (CT), on the order of 10^6 equations. In CT, the Feldkamp, Davis, and Kress (FDK) back-projection algorithm and iterative methods such as conjugate gradient (CG) are the standard image-reconstruction methods. Since the reconstruction problem can be modeled as a large linear system of equations, QR-factorization of the system matrix can be used to solve it, and current advances in computer science make direct methods feasible at this scale. QR-factorization is a numerically stable direct method for solving linear systems that is beginning to emerge as an alternative to the traditional approaches while combining their strengths: the computationally expensive core of the algorithm is precalculated and stored only once for a given CT system, after which each image reconstruction requires only a matrix-vector product followed by back substitution. Image quality was assessed by comparing contrast-to-noise ratio and noise power spectrum; sharpness was evaluated through the reconstruction of small structures using data measured on a small-animal 3-D CT. Comparisons with the FDK and CG methods show that QR-factorization is able to reconstruct more detailed images for a fixed voxel size.
This work was supported by the Spanish Government under Grant TEC2016-79884-C2 and Grant RTC-2016-5186-1.
Rodríguez-Álvarez, M.; Sánchez, F.; Soriano Asensi, A.; Moliner Martínez, L.; Sánchez Góez, S.; Benlloch Baviera, JM. (2018). QR-Factorization Algorithm for Computed Tomography (CT): Comparison With FDK and Conjugate Gradient (CG) Algorithms. IEEE Transactions on Radiation and Plasma Medical Sciences. 2(5):459-469. https://doi.org/10.1109/TRPMS.2018.2843803
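The factor-once, reuse-per-image scheme described above can be sketched in a few lines of NumPy. The matrix sizes and data below are toy stand-ins (a real CT system matrix has on the order of 10^6 equations), and the factorization call is standard library code, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the CT system matrix A (projection geometry) and
# measured projections b. In a real system, A is fixed by the scanner.
n_rays, n_voxels = 120, 64
A = rng.standard_normal((n_rays, n_voxels))
x_true = rng.standard_normal(n_voxels)
b = A @ x_true

# One-time, expensive step: factor A = QR and store Q and R.
Q, R = np.linalg.qr(A)  # reduced QR: Q is (n_rays, n_voxels), R upper triangular

def back_substitute(R, y):
    """Solve R x = y for upper-triangular R by back substitution."""
    x = np.zeros_like(y)
    for i in range(len(y) - 1, -1, -1):
        x[i] = (y[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
    return x

# Per-image step: one matrix-vector product, then back substitution.
y = Q.T @ b
x_rec = back_substitute(R, y)
```

Since only `Q` and `R` are stored, each new set of projections `b` costs a single product `Q.T @ b` plus the triangular solve, which is the property the abstract highlights.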
Three-dimensional Photoacoustic Tomography System Design Analysis and Optimization
Photoacoustic tomography (PAT) is an emerging imaging modality capable of mapping optical absorption in tissues. It is a hybrid technique that combines the high spatial resolution of ultrasound imaging with the high contrast of optical imaging, and has demonstrated much potential in biomedical applications. Conventional PAT systems employ raster scanning to capture a large number of projections, thus improving image reconstruction at the cost of temporal resolution. Arising from the desire for real-time 3D PA imaging, several groups have begun to design PAT systems with staring arrays, where image acquisition is only limited by the repetition rate of the laser. However, there has been little emphasis on staring array design analysis and optimization. We have developed objective figures of merit for PAT system performance and applied these metrics to improve system design. The results suggested that the developed approach could be used to objectively characterize and improve any PAT system design
The role of HG in the analysis of temporal iteration and interaural correlation
Pattern classification approaches for breast cancer identification via MRI: state‐of‐the‐art and vision for the future
Mining algorithms for Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) of breast tissue are discussed. The algorithms are based on recent advances in multidimensional signal processing and aim to advance the current state-of-the-art in computer-aided detection and analysis of breast tumours observed at various stages of development. The topics discussed include image feature extraction, information fusion using radiomics, multi-parametric computer-aided classification and diagnosis using information fusion of tensorial datasets, as well as Clifford-algebra-based classification approaches and convolutional neural network deep learning methodologies. The discussion also extends to semi-supervised and self-supervised deep learning strategies, as well as generative adversarial networks and related adversarial learning approaches. To address the problem of weakly labelled tumour images, generative adversarial deep learning strategies are considered for the classification of different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence (AI) based framework for more robust image registration that can potentially advance the early identification of heterogeneous tumour types, even when the associated imaged organs are registered as separate entities embedded in more complex geometric spaces. Finally, the general structure of a high-dimensional medical imaging analysis platform based on multi-task detection and learning is proposed as a way forward. The proposed algorithm makes use of novel loss functions that form the building blocks for an adversarial learning methodology applicable to tensorial DCE-MRI. Since some of the approaches discussed are also based on time-lapse imaging, conclusions on the rate of proliferation of the disease can be drawn. The proposed framework can potentially reduce the costs associated with the interpretation of medical images by providing automated, faster and more consistent diagnosis
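As a loose illustration of the feature-level fusion idea mentioned above, the sketch below concatenates toy first-order "radiomic" features from several hypothetical DCE-MRI parameter maps and classifies the fused descriptors with a simple nearest-centroid rule. All names, shapes, and data are invented for illustration and do not reproduce the authors' algorithms:

```python
import numpy as np

rng = np.random.default_rng(1)

def radiomic_features(volume):
    """Toy stand-in for a radiomics extractor: first-order statistics."""
    return np.array([volume.mean(), volume.std(), volume.min(), volume.max()])

def fused_descriptor(maps):
    """Early (feature-level) fusion: concatenate per-map feature vectors."""
    return np.concatenate([radiomic_features(m) for m in maps])

# Hypothetical study: three parameter maps per lesion, two synthetic
# lesion classes with different enhancement statistics.
benign = [[rng.normal(0.0, 1.0, (8, 8, 8)) for _ in range(3)] for _ in range(20)]
malign = [[rng.normal(1.5, 1.0, (8, 8, 8)) for _ in range(3)] for _ in range(20)]

X = np.stack([fused_descriptor(m) for m in benign + malign])  # (40, 12)
y = np.array([0] * 20 + [1] * 20)

# Nearest-centroid classifier on the fused descriptors.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(X[:, None, :] - centroids[None], axis=2), axis=1)
accuracy = (pred == y).mean()
```

The fusion step here is deliberately naive; the point is only that descriptors from separate parameter maps become one vector before classification, which is the "early fusion" pattern the abstract contrasts with tensorial and deep-learning approaches.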
Objective Assessment of Image Quality: Extension of Numerical Observer Models to Multidimensional Medical Imaging Studies
Spanning the fields of engineering and medical image quality, this dissertation proposes a novel framework for diagnostic-performance evaluation based on objective image-quality assessment, an important step in the development of new imaging devices, acquisition protocols, and image-processing techniques used by clinicians and researchers. The objective of this dissertation is to develop computational modeling tools that allow comprehensive task-based assessment, including clinical interpretation of images, regardless of image dimensionality.
Thanks to advances in medical imaging devices, several techniques have improved image quality in settings where the resulting images are multidimensional (e.g., 3D+time, or 4D). To evaluate the performance of new imaging devices, or to optimize various design parameters and algorithms, quality should be measured using an appropriate image-quality figure of merit (FOM). Classical FOMs such as bias and variance, or mean-square error, have been widely used in the past. Unfortunately, they do not reflect the fact that the principal agent in medical decision-making is frequently a human observer, nor do they account for the specific diagnostic task.
The standard goal for image-quality assessment is a task-based approach in which one evaluates human-observer performance on a specified diagnostic task (e.g., detection of the presence of lesions). However, having a human observer perform the tasks is costly and time-consuming. To facilitate practical task-based assessment of image quality, a numerical observer is required as a surrogate for human observers. Numerical observers for detection tasks have previously been studied in both research and industry; however, little effort has been devoted to developing observers for multidimensional imaging studies (e.g., 4D). Without numerical-observer tools that can accommodate all the information embedded in a series of images, performance assessment of a new technique that generates multidimensional data is complex and limited. Consequently, key questions remain unanswered about how much these new multidimensional images improve image quality for a specific clinical task.
To address this gap, this dissertation proposes a new numerical-observer methodology to assess the improvement achieved from newly developed imaging technologies. This numerical-observer approach can be generalized to exploit pertinent statistical information in multidimensional images and accurately predict human-observer performance regardless of the complexity of the image domain. Part I of this dissertation aims to develop a numerical observer that accommodates multidimensional images to process correlated signal components and appropriately incorporate them into an absolute FOM. Part II of this dissertation aims to apply the model developed in Part I to selected clinical applications with multidimensional images including: 1) respiratory-gated positron emission tomography (PET) in lung cancer (3D+t), 2) kinetic parametric PET in head-and-neck cancer (3D+k), and 3) spectral computed tomography (CT) in atherosclerotic plaque (3D+e).
The author compares the task-based performance of the proposed approach to that of conventional methods, evaluated under the widely used signal-known-exactly/background-known-exactly paradigm, in which the properties of a target object (e.g., a lesion) are specified on highly realistic clinical backgrounds. A realistic target object is generated with specific properties and inserted into a set of images to create pathological scenarios for the performance evaluation, e.g., lesions in the lungs or plaques in an artery. The regions of interest (ROIs) of the target objects are formed over an ensemble of data measurements acquired under identical conditions and evaluated for the inclusion of useful information from the different complex domains (i.e., 3D+t, 3D+k, 3D+e). This work provides an image-quality assessment metric with no dimensional limitation that could substantially improve the assessment of performance achieved by new imaging developments that make use of high-dimensional data
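A minimal numerical-observer sketch in the spirit of the signal-known-exactly/background-known-exactly paradigm described above: a Hotelling observer template is estimated from synthetic signal-present and signal-absent images, and performance is summarized by a detectability index. The signal shape, amplitude, and dimensions are arbitrary toy choices, not the dissertation's models:

```python
import numpy as np

rng = np.random.default_rng(2)

# Known Gaussian "lesion" (signal-known-exactly) on white-noise backgrounds.
n_pix, n_imgs = 16, 500
xx, yy = np.meshgrid(np.arange(n_pix), np.arange(n_pix))
signal = np.exp(-((xx - 8) ** 2 + (yy - 8) ** 2) / 8.0).ravel()

absent = rng.standard_normal((n_imgs, n_pix * n_pix))
present = rng.standard_normal((n_imgs, n_pix * n_pix)) + 0.5 * signal

# Hotelling template: w = K^-1 (mean_present - mean_absent), where K is
# the pooled within-class covariance estimated from the image ensembles
# (lightly regularized so the solve is well conditioned).
diff = present.mean(axis=0) - absent.mean(axis=0)
K = 0.5 * (np.cov(absent.T) + np.cov(present.T)) + 1e-3 * np.eye(n_pix * n_pix)
w = np.linalg.solve(K, diff)

# Decision variables and detectability index (SNR of the test statistic).
t_a, t_p = absent @ w, present @ w
d_prime = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var() + t_a.var()))
```

The same recipe extends to the multidimensional settings discussed above (3D+t, 3D+k, 3D+e) by letting each row be the flattened multidimensional ROI, which is essentially the generalization the dissertation pursues.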
Potentials and caveats of AI in Hybrid Imaging
State-of-the-art patient management frequently mandates investigation of both the anatomy and the physiology of the patient. Hybrid imaging modalities such as PET/MRI, PET/CT and SPECT/CT can provide both structural and functional information about the investigated tissues in a single examination. With the introduction of such advanced hardware fusion, new problems arise, such as the exceedingly large amounts of multi-modality data, which require novel approaches for extracting the maximum of clinical information from large sets of multi-dimensional imaging data. Artificial intelligence (AI) has emerged as one of the leading technologies showing promise for highly integrative analysis of multi-parametric data. Specifically, the usefulness of AI algorithms in medical imaging has been heavily investigated in the realms of (1) image acquisition and reconstruction, (2) post-processing and (3) data mining and modelling. Here, we aim to provide an overview of the challenges encountered in hybrid imaging and discuss how AI algorithms can facilitate potential solutions. In addition, we highlight the pitfalls and challenges of using advanced AI algorithms in the context of hybrid imaging and provide suggestions for building robust AI solutions that enable reproducible and transparent research
Investigating the build-up of precedence effect using reflection masking
The auditory processing level involved in the build-up of precedence [Freyman et al., J. Acoust. Soc. Am. 90, 874–884 (1991)] was investigated here by employing reflection masked threshold (RMT) techniques. Given that RMT techniques are generally assumed to address lower levels of auditory signal processing, such an approach represents a bottom-up approach to the buildup of precedence. Three conditioner configurations measuring a possible buildup of reflection suppression were compared to the baseline RMT for four reflection delays ranging from 2.5 to 15 ms. No buildup of reflection suppression was observed for any of the conditioner configurations. Buildup of template (a decrease in RMT for two of the conditioners), on the other hand, was found to be delay dependent. For five of six listeners, at reflection delays of 2.5 and 15 ms, RMT decreased relative to the baseline. For 5- and 10-ms delays, no change in threshold was observed. It is concluded that the low-level auditory processing involved in RMT is not sufficient to realize a buildup of reflection suppression. This confirms suggestions that higher-level processing is involved in precedence-effect buildup. The observed enhancement of reflection detection (RMT) may contribute to active suppression at higher processing levels