PyTomography: A Python Library for Quantitative Medical Image Reconstruction
Background: There is a scarcity of open-source libraries in medical imaging
dedicated to both (i) the development and deployment of novel reconstruction
algorithms and (ii) support for clinical data.
Purpose: To create and evaluate a GPU-accelerated, open-source, and
user-friendly image reconstruction library, designed to serve as a central
platform for the development, validation, and deployment of novel tomographic
reconstruction algorithms.
Methods: PyTomography was developed using Python and inherits the
GPU-accelerated functionality of PyTorch for fast computations. The software
uses a modular design that decouples the system matrix from reconstruction
algorithms, simplifying the process of integrating new imaging modalities or
developing novel reconstruction techniques. As example developments, SPECT
reconstruction in PyTomography is validated against both vendor-specific
software and alternative open-source libraries. Bayesian reconstruction
algorithms are implemented and validated.
Results: PyTomography is consistent with both vendor software and alternative
open-source libraries for standard clinical SPECT reconstruction, while
providing significant computational advantages. As example applications,
Bayesian reconstruction algorithms incorporating anatomical information are
shown to outperform the traditional ordered subset expectation maximization
(OSEM) algorithm in quantitative image analysis, and PSF modeling in PET
imaging is shown to reduce blurring artifacts.
Conclusions: We have developed and publicly shared PyTomography, a highly
optimized and user-friendly software for quantitative image reconstruction of
medical images, with a class hierarchy that fosters the development of novel
imaging applications.
Comment: 26 pages, 7 figures
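The modular design described above, in which the system matrix is decoupled from the reconstruction algorithm, can be illustrated with a minimal OSEM loop. This is a sketch using a dense NumPy array as a stand-in system matrix, not PyTomography's actual PyTorch-based API; all names are illustrative:

```python
import numpy as np

def osem(system_matrix, projections, n_subsets=4, n_iters=10):
    """Minimal OSEM sketch: the algorithm touches the system matrix
    only through forward (A @ x) and back (A.T @ r) projection, so
    swapping in another modality's operator leaves the loop unchanged.

    system_matrix: (n_bins, n_voxels) array mapping image to data.
    projections:   measured data, shape (n_bins,).
    """
    n_bins, n_voxels = system_matrix.shape
    x = np.ones(n_voxels)                      # uniform initial estimate
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for idx in subsets:
            A = system_matrix[idx]             # rows for this subset
            sens = A.sum(axis=0)               # subset sensitivity image
            ratio = projections[idx] / np.maximum(A @ x, 1e-12)
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

Because the update uses only forward and back projection, the same loop serves SPECT, PET, or CT once the corresponding system matrix (or matrix-free operator) is supplied, which is the decoupling the library's class hierarchy formalizes.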
Fully 3D Implementation of the End-to-end Deep Image Prior-based PET Image Reconstruction Using Block Iterative Algorithm
Deep image prior (DIP) has recently attracted attention because it enables
unsupervised positron emission tomography (PET) image reconstruction that
does not require any prior training dataset. In this paper, we present the
first attempt to implement an end-to-end DIP-based fully 3D PET image
reconstruction method that incorporates a forward-projection model into a loss
function. To implement practical fully 3D PET image reconstruction, which
previously could not be performed due to graphics processing unit memory
limitations, we modify the DIP optimization to a block-iterative scheme and
sequentially learn an ordered sequence of block sinograms. Furthermore, a
relative difference penalty (RDP) term is added to the loss function to
enhance the quantitative
PET image accuracy. We evaluated our proposed method using a Monte Carlo
simulation with [18F]FDG PET data of a human brain and a preclinical study
on monkey-brain [18F]FDG PET data. The proposed method was compared with
maximum-likelihood expectation maximization (EM), maximum a posteriori EM
with RDP, and hybrid DIP-based PET reconstruction methods. The simulation
results showed that the proposed method improved PET image quality by
reducing statistical noise while preserving the contrast of brain structures
and an inserted tumor compared with the other algorithms. In the preclinical experiment,
finer structures and better contrast recovery were obtained by the proposed
method. This indicated that the proposed method can produce high-quality images
without a prior training dataset. Thus, the proposed method is a key enabling
technology for the straightforward and practical implementation of end-to-end
DIP-based fully 3D PET image reconstruction.
Comment: 9 pages, 10 figures
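The relative difference penalty added to the loss has a standard closed form; a minimal one-dimensional sketch, not the paper's implementation, looks like this (the `gamma` default and `eps` guard are illustrative choices):

```python
import numpy as np

def rdp_penalty(x, gamma=2.0, eps=1e-9):
    """Relative difference penalty over nearest neighbors in 1D.

    Penalizes differences between neighboring voxels relative to their
    magnitude, so flat regions are smoothed while strong edges are
    penalized less than under a quadratic prior; gamma controls how
    strongly edges are preserved.
    """
    d = x[1:] - x[:-1]                 # neighbor differences
    s = x[1:] + x[:-1]                 # neighbor sums
    return float(np.sum(d ** 2 / (s + gamma * np.abs(d) + eps)))
```

In a penalized reconstruction, this term is multiplied by a regularization weight and added to the negative log-likelihood loss being minimized.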
A Case Study Tested Framework for Multivariate Analyses of Microbiomes: Software for Microbial Community Comparisons
The study of microbiomes is important because our understanding of microbial communities is providing insight into human health and many other areas of interest. Researchers often use genomic data to study microbial organisms, demonstrating differences from one organism to the next. Metagenomic data is utilized to study communities of microbial organisms. The research described herein involved the development of a collection of computational methods.
This suite of computational methods and tools (written in the R and Perl languages) has become a framework used for metagenomic data analysis and result visualization. Multivariate analyses such as Linear Discriminant Analysis (LDA) are used to determine which microbial organisms are useful in distinguishing between microbial communities. The differences between communities are visualized in two or three dimensions using dimensionality reduction techniques. Other analyses provided by the framework include, but are not limited to, feature selection, cross-validation, multi-objective optimization, side-by-side comparisons of communities, and identification of core members of a microbial community.
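As an illustration of the LDA step (the framework itself is written in R and Perl), a two-class Fisher discriminant can be computed directly from an abundance matrix. This is a minimal sketch with illustrative names, not the framework's code:

```python
import numpy as np

def fisher_lda_direction(X, y):
    """Two-class Fisher LDA: the direction along which two microbial
    communities are best separated. Features with large weights in the
    returned vector correspond to the taxa most useful for
    distinguishing the communities.

    X: (n_samples, n_features) abundance matrix.
    y: binary labels (0 or 1) giving each sample's community.
    """
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += 1e-6 * np.eye(X.shape[1])    # regularize for small sample sizes
    w = np.linalg.solve(Sw, m1 - m0)   # whiten within-class scatter
    return w / np.linalg.norm(w)
```

Projecting samples onto the returned direction yields the low-dimensional view in which the two communities separate most cleanly, which is the basis of the two- and three-dimensional visualizations mentioned above.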
The effectiveness of these methods and techniques was verified in multiple real-world case studies, such as body fat classification of elk using the fecal microbiome, identification of important changes in community composition when permafrost is thawed, and longitudinal classification of intestinal locations. The fecal microbiome may be used in the future to assist in assessing the health of animal populations using non-invasive samples. Additionally, the analysis of thawing permafrost may yield insight into the release of greenhouse gases into the atmosphere, furthering our understanding of global warming. Our understanding of the intestinal microbiome may someday grant us understanding and control of our intestinal well-being, which is a significant factor in immune system response and overall health.
Improve the Performance and Scalability of RAID-6 Systems Using Erasure Codes
RAID-6 is widely used to tolerate concurrent failures of any two disks, providing a higher level of reliability with the support of erasure codes. Among many implementations, one class of codes, called Maximum Distance Separable (MDS) codes, aims to offer data protection against disk failures with optimal storage efficiency. Typical MDS codes comprise horizontal and vertical codes. However, because of the limitations of the horizontal parity or diagonal/anti-diagonal parities used in MDS codes, existing RAID-6 systems suffer from several important performance and scalability problems, such as low write performance, unbalanced I/O, and high migration cost during scaling. To address these problems, this dissertation designs techniques for high-performance and scalable RAID-6 systems, including high-performance, load-balancing erasure codes (H-Code and HDP Code) and a Stripe-based Data Migration (SDM) scheme. We also propose a flexible MDS Scaling Framework (MDS-Frame) that integrates H-Code, HDP Code, and the SDM scheme. Detailed evaluation results are also given in this dissertation.
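As background for the parity schemes discussed, the generic RAID-6 P/Q construction (Reed-Solomon coding over GF(2^8), not the dissertation's H-Code or HDP Code) can be sketched as follows:

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the RAID-6 polynomial 0x11d."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D              # reduce modulo x^8 + x^4 + x^3 + x^2 + 1
        b >>= 1
    return p

def raid6_parity(data_blocks):
    """Compute the P (XOR) and Q (Reed-Solomon) parity blocks.

    P alone recovers any single lost block; P and Q together recover
    any two concurrent block failures.
    """
    P = bytearray(len(data_blocks[0]))
    Q = bytearray(len(data_blocks[0]))
    g = 1                          # generator power 2^i for block i
    for block in data_blocks:
        for j, byte in enumerate(block):
            P[j] ^= byte
            Q[j] ^= gf_mul(g, byte)
        g = gf_mul(g, 2)
    return bytes(P), bytes(Q)
```

In horizontal codes the P and Q strips concentrate parity updates on dedicated disks, which is one source of the unbalanced I/O this dissertation targets with its load-balancing code designs.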