
    A Comparison of Alternating Minimization and Expectation Maximization Algorithms for Single Source Gamma Ray Tomography

    Lange and Carson (1984 J. Comput. Assist. Tomogr. 8 306-16) defined image reconstruction for transmission tomography as a maximum likelihood estimation problem and derived an expectation maximization (EM) algorithm to obtain the maximum likelihood image estimate. However, in the maximization step, or M-step, of the EM algorithm, an approximation is made in the solution which can affect the image quality, particularly in the case of domains with highly attenuating material. O'Sullivan and Benac (2007 IEEE Trans. Med. Imaging 26 283-97) reformulated the maximum likelihood problem as a double minimization of an I-divergence to obtain a family of image reconstruction algorithms, called the alternating minimization (AM) algorithm. The AM algorithm increases the log-likelihood function while minimizing the I-divergence. In this work, we implement the AM algorithm for image reconstruction in gamma ray tomography for industrial applications. Experimental gamma ray transmission data obtained with a fan beam geometry gamma ray scanner, and simulated transmission data based on a synthetic phantom with two phases (water and air), were considered in this study. Image reconstruction was carried out with these data using the AM and the EM algorithms to determine and quantitatively compare the holdup distribution images of the two phases in the phantoms. When compared to the EM algorithm, the AM algorithm shows qualitative and quantitative improvement in the holdup distribution images of the two phases for both the experimental and the simulated gamma ray transmission data. © 2008 IOP Publishing Ltd
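    For reference, a compact statement of the formulation being compared (the notation here is generic rather than taken from the paper): transmission counts d_i along source-detector pairs i are modelled as Poisson with mean q_i(μ) = I_i exp(−Σ_j l_ij μ_j), where μ is the attenuation image and l_ij are intersection lengths. Maximizing the Poisson log-likelihood over μ is equivalent to minimizing the I-divergence

        I(d ‖ q(μ)) = Σ_i [ d_i ln( d_i / q_i(μ) ) − d_i + q_i(μ) ],

    and the AM family carries out this minimization by alternating exact minimizations, so each iteration decreases the I-divergence (equivalently, increases the log-likelihood) without the M-step approximation used in the EM algorithm of Lange and Carson.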

    Alternating Minimization Algorithms for Dual-Energy X-Ray CT Imaging and Information Optimization

    This dissertation contributes toward solutions to two distinct problems linked through the use of common information optimization methods. The first problem is the X-ray computed tomography (CT) imaging problem and the second is the computation of Berger-Tung bounds for the lossy distributed source coding problem. The first problem, discussed through most of the dissertation, is motivated by applications in radiation oncology, including dose prediction in proton therapy and brachytherapy. In proton therapy dose prediction, the stopping power calculation is based on estimates of the electron density and mean excitation energy. In turn, the estimates of the linear attenuation coefficients or the component images from dual-energy CT image reconstruction are used to estimate the electron density and mean excitation energy. Therefore, the quantitative accuracy of the estimates of the linear attenuation coefficients or the component images affects the accuracy of proton therapy dose prediction. In brachytherapy, photons with low energies (approximately 20 keV) are often used for internal treatment. Those photons are attenuated through their interactions with tissues. The dose distribution in the tissue obeys an exponential decay with the linear attenuation coefficient as the parameter in the exponential. Therefore, the accuracy of the estimates of the linear attenuation coefficients at low energy levels has a strong influence on dose prediction in brachytherapy. Numerical studies of the regularized dual-energy alternating minimization (DE-AM) algorithm with different regularization parameters were performed to find ranges of the parameters that can achieve the desired image quality in terms of estimation accuracy and image smoothness. The DE-AM algorithm is an extension of the AM algorithm proposed by O'Sullivan and Benac. Both simulated data and real data reconstructions, as well as system bias and variance experiments, were carried out to demonstrate that the DE-AM algorithm is incapable of reconstructing a high density material accurately with a limited number of iterations (1000 iterations with 33 ordered subsets). This slow convergence phenomenon was then studied via a toy, or scaled-down, problem, indicating a highly ridged objective function. Motivated by the studies demonstrating the slow convergence of the DE-AM algorithm, a new algorithm, the linear integral alternating minimization (LIAM) algorithm, was developed, which estimates the linear integrals of the component images first; the component images can then be recovered by an expectation-maximization (EM) algorithm or linear regression methods. Both simulated and real data were reconstructed by the LIAM algorithm while varying the regularization parameters to ascertain good choices (δ = 500 and λ = 50 for the I0 = 100000 scenario). The results from the DE-AM algorithm applied to the same data were used for comparison. While using only 1/10 of the computation time of the DE-AM algorithm, the LIAM algorithm achieves at least a two-fold improvement in the relative absolute error of the component images in the presence of Poisson noise. This work also explored the reconstruction of image differences from tomographic Poisson data. An alternating minimization algorithm was developed and a monotonic decrease in the objective function was achieved at each iteration. Simulations with random images and tomographic data were presented to demonstrate that the algorithm can recover the difference images with 100% accuracy in the number and identity of the pixels that differ. An extension to 4D CT with simulated tomographic data was also presented and an approach to 4D PET was described. Different approaches for X-ray adaptive sensing were also proposed and reconstructions of simulated data were computed to test these approaches. Early simulation results show improved image reconstruction performance in terms of normalized L2 norm error compared to a non-adaptive sensing method.
For the second problem, an optimization and computational approach was described for characterizing the inner and outer bounds for the achievable rate regions for distributed source coding, known as the Berger-Tung inner and outer bounds. Several two-variable examples were presented to demonstrate the computational capability of the algorithm. For each problem considered that has a sum of distortions on the encoded variables, the inner and outer bound regions coincided. For a problem defined by Wagner and Anantharam with a single joint distortion for the two variables, a gap between the inner and outer bounds was observed in our results. These boundary regions can motivate hypothesized optimal distributions, which can be tested against the first-order necessary conditions for the optimal distributions
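    For context, a generic form of the dual-energy measurement model underlying algorithms of this kind (the symbols below are assumed for illustration and are not the dissertation's notation): with two component images c_1 and c_2, known energy-dependent basis attenuation functions μ_1(E) and μ_2(E), system matrix entries l_ij, and incident spectra I_s,i(E) for the low- and high-energy scans s, the expected counts are

        q_s,i(c_1, c_2) = Σ_E I_s,i(E) exp( − Σ_j l_ij [ c_1,j μ_1(E) + c_2,j μ_2(E) ] ),

    and the component images are estimated by maximizing the Poisson likelihood of the measured counts, typically with regularization. This is the setting in which DE-AM and LIAM operate; LIAM's strategy, as described above, is to estimate the line integrals Σ_j l_ij c_k,j first and only then recover the component images.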

    Automatic Optimization of Alignment Parameters for Tomography Datasets

    As tomographic imaging is being performed at increasingly small scales, the stability of the scanning hardware is of great importance to the quality of the reconstructed image. Instabilities lead to perturbations in the geometrical parameters used in the acquisition of the projections. In particular for electron tomography and high-resolution X-ray tomography, small instabilities in the imaging setup can lead to severe artifacts. We present a novel alignment algorithm for recovering the true geometrical parameters after the object has been scanned, based on measured data. Our method employs an optimization algorithm that combines alignment with reconstruction. We demonstrate that problem-specific design choices made in the implementation are vital to the success of the method. The algorithm is tested in a set of simulation experiments. Our experimental results indicate that the method is capable of aligning tomography datasets with considerably higher accuracy compared to standard cross-correlation methods
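    As a concrete illustration of the general idea of combining alignment with reconstruction, the sketch below alternates a simple least-squares reconstruction with per-projection shift re-estimation obtained by matching measured projections against reprojections of the current estimate. It is a minimal projection-matching scheme under assumed conditions (parallel-beam geometry, integer detector shifts only, an untuned step size), not the algorithm or parameterization proposed in the paper.

import numpy as np
from scipy import ndimage
from scipy.signal import correlate, correlation_lags

def project(img, angles_deg):
    # crude parallel-beam forward projector: rotate the image, then sum along rows
    return np.stack([ndimage.rotate(img, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def backproject(resid, angles_deg, n):
    # adjoint-like operation: smear each 1D residual across the image and rotate back
    bp = np.zeros((n, n))
    for a, r in zip(angles_deg, resid):
        bp += ndimage.rotate(np.tile(r, (n, 1)), a, reshape=False, order=1)
    return bp

def estimate_shift(p_meas, p_model):
    # integer detector shift of the measured projection relative to the reprojection
    c = correlate(p_meas - p_meas.mean(), p_model - p_model.mean(), mode="full")
    lags = correlation_lags(p_meas.size, p_model.size, mode="full")
    return lags[np.argmax(c)]

def align_and_reconstruct(sino, angles_deg, n_outer=5, n_inner=20, step=1e-4):
    # alternate a gradient-type reconstruction with per-projection shift re-estimation;
    # the step size is illustrative and not tuned
    n = sino.shape[1]
    rec = np.zeros((n, n))
    shifts = np.zeros(len(angles_deg))
    for _ in range(n_outer):
        # correct the measured sinogram with the current shift estimates
        sino_c = np.stack([ndimage.shift(row, -s, order=1) for row, s in zip(sino, shifts)])
        # descend on || project(rec) - sino_c ||^2
        for _ in range(n_inner):
            rec += step * backproject(sino_c - project(rec, angles_deg), angles_deg, n)
        # re-estimate each projection's shift by matching it to the current reprojection
        model = project(rec, angles_deg)
        shifts = np.array([estimate_shift(sino[i], model[i])
                           for i in range(len(angles_deg))])
    return rec, shifts

    The paper's method optimizes a richer set of geometrical parameters jointly with the reconstruction; the sketch only conveys the alternating structure that distinguishes such approaches from one-shot cross-correlation alignment.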

    Variationally Constrained Numerical Solution of Electrical Impedance Tomography

    We propose a novel, variational inversion methodology for the electrical impedance tomography problem, where we seek electrical conductivity σ inside a bounded, simply connected domain Ω, given simultaneous measurements of electric currents I and potentials V at the boundary. Explicitly, we make use of natural, variational constraints on the space of admissible functions σ, to obtain efficient reconstruction methods which make best use of the data. We give a detailed analysis of the variational constraints, we propose a variety of reconstruction algorithms and we discuss their advantages and disadvantages. We also assess the performance of our algorithms through numerical simulations and comparisons with other, well established, numerical reconstruction methods
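    For orientation, the underlying boundary-value problem can be stated in its standard form (generic notation): the electric potential u satisfies

        ∇ · ( σ ∇u ) = 0  in Ω,        σ ∂u/∂n = I  on ∂Ω,        V = u|_∂Ω,

    so each applied boundary current pattern I produces a boundary voltage pattern V, and the inverse problem is to recover σ from the resulting current-to-voltage (Neumann-to-Dirichlet) data.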

    Statistical image reconstruction for quantitative computed tomography

    Statistical iterative reconstruction (SIR) algorithms for x-ray computed tomography (CT) have the potential to reconstruct images with less noise and systematic error than the conventional filtered backprojection (FBP) algorithm. More accurate reconstruction algorithms are important for reducing imaging dose and for a wide range of quantitative CT applications. The work presented herein investigates some potential advantages of one such statistically motivated algorithm called Alternating Minimization (AM). A simulation study is used to compare the tradeoff between noise and resolution in images reconstructed with the AM and FBP algorithms. The AM algorithm is employed with an edge-preserving penalty function, which is shown to result in images with contrast-dependent resolution. The AM algorithm always reconstructed images with less image noise than the FBP algorithm. Compared to previous studies in the literature, this is the first work to clearly illustrate that the reported noise advantage when using edge-preserving penalty functions can be highly dependent on the contrast of the object used for quantifying resolution. A polyenergetic version of the AM algorithm, which incorporates knowledge of the scanner’s x-ray spectrum, is then commissioned from data acquired on a commercially available CT scanner. Homogeneous cylinders are used to assess the absolute accuracy of the polyenergetic AM algorithm and to compare systematic errors to conventional FBP reconstruction. Methods to estimate the x-ray spectrum, model the bowtie filter and measure scattered radiation are outlined which support AM reconstruction to within 0.5% of the expected ground truth. The polyenergetic AM algorithm reconstructs the cylinders with less systematic error than FBP, in terms of better image uniformity and less object-size dependence. Finally, the accuracy of a post-processing dual-energy CT (pDECT) method to non-invasively measure a material’s photon cross-section information is investigated. Data is acquired on a commercial scanner for materials of known composition. Since the pDECT method has been shown to be highly sensitive to reconstructed image errors, both FBP and polyenergetic AM reconstruction are employed. Linear attenuation coefficients are estimated with residual errors of around 1% for energies of 30 keV to 1 MeV with errors rising to 3%-6% at lower energies down to 10 keV. In the ideal phantom geometry used here, the main advantage of AM reconstruction is less random cross-section uncertainty due to the improved noise performance

    On Tensors, Sparsity, and Nonnegative Factorizations

    Tensors have found application in a variety of fields, ranging from chemometrics to signal processing and beyond. In this paper, we consider the problem of multilinear modeling of sparse count data. Our goal is to develop a descriptive tensor factorization model of such data, along with appropriate algorithms and theory. To do so, we propose that the random variation is best described via a Poisson distribution, which better describes the zeros observed in the data as compared to the typical assumption of a Gaussian distribution. Under a Poisson assumption, we fit a model to observed data using the negative log-likelihood score. We present a new algorithm for Poisson tensor factorization called CANDECOMP-PARAFAC Alternating Poisson Regression (CP-APR) that is based on a majorization-minimization approach. It can be shown that CP-APR is a generalization of the Lee-Seung multiplicative updates. We show how to prevent the algorithm from converging to non-KKT points and prove convergence of CP-APR under mild conditions. We also explain how to implement CP-APR for large-scale sparse tensors and present results on several data sets, both real and simulated
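    Since CP-APR is described as a generalization of the Lee-Seung multiplicative updates, the matrix special case gives a compact picture of the iteration. The sketch below shows only that special case (Poisson/KL nonnegative matrix factorization), written in Python for illustration; it is not the tensor CP-APR algorithm itself, and the parameter choices are arbitrary.

import numpy as np

def poisson_nmf(X, rank, n_iter=200, eps=1e-10, seed=0):
    # Lee-Seung multiplicative updates for the Poisson (KL-divergence) objective:
    # minimize sum_ij [ (WH)_ij - X_ij * log((WH)_ij) ] over nonnegative W, H.
    # The eps terms guard against division by zero in zero-heavy count data.
    m, n = X.shape
    rng = np.random.default_rng(seed)
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        W *= ((X / WH) @ H.T) / (H.sum(axis=1) + eps)            # update W with H fixed
        WH = W @ H + eps
        H *= (W.T @ (X / WH)) / (W.sum(axis=0)[:, None] + eps)   # update H with W fixed
    return W, H

# example: factor a small nonnegative count matrix
X = np.random.default_rng(1).poisson(3.0, size=(30, 20)).astype(float)
W, H = poisson_nmf(X, rank=4)

    CP-APR extends this kind of multiplicative iteration to sparse count tensors via a majorization-minimization argument, with the additional safeguards against non-KKT limit points described in the abstract.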

    The Sixth Copper Mountain Conference on Multigrid Methods, part 2

    The Sixth Copper Mountain Conference on Multigrid Methods was held on April 4-9, 1993, at Copper Mountain, Colorado. This book is a collection of many of the papers presented at the conference and so represents the conference proceedings. NASA Langley graciously provided printing of this document so that all of the papers could be presented in a single forum. Each paper was reviewed by a member of the conference organizing committee under the coordination of the editors. The multigrid discipline continues to expand and mature, as is evident from these proceedings. The vibrancy in this field is amply expressed in these important papers, and the collection clearly shows its rapid trend to further diversity and depth

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. In particular, it contains the scientific program both in overview form and in full detail, along with information on the social program, the venue, special meetings, and more

    Advanced regularization and discretization methods in diffuse optical tomography

    Diffuse optical tomography (DOT) is an emerging technique that utilizes light in the near infrared spectral region (650-900 nm) to measure the optical properties of physiological tissue. Compared with other imaging modalities, DOT is non-invasive and non-ionising. Because haemoglobin, water and lipid absorb relatively weakly in the near infrared spectral region, light is able to propagate several centimeters into the tissue without being completely absorbed. Measurements of the transmitted near infrared light are then combined with an image reconstruction algorithm to recover clinically relevant information inside the tissue. Image reconstruction in DOT is a critical problem: the accuracy and precision of diffuse optical imaging rely on the accuracy of image reconstruction, so it is of great importance to design efficient and effective reconstruction algorithms. Image reconstruction involves two problems. Modelling light propagation in tissue is called the forward problem; a large number of models can be used to predict light propagation within tissue, including stochastic, analytical and numerical models. Recovering the optical parameters inside the tissue from the transmitted measurements is called the inverse problem. In this thesis, a number of advanced regularization and discretization methods for diffuse optical tomography are proposed and evaluated on simulated and real experimental data in terms of reconstruction accuracy and efficiency. In DOT, the number of measurements is significantly smaller than the number of optical parameters to be recovered, so the inverse problem is ill-posed and prone to becoming trapped in local minima. Regularization methods are necessary to alleviate the ill-posedness and constrain the inverse problem towards a plausible solution. To alleviate the over-smoothing effect of the widely used Tikhonov regularization, an L1-norm regularized nonlinear reconstruction for spectrally constrained diffuse optical tomography is proposed. The proposed regularization reduces crosstalk between chromophore and scattering parameters and maintains image contrast by inducing sparsity. This work investigates multiple algorithms to find the most computationally efficient one for solving the proposed regularized problem. To recover non-sparse images, in which multiple activations or complex injuries occur in the brain, a more general total variation regularization is introduced. The proposed total variation regularization is shown to alleviate the over-smoothing effect of Tikhonov regularization and to localize anomalies by inducing sparsity in the gradient of the solution.
A new discretization, the graph-based numerical method, is introduced to model the unstructured geometries of DOT objects. It is compared with the widely used finite element method (FEM) and is found to be more stable and robust to changes in mesh resolution. Building on these advantages, the graph-based numerical method is further applied to model light propagation inside the tissue. Two measurement systems are considered: continuous wave (CW) and frequency domain (FD). New formulations of the forward model for CW/FD DOT are proposed, with the relevant differential operators defined under the nonlocal vector calculus. Extensive numerical experiments on simulated and realistic experimental data confirm that the proposed forward models accurately model light propagation in the medium and are quantitatively comparable with both analytical and FEM forward models. In addition, the graph-based approach is more computationally efficient and allows an identical implementation for geometries in any dimension
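    In generic terms (the symbols here are assumed for illustration, not the thesis's notation), the regularized inverse problems discussed above take the form

        x̂ = argmin_x  ‖ y − F(x) ‖² + λ R(x),

    where F is the forward model mapping the optical parameters x to boundary measurements y, and the penalty R is ‖x‖₂² for Tikhonov regularization, ‖x‖₁ for the sparsity-inducing L1 approach, or the total variation Σ_j |(∇x)_j|, which penalizes the gradient of the solution and therefore preserves edges while suppressing the over-smoothing associated with the quadratic penalty.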