A multi-level preconditioned Krylov method for the efficient solution of algebraic tomographic reconstruction problems
Classical iterative methods for tomographic reconstruction include the class
of Algebraic Reconstruction Techniques (ART). However, convergence of these
stationary linear iterative methods is notably slow. In this paper we propose the
use of Krylov solvers for tomographic linear inversion problems. These advanced
iterative methods feature fast convergence at the expense of a higher
computational cost per iteration, causing them to be generally uncompetitive
without the inclusion of a suitable preconditioner. Combining elements from
standard multigrid (MG) solvers and the theory of wavelets, a novel
wavelet-based multi-level (WMG) preconditioner is introduced, which is shown to
significantly speed up Krylov convergence. The performance of the
WMG-preconditioned Krylov method is analyzed through a spectral analysis, and
the approach is compared to existing methods like the classical Simultaneous
Iterative Reconstruction Technique (SIRT) and unpreconditioned Krylov methods
on a 2D tomographic benchmark problem. Numerical experiments are promising,
showing the method to be competitive with the classical Algebraic
Reconstruction Techniques in terms of convergence speed and overall performance
(CPU time) as well as precision of the reconstruction.
Comment: Journal of Computational and Applied Mathematics (2014), 26 pages, 13 figures, 3 tables
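As a point of reference for the classical methods this abstract compares against, the basic SIRT update x ← x + C Aᵀ R (b − A x), with R and C the inverse row and column sums of the system matrix, can be sketched on a toy system (the 3×3 matrix, values, and iteration count here are illustrative, not taken from the paper):

```python
import numpy as np

# Toy projection system: A maps a 3-pixel image to 3 ray sums.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true                       # simulated noise-free measurements

R = 1.0 / A.sum(axis=1)              # inverse row sums
C = 1.0 / A.sum(axis=0)              # inverse column sums

# SIRT iteration: x <- x + C * A^T * R * (b - A x)
x = np.zeros_like(x_true)
for _ in range(200):
    x = x + C * (A.T @ (R * (b - A @ x)))
```

For consistent data this converges to the weighted least-squares solution, but only at the slow stationary rate that preconditioned Krylov methods aim to improve on.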
Geometric reconstruction methods for electron tomography
Electron tomography is becoming an increasingly important tool in materials
science for studying the three-dimensional morphologies and chemical
compositions of nanostructures. The image quality obtained by many current
algorithms is seriously affected by the problems of missing wedge artefacts and
nonlinear projection intensities due to diffraction effects. The former refers
to the fact that data cannot be acquired over the full tilt range;
the latter implies that for some orientations, crystalline structures can show
strong contrast changes. To overcome these problems we introduce and discuss
several algorithms from the mathematical fields of geometric and discrete
tomography. The algorithms incorporate geometric prior knowledge (mainly
convexity and homogeneity), which also in principle considerably reduces the
number of tilt angles required. Results are discussed for the reconstruction of
an InAs nanowire.
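To illustrate why such priors matter, a minimal discrete-tomography example (all sizes and sums invented for illustration) shows that projections along two axes alone leave a binary image ambiguous; this is exactly the kind of ambiguity that convexity and homogeneity assumptions help resolve:

```python
from itertools import product

# Row and column sums of an unknown 2x3 binary (homogeneous) image,
# i.e. its projections along two axes only.
row_sums = [2, 1]
col_sums = [1, 1, 1]

# Brute-force search over all binary images consistent with both projections.
solutions = []
for bits in product([0, 1], repeat=6):
    img = [bits[0:3], bits[3:6]]
    if ([sum(r) for r in img] == row_sums
            and [sum(c) for c in zip(*img)] == col_sums):
        solutions.append(img)
```

Several distinct images fit the same data; geometric prior knowledge shrinks this solution set, which is why it can also reduce the number of tilt angles required.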
GENFIRE: A generalized Fourier iterative reconstruction algorithm for high-resolution 3D imaging
Tomography has made a radical impact on diverse fields ranging from the study
of 3D atomic arrangements in matter to the study of human health in medicine.
Despite its very diverse applications, the core of tomography remains the same,
that is, a mathematical method must be implemented to reconstruct the 3D
structure of an object from a number of 2D projections. In many scientific
applications, however, the number of projections that can be measured is
limited due to geometric constraints, tolerable radiation dose and/or
acquisition speed. Thus it becomes an important problem to obtain the
best-possible reconstruction from a limited number of projections. Here, we
present the mathematical implementation of a tomographic algorithm, termed
GENeralized Fourier Iterative REconstruction (GENFIRE). By iterating between
real and reciprocal space, GENFIRE searches for a global solution that is
concurrently consistent with the measured data and general physical
constraints. The algorithm requires minimal human intervention and also
incorporates angular refinement to reduce the tilt angle error. We demonstrate
that GENFIRE can produce superior results relative to several other popular
tomographic reconstruction techniques through numerical simulations and
experimentally by reconstructing the 3D structure of a porous material and a
frozen-hydrated marine cyanobacterium. Equipped with a graphical user
interface, GENFIRE is freely available from our website and is expected to find
broad applications across different disciplines.
Comment: 18 pages, 6 figures
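The iterate-between-spaces idea at the heart of GENFIRE can be sketched in 1D as an alternating-projection loop: enforce the measured Fourier samples in reciprocal space, then support and positivity in real space. The test signal, measured-frequency mask, and iteration count below are all invented for illustration, not GENFIRE's actual implementation:

```python
import numpy as np

n = 64
x_true = np.zeros(n)
x_true[20:30] = 1.0 + 0.5 * np.sin(np.arange(10))   # nonnegative test object

F_true = np.fft.fft(x_true)
# Only some Fourier samples are "measured" (a 1D missing-wedge analogue).
measured = np.zeros(n, dtype=bool)
measured[:16] = True        # low positive frequencies
measured[-15:] = True       # their conjugate counterparts

support = np.zeros(n, dtype=bool)
support[20:30] = True       # known object support

x = np.zeros(n)
for _ in range(300):
    F = np.fft.fft(x)
    F[measured] = F_true[measured]      # reciprocal-space data constraint
    x = np.real(np.fft.ifft(F))
    x[~support] = 0.0                   # real-space support constraint
    np.maximum(x, 0.0, out=x)           # positivity constraint
```

Each step is a projection onto a convex set containing the true object, so the iterate can only move toward a solution consistent with both the data and the physical constraints.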
Representation of probabilistic scientific knowledge
This article is available through the Brunel Open Access Publishing Fund. Copyright © 2013 Soldatova et al; licensee BioMed Central Ltd.

The theory of probability is widely used in biomedical research for data analysis and modelling. In previous work the probabilities of the research hypotheses have been recorded as experimental metadata. The ontology HELO is designed to support probabilistic reasoning, and provides semantic descriptors for reporting on research that involves operations with probabilities. HELO explicitly links research statements such as hypotheses, models, laws, conclusions, etc. to the associated probabilities of these statements being true. HELO enables the explicit semantic representation and accurate recording of probabilities in hypotheses, as well as the inference methods used to generate and update those hypotheses. We demonstrate the utility of HELO on three worked examples: changes in the probability of the hypothesis that sirtuins regulate human life span; changes in the probability of hypotheses about gene functions in the S. cerevisiae aromatic amino acid pathway; and the use of active learning in drug design (quantitative structure activity relation learning), where a strategy for the selection of compounds with the highest probability of improving on the best known compound was used. HELO is open source and available at https://github.com/larisa-soldatova/HELO.

This work was partially supported by grant BB/F008228/1 from the UK Biotechnology & Biological Sciences Research Council, the European Commission FP7 Collaborative Programme UNICELLSYS, KU Leuven GOA/08/008, and ERC Starting Grant 240186.
A distributed SIRT implementation for the ASTRA Toolbox
The ASTRA Toolbox is a software toolbox that enables rapid development of GPU accelerated
tomography algorithms. It contains GPU implementations of forward and backprojection operations
for common scanning geometries, as well as a set of algorithms for iterative reconstruction. These
algorithms are currently limited to using a single GPU.
A drawback of iterative reconstruction algorithms is that they are slow
compared to classical backprojection algorithms. As a result, reconstruction
times on a single GPU can become prohibitively long when working with large
data volumes.
In this paper, we present an extension of the ASTRA Toolbox with implementations of forward
projection, backprojection and the SIRT algorithm that can be distributed over
multiple GPUs and multiple workstations to make processing larger data sets
with ASTRA feasible.
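The row-block structure of SIRT that makes such a distribution possible can be sketched in plain NumPy. This is an illustration of the data decomposition only, not the ASTRA API; the two row blocks stand in for projection subsets held on separate GPUs, and all matrices are invented:

```python
import numpy as np

# Toy system: 4 ray sums of a 3-pixel image, split into two row blocks.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
b = A @ x_true

blocks = [(A[:2], b[:2]), (A[2:], b[2:])]   # "one block per GPU"
C = 1.0 / A.sum(axis=0)                     # global inverse column sums

x = np.zeros(3)
for _ in range(300):
    update = np.zeros(3)
    for Ai, bi in blocks:                   # in practice: run on each device
        Ri = 1.0 / Ai.sum(axis=1)           # inverse row sums of this block
        update += Ai.T @ (Ri * (bi - Ai @ x))   # partial backprojection
    x = x + C * update                      # reduction + global SIRT update
```

Because the row-sum scaling is block-diagonal, summing the per-block backprojections reproduces the single-device SIRT update exactly; only the final reduction requires inter-device communication.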
A distributed ASTRA toolbox
While iterative reconstruction algorithms for tomography have several advantages compared to standard backprojection methods, the adoption of such algorithms in large-scale imaging facilities is still limited,
…