Automatic Optimization of Alignment Parameters for Tomography Datasets
As tomographic imaging is being performed at increasingly smaller scales, the stability of the scanning hardware
is of great importance to the quality of the reconstructed image. Instabilities lead to perturbations in the
geometrical parameters used in the acquisition of the projections. In particular for electron tomography
and high-resolution X-ray tomography, small instabilities in the imaging setup can lead to severe artifacts.
We present a novel alignment algorithm for recovering the true geometrical parameters \emph{after} the object
has been scanned, based on measured data.
Our algorithm employs an optimization algorithm that combines alignment with reconstruction.
We demonstrate that problem-specific design choices made in the implementation are vital to the success of the method. The algorithm
is tested in a set of simulation experiments. Our experimental results indicate that the method is capable of
aligning tomography datasets with considerably higher accuracy than standard cross-correlation methods.
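For contrast, the cross-correlation baseline the abstract compares against can be sketched in a few lines. The function name and the toy 1D projection below are invented for illustration; this is the standard baseline, not the paper's joint alignment-reconstruction method.

```python
import numpy as np

def xcorr_shift(ref, proj):
    """Signed circular shift of `proj` relative to `ref`,
    estimated via FFT-based cross-correlation."""
    c = np.fft.ifft(np.fft.fft(proj) * np.conj(np.fft.fft(ref))).real
    k = int(np.argmax(c))
    n = len(ref)
    return k if k <= n // 2 else k - n   # wrap to a signed shift

ref = np.zeros(64)
ref[20:25] = 1.0                 # toy 1D projection
proj = np.roll(ref, 3)           # same projection, shifted by 3 detector bins
print(xcorr_shift(ref, proj))
```

Per-projection shift estimates like this are computed independently for each view, which is exactly the limitation that motivates coupling alignment with reconstruction.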
Adorym: A multi-platform generic x-ray image reconstruction framework based on automatic differentiation
We describe and demonstrate an optimization-based x-ray image reconstruction
framework called Adorym. Our framework provides a generic forward model,
allowing one code framework to be used for a wide range of imaging methods
ranging from near-field holography to fly-scan ptychographic tomography. By
using automatic differentiation for optimization, Adorym has the flexibility to
refine experimental parameters including probe positions, multiple hologram
alignment, and object tilts. It is written with strong support for parallel
processing, allowing large datasets to be processed on high-performance
computing systems. We demonstrate its use on several experimental datasets to
show improved image quality through parameter refinement.
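The parameter-refinement idea can be illustrated on a one-parameter toy problem. This is not Adorym's API: the forward model is a made-up Gaussian blob, and a finite difference stands in for the automatic-differentiation gradient that Adorym actually uses.

```python
import math

def forward(s, xs):
    # toy forward model: a Gaussian blob centred at lateral shift s
    return [math.exp(-((x - s) ** 2) / 8.0) for x in xs]

def loss(s, xs, data):
    # squared mismatch between modelled and "measured" intensities
    return sum((m - d) ** 2 for m, d in zip(forward(s, xs), data))

xs = [i * 0.1 for i in range(-50, 51)]
data = forward(2.0, xs)          # synthetic data with true shift 2.0

s, lr, eps = 0.0, 0.05, 1e-6
for _ in range(200):
    # central finite difference as a stand-in for the autodiff gradient
    g = (loss(s + eps, xs, data) - loss(s - eps, xs, data)) / (2 * eps)
    s -= lr * g                  # gradient-descent refinement of s
print(round(s, 3))
```

In Adorym the same loop runs over many experimental parameters at once (probe positions, tilts, hologram shifts), with gradients supplied automatically rather than by differencing.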
Automatic alignment for three-dimensional tomographic reconstruction
In tomographic reconstruction, the goal is to reconstruct an unknown object
from a collection of line integrals. Given a complete sampling of such line
integrals for various angles and directions, explicit inverse formulas exist to
reconstruct the object. Given noisy and incomplete measurements, the inverse
problem is typically solved through a regularized least-squares approach. A
challenge for both approaches is that in practice the exact directions and
offsets of the x-rays are only known approximately due to, e.g., calibration
errors. Such errors lead to artifacts in the reconstructed image. In the case
of sufficient sampling and geometrically simple misalignment, the measurements
can be corrected by exploiting so-called consistency conditions. In other
cases, such conditions may not apply and we have to solve an additional inverse
problem to retrieve the angles and shifts. In this paper we propose a general
algorithmic framework for retrieving these parameters in conjunction with an
algebraic reconstruction technique. The proposed approach is illustrated by
numerical examples for both simulated data and an electron tomography dataset.
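The alternating structure of such a framework can be sketched on a 1D toy problem. The paper couples parameter retrieval with an algebraic reconstruction technique; this sketch substitutes plain averaging for the reconstruction step and a brute-force search for the shift update, so it is illustrative only.

```python
import numpy as np

# each "projection" is the true signal circularly shifted by an unknown offset
true = np.zeros(32)
true[10:14] = 1.0
offsets = [0, 3, -2, 5]                 # unknown acquisition offsets
projs = [np.roll(true, o) for o in offsets]

est = [0] * len(projs)                  # initial offset guesses
for _ in range(5):
    # (1) "reconstruct" from the current offset estimates
    recon = np.mean([np.roll(p, -e) for p, e in zip(projs, est)], axis=0)
    # (2) re-estimate each offset against the current reconstruction
    for i, p in enumerate(projs):
        est[i] = min(range(-8, 9),
                     key=lambda s: float(np.sum((np.roll(p, -s) - recon) ** 2)))

# the offsets are recovered only up to a single global shift
diffs = [e - o for e, o in zip(est, offsets)]
print(len(set(diffs)) == 1)
```

The global-shift ambiguity visible here is intrinsic: consistent data cannot distinguish a shifted object from shifted detectors, which is why such methods fix a gauge or report parameters relative to a reference view.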
Direct 3D Tomographic Reconstruction and Phase-Retrieval of Far-Field Coherent Diffraction Patterns
We present an alternative numerical reconstruction algorithm for direct
tomographic reconstruction of a sample's refractive indices from the measured
intensities of its far-field coherent diffraction patterns. We formulate the
well-known phase-retrieval problem in ptychography in a tomographic framework
which allows for simultaneous reconstruction of the illumination function and
the sample's refractive indices in three dimensions. Our iterative reconstruction
algorithm is based on the Levenberg-Marquardt algorithm. We demonstrate the
performance of our proposed method with simulation studies.
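A generic Levenberg-Marquardt iteration illustrates the damped Gauss-Newton update such a method builds on. The exponential fit below is a stand-in, not the paper's ptychographic forward model.

```python
import numpy as np

def residuals(p, t, y):
    a, b = p
    return a * np.exp(b * t) - y

def jacobian(p, t):
    a, b = p
    # columns: d/da and d/db of the model a*exp(b*t)
    return np.stack([np.exp(b * t), a * t * np.exp(b * t)], axis=1)

t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)              # noise-free synthetic data

p, lam = np.array([1.0, 0.0]), 1e-2
for _ in range(50):
    r, J = residuals(p, t, y), jacobian(p, t)
    A = J.T @ J + lam * np.eye(2)       # damped normal equations
    step = np.linalg.solve(A, J.T @ r)
    p_new = p - step
    if np.sum(residuals(p_new, t, y) ** 2) < np.sum(r ** 2):
        p, lam = p_new, lam * 0.5       # accept step, relax damping
    else:
        lam *= 10.0                     # reject step, increase damping
print(np.round(p, 3))
```

The damping parameter interpolates between gradient descent (large lam, robust far from the solution) and Gauss-Newton (small lam, fast near it), which is the property that makes LM attractive for nonconvex phase-retrieval objectives.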
EPiK: a Workflow for Electron Tomography in Kepler
Scientific workflows integrate data and computing interfaces as configurable, semi-automatic graphs to solve a scientific problem. Kepler is such a software system for designing, executing, reusing, evolving, archiving and sharing scientific workflows. Electron tomography (ET) enables high-resolution views of complex cellular structures, such as cytoskeletons, organelles, viruses and chromosomes. Imaging investigations produce large datasets. For instance, in electron tomography, the size of a 16-fold image tilt series is about 65 gigabytes, with each projection image comprising 4096 by 4096 pixels. When serial sections or montage techniques are used for large-field ET, the datasets are even larger. For higher-resolution images with multiple tilt series, the data size may be in the terabyte range. The demands of mass data processing and complex algorithms require the integration of diverse codes into flexible software structures. This paper describes a workflow for Electron Tomography Programs in Kepler (EPiK). The EPiK workflow embeds the tracking process of IMOD and implements the main reconstruction algorithms, including filtered backprojection (FBP) from TxBR and iterative reconstruction methods. We have tested the three-dimensional (3D) reconstruction process using EPiK on ET data. EPiK can be a potential toolkit for biology researchers, with the advantages of logical viewing, easy handling, convenient sharing and future extensibility.
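The FBP step that EPiK wraps can be sketched for the simplest parallel-beam case. The grid size and the point-object sinogram below are invented for illustration; production codes like TxBR handle far more general geometries.

```python
import numpy as np

n, n_ang = 64, 90
angles = np.linspace(0, np.pi, n_ang, endpoint=False)

# sinogram of a point at the rotation centre: every projection
# is a spike in the central detector bin
sino = np.zeros((n_ang, n))
sino[:, n // 2] = 1.0

# ramp filter applied to each projection in the Fourier domain
filt = np.abs(np.fft.fftfreq(n))
sino_f = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * filt, axis=1))

# backproject: smear each filtered projection along its view angle
xs = np.arange(n) - n // 2
X, Y = np.meshgrid(xs, xs)
recon = np.zeros((n, n))
for a, p in zip(angles, sino_f):
    # nearest detector bin hit by each pixel for this view
    t = np.round(X * np.cos(a) + Y * np.sin(a)).astype(int) + n // 2
    inside = (t >= 0) & (t < n)
    recon[inside] += p[t[inside]]

print(np.unravel_index(np.argmax(recon), recon.shape))
```

The reconstruction peaks at the grid centre, where the point object sat; without the ramp filter the same loop would produce the characteristic 1/r blur of unfiltered backprojection.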
A convolutional autoencoder approach for mining features in cellular electron cryo-tomograms and weakly supervised coarse segmentation
Cellular electron cryo-tomography enables the 3D visualization of cellular
organization in the near-native state and at submolecular resolution. However,
the contents of cellular tomograms are often complex, making it difficult to
automatically isolate different in situ cellular components. In this paper, we
propose a convolutional autoencoder-based unsupervised approach to provide a
coarse grouping of 3D small subvolumes extracted from tomograms. We demonstrate
that the autoencoder can be used for efficient and coarse characterization of
features of macromolecular complexes and surfaces, such as membranes. In
addition, the autoencoder can be used to detect non-cellular features related
to sample preparation and data collection, such as carbon edges from the grid
and tomogram boundaries. The autoencoder is also able to detect patterns that
may indicate spatial interactions between cellular components. Furthermore, we
demonstrate that our autoencoder can be used for weakly supervised semantic
segmentation of cellular components, requiring a very small amount of manual
annotation. Comment: Accepted by the Journal of Structural Biology.
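The unsupervised reconstruction-learning idea can be illustrated with a minimal dense (not convolutional) autoencoder trained by hand-written backpropagation. All sizes are invented and far smaller than real tomogram subvolumes, and the random vectors merely stand in for extracted 3D patches.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))          # stand-in "subvolume" vectors

d, h, lr = 16, 4, 0.01
W1 = rng.normal(scale=0.1, size=(d, h)) # encoder weights: 16 -> 4 code
W2 = rng.normal(scale=0.1, size=(h, d)) # decoder weights: 4 -> 16

losses = []
for _ in range(300):
    Z = np.tanh(X @ W1)                 # encode to the bottleneck
    Xh = Z @ W2                         # decode back to input space
    E = Xh - X
    losses.append(float(np.mean(E ** 2)))
    # backpropagate the mean-squared reconstruction error
    gW2 = Z.T @ E
    gZ = (E @ W2.T) * (1 - Z ** 2)      # through the tanh nonlinearity
    gW1 = X.T @ gZ
    W1 -= lr * gW1 / len(X)
    W2 -= lr * gW2 / len(X)

print(losses[0] > losses[-1])
```

After training, the bottleneck codes (here `Z`) are what a clustering step would group to obtain the coarse characterization of subvolume features described in the abstract.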
Automatic PET-CT Image Registration Method Based on Mutual Information and Genetic Algorithms
Hybrid PET/CT scanners can simultaneously visualize coronary artery disease as revealed by computed tomography (CT) and myocardial perfusion as measured by positron emission tomography (PET). Manual registration is usually required in clinical practice to compensate for spatial mismatch between the datasets. In this paper, we present a registration algorithm that is able to automatically align PET/CT cardiac images. The algorithm is based on mutual information (MI) as the registration metric and on a genetic algorithm as the optimization method. A multiresolution approach was used to optimize the processing time. The algorithm was tested on computerized models of volumetric PET/CT cardiac data and on real PET/CT datasets. The proposed automatic registration algorithm smoothes the pattern of the MI and allows it to reach the global maximum of the similarity function. The implemented method also allows the definition of the correct spatial transformation that matches both synthetic and real PET and CT volumetric datasets.
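A joint-histogram estimate of mutual information, the kind of similarity metric the algorithm maximises, can be sketched as follows. This is a generic estimator, not the paper's implementation, and the random test images are invented.

```python
import numpy as np

def mutual_info(a, b, bins=8):
    """MI between two images from their joint intensity histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = h / h.sum()                       # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
    nz = pxy > 0
    return float(np.sum(pxy[nz] *
                        np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.roll(img, 5, axis=1)           # misregistered copy

# an image is most informative about itself when correctly aligned
print(mutual_info(img, img) > mutual_info(img, shifted))
```

An optimizer such as the paper's genetic algorithm would search over candidate spatial transformations, evaluating this metric at each one and keeping the transformation that maximises it.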