Geometry Processing of Conventionally Produced Mouse Brain Slice Images
Brain mapping research in most neuroanatomical laboratories relies on
conventional processing techniques, which often introduce histological
artifacts such as tissue tears and tissue loss. In this paper we present
techniques and algorithms for automatic registration and 3D reconstruction of
conventionally produced mouse brain slices in a standardized atlas space. This
is achieved first by constructing a virtual 3D mouse brain model from annotated
slices of Allen Reference Atlas (ARA). Virtual re-slicing of the reconstructed
model generates ARA-based slice images corresponding to the microscopic images
of histological brain sections. These image pairs are aligned using a geometric
approach through contour images. Histological artifacts in the microscopic
images are detected and removed using Constrained Delaunay Triangulation before
performing global alignment. Finally, non-linear registration is performed by
solving Laplace's equation with Dirichlet boundary conditions. Our methods
provide significant improvements over previously reported registration
techniques for the tested slices in 3D space, especially on slices with
significant histological artifacts. Further, as an application we count the
number of neurons in various anatomical regions using a dataset of 51
microscopic slices from a single mouse brain. This work represents a
significant contribution to this subfield of neuroscience as it provides tools
to neuroanatomists for analyzing and processing histological data.
Comment: 14 pages, 11 figures
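The final registration step above can be sketched in isolation: solving Laplace's equation with Dirichlet boundary conditions on a pixel grid, here for one scalar component of a displacement field. The function name and the Jacobi solver are our illustration, not the authors' code.

```python
import numpy as np

def solve_laplace_dirichlet(values, fixed, n_iter=5000):
    """Solve Laplace's equation by Jacobi iteration.
    `values` holds the Dirichlet data where `fixed` is True;
    interior pixels relax toward the mean of their 4 neighbours."""
    u = values.astype(float).copy()
    interior = ~fixed
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[interior] = avg[interior]
    return u
```

With linear boundary data the relaxed interior converges to the exact linear harmonic solution, a quick sanity check for the solver.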
Monte Carlo-based Noise Compensation in Coil Intensity Corrected Endorectal MRI
Background: Prostate cancer is one of the most common forms of cancer found
in males, making early diagnosis important. Magnetic resonance imaging (MRI) has
been useful in visualizing and localizing tumor candidates and with the use of
endorectal coils (ERC), the signal-to-noise ratio (SNR) can be improved. The
coils introduce intensity inhomogeneities and the surface coil intensity
correction built into MRI scanners is used to reduce these inhomogeneities.
However, the correction typically performed at the MRI scanner level leads to
noise amplification and noise level variations. Methods: In this study, we
introduce a new Monte Carlo-based noise compensation approach for coil
intensity corrected endorectal MRI which allows for effective noise
compensation and preservation of details within the prostate. The approach
accounts for the ERC SNR profile via a spatially-adaptive noise model for
correcting non-stationary noise variations. Such a method is useful
particularly for improving the image quality of coil intensity corrected
endorectal MRI data performed at the MRI scanner level and when the original
raw data is not available. Results: SNR and contrast-to-noise ratio (CNR)
analysis in patient experiments demonstrates an average improvement of 11.7 dB
and 11.2 dB, respectively, over uncorrected endorectal MRI, and strong
performance compared to existing approaches. Conclusions: A new noise
compensation method was developed for the purpose of improving the quality of
coil intensity corrected endorectal MRI data performed at the MRI scanner
level. We illustrate that promising noise compensation performance can be
achieved for the proposed approach, which is particularly important for
processing coil intensity corrected endorectal MRI data performed at the MRI
scanner level and when the original raw data is not available.
Comment: 23 pages
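The core idea — a spatially adaptive, importance-weighted Monte Carlo estimate under a non-stationary noise model — can be caricatured as follows. This is only a toy sketch under our own assumptions (a given per-pixel sigma map standing in for the ERC SNR profile), not the authors' estimator.

```python
import numpy as np

def mc_compensate(img, sigma_map, n_samples=64, radius=3, seed=0):
    """Toy Monte Carlo noise compensation: for each pixel, randomly
    draw neighbourhood candidates and form an importance-weighted
    mean, with weights from a spatially varying noise model."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            ii = rng.integers(max(0, i - radius), min(h, i + radius + 1), n_samples)
            jj = rng.integers(max(0, j - radius), min(w, j + radius + 1), n_samples)
            cand = img[ii, jj]
            s = sigma_map[i, j]  # local noise level (hypothetical SNR profile)
            wgt = np.exp(-0.5 * ((cand - img[i, j]) / s) ** 2)
            out[i, j] = (wgt * cand).sum() / (wgt.sum() + 1e-12)
    return out
```

Because the weighting adapts to the local sigma, regions with amplified noise are smoothed more aggressively than low-noise regions near the coil.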
Electronic depth profiles with atomic layer resolution from resonant soft x-ray reflectivity
The analysis of x-ray reflectivity data from artificial heterostructures
usually relies on the homogeneity of optical properties of the constituent
materials. However, when the x-ray energy is tuned to an absorption edge, this
homogeneity no longer exists. Within the same material, spatial regions
containing elements at resonance will have optical properties very different
from regions without resonating sites. In this situation, models assuming
homogeneous optical properties throughout the material can fail to describe the
reflectivity adequately. As we show here, resonant soft x-ray reflectivity is
sensitive to these variations, even though the wavelength is typically large
compared to the atomic distances over which the optical properties vary. We
have therefore developed a scheme for analyzing resonant soft x-ray
reflectivity data, which takes the atomic structure of a material into account
by "slicing" it into atomic planes with characteristic optical properties.
Using LaSrMnO4 as an example, we discuss both the theoretical and experimental
implications of this approach. Our analysis not only allows us to determine
important structural information such as interface terminations and the
stacking of atomic layers, but also enables us to extract depth-resolved
spectroscopic information with atomic resolution, thus enhancing the
capability of the technique to study emergent phenomena at surfaces and
interfaces.
Comment: Completely overhauled with respect to the previous version due to
peer review
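The slicing scheme is compatible with the standard Parratt recursion, in which each atomic plane becomes a homogeneous slice with its own complex refractive index. The sketch below is the generic recursion, not the authors' analysis code.

```python
import numpy as np

def parratt_reflectivity(n_slices, d_slices, n_substrate, theta, wavelength):
    """Parratt recursion for a stack of homogeneous slices on a
    semi-infinite substrate. n_slices/d_slices: complex indices and
    thicknesses, top to bottom; theta is the angle of incidence
    measured from the surface (radians)."""
    k0 = 2 * np.pi / wavelength
    n_all = np.concatenate(([1.0], n_slices, [n_substrate])).astype(complex)
    kx = k0 * np.cos(theta)                 # in-plane momentum, conserved
    kz = np.sqrt((n_all * k0) ** 2 - kx ** 2 + 0j)
    d = np.concatenate(([0.0], d_slices))   # vacuum carries no phase
    R = 0.0 + 0j
    for j in range(len(n_all) - 2, -1, -1):  # recurse up from the substrate
        r = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1])
        phase = 1.0 if j + 1 == len(n_all) - 1 else np.exp(2j * kz[j + 1] * d[j + 1])
        R = (r + R * phase) / (1 + r * R * phase)
    return abs(R) ** 2
```

With no slices the recursion reduces to the Fresnel reflectance of a single interface, e.g. 4% at normal incidence for n = 1.5.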
A new approach for quantitative evaluation of reconstruction algorithms in SPECT
Background: In nuclear medicine, phantoms are mainly used to evaluate the overall performance of imaging systems; practically no phantom is designed exclusively for evaluating software performance. In this study the Hoffman brain phantom was used for quantitative evaluation of reconstruction techniques. The phantom was modified to acquire tomographic and planar images of the same structure. The planar image may be used as the reference image to evaluate the quality of reconstructed slices, using the companion software developed in MATLAB. Materials and Methods: The designed phantom was composed of 4 independent 2D slices that could be juxtaposed to form the 3D phantom. Each slice was composed of objects of different size and shape (for example: circle, triangle, and rectangle). Each 2D slice was imaged at distances ranging from 0 to 15 cm from the collimator surface. The phantom in its 3D configuration was imaged by acquiring 128 views of 128×128 matrix size. Reconstruction was performed under different filtering conditions and the reconstructed images were compared to the corresponding planar images. The modulation transfer function, scatter fraction and attenuation map were calculated for each reconstructed image. Results: Since all acquisition parameters were identical for the 2D and 3D imaging, it was assumed that any difference in image quality was due exclusively to the reconstruction conditions. The planar images were taken as the best images obtainable with the system. Comparison of the reconstructed slices with the corresponding planar images yielded the optimum reconstruction conditions. The results clearly showed that the Wiener filter yields superior image quality among all tested filters. The extent of the improvement was quantified in terms of the universal image quality index.
Conclusion: The phantom and the accompanying software were evaluated and found to be quite useful in determining the optimum filtering condition and mathematical evaluation of the scatter and attenuation in tomographic images.
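The universal image quality index used to quantify the improvement can be sketched in its global, single-window form (Wang and Bovik); this is a generic sketch, not the companion MATLAB software.

```python
import numpy as np

def universal_image_quality_index(x, y):
    """Wang-Bovik universal image quality index (global form):
    combines loss of correlation, luminance distortion and contrast
    distortion; equals 1 only for identical, non-constant images."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

In practice the index is often computed over sliding windows and averaged; the global form above is the simplest variant.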
Particle-by-Particle Reconstruction of Ultrafiltration Cakes in 3D from Binarized TEM Images
Transmission electron microscopy (TEM) imaging is one of the few techniques available for direct observation of the microstructure of ultrafiltration cakes. TEM images yield local microstructural information in the form of two-dimensional grayscale images of slices a few particle diameters in thickness. This work presents an innovative particle-by-particle reconstruction scheme for simulating ultrafiltration cake microstructure in three dimensions from TEM images. The scheme uses binarized TEM images, thereby permitting use of lesser-quality images. It is able to account for short- and long-range order within ultrafiltration cake structure by matching the morphology of simulated and measured microstructures at a number of resolutions and scales identifiable within the observed microstructure. In the end, simulated microstructures are intended for improving our understanding of the relationships between cake morphology, ultrafiltration performance, and operating conditions.
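One common descriptor for comparing the morphology of simulated and measured binarized microstructures across separations is the two-point correlation function. The sketch below shows only this descriptor, not the reconstruction scheme itself.

```python
import numpy as np

def two_point_correlation(binary, max_r):
    """S2(r) along the x axis: the probability that two pixels
    separated by r both lie in the solid (particle) phase of a
    binarized image. Uses periodic shifts for simplicity."""
    f = np.asarray(binary, dtype=float)
    return np.array([(f * np.roll(f, r, axis=1)).mean() for r in range(max_r)])
```

S2(0) equals the solid volume fraction, and the decay of S2 with r encodes the characteristic length scales that a reconstruction would need to match.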
Electron tomography at 2.4 {\AA} resolution
Transmission electron microscopy (TEM) is a powerful imaging tool that has
found broad application in materials science, nanoscience and biology(1-3).
With the introduction of aberration-corrected electron lenses, both the spatial
resolution and image quality in TEM have been significantly improved(4,5) and
resolution below 0.5 {\AA} has been demonstrated(6). To reveal the 3D structure
of thin samples, electron tomography is the method of choice(7-11), with
resolutions of ~1 nm^3 currently achievable(10,11). Recently, discrete
tomography has been used to generate a 3D atomic reconstruction of a silver
nanoparticle 2-3 nm in diameter(12), but this statistical method assumes prior
knowledge of the particle's lattice structure and requires that the atoms fit
rigidly on that lattice. Here we report the experimental demonstration of a
general electron tomography method that achieves atomic scale resolution
without initial assumptions about the sample structure. By combining a novel
projection alignment and tomographic reconstruction method with scanning
transmission electron microscopy, we have determined the 3D structure of a ~10
nm gold nanoparticle at 2.4 {\AA} resolution. While we cannot definitively
locate all of the atoms inside the nanoparticle, individual atoms are observed
in some regions of the particle and several grains are identified in three
dimensions. The 3D surface morphology and internal lattice structure revealed
are consistent with a distorted icosahedral multiply-twinned particle. We
anticipate that this general method can be applied not only to determine the 3D
structure of nanomaterials at atomic scale resolution(13-15), but also to
improve the spatial resolution and image quality in other tomography
fields(7,9,16-20).
Comment: 27 pages, 17 figures
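For contrast with the paper's method, the textbook baseline — generic parallel-beam filtered back-projection from aligned projections — can be sketched as below. This is not the authors' novel alignment/reconstruction scheme, only the standard approach it improves upon.

```python
import numpy as np
from scipy.ndimage import rotate

def project(img, angles_deg):
    """Parallel-beam projections: rotate, then sum along the beam axis."""
    return np.array([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def fbp(sinogram, angles_deg):
    """Minimal filtered back-projection with a ramp filter."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))  # ramp filter in the frequency domain
    filt = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    recon = np.zeros((n, n))
    for a, p in zip(angles_deg, filt):
        # smear each filtered projection back across the image, then rotate
        recon += rotate(np.tile(p, (n, 1)), -a, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles_deg))
```

Reconstructing a simple disk phantom from a handful of angles recovers a bright interior against a near-zero background, which is the sanity check used below.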
EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging without External Trackers
Ultrasound (US) is the most widely used fetal imaging technique. However, US
images have a limited capture range and suffer from view-dependent artefacts
such as acoustic shadows. Compounding of overlapping 3D US acquisitions into a
high-resolution volume can extend the field of view and remove image artefacts,
which is useful for retrospective analysis including population based studies.
However, such volume reconstructions require information about relative
transformations between probe positions from which the individual volumes were
acquired. In prenatal US scans, the fetus can move independently from the
mother, making external trackers such as electromagnetic or optical tracking
unable to track the motion between probe position and the moving fetus. We
provide a novel methodology for image-based tracking and volume reconstruction
by combining recent advances in deep learning and simultaneous localisation and
mapping (SLAM). Tracking semantics are established through the use of a
Residual 3D U-Net and the output is fed to the SLAM algorithm. As a proof of
concept, experiments are conducted on US volumes taken from a whole body fetal
phantom, and from the heads of real fetuses. For the fetal head segmentation,
we also introduce a novel weak annotation approach to minimise the required
manual effort for ground truth annotation. We evaluate our method
qualitatively, and quantitatively with respect to tissue discrimination
accuracy and tracking robustness.
Comment: MICCAI Workshop on Perinatal, Preterm and Paediatric Image Analysis
(PIPPI), 201
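Once a SLAM backend supplies inter-frame rigid motions, volume compounding reduces to chaining them into global probe-to-reference poses. A generic sketch with our own names and conventions, not EchoFusion's code:

```python
import numpy as np

def compose_trajectory(relative_poses):
    """Chain relative 4x4 rigid transforms (e.g. inter-frame motion
    estimates from a SLAM backend) into global poses, starting from
    the identity as the reference frame."""
    poses = [np.eye(4)]
    for rel in relative_poses:
        poses.append(poses[-1] @ rel)
    return poses

def transform_point(pose, p):
    """Map a 3D point into the reference frame with a 4x4 pose."""
    return (pose @ np.append(p, 1.0))[:3]
```

Each acquired volume would then be resampled through its global pose into the common high-resolution grid.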
Binary morphological shape-based interpolation applied to 3-D tooth reconstruction
In this paper we propose an interpolation algorithm using a mathematical morphology morphing approach. The aim of this algorithm is to reconstruct the n-dimensional object from a group of (n-1)-dimensional sets representing sections of that object. The morphing transformation modifies pairs of consecutive sets such that they approach each other in shape and size. The interpolated set is achieved when the two consecutive sets are made idempotent by the morphing transformation. We prove the convergence of the morphological morphing. The entire object is modeled by successively interpolating a certain number of intermediary sets between each two consecutive given sets. We apply the interpolation algorithm to 3-D tooth reconstruction.
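A common, simpler stand-in for morphological morphing is shape-based interpolation via signed distance fields; the sketch below illustrates that variant, not the paper's idempotence-driven morphing.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt as edt

def signed_distance(mask):
    """Signed Euclidean distance: positive inside the shape, negative outside."""
    return edt(mask) - edt(~mask)

def interpolate_sections(a, b, t=0.5):
    """Shape-based interpolation between two binary sections:
    blend their signed distance fields and threshold at zero."""
    d = (1.0 - t) * signed_distance(a) + t * signed_distance(b)
    return d > 0
```

Sliding t from 0 to 1 produces a stack of intermediary sets that smoothly deform one section into the next, analogous to the intermediary sets described above.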
Multiple Projection Optical Diffusion Tomography with Plane Wave Illumination
We describe a new data collection scheme for optical diffusion tomography in
which plane wave illumination is combined with multiple projections in the slab
imaging geometry. Multiple projection measurements are performed by rotating
the slab around the sample. The advantage of the proposed method is that the
measured data can be much more easily fitted into the dynamic range of most
commonly used detectors. At the same time, multiple projections improve image
quality by mutually interchanging the depth and transverse directions, and the
scanned (detection) and integrated (illumination) surfaces. Inversion methods
are derived for image reconstructions with extremely large data sets. Numerical
simulations are performed for fixed and rotated slabs.
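The workhorse behind many linearised diffusion-tomography reconstructions is regularised least squares. The paper derives specialised inversion formulas tailored to extremely large data sets; the generic Tikhonov baseline they generalise can be sketched as:

```python
import numpy as np

def tikhonov_solve(A, y, lam=1e-3):
    """Tikhonov-regularised least squares,
    x = (A^T A + lam I)^(-1) A^T y, for a linearised forward model
    A mapping absorption perturbations to boundary measurements."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

For well-conditioned systems and small lam this recovers the unregularised solution; the regularisation matters when the projection data are noisy or the system is underdetermined.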