114 research outputs found

    High Performance Reconstruction Framework for Straight Ray Tomography:from Micro to Nano Resolution Imaging

    We develop a high-performance scheme to reconstruct straight-ray tomographic scans. We preserve the quality of the state-of-the-art schemes typically found in traditional computed tomography but reduce the computational cost substantially. Our approach is based on 1) a rigorous discretization of the forward model using a generalized sampling scheme; 2) a variational formulation of the reconstruction problem; and 3) iterative reconstruction algorithms that use the alternating-direction method of multipliers. To improve the quality of the reconstruction, we take advantage of total-variation regularization and its higher-order variants. In addition, prior information on the support and the positivity of the refractive index is taken into account, which yields significant improvements. The two challenging applications to which we apply the methods of our framework are grating-based x-ray imaging (GI) and single-particle analysis (SPA). In the context of micro-resolution GI, three complementary characteristics are measured: the conventional absorption contrast, the differential phase contrast, and the small-angle scattering contrast. While these three measurements provide powerful insights into biological samples, until now they required a large dose deposition that could potentially harm the specimens (e.g., in small-rodent scanners). As it turns out, we are able to preserve the image quality of filtered back-projection-type methods despite the fewer acquisition angles and the lower signal-to-noise ratio implied by a reduction in the total dose of in vivo grating interferometry. To achieve this, we first apply our reconstruction framework to differential phase-contrast imaging (DPCI). We then add Jacobian-type regularization to simultaneously reconstruct phase and absorption. The experimental results confirm the power of our method. This is a crucial step toward the deployment of DPCI in medicine and biology.
Our algorithms have been implemented in the TOMCAT laboratory of the Paul Scherrer Institute. In the context of near-atomic-resolution SPA, we need to cope with hundreds or thousands of noisy projections of macromolecules onto different micrographs. Moreover, each projection has an unknown orientation and is blurred by a space-dependent point-spread function of the microscope. Consequently, the determination of the structure of a macromolecule involves not only a reconstruction task, but also the deconvolution of each projection image. We formulate this problem as a constrained regularized reconstruction. We are able to include the contrast transfer function directly in the system matrix without any extra computational cost. The experimental results suggest that our approach brings a significant improvement in the quality of the reconstruction. Our framework also provides an important step toward the application of SPA for the de novo generation of macromolecular models. The corresponding algorithms have been implemented in Xmipp.
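As a rough illustration of the variational formulation above, the ADMM iteration for a total-variation-regularized least-squares reconstruction can be sketched in one dimension. This is a simplification of the framework, not the authors' implementation; the function names and parameters are illustrative, and `A` stands in for the discretized forward model:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_tv(A, b, lam=0.1, rho=1.0, n_iter=200):
    """Minimise 0.5*||Ax - b||^2 + lam*||Dx||_1 with ADMM, where D is a
    1-D forward-difference operator (anisotropic total variation)."""
    n = A.shape[1]
    # Forward-difference matrix D of shape (n-1, n): (Dx)_i = x_{i+1} - x_i.
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    x = np.zeros(n)
    z = np.zeros(n - 1)   # auxiliary variable z ~ Dx
    u = np.zeros(n - 1)   # scaled dual variable
    # The quadratic x-subproblem has a fixed system matrix.
    M = A.T @ A + rho * D.T @ D
    for _ in range(n_iter):
        x = np.linalg.solve(M, A.T @ b + rho * D.T @ (z - u))
        z = soft_threshold(D @ x + u, lam / rho)
        u = u + D @ x - z
    return x
```

On a noisy piecewise-constant signal this recovers the flat segments while preserving the jump, which is the qualitative behaviour TV regularization is chosen for.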

    Accurate 3D shape and displacement measurement using a scanning electron microscope

    With the current development of nanotechnology, there is an increasing demand for three-dimensional shape and deformation measurements at this reduced length scale in the field of materials research. Images acquired by Scanning Electron Microscope (SEM) systems, coupled with analysis by Digital Image Correlation (DIC), are an attractive combination for developing a high-magnification measurement system. However, an SEM is designed for visualization, not for metrological studies, and applying DIC at the micro- or nano-scale with such a system faces the challenges of calibrating the imaging system and correcting the spatially-varying and time-varying distortions in order to obtain accurate measurements. Moreover, the SEM provides only a single sensor, so 3D information cannot be recovered with the classical stereo-vision approach. But since the specimen is mounted on the mobile SEM stage, images can be acquired from multiple viewpoints, and 3D reconstruction is possible using the principle of videogrammetry to recover the unknown rigid-body motions undergone by the specimen. The dissertation emphasizes the new calibration methodology that has been developed, because it is a major contribution to the accuracy of 3D shape and deformation measurements at reduced length scale. It proves that, unlike previous works, image drift and distortion must be taken into account if accurate measurements are to be made with such a system. The necessary background and theoretical knowledge for 3D shape measurement using videogrammetry and for in-plane and out-of-plane deformation measurement are presented in detail as well. In order to validate our work and demonstrate in particular the measurement accuracy obtained, experimental results from different applications are presented throughout the chapters.
Finally, a software package gathering different computer vision applications has been developed.
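The videogrammetric step described above hinges on recovering the rigid-body motion of the specimen between viewpoints. A minimal sketch of that primitive, assuming known 3D point correspondences, is the classical Kabsch/Procrustes solution; this is a generic illustration, not the dissertation's calibration pipeline:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid motion (R, t) such that Q ~ R @ P + t,
    via the Kabsch/Procrustes SVD solution.
    P, Q: (3, N) arrays of corresponding 3D points."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Given noise-free correspondences this recovers the rotation and translation exactly; with noisy points it returns the least-squares optimum.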

    Estimation of affine transformations directly from tomographic projections in two and three dimensions

    This paper presents a new approach to estimate two- and three-dimensional affine transformations from tomographic projections. Instead of estimating the deformation from the reconstructed data, we introduce a method that works directly in the projection domain, using parallel and fan-beam projection geometries. We show that any affine deformation can be compensated analytically, and we develop an efficient multiscale estimation framework based on the normalized cross correlation. The accuracy of the approach is verified using simulated and experimental data, and we demonstrate that the new method needs fewer projection angles and has a much lower computational complexity than approaches based on the standard reconstruction technique.
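Since, in parallel geometry, an in-plane translation of the object appears as a lateral shift of each projection, the normalized-cross-correlation matching mentioned above can be sketched for the simplest case of a 1-D shift. This is an illustrative toy, not the paper's full multiscale affine estimator:

```python
import numpy as np

def ncc_shift(p_ref, p_mov, max_shift):
    """Estimate the integer shift s such that p_mov[i] ~ p_ref[i - s],
    by maximising the normalised cross correlation over the overlap."""
    n = len(p_ref)
    best_s, best_ncc = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = p_ref[:n - s], p_mov[s:]
        else:
            a, b = p_ref[-s:], p_mov[:n + s]
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom == 0:
            continue
        ncc = float(a @ b) / denom
        if ncc > best_ncc:
            best_ncc, best_s = ncc, s
    return best_s, best_ncc
```

A multiscale version would run this on coarsened projections first and refine the search window at each finer scale, which is where the computational savings come from.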

    Feature-Based Models for Three-Dimensional Data Fitting.

    There are numerous techniques available for fitting a surface to any supplied data set. The feature-based modeling technique takes advantage of the known, generic geometric shape of the data by deforming a model having this shape to approximate the data. The model is constructed as a rational B-spline surface with characteristic features superimposed on its definition. The first step in the fitting process is to align the model with a data set using the center of mass, principal axes and/or landmarks. Using this initial orientation, the position, rotation and scale parameters are optimized using a Newton-type optimization of a least-squares cost function. Once aligned, features embedded within the model, corresponding to pertinent characteristics of the shape, are used to improve the fit of the model to the data. Finally, the control-vertex weights and positions of the rational B-spline model are optimized to approximate the data to within a specified tolerance. Since the characteristic features are defined within the model at creation, important measures are easily extracted from a data set once it is fit. The feature-based modeling approach is demonstrated in two dimensions by the fitting of five facial silhouette profiles and in three dimensions by the fitting of eleven human foot scans. The algorithm is tested for sensitivity to data distribution and structure, and the extracted measures are tested for repeatability and accuracy. Limitations of the current implementation, future work and potential applications are also discussed.
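The initial alignment step using the center of mass and principal axes can be sketched as follows; this is a generic illustration of that first step, and the function name and interface are assumptions rather than the thesis code:

```python
import numpy as np

def initial_alignment(points):
    """Centre a point cloud on its centroid and rotate it onto its
    principal axes (eigenvectors of the covariance matrix, sorted by
    decreasing variance). points: (N, 3) array.
    Returns the aligned cloud, the centroid, and the axis matrix."""
    c = points.mean(axis=0)
    centred = points - c
    cov = centred.T @ centred / len(points)
    w, V = np.linalg.eigh(cov)
    order = np.argsort(w)[::-1]   # largest variance first
    V = V[:, order]
    return centred @ V, c, V
```

After this coarse alignment, the remaining position, rotation and scale residuals are small enough for a Newton-type least-squares refinement to converge reliably.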

    Generalizable automated pixel-level structural segmentation of medical and biological data

    Over the years, the rapid expansion in imaging techniques and equipment has driven the demand for more automation in handling large medical and biological data sets. A wealth of approaches has been suggested as optimal solutions for their respective imaging types. These solutions span various image resolutions, modalities and contrast (staining) mechanisms. Few approaches generalise well across multiple image types, contrasts or resolutions. This thesis proposes an automated pixel-level framework that addresses 2D, 2D+t and 3D structural segmentation in a more generalizable manner, yet has enough adaptability to address a number of specific image modalities, spanning retinal funduscopy, sequential fluorescein angiography and two-photon microscopy. The pixel-level segmentation scheme involves: i) constructing a phase-invariant orientation field of the local spatial neighbourhood; ii) combining local feature maps with intensity-based measures in a structural patch context; iii) using a complex supervised learning process to interpret the combination of all the elements in the patch in order to reach a classification decision. This has the advantage of transferability from retinal blood vessels in 2D to neural structures in 3D. To process the temporal components in non-standard 2D+t retinal angiography sequences, we first introduce a co-registration procedure: at the pairwise level, we combine projective RANSAC with a quadratic homography transformation to map the coordinate systems between any two frames. At the joint level, we construct a hierarchical approach so that each individual frame can be registered to the global reference intra- and inter-sequence. We then take a training-free approach that searches both the spatial neighbourhood of each pixel and the filter output across varying scales to locate and link microvascular centrelines to (sub-)pixel accuracy.
In essence, this "link while extract" piece-wise segmentation approach combines the local phase-invariant orientation-field information with additional local phase estimates to obtain a soft classification of the centreline (sub-)pixel locations. Unlike retinal segmentation problems where vasculature is the main focus, 3D neural segmentation requires additional flexibility, allowing a variety of structures of anatomical importance, yet with different geometric properties, to be differentiated both from the background and against other structures. Notably, cellular structures such as Purkinje cells, neural dendrites and interneurons all display a certain elongation along their medial axes, yet each class has a characteristic shape, captured by an orientation field, that distinguishes it from other structures. To take this into consideration, we introduce a 5D orientation mapping to capture these orientation properties. This mapping is incorporated into the local feature-map description prior to a learning machine. Extensive performance evaluation and validation of each of the techniques presented in this thesis is carried out. For retinal fundus images, we compute Receiver Operating Characteristic (ROC) curves on existing public databases (DRIVE & STARE) to assess and compare our algorithms with other benchmark methods. For 2D+t retinal angiography sequences, we compute the error metrics ("centreline error") of our scheme against other benchmark methods. For microscopic cortical data stacks, we present segmentation results on both surrogate data with known ground truth and experimental rat cerebellar cortex two-photon microscopic tissue stacks.
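The pairwise co-registration step combining RANSAC with a homography can be sketched as below, using a standard planar (linear) homography fitted by the direct linear transform. The thesis uses a quadratic homography; this simplification is only meant to show the RANSAC structure:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: homography H with dst ~ H @ src (in
    homogeneous coordinates), from >= 4 correspondences. src, dst: (N, 2)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, n_iter=200, tol=2.0, seed=0):
    """Tiny RANSAC loop around the DLT fit: sample 4 correspondences,
    fit, count inliers by reprojection error, keep the best model,
    then refit on all inliers of that model."""
    rng = np.random.default_rng(seed)
    n = len(src)
    src_h = np.hstack([src, np.ones((n, 1))])
    best_inliers = np.zeros(n, bool)
    for _ in range(n_iter):
        idx = rng.choice(n, 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        proj = src_h @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = np.linalg.norm(proj - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

The hierarchical joint-level registration would then chain such pairwise models so every frame maps into one global reference frame.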

    Reconstruction algorithms for multispectral diffraction imaging

    Thesis (Ph.D.)--Boston University. In conventional Computed Tomography (CT) systems, a single X-ray source spectrum is used to radiate an object, and the total transmitted intensity is measured to construct the spatial linear attenuation coefficient (LAC) distribution. Such scalar information is adequate for visualization of interior physical structures, but additional dimensions would be useful to characterize the nature of the structures. By imaging with broadband radiation and collecting energy-sensitive measurements, one can generate images of additional energy-dependent properties that can be used to characterize the nature of specific areas in the object of interest. In this thesis, we explore novel imaging modalities that use broadband sources and energy-sensitive detection to generate images of energy-dependent properties of a region, with the objective of providing high-quality information for material component identification. We explore two classes of imaging problems: 1) excitation using broad-spectrum sub-millimeter radiation in the Terahertz regime and measurement of the diffracted Terahertz (THz) field to construct the spatial distribution of complex refractive index at multiple frequencies; 2) excitation using broad-spectrum X-ray sources and measurement of coherent-scatter radiation to image the spatial distribution of coherent-scatter form factors. For these modalities, we extend approaches developed for multimodal imaging and propose new reconstruction algorithms that impose regularization structure such as common object boundaries across reconstructed regions at different frequencies. We also explore reconstruction techniques that incorporate prior knowledge in the form of spectral parametrization and sparse representations over redundant dictionaries, and we examine the advantages and disadvantages of these techniques in terms of image quality and potential for accurate material characterization.
We use the proposed reconstruction techniques to explore alternative architectures with reduced scanning time and increased signal-to-noise ratio, including THz diffraction tomography, limited-angle X-ray diffraction tomography and the use of coded-aperture masks. Numerical experiments and Monte Carlo simulations were conducted to compare the performance of the developed methods and validate the studied architectures as viable options for imaging of energy-dependent properties.
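One common way to impose shared object boundaries across frequency channels, as described above, is a joint-sparsity (l2,1) penalty whose proximal operator shrinks all channels of a pixel together. A minimal sketch of that operator follows; it is illustrative, not necessarily the exact regularizer used in the thesis:

```python
import numpy as np

def group_soft_threshold(X, t):
    """Proximal operator of the l2,1 norm. Each row of X holds the
    values of one pixel (or edge coefficient) across all frequency
    channels; rows are shrunk jointly by their Euclidean norm, so a
    boundary either survives in all channels or is suppressed in all,
    encouraging common object boundaries across frequencies."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```

Dropped into an iterative scheme (e.g., in place of the per-channel soft threshold of a channel-by-channel reconstruction), this couples the frequency bands without forcing their values to be equal.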