
    Iterative X-ray Spectroscopic Ptychography

    Spectroscopic ptychography is a powerful technique for determining the chemical composition of a sample with high spatial resolution. In spectro-ptychography, a sample is rastered through a focused x-ray beam of varying photon energy so that a series of phaseless diffraction data are recorded. Each chemical component in the material under investigation has a characteristic absorption and phase contrast as a function of photon energy. Using a dictionary formed by the set of contrast functions at each energy for each chemical component, it is possible to obtain the chemical composition of the material from high-resolution multi-spectral images. This paper presents SPA (Spectroscopic Ptychography with ADMM), a novel algorithm that iteratively solves the spectroscopic blind ptychography problem. We first design a nonlinear spectro-ptychography model based on Poisson maximum likelihood, and then construct the proposed method from fast iterative splitting operators. SPA can be used to retrieve spectral contrast with either a known or an incomplete (partially known) dictionary of reference spectra. By coupling the redundancy across different spectral measurements, the proposed algorithm achieves higher reconstruction quality than standard state-of-the-art two-step methods. We demonstrate how SPA recovers accurate chemical maps from Poisson-noised measurements, and also show its enhanced robustness when reconstructing reduced-redundancy ptychography data acquired with large scanning step sizes.
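
    The abstract's core loop alternates between a measurement-fidelity step and a dictionary-coupling step. As a loose illustration only, the following NumPy sketch runs that style of ADMM alternation on a toy, fully simplified problem: the dictionary is known, there is no probe or scanning, and an amplitude projection stands in for the Poisson likelihood term. All names, sizes, and steps are invented for the sketch; this is not the authors' SPA implementation.

```python
# Minimal sketch (not the authors' SPA code) of an ADMM-style alternation for
# spectro-ptychographic unmixing: a measurement step couples to a dictionary
# step through a scaled dual variable. Known dictionary, no probe/scanning,
# and an amplitude projection standing in for the Poisson likelihood.
import numpy as np

rng = np.random.default_rng(0)
E, N, K = 4, 32, 2                                # energies, image size, components
D = rng.random((E, K)) + 1j * rng.random((E, K))  # hypothetical reference spectra
C_true = rng.random((K, N, N))                    # ground-truth concentration maps

X_true = np.tensordot(D, C_true, axes=1)          # per-energy complex images (E, N, N)
I_meas = np.abs(np.fft.fft2(X_true))**2           # phaseless far-field intensities

X = np.ones((E, N, N), dtype=complex)             # current per-energy estimates
U = np.zeros_like(X)                              # scaled ADMM dual variable
for _ in range(200):
    # (1) Data step: impose the measured Fourier moduli (amplitude projection).
    F = np.fft.fft2(X + U)
    Z = np.fft.ifft2(np.sqrt(I_meas) * F / (np.abs(F) + 1e-12))
    # (2) Dictionary step: couple all energies by least-squares fit to span(D).
    C = np.linalg.lstsq(D, (Z - U).reshape(E, -1), rcond=None)[0]
    X = (D @ C).reshape(E, N, N)
    # (3) Dual ascent on the consensus constraint X = Z.
    U = U + X - Z
```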

    A switchable light field camera architecture with Angle Sensitive Pixels and dictionary-based sparse coding

    We propose a flexible light field camera architecture that lies at the convergence of optics, sensor electronics, and applied mathematics. Through the co-design of a sensor comprising tailored Angle Sensitive Pixels and advanced reconstruction algorithms, we show that, in contrast to today's light field cameras, our system can use the same measurements captured in a single sensor image to recover either a high-resolution 2D image, a low-resolution 4D light field using fast, linear processing, or a high-resolution light field using sparsity-constrained optimization.
    Funding: National Science Foundation (U.S.) (NSF Grants IIS-1218411 and IIS-1116452); MIT Media Lab Consortium; National Science Foundation (U.S.) (Graduate Research Fellowship); Natural Sciences and Engineering Research Council of Canada (NSERC Postdoctoral Fellowship); Alfred P. Sloan Foundation (Research Fellowship); United States Defense Advanced Research Projects Agency (DARPA Young Faculty Award).
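
    To make the sparsity-constrained recovery mode concrete, here is a minimal, hypothetical sketch: ISTA solving min_x 0.5||y - Ax||^2 + lam||x||_1, where in the paper's setting A would combine the Angle Sensitive Pixel response with a light field dictionary. Here A and y are random synthetic stand-ins, not the actual sensor model.

```python
# Hypothetical sketch of the sparsity-constrained recovery mode: ISTA for
# min_x 0.5*||y - A @ x||^2 + lam*||x||_1. A and y are random stand-ins for
# the ASP sensing matrix and a captured sensor image.
import numpy as np

rng = np.random.default_rng(1)
m, n, s = 128, 512, 10                     # measurements, atoms, sparsity level
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2)**2       # 1/L, L = Lipschitz const. of gradient
x = np.zeros(n)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - y))     # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```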

    Fast Adaptive Augmented Lagrangian Digital Image Correlation

    Digital image correlation (DIC) is a powerful experimental technique for measuring full-field displacement and strain. The basic idea of the method is to compare images of an object decorated with a speckle pattern before and after deformation in order to compute the displacement and strain fields. Local Subset DIC and finite element-based Global DIC are two widely used image matching methods; however, both have drawbacks. In Local Subset DIC, the computed displacement field may not satisfy compatibility, and the deformation gradient may be noisy, especially when the subset size is small. Global DIC incorporates displacement compatibility, but can be computationally expensive. In this thesis, we propose a new method, the augmented Lagrangian digital image correlation (ALDIC), that combines the advantages of both the local (fast and parallelizable) and global (compatible) methods. We demonstrate that ALDIC has higher accuracy and behaves more robustly than both Local Subset DIC and Global DIC. DIC requires a large number of high-resolution images, which imposes significant demands on data storage and transmission. We combined DIC algorithms with image compression techniques and show that it is possible to obtain accurate displacement and strain fields with only 5% of the original image size. We studied two compression techniques, the discrete cosine transform (DCT) and the wavelet transform, and three DIC algorithms: Local Subset DIC, Global DIC, and our newly proposed ALDIC. We found that Local Subset DIC leads to the largest errors and ALDIC to the smallest when compressed images are used. We also found that wavelet-based image compression introduces less error than DCT-based compression. To further speed up and improve the accuracy of DIC algorithms, especially in the study of complex heterogeneous strain fields at various length scales, we apply an adaptive finite element mesh to DIC methods. We develop a new h-adaptive technique and apply it to ALDIC. We show that this adaptive-mesh ALDIC algorithm significantly decreases computation time with no loss (and some gain) in accuracy.
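
    As a small illustration of the compression experiment described above, the sketch below keeps only the largest 5% of the 2D DCT coefficients of an image and reconstructs it; the image is synthetic noise standing in for a speckle pattern, and the thesis's actual compression pipeline is more involved.

```python
# Sketch of the 5% DCT compression experiment: keep only the largest 5% of
# 2D DCT coefficients and reconstruct. Synthetic noise stands in for a real
# speckle image; the thesis's actual codecs are more involved.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(2)
img = rng.random((256, 256))               # stand-in for a speckle pattern

coeffs = dctn(img, norm="ortho")
k = int(0.05 * coeffs.size)                # retain 5% of the coefficients
thresh = np.partition(np.abs(coeffs).ravel(), -k)[-k]
recon = idctn(np.where(np.abs(coeffs) >= thresh, coeffs, 0.0), norm="ortho")
print("PSNR (dB):", 10 * np.log10(1.0 / np.mean((img - recon)**2)))
```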

    Analysis and development of phase retrieval algorithms for ptychography

    Ptychography, a relatively new form of phase retrieval, can reconstruct both intensity and phase images of a sample from a group of diffraction patterns, which are recorded as the sample is translated through a grid of positions. To recover the phase information lost in the recording of these diffraction patterns, iterative algorithms must optimise an objective function full of local minima in a huge multidimensional space. Many such algorithms have been developed, each aiming to converge rapidly whilst avoiding stagnation. This thesis aims to set a standard error metric for comparing some of the more popular algorithms, to determine their advantages and disadvantages under a range of different conditions, and hence to develop a more adaptive algorithm that combines the advantages of these ancestors. In this thesis, the different algorithms are explained together with their reconstruction results on both simulated and practical data. Modifications to mPIE, ADMM and RAAR are suggested to either reduce their number of parameters or improve their computational efficiency. An improved spatial error metric, which can evaluate reconstruction quality after removing inherent ambiguities, is introduced to compare these algorithms. Building on the phase retrieval algorithms discussed, a new algorithm, adaptive PIE, is developed; it converges faster and is more accurate than its ancestors.
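
    For concreteness, here is a minimal sketch of the core update shared by the PIE family, written in the well-known ePIE form: Fourier-domain modulus replacement followed by object and probe updates. Toy sizes and a fabricated measurement; this is not the thesis code.

```python
# Illustrative ePIE-style core update (toy sizes, fabricated measurement);
# shown only to make the PIE family's iteration concrete.
import numpy as np

def epie_update(obj, probe, pos, I_meas, alpha=1.0, beta=1.0):
    """One ePIE-style update at scan position pos (top-left corner)."""
    r, c = pos
    n = probe.shape[0]
    patch = obj[r:r+n, c:c+n].copy()        # keep the old patch for the probe update
    exit_wave = probe * patch
    F = np.fft.fft2(exit_wave)
    F_new = np.sqrt(I_meas) * F / (np.abs(F) + 1e-12)  # keep phase, fix modulus
    diff = np.fft.ifft2(F_new) - exit_wave
    # Object and probe updates, each normalized by the other's peak intensity.
    obj[r:r+n, c:c+n] += alpha * np.conj(probe) * diff / (np.abs(probe)**2).max()
    probe += beta * np.conj(patch) * diff / (np.abs(patch)**2).max()
    return obj, probe

rng = np.random.default_rng(7)
obj = np.ones((64, 64), dtype=complex)
probe = np.exp(1j * rng.random((16, 16)))                 # toy probe guess
I = np.abs(np.fft.fft2(probe * rng.random((16, 16))))**2  # fabricated measurement
obj, probe = epie_update(obj, probe, (10, 10), I)
```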

    Efficient Computing for Three-Dimensional Quantitative Phase Imaging

    Quantitative Phase Imaging (QPI) is a powerful imaging technique for measuring the refractive index distribution of transparent objects such as biological cells and optical fibers. The quantitative, non-invasive approach of QPI provides preeminent advantages in biomedical applications and in the characterization of optical fibers. Tomographic Deconvolution Phase Microscopy (TDPM) is a promising 3D QPI method that combines diffraction tomography, deconvolution, and through-focal scanning with object rotation to achieve isotropic spatial resolution. However, due to the large data size, 3D TDPM requires extensive computation power and time. To overcome this shortcoming, CPU/GPU parallel computing and application-specific embedded systems can be utilized. In this research, OpenMP Tasking and CUDA Streaming with Unified Memory (TSUM) is proposed to speed up the tomographic angle computations in 3D TDPM. TSUM leverages CPU multithreading and GPU computing on a System on a Chip (SoC) with unified memory. Unified memory eliminates data transfer between CPU and GPU memories, which is a major bottleneck in GPU computing. This research presents a speedup of 3D TDPM with TSUM on a large dataset and demonstrates the potential of TSUM for realizing real-time 3D TDPM.
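
    TSUM itself is an OpenMP/CUDA design, but the underlying idea of fanning per-angle work out to parallel tasks can be sketched in plain Python. In the toy below, a trivial rotation kernel stands in for the real per-angle computation and a process pool stands in for OpenMP tasking; the unified-memory and CUDA-stream aspects have no direct equivalent here.

```python
# CPU-only analogy to TSUM's task parallelism: fan the per-angle work of a
# tomographic pipeline out to a pool of worker processes. The rotation kernel
# is a trivial stand-in for the real per-angle computation.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.ndimage import rotate

def process_angle(args):
    """Stand-in for one tomographic angle's computation."""
    image, angle = args
    return rotate(image, angle, reshape=False, order=1)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    sl = rng.random((128, 128))                     # one object slice
    angles = np.linspace(0.0, 180.0, 32, endpoint=False)
    with ProcessPoolExecutor() as pool:             # analogous to OpenMP tasks
        partials = list(pool.map(process_angle, ((sl, a) for a in angles)))
    accum = np.mean(partials, axis=0)               # accumulate per-angle results
    print(accum.shape)
```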

    Biological image analysis

    In biological research, images are extensively used to monitor growth, dynamics and changes in biological specimens, such as cells or plants. Many of these images are used solely for observation or are manually annotated by an expert. In this dissertation we discuss several methods to automate the annotation and analysis of bio-images. Two large clusters of methods have been investigated and developed. A first set of methods focuses on the automatic delineation of relevant objects in bio-images, such as individual cells in microscopic images. Since these methods should be useful for many different applications, e.g. to detect and delineate different objects (cells, plants, leaves, ...) in different types of images (different types of microscopes, regular colour photographs, ...), the methods should be easy to adjust. Therefore we developed a methodology relying on probability theory, where all required parameters can easily be estimated by a biologist, without requiring any knowledge of the techniques used in the actual software. A second cluster of investigated techniques focuses on the analysis of shapes. By defining new features that describe shapes, we are able to automatically classify shapes, retrieve similar shapes from a database, and even analyse how an object deforms through time.
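
    As a minimal sketch of the probability-based delineation idea, the code below classifies each pixel as object or background with Bayes' rule, using Gaussian intensity likelihoods whose means, spreads and prior are exactly the kind of parameters a biologist could estimate from example regions. All numbers and the test image are invented.

```python
# Invented example of per-pixel Bayes classification for object delineation.
import numpy as np

rng = np.random.default_rng(4)
img = np.concatenate([rng.normal(0.3, 0.05, (64, 128)),   # background strip
                      rng.normal(0.7, 0.05, (64, 128))])  # cell-like strip

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

prior_fg = 0.4                                  # expert-estimated prior
like_fg = gauss(img, 0.7, 0.05)                 # expert-estimated statistics
like_bg = gauss(img, 0.3, 0.05)
post_fg = prior_fg * like_fg / (prior_fg * like_fg + (1 - prior_fg) * like_bg)
mask = post_fg > 0.5                            # delineated object mask
print("foreground fraction:", mask.mean())
```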

    High-speed imaging with optical encoding and compressive sensing

    Imaging instruments can be used to obtain a series of frames in domains such as frequency and time. Recent advancements in medical, astronomical, scientific and consumer applications demand overall improvements in these imaging systems. Many current imaging methods rely on the well-known Shannon-Nyquist theorem, and sustaining this conventional sampling model increases the system complexity, data rate, storage and processing requirements, as well as the overall build costs of these units. Recent investigations based on the mathematical theory of compressed sensing (CS) have broken with traditional sampling mechanisms and introduced alternative methods of data sampling. This dissertation investigates current advancements in high-speed imaging schemes and proposes new methods and optical designs to improve the spatial and temporal resolution, as well as the required transmission and storage capacity, of imaging systems. First, we investigate current mathematical models of CS-based algorithms in video acquisition systems and propose an improved, adapted technique for data reconstruction. We then investigate state-of-the-art high-speed imaging methods and introduce optical encoding techniques that enable current high-speed imaging systems to reach frame rates ten times faster whilst preserving their spatial resolution. Second, we develop a novel high-speed imaging system that implements a CS-based optical imaging technique and experimentally demonstrate its operation. The proposed compressive coded rotating mirror (CCRM) camera benefits from noticeably reduced physical dimensions, highly reduced build costs and significantly simplified operation compared to other high-speed cameras. Due to the built-in optical encoding and on-the-fly compression of the CCRM camera, it becomes a viable option for fields such as medical and military imaging, where data security remains a top priority. Finally, we discuss potential improvements to the CCRM camera and propose several advancement plans for the future of this system.
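
    A toy version of the coded-snapshot compressive video model behind cameras of this kind: several frames are modulated by binary masks and summed into a single snapshot, then recovered here by plain least-squares gradient descent. A real system such as the CCRM would use much stronger priors and its own optical code; all sizes and masks below are synthetic.

```python
# Toy coded-snapshot compressive video model: T mask-modulated frames sum to
# one snapshot; recovery is plain least-squares gradient descent.
import numpy as np

rng = np.random.default_rng(5)
T, N = 8, 64
x_true = rng.random((T, N, N))                  # hidden video frames
masks = (rng.random((T, N, N)) > 0.5).astype(float)
y = (masks * x_true).sum(axis=0)                # single coded snapshot

x = np.zeros_like(x_true)
step = 1.0 / T                                  # safe step for this operator
for _ in range(300):
    resid = (masks * x).sum(axis=0) - y         # forward-model residual
    x -= step * masks * resid                   # adjoint of the masked sum
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```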

    A Variational Aggregation Framework for Patch-Based Optical Flow Estimation

    We propose a variational aggregation method for optical flow estimation. It consists of a two-step framework, first estimating a collection of parametric motion models to generate motion candidates, and then reconstructing a global dense motion field. The aggregation step is designed as a motion reconstruction problem from spatially varying sets of motion candidates given by parametric motion models. Our method is designed to capture large displacements in a variational framework without requiring any coarse-to-fine strategy. We handle occlusion with a motion inpainting approach in the candidate computation step. By performing parametric motion estimation, we combine the robustness to noise of local parametric methods with the accuracy yielded by global regularization. We demonstrate the performance of our aggregation approach by comparing it to standard variational methods and a discrete aggregation approach on the Middlebury and MPI Sintel datasets.
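
    A rough, discrete stand-in for the aggregation step described above: at each pixel, choose among a set of candidate motion vectors by a data cost, then smooth the winning field. The actual method performs this reconstruction variationally with occlusion handling; candidates and costs below are random placeholders.

```python
# Toy discrete stand-in for the aggregation step: per-pixel winner-take-all
# over motion candidates by data cost, then a box-filter smoothing.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(6)
H, W, C = 64, 64, 4
candidates = rng.normal(0.0, 2.0, (C, H, W, 2))  # C candidate flow fields
data_cost = rng.random((C, H, W))                # stand-in matching costs

best = np.argmin(data_cost, axis=0)              # per-pixel candidate choice
rows, cols = np.indices((H, W))
flow = candidates[best, rows, cols]              # winning vectors, (H, W, 2)
flow = np.stack([uniform_filter(flow[..., k], size=5) for k in range(2)], axis=-1)
print(flow.shape)
```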