    Fast Objective Coupled Planar Illumination Microscopy

    Among optical imaging techniques, light sheet fluorescence microscopy stands out as one of the most attractive for capturing high-speed biological dynamics unfolding in three dimensions. The technique is potentially millions of times faster than point-scanning techniques such as two-photon microscopy. This potential is especially relevant for neuroscience applications, because interactions between neurons transpire over mere milliseconds within tissue volumes spanning hundreds of cubic microns. However, current-generation light sheet microscopes are limited by volume scanning rate and/or camera frame rate. We begin by reviewing the optical principles underlying light sheet fluorescence microscopy and the origin of these rate bottlenecks. We present an analysis leading us to the conclusion that Objective Coupled Planar Illumination (OCPI) microscopy is a particularly promising technique for recording the activity of large populations of neurons at high sampling rates. We then present speed-optimized OCPI microscopy, the first fast light sheet technique to avoid compromising image quality or photon efficiency. We pursue two strategies to develop the fast OCPI microscope. First, we devise a set of optimizations that increases the rate of the volume scanning system to 40 Hz for volumes up to 700 microns thick. Second, we introduce Multi-Camera Image Sharing (MCIS), a technique that scales imaging rate by incorporating additional cameras. MCIS can be applied not only to OCPI but to any widefield imaging technique, circumventing the rate limit imposed by a single camera. Detailed design drawings are included to aid dissemination to other research groups. We also demonstrate fast calcium imaging of the larval zebrafish brain and find a heartbeat-induced motion artifact. We recommend a new preprocessing step that removes the artifact through filtering. This step requires a minimum sampling rate of 15 Hz, and we expect it to become a standard procedure in zebrafish imaging pipelines. In the last chapter we describe essential computational considerations for controlling a fast OCPI microscope and processing the data it generates. We introduce a new image processing pipeline developed to maximize computational efficiency when analyzing these multi-terabyte datasets, including a novel calcium imaging deconvolution algorithm. Finally, we demonstrate how combined innovations in microscope hardware and software enable inference of predictive relationships between neurons, a promising complement to more conventional correlation-based analyses.
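
    As a rough illustration of such a heartbeat-removal step, the sketch below notch-filters a calcium trace at an assumed heartbeat frequency. The 3 Hz heart rate, the synthetic trace, and the filter design are illustrative assumptions, not the preprocessing procedure specified in the thesis; they only show why a sampling rate well above the heartbeat frequency is needed for this kind of filtering.

```python
# Hypothetical sketch: suppressing a heartbeat-locked motion artifact in a
# calcium trace sampled at the thesis's 15 Hz minimum. The 3 Hz heartbeat
# frequency and notch design are assumptions, not the thesis's pipeline.
import numpy as np
from scipy import signal

fs = 15.0        # volumetric sampling rate in Hz (thesis minimum)
f_heart = 3.0    # assumed larval zebrafish heartbeat frequency in Hz

# Synthetic trace: a slow calcium transient plus a heartbeat-locked artifact.
t = np.arange(0, 60, 1 / fs)
trace = np.exp(-((t - 30) ** 2) / 20) + 0.3 * np.sin(2 * np.pi * f_heart * t)

# Second-order IIR notch centered on the heartbeat frequency; filtfilt gives
# zero-phase filtering so the calcium dynamics are not shifted in time.
b, a = signal.iirnotch(w0=f_heart, Q=5.0, fs=fs)
cleaned = signal.filtfilt(b, a, trace)
```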

    Evaluation of single photon avalanche diode arrays for imaging fluorescence correlation spectroscopy : FPGA-based data readout and fast correlation analysis on CPUs, GPUs and FPGAs

    The metabolism of all living organisms, and specifically also of their smallest constituent, the cell, is based on chemical reactions. A key factor determining the speed of these processes is the transport of reactants, energy, and information within and between the cells of an organism. It has been shown that the relevant transport processes also depend on the spatial organization of the cells. Such transport processes are typically investigated using fluorescence correlation spectroscopy (FCS) in combination with fluorescent labeling of the molecules of interest. In FCS, one observes the fluctuating fluorescence signal from a femtoliter-sized sub-volume within the sample (e.g. a cell). The variations in the intensity arise from particles moving in and out of this sub-volume. By means of an autocorrelation analysis of the intensity signal, conclusions can be drawn regarding the concentration and the mobility parameters, such as the diffusion coefficient. Typically, one uses the laser focus of a confocal microscope for FCS measurements, but with this microscopy technique, FCS is limited to a single spot at a time. In order to conduct parallel multi-spot measurements, i.e. to create diffusion maps, FCS can be combined with light-sheet-based selective plane illumination microscopy (SPIM). This recent widefield microscopy technique allows observing a thin plane of a sample (1-3 µm thick), which can be positioned arbitrarily. Usually, FCS on a SPIM is done using fast electron-multiplying charge-coupled device (EMCCD) cameras, which offer a limited temporal resolution (500 µs). Such a temporal resolution only allows the motion of intermediately sized particles within a cell to be measured reliably; the detection of even smaller molecules is impossible. In this thesis, arrays of single photon avalanche diodes (SPADs) were used as detectors. Although SPAD-based image sensors still lack sensitivity, they provide a significantly better temporal resolution (1-10 µs for full frames) than is achievable with sensitive cameras, and they seem to be the ideal sensors for SPIM-FCS. In the course of this work, two recent SPAD arrays (developed in the groups of Prof. Edoardo Charbon, TU Delft, the Netherlands, and EPFL, Switzerland) were extensively characterized with regard to their suitability for SPIM-FCS. The evaluated SPAD arrays comprise 32x32 and 512x128 pixels and allow for frame rates of up to 300,000 and 150,000 frames per second, respectively. With these specifications, the latter array is one of the largest and fastest sensors currently available. During full-frame readout, it delivers a data rate of up to 1.2 GiB/s. For both arrays, suitable readout hardware based on field programmable gate arrays (FPGAs) was designed. To cope with the high data rate and to allow real-time correlation analysis, correlation algorithms were implemented and characterized on the three major high-performance computing platforms, namely FPGAs, CPUs, and graphics processing units (GPUs). Of the three platforms, the GPU performed best in terms of correlation analysis, achieving a speed of 2.6x real time for the larger SPAD array. Besides the lack of sensitivity, which could be compensated for with microlenses, a major drawback of the evaluated SPAD arrays was their afterpulsing: its temporal structure was superimposed on the diffusion signal, making it impossible to extract diffusion properties from the autocorrelation analysis alone. By additionally performing a spatial cross-correlation analysis, such influences could be significantly reduced. Furthermore, this approach allowed the determination of absolute diffusion coefficients without prior calibration. With that, spatially resolved measurements of fluorescent proteins in living cells could be conducted successfully.
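
    For context, FCS rests on the normalized intensity autocorrelation G(τ) = ⟨δI(t)·δI(t+τ)⟩ / ⟨I⟩², computed per pixel over the intensity trace. The sketch below is a plain NumPy reference estimator of this quantity, not the real-time FPGA/GPU correlators developed in the thesis; the synthetic Poisson trace and the lag range are assumptions for illustration.

```python
# Minimal sketch of an FCS autocorrelation estimator for one pixel's
# intensity trace. A reference implementation only; the thesis's real-time
# correlators on FPGAs/CPUs/GPUs are far more optimized.
import numpy as np

def fcs_autocorrelation(intensity, max_lag):
    """Return G(tau) = <dI(t) dI(t+tau)> / <I>^2 for tau = 1..max_lag frames."""
    i = np.asarray(intensity, dtype=float)
    mean = i.mean()
    delta = i - mean                      # intensity fluctuations dI(t)
    n = len(i)
    g = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        g[lag - 1] = np.mean(delta[:n - lag] * delta[lag:]) / mean**2
    return g

# Example: 100,000 frames; at 10 us per frame the lags start at 10 us.
rng = np.random.default_rng(0)
trace = rng.poisson(lam=5.0, size=100_000)   # synthetic photon counts
g = fcs_autocorrelation(trace, max_lag=100)  # ~flat for uncorrelated noise
```

    A spatial cross-correlation, as used in the thesis to suppress afterpulsing, would correlate the fluctuations of two different pixels in the same way, since afterpulsing is uncorrelated between detectors.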

    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier, and AR complements VR in many ways. Because the user can see real and virtual objects simultaneously, AR is far more intuitive, although it is not free from human-factors and other restrictions. AR applications also demand less time and effort, since it is not necessary to construct the entire virtual scene and environment. In this book, several new and emerging application areas of AR are presented, divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security, and surveillance. The second section deals with AR in medicine, biology, and the human body. The third and final section contains a number of new and useful applications in daily living and learning.

    Optimization of the holographic process for imaging and lithography

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 272-297).
    Since their invention in 1948 by Dennis Gabor, holograms have proven to be important components of a variety of optical systems, and their implementation in new fields and methods is expected to continue growing. Their ability to encode 3D optical fields on a 2D plane opened the possibility of novel applications for imaging and lithography. In the traditional form, holograms are produced by the interference of reference and object waves, recording the phase and amplitude of the complex field. The holographic process has been extended to include different recording materials and methods. The increasing demand for holographic-based systems is accompanied by a need for efficient optimization tools designed to maximize the performance of the optical system. In this thesis, a variety of multi-domain optimization tools designed to improve the performance of holographic optical systems are proposed. These tools are designed to be robust, computationally efficient, and sufficiently general to be applied in the design of various holographic systems. All the major forms of holographic elements are studied: computer generated holograms, thin and thick conventional holograms, numerically simulated holograms, and digital holograms. Novel holographic optical systems for imaging and lithography are proposed. In the case of lithography, a high-resolution system based on Fresnel-domain computer generated holograms (CGHs) is presented. The holograms are numerically designed using a reduced-complexity hybrid optimization algorithm (HOA) based on genetic algorithms (GAs) and the modified error reduction (MER) method. The algorithm is efficiently implemented on a graphics processing unit. Simulations as well as experimental results for CGHs fabricated using electron-beam lithography are presented. A method for extending the system's depth of focus is proposed. The HOA is extended to the design and optimization of multispectral CGHs applied to high-efficiency solar concentration and spectral splitting. A second lithographic system, based on optically recorded total internal reflection (TIR) holograms, is studied. A comparative analysis between scalar and vector diffraction theories for the modeling and simulation of the system is performed. A complete numerical model of the system is developed, including the photoresist response and first-order models for shrinkage of the holographic emulsion. A novel block-stitching algorithm is introduced for the calculation of large diffraction patterns, which allows overcoming current computational limitations of memory and processing time. The numerical model is used to optimize the system's performance as well as to redesign the mask to account for potential fabrication errors. The simulation results are compared to experimentally measured data. In the case of imaging, a segmented-aperture thin imager based on holographically corrected gradient index (GRIN) lenses is proposed. The compound system is constrained to a maximum thickness of 5 mm and utilizes an optically recorded hologram to correct high-order optical aberrations of the GRIN lens array. The imager is analyzed using system and information theories. A multi-domain optimization approach based on GAs is implemented to maximize the system's channel capacity and hence improve the information extraction or encoding process. A decoding or reconstruction strategy is implemented using a superresolution algorithm. Experimental results for the optimization of the hologram's recording process and the tomographic measurement of the system's space-variant point spread function are presented. A second imaging system, for the measurement of complex fluid flows by tracking micron-sized particles using digital holography, is studied. A stochastic theoretical model, based on a stability metric similar to the channel capacity of a Gaussian channel, is presented and used to optimize the system. The theoretical model is first derived for the extreme case of point-source particles using Rayleigh scattering and scalar diffraction theory formulations. The model is then extended to account for particles of variable sizes using Mie theory for the scattering of homogeneous dielectric spherical particles. The influence and statistics of the particle-density-dependent cross-talk noise are studied. Simulation and experimental results for finding the optimum particle density based on the stability metric are presented. For all the studied systems, a sensitivity analysis is performed to predict and assist in the correction of potential fabrication or calibration errors.
    by José Antonio Domínguez-Caballero, Ph.D.
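
    For orientation, the error-reduction family of CGH design algorithms alternates between the hologram and image planes, enforcing a phase-only constraint in one and the target amplitude in the other. The sketch below implements the classic Gerchberg-Saxton error-reduction loop with simple Fourier-transform propagation; the thesis's hybrid optimization algorithm (GAs combined with modified error reduction, Fresnel-domain propagation, GPU implementation) is substantially more involved, so this is only a baseline illustration.

```python
# Minimal sketch of the classic Gerchberg-Saxton / error-reduction loop for a
# phase-only CGH, using plain Fourier-transform propagation. Not the thesis's
# HOA/MER method; an illustrative baseline only.
import numpy as np

def error_reduction_cgh(target_amplitude, iterations=100, seed=0):
    """Alternate between planes, enforcing the constraint valid in each."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    field = target_amplitude * np.exp(1j * phase)  # image-plane field
    for _ in range(iterations):
        holo = np.fft.ifft2(field)
        holo = np.exp(1j * np.angle(holo))         # hologram plane: phase-only
        field = np.fft.fft2(holo)
        field = target_amplitude * np.exp(1j * np.angle(field))  # image plane
    return np.angle(holo)                          # phase mask to fabricate

# Example: a 256x256 target with a bright centered square.
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
phase_mask = error_reduction_cgh(target)
```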

    Design of large polyphase filters in the Quadratic Residue Number System

    Temperature aware power optimization for multicore floating-point units

    Eighth Biennial Report: April 2005 – March 2007
