
    A Vignetting Model for Light Field Cameras with an Application to Light Field Microscopy

    In standard photography, vignetting is considered mainly a radiometric effect because it results in a darkening of the edges of the captured image. In this paper, we demonstrate that for light field cameras, vignetting is more than just a radiometric effect. It modifies the properties of the acquired light field and renders most of the calibration procedures from the literature inadequate. We address the problem by describing a model- and camera-agnostic method to evaluate vignetting in phase space. This enables the synthesis of vignetted pixel values that, applied to a range of pixels, yield images corresponding to the white images customarily recorded for calibrating light field cameras. We show that the commonly assumed reference points for microlens-based systems are incorrect approximations of the true optical reference, i.e. the image of the center of the exit pupil. We introduce a novel calibration procedure to determine this optically correct reference point from experimental white images. We describe the changes vignetting imposes on the light field sampling patterns and, therefore, on the optical properties of the corresponding virtual cameras using the ECA model [1], and apply these insights to a custom-built light field microscope.
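    A toy illustration of the underlying idea: per-pixel vignetting can be estimated by tracing each pixel's ray bundle through its microlens aperture to the main-lens exit pupil and recording the unclipped fraction. The sketch below is a crude paraxial stand-in, not the paper's phase-space method; all geometry parameters, and the neglect of refraction at the microlens, are our assumptions.

```python
# Toy paraxial sketch (assumptions throughout, not the paper's method):
# the white image is the fraction of each pixel's ray bundle that
# survives clipping at the main-lens exit pupil.
import numpy as np

def white_image(n_ml=15, px_per_ml=11, ml_pitch=0.1, d_ml=0.5,
                d_exit=50.0, pupil_radius=5.0, n_samples=64):
    """Fraction of each pixel's ray bundle passing the exit pupil."""
    size = n_ml * px_per_ml
    img = np.zeros((size, size))
    px = ml_pitch / px_per_ml                      # pixel pitch (mm, assumed)
    # sample points on each microlens aperture (uniform square, for brevity)
    u = (np.random.rand(n_samples, 2) - 0.5) * ml_pitch
    for iy in range(size):
        for ix in range(size):
            # microlens center and the pixel's offset behind it
            cx = (ix // px_per_ml - n_ml / 2 + 0.5) * ml_pitch
            cy = (iy // px_per_ml - n_ml / 2 + 0.5) * ml_pitch
            ox = (ix % px_per_ml - px_per_ml / 2 + 0.5) * px
            oy = (iy % px_per_ml - px_per_ml / 2 + 0.5) * px
            # ray angles from the pixel through the sampled aperture points,
            # then straight-line propagation to the exit-pupil plane
            ang = (u - [ox, oy]) / d_ml
            hit = np.array([cx, cy]) + u + ang * d_exit
            img[iy, ix] = np.mean(np.hypot(hit[:, 0], hit[:, 1]) < pupil_radius)
    return img
```

    Running this over all pixels yields a synthetic analogue of the white images recorded for calibration, with the darkening pattern determined by the pupil geometry.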

    In situ correction of liquid meniscus in cell culture imaging system based on parallel Fourier ptychographic microscopy (96 Eyes)

    We collaborated with Amgen and spent five years designing and fabricating next-generation multi-well plate imagers based on Fourier ptychographic microscopy (FPM). A 6-well imager (Emsight) and a low-cost parallel microscopic system (96 Eyes) based on parallel FPM were reported in our previous work. However, the effect of the liquid meniscus on image quality is much stronger than anticipated, introducing obvious wavevector misalignment and additional image aberration. To this end, an adaptive wavevector correction (AWC-FPM) algorithm and a pupil recovery improvement strategy are presented to solve these challenges in situ. In addition, dual-channel fluorescence excitation is added to obtain structural information for microbiologists. Experiments are presented to verify their performance. The accuracy of the angular resolution with our algorithm is within 0.003 rad. Our algorithms make FPM reconstruction more robust and practical and can be extended to other FPM-based applications to overcome similar challenges.
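    To make the wavevector-correction idea concrete, here is a minimal sketch, not the paper's AWC-FPM algorithm: for a single LED, search a small neighbourhood of pupil-crop offsets in the Fourier plane and keep the one whose simulated low-resolution intensity best matches the measurement. Array names and the brute-force search are illustrative assumptions.

```python
# Hedged sketch of adaptive wavevector correction for FPM (not the
# paper's AWC-FPM code). spectrum is the current high-res spectrum
# estimate, pupil an m x m pupil mask, measured the LED's intensity image.
import numpy as np

def correct_wavevector(spectrum, pupil, measured, kx, ky, search=2):
    """Return the (kx, ky) crop corner minimizing amplitude mismatch."""
    m = pupil.shape[0]
    best, best_err = (kx, ky), np.inf
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            x, y = kx + dx, ky + dy
            crop = spectrum[y:y + m, x:x + m] * pupil
            sim = np.abs(np.fft.ifft2(np.fft.ifftshift(crop))) ** 2
            err = np.sum((np.sqrt(sim) - np.sqrt(measured)) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

    The corrected crop position would then feed the usual FPM spectrum update for that LED.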

    FPM-WSI: Fourier ptychographic whole slide imaging via feature-domain backdiffraction

    Fourier ptychographic microscopy (FPM), characterized by high-throughput computational imaging, theoretically provides an elegant solution to the trade-off between spatial resolution and field of view (FOV), and has promising prospects in digital pathology. However, block-wise reconstruction followed by stitching is currently unavoidable due to vignetting effects, and the stitched image tends to present color inconsistency between segments, or even stitching artifacts. In response, we report a computational framework based on feature-domain backdiffraction that realizes full-FOV, stitching-free FPM reconstruction. Unlike conventional algorithms that establish the loss function in the image domain, our method formulates it in the feature domain, where the effective information of the images is extracted by a feature extractor to bypass the vignetting effect. The feature-domain error between images predicted from the estimated model parameters and the captured images is then digitally diffracted back through the optical system for complex-amplitude reconstruction and aberration compensation. Through extensive simulations and experiments, the method effectively eliminates vignetting artifacts and relaxes the requirement for precise knowledge of illumination positions. We also find that it can recover data with a lower spectral overlap rate and realize automatic blind digital refocusing without prior knowledge of the defocus distance.
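    A rough illustration of why a feature-domain loss can bypass vignetting: if the loss compares image gradients instead of raw intensities, a smooth multiplicative vignetting field contributes little to the residual. The toy extractor below is our stand-in, not the paper's feature extractor.

```python
# Toy feature-domain loss (our stand-in for the paper's extractor):
# compare image gradients rather than raw intensities, so a slowly
# varying multiplicative vignetting field largely cancels.
import numpy as np

def gradient_features(img):
    gy, gx = np.gradient(img.astype(float))
    return np.stack([gx, gy])

def feature_domain_loss(predicted, measured):
    """Mean squared error between gradient features of two images."""
    return float(np.mean((gradient_features(predicted)
                          - gradient_features(measured)) ** 2))
```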

    Bioluminescence Microscopy: Design and Applications

    Bioluminescence imaging by microscopy is performed using an ultra-low-light imaging camera. Although imaging sensors and cameras have improved greatly over time, these improvements have not yet reached the commercial systems available for microscopes. We previously optimized the optical system of a microscope for bioluminescence imaging using a short-focal-length imaging lens and evaluated this system with a conventional color charge-coupled device camera. Here, we describe the concept of bioluminescence microscope design using a short-focal-length imaging lens and some representative applications, including intracellular calcium imaging, imaging of clock gene promoter assays, and three-dimensional reconstruction of Drosophila larvae. This system facilitates the acquisition of bioluminescence images of single live cells using luciferase, much as fluorescence microscopy does with fluorescent proteins.
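    The rationale for the short-focal-length imaging lens can be summarized by a standard photometric relation (our gloss; the abstract does not state it): for an infinity-corrected microscope, per-pixel image irradiance scales as

```latex
% Standard photometry, our gloss, not taken from the abstract:
E_{\text{image}} \propto \left(\frac{\mathrm{NA}_{\text{obj}}}{M}\right)^{2},
\qquad M = \frac{f_{\text{imaging}}}{f_{\text{objective}}}
```

    so shortening the imaging-lens focal length lowers the total magnification M and concentrates more of the scarce bioluminescent photons onto each pixel.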

    Light Field compression and manipulation via residual convolutional neural network

    Light field (LF) imaging has gained significant attention due to its recent success in microscopy, three-dimensional (3D) display and rendering, and augmented and virtual reality. Postprocessing of an LF enables us to extract more information from a scene than traditional cameras can. However, the use of LFs is still a research novelty because of current limitations in capturing high-resolution LFs in all four of their dimensions. While researchers are actively improving capture methods, simulation makes it possible to explore the properties of a high-quality captured LF today. The immediate concerns following LF capture are storage and processing time. A rich LF occupies a large amount of memory, on the order of multiple gigabytes per LF. Also, most feature extraction techniques associated with LF postprocessing involve multi-dimensional integration that requires access to the whole LF and is usually time-consuming. Recent advancements in computer processing units have made it possible to simulate realistic images using physically based rendering software. In this work, a transformation function is first proposed for building a camera array (CA) that captures the same portion of the LF of a scene that a standard plenoptic camera (SPC) can acquire. Using this transformation, simulating an LF with the same properties as a plenoptic camera becomes trivial in any rendering software. Artificial intelligence (AI) and machine learning (ML) algorithms, when deployed on the new generation of GPUs, are faster than ever, and it is possible to build and train large networks with millions of trainable parameters to learn very complex features. Here, residual convolutional neural network (RCNN) structures are employed to build complex networks for compression and feature extraction from an LF. By combining state-of-the-art image compression with an RCNN, I have created a compression pipeline that achieves an average rate of 0.0047 bits per pixel (bpp). I show that, with a 1% compression-time cost and an 18x decompression speedup, the reconstructed LFs have a better structural similarity index (SSIM) and comparable peak signal-to-noise ratio (PSNR) relative to the state-of-the-art video compression techniques used to compress LFs. Finally, using an RCNN, I created a network called RefNet that extracts a group of 16 refocused images from a raw LF, trained with refocus parameters α = 0.125, 0.250, 0.375, ..., 2.0. I show that RefNet is 134x faster than the state-of-the-art refocusing technique and superior in color prediction to the state-of-the-art Fourier-slice and shift-and-sum methods.
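    For reference, the classical shift-and-sum refocusing that RefNet is benchmarked against can be sketched as follows; the (U, V, H, W) array layout and the integer shifts via np.roll are our simplifying assumptions (real implementations interpolate sub-pixel shifts).

```python
# Classical shift-and-sum refocusing, the baseline named in the abstract.
# lf has shape (U, V, H, W): a U x V grid of H x W sub-aperture views.
import numpy as np

def refocus(lf, alpha):
    """Shift each sub-aperture view in proportion to its angular offset
    by (1 - 1/alpha), then average all views to synthesize a focal plane."""
    U, V, H, W = lf.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round((u - U // 2) * (1 - 1 / alpha)))
            dx = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

    Sweeping alpha over 0.125, 0.250, ..., 2.0 would reproduce the 16-image focal stack that RefNet predicts in a single pass.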

    Development and Evaluation of Unmanned Aerial Vehicles for High Throughput Phenotyping of Field-based Wheat Trials.

    Growing demand for increased global yields is driving researchers to develop improved crops capable of securing higher yields in the face of significant challenges, including climate change and competition for resources. However, the ability to measure favourable physical characteristics (phenotypes) of key crops in response to these challenges is limited. For crop breeders and researchers, the inability to phenotype field-based experiments with sufficient precision, resolution and throughput is restricting meaningful advances in crop development. This PhD thesis presents work focused on the development and evaluation of Unmanned Aerial Vehicles (UAVs) in combination with remote sensing technologies as a solution for improved phenotyping of field-based crop experiments. Chapter 2 first presents a review of specific target phenotypic traits within the categories of crop morphology and spectral reflectance, together with a critical review of current standard measurement protocols. Focus then turns to UAVs and UAV-specific technologies suitable for crop phenotyping, including a critical evaluation of the strengths and current limitations of UAV methods and technologies, highlighting specific areas for improvement. Chapter 3 presents a published paper developing and evaluating Structure from Motion photogrammetry for accurate (R² ≥ 0.93, RMSE ≤ 0.077 m, bias ≤ -0.064 m) and temporally consistent 3D reconstructions of wheat plot heights. The superior throughput further facilitated measurement of crop growth rate through the season, whilst very high spatial resolution highlighted both inter- and intra-plot variability in crop height, something unachievable with traditional manual ruler methods. Chapter 4 presents published work developing and evaluating modified Commercial Off-The-Shelf (COTS) cameras for obtaining radiometrically calibrated imagery of canopy spectral reflectance. Specifically, development focussed on improving the use of these cameras under variable illumination conditions via camera exposure, vignetting, and irradiance corrections. Validation of UAV-derived Normalised Difference Vegetation Index (NDVI) from the COTS cameras against a ground spectrometer (0.88 ≤ R² ≤ 0.94) indicated successful calibration and correction of the cameras. The higher spatial resolution of the COTS cameras facilitated assessment of the impact of background soil reflectance on derived mean NDVI measures of experimental plots, highlighting the effect of incomplete canopy on derived indices. Chapter 5 utilises the methods and cameras developed in Chapter 4 to assess the impact of nitrogen fertiliser application on the formation and senescence dynamics of canopy traits over multiple growing seasons. Quantified changes in canopy reflectance, via NDVI, across three selected phases of the wheat growth cycle were used to assess the impact of nitrogen on these periods of growth. Results showed a consistent impact of zero nitrogen application on crop canopies within all three development phases. Statistically significant positive correlations were also found between quantified phases and harvest metrics (e.g. final yield), with the greatest correlations occurring in the second (Full Canopy) and third (Senescence) phases.
Chapter 6 focuses on evaluating the financial costs and throughput associated with UAVs, with specific focus on comparison to conventional methods in a real-world phenotyping scenario. A 'cost-throughput' analysis based on real-world experiments at Rothamsted Research provided a quantitative assessment demonstrating both the financial savings (£4.11 per plot) and the superior throughput (229% faster) obtained from implementing a UAV-based phenotyping strategy for long-term phenotyping of field-based experiments. Overall, the methods and tools developed in this PhD thesis demonstrate that UAVs combined with appropriate remote sensing tools can replicate and even surpass the precision, accuracy, cost and throughput of current strategies.
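    For readers unfamiliar with the index, the NDVI used throughout Chapters 4 and 5 reduces to a simple band ratio. The sketch below assumes radiometrically calibrated red and near-infrared reflectance images (variable names are illustrative; the thesis applies exposure, vignetting and irradiance corrections upstream).

```python
# NDVI from calibrated reflectance bands, plus a plot-level summary of the
# kind compared against the ground spectrometer in Chapter 4.
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalised Difference Vegetation Index, per pixel."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

def plot_mean_ndvi(nir, red, mask):
    """Mean NDVI over a boolean mask covering one experimental plot."""
    return float(np.mean(ndvi(nir, red)[mask]))
```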

    Defect and thickness inspection system for cast thin films using machine vision and full-field transmission densitometry

    Quick mass production of homogeneous thin film material is required in the paper, plastic, fabric, and thin film industries. Due to the high feed rates and small thicknesses, machine vision and other nondestructive evaluation techniques are used to ensure consistent, defect-free material by continuously assessing post-production quality. One of the fastest growing inspection areas is thin films 0.5-500 micrometers thick, which are used for semiconductor wafers, amorphous photovoltaics, optical films, plastics, and organic and inorganic membranes. As a demonstration application, a prototype roll-feed imaging system has been designed to inspect high-temperature polymer electrolyte membrane (PEM), used for fuel cells, after it is die cast onto a moving transparent substrate. The inspection system continuously detects thin film defects and classifies them with a neural network into categories of holes, bubbles, thinning, and gels, with a 1.2% false alarm rate, a 7.1% escape rate, and a classification accuracy of 96.1%. In slot die casting processes, defect types are indicative of an imbalance between the mass flow rate and web speed, so, based on the classified defects, the inspection system informs the operator of corrective adjustments to these manufacturing parameters. Thickness uniformity is also critical to membrane functionality, so a real-time, full-field transmission densitometer has been created to measure the bi-directional thickness profile of the semi-transparent PEM between 25 and 400 micrometers. The local thickness of the 75 mm x 100 mm imaged area is determined by converting the optical density of the sample to thickness with the Beer-Lambert law. The PEM extinction coefficient is determined to be 1.4 D/mm and the average thickness error is found to be 4.7%. Finally, the defect inspection and thickness profilometry systems are compiled into a specially designed graphical user interface for intuitive real-time operation and visualization.
    M.S. thesis. Committee Chair: Tequila Harris; Committee Members: Levent Degertekin, Wayne Dale
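    The thickness conversion follows directly from the Beer-Lambert law with the reported extinction coefficient of 1.4 D/mm; a minimal sketch (the I/I0 variable names and the clipping guard are our additions):

```python
# Thickness from transmission via the Beer-Lambert law, using the
# extinction coefficient reported in the abstract.
import numpy as np

EXTINCTION_D_PER_MM = 1.4  # optical density per mm, from the abstract

def thickness_mm(I, I0, extinction=EXTINCTION_D_PER_MM):
    """Per-pixel thickness: OD = log10(I0 / I), t = OD / extinction."""
    od = np.log10(np.clip(I0, 1e-12, None) / np.clip(I, 1e-12, None))
    return od / extinction
```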

    Efficient and Accurate Disparity Estimation from MLA-Based Plenoptic Cameras

    This manuscript focuses on the processing of images from microlens-array-based plenoptic cameras. These cameras capture the light field in a single shot, recording a greater amount of information than conventional cameras and enabling a whole new set of applications. However, the enhanced information introduces additional challenges and results in higher computational effort. For one, the image is composed of thousands of microlens images, making it an unusual case for standard image processing algorithms. Secondly, disparity information has to be estimated from those micro-images to create a conventional image and a three-dimensional representation. The work in this thesis is therefore devoted to analysing and proposing methodologies to deal with plenoptic images. A full framework for plenoptic cameras has been built, including the contributions described in this thesis: a blur-aware calibration method to model a plenoptic camera, an optimization method to accurately select the best microlens combinations, and an overview of the different types of plenoptic cameras and their representations. Datasets consisting of both real and synthetic images have been used to create a benchmark for different disparity estimation algorithms and to inspect the behaviour of disparity under different compression rates. A robust depth estimation approach has also been developed for light field microscopy and images of biological samples.
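    To give a flavor of the disparity estimation problem, the simplest baseline matches neighbouring micro-images along their shared baseline. The toy 1-D block matcher below is illustrative only, not the thesis' calibrated, blur-aware pipeline.

```python
# Toy integer disparity between two horizontally adjacent micro-images
# (illustrative baseline; micro-image extraction and calibration omitted).
import numpy as np

def disparity(ml_a, ml_b, max_d=5):
    """Integer disparity minimizing the mean absolute difference when
    ml_b is shifted against ml_a along the microlens baseline."""
    errs = [np.abs(ml_a - ml_b).mean()]
    for d in range(1, max_d + 1):
        errs.append(np.abs(ml_a[:, d:] - ml_b[:, :-d]).mean())
    return int(np.argmin(errs))
```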