
    Developing A Simulation Toolbox For Biomedical Plenoptic Imaging


    Efficient and Accurate Disparity Estimation from MLA-Based Plenoptic Cameras

    This manuscript focuses on the processing of images from microlens-array-based plenoptic cameras. These cameras capture the light field in a single shot, recording more information than conventional cameras and enabling a whole new set of applications. However, the enhanced information introduces additional challenges and results in higher computational effort. For one, the image is composed of thousands of micro-lens images, making it an unusual case for standard image processing algorithms. Secondly, disparity information has to be estimated from those micro-images to create a conventional image and a three-dimensional representation. The work in this thesis is therefore devoted to analysing plenoptic images and proposing methodologies to deal with them. A full framework for plenoptic cameras has been built, including the contributions described in this thesis: a blur-aware calibration method to model a plenoptic camera, an optimization method to accurately select the best microlens combination, and an overview of the different types of plenoptic cameras and their representations. Datasets consisting of both real and synthetic images have been used to create a benchmark for different disparity estimation algorithms and to inspect the behaviour of disparity under different compression rates. A robust depth estimation approach has also been developed for light field microscopy and images of biological samples.
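
The micro-image matching at the core of the disparity step described above can be illustrated with simple integer block matching. Everything here (function name, SSD cost, single-pair matching) is an illustrative assumption rather than the thesis's actual algorithm, which matches many microlens pairs and refines to sub-pixel precision:

```python
import numpy as np

def micro_image_disparity(ref, neigh, max_disp=8, patch=5):
    """Estimate one integer horizontal disparity between two neighbouring
    micro-lens images by minimising the sum of squared differences (SSD)
    of a central block over candidate shifts."""
    c = ref.shape[1] // 2
    block = ref[:, c - patch:c + patch + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        cand = neigh[:, c - patch + d:c + patch + 1 + d].astype(float)
        if cand.shape != block.shape:      # candidate ran off the image
            break
        cost = np.sum((block - cand) ** 2)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic check: the neighbouring micro-image sees the same texture
# shifted right by 3 pixels, so the estimated disparity should be 3.
rng = np.random.default_rng(0)
ref = rng.random((15, 31))
neigh = np.roll(ref, 3, axis=1)
print(micro_image_disparity(ref, neigh))   # 3
```

In a real pipeline this pairwise estimate would be computed across many microlens neighbourhoods and fused into a dense disparity map.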

    Light field image processing: an overview

    Light field imaging has emerged as a technology that allows us to capture richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision tasks such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
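
Post-capture refocusing, mentioned among the applications above, is classically realised by the shift-and-sum method over the 4D light field L(u, v, s, t): each sub-aperture view is translated in proportion to its angular offset and the views are averaged. A minimal sketch with integer-pixel shifts (real implementations interpolate sub-pixel shifts):

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-sum synthetic-aperture refocusing.

    lf    -- 4D light field with axes (u, v, s, t): angular then spatial.
    alpha -- refocus parameter; each view (u, v) is shifted by
             alpha * (its angular offset from the central view) before
             the views are averaged.  alpha = 0 keeps the captured
             focal plane.  Integer shifts via np.roll keep this short."""
    U, V, S, T = lf.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            ds = int(round(alpha * (u - cu)))
            dt = int(round(alpha * (v - cv)))
            out += np.roll(lf[u, v], (ds, dt), axis=(0, 1))
    return out / (U * V)

lf = np.random.default_rng(1).random((3, 3, 8, 8))
flat = refocus(lf, 0.0)    # equals the plain average of all views
```

With `alpha = 0` the shifts vanish and the result reduces to the average of the sub-aperture views, i.e. a photograph focused at the original plane.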

    Absolute depth using low-cost light field cameras

    Digital cameras are increasingly used for measurement tasks within engineering scenarios, often as part of metrology platforms. Existing cameras are well equipped to provide 2D information about the fields of view (FOV) they observe, the objects within the FOV, and the accompanying environments. But for some applications these 2D results are not sufficient, specifically applications that require Z-dimensional data (depth data) along with the X and Y dimensional data. New camera systems have previously been developed by integrating multiple cameras to provide 3D data, ranging from two-camera photogrammetry to multiple-camera stereo systems. Many earlier attempts to record 3D data on 2D sensors have been completed, and likewise many research groups around the world are currently working on camera technology, but from different perspectives: computer vision, algorithm development, metrology, etc. Plenoptic or lightfield camera technology was defined as a technique over 100 years ago but has remained dormant as a potential metrology instrument. Lightfield cameras utilize an additional Micro Lens Array (MLA) in front of the imaging sensor to create multiple viewpoints of the same scene and allow encoding of depth information. A small number of companies have explored the potential of lightfield cameras, but for the most part these have been aimed at domestic consumer photography, only ever recording scenes as relative-scale greyscale images. This research considers the potential for lightfield cameras to be used for world-scene metrology applications, specifically to record absolute coordinate data.
    Specific interest has been paid to a range of low-cost lightfield cameras to: understand the functional/behavioural characteristics of the optics; identify the potential need for optical and/or algorithm development; define sensitivity, repeatability, and accuracy characteristics and limiting thresholds of use; and allow quantified 3D absolute-scale coordinate data to be extracted from the images. The novel output of this work is: an analysis of lightfield camera system sensitivity, leading to the definition of Active Zones (linear data generation, good data) and Inactive Zones (non-linear data generation, poor data); the development of bespoke calibration algorithms that remove radial/tangential distortion from data captured using any MLA-based camera; and a camera-independent algorithm that allows the delivery of 3D coordinate data in absolute units within a well-defined measurable range for a given camera.
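
The radial/tangential distortion handled by the calibration algorithms is conventionally modelled with the Brown-Conrady polynomial. Below is a generic sketch of the forward model and its fixed-point inversion; the coefficient values and function names are illustrative and not taken from this work:

```python
import numpy as np

def distort(xn, yn, k1, k2, p1, p2):
    """Brown-Conrady model: map ideal normalised coordinates (xn, yn)
    to their radially/tangentially distorted positions."""
    r2 = xn ** 2 + yn ** 2
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn ** 2)
    yd = yn * radial + p1 * (r2 + 2 * yn ** 2) + 2 * p2 * xn * yn
    return xd, yd

def undistort(xd, yd, k1, k2, p1, p2, iters=20):
    """Invert the model by fixed-point iteration, the usual approach in
    calibration toolboxes (converges for moderate distortion)."""
    xn, yn = xd, yd
    for _ in range(iters):
        r2 = xn ** 2 + yn ** 2
        radial = 1 + k1 * r2 + k2 * r2 ** 2
        xn = (xd - (2 * p1 * xn * yn + p2 * (r2 + 2 * xn ** 2))) / radial
        yn = (yd - (p1 * (r2 + 2 * yn ** 2) + 2 * p2 * xn * yn)) / radial
    return xn, yn

# Round trip with small illustrative coefficients.
xd, yd = distort(0.3, -0.2, k1=0.1, k2=0.01, p1=0.001, p2=0.001)
xr, yr = undistort(xd, yd, k1=0.1, k2=0.01, p1=0.001, p2=0.001)
```

The thesis's bespoke MLA-aware calibration presumably estimates such coefficients (and more) per camera; the round trip above only demonstrates that the model is invertible once the coefficients are known.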

    Development and evaluation of depth estimation from plenoptic imaging for application in retinal imaging

    Plenoptic imaging is a technology by which a three-dimensional representation of the world can be captured in a single image. Previous research has focused on the technology itself, with very little focusing on its applications. This thesis presents an investigation into its potential application to the field of retinal imaging, with the aim of producing three-dimensional images of the retina at a lower cost than the current gold standard of retinal imaging, optical coherence tomography. Both theoretical and practical approaches have been utilised, by means of computational simulations and plenoptic imaging with a commercial camera. Significant steps have been taken towards the overall goal, forming a strong foundation from which future projects will benefit.

    Fusion of computed point clouds and integral-imaging concepts for full-parallax 3D display

    During the last century, various technologies for 3D image capture and visualization have been in the spotlight, owing both to their pioneering nature and to the aspiration to extend conventional 2D imaging technology to 3D scenes. Besides, thanks to advances in opto-electronic imaging technologies, the possibilities of capturing and transmitting 2D images in real time have progressed significantly, boosting the growth of 3D image capture, processing, transmission, and display techniques. Among the latter, integral-imaging technology has been considered one of the most promising for restoring real 3D scenes, through the use of a multi-view visualization system that provides observers with a sense of immersive depth. Many research groups and companies have investigated this technique with different approaches and for various purposes. In this work, we followed this trend, but proceeded through our own novel strategies and algorithms; in this sense, our approach is innovative when compared to conventional proposals. The main objective of our research is to develop techniques that allow recording and simulating a natural scene in 3D by using several cameras of different types and characteristics. We then compose a dense 3D scene from the computed 3D data using various methods and techniques. Finally, we provide a volumetric scene that is restored with great similarity to the original shape, through a comprehensive 3D monitor and/or display system. Our proposed integral-imaging monitor offers an immersive experience to multiple observers. In this thesis we address the challenges of integral-image production techniques based on computed 3D information, and we focus in particular on the implementation of a full-parallax 3D display system. We have also made progress in overcoming the limitations of the conventional integral-imaging technique.
    In addition, we have developed different refinement methodologies and restoration strategies for the composed depth information. Finally, we have applied a solution that significantly reduces the computation time associated with the repetitive calculation phase in the generation of an integral image. All these results are presented through the corresponding images and the proposed display experiments.
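
The step of turning computed 3D data into an integral image can be illustrated with an ideal pinhole-array model, in which each lenslet projects the point cloud through its own centre onto the sensor behind it. This is a toy sketch with hypothetical parameters, ignoring occlusion and lens aberrations; it is not the generation pipeline of this thesis:

```python
import numpy as np

def integral_image(points, pitch=1.0, gap=1.0, n_lens=5, px=11):
    """Synthesise an integral image from a 3D point cloud with an ideal
    pinhole-array model: every lenslet forms its own elemental image by
    central projection through the lenslet centre.

    points -- iterable of (x, y, z) with z > 0 in front of the array;
    pitch  -- lenslet pitch; gap -- lenslet-to-sensor distance;
    n_lens -- lenslets per side; px -- pixels per elemental image side."""
    img = np.zeros((n_lens * px, n_lens * px))
    half = (n_lens - 1) / 2.0
    pixel = pitch / px                       # sensor pixel size
    for i in range(n_lens):                  # lenslet rows
        for j in range(n_lens):              # lenslet columns
            cx, cy = (j - half) * pitch, (i - half) * pitch
            for x, y, z in points:
                # Central projection through the lenslet centre onto the
                # sensor plane at distance `gap` behind the array.
                u = cx - gap * (x - cx) / z
                v = cy - gap * (y - cy) / z
                col = int(round((u - cx) / pixel)) + px // 2
                row = int(round((v - cy) / pixel)) + px // 2
                if 0 <= row < px and 0 <= col < px:
                    img[i * px + row, j * px + col] += 1.0
    return img

# A point on the optical axis lands on the centre pixel of the central
# elemental image and at slightly offset pixels in neighbouring ones.
img = integral_image([(0.0, 0.0, 10.0)])
```

The per-lenslet offsets of the same point are exactly the parallax a full-parallax display later replays to the observer.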

    3D Volumetric Reconstruction for Light-Field SPECT

    Preclinical research on single-photon emission computed tomography (SPECT) imaging is now well acknowledged for its critical role. It is fundamental for functional imaging and is a well-researched area of nuclear medicine emission tomography. Numerous efforts have been made to provide optimized SPECT collimator and detector designs. However, these approaches suffer from limited sensitivity and resolution, demanding the development of efficient reconstruction algorithms. Moreover, due to the image deterioration induced by the non-stationary collimator-detector response and the single-photon-emitting nature of SPECT, it is difficult to quantify the 3D radiopharmaceutical distribution within the patient. This dissertation's primary aim is to design and develop a complete computational framework for the newly proposed L-SPECT scan procedure, from image acquisition to image reconstruction. Using this framework, I solve several challenging problems related to implementing a dedicated novel 3D L-SPECT image reconstruction algorithm. In particular, a volumetric reconstruction algorithm for the L-SPECT system is developed by considering the system configurations. An in-depth analysis of the SPECT imaging system based on the light field concept, using the micro-pinhole range collimator, is also presented in this thesis. Moreover, I evaluate the performance of the developed reconstruction algorithms under various imaging circumstances in terms of image quality, computational complexity, and resolution. A Monte Carlo simulation environment for L-SPECT was developed by modelling the properties of the SPECT imaging setup. By examining the limitations of the proposed L-SPECT, an improved collimator-detector geometry for the micro-pinhole arrays is introduced in this thesis as one of the main contributions. The modular L-SPECT, with the detector heads in a partial-ring geometry, achieved higher sensitivity and resolution than the planar L-SPECT.
    The modular L-SPECT was further improved by shifting the centre of the scanning detectors to eliminate artifacts in the reconstructed images. A dedicated reconstruction algorithm for the modular L-SPECT was developed as a proof of concept. In SPECT reconstruction, identifying uncertainty information helps to mitigate the limitations of existing reconstruction algorithms. A critical contribution of this thesis is the development of an image reconstruction algorithm based on Bayesian probabilistic programming for SPECT and L-SPECT. A NUTS-based MCMC algorithm is used for the probabilistic-programming-based reconstruction. The uncertainty associated with the radiation measurement is identified as a distribution from the posterior samples generated using the MCMC algorithm. The performance of the NUTS algorithm was improved by using reverse-mode automatic differentiation and distributed programming. An automatic-differentiation variational inference based SPECT reconstruction algorithm was developed to reduce the computational cost of NUTS-based reconstruction and uncertainty analysis. Further in this thesis, the L-SPECT simulations are calibrated by comparison with GATE simulations, which are the gold standard in this field. The projection results of the MATLAB-based simulations are comparable with GATE simulations. The system performance of the proposed configurations was investigated and contrasted against existing SPECT modalities and systems, such as LEHR and Inveon SPECT, respectively. The performance analysis revealed that the L-SPECT system is able to achieve improved sensitivity and a larger field of view compared to existing systems. The essential characteristics of this L-SPECT system were assessed from the reconstructed images with pinhole radii of 0.1 mm and 0.05 mm. In addition, the system sensitivity, spatial resolution, and image quality are appraised from the 3D reconstructed images.
    The maximum achieved system sensitivity was 1000 cps/MBq using arrays with a pinhole radius of 0.1 mm at 1 mm pitch, while the highest resolution was obtained using arrays with a 0.05 mm pinhole radius and 3 mm pitch. The designed L-SPECT with different configurations and the developed 3D reconstruction algorithms yielded superior image quality compared with LEHR reconstructions.
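
For context, the classical point-estimate baseline that Bayesian SPECT reconstructions are usually compared against is maximum-likelihood expectation-maximisation (MLEM) for Poisson count data. The sketch below is that generic baseline, not the thesis's NUTS/MCMC or variational algorithms, and the tiny system matrix is purely illustrative:

```python
import numpy as np

def mlem(A, counts, n_iter=100):
    """MLEM iteration for emission tomography.

    A      -- (n_detectors, n_voxels) system matrix (forward model)
    counts -- measured projection counts, length n_detectors
    Each iteration forward-projects the current activity estimate,
    compares it with the measured counts, and back-projects the ratio."""
    x = np.ones(A.shape[1])                 # flat initial activity
    sens = A.sum(axis=0)                    # per-voxel sensitivity
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        ratio = counts / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Noiseless two-voxel toy problem: MLEM recovers the true activity.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
counts = A @ np.array([2.0, 5.0])
x_hat = mlem(A, counts)
```

A probabilistic-programming reconstruction replaces this single point estimate with posterior samples, which is what makes the uncertainty quantification described above possible.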

    Revealing the Invisible: On the Extraction of Latent Information from Generalized Image Data

    The desire to reveal the invisible in order to explain the world around us has been a source of impetus for technological and scientific progress throughout human history. Many of the phenomena that directly affect us cannot be sufficiently explained based on observations using our primary senses alone. Often this is because their originating cause is either too small, too far away, or otherwise obstructed; to put it in other words, it is invisible to us. Without careful observation and experimentation, our models of the world remain inaccurate, and research has to be conducted in order to improve our understanding of even the most basic effects. In this thesis, we are going to present our solutions to three challenging problems in visual computing, where a surprising amount of information is hidden in generalized image data and cannot easily be extracted by human observation or existing methods. We are able to extract the latent information using non-linear and discrete optimization methods based on physically motivated models and computer graphics methodology, such as ray tracing, real-time transient rendering, and image-based rendering.

    OPTICAL NAVIGATION TECHNIQUES FOR MINIMALLY INVASIVE ROBOTIC SURGERIES

    Minimally invasive surgery (MIS) involves small incisions in a patient's body, leading to reduced medical risk and shorter hospital stays compared to open surgeries. For these reasons, MIS has experienced increased demand across different types of surgery. MIS sometimes utilizes robotic instruments to complement human surgical manipulation and achieve higher precision than can be obtained with traditional surgeries. Modern surgical robots operate within a master-slave paradigm, in which a robotic slave replicates the control gestures emanating from a master tool manipulated by a human surgeon. Presently, certain human errors due to hand tremors or unintended acts are moderately compensated for at the tool manipulation console. However, errors due to robotic vision and the display presented to the surgeon are not equivalently addressed. Current vision capabilities within the master-slave robotic paradigm rely on perceptual vision through a limited binocular view, which considerably impacts the hand-eye coordination of the surgeon and provides no quantitative geometric localization for robot targeting. These limitations lead to unexpected surgical outcomes and longer operating times compared to open surgery. To improve vision capabilities within an endoscopic setting, we designed and built several image-guided robotic systems, which achieved sub-millimeter accuracy. With this improved accuracy, we developed a corresponding surgical planning method for robotic automation. As a demonstration, we prototyped an autonomous electro-surgical robot that employed quantitative 3D structural reconstruction with near-infrared registration and tissue classification methods to localize optimal targeting and suturing points for minimally invasive surgery.
    Results from validating the cooperative control and the registration of the vision system in a series of in vivo and in vitro experiments are presented, and the potential enhancement to autonomous robotic minimally invasive surgery from utilizing our technique is discussed.
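
Registration between a vision system and a robot relies, at bottom, on aligning corresponding 3D point sets; the standard closed-form building block for that is least-squares rigid registration (Kabsch/Umeyama). A generic sketch, not this work's near-infrared marker pipeline:

```python
import numpy as np

def rigid_register(P, Q):
    """Find the rotation R and translation t minimising ||R P + t - Q||
    over corresponding point sets P, Q of shape (N, 3) (Kabsch method)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known rigid transform from synthetic correspondences.
rng = np.random.default_rng(1)
P = rng.random((10, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
R, t = rigid_register(P, P @ R_true.T + t_true)
```

In a surgical setting the correspondences would come from fiducials or NIR markers visible to both the camera and the robot's coordinate frame, and the residual of this fit is one way to quantify the sub-millimeter accuracy claimed above.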