119 research outputs found

    Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields

    In this work, spatio-spectrally coded multispectral light fields, as captured by a light field camera with a spectrally coded microlens array, are investigated. For the reconstruction of the coded light fields, two methods are developed: one based on the principles of compressed sensing and one deep learning approach. Using novel synthetic as well as real-world datasets, the proposed reconstruction approaches are evaluated in detail.

    A Multispectral Light Field Dataset and Framework for Light Field Deep Learning

    Deep learning has undoubtedly had a huge impact on the computer vision community in recent years. In light field imaging, machine learning-based applications have significantly outperformed their conventional counterparts. Furthermore, multi- and hyperspectral light fields have shown promising results in light field-related applications such as disparity or shape estimation. Yet, a multispectral light field dataset, enabling data-driven approaches, is missing. Therefore, we propose a new synthetic multispectral light field dataset with depth and disparity ground truth. The dataset consists of a training, validation, and test dataset, containing light fields of randomly generated scenes, as well as a challenge dataset rendered from hand-crafted scenes enabling detailed performance assessment. Additionally, we present a Python framework for light field deep learning. The goal of this framework is to ensure reproducibility of light field deep learning research and to provide a unified platform to accelerate the development of new architectures. The dataset is made available at dx.doi.org/10.21227/y90t-xk47. The framework is maintained at gitlab.com/iiit-public/lfcnn.
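A multispectral light field like the one this dataset provides is naturally stored as a 5D tensor over two angular, two spatial, and one spectral dimension. The sketch below illustrates that indexing convention; the axis order and shapes are illustrative assumptions, not the actual layout used by the LFCNN framework.

```python
import numpy as np

# Illustrative 5D light field tensor:
# (u, v, s, t, ch) = (angular row, angular col, spatial row, spatial col, spectral channel).
U, V, S, T, CH = 9, 9, 32, 32, 13
lf = np.zeros((U, V, S, T, CH), dtype=np.float32)

# The central view is the sub-aperture image at the middle angular index.
central_view = lf[U // 2, V // 2]     # shape (S, T, CH)

# A single sub-aperture image for one spectral channel:
c = 0
sub_aperture = lf[0, 0, :, :, c]      # shape (S, T)
```

Disparity ground truth, as provided by the dataset, would then be a per-pixel map of shape (S, T) aligned with the central view.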


    Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields

    In this work, spatio-spectrally coded multispectral light fields, as captured by a light field camera with a spectrally coded microlens array, are investigated. Two methods for reconstructing the coded light fields are developed and evaluated in detail. First, a full reconstruction of the spectral light field based on the principles of compressed sensing is developed. To represent the spectral light fields sparsely, 5D DCT bases as well as a dictionary learning approach are investigated. The conventional vectorized dictionary learning approach is generalized to a tensor notation in order to factorize the light field dictionary tensorially. Due to the reduced number of parameters to be learned, this approach enables larger effective atom sizes. Second, a deep-learning-based reconstruction of the spectral central view and the corresponding disparity map from the coded light field is developed, estimating the desired information directly from the coded measurements. Different strategies for the corresponding multi-task training are compared. To further improve the reconstruction quality, a novel method for incorporating auxiliary losses based on their respective normalized gradient similarity is developed and shown to outperform previous adaptive methods. To train and evaluate the different reconstruction approaches, two datasets are created. First, a large synthetic spectral light field dataset with available disparity ground truth is created using a ray tracer. This dataset, containing about 100k spectral light fields with corresponding disparity, is split into a training, validation, and test dataset. To further assess quality, seven hand-crafted scenes, so-called dataset challenges, are created.
Finally, a real spectral light field dataset is captured with a custom-built spectral light field reference camera. The radiometric and geometric calibration of the camera is discussed in detail. Using the new datasets, the proposed reconstruction approaches are evaluated in detail. Different coding masks are investigated: random, regular, as well as end-to-end optimized coding masks generated with a novel differentiable fractal generation. Furthermore, additional investigations are carried out, for example regarding the dependence on noise, angular resolution, or depth. Overall, the results are convincing and show a high reconstruction quality. The deep-learning-based reconstruction, especially when trained with adaptive multi-task and auxiliary loss strategies, outperforms the compressed-sensing-based reconstruction with subsequent state-of-the-art disparity estimation.
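The abstract above weights auxiliary losses by their normalized gradient similarity to the main task. One common reading of such a scheme, sketched here as an assumption rather than the thesis' exact formulation, is to scale each auxiliary loss by the cosine similarity between its normalized gradient and the main-task gradient, clipped at zero:

```python
import numpy as np

def aux_loss_weight(grad_main, grad_aux):
    """Weight an auxiliary loss by the cosine similarity of its
    normalized gradient with the main-task gradient.

    Schematic interpretation of gradient-similarity-based
    auxiliary-loss weighting; not the thesis' exact method.
    """
    g_m = grad_main / (np.linalg.norm(grad_main) + 1e-12)
    g_a = grad_aux / (np.linalg.norm(grad_aux) + 1e-12)
    cos = float(np.dot(g_m, g_a))
    # Only let the auxiliary task contribute when its gradient
    # points in a direction that also helps the main task.
    return max(0.0, cos)
```

Under this reading, an auxiliary task whose gradient opposes the main task is silenced (weight 0), while a perfectly aligned one contributes at full strength (weight 1).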

    Neural Spectro-polarimetric Fields

    Modeling the spatial radiance distribution of light rays in a scene has been extensively explored for applications including view synthesis. Spectrum and polarization, the wave properties of light, are often neglected due to their integration into three RGB spectral bands and their non-perceptibility to human vision. Despite this, these properties encompass substantial material and geometric information about a scene. In this work, we propose to model spectro-polarimetric fields, the spatial Stokes-vector distribution of any light ray at an arbitrary wavelength. We present Neural Spectro-polarimetric Fields (NeSpoF), a neural representation that models the physically-valid Stokes vector at given continuous variables of position, direction, and wavelength. NeSpoF manages inherently noisy raw measurements, showcases memory efficiency, and preserves physically vital signals, factors that are crucial for representing the high-dimensional signal of a spectro-polarimetric field. To validate NeSpoF, we introduce the first multi-view hyperspectral-polarimetric image dataset, comprising both synthetic and real-world scenes. These were captured using our compact hyperspectral-polarimetric imaging system, which has been calibrated for robustness against system imperfections. We demonstrate the capabilities of NeSpoF on diverse scenes.
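The "physically-valid Stokes vector" mentioned above obeys a standard constraint: the polarized intensity cannot exceed the total intensity, i.e. S0 >= sqrt(S1^2 + S2^2 + S3^2), so the degree of polarization stays at most 1. A minimal validity check (the function name and tolerance are illustrative, not part of NeSpoF):

```python
import numpy as np

def is_physical_stokes(s, tol=1e-9):
    """Check that a Stokes vector (S0, S1, S2, S3) is physically valid:
    non-negative total intensity S0, and polarized intensity not
    exceeding it, S0 >= sqrt(S1^2 + S2^2 + S3^2)."""
    s0, s1, s2, s3 = s
    return s0 >= -tol and s0 + tol >= np.sqrt(s1**2 + s2**2 + s3**2)
```

For example, (1, 0, 0, 0) is unpolarized light and (1, 1, 0, 0) is fully linearly polarized light; both are valid, whereas a vector whose polarized components sum to more than S0 is not.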

    A novel segmentation approach for crop modeling using a plenoptic light-field camera: going from 2D to 3D

    Crop phenotyping is a desirable task in crop characterization since it allows the farmer to make early decisions and therefore be more productive. This research is motivated by the generation of tools for rice crop phenotyping within the OMICAS research ecosystem framework. It proposes implementing image processing technologies and artificial intelligence techniques through a multisensory approach with multispectral information. Three main stages are covered: (i) a segmentation approach that allows identifying the biological material associated with plants, whose main contribution is the GFKuts segmentation approach; (ii) a strategy that enables sensory fusion between three different cameras (a 3D camera, an infrared multispectral camera, and a thermal multispectral camera), developed through a complex object detection approach; and (iii) the characterization of a 4D model that generates topological relationships from the point cloud information, whose main contribution is the improvement of the point cloud captured by the 3D sensor; in this sense, this stage improves the acquisition of any 3D sensor. This research presents a development that receives information from multiple sensors, especially infrared 2D, and generates a single 4D model in geometric space [X, Y, Z]. This model integrates the color information of 5 channels and topological information, relating the points in space. Overall, the research allows the integration of the 3D information from any sensor technology and the multispectral channels from any multispectral camera, to generate direct non-invasive measurements on the plant.

    A Vignetting Model for Light Field Cameras with an Application to Light Field Microscopy

    In standard photography, vignetting is considered mainly as a radiometric effect because it results in a darkening of the edges of the captured image. In this paper, we demonstrate that for light field cameras, vignetting is more than just a radiometric effect. It modifies the properties of the acquired light field and renders most of the calibration procedures from the literature inadequate. We address the problem by describing a model- and camera-agnostic method to evaluate vignetting in phase space. This enables the synthesis of vignetted pixel values that, applied to a range of pixels, yield images corresponding to the white images that are customarily recorded for calibrating light field cameras. We show that the commonly assumed reference points for microlens-based systems are incorrect approximations to the true optical reference, i.e. the image of the center of the exit pupil. We introduce a novel calibration procedure to determine this optically correct reference point from experimental white images. We describe the changes vignetting imposes on the light field sampling patterns and, therefore, the optical properties of the corresponding virtual cameras using the ECA model [1], and apply these insights to a custom-built light field microscope.

    Multi-Spectral Imaging of Vegetation with a Diffractive Plenoptic Camera

    Snapshot multi-spectral sensors allow for detecting objects based on their spectra in remote sensing applications in air or space. Making these sensors more compact and lightweight allows drones to dwell longer on targets and reduces transport costs for satellites. To address this need, I designed and built a diffractive plenoptic camera (DPC) which utilized a Fresnel zone plate and a light field camera in order to detect vegetation via a normalized difference vegetation index (NDVI). This thesis derives design equations relating DPC system parameters to its expected performance and evaluates its multi-spectral performance. The experimental results showed good agreement with the design equations for spectral range and FOV, but the measured spectral resolution was worse than the expected 6.06 nm. In testing the spectral resolution of the DPC, it was found that near the design wavelength, the DPC had a spectral resolution of 25 nm. As the algorithm refocused further from the design wavelength, the spectral resolution broadened to 30 nm. In order to test multi-spectral performance, three scenes containing leaves in various states of health were captured by the DPC and an NDVI was calculated for each one. The DPC was able to identify vegetation in all scenes, but at reduced NDVI values in comparison to the data measured by a spectrometer. Additionally, background noise contributed by the zeroth order of diffraction, and multiple wavelengths coming from the same spatial location, was found to reduce the signal of vegetation. Optical aberrations were also found to create artifacts near the edges of the final refocused image. The future of this work includes using a different diffractive optic design to achieve higher first-order efficiency, deriving an aberrated sampling pattern, and using an intermediate-image diffractive plenoptic camera to reduce the zeroth-order effects of the FZP.
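The NDVI used in the abstract above is a standard two-band ratio: NDVI = (NIR - Red) / (NIR + Red), bounded in [-1, 1], with healthy vegetation typically well above zero. A minimal sketch (the small epsilon guarding against division by zero is an implementation choice, not from the thesis):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized difference vegetation index:
    NDVI = (NIR - Red) / (NIR + Red), in [-1, 1].
    Healthy vegetation reflects strongly in the near-infrared,
    pushing NDVI well above zero; soil and dead matter sit near zero.
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)
```

Applied per pixel to the refocused NIR and red images from the DPC, this yields the vegetation maps the thesis evaluates.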