
    Multispectral Image Correction for Geometric Measurements

    Multispectral and hyperspectral imaging technologies enable new possibilities in industrial measurement applications. Building on knowledge from remote sensing, many investigations have been carried out over recent decades. Nevertheless, the demands of remote sensing and of technical multispectral image processing are quite different. In precise geometric measurement, the image data of the different spectral channels must be corrected against each other with high accuracy, typically in the micron range; otherwise, the absolute geometric error of edge detection can become very large. The state of the art in industrial imaging and detection of geometric features is calibration of only one imaging channel. In this paper, studies on a twelve-channel multispectral imager are presented. For the applied filter-wheel system, investigations were made into the correction of lens aberrations as well as the defocus problem. A calibrated high-precision geometric test chart was used to calibrate the system geometrically. To correct the geometric errors in the image plane, a special moving-filter approach based on linear convolution was developed. For every channel, a calibration matrix was calculated and applied to the image system output.
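The moving-filter idea can be illustrated with a minimal sketch (hypothetical function names, not the authors' code): a sub-pixel shift of one spectral channel expressed as a linear convolution with a 2×2 bilinear interpolation kernel, with the integer part of the shift handled separately.

```python
import numpy as np

def subpixel_shift_kernel(dx, dy):
    """2x2 bilinear interpolation kernel realising a sub-pixel
    offset (dx, dy), with 0 <= dx, dy < 1, as a linear convolution."""
    return np.array([[(1 - dy) * (1 - dx), (1 - dy) * dx],
                     [dy * (1 - dx),       dy * dx]])

def correct_channel(img, dx, dy):
    """Resample one spectral channel at sub-pixel offset (dx, dy).
    Integer parts are applied with np.roll; the fractional parts use
    the 2x2 kernel above. Edge row/column are left untouched."""
    ix, iy = int(np.floor(dx)), int(np.floor(dy))
    fx, fy = dx - ix, dy - iy
    shifted = np.roll(np.roll(img, iy, axis=0), ix, axis=1)
    k = subpixel_shift_kernel(fx, fy)
    out = np.zeros_like(shifted, dtype=float)
    h, w = img.shape
    # direct 2x2 convolution over the valid interior region
    for (r, c), weight in np.ndenumerate(k):
        out[:-1, :-1] += weight * shifted[r:r + h - 1, c:c + w - 1]
    return out
```

On a horizontal intensity ramp, a shift of half a pixel yields values halfway between neighbouring columns, which is the behaviour one would expect from such a convolution-based correction.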

    Surgical Guidance for Removal of Cholesteatoma Using a Multispectral 3D-Endoscope

    We develop a stereo-multispectral endoscopic prototype in which a filter wheel is used for surgical guidance to remove cholesteatoma tissue in the middle ear. Cholesteatoma is a destructive, proliferating tissue. The only treatment for this disease is surgery. Removal is a very demanding task, even for experienced surgeons. It is very difficult to distinguish between bone and cholesteatoma. In addition, it can recur if not all tissue particles of the cholesteatoma are removed, which leads to undesirable follow-up operations. Therefore, we propose an image-based method that combines multispectral tissue classification and 3D reconstruction to identify all parts of the removed tissue and determine their metric dimensions intraoperatively. The designed multispectral filter-wheel 3D-endoscope prototype can switch between narrow-band spectral and broad-band white illumination, and is technically evaluated in terms of optical system properties. Further, it is tested and evaluated on three patients. The wavelengths 400 nm and 420 nm are identified as most suitable for the differentiation task. The stereoscopic image acquisition allows accurate 3D surface reconstruction of the enhanced image information. The first results are promising, as the cholesteatoma can be easily highlighted, correctly identified, and visualized as a true-to-scale 3D model showing the patient-specific anatomy. Peer reviewed.
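The abstract identifies 400 nm and 420 nm as the most suitable bands for the differentiation task. Purely as an illustration of two-band differentiation (this is not the authors' classifier; the names and the threshold are invented), one could threshold a normalised band ratio:

```python
import numpy as np

def band_ratio_map(band_400, band_420, eps=1e-6):
    """Normalised difference of two narrow-band images; values near
    +1 or -1 indicate strong relative reflectance differences."""
    b1 = band_400.astype(float)
    b2 = band_420.astype(float)
    return (b1 - b2) / (b1 + b2 + eps)

def classify(band_400, band_420, threshold=0.1):
    """Binary tissue mask from the ratio map (illustrative threshold)."""
    return band_ratio_map(band_400, band_420) > threshold
```

In practice, intraoperative classification would involve calibration, noise handling, and a trained classifier rather than a fixed threshold; the sketch only shows why two well-chosen narrow bands can suffice to highlight a tissue type.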

    Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields

    In this work, spatio-spectrally coded multispectral light fields, as taken by a light field camera with a spectrally coded microlens array, are investigated. For the reconstruction of the coded light fields, two methods, one based on the principles of compressed sensing and one deep learning approach, are developed. Using novel synthetic as well as real-world datasets, the proposed reconstruction approaches are evaluated in detail.

    Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields

    In this work, spectrally coded multispectral light fields, as captured by a light field camera with a spectrally coded microlens array, are investigated. For the reconstruction of the coded light fields, two methods are developed: one based on the principles of compressed sensing and one deep learning approach. Using novel synthetic and real-world datasets, the proposed reconstruction approaches are evaluated in detail.

    Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields

    In this work, spectrally coded multispectral light fields, as captured by a light field camera with a spectrally coded microlens array, are investigated. For the reconstruction of the coded light fields, two methods are developed and evaluated in detail. First, a full reconstruction of the spectral light field is developed, based on the principles of compressed sensing. To represent the spectral light fields sparsely, 5D DCT bases as well as a dictionary learning approach are investigated. The conventional vectorised dictionary learning approach is generalised to a tensor notation in order to factorise the light field dictionary tensorially. Due to the reduced number of parameters to be learned, this approach enables larger effective atom sizes. Second, a deep-learning-based reconstruction of the spectral central view and the corresponding disparity map from the coded light field is developed, estimating the desired information directly from the coded measurements. Different strategies for the corresponding multi-task training are compared. To further improve the reconstruction quality, a novel method for incorporating auxiliary loss functions based on their respective normalised gradient similarity is developed and shown to outperform previous adaptive methods. To train and evaluate the different reconstruction approaches, two datasets are created. First, a large synthetic spectral light field dataset with available ground-truth disparity is created using a ray tracer. This dataset, containing about 100k spectral light fields with corresponding disparity, is split into a training, validation, and test set. To further assess quality, seven hand-crafted scenes, so-called dataset challenges, are created. Finally, a real spectral light field dataset is captured with a custom-built spectral light field reference camera. The radiometric and geometric calibration of the camera is discussed in detail. Using the new datasets, the proposed reconstruction approaches are evaluated in detail. Different coding masks are investigated -- random, regular, as well as end-to-end optimised coding masks generated with a novel differentiable fractal generation. Moreover, further investigations are carried out, for example regarding the dependence on noise, angular resolution, or depth. Overall, the results are convincing and show a high reconstruction quality. The deep-learning-based reconstruction, especially when trained with adaptive multi-task and auxiliary loss strategies, outperforms the compressed-sensing-based reconstruction with subsequent state-of-the-art disparity estimation.
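The idea of weighting auxiliary losses by normalised gradient similarity can be sketched as follows (a simplified stand-in, not the thesis implementation; the clipping of negative similarities to zero is an assumption of this sketch): each auxiliary loss contributes in proportion to the cosine similarity between its gradient with respect to the shared parameters and the main-loss gradient, so auxiliary tasks that agree with the main objective are emphasised and conflicting ones suppressed.

```python
import numpy as np

def aux_loss_weights(grad_main, aux_grads, eps=1e-12):
    """Weight each auxiliary loss by the cosine similarity of its
    gradient (w.r.t. shared parameters) with the main-loss gradient.
    Negative similarities are clipped to zero (an assumption here)."""
    g = grad_main / (np.linalg.norm(grad_main) + eps)
    weights = []
    for ga in aux_grads:
        ga_n = ga / (np.linalg.norm(ga) + eps)
        weights.append(max(0.0, float(g @ ga_n)))
    return weights

def combined_gradient(grad_main, aux_grads):
    """Total descent direction: main gradient plus
    similarity-weighted auxiliary gradients."""
    w = aux_loss_weights(grad_main, aux_grads)
    return grad_main + sum(wi * gi for wi, gi in zip(w, aux_grads))
```

An aligned auxiliary gradient receives full weight, an orthogonal one contributes nothing, and an opposing one is suppressed entirely, which is the qualitative behaviour an adaptive multi-task scheme of this kind aims for.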


    Autofocus for multispectral camera using focus symmetry

    Author names used in this publication: Si-Jie Shao, John H. Xin. 2011-2012 > Academic research: refereed > Publication in refereed journal. Version of Record. Published.

    Earth imaging with microsatellites: An investigation, design, implementation and in-orbit demonstration of electronic imaging systems for earth observation on-board low-cost microsatellites.

    This research programme has studied the possibilities and difficulties of using 50 kg microsatellites to perform remote imaging of the Earth. The design constraints of these missions are quite different from those encountered in larger, conventional spacecraft. While the main attractions of microsatellites are low cost and fast response times, they impose the following key limitations: payload mass under 5 kg; continuous payload power under 5 W, with peak power up to 15 W; narrow communications bandwidths (9.6/38.4 kbps); attitude control only to within 5°; and no moving mechanisms. The most significant factor is the limited attitude stability. Without sub-degree attitude control, conventional scanning imaging systems cannot preserve scene geometry and are therefore poorly suited to current microsatellite capabilities. The foremost conclusion of this thesis is that electronic cameras, which capture entire scenes in a single operation, must be used to overcome the effects of the satellite's motion. The potential applications of electronic cameras, including microsatellite remote sensing, have expanded rapidly with the recent availability of high-sensitivity field-array CCD (charge-coupled device) image sensors. The research programme has established techniques and architectures that enable CCD sensors, cameras, and entire imaging systems to fulfil scientific and commercial remote sensing tasks despite the difficult conditions on microsatellites. The author has refined these ideas by designing, building, and operating in orbit five generations of electronic cameras. The major objective of meteorological-scale imaging was conclusively demonstrated by the Earth imaging camera flown on the UoSAT-5 spacecraft in 1991. Improved cameras have since been carried by the KITSAT-1 (1992) and PoSAT-1 (1993) microsatellites. PoSAT-1 also flies a medium-resolution camera (200 metres) which, despite complete success, has highlighted certain limitations of microsatellites for high-resolution remote sensing. A reworked and extensively modularised design has been developed for the four camera systems deployed on the FASat-Alfa mission (1995). Based on the success of these missions, this thesis presents many recommendations for the design of microsatellite imaging systems. The novelty of this research programme lies in the principle of designing practical camera systems to fit an existing, highly restrictive satellite platform, rather than conceiving a fictitious small satellite to support a high-performance scanning imager. This pragmatic approach has resulted in the first incontestable demonstrations of the feasibility of remote sensing of the Earth from inexpensive microsatellites.
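The cited downlink rates make the data-volume arithmetic worth spelling out. The image dimensions below are assumed for illustration, not taken from the thesis:

```python
def downlink_seconds(width_px, height_px, bits_per_px, link_bps):
    """Time to transmit one raw, uncompressed image over a given
    downlink, ignoring protocol overhead and pass-duration limits."""
    return width_px * height_px * bits_per_px / link_bps

# A hypothetical 1024 x 1024, 8-bit image over the two cited rates:
t_slow = downlink_seconds(1024, 1024, 8, 9_600)   # roughly 15 minutes
t_fast = downlink_seconds(1024, 1024, 8, 38_400)  # roughly 3.6 minutes
```

Since a typical low-Earth-orbit ground-station pass lasts only on the order of ten minutes, even a single full-frame image can strain the slower link, which motivates the emphasis on compression and careful imaging-system design in the thesis.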

    Pre-Flight Calibration of the Mars 2020 Rover Mastcam Zoom (Mastcam-Z) Multispectral, Stereoscopic Imager

    The NASA Perseverance rover Mast Camera Zoom (Mastcam-Z) system is a pair of zoomable, focusable, multispectral, and color charge-coupled device (CCD) cameras mounted on top of a 1.7 m Remote Sensing Mast, along with associated electronics and two calibration targets. The cameras contain identical optical assemblies that can range in focal length from 26 mm (25.5°×19.1° FOV) to 110 mm (6.2°×4.2° FOV) and will acquire data at pixel scales of 148-540 ÎŒm at a range of 2 m and 7.4-27 cm at 1 km. The cameras are mounted on the rover's mast with a stereo baseline of 24.3±0.1 cm and a toe-in angle of 1.17±0.03° (per camera). Each camera uses a Kodak KAI-2020 CCD with 1600×1200 active pixels and an 8-position filter wheel that contains an IR-cutoff filter for color imaging through the detectors' Bayer-pattern filters, a neutral density (ND) solar filter for imaging the sun, and 6 narrow-band geology filters (16 filters in total). An associated Digital Electronics Assembly provides command and data interfaces to the rover, 11-to-8 bit companding, and JPEG compression capabilities. Herein, we describe pre-flight calibration of the Mastcam-Z instrument and characterize its radiometric and geometric behavior. Between April 26th and May 9th, 2019, ~45,000 images were acquired during stand-alone calibration at Malin Space Science Systems (MSSS) in San Diego, CA. Additional data were acquired during Assembly, Test, and Launch Operations (ATLO) at the Jet Propulsion Laboratory and Kennedy Space Center. Results of the radiometric calibration validate a 5% absolute radiometric accuracy when using camera state parameters investigated during testing. When observing using camera state parameters not interrogated during calibration (e.g., non-canonical zoom positions), we conservatively estimate the absolute uncertainty to be within the 0.2 design requirement. We discuss lessons learned from calibration and suggest tactical strategies that will optimize the quality of science data acquired during operations at Mars. While most results matched expectations, some surprises were discovered, such as a strong wavelength and temperature dependence of the radiometric coefficients and a scene-dependent dynamic component to the zero-exposure bias frames. Calibration results and derived accuracies were validated using a Geoboard target consisting of well-characterized geologic samples.
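The quoted pixel scales follow, to within roughly 10%, from small-angle optics using the KAI-2020's 7.4 ÎŒm pixel pitch (the pitch is from the sensor datasheet; the thin-lens approximation and the neglect of focus-dependent effective focal length are simplifications of this sketch):

```python
def pixel_scale(pixel_pitch_m, focal_length_m, range_m):
    """Ground size of one pixel under the small-angle approximation:
    scale = pitch * range / focal_length."""
    return pixel_pitch_m * range_m / focal_length_m

PITCH = 7.4e-6  # KAI-2020 pixel pitch in metres

# 110 mm zoom at 1 km: ~6.7 cm per pixel
s_tele = pixel_scale(PITCH, 0.110, 1000.0)
# 26 mm zoom at 1 km: ~28 cm per pixel
s_wide = pixel_scale(PITCH, 0.026, 1000.0)
```

These approximate values bracket the abstract's quoted 7.4-27 cm range at 1 km; the residual differences are consistent with focus-dependent effective focal length and other optical details captured only by the full calibration.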
    • 

    corecore