59 research outputs found

    Workshop on Advanced Technologies for Planetary Instruments, part 1

    Get PDF
    This volume contains papers presented at the Workshop on Advanced Technologies for Planetary Instruments, held 28-30 April 1993. The meeting was conceived in response to new challenges facing NASA's robotic solar system exploration program. Over the past several years, SDIO has sponsored a significant technology development program aimed, in part, at the production of instruments with these characteristics. This workshop provided an opportunity for specialists from the planetary science and DoD communities to establish contacts, to explore common technical ground in an open forum, and, more specifically, to discuss the applicability of SDIO's technology base to planetary science instruments.

    Real-Time Computational Gigapixel Multi-Camera Systems

    Get PDF
    Standard cameras are designed to faithfully mimic the human eye and visual system. In recent years, commercially available cameras have become more complex and offer higher image resolutions than ever before. However, the quality of conventional imaging methods is limited by several parameters, such as the pixel size, the lens system, and the diffraction limit. Rapid technological advancements, the increase in available computing power, and the introduction of Graphics Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs) open new possibilities in the computer vision and computer graphics communities. Researchers are now focusing on utilizing the immense computational power offered by modern processing platforms to create imaging systems with novel or significantly enhanced capabilities compared to standard ones. One popular type of computational imaging system offering new possibilities is the multi-camera system. This thesis focuses on FPGA-based multi-camera systems that operate in real-time. The aim of the multi-camera systems presented in this thesis is to offer wide field-of-view (FOV) video coverage at high frame rates. The wide FOV is achieved by constructing a panoramic image from the images acquired by the multi-camera system. Two new real-time computational imaging systems that provide new functionalities and better performance compared to conventional cameras are presented in this thesis. Each camera system design and implementation is analyzed in detail, built, and tested in real-time conditions. Panoptic is a miniaturized low-cost multi-camera system that reconstructs a 360-degree view in real-time. Since it is an easily portable system, it provides means to capture the complete surrounding light field in dynamic environments, such as when mounted on a vehicle or a flying drone. The second presented system, GigaEye II, is a modular high-resolution imaging system that introduces the concept of distributed image processing in real-time camera systems. This thesis explains in detail how such a concept can be efficiently used in real-time computational imaging systems. The purpose of computational imaging systems in the form of multi-camera systems does not end with real-time panoramas. The application scope of these cameras is vast: they can be used in 3D cinematography, for broadcasting live events, or for immersive telepresence experiences. The final chapter of this thesis presents three potential applications of these systems: object detection and tracking, high dynamic range (HDR) imaging, and observation of multiple regions of interest. Object detection and tracking, and observation of multiple regions of interest, are extremely useful and desired capabilities of surveillance systems, in the security and defense industries, and in the fast-growing industry of autonomous vehicles. On the other hand, high dynamic range imaging is becoming a common option in consumer market cameras, and the presented method allows instantaneous capture of HDR videos. Finally, this thesis concludes with a discussion of real-time multi-camera systems, their advantages, their limitations, and future predictions.
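
As a side note on the HDR application mentioned in the abstract above: the thesis's own method is not reproduced here, but the following minimal Python sketch illustrates the generic idea of fusing simultaneously captured, differently exposed frames into a single radiance map. It assumes a linear sensor response and frames already registered to a common viewpoint; the function name merge_hdr and the triangle weighting are illustrative choices, not taken from the thesis.

```python
import numpy as np

def merge_hdr(frames, exposure_times):
    """Fuse registered, differently exposed frames (linear response assumed)
    into one radiance map by exposure-weighted averaging.

    frames:          list of float arrays with values in [0, 1]
    exposure_times:  matching list of exposure times in seconds
    """
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for img, t in zip(frames, exposure_times):
        # Triangle weight: trust mid-range pixels, discount pixels that are
        # nearly saturated or nearly black.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t        # radiance estimate contributed by this frame
        den += w
    return num / np.maximum(den, 1e-6)
```

With per-camera exposures of, say, 1 ms, 4 ms, and 16 ms, the weighted average discounts saturated pixels in the longest exposure and noisy dark pixels in the shortest one, which is what allows an HDR frame to be assembled from a single synchronized capture.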

    Omnidirectional Light Field Analysis and Reconstruction

    Get PDF
    Digital photography has existed since 1975, when Steven Sasson built the first digital camera. Since then, the concept of the digital camera has not evolved much: an optical lens concentrates light rays onto a focal plane, where a planar photosensitive array transforms the light intensity into an electric signal. During the last decade, a new way of conceiving digital photography emerged: a photograph is the acquisition of the entire light ray field in a confined region of space. The main implication of this new concept is that a digital camera no longer acquires a 2-D signal, but in general a 5-D signal. Acquiring an image becomes more demanding in terms of memory and processing power; at the same time, it offers users a new set of possibilities, like dynamically choosing the focal plane and the depth of field of the final digital photo. In this thesis we develop a complete mathematical framework to acquire and then reconstruct the omnidirectional light field around an observer. We also propose the design of a digital light field camera system, which is composed of several pinhole cameras distributed around a sphere. The choice is not arbitrary, as we take inspiration from something already seen in nature: the compound eyes of common terrestrial and flying insects like the house fly. In the first part of the thesis we analyze the optimal sampling conditions that permit an efficient discrete representation of the continuous light field. In other words, we give an answer to the question: how many cameras and what resolution are needed to have a good representation of the 4-D light field? Since we are dealing with an omnidirectional light field, we use a spherical parametrization. The result of our analysis is that we need an irregular (i.e., not rectangular) sampling scheme to represent the light field efficiently. Then, to store the samples, we use a graph structure, where each node represents a light ray and the edges encode the topology of the light field. When compared to other existing approaches, our scheme has the favorable property that the number of samples scales smoothly for a given output resolution. The next step after the acquisition of the light field is to reconstruct a digital picture, which can be seen as a 2-D slice of the 4-D acquired light field. We interpret the reconstruction as a regularized inverse problem defined on the light field graph and obtain a solution based on a diffusion process. The proposed scheme has three main advantages when compared to classic linear interpolation: it is robust to noise, it is computationally efficient, and it can be implemented in a distributed fashion. In the second part of the thesis we investigate the problem of extracting geometric information about the scene in the form of a depth map. We show that the depth information is encoded in the light field derivatives and set up a TV-regularized inverse problem, which efficiently calculates a dense depth map of the scene while respecting the discontinuities at the boundaries of objects. The extracted depth map is used to remove visual and geometrical artifacts from the reconstruction when the light field is under-sampled. In other words, it can be used to help the reconstruction process in challenging situations.
Furthermore, when the light field camera moves over time, we show how the depth map can be used to estimate the motion parameters between two consecutive acquisitions with a simple and effective algorithm, which requires neither the computation nor the matching of features and performs only simple arithmetic operations directly in pixel space. In the last part of the thesis, we introduce a novel omnidirectional light field camera that we call Panoptic. We obtain it by layering miniature CMOS imagers onto a hemispherical surface and connecting them to a network of FPGAs. We show that the proposed mathematical framework is well suited to be embedded in hardware by demonstrating real-time reconstruction of an omnidirectional video stream at 25 frames per second.
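
The diffusion-based reconstruction on the light field graph is described above only at a high level. As a rough illustration (not the author's actual scheme), the following Python sketch fills unknown ray intensities on a graph by iterative neighbour averaging while keeping the sampled rays fixed; the adjacency matrix, the Jacobi-style update, and the function name diffuse_on_graph are all assumptions made for this example.

```python
import numpy as np

def diffuse_on_graph(adjacency, values, known_mask, num_iters=200):
    """Fill unknown node values on a graph by iterative diffusion
    (Jacobi-style neighbour averaging), keeping known samples fixed.

    adjacency:  (n, n) symmetric 0/1 matrix encoding the light field graph
    values:     (n,) measured ray intensities; entries outside known_mask are ignored
    known_mask: (n,) boolean, True where a ray was actually sampled
    """
    adjacency = adjacency.astype(float)
    degree = adjacency.sum(axis=1)
    degree[degree == 0] = 1.0                    # isolated nodes keep their value
    x = np.where(known_mask, values, values[known_mask].mean())
    for _ in range(num_iters):
        x = adjacency @ x / degree               # average of graph neighbours
        x = np.where(known_mask, values, x)      # clamp the sampled rays
    return x
```

Because each update only needs a node's immediate neighbours, a scheme of this kind can be computed locally, which is consistent with the distributed implementation advantage claimed in the abstract.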

    Optical Splitting Trees for High-Precision Monocular Imaging

    Full text link

    Advanced technologies for planetary instruments

    Get PDF
    The planetary science community described instrumentation needed for missions that may go into development during the next 5 to 10 years. The DoD community then informed their counterparts in planetary science about their interests and capabilities, and described the BMDO technology base, flight programs, and future directions. The working group sessions and the panel discussion synthesized technical and programmatic issues from all the presentations, with the specific goal of assessing the applicability of BMDO technologies to science instrumentation for planetary exploration. Edited by J. Appleby. Contents include: Clementine II: A Double Asteroid Flyby and Impactor Mission / Boain, R.J. -- The APX Spectrometer for Martian Missions / Economou, T. -- Clementine Sensor Processing System / Feldstein, A.A. -- The Ultraviolet Plume Instrument (UVPI) / Horan, D.M. -- New Technologies for UV Detectors / Joseph, C.L.

    Earth imaging with microsatellites: An investigation, design, implementation and in-orbit demonstration of electronic imaging systems for earth observation on-board low-cost microsatellites.

    Get PDF
    This research programme has studied the possibilities and difficulties of using 50 kg microsatellites to perform remote imaging of the Earth. The design constraints of these missions are quite different from those encountered in larger, conventional spacecraft. While the main attractions of microsatellites are low cost and fast response times, they present the following key limitations: payload mass under 5 kg; continuous payload power under 5 Watts, with peak power up to 15 Watts; narrow communications bandwidths (9.6 / 38.4 kbps); attitude control to within 5°; and no moving mechanisms. The most significant factor is the limited attitude stability. Without sub-degree attitude control, conventional scanning imaging systems cannot preserve scene geometry, and are therefore poorly suited to current microsatellite capabilities. The foremost conclusion of this thesis is that electronic cameras, which capture entire scenes in a single operation, must be used to overcome the effects of the satellite's motion. The potential applications of electronic cameras, including microsatellite remote sensing, have erupted with the recent availability of high-sensitivity field-array CCD (charge-coupled device) image sensors. The research programme has established suitable techniques and architectures for CCD sensors, cameras, and entire imaging systems to fulfil scientific/commercial remote sensing despite the difficult conditions on microsatellites. The author has refined these theories by designing, building, and exploiting in orbit five generations of electronic cameras. The major objective of meteorological-scale imaging was conclusively demonstrated by the Earth imaging camera flown on the UoSAT-5 spacecraft in 1991. Improved cameras have since been carried by the KITSAT-1 (1992) and PoSAT-1 (1993) microsatellites. PoSAT-1 also flies a medium-resolution camera (200 metres) which (despite complete success) has highlighted certain limitations of microsatellites for high-resolution remote sensing. A reworked, and extensively modularised, design has been developed for the four camera systems deployed on the FASat-Alfa mission (1995). Based on the success of these missions, this thesis presents many recommendations for the design of microsatellite imaging systems. The novelty of this research programme has been the principle of designing practical camera systems to fit on an existing, highly restrictive satellite platform, rather than conceiving a fictitious small satellite to support a high-performance scanning imager. This pragmatic approach has resulted in the first incontestable demonstrations of the feasibility of remote sensing of the Earth from inexpensive microsatellites.
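
To make the attitude-stability argument above concrete, here is a hedged back-of-envelope sketch in Python relating orbit altitude, detector pitch, and focal length to ground sample distance, and attitude drift to motion smear during an exposure. The numerical values are purely illustrative and are not taken from the UoSAT-5, KITSAT-1, or PoSAT-1 payloads.

```python
import math

def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """Nadir ground footprint of one detector pixel (small-angle approximation)."""
    return altitude_m * pixel_pitch_m / focal_length_m

def attitude_smear(altitude_m, rate_deg_per_s, exposure_s):
    """Ground-track smear caused by attitude drift during one exposure."""
    return altitude_m * math.tan(math.radians(rate_deg_per_s) * exposure_s)

# Illustrative numbers only: a ~800 km orbit, 10 micron pixels, a 50 mm lens,
# and a slow 0.05 deg/s attitude drift over a 10 ms frame exposure.
print(ground_sample_distance(800e3, 10e-6, 50e-3))   # ~160 m per pixel
print(attitude_smear(800e3, 0.05, 0.01))             # ~7 m of smear
```

Under these assumed numbers, a frame camera exposing for milliseconds smears by only a few metres despite the coarse attitude control, whereas a scanning imager integrating the same drift over many seconds would distort the scene geometry, which is the core argument for electronic frame cameras on microsatellites.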

    3-D Cloud Morphology and Evolution Derived from Hemispheric Stereo Cameras

    Get PDF
    Clouds play a key role in the Earth-atmosphere system as they reflect incoming solar radiation back to space while absorbing and emitting longwave radiation. Cumulus clouds pose a significant challenge for observation and modeling due to their relatively small size, which can range from several hundred up to a few thousand meters, their often complex 3-D shapes, and their highly dynamic life-cycle. Common instruments employed to study clouds include cloud radars, lidar-ceilometers, (microwave) radiometers, but also satellite and airborne observations (in-situ and remote), all of which lack either sufficient sensitivity or the spatial or temporal resolution for a comprehensive observation. This thesis investigates the feasibility of a ground-based network of hemispheric stereo cameras to retrieve detailed 3-D cloud geometries, which are needed for the validation of simulated cloud fields and for parametrization in numerical models. Such camera systems, which offer a hemispheric field of view and a temporal resolution in the range of seconds or less, have the potential to fill the remaining gap in cloud observations to a considerable degree and allow the derivation of critical information about the size, morphology, spatial distribution, and life-cycle of individual clouds and the local cloud field. The technical basis for the 3-D cloud morphology retrieval is stereo reconstruction: a cloud is synchronously recorded by a pair of cameras separated by a few hundred meters, so that mutually visible areas of the cloud can be reconstructed via triangulation. The location and orientation of each camera system were obtained from a satellite-navigation system, detected stars in night-sky images, and mutually visible cloud features in the images. The image point correspondences required for 3-D triangulation were provided primarily by a dense stereo matching algorithm that reconstructs an object with a high degree of spatial completeness, which can improve subsequent analysis. The experimental setup in the vicinity of the Jülich Observatory for Cloud Evolution (JOYCE) included a pair of hemispheric sky cameras; it was later extended by another pair, separated from the first by several kilometers, to reconstruct clouds from different view perspectives. A comparison of the cloud base height (CBH) at zenith obtained from the stereo cameras and a lidar-ceilometer showed a typical bias of mostly below 2% of the lidar-derived CBH, with a few occasions between 3 and 5%. Typical standard deviations of the differences ranged from 50 m (1.5% of CBH) for altocumulus clouds to between 7% (123 m) and 10% (165 m) for cumulus and stratocumulus clouds. A comparison of the estimated 3-D cumulus boundary at near-zenith to the sensed 2-D reflectivity profiles from a 35-GHz cloud radar revealed typical differences between 35 and 81 m. For clouds at larger distances (> 2 km), both signals can deviate significantly, which can in part be explained by a lower reconstruction accuracy for the low-contrast areas of a cloud base, but also by the insufficient sensitivity of the cloud radar if the cloud condensate is dominated by very small droplets or diluted with environmental air. For sequences of stereo images, the 3-D cloud reconstructions from the stereo analysis can be combined with the motion and tracking information from an optical flow routine in order to derive 3-D motion and deformation vectors of clouds.
This allowed atmospheric motion in the case of cloud layers to be estimated with an accuracy of 1 m/s in velocity and 7° to 10° in direction. The fine-grained motion data was also used to detect and quantify cloud motion patterns of individual cumuli, such as deformations under vertical wind shear. The potential of the proposed method lies in an extended analysis of the life-cycle and morphology of cumulus clouds. This is illustrated in two show cases in which developing cumulus clouds were reconstructed from two different view perspectives. In the first case study, a moving cloud was tracked and analyzed while being subject to vertical wind shear. The highly tilted cloud body was captured and its vertical profile quantified to obtain measures such as the vertically resolved diameter or the tilting angle. The second case study shows a life-cycle analysis of a developing cumulus, including a time series of relevant geometric aspects, such as perimeter, vertically projected area, diameter, thickness, and further derived statistics like cloud aspect ratio or perimeter scaling. The analysis confirms some aspects of cloud evolution, such as the pulse-like formation of cumulus, and indicates that the cloud aspect ratio (size vs. height) can be described by a power-law functional relationship over an individual life-cycle.
[Abstract in German, translated:] Clouds have a decisive influence on the Earth's radiation budget, as they effectively reflect solar radiation while both absorbing longwave radiation emitted by the Earth and re-emitting it themselves. Moreover, cumulus clouds, with their comparatively small extent of a few hundred to a few thousand meters and their dynamic life-cycle, still pose a major challenge for observation and modeling. Instruments currently used to study them, such as lidar-ceilometers, cloud radar, microwave radiometers, or satellite-based observations, do not provide the spatial and temporal coverage required for a comprehensive investigation of these clouds. This thesis investigates to what extent ground-based observation of clouds with hemispherically projecting cloud cameras is suited to reconstructing detailed 3-D cloud geometries and deriving from them information about the size, morphology, and life-cycle of individual clouds and the local cloud field. The basis for capturing the 3-D cloud geometries in this work is 3-D stereo reconstruction, in which a cloud is imaged by two synchronously recording cameras set up several hundred meters apart. Parts of a cloud visible to both cameras can thus be reconstructed via triangulation. Fisheye lenses provide the hemispheric field of view of the cloud cameras. While the camera positions were determined with the aid of a satellite navigation system, the absolute orientation of the cameras in space was determined from detected stars serving as reference points. The relative orientation of two cameras, which is important for stereo analysis, was subsequently refined using point correspondences between the stereo images. For the stereo analysis, an image analysis algorithm was primarily employed that is characterized by high geometric completeness and also provides 3-D information for image regions with low contrast. In selected cases, the cloud geometries reconstructed in this way were additionally compared with a precise multi-view stereo method. A 3-D cloud geometry that is as complete as possible is advantageous for subsequent analysis, which comprises the segmentation and identification of individual clouds, their spatio-temporal tracking, and the derivation of geometric quantities. The experimental setup in the vicinity of the Jülich Observatory for Cloud Evolution (JOYCE) initially comprised one and later two stereo camera systems, installed several kilometers apart in order to reconstruct different parts of the clouds. A comparison between the stereo reconstruction and a lidar-ceilometer showed typical standard deviations of the cloud base height difference of 50 m (1.5%) for mid-level altocumulus clouds and 123 m (7%) to 165 m (10%) for heterogeneous cumulus and stratocumulus clouds. At the same time, the reconstructed cloud base height deviated on average mostly by no more than 2%, and in individual cases by 3-5%, from the corresponding lidar value. Compared with the cumulus morphology derived from the 2-D reflectivity profiles of the cloud radar, typical differences of between 35 and 81 m were found near zenith. For more distant clouds (> 2 km), the stereo reconstruction and the reflectivity signal can differ substantially, which can be explained not only by a decreasing geometric accuracy of the stereo reconstruction in low-contrast regions but in particular by the often insufficient sensitivity of the radar to small cloud droplets, as found at the cloud base and at cloud edges. Combining the stereo analysis with the motion information within an image sequence allows cloud motion and deformation vectors to be determined. Besides tracking individual cloud structures and capturing cloud dynamics (for example, the deformation of clouds by wind shear), wind speed and direction can be estimated in the case of stratiform clouds. A comparison with wind lidar observations showed typical deviations of 1 m/s in wind speed and 7° to 10° in wind direction. A particular added value of the method lies in a more in-depth analysis of the morphology and life-cycle of cumulus clouds. This was demonstrated in two exemplary case studies in which the 3-D reconstructions of two stereo camera systems set up far apart were combined. In the first case, a cumulus developing under vertical wind shear was recorded from two sides, which allowed a geometric capture of the cloud body, strongly tilted by the shear. Characteristics such as the vertical profile, the tilt angle of the cloud, and the diameter of individual height layers were estimated. The second case presented a statistical analysis of a developing cumulus over its life-cycle. This allowed the compilation of a time series of relevant metrics such as the equivalent diameter, vertical extent, and perimeter, as well as derived quantities such as the aspect ratio or perimeter scaling. While the analysis confirms previous results from simulations and satellite-based observations, it also allows an extension to the level of individual clouds and the derivation of functional relationships, such as the ratio of cloud diameter to vertical extent.
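
The cloud stereo abstract above relies on triangulating mutually visible cloud features from two synchronized cameras. As a minimal, generic illustration (not the thesis's full pipeline, which involves fisheye calibration, star-based orientation, and dense stereo matching), the following Python sketch triangulates a single matched feature as the midpoint of the shortest segment between the two viewing rays; camera positions and unit viewing directions are assumed to be already known.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Triangulate a cloud point seen by two cameras.

    c1, c2: camera positions, shape (3,)
    d1, d2: viewing directions of the matched image feature, shape (3,)
    Returns the midpoint of the shortest segment between the two rays,
    or None if the rays are (nearly) parallel.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = c2 - c1                       # baseline between the two cameras
    d1d2 = d1 @ d2
    denom = 1.0 - d1d2 ** 2
    if denom < 1e-12:                 # parallel rays: no stable intersection
        return None
    t1 = (d1 @ b - d1d2 * (d2 @ b)) / denom
    t2 = (d1d2 * (d1 @ b) - d2 @ b) / denom
    p1 = c1 + t1 * d1                 # closest point on ray 1
    p2 = c2 + t2 * d2                 # closest point on ray 2
    return 0.5 * (p1 + p2)
```

With a baseline of a few hundred meters, as used at JOYCE, the separation between the two closest points also gives a rough per-feature consistency check before the point is accepted into the reconstructed cloud surface.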

    Revealing the Invisible: On the Extraction of Latent Information from Generalized Image Data

    Get PDF
    The desire to reveal the invisible in order to explain the world around us has been a source of impetus for technological and scientific progress throughout human history. Many of the phenomena that directly affect us cannot be sufficiently explained based on observations using our primary senses alone. Often this is because their originating cause is either too small, too far away, or in other ways obstructed; in other words, it is invisible to us. Without careful observation and experimentation, our models of the world remain inaccurate, and research has to be conducted in order to improve our understanding of even the most basic effects. In this thesis, we present our solutions to three challenging problems in visual computing, where a surprising amount of information is hidden in generalized image data and cannot easily be extracted by human observation or existing methods. We are able to extract the latent information using non-linear and discrete optimization methods based on physically motivated models and computer graphics methodology, such as ray tracing, real-time transient rendering, and image-based rendering.