    Digital image correlation techniques applied to LANDSAT multispectral imagery

    The author has identified the following significant results. Automatic image registration and resampling techniques applied to LANDSAT data achieved mean radial displacement errors of less than 0.2 pixel. The processing method used recursive computation with line-by-line updating driven by feedback error signals. Goodness of local feature matching was evaluated with a correlation algorithm. An automatic restart capability allowed the system to derive control-point coordinates over a portion of the image and then restart the process using this new control-point information as initial estimates.
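
    The correlation matching step described above can be sketched as a normalized cross-correlation search over a local window. This is a minimal illustration, not the author's implementation; the function names and the brute-force search are our assumptions:

```python
import numpy as np

def normalized_cross_correlation(patch, template):
    """Goodness-of-match score between an image patch and a
    control-point template, in [-1, 1]."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def best_match(search_area, template):
    """Slide the template over a search area and return the offset
    of the highest-correlation position."""
    th, tw = template.shape
    sh, sw = search_area.shape
    best, best_off = -2.0, (0, 0)
    for i in range(sh - th + 1):
        for j in range(sw - tw + 1):
            score = normalized_cross_correlation(
                search_area[i:i + th, j:j + tw], template)
            if score > best:
                best, best_off = score, (i, j)
    return best_off, best
```

    In a registration pipeline, the recovered offsets at many control points would feed the feedback error signal that drives the line-by-line resampling update.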

    Restoration of Scene Information Reflected from Non-Specular Media

    A recently published experiment called dual photography exploits Helmholtz reciprocity by illuminating a scene with a pixelated light source and imaging other parts of that scene with a camera, so that the light transport between every pair of source-to-camera pixels is measured. The positions of the source and camera are then computationally interchanged to generate a dual image of the scene from the viewpoint of the source, illuminated from the position of the camera. Although information from parts of the scene normally hidden from the camera is made available, this technique is rather contrived and therefore limited in practical applications, since it requires access to the path from the source to the scene for the pixelated illumination. By radiometrically modeling the experiment described above and expanding it to the concept of indirect photography, it has been shown theoretically, by simulation and through experimentation that information in parts of the scene not directly visible to either the camera or the controlling light source can be recovered. To that end, the camera and light source (now a laser) have been collocated. The laser is reflected from a visible surface in the scene onto hidden surfaces, and the camera records how the light is reflected from the hidden surfaces back to the visible surface. The camera images are then used to reconstruct information from the hidden surfaces in the scene. This document discusses the theory of indirect photography, describes the simulation and experiments used to verify the theory, and describes techniques used to improve the image quality, as measured by a modified modulation transfer function.
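
    The core of dual photography can be sketched with a light-transport matrix: if a camera image c is related to a source pattern p by c = T p, Helmholtz reciprocity implies the dual image is obtained from the transpose, p' = Tᵀ c'. A minimal NumPy sketch follows; the `scene_response` callback stands in for the physical measurement and is our assumption, not an interface from the paper:

```python
import numpy as np

def measure_transport(scene_response, n_src, n_cam):
    """Measure the light-transport matrix T (n_cam x n_src) by turning
    on one source pixel at a time and recording the camera image.
    `scene_response(src)` models the scene: it returns the camera image
    produced by a given source pattern (a stand-in for real measurement)."""
    T = np.zeros((n_cam, n_src))
    for j in range(n_src):
        src = np.zeros(n_src)
        src[j] = 1.0                   # illuminate a single source pixel
        T[:, j] = scene_response(src)  # record the resulting camera image
    return T

def primal_image(T, src_pattern):
    return T @ src_pattern             # the camera's view of the scene

def dual_image(T, cam_pattern):
    # Helmholtz reciprocity: interchanging source and camera
    # corresponds to transposing the transport matrix.
    return T.T @ cam_pattern           # the source's view, "lit" by the camera
```

    Indirect photography keeps the same linear-transport picture but folds the extra bounce off the visible surface into the measured transport.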

    Global crop production forecasting: An analysis of the data system problems and their solutions

    Data-related problems in the acquisition and use of satellite data necessary for operational forecasting of global crop production are considered for the purpose of establishing a measurable baseline. For data acquisition, the world was divided into 37 crop regions in 22 countries. These regions represent approximately 95 percent of the total world production of the selected crops of interest, i.e., wheat, corn, soybeans, and rice. Targets were assigned to each region, limited time periods during which data could be taken (windows) were assigned to each target, and each target was assigned to a cloud region. The DSDS was used to measure the success of obtaining data for each target during the specified windows for the regional cloud conditions and the specific alternatives being analyzed. The results of this study suggest several approaches for an operational system that will perform satisfactorily with two LANDSAT-type satellites.
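
    The window and cloud-region bookkeeping above ultimately reduces to questions such as: what is the chance of at least one cloud-free acquisition of a target during its window? A toy calculation (not the DSDS model itself), assuming independent passes that are each clear with probability p:

```python
def clear_acquisition_probability(p_clear, n_passes):
    """Probability of at least one cloud-free acquisition in a window
    of n independent satellite passes, each clear with probability p_clear."""
    return 1.0 - (1.0 - p_clear) ** n_passes
```

    Under this simplification, a single satellite giving two passes at a 30 percent clear probability succeeds 51 percent of the time, while a second satellite doubling the passes raises that to about 76 percent, which is one way to see why two LANDSAT-type satellites change the outcome.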

    Earth resources technology satellite operations control center and data processing facility. Book 2 - Systems studies Final report

    Systems analysis for the ERTS NASA Data Processing Facility system and subsystems.

    Methodology for the Integration of Optomechanical System Software Models with a Radiative Transfer Image Simulation Model

    Stray light, any unwanted radiation that reaches the focal plane of an optical system, reduces image contrast, creates false signals or obscures faint ones, and ultimately degrades radiometric accuracy. These detrimental effects can have a profound impact on the usability of collected Earth-observing remote sensing data, which must be radiometrically calibrated to be useful for scientific applications. Understanding the full impact of stray light on data scientific utility is of particular concern for lower cost, more compact imaging systems, which inherently provide fewer opportunities for stray light control. To address these concerns, this research presents a general methodology for integrating point spread function (PSF) and stray light performance data from optomechanical system models in optical engineering software with a radiative transfer image simulation model. This integration method effectively emulates the PSF and stray light performance of a detailed system model within a high-fidelity scene, thus producing realistic simulated imagery. This novel capability enables system trade studies and sensitivity analyses to be conducted on parameters of interest, particularly those that influence stray light, by analyzing their quantitative impact on user applications when imaging realistic operational scenes. For Earth science applications, this method is useful in assessing the impact of stray light performance on retrieving surface temperature, ocean color products such as chlorophyll concentration or dissolved organic matter, etc. The knowledge gained from this model integration also provides insight into how specific stray light requirements translate to user application impact, which can be leveraged in writing more informed stray light requirements. 
In addition to detailing the methodology's radiometric framework, we describe the collection of the necessary raytrace data from an optomechanical system model (in this case, using FRED Optical Engineering Software), and present PSF and stray light component validation tests through imaging Digital Imaging and Remote Sensing Image Generation (DIRSIG) model test scenes. We then demonstrate the integration method's ability to produce quantitative metrics to assess the impact of stray light-focused system trade studies on user applications, using a Cassegrain telescope model and a stray light-stressing coastal scene under various system and scene conditions. This case study showcases the stray light images and other detailed performance data produced by the integration method, which take into account both a system's stray light susceptibility and a scene's at-aperture radiance profile to determine the stray light contribution of specific system components or stray light paths. The innovative contributions provided by this work represent substantial improvements over current stray light modeling and simulation techniques, in which scene image formation is decoupled from the physical system stray light modeling, and can aid in the design of future Earth-observing imaging systems. This work ultimately establishes an integrated-systems approach that combines the effects of scene content and the optomechanical components, resulting in a more realistic and higher fidelity system performance prediction.
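
    The integration idea, emulating a system's PSF and stray-light behavior inside a scene simulation, can be illustrated with a toy two-component model: most of the energy passes through a narrow core PSF while a small fraction is scattered into a wide halo. This is our simplification for illustration only, not the FRED/DIRSIG interface or the dissertation's radiometric framework:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian kernel of shape (size, size)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def simulate_detected_image(scene, core_sigma=1.0, stray_fraction=0.02,
                            stray_sigma=8.0, ksize=33):
    """Emulate a system response: a narrow core PSF carries most of the
    energy, while `stray_fraction` of it is scattered into a wide halo
    that washes out contrast over the whole frame."""
    def conv(img, k):
        # circular convolution via FFT (adequate for a sketch)
        kp = np.zeros_like(img)
        kh, kw = k.shape
        kp[:kh, :kw] = k
        kp = np.roll(kp, (-(kh // 2), -(kw // 2)), axis=(0, 1))
        return np.real(ifft2(fft2(img) * fft2(kp)))
    core = conv(scene, gaussian_kernel(ksize, core_sigma))
    halo = conv(scene, gaussian_kernel(ksize, stray_sigma))
    return (1 - stray_fraction) * core + stray_fraction * halo
```

    Because both kernels are normalized, total flux is preserved while peak contrast drops, which is exactly the radiometric degradation a trade study on `stray_fraction` would quantify against a user application.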

    Scene Monitoring With A Forest Of Cooperative Sensors

    In this dissertation, we present vision-based scene interpretation methods for real-time monitoring of people and vehicles within a busy environment using a forest of co-operative electro-optical (EO) sensors. We have developed novel video understanding algorithms with learning capability to detect and categorize people and vehicles, track them within a camera, and hand off this information across multiple networked cameras for multi-camera tracking. The ability to learn removes the need for extensive manual intervention, site models and camera calibration, and provides adaptability to changing environmental conditions. For object detection and categorization in the video stream, a two-step detection procedure is used. First, regions of interest are determined using a novel hierarchical background subtraction algorithm that uses color and gradient information for interest region detection. Second, objects are located and classified from within these regions using a weakly supervised learning mechanism based on co-training that employs motion and appearance features. The main contribution of this approach is that it is an online procedure in which separate views (features) of the data are used for co-training, while the combined view (all features) is used to make classification decisions in a single boosted framework. The advantage of this approach is that it requires only a few initial training samples and can automatically adjust its parameters online to improve the detection and classification performance. Once objects are detected and classified, they are tracked in individual cameras. Single-camera tracking is performed using a voting-based approach that utilizes color and shape cues to establish correspondence in individual cameras. The tracker has the capability to handle multiple occluded objects. Next, the objects are tracked across a forest of cameras with non-overlapping views. This is a hard problem for two reasons.
First, the observations of an object are often widely separated in time and space when viewed from non-overlapping cameras. Second, the appearance of an object in one camera view might be very different from its appearance in another camera view due to differences in illumination, pose and camera properties. To deal with the first problem, the system learns the inter-camera relationships to constrain track correspondences. These relationships are learned in the form of a multivariate probability density of space-time variables (object entry and exit locations, velocities, and inter-camera transition times) using Parzen windows. To handle the appearance change of an object as it moves from one camera to another, we show that all color transfer functions from a given camera to another camera lie in a low-dimensional subspace. The tracking algorithm learns this subspace using probabilistic principal component analysis and uses it for appearance matching. The proposed system learns the camera topology and the subspace of inter-camera color transfer functions during a training phase. Once training is complete, correspondences are assigned within a maximum a posteriori (MAP) estimation framework using both location and appearance cues. Extensive experiments and deployment of this system in realistic scenarios have demonstrated the robustness of the proposed methods. The proposed system was able to detect and classify targets and seamlessly track them across multiple cameras. It also generated a summary, in terms of key frames and a textual description of trajectories, for a monitoring officer's final analysis and response decision. This level of interpretation was the goal of our research effort, and we believe that it is a significant step forward in the development of intelligent systems that can deal with the complexities of real-world scenarios.
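
    The space-time transition model can be sketched as a Parzen-window (kernel) density estimate over observed inter-camera transitions. The feature layout, product-Gaussian kernel and fixed bandwidth below are illustrative assumptions, not the dissertation's exact parameterization:

```python
import numpy as np

def parzen_density(samples, query, bandwidth=1.0):
    """Parzen-window estimate of a multivariate density from observed
    inter-camera transitions. Each row of `samples` is a space-time
    feature vector (e.g. exit location, velocity, transit time); the
    returned value scores how plausible `query` is as a transition."""
    samples = np.atleast_2d(samples)
    n, d = samples.shape
    diffs = (samples - query) / bandwidth
    # product Gaussian kernel centered on each training sample
    k = np.exp(-0.5 * (diffs ** 2).sum(axis=1))
    norm = n * (bandwidth * np.sqrt(2 * np.pi)) ** d
    return k.sum() / norm
```

    In the MAP correspondence step, a score of this kind would be combined with the appearance likelihood from the learned color-transfer subspace to rank candidate track hand-offs.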

    Earth imaging with microsatellites: An investigation, design, implementation and in-orbit demonstration of electronic imaging systems for earth observation on-board low-cost microsatellites.

    This research programme has studied the possibilities and difficulties of using 50 kg microsatellites to perform remote imaging of the Earth. The design constraints of these missions are quite different to those encountered in larger, conventional spacecraft. While the main attractions of microsatellites are low cost and fast response times, they present the following key limitations: payload mass under 5 kg; continuous payload power under 5 Watts, with peak power up to 15 Watts; narrow communications bandwidths (9.6 / 38.4 kbps); attitude control only to within 5°; and no moving mechanisms. The most significant factor is the limited attitude stability. Without sub-degree attitude control, conventional scanning imaging systems cannot preserve scene geometry and are therefore poorly suited to current microsatellite capabilities. The foremost conclusion of this thesis is that electronic cameras, which capture entire scenes in a single operation, must be used to overcome the effects of the satellite's motion. The potential applications of electronic cameras, including microsatellite remote sensing, have expanded rapidly with the recent availability of high-sensitivity field-array CCD (charge-coupled device) image sensors. The research programme has established suitable techniques and architectures for CCD sensors, cameras and entire imaging systems to fulfil scientific/commercial remote sensing despite the difficult conditions on microsatellites. The author has refined these theories by designing, building and operating in orbit five generations of electronic cameras. The major objective of meteorological-scale imaging was conclusively demonstrated by the Earth imaging camera flown on the UoSAT-5 spacecraft in 1991. Improved cameras have since been carried by the KITSAT-1 (1992) and PoSAT-1 (1993) microsatellites.
PoSAT-1 also flies a medium resolution camera (200 metres) which (despite complete success) has highlighted certain limitations of microsatellites for high resolution remote sensing. A reworked, and extensively modularised, design has been developed for the four camera systems deployed on the FASat-Alfa mission (1995). Based on the success of these missions, this thesis presents many recommendations for the design of microsatellite imaging systems. The novelty of this research programme has been the principle of designing practical camera systems to fit on an existing, highly restrictive, satellite platform, rather than conceiving a fictitious small satellite to support a high performance scanning imager. This pragmatic approach has resulted in the first incontestable demonstrations of the feasibility of remote sensing of the Earth from inexpensive microsatellites
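
    The narrow downlinks quoted above dominate the imaging budget. A back-of-envelope sketch makes the constraint concrete; the 512 x 512, 8-bit frame size is an illustrative assumption, not the format of any specific UoSAT sensor:

```python
def downlink_time_s(width, height, bits_per_pixel, link_bps):
    """Seconds needed to transmit one uncompressed image over the downlink."""
    return width * height * bits_per_pixel / link_bps
```

    At 9.6 kbps a raw 512 x 512, 8-bit image takes roughly 218 seconds to transmit, and the 38.4 kbps link cuts this to about 55 seconds, which is why on-board storage and modest frame sizes matter so much on these platforms.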

    Adaptive Speckle Filtering in Radar Imagery
