
    Automatic optical matched filtering an evaluation of Reflexite and Transitions lenses in an optical matched filtering role

    A matched filter is a device used to detect the presence and position of known objects in a scene. The ability to create an optical matched filter has existed for many years, but current filters suffer from problems such as quantization error and complexity in generating the matched filters. The optical matched filter being studied eliminates these problems in an attempt to produce a better filter. This research also has other potential implications: one can envision such a system being used for real-time target detection, where the primary advantage over conventional digital image processing is the ability to compute at the speed of light. The filter has two necessary components: one to compute the complex conjugate of the phase and the other to store the complement of the magnitude. Reflexite™ (an array of corner-cube retroreflectors) was studied for its potential use in the optical matched filter as a phase conjugator. Transitions Lenses™ (a photochromic optical window) were studied for their potential to store the complement of the magnitude in real time. Early on we determined that the Transitions Lenses™ would not be feasible for our optical set-up. We also concluded that while Reflexite™ does approximately conjugate the phase, it would not be adequate for optical matched filtering. It did, however, work well enough to be used in simple optical filtering experiments.
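    The abstract describes the two components of a matched filter in terms of the target's Fourier spectrum. As a point of comparison only, the sketch below shows the conventional digital analogue of that idea (not the optical system studied here): the filter is built from the complex conjugate of the target's spectrum, and the correlation output peaks at the target's position. The names `scene` and `target` are illustrative.

    # Minimal digital sketch of a matched filter (illustrative, not the
    # optical implementation described in the abstract): the filter is the
    # complex conjugate of the target's Fourier spectrum, so the correlation
    # output peaks where the target appears in the scene.
    import numpy as np

    def matched_filter_response(scene: np.ndarray, target: np.ndarray) -> np.ndarray:
        """Cross-correlate `scene` with `target` via the frequency domain."""
        # Zero-pad the target to the scene size so the spectra align.
        padded = np.zeros_like(scene, dtype=float)
        padded[:target.shape[0], :target.shape[1]] = target

        scene_spectrum = np.fft.fft2(scene)
        filter_spectrum = np.conj(np.fft.fft2(padded))  # phase-conjugation step

        # Inverse transform of the product gives the correlation surface.
        response = np.fft.ifft2(scene_spectrum * filter_spectrum)
        return np.abs(response)

    # Usage sketch: the brightest point of the response marks the detected
    # position, e.g. np.unravel_index(np.argmax(response), response.shape).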

    A psychophysical investigation of global illumination algorithms used in augmented reality

    Global illumination rendering algorithms are capable of producing images that are visually realistic. However, this typically comes at a large computational expense. The overarching goal of this research was to compare different rendering solutions in order to understand why some yield better results when applied to rendering synthetic objects into real photographs. As rendered images are ultimately viewed by human observers, it was logical to use psychophysics to investigate these differences. A psychophysical experiment was conducted in which the composite images were judged for accuracy against the original photograph. In addition, iCAM, an image color appearance model, was used to calculate image differences for the same set of images. In general, it was determined that any full global illumination solution is better than a direct-illumination-only solution. It was also discovered that the full rendering, with all of its artifacts, is not necessarily an indicator of judged accuracy for the final composite image. Finally, initial results show promise in using iCAM to predict a relationship similar to the psychophysics, which could eventually be used in the rendering loop to achieve photo-realism.
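    The abstract mentions using iCAM to compute image differences between each composite and the reference photograph. As a crude, hedged stand-in for that step (this is a generic CIEDE2000 difference map, not the iCAM model used in the study), the sketch below scores how far a rendered composite departs from the photograph; file names and the averaging step are illustrative assumptions.

    # Rough stand-in for the image-difference step (NOT the iCAM model used
    # in the study): a per-pixel CIEDE2000 color-difference map between a
    # rendered composite and the reference photograph.
    import numpy as np
    from skimage import color, io

    def color_difference_map(reference_rgb: np.ndarray, test_rgb: np.ndarray) -> np.ndarray:
        """Return a per-pixel CIEDE2000 difference map (higher = more visible)."""
        ref_lab = color.rgb2lab(reference_rgb)
        test_lab = color.rgb2lab(test_rgb)
        return color.deltaE_ciede2000(ref_lab, test_lab)

    # Illustrative usage: a single mean score per rendering condition could
    # then be compared against the psychophysical accuracy judgments.
    # photo = io.imread("photograph.png")[..., :3] / 255.0
    # composite = io.imread("composite_render.png")[..., :3] / 255.0
    # score = color_difference_map(photo, composite).mean()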

    High-Resolution Slant-Angle Scene Generation and Validation of Concealed Targets in DIRSIG

    Traditionally, synthetic imagery has been constructed to simulate images captured with low-resolution, nadir-viewing sensors. Advances in sensor design have driven a need to simulate scenes not only at higher resolutions but also from oblique view angles. The primary efforts of this research include real image capture, scene construction and modeling, and validation of the synthetic imagery in the reflective portion of the spectrum. High-resolution imagery of an area named MicroScene at the Rochester Institute of Technology was collected from an oblique view angle using the Chester F. Carlson Center for Imaging Science’s MISI and WASP sensors. Three Humvees, the primary targets, were placed in the scene under three different levels of concealment. Following the collection, a synthetic replica of the scene was constructed and then rendered with the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, configured to recreate the scene both spatially and spectrally based on actual sensor characteristics. Finally, a validation of the synthetic imagery against the real images of MicroScene was accomplished using a combination of qualitative analysis, Gaussian maximum likelihood classification, and the RX algorithm. The model was updated following each validation using a cyclical development approach. The purpose of this research is to provide a level of confidence in the synthetic imagery produced by DIRSIG so that it can be used to train and develop algorithms for real-world concealed-target detection.
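    One of the validation measures named above is the RX algorithm. A minimal sketch of the standard global RX anomaly detector on a hyperspectral cube is shown below; the cube shape and function name are assumptions for illustration, not the DIRSIG validation code itself.

    # Minimal sketch of the global RX anomaly detector: each pixel's spectrum
    # is scored by its Mahalanobis distance from scene-wide background
    # statistics. Array shapes (rows, cols, bands) are illustrative.
    import numpy as np

    def rx_detector(cube: np.ndarray) -> np.ndarray:
        """RX scores for a hyperspectral cube of shape (rows, cols, bands)."""
        rows, cols, bands = cube.shape
        pixels = cube.reshape(-1, bands).astype(float)

        mean = pixels.mean(axis=0)
        cov = np.cov(pixels, rowvar=False)
        cov_inv = np.linalg.pinv(cov)  # pseudo-inverse guards against a
                                       # near-singular covariance matrix
        centered = pixels - mean
        scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
        return scores.reshape(rows, cols)

    # High scores flag pixels that are spectrally anomalous relative to the
    # background; comparing score maps from real and synthetic imagery is one
    # way to check that the synthetic scene drives a detector similarly.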

    Surface and Buried Landmine Scene Generation and Validation Using the Digital Imaging and Remote Sensing Image Generation Model

    Detection and neutralization of surface-laid and buried landmines has been a slow and dangerous endeavor for military forces and humanitarian organizations throughout the world. In an effort to make the process faster and safer, scientists have begun to exploit the ever-evolving passive electro-optical realm, both from a broadband perspective and a multi- or hyperspectral perspective. Carried with this exploitation is the development of mine detection algorithms that take advantage of spectral features exhibited by mine targets, available only in a multi- or hyperspectral data set. Difficulty in algorithm development arises from a lack of robust data, which is needed to appropriately test the validity of an algorithm’s results. This paper discusses the development of synthetic data using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. A synthetic landmine scene has been modeled after data collected at a US Army arid testing site by the University of Hawaii’s Airborne Hyperspectral Imager (AHI). The synthetic data has been created and validated to represent the surrogate minefield thermally, spatially, spectrally, and temporally over the 7.9 to 11.5 micron region using 70 bands of data. Validation of the scene has been accomplished by direct comparison to the AHI truth data using qualitative band-to-band visual analysis, Rank Order Correlation comparison, Principal Components dimensionality analysis, and an evaluation of the RX algorithm’s performance. This paper discusses landmine detection phenomenology, describes the steps taken to build the scene, details modeling methods utilized to overcome input parameter limitations, and compares the synthetic scene to truth data.
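    Among the validation measures listed is a band-to-band Rank Order Correlation between the synthetic cube and the AHI truth cube. The sketch below shows one way such a comparison could be run using Spearman's rank correlation per band; the cube shapes and function name are assumptions for illustration, not the authors' validation code.

    # Hedged sketch of a band-by-band Rank Order Correlation check: compare
    # each synthetic band to the corresponding AHI truth band with Spearman's
    # rank correlation. Cube shapes (rows, cols, bands) are assumed and the
    # cubes are assumed to be co-registered.
    import numpy as np
    from scipy.stats import spearmanr

    def band_rank_correlations(synthetic: np.ndarray, truth: np.ndarray) -> np.ndarray:
        """Spearman correlation per band between two co-registered cubes."""
        assert synthetic.shape == truth.shape, "cubes must be co-registered"
        bands = synthetic.shape[-1]
        rho = np.empty(bands)
        for b in range(bands):
            rho[b], _ = spearmanr(synthetic[..., b].ravel(), truth[..., b].ravel())
        return rho

    # Bands with low correlation point to spectral regions where the scene
    # or sensor model may still need refinement.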