    High-Resolution Slant-Angle Scene Generation and Validation of Concealed Targets in DIRSIG

    Traditionally, synthetic imagery has been constructed to simulate images captured with low-resolution, nadir-viewing sensors. Advances in sensor design have driven a need to simulate scenes not only at higher resolutions but also from oblique view angles. The primary efforts of this research include real image capture, scene construction and modeling, and validation of the synthetic imagery in the reflective portion of the spectrum. High-resolution imagery of an area named MicroScene at the Rochester Institute of Technology was collected at an oblique view angle using the Chester F. Carlson Center for Imaging Science’s MISI and WASP sensors. Three Humvees, the primary targets, were placed in the scene under three different levels of concealment. Following the collection, a synthetic replica of the scene was constructed and then rendered with the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, configured to recreate the scene both spatially and spectrally based on actual sensor characteristics. Finally, the synthetic imagery was validated against the real images of MicroScene using a combination of qualitative analysis, Gaussian maximum likelihood classification, and the RX algorithm. The model was updated following each validation using a cyclical development approach. The purpose of this research is to provide a level of confidence in the synthetic imagery produced by DIRSIG so that it can be used to train and develop algorithms for real-world concealed target detection.
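    The abstract does not give implementation details, but the RX algorithm it cites for validation is a standard Mahalanobis-distance anomaly screen. The minimal sketch below (the function name and the (rows, cols, bands) NumPy layout are illustrative assumptions, not details from the thesis) shows the kind of global RX scoring that could be applied identically to the real and synthetic MicroScene imagery so their anomaly maps can be compared.

```python
import numpy as np

def rx_anomaly_scores(cube):
    """Global RX anomaly detector (illustrative sketch).

    cube: image cube as a (rows, cols, bands) array.
    Returns a (rows, cols) array of Mahalanobis distances of each
    pixel spectrum from the scene-wide background statistics.
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)

    mu = pixels.mean(axis=0)               # background mean spectrum
    cov = np.cov(pixels, rowvar=False)     # background covariance (bands x bands)
    cov_inv = np.linalg.pinv(cov)          # pseudo-inverse for numerical stability

    centered = pixels - mu
    # Mahalanobis distance per pixel: (x - mu)^T C^{-1} (x - mu)
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(rows, cols)
```

    Comparing the score images (or their detection statistics at matched thresholds) for the real and synthetic scenes is one way such a validation could quantify how similarly the concealed targets stand out in each.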

    Surface and Buried Landmine Scene Generation and Validation Using the Digital Imaging and Remote Sensing Image Generation Model

    Detection and neutralization of surface-laid and buried landmines has been a slow and dangerous endeavor for military forces and humanitarian organizations throughout the world. In an effort to make the process faster and safer, scientists have begun to exploit the ever-evolving passive electro-optical realm, from both a broadband and a multi- or hyperspectral perspective. Accompanying this exploitation is the development of mine detection algorithms that take advantage of spectral features exhibited by mine targets, which are only available in a multi- or hyperspectral data set. Difficulty in algorithm development arises from a lack of robust data, which is needed to appropriately test the validity of an algorithm’s results. This paper discusses the development of synthetic data using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. A synthetic landmine scene has been modeled after data collected at a US Army arid testing site by the University of Hawaii’s Airborne Hyperspectral Imager (AHI). The synthetic data have been created and validated to represent the surrogate minefield thermally, spatially, spectrally, and temporally over the 7.9 to 11.5 micron region using 70 bands of data. Validation of the scene has been accomplished by direct comparison to the AHI truth data using qualitative band-to-band visual analysis, Rank Order Correlation comparison, Principal Components dimensionality analysis, and an evaluation of the RX algorithm’s performance. This paper discusses landmine detection phenomenology, describes the steps taken to build the scene and the modeling methods utilized to overcome input parameter limitations, and compares the synthetic scene to truth data.
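    The Rank Order Correlation comparison mentioned above amounts, in effect, to a band-by-band rank (Spearman) correlation between the synthetic cube and the AHI truth cube. As a hedged illustration (the function name, array layout, and use of SciPy are assumptions, not details from the paper), a per-band comparison might look like the sketch below.

```python
import numpy as np
from scipy.stats import spearmanr

def band_rank_correlations(synthetic_cube, truth_cube):
    """Spearman rank correlation between matching bands (illustrative sketch).

    Both cubes are (rows, cols, bands) arrays covering the same area
    and band set; returns one correlation coefficient per band.
    """
    assert synthetic_cube.shape == truth_cube.shape
    bands = synthetic_cube.shape[-1]
    rhos = []
    for b in range(bands):
        # Flatten each band image and correlate the pixel rank orderings.
        rho, _ = spearmanr(synthetic_cube[..., b].ravel(),
                           truth_cube[..., b].ravel())
        rhos.append(rho)
    return np.array(rhos)
```

    A coefficient near 1 for a given band would indicate that the synthetic scene preserves the relative spatial ordering of radiance values in that band, even where absolute levels differ.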