
    An end-to-end hyperspectral scene simulator with alternate adjacency effect models and its comparison with CameoSim

    In this research, we developed a new rendering-based end-to-end hyperspectral scene simulator, CHIMES (Cranfield Hyperspectral Image Modelling and Evaluation System), which generates nadir images of passively illuminated 3-D outdoor scenes in the Visible, Near-Infrared (NIR) and Short-Wave Infrared (SWIR) regions, ranging from 360 nm to 2520 nm. MODTRAN (MODerate resolution TRANsmission) is used to generate the sky-dome environment map, which includes sun and sky radiance along with the polarisation effect of the sky due to Rayleigh scattering. Moreover, we perform path tracing and implement ray interaction with the medium and volumetric backscattering at rendering time to model the adjacency effect. We propose two variants of adjacency models. The first incorporates a single spectral albedo as the averaged background of the scene; this model, called the Background One-Spectra Adjacency Effect Model (BOAEM), is a CameoSim-like model created for performance comparison. The second model calculates the background albedo from a pixel's neighbourhood, whose size depends on the air volume between the sensor and the target and on the differential air density up to the sensor altitude. The average background reflectance of all neighbourhood pixels is computed at rendering time to estimate the total upwelled scattered radiance by volumetric scattering. This model is termed the Texture-Spectra Incorporated Adjacency Effect Model (TIAEM). Moreover, to estimate the underlying atmospheric condition, MODTRAN is run with varying aerosol optical thickness, and its total ground reflected radiance (TGRR) is compared with the TGRR of a known in-scene material. The goodness of fit is evaluated in each iteration, and the MODTRAN output with the best fit is selected. We perform a tri-modal validation of the simulators on a real hyperspectral scene by varying the atmospheric condition, the terrain surface model and the proposed variants of the adjacency models. We compared the results of our model with Lockheed Martin's well-established scene simulator CameoSim and with Ground Truth (GT) acquired by HySpex cameras. In clear-sky conditions, both CHIMES models and CameoSim are in close agreement; however, in the searched overcast conditions, CHIMES-BOAEM is shown to perform better than CameoSim in terms of the ℓ1-norm error of the whole scene with respect to GT. TIAEM produces a better radiance shape and background covariance statistics with respect to GT, which is key to good target detection performance. We also report that the results of CameoSim have a many-fold higher error for the same scene when the flat-surface terrain is replaced with a Digital Elevation Model (DEM) based rugged one.
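    The atmosphere-search step described above can be illustrated with a short sketch. The Python below is not CHIMES source code: run_modtran_tgrr is a hypothetical stand-in for an actual MODTRAN run (here a toy analytic model), the aerosol optical thickness (AOT) grid is arbitrary, and the goodness-of-fit measure (coefficient of determination against the reference TGRR) is an assumption, since the abstract does not name the exact statistic.

```python
# Sketch of the atmosphere search described in the abstract: MODTRAN is run
# over a range of aerosol optical thickness (AOT) values, the simulated total
# ground reflected radiance (TGRR) is compared with the TGRR of a known
# in-scene material, and the best-fitting run is selected.
import numpy as np

def run_modtran_tgrr(aot, wavelengths):
    # Hypothetical stand-in for a MODTRAN run; the numbers are purely
    # illustrative and not a radiative-transfer model.
    return 0.2 * np.exp(-aot * (550.0 / wavelengths))

def goodness_of_fit(simulated, reference):
    # Assumed fit statistic: coefficient of determination (R^2).
    ss_res = np.sum((reference - simulated) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def search_atmosphere(reference_tgrr, wavelengths,
                      aot_grid=np.linspace(0.05, 1.0, 20)):
    # Return the AOT whose simulated TGRR best fits the reference TGRR.
    best_aot, best_score = None, -np.inf
    for aot in aot_grid:
        score = goodness_of_fit(run_modtran_tgrr(aot, wavelengths), reference_tgrr)
        if score > best_score:
            best_aot, best_score = aot, score
    return best_aot, best_score

if __name__ == "__main__":
    wavelengths = np.linspace(360.0, 2520.0, 200)     # CHIMES spectral range [nm]
    reference = run_modtran_tgrr(0.3, wavelengths)    # pretend in-scene measurement
    print(search_atmosphere(reference, wavelengths))  # recovers AOT = 0.3
```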

    HySim: a tool for space-to-space hyperspectral resolved imagery

    This paper introduces HySim, a novel tool addressing the need for hyperspectral space-to-space imaging simulations, vital for in-orbit spacecraft inspection missions. The tool fills this gap by enabling the generation of hyperspectral space-to-space images across various scenarios, including fly-bys, inspections, rendezvous, and proximity operations. HySim combines open-source tools to handle complex scenarios, providing versatile configuration options for the imaging scenario, camera specifications, and material properties, and accurately simulates hyperspectral images of the target scene. This paper outlines HySim's features and its validation against real space-borne images, and discusses its potential applications in space missions, emphasising its role in advancing space-to-space inspection and in-orbit servicing planning.
    UK Defence and Security Accelerator (DASA)

    Robust hyperspectral image reconstruction for scene simulation applications

    This thesis presents the development of a spectral reconstruction method for multispectral (MSI) and hyperspectral (HSI) applications through enhanced dictionary learning and spectral unmixing methodologies. Earth observation/surveillance is largely undertaken by MSI sensing, such as that given by Landsat, WorldView, Sentinel, etc.; however, the practical usefulness of the MSI data set is very limited. This is mainly because of the very limited number of wavebands that can be provided by MSI imagery. One means to remedy this major shortcoming is to extend the MSI into HSI without expensive hardware investment. Specifically, spectral reconstruction has been one of the most critical elements in applications such as hyperspectral scene simulation. Hyperspectral scene simulation has been an important technique, particularly for defence applications. Scene simulation creates a virtual scene such that the modelling of the materials in the scene can be tailored freely, allowing certain parameters of the model to be studied. In the defence sector this is the most cost-effective technique for evaluating the vulnerability of soldiers/vehicles before they are deployed to foreign ground. The simulation of a hyperspectral scene requires the details of the materials in the scene, which are normally not available. Current state-of-the-art approaches try to make use of MSI satellite data and transform it into HSI for hyperspectral scene simulation. One way to achieve this is through a reconstruction algorithm, commonly known as spectral reconstruction, which turns the MSI into HSI using an optimisation approach. The methodology adopted in this thesis is the development of a robust dictionary learning method to estimate the endmembers (EMs). Once the EMs are found, the abundances of the materials in the scene can subsequently be estimated through a linear unmixing approach. Conventional approaches to material allocation in most hyperspectral scene simulators use the Texture Material Mapper (TMM) algorithm, which allocates materials from a spectral library (a database of pre-compiled endmember materials) according to the minimum spectral Euclidean distance to a candidate pixel of the scene. This approach has been shown (in this work) to be highly inaccurate, with large scene reconstruction error. This research uses a dictionary learning technique for material allocation, solving it as an optimisation problem with the objectives of: (i) reconstructing the scene as closely as possible to the ground truth, with a fraction of the error given by the TMM method, and (ii) learning trace materials, clustering with 2-3 times the number of species (i.e. the intrinsic dimension) in the scene, to ensure all material species in the scene are included in the scene reconstruction. Furthermore, two approaches complementing the goals of the learned dictionary have been proposed in this work: first, a rapid orthogonal matching pursuit (r-OMP), which enhances the performance of the orthogonal matching pursuit algorithm; and second, a semi-blind approximation of the irradiance of all pixels in the scene, including those in the shaded regions. The main result of this research is the demonstration of the effectiveness of the proposed algorithms using real data sets. SCD-SOMP has been shown to be capable of learning both the background and trace materials, even for a dictionary with a small number of atoms (≈10). Also, the KMSCD method is found to be the more versatile, with an overcomplete (non-orthogonal) dictionary capable of learning trace materials with high scene reconstruction accuracy (a 2x accuracy enhancement over that simulated using the TMM method). Although this work has achieved an incremental improvement in spectral reconstruction, the need for dictionary training using a hyperspectral data set has been identified as a limitation to be removed in future research.
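    The contrast drawn above between TMM-style material allocation and sparse reconstruction over a dictionary can be illustrated with a short sketch. The Python below is not the thesis code: the spectral library and the mixed pixels are random stand-ins, scikit-learn's standard orthogonal matching pursuit is used in place of the proposed r-OMP, and all names are assumptions rather than identifiers from the thesis.

```python
# (i) TMM-style allocation: each pixel is assigned the single library
#     endmember with minimum spectral Euclidean distance.
# (ii) Sparse reconstruction: each pixel is approximated as a small mixture
#     of dictionary atoms selected by orthogonal matching pursuit (OMP).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_bands, n_endmembers, n_pixels = 200, 12, 50
library = rng.random((n_bands, n_endmembers))               # columns = endmember spectra
abundances = rng.dirichlet(np.ones(n_endmembers), n_pixels).T
pixels = library @ abundances                               # toy linearly mixed pixels

# (i) TMM-style hard allocation by nearest endmember.
dists = np.linalg.norm(pixels[:, :, None] - library[:, None, :], axis=0)
tmm_recon = library[:, dists.argmin(axis=1)]
tmm_err = np.linalg.norm(pixels - tmm_recon) / np.linalg.norm(pixels)

# (ii) Sparse reconstruction over the dictionary with OMP.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=4)
omp_recon = np.empty_like(pixels)
for i in range(n_pixels):
    omp.fit(library, pixels[:, i])
    omp_recon[:, i] = library @ omp.coef_ + omp.intercept_
omp_err = np.linalg.norm(pixels - omp_recon) / np.linalg.norm(pixels)

print(f"relative reconstruction error  TMM: {tmm_err:.3f}  OMP: {omp_err:.3f}")
```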