
    High-resolution, slant-angle scene generation and validation of concealed targets in DIRSIG

    Traditionally, synthetic imagery has been constructed to simulate images captured with low-resolution, nadir-viewing sensors. Advances in sensor design have driven a need to simulate scenes not only at higher resolutions but also from oblique view angles. The primary efforts of this research include real image capture, scene construction and modeling, and validation of the synthetic imagery in the reflective portion of the spectrum. High-resolution imagery of an area named MicroScene at the Rochester Institute of Technology was collected at an oblique view angle using the Chester F. Carlson Center for Imaging Science's MISI and WASP sensors. Three Humvees, the primary targets, were placed in the scene under three different levels of concealment. Following the collection, a synthetic replica of the scene was constructed and then rendered with the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, configured to recreate the scene both spatially and spectrally based on actual sensor characteristics. Finally, the synthetic imagery was validated against the real images of MicroScene using a combination of qualitative analysis, Gaussian maximum likelihood classification, grey-level co-occurrence matrix derived texture metrics, and the RX algorithm. The model was updated after each validation in a cyclical development approach. The purpose of this research is to provide a level of confidence in the synthetic imagery produced by DIRSIG so that it can be used to train and develop algorithms for real-world concealed target detection.
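    For context, the RX algorithm named in the validation above is a standard Mahalanobis-distance anomaly detector. Below is a minimal NumPy sketch of the global RX formulation; the function name and the regularization term are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly detector for a hyperspectral cube.

    cube: (rows, cols, bands) array. Returns a (rows, cols) map of squared
    Mahalanobis distances from the scene-wide background mean; high values
    flag pixels that deviate from the background statistics.
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(np.float64)
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    # Background covariance, lightly regularized for numerical stability
    # (the regularization is an assumption of this sketch).
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(bands)
    cov_inv = np.linalg.inv(cov)
    # Per-pixel Mahalanobis distance: (x - mu)^T Sigma^{-1} (x - mu).
    scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return scores.reshape(rows, cols)
```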

    One-Stage Cascade Refinement Networks for Infrared Small Target Detection

    Single-frame InfraRed Small Target (SIRST) detection has been a challenging task due to a lack of inherent characteristics, imprecise bounding box regression, a scarcity of real-world datasets, and sensitive localization evaluation. In this paper, we propose a comprehensive solution to these challenges. First, we find that the existing anchor-free label assignment method is prone to mislabeling small targets as background, leading to their omission by detectors. To overcome this issue, we propose an all-scale pseudo-box-based label assignment scheme that relaxes the constraints on scale and decouples the spatial assignment from the size of the ground-truth target. Second, motivated by the structured prior of feature pyramids, we introduce the one-stage cascade refinement network (OSCAR), which uses the high-level head as soft proposals for the low-level refinement head. This allows OSCAR to process the same target in a cascade coarse-to-fine manner. Finally, we present a new research benchmark for infrared small target detection, consisting of the SIRST-V2 dataset of real-world, high-resolution single-frame targets, the normalized contrast evaluation metric, and the DeepInfrared toolkit for detection. We conduct extensive ablation studies to evaluate the components of OSCAR and compare its performance to state-of-the-art model-driven and data-driven methods on the SIRST-V2 benchmark. Our results demonstrate that a top-down cascade refinement framework can improve the accuracy of infrared small target detection without sacrificing efficiency. The DeepInfrared toolkit, dataset, and trained models are available at https://github.com/YimianDai/open-deepinfrared to advance further research in this field.
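    As a rough illustration of the pseudo-box idea, the toy assignment below marks every feature-map location within a fixed radius of a ground-truth centre as positive, decoupling assignment from the (tiny) target extent; the radius parameter and grid mapping are assumptions of this sketch, not the paper's exact scheme.

```python
import numpy as np

def assign_labels(feat_h, feat_w, stride, centers, radius=2.0):
    """Toy pseudo-box label assignment for small targets.

    Instead of matching feature locations against the tiny ground-truth
    box, every location within `radius` grid cells of a target centre is
    marked positive, so assignment no longer depends on target size.

    centers: list of (x, y) target centres in image coordinates.
    Returns an (feat_h, feat_w) binary label map.
    """
    ys, xs = np.mgrid[0:feat_h, 0:feat_w]
    # Map grid locations back to image coordinates (cell centres).
    img_x = (xs + 0.5) * stride
    img_y = (ys + 0.5) * stride
    labels = np.zeros((feat_h, feat_w), dtype=np.int64)
    for cx, cy in centers:
        dist = np.hypot(img_x - cx, img_y - cy)
        labels[dist <= radius * stride] = 1
    return labels
```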

    The LOFAR LBA Sky Survey: Deep Fields I. The Boötes Field

    © ESO 2021. This is the accepted manuscript version of an article published in final form at https://doi.org/10.1051/0004-6361/202141745. We present the first sub-mJy (≈0.7 mJy beam⁻¹) survey to be completed below 100 MHz, which is over an order of magnitude deeper than previously achieved for widefield imaging of any field at these low frequencies. The high-resolution (15 × 15 arcsec) image of the Boötes field at 34-75 MHz is made from 56 hours of observation with the LOw Frequency ARray (LOFAR) Low Band Antenna (LBA) system. The observations and data reduction, including direction-dependent calibration, are described here. We present a radio source catalogue containing 1,948 sources detected over an area of 23.6 deg², with a peak flux density threshold of 5σ. Using existing datasets, we characterise the astrometric and flux density uncertainties, finding a positional uncertainty of ~1.2 arcsec and a flux density scale uncertainty of about 5 per cent. Using the available deep 144-MHz data, we identified 144-MHz counterparts to all the 54-MHz sources, and produced a matched catalogue within the deep optical coverage area containing 829 sources. We calculate the Euclidean-normalised differential source counts and investigate the low-frequency radio source spectral indices between 54 and 144 MHz, both of which show a general flattening of the radio spectral indices towards lower flux densities, from ~−0.75 at 144-MHz flux densities between 100-1000 mJy to ~−0.5 at 144-MHz flux densities between 5-10 mJy, due to a growing population of star-forming galaxies and compact core-dominated AGN.
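    For reference, the two-point spectral indices quoted above follow from the power-law convention S ∝ ν^α; the sketch below computes the 54-144 MHz index under that convention (the function name and example values are illustrative, not from the paper).

```python
import numpy as np

def spectral_index(s54, s144, nu1=54.0, nu2=144.0):
    """Two-point spectral index alpha, assuming S proportional to nu**alpha.

    s54, s144: flux densities (same units) at 54 and 144 MHz.
    A flat-spectrum source gives alpha near 0; a typical synchrotron
    source gives alpha around -0.7.
    """
    return np.log(s54 / s144) / np.log(nu1 / nu2)

# Example: a hypothetical source with 200 mJy at 54 MHz and 100 mJy at
# 144 MHz yields alpha = ln(2) / ln(54/144) ~= -0.71.
print(spectral_index(200.0, 100.0))
```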

    Mapping urban tree species in a tropical environment using airborne multispectral and LiDAR data

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.
    Accurate and up-to-date urban tree inventories are an essential resource for developing strategies towards sustainable urban planning, as well as for the effective management and preservation of biodiversity. Trees contribute to thermal comfort within urban centers by lessening the heat island effect and directly reduce air pollution. However, mapping individual tree species normally involves time-consuming field work over large areas or image interpretation performed by specialists. The integration of airborne LiDAR data with high-spatial-resolution multispectral aerial imagery is an effective alternative approach for differentiating tree species at the individual crown level. This thesis investigates the potential of such remotely sensed data to discriminate five common urban tree species using traditional machine learning classifiers (Random Forest, Support Vector Machine, and k-Nearest Neighbors) in the tropical environment of Salvador, Brazil. Vegetation indices and texture information were extracted from the multispectral imagery and, together with LiDAR-derived variables for tree crowns, were tested separately and in combination to classify tree species with the three classifiers. Random Forest outperformed the other two classifiers, reaching an overall accuracy of 82.5% when using the combined multispectral and LiDAR data. The results indicate that (1) given the similarity in spectral signatures, multispectral data alone are not sufficient to distinguish tropical tree species (only the k-NN classifier could detect all species); (2) height values and the intensity of crown return points were the most relevant LiDAR features, and combining both datasets improved accuracy by up to 20%; and (3) generating a canopy height model from the LiDAR point cloud is an effective, semi-automatic method for delineating individual tree crowns.
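    A minimal scikit-learn sketch of the combined-feature Random Forest classification described above follows; the feature table is synthetic, and the column choices (vegetation indices, texture, crown height and intensity statistics) are stand-ins for the thesis's actual variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical per-crown feature table: multispectral descriptors stacked
# with LiDAR-derived crown statistics (synthetic data for illustration).
rng = np.random.default_rng(0)
X_spectral = rng.normal(size=(500, 8))  # e.g. NDVI, band ratios, GLCM texture
X_lidar = rng.normal(size=(500, 4))     # e.g. crown height, return intensity stats
X = np.hstack([X_spectral, X_lidar])
y = rng.integers(0, 5, size=500)        # five tree species labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(f"overall accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```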

    Semi-Automated DIRSIG scene modeling from 3D lidar and passive imagery

    The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is an established, first-principles based scene simulation tool that produces synthetic multispectral and hyperspectral images from the visible to long wave infrared (0.4 to 20 microns). Over the last few years, significant enhancements such as spectral polarimetric and active Light Detection and Ranging (lidar) models have also been incorporated into the software, providing an extremely powerful tool for multi-sensor algorithm testing and sensor evaluation. However, the extensive time required to create large-scale scenes has limited DIRSIG's ability to generate scenes "on demand." To date, scene generation has been a laborious, time-intensive process, as the terrain model, CAD objects, and background maps have to be created and attributed manually. To shorten this process, this research developed an approach to reduce the man-in-the-loop requirements for several aspects of synthetic scene construction. Through a fusion of 3D lidar data with passive imagery, we were able to semi-automate several of the required tasks in the DIRSIG scene creation process. Additionally, many of the remaining tasks realized a shortened implementation time through this application of multi-modal imagery. Lidar data are exploited to identify ground and object features as well as to define initial tree location and building parameter estimates. These estimates are then refined by analyzing high-resolution frame array imagery using the concepts of projective geometry, in lieu of the more common Euclidean approach found in most traditional photogrammetric references. Spectral imagery is also used to assign material characteristics to the modeled geometric objects. This is achieved through a modified atmospheric compensation applied to raw hyperspectral imagery. These techniques have been successfully applied to imagery collected over the RIT campus and the greater Rochester area. The data used include multiple-return point information provided by an Optech lidar linescanning sensor, multispectral frame array imagery from the Wildfire Airborne Sensor Program (WASP) and WASP-lite sensors, and hyperspectral data from the Modular Imaging Spectrometer Instrument (MISI) and the COMPact Airborne Spectral Sensor (COMPASS). Information from these image sources was fused and processed using the semi-automated approach to provide the DIRSIG input files used to define a synthetic scene. When compared to the standard manual process for creating these files, we achieved approximately a tenfold increase in speed, as well as a significant increase in geometric accuracy.
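    The lidar-driven ground/object separation step can be illustrated with a simple grid-minimum terrain estimate; this is an assumption-laden toy version of the idea, not the semi-automated workflow (with projective-geometry refinement) developed in the research.

```python
import numpy as np

def height_above_ground(points, cell=1.0):
    """Crude ground/object separation for a lidar point cloud.

    points: (N, 3) array of x, y, z coordinates. The lowest return in each
    grid cell approximates the local ground surface; subtracting it leaves
    above-ground heights from which tree and building seeds can be drawn.
    """
    ix = np.floor(points[:, 0] / cell).astype(int)
    iy = np.floor(points[:, 1] / cell).astype(int)
    ix -= ix.min()
    iy -= iy.min()
    # Per-cell minimum z as a rough digital terrain model.
    ground = np.full((ix.max() + 1, iy.max() + 1), np.inf)
    np.minimum.at(ground, (ix, iy), points[:, 2])
    return points[:, 2] - ground[ix, iy]
```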

    SDDNet: Infrared small and dim target detection network

    This study develops deep learning methods for small and dim target detection. We model infrared images as the union of a target region and a background region. Under this model, target detection becomes a two-class segmentation problem that divides an image into target and background. We therefore construct a neural network called SDDNet for single-frame images, which yields target extraction results directly from the original images. For multiframe images, we construct IC-SDDNet, a combination of SDDNet and an interframe correlation network module. SDDNet and IC-SDDNet achieve target detection rates close to 1 on typical datasets with very low false positive rates, performing significantly better than current methods. Both models run end to end, making them convenient to use and highly efficient: SDDNet and IC-SDDNet reach average speeds of 540+ and 230+ FPS respectively on a single Tesla V100 graphics processing unit, and 170+ and 60+ FPS respectively on a single Jetson TX2 embedded module. Additionally, neither network needs future information, so both can be used directly in real-time systems. The well-trained models and code used in this study are available at https://github.com/LittlePieces/ObjectDetection
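    To make the two-class segmentation framing concrete, here is a minimal fully convolutional stand-in in PyTorch; it is an illustrative toy under that framing, not the SDDNet or IC-SDDNet architecture itself.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal fully convolutional two-class (target vs. background)
    segmentation net for single-frame infrared images; an illustrative
    stand-in, not the SDDNet architecture."""

    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1),  # per-pixel target logit
        )

    def forward(self, x):
        return self.body(x)

# Usage: threshold the per-pixel probabilities to get a binary target mask.
net = TinySegNet()
frame = torch.randn(1, 1, 256, 256)      # one single-channel infrared frame
mask = torch.sigmoid(net(frame)) > 0.5   # (1, 1, 256, 256) boolean mask
```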