134 research outputs found

    Performance Evaluation of Aspect Dependent-Based Ghost Suppression Methods for Through-the-Wall Radar Imaging

    Many approaches address multipath ghost challenges in Through-the-Wall Radar Imaging (TWRI) under the Compressive Sensing (CS) framework. One class of approaches, which exploits the ghosts' locations in the images and is termed Aspect Dependent (AD), requires no prior knowledge of the reflecting geometry, making it superior to multipath-exploitation-based approaches. However, which method within the AD-based category performs best is still unknown, so this paper presents a performance evaluation of these methods in terms of target reconstruction. First, the methods were grouped by how the subarrays were applied: multiple subarray, hybrid subarray and sparse array. The methods were then evaluated under varying noise levels, data volumes and numbers of targets in the scene. Simulation results show that, in a noisy environment, the hybrid subarray-based approaches were more robust than the multiple subarray and sparse array approaches. At 15 dB signal-to-noise ratio, the hybrid subarray exhibited a signal-to-clutter ratio 3.9 dB and 4.5 dB above the multiple subarray and sparse array, respectively. At high data volumes, or with multiple targets in the scene, multiple-subarray methods with two subarrays became the best candidates. Keywords: Aspect dependent; compressive sensing; point target; through-wall-radar imaging
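The signal-to-clutter ratio used as the comparison metric above can be sketched as follows. The toy scene, the target mask, and the exact SCR definition (mean target-pixel power over mean clutter-pixel power, in dB) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def signal_to_clutter_ratio_db(image, target_mask):
    """Illustrative SCR: mean target-pixel power over mean
    clutter-pixel power, expressed in dB."""
    power = np.abs(image) ** 2
    signal = power[target_mask].mean()
    clutter = power[~target_mask].mean()
    return 10.0 * np.log10(signal / clutter)

# Toy 2-D "reconstructed scene": one strong target amid weak clutter.
rng = np.random.default_rng(0)
scene = 0.1 * rng.standard_normal((64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[30:34, 30:34] = True
scene[mask] += 5.0
print(round(signal_to_clutter_ratio_db(scene, mask), 1))
```

With this definition, higher SCR means ghosts and noise contribute less power relative to the reconstructed targets, which is how the methods above are ranked.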

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, the spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
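The abundance-estimation step under the linear mixing model can be sketched as a projected-gradient solve with the usual nonnegativity and sum-to-one abundance constraints. The toy endmember matrix, the step count, and the choice of projected gradient descent (rather than any specific algorithm from the overview) are illustrative assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (Duchi et al.)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def unmix_pixel(y, E, steps=500):
    """Projected gradient descent for min ||E a - y||^2
    subject to a >= 0 and sum(a) = 1 (the abundance constraints)."""
    lr = 1.0 / np.linalg.norm(E, 2) ** 2        # safe step size
    a = np.full(E.shape[1], 1.0 / E.shape[1])   # start at uniform abundances
    for _ in range(steps):
        a = project_simplex(a - lr * E.T @ (E @ a - y))
    return a

# Toy scene: 3 spectral bands, 2 endmembers, a noise-free 60/40 mixture.
E = np.array([[0.1, 0.9],
              [0.5, 0.4],
              [0.9, 0.2]])
y = E @ np.array([0.6, 0.4])
print(np.round(unmix_pixel(y, E), 3))
```

The simplex projection enforces both abundance constraints exactly at every iterate, which is why the recovered vector always sums to one.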

    Compressed sensing on terahertz imaging

    Most terahertz (THz) time-domain (pulsed) imaging experiments are performed by raster scanning the object relative to a focused THz beam and require minutes or even hours to acquire a complete image. This slow image acquisition is a major limiting factor for real-time applications. Other systems using focal-plane detector arrays can acquire images in real time, but they are too expensive or are limited by low sensitivity in the THz range. More importantly, such systems cannot provide spectroscopic information about the sample. To develop faster and more efficient THz time-domain (pulsed) imaging systems, this research used a random projection approach to reconstruct THz images from synthetic and real-world THz data, based on the concept of compressed/compressive sensing (CS). Compared with conventional THz time-domain (pulsed) imaging, no raster scanning of the object is required. The simulation results demonstrate that CS has great potential for real-time THz imaging systems because it can dramatically reduce the number of measurements in such systems. We then implemented two different CS-THz systems based on the random projection method. One is a compressive THz time-domain (pulsed) spectroscopic imaging system using a set of independent optimized masks. A single-point THz detector, together with a set of 40 optimized two-dimensional binary masks, was used to measure the THz waveforms transmitted through a sample. THz time- and frequency-domain images of the sample, comprising 20×20 pixels, were subsequently reconstructed. This demonstrated that both the spatial distribution and the spectral characteristics of a sample can be obtained by this means. Compared with conventional THz time-domain (pulsed) imaging, ten times fewer THz spectra need to be taken.
To further speed up the image acquisition and reconstruction process, another hardware implementation, a single rotating mask (i.e., a spinning disk) with random binary patterns, was used to spatially modulate a collimated THz beam. After propagating through the sample, the THz beam was measured using a single detector, and a THz image was subsequently reconstructed using the CS approach. This demonstrated that a 32×32 pixel image could be obtained from 160 to 240 measurements. This spinning-disk configuration allows the use of an electric motor to rotate the disk, enabling the experiment to be performed automatically and continuously. To the best of our knowledge, this is the first experimental implementation of a spinning-disk configuration for high-speed compressive image acquisition. A three-dimensional (3D) joint reconstruction approach was also developed to reconstruct THz images from random/incomplete subsets of THz data. Such a random sampling method provides fast THz image acquisition and also simplifies the current THz imaging hardware. The core idea extends image inpainting to the case of 3D data. Our main objective is to exploit both spatial and spectral/temporal information for recovering the missing samples. It has been shown that this approach is superior to treating the spectral/temporal images independently. We first proposed to learn a spatio-spectral/temporal dictionary from a subset of available training data. Using this dictionary, the THz images can then be jointly recovered from an incomplete set of observations. Simulation results using measured THz image data confirm that this 3D joint reconstruction approach provides a significant improvement over existing THz imaging methods.
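The single-detector measurement model described above (each measurement sums the scene through one binary mask) and a greedy sparse reconstruction can be illustrated with a deliberately tiny sketch. The 4-pixel scene, the three hand-picked binary masks, and the use of Orthogonal Matching Pursuit (rather than the reconstruction algorithm actually used in this work) are illustrative assumptions.

```python
import numpy as np

# Toy single-detector model: each measurement is the scene summed
# through one binary mask (y = Phi @ x). 3 masks, 4-pixel scene.
Phi = np.array([[1., 0., 0., 1.],
                [0., 1., 0., 1.],
                [0., 0., 1., 1.]])
x_true = np.array([3., 0., 0., 1.])     # 2-sparse scene
y = Phi @ x_true

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily add the column most
    correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    unit = Phi / np.linalg.norm(Phi, axis=0)    # normalise for selection
    for _ in range(k):
        support.append(int(np.argmax(np.abs(unit.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, 2)
print(x_hat)    # the 2-sparse scene is recovered from 3 measurements
```

The same structure scales up to the 32×32 pixel case: more mask rows in `Phi`, and an l1 or dictionary-based solver in place of the toy OMP.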

    Applied Harmonic Analysis and Data Processing

    Massive data sets have their own architecture. Each data source has an inherent structure, which we should attempt to detect in order to utilize it for applications such as denoising, clustering, anomaly detection, knowledge extraction, or classification. Harmonic analysis revolves around creating new structures for decomposition, rearrangement and reconstruction of operators and functions; in other words, inventing and exploring new architectures for information and inference. Two very successful workshops on applied harmonic analysis and sparse approximation took place in 2012 and 2015. This workshop was an evolution and continuation of those workshops, intended to bring together world-leading experts in applied harmonic analysis, data analysis, optimization, statistics, and machine learning to report on recent developments, and to foster new developments and collaborations.

    Target recognition for synthetic aperture radar imagery based on convolutional neural network feature fusion

    Driven by the great success of deep convolutional neural networks (CNNs), which are currently used in many computer vision applications, we extend visual-based CNNs into the synthetic aperture radar (SAR) data domain without employing transfer learning. Our SAR automatic target recognition (ATR) architecture efficiently extends the pretrained Visual Geometry Group (VGG) CNN from the visual domain into the X-band SAR data domain by clustering its neuron layers, bridging the visual-SAR modality gap by fusing the features extracted from the hidden layers, and employing a local feature matching scheme. Trials on the moving and stationary target acquisition dataset under various setups and nuisances demonstrate highly appealing ATR performance, achieving 100% and 99.79% accuracy on the 3-class and 10-class ATR problems, respectively. We also confirm the validity, robustness, and conceptual coherence of the proposed method by extending it to several state-of-the-art CNNs and commonly used local feature similarity/matching metrics.
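One simple way to picture fusing features from several hidden layers and matching by similarity is sketched below. The made-up layer vectors, the normalise-and-concatenate fusion, and plain cosine similarity are illustrative stand-ins for the paper's layer-clustering and local feature matching scheme.

```python
import numpy as np

def fuse_features(layer_vectors):
    """Fuse per-layer CNN activations by L2-normalising each layer's
    feature vector and concatenating them (a simple fusion strategy)."""
    return np.concatenate([v / np.linalg.norm(v) for v in layer_vectors])

def cosine_match(query, gallery):
    """Index of the gallery template most similar to the query."""
    q = query / np.linalg.norm(query)
    sims = [q @ (g / np.linalg.norm(g)) for g in gallery]
    return int(np.argmax(sims))

# Toy fused templates for two hypothetical target classes, and a query
# whose (made-up) layer activations resemble class 0.
t0 = fuse_features([np.array([1.0, 0.0]), np.array([1.0, 1.0])])
t1 = fuse_features([np.array([0.0, 1.0]), np.array([1.0, -1.0])])
query = fuse_features([np.array([0.9, 0.1]), np.array([1.0, 0.8])])
print(cosine_match(query, [t0, t1]))
```

Per-layer normalisation keeps any single layer from dominating the fused descriptor, which is one motivation for fusing rather than using the last layer alone.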

    Compressive Sensing and Imaging of Guided Ultrasonic Wavefields

    Structural health monitoring (SHM) and nondestructive evaluation (NDE) technologies can be used to predict a structure's remaining useful life through appropriate diagnosis and prognosis methodologies. The main goal is the detection and characterization of defects that may compromise the integrity and operability of a structure. Lamb waves, which are ultrasonic guided waves (GW), have shown potential for detecting damage in specimens as part of SHM or NDE systems. These methods can play a significant role in monitoring and tracking the integrity of structures by estimating the presence, location, severity, and type of damage. One of the advantages of GW is their capacity to propagate over large areas with excellent sensitivity to a variety of damage types while maintaining a short wavelength, which guarantees the detectability of small structural damage. Guided ultrasonic wavefield imaging (GWI) is an advanced technique for damage localization and identification on a structure. GWI generally refers to the analysis of a series of images representing the time evolution of propagating waves and, possibly, their interaction with defects. This technique can provide useful insights into structural conditions. High-resolution wavefield imaging has now been widely studied and applied in damage identification. However, full wavefield imaging techniques have some limitations, including slow data acquisition and lack of accuracy. The objective of this dissertation is to develop novel, high-resolution guided wavefield imaging techniques able to detect defects in metals and composite materials while reducing the acquisition time without losing detection accuracy.
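A minimal sketch of analysing a wavefield image series: a per-pixel RMS energy map over the frame sequence, where a local energy anomaly stands in for wave-damage interaction. The synthetic wavefield and the RMS-energy indicator are illustrative assumptions, not one of the dissertation's methods.

```python
import numpy as np

def energy_map(frames):
    """Cumulative RMS energy per pixel over a wavefield time series
    (frames shaped [time, rows, cols]); local energy anomalies are a
    simple indicator of wave-defect interaction."""
    return np.sqrt(np.mean(frames ** 2, axis=0))

# Toy wavefield: a uniform oscillation plus a stronger response at
# pixel (5, 5), standing in for scattering from a defect.
t = np.linspace(0.0, 1.0, 100)
frames = np.sin(2 * np.pi * 5 * t)[:, None, None] * np.ones((100, 16, 16))
frames[:, 5, 5] *= 3.0
emap = energy_map(frames)
print(np.unravel_index(np.argmax(emap), emap.shape))
```

Real GWI pipelines apply far richer processing (e.g., frequency-wavenumber filtering) to the same [time, rows, cols] data cube; the energy map only illustrates the data layout.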

    Endmember-Guided Unmixing Network (EGU-Net): A General Deep Learning Framework for Self-Supervised Hyperspectral Unmixing

    Over the past decades, enormous efforts have been made to improve the performance of linear and nonlinear mixing models for hyperspectral unmixing (HU), yet their ability to simultaneously generalize across various spectral variabilities (SVs) and extract physically meaningful endmembers remains limited, due to poor data fitting and reconstruction ability and sensitivity to the various SVs. Inspired by the powerful learning ability of deep learning (DL), we develop a general DL approach for HU that fully considers the properties of endmembers extracted from the hyperspectral imagery, called the endmember-guided unmixing network (EGU-Net). Going beyond a standalone autoencoder-like architecture, EGU-Net is a two-stream Siamese deep network that learns an additional network from pure or nearly pure endmembers to correct the weights of the unmixing network, by sharing network parameters and adding spectrally meaningful constraints (e.g., nonnegativity and sum-to-one), toward a more accurate and interpretable unmixing solution. Furthermore, the resulting general framework is not limited to pixelwise spectral unmixing but is also applicable to spatial information modeling with convolutional operators for spatial–spectral unmixing. Experimental results on three different datasets with ground-truth abundance maps for each material demonstrate the effectiveness and superiority of EGU-Net over state-of-the-art unmixing algorithms. The code will be available at: https://github.com/danfenghong/IEEE_TNNLS_EGU-Net
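The abundance constraints mentioned above (nonnegativity and sum-to-one) can be built into a network head structurally rather than via penalties. The sketch below uses a softmax over hypothetical encoder logits and a linear endmember decoder; it is only a schematic of the constraint idea, not the EGU-Net architecture.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax; its output is automatically
    nonnegative and sums to one."""
    e = np.exp(z - z.max())
    return e / e.sum()

def decode(abundances, endmembers):
    """Linear decoder of an unmixing autoencoder: the reconstructed
    pixel is the abundance-weighted sum of endmember spectra."""
    return endmembers @ abundances

z = np.array([2.0, 0.0, -1.0])           # hypothetical encoder logits
a = softmax(z)                           # valid abundance vector by construction
E = np.array([[0.1, 0.5, 0.9],
              [0.8, 0.4, 0.2]])          # toy: 2 bands, 3 endmembers
pixel_hat = decode(a, E)
print(np.round(pixel_hat, 3))
```

Because the constraints hold by construction, the training loss only has to fit the reconstruction, which is one common way such "spectrally meaningful constraints" are imposed in unmixing networks.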

    Sparse/DCT (S/DCT) Two-Layered Representation of Prediction Residuals for Video Coding


    A Novel Algorithm for Compression of High Amplitude Resolution Seismic Data

    Renewable sources cannot meet the energy demand of a growing global market. It is therefore expected that oil & gas will remain a substantial source of energy in the coming years. To find new oil & gas deposits that would satisfy growing global energy demands, significant effort is constantly invested in increasing the efficiency of seismic surveys. It is commonly considered that, in the initial phase of exploration and production of a new field, high-resolution and high-quality images of the subsurface are of great importance. As one part of the seismic data processing chain, efficient management and delivery of the large data sets produced by the industry during seismic surveys becomes extremely important in order to facilitate further seismic data processing and interpretation. In this respect, efficiency relies to a large extent on the efficiency of the compression scheme, which is often required to enable faster transfer of and access to data, as well as efficient data storage. Motivated by the superior performance of High Efficiency Video Coding (HEVC), and driven by the rapid growth in data volume produced by seismic surveys, this work explores a 32 bits per pixel (b/p) extension of the HEVC codec for compression of seismic data. It is proposed to reassemble seismic slices into a format that corresponds to a video signal and to benefit from the coding gain achieved by the HEVC inter mode, besides the possible advantages of the (still-image) HEVC intra mode. To this end, this work modifies almost all components of the original HEVC codec to cater for high bit-depth coding of seismic data: the Lagrange multiplier used in optimization of the coding parameters has been adapted to the new data statistics, the core transform and quantization have been reimplemented to handle the increased bit-depth range, and a modified adaptive binary arithmetic coder has been employed for efficient entropy coding.
In addition, optimized block selection, reduced intra prediction modes, and flexible motion estimation are tested to adapt to the structure of seismic data. Even though the new codec, after implementation of the proposed modifications, goes beyond the standardized HEVC, it still maintains a generic HEVC structure and is developed under the general HEVC framework. There is no similar work in the field of seismic data compression that uses HEVC as a base codec. Thus, a specific codec design has been tailored which, when compared to JPEG-XR and a commercial wavelet-based codec, significantly improves the peak signal-to-noise ratio (PSNR) vs. compression ratio performance for 32 b/p seismic data. Depending on the proposed configuration, the PSNR gain ranges from 3.39 dB up to 9.48 dB. Also, relying on the specific characteristics of seismic data, an optimized encoder is proposed in this work. It reduces encoding time by 67.17% for the All-I configuration on the trace image dataset, and by 67.39% for the All-I, 97.96% for the P2 and 98.64% for the B configuration on the 3D wavefield dataset, with negligible coding performance losses. As a side contribution of this work, HEVC is analyzed within all of its functional units, so that the presented work can itself serve as a specific overview of the methods incorporated into the standard.
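The PSNR-vs-compression-ratio comparison above can be grounded with the PSNR definition for high bit-depth samples. Setting the peak from the 32 b/p bit depth and the uniform-error toy data below are illustrative assumptions; the thesis may define the peak differently for signed seismic amplitudes.

```python
import numpy as np

def psnr_db(original, reconstructed, bit_depth=32):
    """PSNR in dB with the peak set by the sample bit depth
    (for 32 b/p data, MAX = 2**32 - 1)."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    peak = 2.0 ** bit_depth - 1.0
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a uniform reconstruction error of 2**16 on 32-bit samples.
orig = np.full((8, 8), 2 ** 20, dtype=np.int64)
recon = orig + 2 ** 16
print(round(psnr_db(orig, recon), 2))
```

Because the peak grows with bit depth, PSNR figures for 32 b/p data are not directly comparable to 8-bit video PSNR; gains like the 3.39-9.48 dB above are only meaningful against codecs evaluated at the same bit depth.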