12 research outputs found

    Multiple Object Detection in Hyperspectral Imagery Using Spectral Fringe-Adjusted Joint Transform Correlator

    Hyperspectral imaging (HSI) sensors provide abundant spectral information that can uniquely identify materials by their reflectance spectra, and this information has been used effectively for object detection and identification. Joint transform correlation (JTC) based object detection techniques for HSI have been proposed in the literature, such as spectral fringe-adjusted joint transform correlation (SFJTC) and its several improvements. To our knowledge, however, SFJTC-based techniques were designed to detect only similar patterns in a hyperspectral data cube, not dissimilar ones. Thus, in this paper, a new deterministic object detection approach using SFJTC is proposed to perform multiple dissimilar target detection in hyperspectral imagery. In this technique, input spectral signatures from a given hyperspectral data cube are correlated with multiple reference signatures using the class-associative technique. To achieve better correlation output, the concept of SFJTC and the modified Fourier-plane image subtraction technique are incorporated into the multiple-target detection process. The output of this technique provides sharp, high correlation peaks for a match and negligible or no correlation peaks for a mismatch. Test results on a real-life hyperspectral data cube show that the proposed algorithm can successfully detect multiple dissimilar patterns with high discrimination.
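As a rough, hypothetical sketch of the mechanism behind SFJTC (not the authors' implementation), the joint-transform step can be illustrated in 1-D with NumPy: two spectral signatures are placed side by side, the joint power spectrum is formed, and a fringe-adjusted filter sharpens the cross-correlation peaks. The signatures are random stand-ins, and the `alpha` regularizer is a simplification of the filter's pole-avoidance terms:

```python
import numpy as np

def sfjtc_correlate(ref, probe, alpha=1.0):
    """1-D sketch of spectral fringe-adjusted joint transform correlation.

    ref and probe are spectral signatures. They are placed side by side in
    a joint input; the joint power spectrum is multiplied by a
    fringe-adjusted filter ~ 1/(|R|^2 + alpha * max|R|^2), where the alpha
    term is a simplified regularizer.
    """
    n = len(ref)
    joint = np.concatenate([ref, np.zeros(n), probe])  # joint input plane
    jps = np.abs(np.fft.fft(joint)) ** 2               # joint power spectrum
    r_pow = np.abs(np.fft.fft(ref, 3 * n)) ** 2
    faf = 1.0 / (r_pow + alpha * r_pow.max())          # fringe-adjusted filter
    return np.abs(np.fft.ifft(faf * jps))              # correlation plane

rng = np.random.default_rng(0)
n = 256
sig = rng.standard_normal(n)                 # hypothetical spectral signature
match = sfjtc_correlate(sig, sig)            # matched pair
mismatch = sfjtc_correlate(sig, rng.standard_normal(n))
# cross-correlation peaks appear at lags +/- 2n (index 2*n here)
```

For a matched pair the correlation plane shows a distinct peak at the cross-correlation lag; for a dissimilar signature that peak collapses, which is the match/mismatch contrast the abstract describes.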

    Multiclass Object Detection with Single Query in Hyperspectral Imagery Using Class-Associative Spectral Fringe-Adjusted Joint Transform Correlation

    We present a deterministic object detection algorithm capable of detecting multiclass objects in hyperspectral imagery (HSI) without any training or preprocessing. The proposed method, named class-associative spectral fringe-adjusted joint transform correlation (CSFJTC), is based on joint transform correlation (JTC) between object and nonobject spectral signatures to search for a similar match, and requires only one query (training-free) from the object's spectral signature. Our method utilizes class-associative filtering, modified Fourier-plane image subtraction, and fringe-adjusted JTC techniques in the spectral correlation domain to perform the object detection task. The output of CSFJTC yields a pair of sharp correlation peaks for a matched target and negligible or no correlation peaks for a mismatch. Experimental results, in terms of receiver operating characteristic (ROC) curves and area-under-ROC (AUROC), on three popular real-world hyperspectral datasets demonstrate the superiority of the proposed CSFJTC technique over other well-known hyperspectral object detection approaches.

    A Robust Fringe-Adjusted Joint Transform Correlator for Efficient Object Detection

    The fringe-adjusted joint transform correlation (FJTC) technique has been widely used for real-time optical pattern recognition. However, the classical FJTC technique suffers from target distortions caused by noise, scale, rotation, and illumination variations of the targets in input scenes. Several improvements to FJTC have been proposed in the literature to address these problems: synthetic discriminant function (SDF) based FJTC was designed to alleviate scale and rotation variations of the target, whereas wavelet-based FJTC has been found to yield better performance for noisy targets in the input scenes. While these techniques integrate specific features to improve the performance of FJTC, a unified and synergistic approach that equips FJTC with all of these robust features has yet to be developed. Thus, in this paper, a robust FJTC technique based on a sequential filtering approach is proposed. The proposed method is designed to be insensitive to rotation, scale, noise, and illumination variations of the targets. Specifically, local phase (LP) features from the monogenic signal are utilized to reduce the effect of background illumination, thereby achieving illumination invariance; the SDF is implemented to achieve rotation and scale invariance; and the logarithmic fringe-adjusted filter (LFAF) is employed to reduce the noise effect. The proposed technique can be used as a real-time region-of-interest detector in wide-area surveillance for automatic object detection. The feasibility of the proposed technique has been tested on aerial imagery, showing promising detection accuracy.
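A minimal sketch of the SDF idea mentioned above (not the paper's full sequential filter): a single composite filter is synthesized from several distorted training views so that its correlation peak with each view equals a prescribed value, h = X (XᵀX)⁻¹u. The random "views" here are hypothetical stand-ins for rotated/scaled versions of a target:

```python
import numpy as np

def sdf_filter(train, peaks):
    """Equal-correlation-peak synthetic discriminant function (sketch).

    train: (dim, m) matrix whose m columns are flattened training views
    (e.g., rotated or scaled versions of the target).
    peaks: length-m vector of desired correlation-peak values.
    Returns h = X (X^T X)^{-1} u, so that train.T @ h == peaks.
    """
    G = train.T @ train                  # Gram matrix of the training views
    return train @ np.linalg.solve(G, peaks)

rng = np.random.default_rng(3)
views = rng.standard_normal((256, 4))    # 4 hypothetical distorted views
u = np.ones(4)                           # equal peaks => distortion tolerance
h = sdf_filter(views, u)
```

By construction the filter responds identically to every trained distortion, which is how SDF-based FJTC gains rotation and scale tolerance.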

    Jamming Detection and Classification in OFDM-based UAVs via Feature- and Spectrogram-tailored Machine Learning

    In this paper, a machine learning (ML) approach is proposed to detect and classify jamming attacks against orthogonal frequency division multiplexing (OFDM) receivers, with applications to unmanned aerial vehicles (UAVs). Using software-defined radio (SDR), four types of jamming attacks, namely barrage, protocol-aware, single-tone, and successive-pulse, are launched and investigated. Each type is qualitatively evaluated in terms of jamming range, launch complexity, and attack severity. Then, a systematic testing procedure is established by placing an SDR in the vicinity of a UAV (i.e., drone) to extract radiometric features before and after a jamming attack is launched. Numeric features that include signal-to-noise ratio (SNR), energy threshold, and key OFDM parameters are used to develop a feature-based classification model via conventional ML algorithms. Furthermore, spectrogram images collected following the same testing procedure are exploited to build a spectrogram-based classification model via state-of-the-art deep learning algorithms (i.e., convolutional neural networks). The performance of both types of algorithms is analyzed quantitatively with metrics including detection and false-alarm rates. Results show that the spectrogram-based model classifies jamming with an accuracy of 99.79% and a false-alarm rate of 0.03%, compared to 92.20% and 1.35%, respectively, for the feature-based counterpart.
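To illustrate why spectrograms separate jamming types, here is a hypothetical NumPy-only sketch (not the paper's pipeline): a plain windowed-FFT spectrogram and a single spectral-shape feature that distinguishes narrowband (single-tone) from broadband (barrage) interference. The waveform, amplitudes, and feature are stand-ins, not the authors' radiometric features:

```python
import numpy as np

def spectrogram(x, nfft=64, hop=32):
    """Magnitude spectrogram from Hann-windowed FFT frames (sketch)."""
    win = np.hanning(nfft)
    frames = np.stack([x[i:i + nfft] * win
                       for i in range(0, len(x) - nfft + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1))

def spectral_peak_ratio(x):
    """Peak-to-mean ratio of the time-averaged spectrum: large for
    narrowband (single-tone) jamming, near 1 for broadband (barrage)."""
    avg = spectrogram(x).mean(axis=0)    # average spectrum over time
    return avg.max() / avg.mean()

rng = np.random.default_rng(7)
n = 4096
rx = rng.standard_normal(n)              # stand-in for the received waveform
tone = 4.0 * np.cos(2 * np.pi * 0.125 * np.arange(n))
single_tone = rx + tone                  # narrowband jammer
barrage = rx + 4.0 * rng.standard_normal(n)  # broadband jammer
```

A CNN operating on the full spectrogram image effectively learns many such time-frequency cues at once, which is consistent with the reported accuracy gap over hand-picked numeric features.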

    Data-Driven Artificial Intelligence for Calibration of Hyperspectral Big Data

    Near-earth hyperspectral big data present both huge opportunities and challenges for spurring developments in agriculture and high-throughput plant phenotyping and breeding. In this article, we present data-driven approaches to address the calibration challenges of utilizing near-earth hyperspectral data for agriculture. We developed a data-driven, fully automated calibration workflow that includes a suite of robust algorithms for radiometric calibration, bidirectional reflectance distribution function (BRDF) correction and reflectance normalization, soil and shadow masking, and image quality assessment. An empirical method that utilizes predetermined models between camera photon counts (digital numbers) and downwelling irradiance measurements for each spectral band was established to perform radiometric calibration. A kernel-driven semiempirical BRDF correction method based on the Ross-Thick Li-Sparse (RTLS) model was used to normalize the data for both changes in solar elevation and sensor view-angle differences attributed to pixel location within the field of view. Following rigorous radiometric and BRDF corrections, novel rule-based methods were developed for automatic soil removal, a newly proposed approach was used for image quality assessment, and shadow masking and plot-level feature extraction were carried out. Our results show that the automated calibration, processing, storage, and analysis pipeline developed in this work can effectively handle massive amounts of hyperspectral data and address the urgent challenges related to the production of sustainable bioenergy and food crops, targeting methods to accelerate plant breeding for improved yield and biomass traits.
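The empirical radiometric step can be sketched as follows, under strong simplifying assumptions (no dark offset, a linear per-band sensor model, a white calibration panel); the function names and synthetic values are hypothetical, not from the article:

```python
import numpy as np

def fit_band_gains(panel_dn, irradiance):
    """Per-band least-squares gain k relating calibration-panel digital
    numbers to downwelling irradiance, DN ~= k * E (sketch; real
    workflows also handle dark current and nonlinearity).
    panel_dn, irradiance: arrays of shape (observations, bands)."""
    return (panel_dn * irradiance).sum(axis=0) / (irradiance ** 2).sum(axis=0)

def dn_to_reflectance(dn, irradiance, gains):
    """Radiometric calibration: reflectance = DN / (k * E), per band."""
    return dn / (gains * irradiance)

# Synthetic check against a known sensor model.
rng = np.random.default_rng(1)
k_true = rng.uniform(0.5, 2.0, size=6)           # hypothetical band gains
E = rng.uniform(100.0, 200.0, size=(20, 6))      # downwelling irradiance
panel = k_true * E                               # white panel, reflectance 1
gains = fit_band_gains(panel, E)
refl = dn_to_reflectance(0.3 * k_true * E[0], E[0], gains)  # 30% target
```

Fitting the DN-irradiance model per band, then dividing observed DNs by the predicted panel response, is the essence of the empirical calibration described above; BRDF correction would follow as a separate angular normalization.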

    Augmented Reality and Artificial Intelligence in industry: Trends, tools, and future challenges

    Augmented Reality (AR) is an augmented depiction of reality formed by overlaying digital information on an image of objects seen through a device. Artificial Intelligence (AI) techniques have experienced unprecedented growth and are being applied in various industries. The combination of AR and AI is the next prominent direction in the coming years, with many industries and academic groups recognizing the importance of their adoption. With advancements in the silicon industry that push the boundaries of Moore's law, processors will be less expensive, more efficient, and power-optimized in the forthcoming years. These advances provide essential support for an AR boom, and with the help of AI there is excellent potential for smart industries to increase production speed and improve workforce training, manufacturing, error handling, assembly, and packaging. In this work, we provide a systematic review of recent advances, tools, techniques, and platforms of AI-empowered AR, along with the challenges of using AI in AR applications. This paper will serve as a guideline for future research in the domain of AI-assisted AR in industrial applications.

    C-PLES: Contextual Progressive Layer Expansion with Self-attention for Multi-class Landslide Segmentation on Mars using Multimodal Satellite Imagery

    Landslide segmentation on Earth has been a challenging computer vision task in which the lack of annotated data and limited computational resources have been major obstacles to developing accurate and scalable artificial intelligence-based models. However, accelerated progress in deep learning techniques and the availability of data-sharing initiatives have enabled significant achievements in landslide segmentation on Earth. With current capabilities in technology and data availability, replicating a similar task on other planets, such as Mars, no longer seems impossible. In this research, we present C-PLES (Contextual Progressive Layer Expansion with Self-attention), a deep learning architecture for multi-class landslide segmentation in the Valles Marineris (VM) on Mars. Even though the challenges differ from on-Earth landslide segmentation, due to the nature of the environment and data characteristics, the outcomes of this research lead to a better understanding of the geology and terrain of the planet, in addition to providing valuable insights into the importance of image modality for this task. The proposed architecture combines the merits of progressive neuron expansion with attention mechanisms in an encoder-decoder-based framework, delivering competitive performance in comparison with state-of-the-art deep learning architectures for landslide segmentation. In addition to the new multi-class segmentation architecture, we introduce a new multi-modal multi-class Martian landslide segmentation dataset for the first time. The dataset will be available at https://github.com/MAIN-Lab/C-PLE

    U-PEN++: Redesigning U-PEN Architecture with Multi-Head Attention for Retinal Image Segmentation

    In the era of an ever-increasing need for computing power, deep learning (DL) algorithms are becoming critical for success in various domains, such as accessing and processing information from the quantum of data present in the physical, digital, and biological realms. Medical image segmentation is one such application of DL in the healthcare sector. The segmentation of medical images, such as retinal images, enables an efficient analytical process for diagnostics and medical procedures. To segment regions of interest in medical images, U-Net has been the primary baseline DL architecture, consisting of contracting and expanding paths for capturing semantic features and precise localization. Although several variants of U-Net have shown promise, limitations such as hardware memory requirements and inaccurate localization of nonstandard shapes still need to be addressed effectively. In this work, we propose U-PEN++, which reconfigures the previously developed U-PEN (U-Net with Progressively Expanded Neuron) architecture by introducing a new module, named Progressively Expanded Neuron with Attention (PEN-A), that consists of the Maclaurin series of a nonlinear function and a multi-head attention mechanism. The proposed PEN-A module enriches the feature representation by capturing more relevant contextual information than the U-PEN model. Moreover, the proposed model removes excessive hidden layers, resulting in fewer trainable parameters than U-PEN. Experimental analysis on the DRIVE and CHASE datasets demonstrated more effective segmentation and greater parameter efficiency of the proposed U-PEN++ architecture for retinal image segmentation tasks when compared to the U-Net, U-PEN, and Residual U-Net architectures.
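The Maclaurin-series half of the PEN-A idea can be sketched in a few lines; this hypothetical example expands each input channel into the first series terms of exp(x), which is one plausible choice of nonlinear function (the real PEN-A module also feeds such features through multi-head attention, omitted here):

```python
import numpy as np
from math import factorial

def pen_expand(x, terms=3):
    """Progressively expanded neuron features (sketch): concatenate the
    first `terms` Maclaurin-series terms of exp(x), i.e. x, x^2/2!,
    x^3/3!, along the channel axis, enriching the representation a
    downstream layer sees."""
    return np.concatenate([x ** k / factorial(k)
                           for k in range(1, terms + 1)], axis=-1)

x = np.array([[0.5, -1.0],
              [2.0,  0.0]])       # toy feature map: 2 samples, 2 channels
feats = pen_expand(x, terms=3)    # 2 samples, 6 channels
```

Because the expansion adds expressive power without trainable weights, it is one way a model can trim hidden layers while keeping representational capacity, consistent with the parameter savings reported above.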

    A State-of-the-Art Survey on Deep Learning Theory and Architectures

    In recent years, deep learning has garnered tremendous success in a variety of application domains. This new field of machine learning has been growing rapidly and has been applied to most traditional application domains, as well as some new areas that present more opportunities. Different methods have been proposed based on different categories of learning, including supervised, semi-supervised, and unsupervised learning. Experimental results show state-of-the-art performance using deep learning, compared to traditional machine learning approaches, in the fields of image processing, computer vision, speech recognition, machine translation, art, medical imaging, medical information processing, robotics and control, bioinformatics, natural language processing, cybersecurity, and many others. This work presents a brief survey of the advances that have occurred in the area of Deep Learning (DL), starting with the Deep Neural Network (DNN). The survey goes on to cover the Convolutional Neural Network (CNN), the Recurrent Neural Network (RNN), including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), the Auto-Encoder (AE), the Deep Belief Network (DBN), the Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL). Additionally, we discuss recent developments, such as advanced variant DL techniques based on these approaches. This work considers most of the papers published since 2012, when the modern history of deep learning began. DL approaches that have been explored and evaluated in different application domains are also included in this survey, along with recently developed frameworks, SDKs, and benchmark datasets used for implementing and evaluating deep learning approaches. Some surveys have been published on DL using neural networks, and there is a survey on Reinforcement Learning (RL); however, those papers have not discussed individual advanced techniques for training large-scale deep learning models or recently developed methods for generative models.