
    Development of Neural Network Based Adaptive Change Detection Technique for Land Terrain Monitoring with Satellite and Drone Images

    The role of satellite images in day-to-day life is increasing for both civil and defence applications. One major defence requirement during troop movement is knowing the behaviour of the terrain in advance so that the troops can be transported smoothly. It is therefore important to identify the terrain ahead of time, which is quite possible with satellite images. However, to achieve accurate results, the data used must be precise and reliable, and achieving this with a satellite image alone is a challenging task. Therefore, in this paper an attempt has been made to fuse images obtained from a drone and a satellite to extract precise terrain information such as bare land, dense vegetation, and sparse vegetation. For this purpose, a test area near Roorkee, Uttarakhand, India was selected, and drone and Sentinel-2 data were acquired for the same dates. A neural-network-based technique is proposed to obtain precise terrain information from the Sentinel-2 image, and a quantitative analysis of the terrain information was carried out using change detection. It is observed that the proposed technique has good potential to precisely identify bare land, dense vegetation, and sparse vegetation, which may be quite useful for defence as well as civilian applications.
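As a toy illustration of the three-class terrain mapping described above, the sketch below labels pixels from red/NIR band arrays by NDVI thresholds and diffs two dates for change detection. The paper's actual neural-network classifier is not reproduced here; the band values and thresholds are illustrative assumptions.

```python
import numpy as np

def classify_terrain(red, nir, sparse_t=0.2, dense_t=0.5):
    """Label each pixel 0 = bare land, 1 = sparse, 2 = dense vegetation.

    Thresholds are hypothetical; NDVI = (NIR - Red) / (NIR + Red).
    """
    ndvi = (nir - red) / (nir + red + 1e-9)
    labels = np.zeros(ndvi.shape, dtype=int)
    labels[ndvi >= sparse_t] = 1
    labels[ndvi >= dense_t] = 2
    return labels

def change_map(labels_t0, labels_t1):
    """Binary map of pixels whose terrain class changed between dates."""
    return labels_t0 != labels_t1

# Tiny synthetic 2x2 scene (reflectance values are made up).
red0 = np.array([[0.30, 0.10], [0.05, 0.20]])
nir0 = np.array([[0.35, 0.40], [0.60, 0.22]])
t0 = classify_terrain(red0, nir0)
```

A real pipeline would feed co-registered drone and Sentinel-2 patches to the trained network instead of NDVI rules; the change map would then compare the per-date class maps in the same way.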

    Signal fingerprinting and machine learning framework for UAV detection and identification.

    Advancement in technology has led to creative and innovative inventions. One such invention is the unmanned aerial vehicle (UAV). UAVs (also known as drones) are now an intrinsic part of our society because their applications are becoming ubiquitous in industries ranging from transportation and logistics to environmental monitoring, among others. Alongside the numerous benign applications of UAVs, their emergence has added a new dimension to privacy and security issues, and there are few strict regulations on who can purchase or own a UAV. For this reason, nefarious actors can take advantage of these aircraft to intrude into restricted or private areas. A UAV detection and identification system is one way of detecting and identifying the presence of a UAV in an area. Such systems employ different sensing techniques, such as radio frequency (RF) signals, video, sound, and thermal imaging, to detect an intruding UAV. The passive (stealth) nature of RF sensing, the ability to exploit it to identify UAV flight mode (e.g., flying, hovering, or videoing), and the capability to detect a UAV beyond visual line-of-sight (BVLOS) or at marginal line-of-sight make RF sensing techniques promising for UAV detection and identification. Moreover, there is constant communication between a UAV and its ground station (i.e., flight controller), and the RF signals emitted by a UAV or its flight controller can be exploited for UAV detection and identification. Hence, in this work, an RF-based UAV detection and identification system is proposed and investigated. In RF signal fingerprinting research, the transient and steady states of the RF signal can be used to extract a unique signature.
The first part of this work uses two different wavelet analytic transforms (i.e., the continuous wavelet transform and the wavelet scattering transform) to investigate and analyze the characteristics and impact of using either signal state for UAV detection and identification. Coefficient-based and image-based signatures are proposed for each of the wavelet transforms to detect and identify a UAV. One challenge of RF sensing is that a UAV's communication links operate in the industrial, scientific, and medical (ISM) band. Several devices, such as Bluetooth and WiFi, operate in the ISM band as well, so discriminating UAVs from other ISM devices is not a trivial task. A semi-supervised anomaly detection approach is explored and proposed in this research to differentiate UAVs from Bluetooth and WiFi devices. Both time-frequency analytical approaches and unsupervised deep neural network techniques (i.e., a denoising autoencoder) are used separately for feature extraction. Finally, a hierarchical classification framework is proposed for identifying the type of unmanned aerial system signal (UAV or UAV controller signal), the UAV model, and the operational mode of the UAV. This is a shift from a flat classification approach: the hierarchical learning approach provides a level-by-level classification that can be useful for identifying an intruding UAV. The proposed frameworks described here can be extended to the detection of rogue RF devices in an environment.
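The coefficient-based wavelet signatures mentioned above can be sketched with a minimal, pure-NumPy continuous wavelet transform. The Morlet mother wavelet, the scale set, and per-scale energy pooling used here are assumptions for illustration, not the dissertation's exact settings.

```python
import numpy as np

def morlet(t, w0=5.0):
    """Complex Morlet mother wavelet (unit-variance Gaussian envelope)."""
    return np.exp(1j * w0 * t) * np.exp(-t**2 / 2) * np.pi**-0.25

def cwt_scalogram(x, scales):
    """|CWT| magnitudes, shape (len(scales), len(x))."""
    out = np.empty((len(scales), len(x)))
    for i, s in enumerate(scales):
        n = int(8 * s) | 1                    # odd kernel, ~ +/- 4 scales
        t = (np.arange(n) - n // 2) / s       # sample times in scaled units
        psi = morlet(t) / np.sqrt(s)          # L2-style normalization
        out[i] = np.abs(np.convolve(x, psi, mode="same"))
    return out

# Stand-in for a captured RF burst; a per-scale mean-energy vector
# serves as a simple coefficient-based signature.
sig = np.sin(2 * np.pi * 0.1 * np.arange(256))
features = cwt_scalogram(sig, [2.0, 4.0, 8.0]).mean(axis=1)
```

In the fingerprinting setting, such per-scale signatures (or the full scalogram treated as an image) would be fed to the downstream classifier; the wavelet scattering transform adds averaging and cascaded modulus stages on top of this idea.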

    Trustworthy and Intelligent COVID-19 Diagnostic IoMT through XR and Deep-Learning-Based Clinic Data Access

    This article presents a novel extended reality (XR) and deep-learning-based Internet-of-Medical-Things (IoMT) solution for COVID-19 telemedicine diagnostics, which systematically combines virtual reality (VR)/augmented reality (AR) remote surgical plan/rehearse hardware, customized 5G cloud computing, and deep learning algorithms to provide real-time clues for COVID-19 treatment schemes. Compared to existing perception therapy techniques, the new technique can significantly improve performance and security. The system collected 25 clinical data items from 347 positive and 2270 negative COVID-19 patients in the Red Zone via 5G transmission. A novel auxiliary classifier generative adversarial network (ACGAN)-based intelligent prediction algorithm was then used to train the COVID-19 prediction model. Furthermore, the Copycat network was employed to mount a model-stealing attack on the IoMT and thereby assess its security. To simplify the user interface and achieve an excellent user experience, the Red Zone's guiding images were combined with the Green Zone's view through AR navigation clues delivered over 5G. The XR surgical plan/rehearse framework was designed to include all requisite COVID-19 surgical details with a guaranteed real-time response. The accuracy, recall, F1-score, and area under the ROC curve (AUC) of the new IoMT were 0.92, 0.98, 0.95, and 0.98, respectively, outperforming existing perception techniques with significantly higher accuracy. The model stealing also performed well, with a Copycat AUC of 0.90, slightly lower than that of the original model. This study suggests a new framework for COVID-19 diagnostic integration and opens new research on integrating XR and deep learning for IoMT implementation.
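The evaluation metrics quoted above (accuracy, recall, F1-score, AUC) can all be computed from ground-truth labels and predicted scores; a minimal NumPy sketch follows. This does not reproduce the ACGAN model itself, and the threshold is an assumption.

```python
import numpy as np

def binary_metrics(y_true, scores, threshold=0.5):
    """Accuracy, recall, F1, and AUC for a binary classifier."""
    y_pred = (scores >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = np.mean(y_pred == y_true)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    # AUC via the Mann-Whitney U statistic: the probability that a
    # random positive is scored above a random negative (+ 0.5 for ties).
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    gt = np.mean(pos[:, None] > neg[None, :])
    eq = np.mean(pos[:, None] == neg[None, :])
    return accuracy, recall, f1, gt + 0.5 * eq

acc, rec, f1, auc = binary_metrics(
    np.array([1, 1, 1, 0, 0, 0]),
    np.array([0.9, 0.8, 0.4, 0.6, 0.2, 0.1]))
```

The same function applied to the stolen (Copycat) model's scores would show the reported AUC gap against the original model.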

    Estimating Retinal Sensitivity Using Optical Coherence Tomography With Deep-Learning Algorithms in Macular Telangiectasia Type 2

    IMPORTANCE: As currently used, microperimetry is a burdensome clinical testing modality for testing retinal sensitivity requiring long testing times and trained technicians. OBJECTIVE: To create a deep-learning network that could directly estimate function from structure de novo to provide an en face high-resolution map of estimated retinal sensitivity. DESIGN, SETTING, AND PARTICIPANTS: A cross-sectional imaging study using data collected between January 1, 2016, and November 30, 2017, from the Natural History Observation and Registry of macular telangiectasia type 2 (MacTel) evaluated 38 participants with confirmed MacTel from 2 centers. MAIN OUTCOMES AND MEASURES: Mean absolute error of estimated compared with observed retinal sensitivity. Observed retinal sensitivity was obtained with fundus-controlled perimetry (microperimetry). Estimates of retinal sensitivity were made with deep-learning models that learned on superpositions of high-resolution optical coherence tomography (OCT) scans and microperimetry results. Those predictions were used to create high-density en face sensitivity maps of the macula. Training, validation, and test sets were segregated at the patient level. RESULTS: A total of 2499 microperimetry sensitivities were mapped onto 1708 OCT B-scans from 63 eyes of 38 patients (mean [SD] age, 74.3 [9.7] years; 15 men [39.5%]). The numbers of examples for our algorithm were 67 899 (103 053 after data augmentation) for training, 1695 for validation, and 1212 for testing. Mean absolute error results were 4.51 dB (95% CI, 4.36-4.65 dB) when using linear regression and 3.66 dB (95% CI, 3.53-3.78 dB) when using the LeNet model. Using a 49.9 million–variable deep-learning model, a mean absolute error of 3.36 dB (95% CI, 3.25-3.48 dB) of retinal sensitivity for validation and test was achieved. Correlation showed a high degree of agreement (Pearson correlation r = 0.78). 
By paired Wilcoxon rank sum test, our model significantly outperformed these 2 baseline models (P < .001). CONCLUSIONS AND RELEVANCE: High-resolution en face maps of estimated retinal sensitivities were created in eyes with MacTel. The maps were of unequalled resolution compared with microperimetry and were able to correctly delineate functionally healthy and impaired retina. This model may be useful to monitor structural and functional disease progression and has potential as an objective surrogate outcome measure in investigational trials.
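The headline statistic above, mean absolute error in dB with a 95% CI, can be sketched as follows; the bootstrap CI over per-point errors is an assumption for illustration (the study's exact CI procedure is not restated here), and the OCT-based deep-learning estimator itself is not reproduced.

```python
import numpy as np

def mae_with_ci(observed, estimated, n_boot=2000, seed=0):
    """Mean absolute error and a bootstrap 95% confidence interval."""
    rng = np.random.default_rng(seed)
    err = np.abs(observed - estimated)
    mae = err.mean()
    boots = np.array([rng.choice(err, size=err.size, replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return mae, (lo, hi)

# Hypothetical microperimetry sensitivities (dB) vs. model estimates.
obs = np.array([10.0, 12.0, 8.0, 15.0])
est = np.array([11.0, 10.0, 11.0, 14.0])
mae, (lo, hi) = mae_with_ci(obs, est)
```

Comparing such MAE values between the linear-regression, LeNet, and large deep-learning models (with train/validation/test splits segregated at the patient level, as in the study) yields the 4.51 dB vs. 3.66 dB vs. 3.36 dB ranking reported above.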

    Object Detection and Classification in the Visible and Infrared Spectrums

    The over-arching theme of this dissertation is the development of automated detection and/or classification systems for challenging infrared scenarios. The six works presented herein fall into four problem scenarios. In the first scenario, long-distance detection and classification of vehicles in thermal imagery, a custom convolutional network architecture is proposed for small thermal target detection. For the second scenario, thermal face landmark detection and thermal cross-spectral face verification, a publicly available visible and thermal face dataset is introduced, along with benchmark results for several landmark detection and face verification algorithms. Furthermore, a novel visible-to-thermal transfer learning algorithm for face landmark detection is presented. The third scenario addresses near-infrared cross-spectral periocular recognition with a coupled conditional generative adversarial network guided by auxiliary synthetic loss functions. Finally, a deep sparse feature selection and fusion approach is proposed to detect the presence of textured contact lenses prior to near-infrared iris recognition.

    Investigation of Dual-Flow Deep Learning Models LSTM-FCN and GRU-FCN Efficiency against Single-Flow CNN Models for the Host-Based Intrusion and Malware Detection Task on Univariate Times Series Data

    Intrusion and malware detection on the host level is a critical part of the overall information security infrastructure of a modern enterprise. While classical host-based intrusion detection systems (HIDS) and antivirus (AV) approaches are based on change monitoring of critical files and on malware signatures, respectively, some recent research using relatively vanilla deep learning (DL) methods has demonstrated promising anomaly-based detection results that already have practical applicability due to a low false positive rate (FPR). More complex DL methods typically provide better results in natural language processing and image recognition tasks. In this paper, we analyze the applicability of more complex dual-flow DL methods, such as the long short-term memory fully convolutional network (LSTM-FCN), the gated recurrent unit (GRU)-FCN, and several others, to the task specified on the attack-caused Windows OS system call traces dataset (AWSCTD), and compare them with vanilla single-flow convolutional neural network (CNN) models. The results obtained do not demonstrate any advantage of dual-flow models on univariate time series data; they introduce an unnecessary level of complexity and increase training and anomaly detection time, which is crucial in the intrusion containment process. On the other hand, the newly tested AWSCTD-CNN-static (S) single-flow model demonstrated three times better training and testing times while preserving the high detection accuracy.
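A single-flow CNN over a univariate system-call trace, as compared above, can be sketched as a toy forward pass: embed each syscall ID, apply 1-D convolutions, ReLU, global max pooling, and a two-class head. The vocabulary size, layer widths, and random weights are illustrative assumptions, not the AWSCTD-CNN architecture.

```python
import numpy as np

rng = np.random.default_rng(42)
vocab, emb_dim, n_filters, width = 50, 8, 4, 3

embed = rng.normal(size=(vocab, emb_dim))       # syscall ID -> vector
conv_w = rng.normal(size=(n_filters, width, emb_dim))
head_w = rng.normal(size=(n_filters, 2))        # benign vs. malicious

def forward(trace):
    """Class probabilities for one trace of syscall IDs."""
    x = embed[np.array(trace)]                  # (T, emb_dim)
    T = len(trace)
    # 1-D convolution: slide each filter over the embedded trace.
    feats = np.stack([
        np.array([np.sum(x[t:t + width] * conv_w[f])
                  for t in range(T - width + 1)])
        for f in range(n_filters)])
    feats = np.maximum(feats, 0).max(axis=1)    # ReLU + global max pool
    logits = feats @ head_w
    e = np.exp(logits - logits.max())
    return e / e.sum()                          # softmax

p = forward([3, 17, 17, 42, 5, 3, 9])
```

A dual-flow model (e.g., LSTM-FCN) would run an LSTM branch in parallel with this convolutional branch and concatenate both feature vectors before the head, which is exactly the extra complexity the paper found unnecessary for this task.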

    Error analysis of programmable metasurfaces for beam steering

    © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Recent years have seen the emergence of programmable metasurfaces, where the user can modify the electromagnetic (EM) response of the device via software. Adding reconfigurability to the already powerful EM capabilities of metasurfaces opens the door to novel cyber-physical systems with exciting applications in domains such as holography, cloaking, or wireless communications. This paradigm shift, however, comes with a non-trivial increase in the complexity of the metasurfaces, which will pose new reliability challenges stemming from the need to integrate tuning, control, and communication resources to implement the programmability. While metasurfaces will become prone to failures, little is known about their tolerance to errors. To bridge this gap, this paper examines the reliability problem in programmable metamaterials by proposing an error model and a general methodology for error analysis. To derive the error model, the causes and potential impact of faults are identified and discussed qualitatively. The methodology is presented and exemplified for beam steering, which constitutes a relevant case for programmable metasurfaces. Results show that performance degradation depends on the type of error and its spatial distribution and that, in beam steering, error rates over 20% can still be considered acceptable. This work has been supported by the European Commission under grant H2020-FETOPEN-736876 (VISORSURF) and by ICREA under the ICREA Academia programme. The person and base station icons in Figure 1 were created by Jens Tärning and Clea Doltz from the Noun Project.
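A beam-steering error analysis of this kind can be sketched with a 1-D array-factor model: each programmable cell applies an ideal phase-gradient profile, a random fraction of "failed" cells take arbitrary phases instead, and the gain toward the target direction is compared against the error-free case. The cell count, half-wavelength spacing, and uniform-random failure model are assumptions, not the paper's exact error model.

```python
import numpy as np

def steered_gain(n_cells=64, target_deg=30.0, error_rate=0.0, seed=1):
    """Normalized array-factor gain toward the steering target.

    Returns 1.0 for an error-free metasurface; failed cells apply a
    uniformly random phase instead of the ideal gradient.
    """
    rng = np.random.default_rng(seed)
    d = 0.5                                    # spacing in wavelengths
    k = 2 * np.pi                              # wavenumber (per wavelength)
    n = np.arange(n_cells)
    phase = -k * d * n * np.sin(np.radians(target_deg))  # ideal profile
    failed = rng.random(n_cells) < error_rate
    phase[failed] = rng.uniform(0, 2 * np.pi, failed.sum())
    # Array factor evaluated in the target direction.
    af = np.sum(np.exp(1j * (k * d * n * np.sin(np.radians(target_deg)) + phase)))
    return np.abs(af) / n_cells

g0 = steered_gain(error_rate=0.0)
g20 = steered_gain(error_rate=0.2)
```

Even with 20% of cells failed, most of the coherent sum survives, which is consistent with the paper's observation that error rates over 20% can still be acceptable; sweeping `error_rate` and the spatial clustering of failures reproduces the kind of degradation curves the methodology produces.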

    Recent Developments in Atomic Force Microscopy and Raman Spectroscopy for Materials Characterization

    This book contains chapters that describe advanced atomic force microscopy (AFM) modes and Raman spectroscopy. It also provides an in-depth understanding of advanced AFM modes and Raman spectroscopy for characterizing various materials. This volume is a useful resource for a wide range of readers, including scientists, engineers, graduate students, postdoctoral fellows, and scientific professionals working in specialized fields such as AFM, photovoltaics, 2D materials, carbon nanotubes, nanomaterials, and Raman spectroscopy.