268 research outputs found

    High-Resolution Remotely Sensed Small Target Detection by Imitating Fly Visual Perception Mechanism

    The difficulty and limitations of small target detection methods for high-resolution remote sensing data have become a recent research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper constructs a model of information perception that exploits the fly's fast, accurate detection of small targets in complex, varying natural environments. The proposed model forms a theoretical basis for small target detection in high-resolution remote sensing data. After comparing the prevailing simulation mechanisms of fly visual systems, we propose a fly-imitated visual information processing method for high-resolution remote sensing data. A small target detector and a corresponding detection algorithm are designed by simulating the information acquisition, compression, and fusion mechanisms of the fly visual system, the function of pool cells, and the character of nonlinear self-adaptation. Experiments verify the feasibility and rationality of the proposed small target detection model and fly-imitated visual perception method.

    Spectrum Sensing and Signal Identification with Deep Learning based on Spectral Correlation Function

    Spectrum sensing is one means of efficiently utilizing the scarce resource of wireless spectrum. In this paper, a convolutional neural network (CNN) model employing the spectral correlation function, an effective characterization of the cyclostationarity property, is proposed for wireless spectrum sensing and signal identification. The proposed method classifies wireless signals without a priori information and is implemented in two settings, entitled CASE1 and CASE2. In CASE1, signals are jointly sensed and classified; in CASE2, sensing and classification are conducted sequentially. In contrast to classical spectrum sensing techniques, the proposed CNN method requires neither a statistical decision process nor prior knowledge of the distinct features of signals. Applying the method to measured over-the-air real-world signals in cellular bands yields important performance gains compared to the signal-classifying deep learning networks available in the literature and to classical sensing methods. Although the implementation herein is over cellular signals, the proposed approach can be extended to the detection and classification of any signal that exhibits cyclostationary features. Finally, the measurement-based dataset used to validate the method is shared for reproduction of the results and for further research and development.
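    The cyclostationary feature this abstract relies on can be illustrated with a minimal sketch (an illustrative textbook estimator, not the paper's implementation): a direct time-average estimate of the cyclic autocorrelation shows a pronounced feature at the cycle frequency equal to a signal's symbol rate, while an arbitrary off-cycle frequency shows only residual noise.

```python
import numpy as np

def cyclic_autocorrelation(x, alpha, tau):
    """Estimate the cyclic autocorrelation R_x(alpha, tau):
    the time average of x[n+tau] * conj(x[n]) * exp(-j*2*pi*alpha*n)."""
    n = np.arange(len(x) - tau)
    prod = x[tau:] * np.conj(x[: len(x) - tau])
    return np.mean(prod * np.exp(-2j * np.pi * alpha * n))

# Rectangular-pulse BPSK-like signal, 8 samples per symbol
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=128)
x = np.repeat(symbols, 8)

# A cyclic feature appears at the symbol rate (alpha = 1/8) for a
# half-symbol lag (tau = 4); an off-cycle frequency does not show one.
peak = abs(cyclic_autocorrelation(x, alpha=1 / 8, tau=4))
off = abs(cyclic_autocorrelation(x, alpha=0.37, tau=4))
```

    A CNN such as the one proposed would take a grid of such estimates (over many alpha and tau values) as its input image.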

    Neural Network-Based Multi-Target Detection within Correlated Heavy-Tailed Clutter

    This work addresses the problem of range-Doppler multiple target detection in a radar system in the presence of slow-time correlated, heavy-tailed distributed clutter. Conventional target detection algorithms assume Gaussian-distributed clutter, but their performance degrades significantly in the presence of correlated heavy-tailed clutter. Deriving optimal detection algorithms for heavy-tailed clutter is analytically intractable, and the clutter distribution is frequently unknown. This work proposes a deep learning-based approach for multiple target detection in the range-Doppler domain. The approach is based on a unified neural network (NN) model that processes the time-domain radar signal across a variety of signal-to-clutter-plus-noise ratios (SCNRs) and clutter distributions, simplifying the detector architecture and the training procedure. The performance of the proposed approach is evaluated in experiments using recorded radar echoes, and simulations show that it outperforms the conventional cell-averaging constant false-alarm rate (CA-CFAR), ordered-statistic CFAR (OS-CFAR), and adaptive normalized matched-filter (ANMF) detectors in terms of probability of detection in the majority of tested SCNRs and clutter scenarios. Comment: Accepted to IEEE Transactions on Aerospace and Electronic Systems.
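    The CA-CFAR baseline this paper compares against can be sketched in a few lines (a generic one-dimensional textbook version under a Gaussian-noise assumption, not the authors' configuration): each cell is compared with the average power of its training cells, scaled to hit a desired false-alarm rate.

```python
import numpy as np

def ca_cfar(power, num_train=8, num_guard=2, pfa=1e-3):
    """1-D cell-averaging CFAR: threshold each cell against the mean
    power of its training cells, scaled for the desired false-alarm rate."""
    n = len(power)
    num_cells = 2 * num_train  # training cells on both sides
    # Scaling factor for exponentially distributed noise power
    alpha = num_cells * (pfa ** (-1.0 / num_cells) - 1.0)
    detections = np.zeros(n, dtype=bool)
    for i in range(num_train + num_guard, n - num_train - num_guard):
        lead = power[i - num_train - num_guard : i - num_guard]
        lag = power[i + num_guard + 1 : i + num_guard + 1 + num_train]
        noise = (lead.sum() + lag.sum()) / num_cells
        detections[i] = power[i] > alpha * noise
    return detections

rng = np.random.default_rng(1)
power = rng.exponential(1.0, size=256)  # Gaussian noise -> exponential power
power[100] += 40.0                      # inject one strong target
hits = ca_cfar(power)
```

    The threshold scaling `alpha` is what breaks down under heavy-tailed clutter: the exponential-power assumption no longer holds, which is precisely the regime the learned detector targets.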

    Target detection and localization using thermal camera, mmWave radar and deep learning

    Reliable detection and localization of tiny unmanned aerial vehicles (UAVs), birds, and other aerial vehicles with small cross-sections is an ongoing challenge, and the task becomes even harder in harsh weather conditions such as snow, fog, and dust. RGB camera-based sensing is widely used for some tasks, especially navigation, but RGB camera performance degrades in poor lighting. mmWave radars, by contrast, perform well in harsh weather, and thermal cameras remain reliable in low-light conditions, making the combination of these two sensors an excellent choice for many of these applications. In this work, a model to detect and localize UAVs is built using an integrated system of a thermal camera and mmWave radar. Data collected with the integrated sensors are used to train an object detection model with the YOLOv5 algorithm. The model detects and classifies objects such as humans, cars, and UAVs. Images from the thermal camera are then used with the trained model to localize UAVs in the camera's field of view (FOV).
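    The fusion step can be sketched roughly as follows (function names, the bounding-box format, and the pinhole-FOV mapping are illustrative assumptions, not the paper's code): the detector's bounding box in the thermal image gives the azimuth of the target, while the radar supplies the range.

```python
def pixel_to_azimuth(px_x, image_width, hfov_deg):
    """Map a horizontal pixel coordinate to an azimuth angle in degrees,
    assuming a pinhole-like camera with the given horizontal FOV."""
    offset = px_x / image_width - 0.5  # normalized offset from center
    return offset * hfov_deg

def localize(bbox, image_width, hfov_deg, radar_range_m):
    """Combine a detector bounding box (x_min, y_min, x_max, y_max) in the
    thermal image with a radar range measurement into (range, azimuth)."""
    x_min, _, x_max, _ = bbox
    az = pixel_to_azimuth((x_min + x_max) / 2, image_width, hfov_deg)
    return radar_range_m, az

# e.g. a UAV box centered at pixel 480 in a 640-px frame with a 57-degree HFOV
rng_m, az_deg = localize((460, 200, 500, 240), 640, 57.0, 35.0)
```

    A box centered a quarter-frame right of center thus maps to a quarter of the horizontal FOV in azimuth; associating the right radar return with the right box is the harder part of the real system.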

    A New Wave in Robotics: Survey on Recent mmWave Radar Applications in Robotics

    We survey the current state of millimeter-wave (mmWave) radar applications in robotics with a focus on unique capabilities, and discuss future opportunities based on the state of the art. Frequency Modulated Continuous Wave (FMCW) mmWave radars operating in the 76–81 GHz range are an appealing alternative to lidars, cameras, and other sensors operating in the near-visual spectrum. Radar has become more widely available in new packaging classes that are more convenient for robotics, and its longer wavelengths can penetrate visual clutter such as fog, dust, and smoke. We begin by covering radar principles as they relate to robotics. We then review the relevant new research across a broad spectrum of robotics applications, beginning with motion estimation, localization, and mapping. We then cover object detection and classification, and close with an analysis of current datasets and calibration techniques that provide entry points into radar research. Comment: 19 pages, 11 figures, 2 tables; TRO submission pending.

    Weakly supervised deep learning method for vulnerable road user detection in FMCW radar

    Millimeter-wave radar is currently the most effective automotive sensor capable of all-weather perception. In order to detect Vulnerable Road Users (VRUs) in cluttered radar data, it is necessary to model the time-frequency signal patterns of human motion, i.e., the micro-Doppler signature. In this paper we propose a spatio-temporal Convolutional Neural Network (CNN) capable of detecting VRUs in cluttered radar data. The main contribution is a weakly supervised training method that uses abundant, automatically generated labels from camera and lidar to train the model. The input to the network is a tensor of temporally concatenated range-azimuth-Doppler arrays, while the ground truth is an occupancy grid formed by objects detected jointly in camera images and lidar. Lidar provides accurate ranging ground truth, while camera information helps distinguish VRUs from background. Experimental evaluation shows that the CNN model has superior detection performance compared to classical techniques. Moreover, the model trained with imperfect, weak-supervision labels outperforms one trained with a limited number of perfect, hand-annotated labels. Finally, the proposed method scales well due to the low cost of automatic annotation.
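    The range-Doppler arrays that feed such networks are conventionally obtained with a 2-D FFT over an FMCW frame (range FFT along fast time, Doppler FFT along slow time). A minimal sketch with a synthetic single-target frame (illustrative, not the paper's preprocessing pipeline):

```python
import numpy as np

def range_doppler_map(frame):
    """frame: (num_chirps, samples_per_chirp) complex FMCW beat samples.
    Range FFT along fast time, Doppler FFT along slow time."""
    rng_fft = np.fft.fft(frame, axis=1)                        # range bins
    rd = np.fft.fftshift(np.fft.fft(rng_fft, axis=0), axes=0)  # Doppler bins
    return np.abs(rd)

# Synthetic target: range bin 20, Doppler bin 5 (before fftshift)
chirps, samples = 64, 128
n = np.arange(samples)                 # fast-time index
m = np.arange(chirps)[:, None]         # slow-time (chirp) index
frame = np.exp(2j * np.pi * (20 * n / samples + 5 * m / chirps))

rd = range_doppler_map(frame)
peak = np.unravel_index(np.argmax(rd), rd.shape)  # (Doppler bin, range bin)
```

    After the shift, zero Doppler sits at row 32, so the injected target lands at row 37; stacking such maps (plus an azimuth axis) over time gives the spatio-temporal input tensor described in the abstract.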