68 research outputs found

    λ ˆμ΄λ” μŠ€νŽ™νŠΈλ‘œκ·Έλž¨μ„ μ‚¬μš©ν•œ μ»¨λ³Όλ£¨μ…˜ 신경망 기반 무인항곡기 λΆ„λ₯˜

    Get PDF
    Thesis (Master's) -- Seoul National University, Graduate School of Convergence Science and Technology, Department of Transdisciplinary Studies (Intelligent Convergence Systems Major), February 2021. Advisor: Nojun Kwak.
    With the upsurge in using Unmanned Aerial Vehicles (UAVs) in various fields, identifying them in real time is becoming an important issue. However, the identification of UAVs is difficult due to their characteristics such as low altitude, slow speed, and small radar cross-section (LSS). Identifying UAVs with existing deterministic systems requires increasingly complex algorithms and large amounts of computation, making them unsuitable for real-time systems. Hence, we need a new approach to these threats. Deep learning models extract features from large amounts of data by themselves and have shown outstanding performance in various tasks. Exploiting these advantages, deep learning-based UAV classification models using various sensors have recently been studied. In this paper, we propose a deep learning-based classification model that learns the micro-Doppler signatures (MDS) of targets represented on radar spectrogram images. To enable this, we first recorded five LSS targets (three types of UAVs and two different types of human activities) with a frequency modulated continuous wave (FMCW) radar in various scenarios. Then, we converted the signals into spectrogram images using the short-time Fourier transform (STFT). After data refinement and augmentation, we built our own radar spectrogram dataset. Second, we analyzed the characteristics of the radar spectrogram dataset using the ResNet-18 model and designed the lightweight ResNet-SP model for real-time systems. The results show that the proposed ResNet-SP achieves a training time of 242 seconds and an accuracy of 83.39%, outperforming ResNet-18, which takes 640 seconds to train and reaches an accuracy of 79.88%.
    In this thesis, we propose a deep learning-based classification model that learns the distinctive micro-Doppler signatures that different moving targets form on radar spectrograms. To this end, we selected five small moving targets (three types of UAVs and two types of human activities), measured their various movements with an FMCW radar, and generated our own radar spectrogram dataset by applying STFT signal processing followed by data refinement and augmentation. We then analyze the characteristics of the radar spectrogram dataset using ResNet-18, an optical image classification model. Considering the information distortion and loss that can occur when radar signals are converted into optical images, we compare performance across three radar signal formats and identify the optimal data format. Through noise tests and performance changes under different architectures, we identify the key data features the model learns and a suitable model structure. Finally, based on this analysis of the radar spectrogram dataset, we design the ResNet-SP model for real-time systems by applying additional lightweighting and stabilization techniques, and confirm improvements in computation speed, stability, and accuracy through a performance comparison with ResNet-18.
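    A minimal sketch (not the thesis' actual code) of the STFT step described above: converting a recorded radar signal into a log-magnitude spectrogram image that a CNN such as ResNet-18 could consume. The window length, overlap, dynamic range, and the synthetic test signal are illustrative assumptions.

```python
# Sketch: radar time-series -> log-magnitude spectrogram image for a CNN.
# Window length, overlap, and dynamic range are illustrative assumptions.
import numpy as np
from scipy.signal import stft

def radar_spectrogram(iq_signal, fs, nperseg=256, noverlap=192):
    """Short-time Fourier transform of a complex beat signal.

    Returns a log-magnitude spectrogram (dB) with shape (freq_bins, time_bins).
    """
    _, _, Zxx = stft(iq_signal, fs=fs, nperseg=nperseg, noverlap=noverlap,
                     return_onesided=False)
    spec_db = 20 * np.log10(np.abs(Zxx) + 1e-12)   # avoid log(0)
    return np.fft.fftshift(spec_db, axes=0)        # center zero Doppler

def to_image(spec_db, dynamic_range=60.0):
    """Clip to a fixed dynamic range and scale to [0, 255] for image-based CNNs."""
    top = spec_db.max()
    clipped = np.clip(spec_db, top - dynamic_range, top)
    return ((clipped - clipped.min()) / dynamic_range * 255).astype(np.uint8)

# Usage with a synthetic micro-Doppler-like signal (two sinusoidally modulated scatterers)
fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
sig = np.exp(1j * 2 * np.pi * 50 * np.cos(2 * np.pi * 5 * t)) \
    + 0.5 * np.exp(1j * 2 * np.pi * 120 * np.cos(2 * np.pi * 12 * t))
img = to_image(radar_spectrogram(sig, fs))
print(img.shape, img.dtype)
```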

    Classification of drones and birds using convolutional neural networks applied to radar micro-Doppler spectrogram images

    Get PDF
    Funding: UK Science and Technology Facilities Council ST/N006569/1 (DR). This study presents a convolutional neural network (CNN) based drone classification method. The primary requirement for a high-fidelity neural-network-based classifier is a real training dataset of large size and diversity. The first goal of the study was to create a large database of micro-Doppler spectrogram images of in-flight drones and birds. Two separate datasets with the same images have been created, one with RGB images and the other with grayscale images. The RGB dataset was used for GoogLeNet architecture-based training. The grayscale dataset was used for training with a series architecture developed during this study. Each dataset was further divided into two categories, one with four classes (drone, bird, clutter and noise) and the other with two classes (drone and non-drone). During training, 20% of the dataset was used as a validation set. After the completion of training, the models were tested with previously unseen and unlabelled sets of data. The validation and testing accuracies for the developed series network were found to be 99.6% and 94.4% respectively for four classes, and 99.3% and 98.3% respectively for two classes. The GoogLeNet-based model showed both validation and testing accuracies of around 99% for all cases.
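    As a rough illustration of the GoogLeNet transfer-learning setup described above, the sketch below fine-tunes a pretrained GoogLeNet on a four-class spectrogram image folder using torchvision; the dataset path, hyperparameters, and toolchain are assumptions rather than the authors' implementation.

```python
# Hedged sketch: fine-tune a pretrained GoogLeNet on RGB spectrogram images.
# Dataset path, class count, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 4  # drone, bird, clutter, noise

# Pretrained GoogLeNet expects 224x224 RGB inputs normalized with ImageNet stats.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("spectrograms/train", transform=tfm)  # placeholder path
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the classifier head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_dl:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```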

    NRC-Net: Automated noise robust cardio net for detecting valvular cardiac diseases using optimum transformation method with heart sound signals

    Full text link
    Cardiovascular diseases (CVDs) can be treated effectively when detected early, significantly reducing mortality rates. Traditionally, phonocardiogram (PCG) signals have been used for detecting cardiovascular disease due to their cost-effectiveness and simplicity. Nevertheless, various environmental and physiological noises frequently affect PCG signals, compromising their essential distinctive characteristics. The prevalence of this issue in overcrowded and resource-constrained hospitals can compromise the accuracy of medical diagnoses. Therefore, this study aims to identify the optimal transformation method for detecting CVDs from noisy heart sound signals and proposes a noise-robust network to improve CVD classification performance. To identify the optimal transformation method for noisy heart sound data, mel-frequency cepstral coefficients (MFCCs), the short-time Fourier transform (STFT), the constant-Q nonstationary Gabor transform (CQT), and the continuous wavelet transform (CWT) were used with VGG16. Furthermore, we propose a novel convolutional recurrent neural network (CRNN) architecture called Noise Robust Cardio Net (NRC-Net), a lightweight model that classifies mitral regurgitation, aortic stenosis, mitral stenosis, mitral valve prolapse, and normal heart sounds from PCG signals contaminated with respiratory and random noises. An attention block is included to extract important temporal and spatial features from the noise-corrupted heart sound. The results of this study indicate that CWT is the optimal transformation method for noisy heart sound signals. When evaluated on the GitHub heart sound dataset, CWT demonstrates an accuracy of 95.69% with VGG16, which is 1.95% better than the second-best transformation technique, CQT. Moreover, our proposed NRC-Net with CWT obtained an accuracy of 97.4%, which is 1.71% higher than VGG16.
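    The transform comparison described above can be sketched as follows: computing STFT, CQT, MFCC, and CWT representations of one heart-sound recording so that each representation can be evaluated with the same image classifier. Library choices (librosa, pywt), parameters, and the placeholder file name are assumptions, not the study's implementation.

```python
# Sketch: four time-frequency representations of a PCG signal for classifier comparison.
# Libraries, parameters, and the input file are illustrative assumptions.
import numpy as np
import librosa
import pywt

def representations(pcg, sr):
    """Return log-STFT, CQT, MFCC, and CWT 'images' for one heart-sound signal."""
    stft_db = librosa.amplitude_to_db(np.abs(librosa.stft(pcg, n_fft=512, hop_length=128)))
    cqt_db = librosa.amplitude_to_db(np.abs(librosa.cqt(pcg, sr=sr, fmin=20.0, n_bins=60)))
    mfcc = librosa.feature.mfcc(y=pcg, sr=sr, n_mfcc=40)
    cwt_coeffs, _ = pywt.cwt(pcg, scales=np.arange(1, 128), wavelet="morl")
    return {"STFT": stft_db, "CQT": cqt_db, "MFCC": mfcc, "CWT": np.abs(cwt_coeffs)}

# Each 2-D array can then be resized to 224x224, stacked to 3 channels, and passed
# to a VGG16 backbone; per-transform accuracy is compared on a held-out set.
y, sr = librosa.load("heart_sound.wav", sr=2000)   # placeholder file and sample rate
feats = representations(y, sr)
for name, arr in feats.items():
    print(name, arr.shape)
```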

    A survey on artificial intelligence-based acoustic source identification

    Get PDF
    The concept of Acoustic Source Identification (ASI), which refers to the process of identifying noise sources, has attracted increasing attention in recent years. ASI technology can be used for surveillance, monitoring, and maintenance applications in a wide range of sectors, such as defence, manufacturing, healthcare, and agriculture. Acoustic signature analysis and pattern recognition remain the core technologies for noise source identification. Manual identification of acoustic signatures, however, has become increasingly challenging as dataset sizes grow. As a result, the use of Artificial Intelligence (AI) techniques for identifying noise sources has become increasingly relevant and useful. In this paper, we provide a comprehensive review of AI-based acoustic source identification techniques. We analyze the strengths and weaknesses of AI-based ASI processes and the associated methods proposed in the literature. Additionally, we conducted a detailed survey of ASI applications in machinery, underwater settings, environment/event source recognition, healthcare, and other fields. We also highlight relevant research directions.