68 research outputs found
Convolutional neural network based classification of unmanned aerial vehicles using radar spectrograms
Thesis (Master's) -- Seoul National University Graduate School: Graduate School of Convergence Science and Technology, Department of Convergence Science (Intelligent Convergence Systems major), 2021. 2. Advisor: Nojun Kwak.
With the upsurge in using Unmanned Aerial Vehicles (UAVs) in various fields, identifying them in real time is becoming an important issue. However, the identification of UAVs is difficult due to characteristics such as low altitude, slow speed, and small radar cross-section (LSS). To identify UAVs with existing deterministic systems, the algorithm becomes more complex and requires large computations, making it unsuitable for real-time systems. Hence, a new approach to these threats is needed. Deep learning models extract features from large amounts of data by themselves and have shown outstanding performance in various tasks. Exploiting these advantages, deep learning-based UAV classification models using various sensors have recently been studied.
In this paper, we propose a deep learning-based classification model that learns the micro-Doppler signatures (MDS) of targets represented on radar spectrogram images. To enable this, we first recorded five LSS targets (three types of UAVs and two different types of human activities) with a frequency modulated continuous wave (FMCW) radar in various scenarios. Then, we converted the signals into spectrograms in the form of images by the short-time Fourier transform (STFT). After data refinement and augmentation, we built our own radar spectrogram dataset. Secondly, we analyzed the characteristics of the radar spectrogram dataset using the ResNet-18 model and designed the lightweight ResNet-SP model for real-time systems. The results show that the proposed ResNet-SP has a training time of 242 seconds and an accuracy of 83.39%, which is superior to ResNet-18, which takes 640 seconds for training with an accuracy of 79.88%.
Abstract (in Korean, translated): In this paper, we propose a deep learning-based classification model that learns the unique micro-Doppler signatures of different moving targets formed on radar spectrogram images. To this end, we selected five small moving targets (three types of UAVs and two types of human activities), measured their various motions with a frequency modulated continuous wave radar, and generated our own radar spectrogram dataset by applying short-time Fourier transform signal processing followed by data refinement and augmentation preprocessing. We then analyzed the characteristics of the dataset using ResNet-18, an optical-image classification model. Considering the information distortion and loss incurred when converting radar signals into optical images, we compared performance across three radar signal formats and identified the optimal data format. By examining performance changes with respect to noise samples and model structure, we identified the key data features the model learns and an ideal model architecture. Finally, based on this characteristic analysis, we applied additional lightweighting and stabilization techniques to design the ResNet-SP model for real-time systems; a performance comparison with ResNet-18 confirmed improvements in computation speed, stability, and accuracy.
Abstract . . . . . . . . . . . . . . i
Contents . . . . . . . . . . . . . . ii
List of Tables . . . . . . . . . . . . iv
List of Figures . . . . . . . . . . . . v
1 Introduction . . . . . . . . . . . . . . . . . . . . 1
2 Related Works . . . . . . . . . . . . . . . . . . . 5
2.1 Micro Doppler Signature (MDS) . . . . . . . . 5
2.2 Classification of UAVs using MDS . . . . . . . 6
3 Dataset Generation . . . . . . . . . . . . . . . . . 9
3.1 Measurement . . . . . . . . . . . . . . . . . . 10
3.2 Pre-processing . . . . . . . . . . . . . . . . . . 12
4 Models . . . . . . . . . . . . . . . . . . . . . . . 21
4.1 ResNet-18 . . . . . . . . . . . . . . . . . . . 22
4.2 ResNet-SP . . . . . . . . . . . . . . . . . . . . 27
5 Experiment . . . . . . . . . . . . . . . . . . . . . 32
5.1 Experiment Result . . . . . . . . . . . . . . . 32
5.2 Training Details . . . . . . . . . . . . . . . . . 33
6 Conclusion . . . . . . . . . . . . . . . . . . . . . 34
Abstract (In Korean) . . . . . . . . . . . . . . . . . 38
Master
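The thesis's preprocessing pipeline (FMCW radar returns converted to spectrogram images via STFT) can be sketched as below. The frame length, hop size, Hann window, and the synthetic two-tone test signal are illustrative assumptions, not the thesis's actual radar parameters.

```python
import numpy as np

def stft_spectrogram(signal, frame_len=256, hop=64):
    """Short-time Fourier transform magnitude in dB, one frame per column.

    frame_len/hop/window are illustrative choices; the thesis's actual
    radar settings are not reproduced here.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    spectrum = np.fft.rfft(frames, axis=1)       # one-sided spectrum per frame
    power = np.abs(spectrum).T ** 2              # (freq_bins, n_frames)
    return 10.0 * np.log10(power + 1e-12)        # dB scale for imaging

# Synthetic stand-in for a micro-Doppler return: tones at 50 Hz and 120 Hz.
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spec = stft_spectrogram(x)
print(spec.shape)  # (129, 28): frame_len // 2 + 1 frequency bins, 28 frames
```

In the thesis's pipeline, images like `spec` would then be refined, augmented, and fed to the classifier.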
Classification of drones and birds using convolutional neural networks applied to radar micro-Doppler spectrogram images
Funding: UK Science and Technology Facilities Council ST/N006569/1 (DR). This study presents a convolutional neural network (CNN) based drone classification method. The primary criterion for a high-fidelity neural-network-based classification is a real dataset of large size and diversity for training. The first goal of the study was to create a large database of micro-Doppler spectrogram images of in-flight drones and birds. Two separate datasets with the same images have been created, one with RGB images and the other with grayscale images. The RGB dataset was used for GoogLeNet architecture-based training. The grayscale dataset was used for training with a series architecture developed during this study. Each dataset was further divided into two categories, one with four classes (drone, bird, clutter and noise) and the other with two classes (drone and non-drone). During training, 20% of the dataset was used as a validation set. After the completion of training, the models were tested with previously unseen and unlabelled sets of data. The validation and testing accuracies for the developed series network were found to be 99.6% and 94.4% respectively for four classes, and 99.3% and 98.3% respectively for two classes. The GoogLeNet based model showed both validation and testing accuracies of around 99% for all cases. Postprint. Peer reviewed.
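The 80/20 train/validation split described above can be sketched as follows. The class names match the paper's four-class setup, but the image sizes and per-class counts are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical labelled spectrogram images: 100 per class, 32x32 grayscale.
classes = ["drone", "bird", "clutter", "noise"]   # the paper's four classes
images = rng.random((400, 32, 32))
labels = np.repeat(np.arange(len(classes)), 100)

# Shuffle, then hold out 20% as the validation set (as in the paper).
order = rng.permutation(len(images))
split = int(0.8 * len(images))
train_idx, val_idx = order[:split], order[split:]

x_train, y_train = images[train_idx], labels[train_idx]
x_val, y_val = images[val_idx], labels[val_idx]
print(len(x_train), len(x_val))  # 320 80
```

A separate, previously unseen set would then be kept back entirely for the final testing accuracy the abstract reports.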
NRC-Net: Automated noise robust cardio net for detecting valvular cardiac diseases using optimum transformation method with heart sound signals
Cardiovascular diseases (CVDs) can be effectively treated when detected
early, reducing mortality rates significantly. Traditionally, phonocardiogram
(PCG) signals have been utilized for detecting cardiovascular disease due to
their cost-effectiveness and simplicity. Nevertheless, various environmental
and physiological noises frequently affect the PCG signals, compromising their
essential distinctive characteristics. The prevalence of this issue in
overcrowded and resource-constrained hospitals can compromise the accuracy of
medical diagnoses. Therefore, this study aims to discover the optimal
transformation method for detecting CVDs using noisy heart sound signals and
propose a noise robust network to improve the CVDs classification
performance. For the identification of the optimal transformation method for
noisy heart sound data, mel-frequency cepstral coefficients (MFCCs), the short-time
Fourier transform (STFT), the constant-Q nonstationary Gabor transform (CQT), and the
continuous wavelet transform (CWT) have been used with VGG16. Furthermore, we
propose a novel convolutional recurrent neural network (CRNN) architecture
called noise robust cardio net (NRC-Net), which is a lightweight model to
classify mitral regurgitation, aortic stenosis, mitral stenosis, mitral valve
prolapse, and normal heart sounds using PCG signals contaminated with
respiratory and random noises. An attention block is included to extract
important temporal and spatial features from the noise-corrupted heart
sound. The results of this study indicate that CWT is the optimal transformation
method for noisy heart sound signals. When evaluated on the GitHub heart sound
dataset, CWT demonstrates an accuracy of 95.69% for VGG16, which is 1.95%
better than the second-best CQT transformation technique. Moreover, our
proposed NRC-Net with CWT obtained an accuracy of 97.4%, which is 1.71% higher
than that of VGG16.
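A continuous wavelet transform of the kind this abstract identifies as optimal can be sketched with a numpy-only Morlet implementation. The wavelet parameter `w0`, the scale grid, and the synthetic noisy "heart sound" burst below are illustrative assumptions, not NRC-Net's actual settings.

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet.

    Returns a (len(scales), len(signal)) scalogram of magnitudes.
    w0 and the scale grid are illustrative, not NRC-Net's settings.
    """
    n = len(signal)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        m = int(min(10 * s, n))                  # truncated wavelet support
        t = np.arange(-m // 2, m // 2)
        wavelet = (np.exp(1j * w0 * t / s) *
                   np.exp(-0.5 * (t / s) ** 2)) / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out

# Synthetic stand-in for a PCG beat: a 40 Hz burst plus additive random noise.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
clean = np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(len(t))

scalogram = morlet_cwt(noisy, scales=np.arange(2, 32))
print(scalogram.shape)  # (30, 1000)
```

Scalogram images like this (rather than STFT, CQT, or MFCC representations) would be the input the abstract reports as most robust to noise.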
A survey on artificial intelligence-based acoustic source identification
The concept of Acoustic Source Identification (ASI), which refers to the process of identifying noise sources, has attracted increasing attention in recent years. ASI technology can be used for surveillance, monitoring, and maintenance applications in a wide range of sectors, such as defence, manufacturing, healthcare, and agriculture. Acoustic signature analysis and pattern recognition remain the core technologies for noise source identification. Manual identification of acoustic signatures, however, has become increasingly challenging as dataset sizes grow. As a result, the use of Artificial Intelligence (AI) techniques for identifying noise sources has become increasingly relevant and useful. In this paper, we provide a comprehensive review of AI-based acoustic source identification techniques. We analyze the strengths and weaknesses of AI-based ASI processes and the associated methods proposed by researchers in the literature. Additionally, we conducted a detailed survey of ASI applications in machinery, underwater applications, environment/event source recognition, healthcare, and other fields. We also highlight relevant research directions.
- …