
    Deep Neural Network Architectures for Modulation Classification

    In this work, we investigate the value of employing deep learning for the task of wireless signal modulation recognition. Recently in [1], a framework was introduced that generates a dataset using GNU Radio, mimicking the imperfections of a real wireless channel and using 10 different modulation types. Further, a convolutional neural network (CNN) architecture was developed and shown to deliver performance that exceeds that of expert-based approaches. Here, we follow the framework of [1] and find deep neural network architectures that deliver higher accuracy than the state of the art. We tested the architecture of [1] and found it to achieve an accuracy of approximately 75% in correctly recognizing the modulation type. We first tune the CNN architecture of [1] and find a design with four convolutional layers and two dense layers that gives an accuracy of approximately 83.8% at high SNR. We then develop architectures based on the recently introduced ideas of Residual Networks (ResNet [2]) and Densely Connected Networks (DenseNet [3]) to achieve high-SNR accuracies of approximately 83.5% and 86.6%, respectively. Finally, we introduce a Convolutional Long Short-term Deep Neural Network (CLDNN [4]) to achieve an accuracy of approximately 88.5% at high SNR.
    Comment: 5 pages, 10 figures, In proc. Asilomar Conference on Signals, Systems, and Computers, Nov. 201
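The tuned design described above, four convolutional layers followed by two dense layers over 2x128 I/Q frames, might be sketched as follows. This is a hedged illustration only: the filter counts, kernel sizes, and hidden width are assumptions for the sketch, not the authors' exact hyperparameters.

```python
import torch
import torch.nn as nn

class ModulationCNN(nn.Module):
    """Illustrative 4-conv / 2-dense classifier for 10 modulation types."""
    def __init__(self, num_classes=10):
        super().__init__()
        # Convolve along the 128 time samples; the (2, 3) kernel in the
        # third layer merges the I and Q rows. All sizes are assumptions.
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=(1, 3), padding=(0, 1)), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=(1, 3), padding=(0, 1)), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=(2, 3), padding=(0, 1)), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=(1, 3), padding=(0, 1)), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 128, 128), nn.ReLU(),   # first dense layer
            nn.Linear(128, num_classes),           # second dense layer
        )

    def forward(self, x):  # x: (batch, 1, 2, 128) I/Q frames
        return self.classifier(self.features(x))

model = ModulationCNN()
logits = model(torch.randn(4, 1, 2, 128))  # shape (4, 10)
```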

    Deep Neural Network Architectures for Modulation Classification

    This thesis investigates the value of employing deep learning for the task of wireless signal modulation recognition. Recently in deep learning research on AMC, a framework was introduced that generates a dataset using GNU Radio, mimicking the imperfections of a real wireless channel and using 10 different modulation types. Further, a CNN architecture was developed and shown to deliver performance that exceeds that of expert-based approaches. Here, we follow the framework of O'Shea [1] and find deep neural network architectures that deliver higher accuracy than the state of the art. We tested the architecture of O'Shea [1] and found it to achieve an accuracy of approximately 75% in correctly recognizing the modulation type. We first tune the CNN architecture and find a design with four convolutional layers and two dense layers that gives an accuracy of approximately 83.8% at high SNR. We then develop architectures based on the recently introduced ideas of Residual Networks (ResNet) and Densely Connected Networks (DenseNet) to achieve high-SNR accuracies of approximately 83% and 86.6%, respectively. We also introduce a Convolutional Long Short-term Deep Neural Network (CLDNN) to achieve an accuracy of approximately 88.5% at high SNR. To improve the classification accuracy of QAM, we calculate the high-order cumulants of QAM16 and QAM64 as expert features and improve the total accuracy to approximately 90%. Finally, by preprocessing the inputs and feeding them into an LSTM model, we improve all classification success rates to 100%, except for WBFM, which reaches 46%. Overall, the average modulation classification accuracy is improved by roughly 22% in this thesis.
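The high-order-cumulant feature mentioned above can be made concrete with a small sketch. For a complex signal x with moments M20 = E[x^2] and M40 = E[x^4], the fourth-order cumulant C40 = M40 - 3*M20^2 takes distinct theoretical values for unit-power QAM16 (-0.68) and QAM64 (about -0.619), which is why it can serve as an expert feature separating the two orders. The sketch below evaluates the expectation exactly over the ideal, noise-free constellation points; the thesis presumably estimates the same quantity from received samples.

```python
import numpy as np

def square_qam(levels_per_axis):
    """Ideal square-QAM constellation, normalized to unit average power."""
    pts = np.arange(-(levels_per_axis - 1), levels_per_axis, 2, dtype=float)
    a, b = np.meshgrid(pts, pts)
    x = (a + 1j * b).ravel()
    return x / np.sqrt(np.mean(np.abs(x) ** 2))  # E[|x|^2] = 1

def c40(x):
    """Fourth-order cumulant C40 = M40 - 3*M20^2 of zero-mean samples."""
    m20 = np.mean(x ** 2)
    m40 = np.mean(x ** 4)
    return m40 - 3 * m20 ** 2

qam16, qam64 = square_qam(4), square_qam(8)
# c40(qam16) -> -0.68 exactly; c40(qam64) -> -13/21 ~ -0.619
```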

    Deep Neural Network Architectures for Modulation Classification using Principal Component Analysis

    In this work, we investigate the application of Principal Component Analysis to the task of wireless signal modulation recognition using deep neural network architectures. Sampling signals at the Nyquist rate, which is often very high, requires a large amount of energy and space to collect and store the samples. Moreover, the time taken to train neural networks for the task of modulation classification is large due to the large number of samples. These problems can be drastically reduced using Principal Component Analysis, a technique that reduces the dimensionality, i.e., the number of features, of the samples used for training the neural networks. We used a framework for generating a dataset using GNU Radio that mimics the imperfections in a real wireless channel and uses 10 different types of modulations with 128 sampling points, where samples are collected at the Nyquist rate. The code implements Principal Component Analysis to reduce the number of features of the samples. We found that the dataset of sub-Nyquist-rate samples obtained using Principal Component Analysis requires drastically less time to train the neural networks than the dataset of samples collected at the Nyquist rate. Furthermore, the space required for the storage of the samples is also reduced after the application of Principal Component Analysis to the dataset.
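The dimensionality reduction described above can be sketched with a minimal SVD-based PCA. This is an illustration under stated assumptions: each example is taken to be a flattened 2x128 I/Q frame (256 features), the data here are random stand-ins for real frames, and the target dimensionality of 32 components is an arbitrary choice, not the value used in the work.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 256))  # stand-in for 1000 flattened I/Q frames

# PCA via SVD: center the data, then project onto the top-k right
# singular vectors (the principal components).
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 32                                # illustrative reduced dimensionality
X_reduced = Xc @ Vt[:k].T             # (1000, 32): smaller training input
explained = (S[:k] ** 2).sum() / (S ** 2).sum()  # fraction of variance kept
```

Training on `X_reduced` instead of `X` is what shrinks both storage and training time; the `explained` ratio quantifies how much signal variance the truncation retains.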

    A PyTorch Framework for Automatic Modulation Classification using Deep Neural Networks

    Automatic modulation classification of wireless signals is an important feature for both military and civilian applications, as it contributes to the intelligence capabilities of a wireless signal receiver. Signals that travel in space are usually modulated using different methods. It is important for a receiver or a demodulator of a system to be able to recognize the modulation type of the signal accurately and efficiently. The goal of our research is to use deep learning for the task of automatic modulation classification and to fine-tune the model parameters to achieve a faster run time. Different deep learning architectures were investigated in previous work, such as the Convolutional Neural Network (CNN) and the Convolutional Long Short-Term Memory Dense Neural Network (CLDNN). Our task here is to migrate the existing framework from Theano to PyTorch to better exploit the available multiple Graphics Processing Units (GPUs) for training the neural networks. The new PyTorch framework yielded accuracies similar to those of the original Theano framework with a faster run time, by utilizing data parallelism across multiple GPUs. We found, from experiments so far, that the reduction in run time is linearly proportional to the number of GPUs available.
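The multi-GPU data parallelism described above can be sketched with PyTorch's built-in wrapper, which scatters each batch across the visible GPUs and gathers the outputs. The toy model and sizes below are assumptions for illustration, not the framework's actual network; on a single-GPU or CPU-only machine the code simply runs on one device.

```python
import torch
import torch.nn as nn

# Illustrative stand-in model: 256-dim input features, 10 output classes.
model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 10))

# nn.DataParallel splits each batch across available GPUs, runs the
# forward pass in parallel, and gathers the per-GPU outputs on device 0.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model).cuda()

batch = torch.randn(32, 256)
if next(model.parameters()).is_cuda:
    batch = batch.cuda()

out = model(batch)  # (32, 10) logits, regardless of device count
```

Because the batch is split evenly, per-step work per GPU shrinks roughly in proportion to the GPU count, which is consistent with the roughly linear run-time reduction reported above.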