73 research outputs found

    Research on Modulation Classification in Cognitive Radio Using Machine Learning

    The current spectrum allocation cannot satisfy the demand for future wireless communications, which has prompted extensive study of feasible solutions to spectrum scarcity. Cognitive radio (CR) systems aim to keep the spectral-efficiency burden on the radio frequency terminal small by favouring low-power transmission, changeable carrier frequencies, and diverse modulation schemes. However, the recent surge in CR applications has been accompanied by an indispensable component, spectrum sensing, to avoid interference with the primary user. This requirement leads to a complex strategy for sensing and transmission and an increased demand for signal processing at the secondary user. The performance of spectrum sensing can nonetheless be extended by a robust modulation classification (MC) scheme that distinguishes between a primary user and a secondary user and identifies interference. For instance, the underlay paradigm, which enables concurrent transmission over the primary and secondary links, may need a precise measure of the interference that the secondary users cause to the primary users. The transmission power should be adjusted if there is a change in the modulation of the primary users implying a noise floor excess at the primary user location; otherwise, the primary user will be subject to interference and a collision may occur. Alternatively, the interweave paradigm, which improves spectrum efficiency by temporarily reusing allocated but idle spectrum, requires classification of the intercepted signal into primary and secondary systems. Moreover, when spectrum sensing is impossible, modulation classification can still distinguish noise from interference.
Therefore, modulation classification has been a fruitful area of study for over three decades. In this thesis, modulation classification algorithms using machine learning are investigated and new methods are proposed. Firstly, a supervised machine learning based modulation classification algorithm is proposed. Higher-order cumulants are selected as features because of their robustness to noise. A stacked denoising autoencoder, an extension of the neural network, is chosen as the classifier. On one hand, stacked pre-training overcomes the shortcoming of local optimization; on the other, the denoising function further enhances anti-noise performance. The performance of this method is compared with conventional methods in terms of classification accuracy and execution speed. Secondly, an unsupervised machine learning based modulation classification algorithm is proposed. Features are extracted from the time-frequency distribution. Density-based spatial clustering of applications with noise (DBSCAN) is used as the classifier because the number of clusters cannot be decided in advance. Simulation reveals that this method has higher classification accuracy than conventional methods. Moreover, no training phase is needed, so it has higher workability than the supervised method. Finally, the advantages and disadvantages of both approaches are summarized. As future work, algorithm optimization remains a challenging task because the computation capability of hardware is limited. For supervised machine learning, GPU computation is a potential way to reduce the execution cost, and if the modulation pool is altered, the network structure has to be redesigned as well. For unsupervised machine learning, shifting the symbols to the carrier frequency consumes extra computing resources. The University of Electro-Communications, 201
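    The higher-order cumulant features mentioned above can be sketched in a few lines. This is a minimal illustration and not the thesis's stacked-denoising-autoencoder pipeline: it computes the normalized fourth-order cumulant |C40|/C21², a classic power-independent feature whose theoretical value is 2 for BPSK and 1 for QPSK.

```python
import cmath
import random

def cumulant_c40(samples):
    """Normalized fourth-order cumulant: |M40 - 3*M20^2| / C21^2,
    where M20 = E[x^2], M40 = E[x^4], C21 = E[|x|^2]."""
    n = len(samples)
    m20 = sum(x * x for x in samples) / n
    m40 = sum(x ** 4 for x in samples) / n
    c21 = sum(abs(x) ** 2 for x in samples) / n
    return abs(m40 - 3 * m20 * m20) / (c21 * c21)

# Synthetic noiseless symbols: BPSK on the real axis,
# QPSK on the unit-circle diagonals.
random.seed(0)
bpsk = [complex(random.choice([-1.0, 1.0]), 0.0) for _ in range(4000)]
qpsk = [cmath.exp(1j * (cmath.pi / 4 + k * cmath.pi / 2))
        for k in (random.randrange(4) for _ in range(4000))]

print(round(cumulant_c40(bpsk), 2))  # theory: 2 for BPSK
print(round(cumulant_c40(qpsk), 2))  # theory: 1 for QPSK
```

    In a full classifier these cumulant values would be stacked into a feature vector and fed to the learned model; noise robustness comes from the averaging inherent in the moment estimates.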

    Sleep Stage Classification: A Deep Learning Approach

    Sleep occupies a significant part of human life, and the diagnosis of sleep-related disorders is of great importance. To record specific physical and electrical activities of the brain and body, a multi-parameter test called polysomnography (PSG) is normally used. The visual process of sleep stage classification is time consuming, subjective, and costly. To improve the accuracy and efficiency of sleep stage classification, automatic classification algorithms have been developed. In this research work, we focused on the pre-processing (filtering boundaries and de-noising algorithms) and classification steps of automatic sleep stage classification. The main motivation for this work was to develop a pre-processing and classification framework that cleans the input EEG signal without manipulating the original data, thus enhancing the learning stage of deep learning classifiers. For pre-processing EEG signals, a lossless adaptive artefact removal method was proposed. Unlike other works that used artificial noise, we used real EEG data contaminated with EOG and EMG to evaluate the proposed method. The proposed adaptive algorithm led to a significant enhancement in overall classification accuracy. In the classification area, we evaluated the performance of the most common sleep stage classifiers using a comprehensive set of features extracted from PSG signals. Considering the challenges and limitations of conventional methods, we proposed two deep learning-based methods for classification of sleep stages, based on a Stacked Sparse AutoEncoder (SSAE) and a Convolutional Neural Network (CNN). The proposed methods performed more efficiently by eliminating the need for the conventional feature selection and feature extraction steps, respectively. Moreover, although our systems were trained with fewer samples than similar studies, they were able to achieve state-of-the-art accuracy and higher overall sensitivity.
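    One fixed step in any such pipeline is worth making concrete: sleep stages are conventionally scored on 30-second epochs, so the recorded signal is segmented into fixed windows before cleaning, feature extraction, or a deep classifier. The sketch below is illustrative only (the sampling rate and signal are placeholders, not values from this work):

```python
def segment_epochs(signal, fs, epoch_s=30):
    """Split a 1-D signal into non-overlapping 30 s epochs,
    the standard scoring window in sleep staging."""
    n = fs * epoch_s
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

fs = 100  # assumed sampling rate in Hz
eeg = [0.0] * (fs * 95)  # 95 s of placeholder signal
epochs = segment_epochs(eeg, fs)
print(len(epochs), len(epochs[0]))  # → 3 3000
```

    Note that the trailing 5 s that do not fill a whole epoch are dropped, which is the usual convention when epochs must align with manual scoring.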

    Deep Recurrent Learning for Efficient Image Recognition Using Small Data

    Recognition is a fundamental yet open and challenging problem in computer vision. It involves the detection and interpretation of complex shapes of objects or persons from previous encounters or knowledge. Biological systems are considered the most powerful, robust, and generalized recognition models. The recent success of learning-based mathematical models known as artificial neural networks, especially deep neural networks, has propelled researchers to utilize such architectures for developing bio-inspired computational recognition models. However, the computational complexity of these models increases in proportion to the challenges posed by the recognition problem, and, more importantly, these models require a large amount of data for successful learning. Additionally, the feedforward-based hierarchical models do not exploit another important biological learning paradigm, known as recurrency, which exists ubiquitously in the biological visual system and has been shown to be crucial for recognition. Consequently, this work aims to develop novel biologically relevant deep recurrent learning models for robust recognition using limited training data. First, we design an efficient deep simultaneous recurrent network (DSRN) architecture for solving several challenging image recognition tasks. The use of simultaneous recurrency in the proposed model improves recognition performance and offers reduced computational complexity compared to existing hierarchical deep learning models. Moreover, the DSRN architecture inherently learns meaningful representations of data during training, which is essential to achieving superior recognition performance. However, probabilistic models such as deep generative models are particularly adept at learning representations directly from unlabeled input data.
Accordingly, we show the generalization of the proposed deep simultaneous recurrency concept by developing a probabilistic deep simultaneous recurrent belief network (DSRBN) architecture, which is more efficient at learning the underlying representation of the data than state-of-the-art generative models. Finally, we propose a deep recurrent learning framework for solving the image recognition task using small data. We incorporate Bayesian statistics into the DSRBN generative model to propose a deep recurrent generative Bayesian model that addresses the challenge of learning from a small amount of data. Our findings suggest that the proposed deep recurrent Bayesian framework demonstrates better image recognition performance than state-of-the-art models in a small-data learning scenario. In conclusion, this dissertation proposes novel deep recurrent learning pipelines that not only utilize limited training data to achieve improved image recognition performance but also require significantly fewer training parameters.
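    The core mechanism of simultaneous recurrency, iterating the same layer until its activation settles to a fixed point rather than stacking new layers, can be illustrated with a scalar toy unit. This is a hedged sketch under assumed weights (w, b below are made up), not the DSRN architecture itself:

```python
import math

def srn_settle(u, w=0.5, b=0.1, tol=1e-9):
    """Relax a single recurrent unit x <- tanh(w*x + u + b) to a fixed
    point; with |w| < 1 the update is a contraction, so it converges."""
    x = 0.0
    while True:
        nx = math.tanh(w * x + u + b)
        if abs(nx - x) < tol:
            return nx
        x = nx

y = srn_settle(0.8)
# the settled state satisfies the fixed-point equation x = tanh(w*x + u + b)
print(abs(y - math.tanh(0.5 * y + 0.8 + 0.1)) < 1e-6)  # → True
```

    Reusing one set of weights across iterations is what keeps the parameter count low compared to a feedforward stack of equal effective depth.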

    Joint 1D and 2D Neural Networks for Automatic Modulation Recognition

    The digital communication and radar community has recently shown more interest in using data-driven approaches for tasks such as modulation recognition, channel estimation, and distortion correction. In this research we apply an object detector for parameter estimation to perform waveform separation in the time and frequency domains prior to classification. This enables the full automation of detecting and classifying simultaneously occurring waveforms. We leverage a 1D ResNet implemented by O'Shea et al. in [1] and the YOLO v3 object detector designed by Redmon et al. in [2]. We conducted an in-depth study of the performance of these architectures and integrated the models to perform joint detection and classification. To our knowledge, this is the first research to study and successfully combine a 1D ResNet classifier and a YOLO v3 object detector to fully automate the process of automatic modulation recognition (AMR) for parameter estimation, pulse extraction, and waveform classification in non-cooperative scenarios. The overall performance of the joint detector/classifier is 90% at a 10 dB signal-to-noise ratio for 24 digital and analog modulations.
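    Evaluating a YOLO-style detector of waveforms rests on intersection-over-union between predicted and ground-truth boxes in the time-frequency plane. A minimal axis-aligned version, illustrative rather than the authors' code:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2),
    e.g. (t_start, f_low, t_end, f_high) in the time-frequency plane."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

# partially overlapping boxes: intersection 2*2 = 4, union 16 + 16 - 4 = 28
print(round(iou((0, 0, 4, 4), (2, 2, 6, 6)), 4))  # → 0.1429
```

    A detection is typically counted as correct when its IoU with a ground-truth waveform box exceeds a threshold (0.5 is a common choice), after which the extracted segment is passed to the classifier.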

    Deep Learning For Sequential Pattern Recognition

    Project carried out within a mobility programme with the Technische Universität München (TUM). In recent years, deep learning has opened a new research line in pattern recognition tasks. It has been hypothesized that this kind of learning captures more abstract patterns concealed in data. It is motivated by new findings both in the biological aspects of the brain and in hardware developments that have made parallel processing possible. Deep learning methods, combined with conventional algorithms for optimization and training, are efficient for a variety of applications in signal processing and pattern recognition. This thesis explores these novel techniques and their related algorithms. It addresses and compares different attributes of these methods and sketches their possible advantages and disadvantages.

    Modularity and Neural Integration in Large-Vocabulary Continuous Speech Recognition

    This thesis tackles the problems of modularity in large-vocabulary continuous speech recognition with the use of neural networks.