31 research outputs found

    Bit error performance of diffuse indoor optical wireless channel pulse position modulation system employing artificial neural networks for channel equalisation

    The bit-error rate (BER) performance of a pulse position modulation (PPM) scheme for non-line-of-sight indoor optical links employing channel equalisation based on an artificial neural network (ANN) is reported. Channel equalisation is achieved by training a multilayer perceptron ANN. A comparative study of unequalised 'soft' decision decoding and 'hard' decision decoding, along with neural-equalised 'soft' decision decoding, is presented for different bit resolutions and for optical channels with different delay spreads. We show that unequalised 'hard' decision decoding performs the worst for all values of normalised delay spread, becoming impractical beyond a normalised delay spread of 0.6. 'Soft' decision decoding, with or without equalisation, displays comparatively improved performance for all values of the delay spread. The study shows that for a highly diffuse channel, the signal-to-noise ratio required to achieve a BER of 10⁻⁵ with the ANN-based equaliser is ~10 dB lower than with unequalised 'soft' decoding for 16-PPM at a data rate of 155 Mbps. Our results indicate that, across the full range of delay spreads, neural network equalisation is an effective tool for mitigating inter-symbol interference.
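    As a concrete illustration of the approach described above, the following is a minimal sketch of an MLP equaliser applied to 16-PPM slots over a dispersive channel. The exponential impulse response, noise level, window length and network size are all illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

L, n_sym = 16, 2000                     # 16-PPM, number of symbols
sym = rng.integers(0, L, n_sym)

# One "on" slot per symbol.
tx = np.zeros(n_sym * L)
tx[np.arange(n_sym) * L + sym] = 1.0

# Diffuse channel: exponentially decaying impulse response (an assumed
# stand-in for the measured delay spread) plus Gaussian noise.
h = np.exp(-np.arange(8) / 2.0)
h /= h.sum()
rx = np.convolve(tx, h)[: tx.size] + 0.05 * rng.standard_normal(tx.size)

# Equaliser input: sliding window of received slots; target: the
# transmitted value of the window's centre slot.
W = 9                                   # assumed window length
X = np.lib.stride_tricks.sliding_window_view(rx, W)
y = tx[W // 2 : W // 2 + X.shape[0]]

eq = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
eq.fit(X, y)

# 'Soft' decision: align the equalised slots with the symbol frame and
# pick the strongest slot in each symbol.
z = np.zeros_like(tx)
z[W // 2 : W // 2 + X.shape[0]] = eq.predict(X)
sym_hat = z.reshape(n_sym, L).argmax(axis=1)
print("symbol error rate:", np.mean(sym_hat != sym))
```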

    Adaptive Channel Equalization using Radial Basis Function Networks and MLP

    One of the major practical problems in digital communication systems is channel distortion, which causes errors due to intersymbol interference. Since the source signal is in general broadband, its various frequency components experience different steady-state amplitude and phase changes as they pass through the channel, distorting the received message. This distortion translates into errors in the received sequence. The communication engineer's problem is to restore the transmitted sequence or, equivalently, to identify the inverse of the channel, given the observed sequence at the channel output. This task is accomplished by adaptive equalizers. Typically, adaptive equalizers used in digital communications require an initial training period, during which a known data sequence is transmitted. A replica of this sequence is made available at the receiver in proper synchronism with the transmitter, making it possible to adjust the equalizer coefficients in accordance with the adaptive filtering algorithm employed in the equalizer design. When training is completed, the equalizer is switched to its decision-directed mode. Decision feedback equalizers (DFEs) are used extensively in practical communication systems. They are more powerful than linear equalizers, especially for channels with severe intersymbol interference (ISI), without as much noise enhancement as linear equalizers. This thesis addresses the problem of adaptive channel equalization in environments where the interfering noise exhibits Gaussian behavior. In this thesis, a radial basis function (RBF) network is used to implement a DFE. Advantages and problems of this system are discussed, and its results are compared with those of a DFE using a multilayer perceptron (MLP) network. Results indicate that the implemented system outperforms both the least-mean-square (LMS) algorithm and the MLP at the same signal-to-noise ratio, as it offers the minimum mean square error. The learning rate of the implemented system is also faster than that of both the LMS and the MLP-based equalizers.
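    To make the RBF-based DFE idea concrete, here is a minimal sketch for BPSK over a short ISI channel. The channel taps, number of centres, kernel width and the one-shot least-squares fit (standing in for the adaptive training discussed above) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# BPSK symbols through a 3-tap ISI channel with Gaussian noise.
s = rng.choice([-1.0, 1.0], size=5000)
h = np.array([0.4, 0.8, 0.4])           # assumed channel taps
r = np.convolve(s, h)[: s.size] + 0.1 * rng.standard_normal(s.size)

# Detect s[n-1] from three received samples (feedforward) plus the
# previous symbol (feedback). Training mode uses the known symbol; a
# deployed DFE would feed back its own past decisions instead.
rows, d = [], []
for n in range(2, s.size):
    ff = r[n - 2 : n + 1][::-1]         # r[n], r[n-1], r[n-2]
    fb = s[n - 2 : n - 1]               # previous (decided) symbol
    rows.append(np.concatenate([ff, fb]))
    d.append(s[n - 1])
X, d = np.array(rows), np.array(d)

# RBF layer: centres drawn from the training data, Gaussian
# activations, linear output weights by least squares.
centres = X[rng.choice(len(X), size=16, replace=False)]
width = 1.0                             # assumed kernel width
Phi = np.exp(-((X[:, None, :] - centres[None]) ** 2).sum(-1)
             / (2 * width ** 2))
w, *_ = np.linalg.lstsq(Phi, d, rcond=None)

print("training-mode BER:", np.mean(np.sign(Phi @ w) != d))
```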

    Machine Learning Tips and Tricks for Power Line Communications

    Tonello, A. M.; Letizia, N. A.; Righini, D.; Marcuzzi, F.

    Sleep Stage Classification: A Deep Learning Approach

    Sleep occupies a significant part of human life, and the diagnosis of sleep-related disorders is of great importance. To record specific physical and electrical activities of the brain and body, a multi-parameter test called polysomnography (PSG) is normally used. The visual process of sleep stage classification is time-consuming, subjective and costly. To improve the accuracy and efficiency of sleep stage classification, automatic classification algorithms have been developed. In this research work, we focused on the pre-processing (filtering boundaries and de-noising algorithms) and classification steps of automatic sleep stage classification. The main motivation for this work was to develop a pre-processing and classification framework that cleans the input EEG signal without manipulating the original data, thus enhancing the learning stage of deep learning classifiers. For pre-processing EEG signals, a lossless adaptive artefact removal method was proposed. Rather than using artificial noise as in other works, we used real EEG data contaminated with EOG and EMG to evaluate the proposed method. The proposed adaptive algorithm led to a significant enhancement in overall classification accuracy. In the classification area, we evaluated the performance of the most common sleep stage classifiers using a comprehensive set of features extracted from PSG signals. Considering the challenges and limitations of conventional methods, we proposed two deep-learning-based methods for classification of sleep stages, based on a stacked sparse autoencoder (SSAE) and a convolutional neural network (CNN). The proposed methods performed more efficiently by eliminating the need for the conventional feature selection and feature extraction steps, respectively. Moreover, although our systems were trained with fewer samples than similar studies, they achieved state-of-the-art accuracy and higher overall sensitivity.
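    To illustrate the CNN branch of this work, here is a minimal PyTorch sketch of a small 1-D convolutional network mapping raw 30-second EEG epochs to one of five sleep stages. The sampling rate, epoch length and layer sizes are illustrative assumptions; this does not reproduce the thesis's actual architecture.

```python
import torch
import torch.nn as nn

class SleepCNN(nn.Module):
    """Toy 1-D CNN: one EEG epoch in, one sleep-stage logit vector out."""

    def __init__(self, n_stages: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),      # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_stages)

    def forward(self, x):                 # x: (batch, 1, fs * epoch_seconds)
        z = self.features(x).squeeze(-1)  # (batch, 32)
        return self.classifier(z)         # raw logits; use CrossEntropyLoss

model = SleepCNN()
dummy = torch.randn(8, 1, 100 * 30)       # 8 fake 30 s epochs at 100 Hz
print(model(dummy).shape)                 # torch.Size([8, 5])
```

    Learning the convolutional filters directly from the raw epochs is what removes the hand-crafted feature extraction step mentioned above.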

    High-speed Channel Analysis and Design using Polynomial Chaos Theory and Machine Learning

    With the exponential increase in the data rates of high-speed serial channels, their efficient and accurate analysis and design have become crucially important. Signal integrity analysis of these channels is often done with eye diagram analysis, which reveals the jitter and noise of the channel. Conventional methods for this type of analysis are either exorbitantly time- and memory-consuming, or only applicable to linear time-invariant (LTI) systems. On the other hand, recent advancements in numerical methods and machine learning have shown great potential for the analysis and design of high-speed electronics. Therefore, in this dissertation we introduce two novel approaches for efficient eye analysis, based on machine learning and numerical techniques. These methods focus on data-dependent jitter and noise, and on intersymbol interference. In the first approach, a complete surrogate model of the channel is trained using a short transient simulation. This model is based on polynomial chaos theory. It can directly and quickly provide the distribution of the jitter and other statistics of the eye diagram, along with an estimation of the full eye diagram. The second analysis method provides faster analysis when we are interested in finding the worst-case eye width, eye height, and inner eye opening that conventional eye analysis would reach if its transient simulation were continued for an arbitrarily long time. The proposed approach quickly finds the data patterns resulting in the worst signal integrity, and hence the most closed eye. This method is based on Bayesian optimization. Although the majority of the contributions of this dissertation are on the analysis side, for the sake of completeness the final portion of this work is dedicated to the design of high-speed channels with machine learning, since the interference and complex interactions in modern channels have made their design challenging and time-consuming as well. The proposed design approach focuses on the inverse design of a continuous-time linear equalizer (CTLE): given the desired eye height and eye width, the algorithm finds the corresponding peaking and DC gain of the CTLE. This approach is based on invertible neural networks, whose main advantage is the ability to provide multiple solutions when the answer to the inverse problem is not unique. Numerical examples are provided to evaluate the efficiency and accuracy of the proposed approaches. The results show up to 11.5X speedup for direct estimation of the jitter distribution using the PC surrogate model approach, up to 23X speedup using the worst-case eye analysis approach, and promising results for the inverse design of the CTLE.
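    For context on what the conventional LTI analysis referred to above computes, here is a minimal sketch of peak distortion analysis, which bounds the worst-case eye height directly from a sampled pulse response; the pulse-response values are illustrative assumptions. The dissertation's Bayesian-optimization approach targets exactly the cases where such a closed-form bound does not apply.

```python
import numpy as np

# Single-bit (pulse) response sampled once per unit interval:
# one main cursor plus ISI pre- and post-cursors (assumed values).
pulse = np.array([0.05, 0.10, 1.00, 0.30, 0.12, 0.04])
main = np.argmax(np.abs(pulse))

# The worst-case data pattern flips every ISI cursor against the main
# cursor, so the worst-case eye height is the main cursor magnitude
# minus the summed magnitudes of all other cursors.
isi = np.delete(pulse, main)
worst_eye = np.abs(pulse[main]) - np.abs(isi).sum()
print(f"worst-case eye height: {worst_eye:.2f}")   # 0.39 here
```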

    Large-scale Machine Learning in High-dimensional Datasets
