863 research outputs found

    Machine Learning in Digital Signal Processing for Optical Transmission Systems

    Get PDF
    The future demand for digital information will exceed the capabilities of current optical communication systems, which are approaching their limits due to component and fiber intrinsic non-linear effects. Machine learning methods are promising for finding new ways to leverage the available resources and to explore new solutions. Although some machine learning methods, such as adaptive non-linear filtering and probabilistic modeling, are not novel in the field of telecommunications, more powerful architecture designs together with increasing computing power make it possible to tackle more complex problems today. The methods presented in this work apply machine learning to optical communication systems, with two main contributions. First, an unsupervised learning algorithm with an embedded additive white Gaussian noise (AWGN) channel and an appropriate power constraint is trained end-to-end, learning a geometric constellation shape that yields the lowest bit-error rates over amplified and unamplified links. Second, supervised machine learning methods, especially deep neural networks with and without internal cyclical connections, are investigated to combat linear and non-linear inter-symbol interference (ISI) as well as colored noise effects introduced by the components and the fiber. On high-bandwidth coherent optical transmission setups, their performance and complexity are experimentally evaluated and benchmarked against conventional digital signal processing (DSP) approaches. This thesis shows how machine learning can be applied to optical communication systems. In particular, it demonstrates that machine learning is a viable design and DSP tool for increasing the capabilities of optical communication systems.
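The end-to-end training setup described above hinges on a differentiable channel model: a power-normalization layer (the power constraint) followed by additive white Gaussian noise, with a minimum-distance receiver at the output. As a minimal sketch of that channel model only (not the thesis's implementation; all function names are illustrative, and a fixed 16-QAM grid stands in for a learned geometric constellation shape):

```python
import numpy as np

def normalize_power(constellation):
    # Enforce the average-power constraint E[|x|^2] = 1.
    return constellation / np.sqrt(np.mean(np.abs(constellation) ** 2))

def awgn(x, snr_db, rng):
    # Complex AWGN with noise power N0 = 10^(-SNR/10) (signal power is 1).
    n0 = 10 ** (-snr_db / 10)
    noise = rng.normal(scale=np.sqrt(n0 / 2), size=x.shape) \
          + 1j * rng.normal(scale=np.sqrt(n0 / 2), size=x.shape)
    return x + noise

def nearest_symbol(y, constellation):
    # Minimum-distance decision (maximum likelihood for AWGN).
    return np.argmin(np.abs(y[:, None] - constellation[None, :]), axis=1)

rng = np.random.default_rng(0)
# 16-QAM as a stand-in for a learned geometric shape.
pts = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)],
               dtype=complex)
const = normalize_power(pts)
tx_idx = rng.integers(0, 16, size=20000)
rx = awgn(const[tx_idx], snr_db=15, rng=rng)
ser = np.mean(nearest_symbol(rx, const) != tx_idx)
```

In the autoencoder view, the transmitter network's outputs replace the fixed grid `pts`, and gradients flow through the noise addition back to the constellation points.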

    Flexible methods for blind separation of complex signals

    Get PDF
    One of the main issues in Blind Source Separation (BSS) performed with a neural network approach is the choice of the nonlinear activation function (AF). In fact, if the shape of the activation function is chosen as the cumulative distribution function (c.d.f.) of the original source, the problem is solved. To this end, this thesis introduces a flexible approach in which the shape of the activation functions is changed during the learning process using so-called “spline functions”. The problem is more complicated in the case of separation of complex sources, where there is a dichotomy between analyticity and boundedness of the complex activation functions. This problem is solved by introducing the “splitting function” model as the activation function. The “splitting function” is a pair of “spline functions” that represent the real and the imaginary part of the complex activation function, each depending on the real or the imaginary variable, respectively. A more realistic model is the “generalized splitting function”, which is formed by a pair of bi-dimensional functions (surfaces), one for the real and one for the imaginary part of the complex function, each depending on both the real and the imaginary part of the complex variable. Unfortunately, the linear environment is unrealistic in many practical applications, so the BSS problem needs to be extended to the nonlinear environment: in this case both the activation function and the nonlinear distorting function are realized by “splitting functions” made of “spline functions”. Complex and instantaneous separation in linear and nonlinear environments allows a complex-valued extension of the well-known INFOMAX algorithm in several practical situations, such as convolutive mixtures, fMRI signal analysis, and bandpass signal transmission. In addition, advanced characteristics of the proposed approach are introduced and described in depth.
First of all, it is shown that splines are universal nonlinear functions for the BSS problem: they are able to perform separation in any case. Then it is analyzed how the “splitting solution” allows the algorithm to obtain phase recovery, where usually there is a phase ambiguity. Finally, a Cramér-Rao lower bound for ICA is discussed. Several experimental results, evaluated with different objective indexes, show the effectiveness of the proposed approaches.
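The splitting-function idea above can be made concrete: a bounded analytic function on the whole complex plane must be constant (Liouville's theorem), so instead the activation is split as f(z) = f_R(Re z) + j·f_I(Im z), with each part a learnable one-dimensional spline. A minimal sketch under stated assumptions (piecewise-linear interpolation stands in for the thesis's adaptive cubic splines; both shapes are initialized to tanh; function names are illustrative):

```python
import numpy as np

def spline_eval(knots_x, knots_y, t):
    # Piecewise-linear stand-in for an adaptive spline lookup table;
    # the control points knots_y would be adapted during learning.
    return np.interp(t, knots_x, knots_y)

def splitting_activation(z, kx, ky_re, ky_im):
    # f(z) = f_R(Re z) + j f_I(Im z): bounded and differentiable in the
    # real sense, sidestepping the analyticity/boundedness dichotomy.
    return spline_eval(kx, ky_re, z.real) + 1j * spline_eval(kx, ky_im, z.imag)

kx = np.linspace(-4, 4, 17)       # 17 evenly spaced control abscissae
ky = np.tanh(kx)                  # initialize both spline shapes to tanh
z = np.array([0.5 + 0.5j, 2.0 - 1.0j])
out = splitting_activation(z, kx, ky, ky)
```

Adapting the two control-point vectors `ky_re` and `ky_im` during learning is what lets the activation's shape track the source's c.d.f., which is the core of the flexible approach.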
    Survey of FPGA applications in the period 2000 – 2015 (Technical Report)

    Get PDF
    Romoth J, Porrmann M, Rückert U. Survey of FPGA applications in the period 2000 – 2015 (Technical Report); 2017. Since their introduction, FPGAs have appeared in more and more fields of application. Their key advantage is the combination of software-like flexibility with performance otherwise typical of hardware. Nevertheless, every application field imposes special requirements on the computational architecture used. This paper provides an overview of the different topics FPGAs have been used for in the last 15 years of research, and of why they have been chosen over other processing units such as CPUs.

    Design of large polyphase filters in the Quadratic Residue Number System

    Full text link

    Numerical Approaches Towards the Galactic Synchrotron Emission

    Get PDF
    The Galactic synchrotron emission contains abundant physics of the magnetized Galactic interstellar medium and has a non-negligible influence on detecting the B-mode polarization of the cosmic microwave background radiation and on understanding the physics of the re-ionization epoch. To keep up with the growing precision of astrophysical measurements, we need not only better theoretical modeling, but also more powerful numerical simulations and analysis pipelines for acquiring a deeper understanding of both the Galactic environment and the origin of the Universe. In this dissertation, we focus on the Galactic synchrotron emission, which involves the turbulent and magnetized interstellar medium and energetic cosmic-ray electrons. To study the Galactic synchrotron emission consistently, we need a non-trivial Bayesian analyzer with a specially designed likelihood function, a fast and precise radiative transfer simulator, and a cosmic-ray electron propagation solver. We first present version X of the hammurabi package, the HEALPix-based numerical simulator for Galactic polarized emission. Two fast methods are proposed for realizing divergence-free Gaussian random magnetic fields, either on the Galactic scale, where a field alignment and strength modulation are imposed, or on a local scale, where more physically motivated models such as a parameterized magneto-hydrodynamic turbulence can be applied. Secondly, we present our effort in using the finite element method to solve the cosmic-ray (electron) transport equation within a phase-space domain whose number of dimensions varies from two to six. The numerical package BIFET is developed on top of the deal.ii library, with support for adaptive mesh refinement. Our first aim with BIFET is to build a basic framework that can support high-dimensional PDE solving.
Finally, we introduce the work related to the complete design of IMAGINE, which is proposed particularly with the ensemble likelihood for inferring the distributions of Galactic components.

    Reliability of Extreme Learning Machines

    Get PDF
    Neumann K. Reliability of Extreme Learning Machines. Bielefeld: Bielefeld University Library; 2014. The reliable application of machine learning methods becomes increasingly important in challenging engineering domains. In particular, the application of extreme learning machines (ELMs) seems promising because of their apparent simplicity and their capability for very efficient processing of large and high-dimensional data sets. However, the ELM paradigm is based on the concept of single hidden-layer neural networks with randomly initialized and fixed input weights and is thus inherently unreliable. This black-box character usually repels engineers from applying them in potentially safety-critical tasks. The problem becomes even more severe since, in principle, only sparse and noisy data sets can be provided in such domains. The goal of this thesis is therefore to equip the ELM approach with the ability to perform in a reliable manner. This goal is approached in three respects: enhancing the robustness of ELMs to initializations, enabling ELMs to handle slow changes in the environment (i.e. input drifts), and allowing the incorporation of continuous constraints derived from prior knowledge. It is shown in several diverse scenarios that the novel ELM approach proposed in this thesis ensures a safe and reliable application while simultaneously sustaining the full modeling power of data-driven methods.
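The ELM paradigm described above is simple enough to state in a few lines: the input weights are drawn at random and frozen, and only the linear readout is trained, typically by (ridge-regularized) least squares. A minimal sketch of the baseline scheme (not the thesis's extended, reliability-enhanced variants; hyperparameters and the toy regression task are illustrative):

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, seed=0, reg=1e-6):
    """Extreme learning machine: random, fixed input weights; only the
    linear readout beta is trained via ridge-regularized least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # frozen input weights
    b = rng.normal(size=n_hidden)                 # frozen biases
    H = np.tanh(X @ W + b)                        # random hidden features
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy task: fit a noisy 1-D sine.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)
W, b, beta = elm_fit(X, y)
mse = np.mean((elm_predict(X, W, b, beta) - np.sin(X[:, 0])) ** 2)
```

The dependence of the solution on the random draw of `W` and `b` is exactly the initialization sensitivity the thesis addresses; the thesis's contributions add robustness, drift handling, and constraint incorporation on top of this baseline.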