
    Stochastic Digital Backpropagation

    In this paper, we propose a novel detector for single-channel long-haul coherent optical communications, termed stochastic digital backpropagation (SDBP), which takes into account noise from the optical amplifiers in addition to handling deterministic linear and nonlinear impairments. We discuss the design approach behind this detector, which is based on the maximum a posteriori (MAP) principle. As closed-form expressions for the MAP detector are not tractable for coherent optical transmission, we employ the framework of Bayesian graphical models, which allows a numerical evaluation of the proposed detector. Through simulations, we observe that by accounting for nonlinear signal–noise interactions, SDBP achieves a significant improvement in system reach over digital backpropagation (DBP) for systems with periodic inline optical dispersion compensation. In uncompensated links with high symbol rates, the difference in system reach between SDBP and DBP is small. In the absence of noise, the proposed detector is equivalent to the well-known DBP detector.
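
    For context, here is a hedged sketch of the conventional split-step digital backpropagation (DBP) baseline that SDBP is compared against; SDBP itself additionally tracks amplifier noise statistics via Bayesian graphical models, which is not shown here. The function name, default parameter values, and sign conventions below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dbp(rx, fs, n_spans, span_km, steps_per_span,
        beta2=-21.7e-27, gamma=1.3e-3):
    """Split-step DBP sketch: propagate the received field through a virtual
    fiber with negated dispersion and Kerr nonlinearity. Spans are treated as
    lossless (amplifier gain assumed to exactly cancel fiber loss)."""
    n = rx.size
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)     # angular frequency grid
    dz = span_km * 1e3 / steps_per_span                   # step size [m]
    inv_disp = np.exp(-1j * (beta2 / 2) * omega**2 * dz)  # negated dispersion operator
    x = np.asarray(rx, dtype=complex)
    for _ in range(n_spans * steps_per_span):
        x = np.fft.ifft(np.fft.fft(x) * inv_disp)         # linear (dispersion) step
        x *= np.exp(-1j * gamma * np.abs(x)**2 * dz)      # nonlinear (Kerr) step
    return x
```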

    Stochastic Digital Backpropagation with Residual Memory Compensation

    Stochastic digital backpropagation (SDBP) is an extension of digital backpropagation (DBP) and is based on the maximum a posteriori principle. SDBP takes into account noise from the optical amplifiers in addition to handling deterministic linear and nonlinear impairments. Decisions in SDBP are taken on a symbol-by-symbol (SBS) basis, ignoring any residual memory that may be present due to suboptimal processing in SDBP. In this paper, we extend SDBP to account for memory between symbols. In particular, two different methods are proposed: a Viterbi algorithm (VA) and a decision-directed approach. The symbol error rate (SER) of memory-based SDBP is significantly lower than that of the previously proposed SBS-SDBP. For inline dispersion-managed links, VA-SDBP has up to 10 and 14 times lower SER than DBP for QPSK and 16-QAM, respectively.
    Comment: 7 pages, accepted for publication in the Journal of Lightwave Technology (JLT).
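
    VA-SDBP runs a Viterbi search over the residual memory left after backpropagation. The sketch below shows only the generic Viterbi machinery for a channel with short memory and Euclidean branch metrics; the constellation, channel taps, and uniform initialization are placeholder assumptions rather than the processing described in the paper.

```python
import numpy as np
from itertools import product

def viterbi_detect(y, const, h):
    """Viterbi detection for y[k] ~ h[0]*s[k] + ... + h[L]*s[k-L] + noise.
    A state is the tuple of the last L symbol indices (newest first)."""
    M, L = len(const), len(h) - 1
    states = list(product(range(M), repeat=L))
    cost = {s: 0.0 for s in states}               # unknown past: uniform start
    back = []
    for yk in y:
        new_cost, bp = {}, {}
        for s, c in cost.items():
            for a in range(M):                    # hypothesize the current symbol
                pred = h[0] * const[a] + sum(h[m + 1] * const[s[m]] for m in range(L))
                nxt = ((a,) + s)[:L]              # shift the symbol register
                metric = c + abs(yk - pred) ** 2
                if nxt not in new_cost or metric < new_cost[nxt]:
                    new_cost[nxt], bp[nxt] = metric, (s, a)
        cost = new_cost
        back.append(bp)
    s = min(cost, key=cost.get)                   # best terminating state
    decided = []
    for bp in reversed(back):                     # traceback
        s, a = bp[s]
        decided.append(a)
    return decided[::-1]
```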

    Improved Lower Bounds on Mutual Information Accounting for Nonlinear Signal-Noise Interaction

    In fiber-optic communications, evaluation of mutual information (MI) is still an open issue due to the unavailability of an exact and mathematically tractable channel model. Traditionally, lower bounds on MI are computed by approximating the (original) channel with an auxiliary forward channel. In this paper, lower bounds are computed using an auxiliary backward channel, which has not been previously considered in the context of fiber-optic communications. Distributions obtained through two variations of the stochastic digital backpropagation (SDBP) algorithm are used as auxiliary backward channels, and these bounds are compared with bounds obtained through conventional digital backpropagation (DBP). Through simulations, higher information rates were achieved with SDBP, which can be explained by the ability of SDBP to account for nonlinear signal–noise interactions.
    Comment: 8 pages, 5 figures, accepted for publication in the Journal of Lightwave Technology.
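
    As a hedged illustration of the backward-channel approach: an auxiliary posterior q(x|y) produced by the receiver (for example, the per-symbol distributions output by SDBP) gives the achievable-rate lower bound I(X;Y) >= H(X) + E[log2 q(x|y)], which can be estimated by a sample average. The function and argument names below are assumptions for illustration.

```python
import numpy as np

def mi_lower_bound_backward(x_idx, post, const_probs):
    """Lower-bound MI via an auxiliary backward channel:
    I(X;Y) >= H(X) + E[log2 q(x|y)], estimated over transmitted symbols.
    post[k, i] is the auxiliary posterior q(x = i | y_k); x_idx[k] is the
    index of the symbol actually transmitted at time k."""
    h_x = -np.sum(const_probs * np.log2(const_probs))            # source entropy [bits]
    q_true = np.maximum(post[np.arange(len(x_idx)), x_idx], 1e-300)
    return h_x + np.mean(np.log2(q_true))                        # bits per symbol
```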

    Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective

    On metrics of density and power efficiency, neuromorphic technologies have the potential to surpass mainstream computing technologies in tasks where real-time functionality, adaptability, and autonomy are essential. While algorithmic advances in neuromorphic computing are proceeding successfully, the potential of memristors to improve neuromorphic computing has not yet borne fruit, primarily because they are often used as drop-in replacements for conventional memory. However, interdisciplinary approaches anchored in machine learning theory suggest that multifactor plasticity rules matching neural and synaptic dynamics to the device capabilities can take better advantage of memristor dynamics and their stochasticity. Furthermore, such plasticity rules generally show much higher performance than classical Spike Timing Dependent Plasticity (STDP) rules. This chapter reviews recent developments in learning with spiking neural network models and their possible implementation with memristor-based hardware.
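
    For reference, the sketch below shows the classical pair-based STDP rule that the chapter contrasts with multifactor plasticity rules: exponentially decaying pre- and post-synaptic traces gate potentiation and depression. Time constants, amplitudes, and the weight-clipping range are illustrative assumptions.

```python
import numpy as np

def stdp_step(w, pre_spikes, post_spikes, pre_trace, post_trace,
              a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0, w_max=1.0):
    """One time step of pair-based STDP. w[i, j] is the synapse from
    pre-neuron j to post-neuron i; spike vectors are 0/1 arrays."""
    pre_trace += dt * (-pre_trace / tau) + pre_spikes       # decaying pre trace
    post_trace += dt * (-post_trace / tau) + post_spikes    # decaying post trace
    dw = (a_plus * np.outer(post_spikes, pre_trace)         # post after pre: potentiate
          - a_minus * np.outer(post_trace, pre_spikes))     # pre after post: depress
    w = np.clip(w + dw, 0.0, w_max)
    return w, pre_trace, post_trace
```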

    Equilibrium Propagation: Bridging the Gap Between Energy-Based Models and Backpropagation

    We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) towards a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged towards their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal 'back-propagated' during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function when the synaptic update corresponds to a standard form of spike-timing-dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not.
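
    Below is a compact, single-example sketch of the two-phase procedure described above, for a network with one hidden layer and a hard-sigmoid nonlinearity: a free relaxation to a fixed point, a weakly clamped relaxation with the outputs nudged towards the target, and a contrastive Hebbian-style weight update. Step sizes, relaxation lengths, and the nudging strength beta are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def rho(s):                                   # hard-sigmoid activation
    return np.clip(s, 0.0, 1.0)

def drho(s):                                  # its (sub)gradient
    return ((s > 0) & (s < 1)).astype(float)

def relax(x, h, y, W1, W2, target=None, beta=0.0, steps=200, eps=0.05):
    """Leaky-integrator relaxation towards a fixed point of the (nudged) energy."""
    for _ in range(steps):
        dh = -h + drho(h) * (rho(x) @ W1 + rho(y) @ W2.T)
        dy = -y + drho(y) * (rho(h) @ W2)
        if target is not None:
            dy += beta * (target - y)         # nudge outputs towards the target
        h, y = h + eps * dh, y + eps * dy
    return h, y

def eqprop_update(x, target, h, y, W1, W2, beta=0.5, lr=0.1):
    """Two-phase Equilibrium Propagation update for one training example."""
    h0, y0 = relax(x, h, y, W1, W2)                      # free phase
    hb, yb = relax(x, h0, y0, W1, W2, target, beta)      # weakly clamped phase
    # contrastive Hebbian-style estimate of the objective gradient
    W1 += lr / beta * (np.outer(rho(x), rho(hb)) - np.outer(rho(x), rho(h0)))
    W2 += lr / beta * (np.outer(rho(hb), rho(yb)) - np.outer(rho(h0), rho(y0)))
    return W1, W2, h0, y0
```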

    Hardware-efficient on-line learning through pipelined truncated-error backpropagation in binary-state networks

    Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and memory requirements for the pipelining are drastically reduced. A further reduction in addition operations, owing to the sparsity of the forward neural and backpropagating error signal paths, contributes to a highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan-6 FPGA interfacing with an external 1 Gb DDR2 DRAM, which shows small degradation in test error performance compared to an equivalently sized binary ANN trained off-line using standard backpropagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks.
    Comment: Now also considers 0/1 binary activations; memory access statistics reported.
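
    The sketch below illustrates only the arithmetic simplification the abstract describes, namely binary (+1/-1) hidden states in the forward pass and ternary, truncated errors in the backward pass, so that weight updates reduce to additions and subtractions; the pipelining itself is omitted. The straight-through hidden-error estimate and the truncation threshold are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

def binarize(x):
    return np.where(x >= 0, 1.0, -1.0)            # binary (+1/-1) neuron states

def ternarize(err, thresh):
    """Truncate errors to {-1, 0, +1}: small errors are dropped, so the
    resulting weight updates need only additions and subtractions."""
    return np.sign(err) * (np.abs(err) > thresh)

def train_step(x, target, W1, W2, lr=1e-3, thresh=0.05):
    # forward pass with binary hidden states
    a1 = x @ W1
    h = binarize(a1)
    out = h @ W2
    # backward pass with truncated, ternary errors
    e_out = ternarize(out - target, thresh)
    e_hid = ternarize((e_out @ W2.T) * (np.abs(a1) < 1.0), thresh)  # straight-through gate
    W2 -= lr * np.outer(h, e_out)
    W1 -= lr * np.outer(x, e_hid)
    return W1, W2
```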