
    Expressivity of Spiking Neural Networks

    This article studies the expressive power of spiking neural networks in which information is encoded in the firing times of neurons. The implementation of spiking neural networks on neuromorphic hardware is a promising choice for future energy-efficient AI applications. However, very few results compare the computational power of spiking neurons to that of arbitrary threshold circuits and sigmoidal neurons. It has also been shown that a network of spiking neurons is capable of approximating any continuous function. Using the Spike Response Model as the mathematical model of a spiking neuron and assuming a linear response function, we prove that the mapping generated by a network of spiking neurons is continuous piecewise linear. We also show that a spiking neural network can emulate the output of any multi-layer (ReLU) neural network. Furthermore, we show that the maximum number of linear regions generated by a spiking neuron scales exponentially with the input dimension, a characteristic that distinguishes it significantly from an artificial (ReLU) neuron. Our results further extend the understanding of the approximation properties of spiking neural networks and open up new avenues where spiking neural networks can be deployed instead of artificial neural networks without any performance loss.
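
    A minimal worked sketch (not taken from the article) of why firing times are piecewise linear under these assumptions: with the Spike Response Model and a linear response kernel, the membrane potential from input spikes at times t_j with weights w_j is u(t) = sum_j w_j (t - t_j) for t >= t_j, and the output spike time t_f is the first time u(t) reaches the threshold theta. On any region of input space where the set C of causal inputs (those with t_j < t_f) is fixed,

        \[
          \sum_{j \in C} w_j \,(t_f - t_j) \;=\; \theta
          \qquad\Longrightarrow\qquad
          t_f \;=\; \frac{\theta + \sum_{j \in C} w_j\, t_j}{\sum_{j \in C} w_j},
        \]

    so t_f is an affine function of the input firing times on each such region, and the network's input-output map is continuous piecewise linear.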

    An on-line training radial basis function neural network for optimum operation of the UPFC

    The concept of Flexible A.C. Transmission Systems (FACTS) technology was developed to enhance the performance of electric power networks (in both steady state and transient state) and to make better use of existing power transmission facilities. Continuous improvements in the power ratings and switching performance of power electronic devices, together with advances in circuit design and control techniques, are making FACTS concepts and devices more commercially attractive. The Unified Power Flow Controller (UPFC) is one of the main FACTS devices and has wide implications for power transmission and distribution systems. The purpose of this paper is to explore the use of a Radial Basis Function Neural Network (RBFNN) to control the operation of the UPFC in order to improve its dynamic performance. The performance of the proposed controller compares favourably with the conventional PI controller and an off-line trained controller. The simple structure of the proposed controller reduces the computational requirements and emphasizes its suitability for on-line operation. Real-time implementation of the controller is achieved using a dSPACE DS1103 control and data acquisition board. Simulation and experimental results are presented to demonstrate the robustness of the proposed controller against changes in the transmission system operating conditions.
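
    As an illustration only (this is not the controller from the paper; the centres, width, and learning rate below are placeholder choices), an on-line trained RBF network can update its output weights at every sampling instant with a simple gradient (LMS) rule on the output error:

        # Hypothetical sketch of an on-line trained radial basis function network.
        import numpy as np

        class OnlineRBFN:
            def __init__(self, centres, width, n_out, lr=0.05):
                self.centres = np.asarray(centres)        # (n_centres, n_in) Gaussian centres
                self.width = width                        # shared Gaussian width
                self.W = np.zeros((len(centres), n_out))  # linear output weights
                self.lr = lr                              # learning rate

            def _phi(self, x):
                # Gaussian hidden-layer activations for input x
                d2 = np.sum((self.centres - x) ** 2, axis=1)
                return np.exp(-d2 / (2.0 * self.width ** 2))

            def predict(self, x):
                return self._phi(x) @ self.W

            def update(self, x, target):
                # One LMS step on the squared output error, done at every sample.
                phi = self._phi(x)
                err = target - phi @ self.W
                self.W += self.lr * np.outer(phi, err)
                return err

        # Toy usage: adapt on-line to track a scalar reference.
        rbf = OnlineRBFN(centres=np.linspace(-1, 1, 9)[:, None], width=0.3, n_out=1)
        for k in range(200):
            x = np.array([np.sin(0.05 * k)])
            rbf.update(x, target=np.array([0.5 * x[0]]))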

    Neural Koopman prior for data assimilation

    With the increasing availability of large-scale datasets, computational power, and tools such as automatic differentiation and expressive neural network architectures, sequential data are now often treated in a data-driven way, with a dynamical model trained from the observations. While neural networks are often seen as uninterpretable black-box architectures, they can still benefit from physical priors on the data and from mathematical knowledge. In this paper, we use a neural network architecture that leverages the long-known Koopman operator theory to embed dynamical systems in latent spaces where their dynamics can be described linearly, enabling a number of appealing features. We introduce methods that make it possible to train such a model for long-term continuous reconstruction, even in difficult contexts where the data come as irregularly sampled time series. The potential for self-supervised learning is also demonstrated, as we show the promising use of trained dynamical models as priors for variational data assimilation techniques, with applications to, e.g., time series interpolation and forecasting.
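
    A minimal sketch of the general idea, assuming a PyTorch-style implementation (the paper's architecture, losses, and assimilation scheme are not reproduced, and the module names below are hypothetical): encode the state into a latent space, advance it there with a learned linear operator, and decode back.

        # Hypothetical Koopman-style autoencoder: linear dynamics in latent space.
        import torch
        import torch.nn as nn

        class KoopmanAE(nn.Module):
            def __init__(self, state_dim, latent_dim=16):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                             nn.Linear(64, latent_dim))
                self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                             nn.Linear(64, state_dim))
                # Learned Koopman matrix K: z_{t+1} = K z_t (no bias, purely linear)
                self.K = nn.Linear(latent_dim, latent_dim, bias=False)

            def forward(self, x0, steps):
                z = self.encoder(x0)
                preds = []
                for _ in range(steps):
                    z = self.K(z)                  # advance linearly in latent space
                    preds.append(self.decoder(z))  # decode back to the observed state
                return torch.stack(preds, dim=1)

        model = KoopmanAE(state_dim=3)
        x0 = torch.randn(8, 3)                     # batch of initial states
        trajectory = model(x0, steps=10)           # shape (8, 10, 3)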

    Optimal and Robust Neural Network Controllers for Proximal Spacecraft Maneuvers

    Recent successes in machine learning research, buoyed by advances in computational power, have revitalized interest in neural networks and demonstrated their potential in solving complex control problems. In this research, the reinforcement learning framework is combined with traditional direct shooting methods to generate optimal proximal spacecraft maneuvers. Open-loop and closed-loop feedback controllers, parameterized by multi-layer feed-forward artificial neural networks, are developed with evolutionary and gradient-based optimization algorithms. Using Clohessy-Wiltshire relative motion dynamics, terminally constrained, fixed-time, fuel-optimal trajectories are solved for intercept, rendezvous, and natural motion circumnavigation transfer maneuvers using three different thrust models: impulsive, finite, and continuous. In addition to optimality, the neurocontrollers' robustness to parametric uncertainty and bounded initial conditions is assessed. By bridging the gap between existing optimal and nonlinear control techniques, this research demonstrates that neurocontrollers offer a flexible and robust alternative approach to the solution of complex control problems in the space domain and present a promising path forward to more capable, autonomous spacecraft.
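
    For context (notation is the common textbook one, not necessarily the thesis's), the Clohessy-Wiltshire equations describe the relative motion used here, for a chaser near a target on a circular orbit with mean motion n and control acceleration (a_x, a_y, a_z):

        % Clohessy-Wiltshire (Hill) equations: x radial, y along-track, z cross-track.
        \begin{align}
          \ddot{x} &= 3n^{2}x + 2n\dot{y} + a_x, \\
          \ddot{y} &= -2n\dot{x} + a_y, \\
          \ddot{z} &= -n^{2}z + a_z.
        \end{align}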

    Machine learning: statistical physics based theory and smart industry applications

    The increasing computational power and the availability of data have made it possible to train ever-bigger artificial neural networks. These so-called deep neural networks have been used for impressive applications, like advanced driver assistance and support in medical diagnoses. However, various vulnerabilities have been revealed, and many open questions remain concerning the workings of neural networks. Theoretical analyses are therefore essential for further progress. One current question is: why do networks with Rectified Linear Unit (ReLU) activation seemingly perform better than networks with sigmoidal activation? We contribute to the answer to this question by comparing ReLU networks with sigmoidal networks in diverse theoretical learning scenarios. Instead of analysing specific datasets, we use theoretical modelling based on methods from statistical physics, which yields the typical learning behaviour for chosen model scenarios. We analyse the learning behaviour both on a fixed dataset and on a data stream in the presence of a changing task. The emphasis is on the analysis of the network's transition to a state in which specific concepts have been learnt. We find significant benefits of ReLU networks: they exhibit continuous increases of their performance and adapt more quickly to changing tasks. In the second part of the thesis we treat applications of machine learning: we design a quick quality-control method for material in a production line and study the relationship with product faults. Furthermore, we introduce a methodology for the interpretable classification of time series data.
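
    As a toy illustration only (not the analysis from the thesis; the teacher-student setup, learning rate, and switch point below are arbitrary choices), on-line learning on a data stream with a mid-stream task change can be simulated and the two activations compared:

        # Hypothetical toy: on-line (one example per step) learning from a teacher
        # that is swapped halfway through, comparing ReLU and sigmoidal students.
        import numpy as np

        def g(h, kind):
            # ReLU or sigmoidal (tanh) activation
            return np.maximum(h, 0.0) if kind == "relu" else np.tanh(h)

        rng = np.random.default_rng(0)
        N, lr, steps = 100, 0.5, 20000
        teacher_a = rng.standard_normal(N) / np.sqrt(N)
        teacher_b = rng.standard_normal(N) / np.sqrt(N)

        for kind in ("relu", "sigmoid"):
            w = rng.standard_normal(N) / np.sqrt(N)               # student weights
            err = 0.0
            for t in range(steps):
                teacher = teacher_a if t < steps // 2 else teacher_b  # task change
                x = rng.standard_normal(N)
                y = g(teacher @ x, kind)                          # label from current teacher
                h = w @ x
                pred = g(h, kind)
                dg = float(h > 0) if kind == "relu" else 1.0 - pred ** 2
                w -= lr / N * (pred - y) * dg * x                 # one on-line SGD step
                err = 0.99 * err + 0.01 * (pred - y) ** 2         # running squared error
            print(kind, "running error at the end:", err)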

    Learning without Data: Physics-Informed Neural Networks for Fast Time-Domain Simulation

    In order to drastically reduce the heavy computational burden associated with time-domain simulations, this paper introduces a Physics-Informed Neural Network (PINN) to directly learn the solutions of power system dynamics. In contrast to the limitations of classical model order reduction approaches, commonly used to accelerate time-domain simulations, PINNs can universally approximate any continuous function to an arbitrary degree of accuracy. One of the novelties of this paper is that we avoid the need for any training data. We achieve this by incorporating the governing differential equations and an implicit Runge-Kutta (RK) integration scheme directly into the training process of the PINN; through this approach, PINNs can predict the trajectory of a dynamical power system at any discrete time step. The resulting Runge-Kutta-based physics-informed neural networks (RK-PINNs) can yield up to 100 times faster evaluations of the dynamics compared to standard time-domain simulations. We demonstrate the methodology on a single-machine infinite-bus system governed by the swing equation and show that RK-PINNs can accurately and quickly predict the solution trajectories. Comment: 6 pages, 6 figures, submitted to the IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids 2021 (SmartGridComm).
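
    For reference (symbols follow common notation and are not necessarily the paper's), the single-machine infinite-bus dynamics that the RK-PINN learns to integrate are governed by the swing equation, with rotor angle delta, inertia M, damping D, mechanical power P_m, and maximum electrical power transfer P_max:

        \[
          M\,\ddot{\delta} + D\,\dot{\delta} \;=\; P_m - P_{\max}\,\sin\delta .
        \]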

    A Survey on Continuous Time Computations

    We provide an overview of theories of continuous time computation. These theories allow us to understand both the hardness of questions related to continuous time dynamical systems and the computational power of continuous time analog models. We survey the existing models, summarize the main results, and point to relevant references in the literature.