135 research outputs found

    When Deep Learning Meets Polyhedral Theory: A Survey

    In the past decade, deep learning became the prevalent methodology for predictive modeling thanks to the remarkable accuracy of deep neural networks in tasks such as computer vision and natural language processing. Meanwhile, the structure of neural networks converged back to simpler representations based on piecewise constant and piecewise linear functions such as the Rectified Linear Unit (ReLU), which became the most commonly used type of activation function in neural networks. That made certain types of network structure, such as the typical fully-connected feedforward neural network, amenable to analysis through polyhedral theory and to the application of methodologies such as Linear Programming (LP) and Mixed-Integer Linear Programming (MILP) for a variety of purposes. In this paper, we survey the main topics emerging from this fast-paced area of work, which bring a fresh perspective to understanding neural networks in more detail as well as to applying linear optimization techniques to train, verify, and reduce the size of such networks.
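
    As an illustration of the MILP connection above (the standard big-M encoding of a single ReLU unit, a textbook formulation rather than the survey's own notation): for a pre-activation $a = w^\top x + b$ with known bounds $L \le a \le U$, $L < 0 < U$, the output $y = \max(0, a)$ is modeled exactly with one binary variable $z \in \{0,1\}$ and the linear constraints

        $$y \ge a, \qquad y \ge 0, \qquad y \le a - L(1 - z), \qquad y \le U z, \qquad z \in \{0,1\}.$$

    Setting $z = 1$ forces $y = a$ (active unit) and $z = 0$ forces $y = 0$ (inactive unit); applying one such encoding per neuron turns a trained network into a mixed-integer linear program that off-the-shelf solvers can use for verification or compression.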

    Invariance and Invertibility in Deep Neural Networks

    Machine learning is concerned with computer systems that learn from data instead of being explicitly programmed to solve a particular task. One of the main approaches behind recent advances in machine learning involves neural networks with a large number of layers, often referred to as deep learning. In this dissertation, we study how to equip deep neural networks with two useful properties: invariance and invertibility. The first part of our work focuses on constructing neural networks that are invariant to certain transformations of the input, that is, some outputs of the network stay the same even if the input is altered. Furthermore, we want the network to learn the appropriate invariance from training data, instead of being explicitly constructed to achieve invariance to a pre-defined transformation type. The second part of our work centers on two recently proposed types of deep networks: neural ordinary differential equations and invertible residual networks. These networks are invertible, that is, we can reconstruct the input from the output. However, there are some classes of functions that these networks cannot approximate. We show how to modify these two architectures to provably equip them with the capacity to approximate any smooth invertible function.
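
    A sketch of the invertibility property mentioned above for invertible residual networks (the contractive map g and the fixed-point inversion below are a generic illustration, not the dissertation's construction): a block $y = x + g(x)$ can be inverted by iterating $x \leftarrow y - g(x)$ whenever $g$ has Lipschitz constant below one.

        import numpy as np

        def invert_residual_block(g, y, n_iter=50):
            # Recover x from y = x + g(x) by fixed-point iteration x <- y - g(x).
            # The iteration converges when g is a contraction (Lipschitz constant < 1).
            x = y.copy()
            for _ in range(n_iter):
                x = y - g(x)
            return x

        # Illustrative contractive map: tanh after a linear map rescaled to spectral norm 0.5.
        rng = np.random.default_rng(0)
        W = rng.standard_normal((4, 4))
        W *= 0.5 / np.linalg.norm(W, 2)
        g = lambda x: np.tanh(x @ W.T)

        x_true = rng.standard_normal(4)
        y = x_true + g(x_true)
        print(np.allclose(invert_residual_block(g, y), x_true))  # True: the block is inverted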

    Generalized non-autonomous Cohen-Grossberg neural network model

    In the present paper, we investigate both the global exponential stability and the existence of a periodic solution of a general differential equation with unbounded distributed delays. The main stability criterion depends on the dominance of the non-delay terms over the delay terms. The criterion for the existence of a periodic solution is obtained with the application of the coincidence degree theorem. We use the main results to obtain criteria for the existence and global exponential stability of periodic solutions of a generalized higher-order periodic Cohen-Grossberg neural network model with discrete time-varying delays and infinite distributed delays. Additionally, we provide a comparison with results in the literature and a numerical simulation to illustrate the effectiveness of some of our results.
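
    For orientation, a representative first-order Cohen-Grossberg model with discrete time-varying and infinite distributed delays takes the form below; the notation is generic and the paper's model is more general (higher-order terms, non-autonomous coefficients):

        $$x_i'(t) = -a_i(x_i(t)) \Big[ b_i(x_i(t)) - \sum_{j=1}^{n} c_{ij}(t)\, f_j\big(x_j(t - \tau_{ij}(t))\big) - \sum_{j=1}^{n} d_{ij}(t) \int_{0}^{\infty} K_{ij}(s)\, g_j\big(x_j(t - s)\big)\, ds - I_i(t) \Big], \quad i = 1, \dots, n.$$

    Here $a_i$ are the amplification functions, $b_i$ the decay functions, $f_j$ and $g_j$ activation functions, $\tau_{ij}(t)$ bounded time-varying delays, and $K_{ij}$ the delay kernels; the stability criterion in the abstract asks, roughly, that the decay induced by $b_i$ dominate the delayed interaction terms.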

    Deep Learning for Stable Monotone Dynamical Systems

    Monotone systems, originating from real-world (e.g., biological or chemical) applications, are a class of dynamical systems that preserve a partial order of system states over time. In this work, we introduce a method based on feedforward neural networks (FNNs) to learn the dynamics of unknown stable nonlinear monotone systems. We propose the use of nonnegative neural networks and batch normalization, which together enable the FNNs to capture the monotonicity conditions without reducing expressiveness. To concurrently ensure stability during training, we adopt an alternating learning method that simultaneously learns the system dynamics and a corresponding Lyapunov function, while exploiting the monotonicity of the system. The combination of the monotonicity and stability constraints ensures that the learned dynamics preserve both properties, while significantly reducing learning errors. Finally, our techniques are evaluated on two complex biological and chemical systems.
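
    A minimal sketch of the monotonicity mechanism mentioned above, under the assumption that nonnegative weights combined with monotone activations yield an order-preserving map (the softplus reparameterization and the tiny NumPy network below are illustrative, not the paper's exact architecture; batch normalization is omitted):

        import numpy as np

        def softplus(x):
            # Numerically stable softplus: log(1 + exp(x)), always positive.
            return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

        class NonnegativeMLP:
            # Illustrative MLP whose weights are forced nonnegative via softplus;
            # with monotone activations the whole map is order-preserving in every input.
            def __init__(self, sizes, seed=0):
                rng = np.random.default_rng(seed)
                self.raw = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
                self.bias = [np.zeros(m) for m in sizes[1:]]

            def __call__(self, x):
                for k, (R, b) in enumerate(zip(self.raw, self.bias)):
                    W = softplus(R)                  # elementwise nonnegative weights
                    x = W @ x + b
                    if k < len(self.raw) - 1:
                        x = np.tanh(x)               # monotone activation
                return x

        f = NonnegativeMLP([3, 16, 3])
        x, y = np.zeros(3), np.full(3, 0.1)          # x <= y componentwise
        print(np.all(f(x) <= f(y)))                  # True: the order is preserved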

    Structure-preserving deep learning

    Over the past few years, deep learning has risen to the foreground as a topic of massive interest, mainly as a result of successes obtained in solving large-scale image processing tasks. There are multiple challenging mathematical problems involved in applying deep learning: most deep learning methods require the solution of hard optimisation problems, and a good understanding of the tradeoff between computational effort, amount of data and model complexity is required to successfully design a deep learning approach for a given problem. A large amount of progress made in deep learning has been based on heuristic explorations, but there is a growing effort to mathematically understand the structure in existing deep learning methods and to systematically design new methods that preserve certain types of structure. In this article, we review a number of these directions: some deep neural networks can be understood as discretisations of dynamical systems, neural networks can be designed to have desirable properties such as invertibility or group equivariance, and new algorithmic frameworks based on conformal Hamiltonian systems and Riemannian manifolds have been proposed to solve the optimisation problems. We conclude our review of each of these topics by discussing some open problems that we consider to be interesting directions for future research.
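
    The first of these directions can be made concrete in generic notation (the step size $h$ and layer map $f_{\theta_k}$ are ours, not the article's): a residual block

        $$x_{k+1} = x_k + h\, f_{\theta_k}(x_k)$$

    is one forward Euler step for the ordinary differential equation $\dot{x}(t) = f_{\theta(t)}(x(t))$, so a deep residual network can be read as a discretised dynamical system, and structure-preserving integrators (for example symplectic or conformal Hamiltonian schemes) suggest correspondingly structured architectures and training algorithms.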

    Recent Advances and Applications of Fractional-Order Neural Networks

    This paper focuses on the growth, development, and future of various forms of fractional-order neural networks. Multiple advances in structure, learning algorithms, and methods have been critically investigated and summarized. This also includes recent trends in the dynamics of various fractional-order neural networks. The forms of fractional-order neural networks considered in this study are Hopfield, cellular, memristive, complex-valued, and quaternion-valued networks. Further, the application of fractional-order neural networks in computational fields such as system identification, control, optimization, and stability analysis has been critically analyzed and discussed.
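
    As a representative member of the model class surveyed above (a Caputo fractional-order Hopfield network in generic notation, not taken from the paper):

        $$D^{\alpha} x_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij}\, f_j(x_j(t)) + I_i, \qquad 0 < \alpha < 1, \quad i = 1, \dots, n,$$

    where $D^{\alpha}$ is the Caputo fractional derivative of order $\alpha$, $c_i > 0$ are self-decay rates, $a_{ij}$ connection weights, $f_j$ activation functions and $I_i$ external inputs. Taking $\alpha = 1$ recovers the classical integer-order Hopfield dynamics, while $\alpha < 1$ introduces the memory (hereditary) effects that motivate fractional-order models.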

    Connecting mathematical models for image processing and neural networks

    This thesis deals with the connections between mathematical models for image processing and deep learning. While data-driven deep learning models such as neural networks are flexible and well performing, they are often used as a black box. This makes it hard to provide theoretical model guarantees and scientific insights. On the other hand, more traditional, model-driven approaches such as diffusion, wavelet shrinkage, and variational models offer a rich set of mathematical foundations. Our goal is to transfer these foundations to neural networks. To this end, we pursue three strategies. First, we design trainable variants of traditional models and reduce their parameter set after training to obtain transparent and adaptive models. Moreover, we investigate the architectural design of numerical solvers for partial differential equations and translate them into building blocks of popular neural network architectures. This yields criteria for stable networks and inspires novel design concepts. Lastly, we present novel hybrid models for inpainting that rely on our theoretical findings. These strategies provide three ways for combining the best of the two worlds of model- and data-driven approaches. Our work contributes to the overarching goal of closing the gap between these worlds that still exists in performance and understanding.
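
    One instance of the solver-to-architecture translation described above, written in generic form (the 1-D grid, step sizes, and boundary handling below are illustrative, not the thesis's models): an explicit step of linear diffusion is a residual block whose "convolution" is the discrete Laplacian stencil, and the classical stability bound on the time step becomes a constraint on the block's weights.

        import numpy as np

        def diffusion_step(u, tau=0.2, h=1.0):
            # One explicit (forward Euler) step of 1-D linear diffusion u_t = u_xx,
            # written as a residual block: u <- u + tau * (L u), where L applies the
            # discrete Laplacian stencil [1, -2, 1] / h**2 with reflecting boundaries.
            up = np.pad(u, 1, mode="edge")
            lap = (up[:-2] - 2.0 * up[1:-1] + up[2:]) / h**2
            return u + tau * lap

        u = np.zeros(64)
        u[32] = 1.0                        # a single spike that gets smoothed out
        for _ in range(100):               # stable because tau <= h**2 / 2
            u = diffusion_step(u)
        print(round(u.sum(), 6))           # prints 1.0: the scheme conserves mass

    Replacing the fixed stencil by trainable weights subject to the same stability bound gives the kind of stable, interpretable network building block the abstract alludes to.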
