
    Phase Harmonic Correlations and Convolutional Neural Networks

    A major issue in harmonic analysis is to capture the phase dependence of frequency representations, which carries important signal properties. It seems that convolutional neural networks have found a way. Over time-series and images, convolutional networks often learn a first layer of filters that are well localized in the frequency domain, with different phases. We show that a rectifier then acts as a filter on the phase of the resulting coefficients. It computes signal descriptors which are local in space, frequency and phase. The non-linear phase filter becomes a multiplicative operator over phase harmonics computed with a Fourier transform along the phase. We prove that it defines a bi-Lipschitz and invertible representation. The correlations of phase harmonic coefficients characterise coherent structures from their phase dependence across frequencies. For wavelet filters, we show numerically that signals having sparse wavelet coefficients can be recovered from a few phase harmonic correlations, which provide a compressive representation. (Comment: 26 pages, 8 figures)
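    As a hedged illustration of the central operator (not the authors' code; all names below are ours): the phase harmonic of a complex coefficient z is [z]^k = |z| exp(ik arg z), which keeps the modulus and multiplies the phase by the integer k. A minimal NumPy sketch of a phase harmonic correlation between two frequency channels:

        import numpy as np

        def phase_harmonic(z, k):
            # [z]^k = |z| * exp(i k arg z): keep the modulus, multiply the phase by k.
            return np.abs(z) * np.exp(1j * k * np.angle(z))

        def phase_harmonic_correlation(z1, z2, k1, k2):
            # Correlate phase harmonics of two channels; choosing k1, k2 to match
            # the channels' frequency ratio aligns their phases before averaging.
            return np.mean(phase_harmonic(z1, k1) * np.conj(phase_harmonic(z2, k2)))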

    Solid Spherical Energy (SSE) CNNs for Efficient 3D Medical Image Analysis

    Invariance to local rotation, as distinct from the global rotation of images and objects, is required in various texture analysis problems. It has led to several breakthrough methods such as local binary patterns, maximum response and steerable filterbanks. In particular, textures in medical images often exhibit local structures at arbitrary orientations. Locally Rotation Invariant (LRI) Convolutional Neural Networks (CNNs) were recently proposed, using 3D steerable filters to combine LRI with Directional Sensitivity (DS). The steerability avoids the expensive cost of convolutions with rotated kernels and comes with a parametric representation that drastically reduces the number of trainable parameters. Yet, the potential bottleneck (memory and computation) of this approach lies in the necessity to recombine responses for a set of predefined discretized orientations. In this paper, we propose to calculate invariants from the responses to the set of spherical harmonics projected onto 3D kernels, in the form of a lightweight Solid Spherical Energy (SSE) CNN. It offers a compromise between the high kernel specificity of the LRI-CNN and a low memory/operations requirement. The computational gain is evaluated on 3D synthetic and pulmonary nodule classification experiments. The performance of the proposed approach is compared with steerable LRI-CNNs and standard 3D CNNs, showing results competitive with the state of the art.
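    A hedged sketch of the invariant that such energy features rely on (illustrative names, not the paper's code): a 3D rotation mixes the spherical-harmonic orders m within a degree n unitarily, so the per-degree energy is unchanged by local rotations:

        import numpy as np

        def solid_spherical_energy(coeffs_by_degree):
            # coeffs_by_degree[n] holds the complex responses c_{n,m}, m = -n..n,
            # of a kernel projected onto degree-n spherical harmonics. Rotations
            # act unitarily within each degree, so E_n = sum_m |c_{n,m}|^2 is a
            # locally rotation-invariant feature.
            return np.array([np.sum(np.abs(c) ** 2) for c in coeffs_by_degree])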

    Learning the shape of protein micro-environments with a holographic convolutional neural network

    Proteins play a central role in biology, from immune recognition to brain activity. While major advances in machine learning have improved our ability to predict protein structure from sequence, determining protein function from structure remains a major challenge. Here, we introduce the Holographic Convolutional Neural Network (H-CNN) for proteins, a physically motivated machine learning approach to model amino acid preferences in protein structures. H-CNN reflects physical interactions in a protein structure and recapitulates the functional information stored in evolutionary data. H-CNN accurately predicts the impact of mutations on protein function, including stability and binding of protein complexes. Our interpretable computational model for protein structure-function maps could guide the design of novel proteins with desired function.
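    As a hedged sketch of the kind of rotation-aware encoding "holographic" models use (the actual H-CNN featurization differs in detail; everything below is our illustration), an atomic neighborhood can be projected onto spherical harmonics so that rotations act on the coefficients degree by degree:

        import numpy as np
        from scipy.special import sph_harm

        def neighborhood_coefficients(xyz, l_max=4):
            # xyz: (N, 3) offsets of neighborhood atoms from a central residue.
            r = np.linalg.norm(xyz, axis=1)
            theta = np.mod(np.arctan2(xyz[:, 1], xyz[:, 0]), 2 * np.pi)        # azimuth
            phi = np.arccos(np.clip(xyz[:, 2] / np.maximum(r, 1e-12), -1, 1))  # polar
            # One complex coefficient per degree l and order m; a rotation of the
            # neighborhood mixes orders m within each degree l unitarily.
            return {(l, m): np.sum(sph_harm(m, l, theta, phi))
                    for l in range(l_max + 1) for m in range(-l, l + 1)}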

    Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey

    While Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings, they are affected by some crucial weaknesses. On the one hand, DNNs depend on exploiting a vast amount of training data, whose labeling process is time-consuming and expensive. On the other hand, DNNs are often treated as black-box systems, which complicates their evaluation and validation. Both problems can be mitigated by incorporating prior knowledge into the DNN. One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about the geometrical transformations under which the problem is symmetric. This promises increased data efficiency and filter responses that are more easily interpretable. In this survey, we give a concise overview of different approaches to incorporating geometrical prior knowledge into DNNs. Additionally, we connect those methods to the field of 3D object detection for autonomous driving, where we expect these methods to yield promising results. (Comment: Survey Paper)
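    A hedged sketch of one of the simplest such priors (our illustration, not taken from the survey): share a filter's weights across the four 90-degree rotations (the C4 group) and pool over orientations, so the pooled response is invariant to 90-degree rotations of the input:

        import numpy as np
        from scipy.ndimage import convolve

        def c4_pooled_response(image, kernel):
            # Convolve with all four 90-degree rotations of the same kernel,
            # then max-pool over orientations: rotating the image by 90 degrees
            # permutes the four responses, so the pooled map only rotates with it.
            responses = [convolve(image, np.rot90(kernel, k)) for k in range(4)]
            return np.max(responses, axis=0)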

    SE(3)-equivariant Graph Neural Networks for Learning Glassy Liquids Representations

    Within the glassy liquids community, the use of Machine Learning (ML) to model particles' static structure in order to predict their future dynamics is currently a hot topic. The current state of the art consists of Graph Neural Networks (GNNs) (Bapst 2020) which, besides having great expressive power, are heavy models with numerous parameters and lack interpretability. Inspired by recent advances (Thomas 2018), we build a GNN that learns a robust representation of the glass's static structure by constraining it to preserve roto-translation (SE(3)) equivariance. We show that this constraint not only significantly improves the predictive power but also allows us to reduce the number of parameters while improving interpretability. Furthermore, we relate our learned equivariant features to well-known invariant expert features, which are easily expressible with a single layer of our network. (Comment: 8 pages, 7 figures plus appendix)
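    A hedged illustration of the constraint (our sketch, not the paper's architecture): features built from relative distances between particles are unchanged by any rotation or translation of the configuration, which is the invariant counterpart of SE(3) equivariance:

        import numpy as np

        def invariant_edge_features(positions, edges):
            # Pairwise distances are preserved by every roto-translation,
            # so messages built from them are SE(3)-invariant by construction.
            src, dst = edges
            return np.linalg.norm(positions[dst] - positions[src], axis=1)

        # Quick check: a random rotation plus a shift leaves the features unchanged.
        rng = np.random.default_rng(0)
        pos = rng.normal(size=(10, 3))
        edges = (np.arange(9), np.arange(1, 10))
        Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
        assert np.allclose(invariant_edge_features(pos, edges),
                           invariant_edge_features(pos @ Q.T + 1.0, edges))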

    Synergies between Numerical Methods for Kinetic Equations and Neural Networks

    The overarching theme of this work is the efficient computation of large-scale systems. Here we deal with two types of mathematical challenges, which are quite different at first glance but offer similar opportunities and challenges upon closer examination. Physical descriptions of phenomena and their mathematical modeling are performed on diverse scales, ranging from nano-scale interactions of single atoms to the macroscopic dynamics of the earth's atmosphere. We consider such systems of interacting particles and explore methods to simulate them efficiently and accurately, with a focus on the kinetic and macroscopic description of interacting particle systems. Macroscopic governing equations describe the evolution of a system in time and space, whereas the more fine-grained kinetic description additionally takes the particle velocity into account. Discretizing kinetic equations that depend on space, time, and velocity variables is challenging due to the need to preserve physical solution bounds (e.g. positivity), avoid spurious artifacts, and remain computationally efficient. In the pursuit of overcoming the challenge of computability in both kinetic and multi-scale modeling, a wide variety of approximative methods have been established in the realm of reduced-order modeling, surrogate modeling, and model compression. For kinetic models, this may manifest in hybrid numerical solvers that switch between macroscopic and mesoscopic simulation, asymptotic-preserving schemes that bridge the gap between both physical resolution levels, or surrogate models that operate on a kinetic level but replace computationally heavy operations of the simulation with fast approximations. Thus, for the simulation of kinetic and multi-scale systems with a high spatial resolution and long temporal horizon, the quote by Paul Dirac is as relevant as it was almost a century ago.

    The first goal of the dissertation is therefore the development of acceleration strategies for kinetic discretization methods that preserve the structure of their governing equations. In particular, we investigate the use of convex neural networks to accelerate the minimal entropy closure method. Further, we develop a neural network-based hybrid solver for multi-scale systems, where kinetic and macroscopic methods are chosen based on local flow conditions.

    Furthermore, we deal with the compression and efficient computation of neural networks. Neural networks are by now successfully used in different forms in countless scientific works and technical systems, with well-known applications in image recognition and computer-aided language translation, but also as surrogate models for numerical mathematics. Although the first neural networks were already presented in the 1950s, the discipline has enjoyed increasing popularity mainly during the last 15 years, since only now is sufficient computing capacity available. Remarkably, the increasing availability of computing resources is accompanied by a hunger for larger models, fueled by the common conception among machine learning practitioners and researchers that more trainable parameters mean higher performance and better generalization. The increase in model size exceeds the growth of available computing resources by orders of magnitude: since 2012, the computational resources used in the largest neural network models have doubled every 3.4 months (https://openai.com/blog/ai-and-compute/), as opposed to Moore's Law, which proposes a two-year doubling period in available computing power. To some extent, Dirac's statement also applies to the recent computational challenges in the machine learning community. The desire to evaluate and train on resource-limited devices has sparked interest in model compression, where neural networks are sparsified or factorized, typically after training. The second goal of this dissertation is thus a low-rank method, originating from numerical methods for kinetic equations, that compresses neural networks already during training by low-rank factorization. This dissertation thus considers synergies between kinetic models, neural networks, and numerical methods in both disciplines to develop time-, memory- and energy-efficient computational methods for both research areas.
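    A hedged sketch of the kind of low-rank factorization such training-time compression uses (illustrative, not the dissertation's scheme): a dense n x m weight matrix W is replaced by U S V^T of rank r, cutting trainable parameters from n*m to r*(n + m + r):

        import numpy as np

        class LowRankLinear:
            # W is stored as U (n x r), S (r x r), V (m x r) and never
            # materialized, so parameters drop from n*m to r*(n + m + r).
            def __init__(self, n, m, r, seed=0):
                rng = np.random.default_rng(seed)
                self.U = rng.normal(size=(n, r)) / np.sqrt(n)
                self.S = np.eye(r)
                self.V = rng.normal(size=(m, r)) / np.sqrt(m)

            def __call__(self, x):
                # y = x W^T with W = U S V^T, evaluated factor by factor.
                return ((x @ self.V) @ self.S.T) @ self.U.T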

    Breaking the Curse of Dimensionality in Deep Neural Networks by Learning Invariant Representations

    Artificial intelligence, particularly the subfield of machine learning, has seen a paradigm shift towards data-driven models that learn from and adapt to data. This has resulted in unprecedented advances in various domains such as natural language processing and computer vision, largely attributed to deep learning, a special class of machine learning models. Deep learning arguably surpasses traditional approaches by learning the relevant features from raw data through a series of computational layers. This thesis explores the theoretical foundations of deep learning by studying the relationship between the architecture of these models and the inherent structures found within the data they process. In particular, we ask: What drives the efficacy of deep learning algorithms and allows them to beat the so-called curse of dimensionality, i.e. the difficulty of learning generic functions in high dimensions due to the exponentially increasing need for data points as the dimensionality grows? Is it their ability to learn relevant representations of the data by exploiting their structure? How do different architectures exploit different data structures? To address these questions, we push forward the idea that the structure of the data can be effectively characterized by its invariances, i.e. aspects that are irrelevant for the task at hand. Our methodology takes an empirical approach to deep learning, combining experimental studies with physics-inspired toy models. These simplified models allow us to investigate and interpret the complex behaviors we observe in deep learning systems, offering insights into their inner workings, with the far-reaching goal of bridging the gap between theory and practice. (Comment: PhD Thesis @ EPFL)
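    A hedged numeric illustration of the curse of dimensionality mentioned above (our example, not the thesis's): covering the unit cube at a fixed resolution requires exponentially many samples as the dimension grows:

        # Samples needed to cover [0, 1]^d with grid spacing 0.1:
        # 10 points per axis, hence 10**d in total.
        for d in (1, 2, 10, 100):
            print(f"d={d}: {10 ** d:.0e} samples")  # from 1e+01 up to a hopeless 1e+100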