
    Deep Learning Methods for Partial Differential Equations and Related Parameter Identification Problems

    Recent years have witnessed growth both in mathematics for deep learning, which seeks a deeper mathematical understanding of deep learning concepts and explores how to make the methods more robust, and in deep learning for mathematics, where deep learning algorithms are used to solve mathematical problems. The latter has popularised the field of scientific machine learning, in which deep learning is applied to problems in scientific computing. In particular, a growing number of neural network architectures have been developed to solve specific classes of partial differential equations (PDEs). Such methods exploit properties inherent to PDEs and therefore solve them better than standard feed-forward, recurrent, or convolutional neural networks. This has had a great impact on mathematical modeling, where parametric PDEs are widely used to model most natural and physical processes arising in science and engineering. In this work, we review such methods as well as their extensions for parametric studies and for solving the related inverse problems, and we show their relevance in several industrial applications.
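    To make the general idea of neural PDE solvers concrete, the following is a minimal sketch of a physics-informed loss, assuming PyTorch and using the 1D heat equation u_t = u_xx purely as an illustrative example; it is not one of the specialised architectures the review covers.

```python
import torch
import torch.nn as nn

# Small fully connected network approximating u(x, t); illustrative only.
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x, t):
    """Residual u_t - u_xx of the 1D heat equation at collocation points."""
    x.requires_grad_(True)
    t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - u_xx

for step in range(1000):
    x = torch.rand(256, 1)   # interior collocation points in space
    t = torch.rand(256, 1)   # and in time
    x0 = torch.rand(64, 1)   # points for the initial condition u(x, 0) = sin(pi x)
    loss = pde_residual(x, t).pow(2).mean()
    loss = loss + (net(torch.cat([x0, torch.zeros_like(x0)], dim=1))
                   - torch.sin(torch.pi * x0)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

    Boundary terms are omitted for brevity; the point is only that the PDE itself enters the training loss through automatic differentiation.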

    Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2018

    This open access book features a selection of high-quality papers from the presentations at the International Conference on Spectral and High-Order Methods 2018, offering an overview of the depth and breadth of activity within this important research area. The carefully reviewed papers provide a snapshot of the state of the art, while the extensive bibliography helps initiate new research directions.

    Quaternion Matrices: Statistical Properties and Applications to Signal Processing and Wavelets

    Just as complex numbers provide a framework for extending scalar signal processing techniques to two-channel signals, the 4-dimensional hypercomplex algebra of quaternions can be used to represent signals with 3 or 4 components. For a quaternion random vector to be suited to quaternion linear processing, it must be (second-order) proper. We consider the likelihood ratio test (LRT) for propriety, and compute the exact distribution for statistics of Box type, which include this LRT. Various approximate distributions are compared. The Wishart distribution of a quaternion sample covariance matrix is derived from first principles. Quaternions are isomorphic to an algebra of structured 4x4 real matrices. This mapping is our main tool, and it suggests considering more general real matrix problems as a way of investigating quaternion linear algorithms. A quaternion vector autoregressive (VAR) time-series model is equivalent to a structured real VAR model. We show that generalised least squares (and Gaussian maximum likelihood) estimation of the parameters reduces to ordinary least squares, but only if the innovations are proper. An LRT is suggested to simultaneously test for quaternion structure in the regression coefficients and innovation covariance. Matrix-valued wavelets (MVWs) are generalised (multi)wavelets for vector-valued signals. Quaternion wavelets are equivalent to structured MVWs. Taking orthogonal similarity into account, all MVWs can be constructed from non-trivial MVWs. We show that there are no non-scalar non-trivial MVWs with short support [0,3]. Through symbolic computation we construct the families of shortest non-trivial 2x2 Daubechies MVWs and quaternion Daubechies wavelets.
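    The isomorphism between quaternions and structured 4x4 real matrices is easy to make concrete; here is a minimal sketch using the common left-multiplication embedding of q = a + bi + cj + dk (the exact convention used in the thesis may differ).

```python
import numpy as np

def quat_to_real_matrix(a, b, c, d):
    """Embed q = a + b*i + c*j + d*k as a structured 4x4 real matrix.

    Quaternion multiplication then corresponds to multiplication of these
    matrices, so quaternion linear algorithms can be studied as structured
    real-matrix problems.
    """
    return np.array([
        [a, -b, -c, -d],
        [b,  a, -d,  c],
        [c,  d,  a, -b],
        [d, -c,  b,  a],
    ])

# Quick check that the embedding respects the Hamilton product i*j = k.
I = quat_to_real_matrix(0, 1, 0, 0)
J = quat_to_real_matrix(0, 0, 1, 0)
K = quat_to_real_matrix(0, 0, 0, 1)
assert np.allclose(I @ J, K)
```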

    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines typically depend on images captured in real-world environments, which means the images may be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification, and the problem has therefore attracted wide interest within the computer vision community. We propose a transformation step that aims to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are computed with the CORF push-pull inhibition operator; this transforms an input image into a representation that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. The proposed CORF-augmented pipeline achieved results comparable to those of a conventional AlexNet classifier on noise-free images, but consistently and significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
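    The structure of the proposed pipeline is simple to sketch; in the outline below, `corf_delineation` is a hypothetical placeholder for the CORF push-pull inhibition operator (not an implementation of it), and the evaluation detail is only an assumption about how noisy test images might be produced.

```python
import numpy as np

def corf_delineation(image):
    """Hypothetical placeholder for the CORF push-pull inhibition operator.

    The real operator responds to lines and edges while a push-pull mechanism
    suppresses responses caused by noise; only the interface is fixed here:
    a grayscale image goes in, a delineation (edge-strength) map comes out.
    """
    raise NotImplementedError("replace with an actual CORF implementation")

def add_gaussian_noise(image, sigma):
    """Perturb a [0, 1] grayscale image with Gaussian noise of std sigma."""
    noisy = image + np.random.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

def classify(image, model, sigma=0.0):
    """Noise -> delineation map -> CNN: the transformation precedes the model."""
    x = add_gaussian_noise(image, sigma) if sigma > 0 else image
    x = corf_delineation(x)
    return model(x)
```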

    Harmonic Analysis and Machine Learning

    This dissertation considers data representations that lie at the intersection of harmonic analysis and neural networks. The unifying theme of this work is the goal of robust and reliable machine learning. Our specific contributions include a new variant of scattering transforms based on a Haar-type directional wavelet, a new study of deep neural network instability in the context of remote sensing problems, and new empirical studies of biomedical applications of neural networks.
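    To illustrate the scattering idea, the following is a minimal sketch of a single scattering layer built from simple 2x2 Haar-type directional filters; these filters only stand in for the directional wavelets constructed in the dissertation, whose actual design may differ.

```python
import numpy as np

def haar_filters():
    """Two simple 2x2 Haar-type directional filters (horizontal and vertical detail)."""
    h = 0.5 * np.array([[1.0, -1.0], [1.0, -1.0]])   # horizontal differences
    v = 0.5 * np.array([[1.0, 1.0], [-1.0, -1.0]])   # vertical differences
    return [h, v]

def conv2_valid(image, kernel):
    """Tiny 'valid' 2D correlation, sufficient for 2x2 kernels on small images."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def scattering_order1(image):
    """One scattering layer: wavelet filtering, modulus nonlinearity, averaging.

    Returns one translation-stable feature per directional filter.
    """
    feats = []
    for k in haar_filters():
        coeffs = np.abs(conv2_valid(image, k))  # modulus of wavelet coefficients
        feats.append(coeffs.mean())             # averaging gives stability
    return np.array(feats)
```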