Deep Learning Methods for Partial Differential Equations and Related Parameter Identification Problems
Recent years have witnessed a growth in mathematics for deep learning--which
seeks a deeper understanding of the concepts of deep learning with mathematics
and explores how to make it more robust--and deep learning for mathematics,
where deep learning algorithms are used to solve problems in mathematics. The
latter has popularised the field of scientific machine learning where deep
learning is applied to problems in scientific computing. Specifically, more and
more neural network architectures have been developed to solve specific classes
of partial differential equations (PDEs). Such methods exploit properties that
are inherent to PDEs and thus solve the PDEs better than standard feed-forward
neural networks, recurrent neural networks, or convolutional neural networks.
This has had a great impact in the area of mathematical modeling where
parametric PDEs are widely used to model most natural and physical processes
arising in science and engineering. In this work, we review such methods as
well as their extensions for parametric studies and for solving the related
inverse problems. We also demonstrate their relevance in several industrial applications.
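As a minimal illustration of fitting a network-style ansatz to a PDE residual (not one of the specialized architectures surveyed here), the following sketch solves a 1-D Poisson problem by least-squares collocation over random tanh features; the problem data and all parameter choices are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: u''(x) = -pi^2 sin(pi x) on [0, 1], u(0) = u(1) = 0,
# with exact solution u(x) = sin(pi x).
n_feat, n_coll = 200, 120
w = rng.uniform(-8.0, 8.0, n_feat)   # frozen random inner weights
b = rng.uniform(-8.0, 8.0, n_feat)   # frozen random biases

def phi(x):
    """Feature matrix tanh(w x + b), shape (len(x), n_feat)."""
    return np.tanh(np.outer(x, w) + b)

def phi_xx(x):
    """Second derivatives of the features: -2 w^2 t (1 - t^2)."""
    t = np.tanh(np.outer(x, w) + b)
    return -2.0 * w**2 * t * (1.0 - t**2)

x = np.linspace(0.0, 1.0, n_coll)
f = -np.pi**2 * np.sin(np.pi * x)

# PDE residual rows plus heavily weighted boundary rows, solved by least squares.
A = np.vstack([phi_xx(x), 100.0 * phi(np.array([0.0, 1.0]))])
rhs = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u = phi(x) @ c
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

Dedicated PDE networks improve on such a generic ansatz precisely by building properties of the PDE into the architecture.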
Recent Developments in the Numerics of Nonlinear Hyperbolic Conservation Laws
The development of reliable numerical methods for the simulation of real-life problems requires both fundamental knowledge of numerical analysis and practical experience with applications and their mathematical modeling.
The purpose of the workshop was therefore to bring together experts not only from applied mathematics but also from civil and mechanical engineering who work on modern high-order methods for the solution of partial differential equations, as well as on the approximation theory needed to improve the accuracy and robustness of numerical algorithms.
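The workshop's subject can be grounded with the simplest possible example: a first-order upwind scheme for linear advection, the baseline that modern high-order methods refine. The setup below is an illustrative assumption; at CFL number 1 the upwind update reduces to an exact shift, so advancing one full period returns the initial data:

```python
import numpy as np

# Linear advection u_t + u_x = 0 on the periodic interval [0, 1).
n = 100
dx = 1.0 / n
dt = dx                     # CFL number a*dt/dx = 1
x = np.arange(n) * dx
u0 = np.exp(-100.0 * (x - 0.5) ** 2)   # smooth initial bump
u = u0.copy()
for _ in range(n):          # advance to t = 1, one full period
    u = u - (dt / dx) * (u - np.roll(u, 1))   # first-order upwind
```

For nonlinear conservation laws the same scheme is heavily diffusive at CFL below 1, which is what motivates the high-order reconstructions discussed at the workshop.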
Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2018
This open access book features a selection of high-quality papers from the presentations at the International Conference on Spectral and High-Order Methods 2018, offering an overview of the depth and breadth of the activities within this important research area. The carefully reviewed papers provide a snapshot of the state of the art, while the extensive bibliography helps initiate new research directions.
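As a flavor of what "spectral accuracy" means in this context, here is a minimal sketch (illustrative, not taken from the book) of Fourier spectral differentiation of a periodic function, which is exact to rounding for band-limited data:

```python
import numpy as np

n = 32
x = 2.0 * np.pi * np.arange(n) / n          # periodic grid on [0, 2*pi)
u = np.sin(x)
ik = 1j * np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers times i
du = np.real(np.fft.ifft(ik * np.fft.fft(u)))   # spectral derivative
```

Finite-difference formulas of fixed order converge algebraically in n, whereas this derivative is already accurate to machine precision at n = 32.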
Quaternion Matrices: Statistical Properties and Applications to Signal Processing and Wavelets
Just as complex numbers provide a framework for extending scalar signal processing techniques to 2-channel signals, the 4-dimensional hypercomplex algebra of quaternions can be used to represent signals with 3 or 4 components.
For a quaternion random vector to be suited for quaternion linear processing, it must be (second-order) proper.
We consider the likelihood ratio test (LRT) for propriety, and compute the exact distribution for statistics of Box type, which include this LRT. Various approximate distributions are compared. The Wishart distribution of a quaternion sample covariance matrix is derived from first principles.
Quaternions are isomorphic to an algebra of structured 4x4 real matrices.
This mapping is our main tool, and suggests considering more general real matrix problems as a way of investigating quaternion linear algorithms.
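The isomorphism can be sketched concretely; the code below uses one standard sign convention (the left-multiplication matrix of a quaternion) and checks that matrix products mirror quaternion products:

```python
import numpy as np

def as_matrix(q):
    """Left-multiplication matrix of quaternion q = (a, b, c, d)."""
    a, b, c, d = q
    return np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]], dtype=float)

def hamilton(p, q):
    """Hamilton product p * q in component form."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

p = np.array([1.0, 2.0, 3.0, 4.0])
q = np.array([0.5, -1.0, 2.0, 0.25])
```

Because the representation is faithful, algorithms formulated for such structured 4x4 real blocks translate directly back into quaternion linear algorithms.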
A quaternion vector autoregressive (VAR) time-series model is equivalent to a structured real VAR model. We show that generalised least squares (and Gaussian maximum likelihood) estimation of the parameters reduces to ordinary least squares, but only if the innovations are proper. An LRT is suggested to simultaneously test for quaternion structure in the regression coefficients and innovation covariance.
Matrix-valued wavelets (MVWs) are generalised (multi)wavelets for vector-valued signals. Quaternion wavelets are equivalent to structured MVWs.
Taking orthogonal similarity into account, all MVWs can be constructed from non-trivial MVWs. We show that there are no non-scalar non-trivial MVWs with short support [0,3]. Through symbolic computation we construct the families of shortest non-trivial 2x2 Daubechies MVWs and quaternion Daubechies wavelets.
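For orientation, the scalar Daubechies-4 filter that these matrix-valued and quaternion constructions generalise can be written down and checked directly (the thesis's symbolic families are not reproduced here):

```python
import numpy as np

s3 = np.sqrt(3.0)
# Scalar Daubechies-4 low-pass filter: support [0, 3], two vanishing moments.
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4.0 * np.sqrt(2.0))
# Corresponding high-pass filter g_k = (-1)^k h_{3-k}.
g = h[::-1] * np.array([1.0, -1.0, 1.0, -1.0])
```

The defining properties (sum sqrt(2), unit energy, orthogonality to double shifts, and two vanishing moments of g) pin these four numbers down up to reflection.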
Reduction of Multivariate Mixtures and Its Applications
We consider a fast deterministic algorithm to identify the "best" linearly independent terms in multivariate mixtures and use them to compute an equivalent representation with fewer terms, up to user-selected accuracy. Our algorithm employs the well-known pivoted Cholesky decomposition of the Gram matrix constructed from the terms of the mixture. Importantly, the multivariate mixtures do not have to be a separated representation of a function, and the complexity of the algorithm is independent of the number of variables (dimensions). The algorithm requires O(r²N) operations, where N is the initial number of terms in a multivariate mixture and r is the number of selected terms. Due to the condition number of the Gram matrix, the resulting accuracy is limited to about half of the digits of the floating-point arithmetic used. We also consider two additional reduction algorithms for the same purpose. The first algorithm is based on orthogonalization of the multivariate mixture and has performance similar to that of the Cholesky-based approach. The second algorithm yields better accuracy but, in high dimensions, is currently applicable only to multivariate mixtures in a separated representation.
We use the reduction algorithm to develop a new adaptive numerical method for solving differential and integral equations in quantum chemistry. We demonstrate the performance of this approach by solving the Hartree-Fock equations for two small molecules. We also describe a number of initial applications of the reduction algorithm to solving partial differential and integral equations and to several problems in data science. For high-dimensional data science applications we consider the kernel density estimation (KDE) approach to constructing a probability density function (PDF) of a cloud of points, a far-field kernel summation method, and the construction of equivalent sources for non-oscillatory kernels (used in both computational physics and data science); finally, we show how to use the reduction algorithm to produce seeds for subdividing a cloud of points into groups.
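The core selection step can be sketched in a few lines. The toy example below (a 1-D Gaussian mixture with duplicated terms, a closed-form Gram matrix, and a textbook pivoted Cholesky; all choices are illustrative assumptions, not the paper's implementation) selects the linearly independent terms and refits their coefficients by Galerkin projection:

```python
import numpy as np

def select_terms(G, tol=1e-10):
    """Pivoted Cholesky on a Gram matrix G: return indices of numerically
    independent terms, stopping once the residual diagonal drops below tol."""
    G = G.astype(float).copy()
    n = G.shape[0]
    idx = np.arange(n)
    L = np.zeros((n, n))
    for k in range(n):
        m = k + int(np.argmax(np.diag(G)[k:]))
        if G[m, m] <= tol:
            return idx[:k]
        idx[[k, m]] = idx[[m, k]]          # move pivot into position k
        G[[k, m], :] = G[[m, k], :]
        G[:, [k, m]] = G[:, [m, k]]
        L[[k, m], :] = L[[m, k], :]
        L[k, k] = np.sqrt(G[k, k])
        L[k + 1:, k] = G[k + 1:, k] / L[k, k]
        G[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], L[k + 1:, k])
    return idx

# Mixture of 1-D Gaussians g_i(x) = exp(-(x - p_i)^2); two terms duplicate others.
p = np.array([0.0, 0.5, 1.0, 0.0, 0.5])
c = np.array([1.0, -2.0, 0.5, 3.0, 1.5])
# Closed-form Gram matrix: <g_i, g_j> = sqrt(pi/2) * exp(-(p_i - p_j)^2 / 2).
G = np.sqrt(np.pi / 2.0) * np.exp(-0.5 * (p[:, None] - p[None, :]) ** 2)

sel = select_terms(G)                      # should keep 3 of the 5 terms
# Refit coefficients of the kept terms by Galerkin projection onto their span.
c_sel = np.linalg.solve(G[np.ix_(sel, sel)], G[sel, :] @ c)

x = np.linspace(-1.0, 2.0, 50)
full = np.exp(-((x[:, None] - p) ** 2)) @ c
reduced = np.exp(-((x[:, None] - p[sel]) ** 2)) @ c_sel
```

The cost is dominated by the r selected pivot columns of length N, which is where the O(r²N) operation count comes from.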
On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator
Deployed image classification pipelines typically depend on images captured in real-world environments, which means the images may be affected by different sources of perturbation (e.g. sensor noise in low-light conditions). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification, and it has hence attracted wide interest within the computer vision community. We propose a transformation step that aims to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are computed using the CORF push-pull inhibition operator. This operation transforms an input image into a space that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classifier without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
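To make the preprocessing idea concrete, here is a simplified sketch of a push-pull-style response built from a difference-of-Gaussians "push" minus a fraction of the inverted "pull" response; this is an illustrative stand-in, not the actual CORF operator:

```python
import numpy as np

def gauss1d(sigma):
    r = int(3 * sigma) + 1
    t = np.arange(-r, r + 1)
    k = np.exp(-t**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur (same-size output; borders are approximate)."""
    k = gauss1d(sigma)
    out = np.apply_along_axis(np.convolve, 0, img, k, mode='same')
    return np.apply_along_axis(np.convolve, 1, out, k, mode='same')

def push_pull(img, s1=1.0, s2=2.0, alpha=0.8):
    """Difference-of-Gaussians 'push' minus a fraction of the inverted 'pull'."""
    dog = blur(img, s1) - blur(img, s2)
    return np.maximum(dog, 0.0) - alpha * np.maximum(-dog, 0.0)

flat = push_pull(np.ones((32, 32)))        # flat region: near-zero response
edge = np.zeros((32, 32)); edge[:, 16:] = 1.0
resp = push_pull(edge)                     # step edge: strong response
```

In the paper's pipeline the resulting delineation map, rather than the raw image, is what the CNN consumes.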
Harmonic Analysis and Machine Learning
This dissertation considers data representations that lie at the intersection of harmonic analysis and neural networks. The unifying theme of this work is the goal of robust and reliable machine learning. Our specific contributions include a new variant of scattering transforms based on a Haar-type directional wavelet, a new study of deep neural network instability in the context of remote sensing problems, and new empirical studies of biomedical applications of neural networks.
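The building block behind such scattering transforms can be illustrated with one level of the ordinary 1-D orthonormal Haar transform; the dissertation's Haar-type wavelet is directional and operates on images, so this scalar sketch only shows the split into approximation and detail channels:

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal 1-D Haar transform (even-length input)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (high-pass)
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

rng = np.random.default_rng(1)
sig = rng.standard_normal(16)
a, d = haar_step(sig)
rec = haar_inverse(a, d)
```

A scattering transform cascades such filtering steps, taking the modulus of the detail channels between levels to build stable, deformation-robust features.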