On Bernstein-Euler-Jacobi Operators
In the doctoral dissertation "On Bernstein-Euler-Jacobi Operators" we deal with the composition of positive linear operators.
Many operators arising in the theory of positive linear operators are compositions of other mappings of this type. Among the most common building blocks we find the classical Bernstein operator and the (Euler-Jacobi) Beta-type operators of various kinds.
The purpose of this research is to use these building blocks to provide an overview of the various operators that fit into this pattern, and thus to emphasize the importance of understanding the individual pieces that form the composition.
To better illustrate this, we took one of the less studied operators of this class, namely Păltănea's operator, and looked closely at its properties and at how they relate to the properties of the operators that compose it.
Über Bernstein-Euler-Jacobi-Operatoren
The thesis "On Bernstein-Euler-Jacobi Operators" deals with the composition of special positive linear operators (PLOs). Many operators studied in this theory are compositions of other mappings of the same type. However, this aspect has received little attention in the past. The present thesis investigates it systematically, using a special class of PLOs as an example.
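For reference, the building blocks mentioned above have standard definitions in the approximation-theory literature (the formulas below are quoted from that literature, not from the dissertation itself). The classical Bernstein operator on C[0,1] is

\[
  (B_n f)(x) \;=\; \sum_{k=0}^{n} f\!\left(\tfrac{k}{n}\right) \binom{n}{k} x^{k} (1-x)^{n-k}, \qquad x \in [0,1],
\]

and one common Beta-type (Euler-Jacobi) building block, in one of several conventions found in the literature, is

\[
  (\mathbb{B}_n f)(x) \;=\; \frac{1}{B\bigl(nx,\, n(1-x)\bigr)} \int_{0}^{1} t^{\,nx-1} (1-t)^{\,n(1-x)-1} f(t)\, \mathrm{d}t, \qquad x \in (0,1),
\]

with \((\mathbb{B}_n f)(0) = f(0)\) and \((\mathbb{B}_n f)(1) = f(1)\). Compositions of mappings of this kind are the operators that fit the pattern described above.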
A Robust Normalizing Flow using Bernstein-type Polynomials
The conference website provides online access to the PDF file of the conference paper, a poster, a video of the conference presentation, and supplementary material at: https://bmvc2022.mpi-inf.mpg.de/532/ .
Modeling real-world distributions can often be challenging due to sample data that are subject to perturbations, e.g., instrumentation errors or added random noise. Since flow models are typically nonlinear algorithms, they amplify these initial errors, leading to poor generalization. This paper proposes a framework to construct Normalizing Flows (NFs) which demonstrate higher robustness against such initial errors. To this end, we utilize Bernstein-type polynomials, inspired by the optimal stability of the Bernstein basis. Further, compared to existing NF frameworks, our method provides compelling advantages such as theoretical upper bounds for the approximation error, better suitability for compactly supported densities, and the ability to employ higher-degree polynomials without training instability. We conduct a theoretical analysis and empirically demonstrate the efficacy of the proposed technique using experiments on both real-world and synthetic datasets.
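As a concrete illustration of the underlying idea (a minimal sketch only, not the authors' implementation; the function names and the monotone-coefficient parameterization below are assumptions made for illustration), a one-dimensional Bernstein polynomial with increasing coefficients is strictly increasing on [0,1], so it can serve as the bijective map of a flow, with a closed-form log-Jacobian:

import numpy as np
from scipy.special import comb


def bernstein_basis(x, n):
    """All n+1 Bernstein basis polynomials of degree n evaluated at points x in [0, 1]."""
    k = np.arange(n + 1)
    return comb(n, k) * np.power.outer(x, k) * np.power.outer(1.0 - x, n - k)


def increasing_coeffs(raw):
    """Map unconstrained parameters to increasing coefficients via a cumulative softplus,
    which guarantees the induced Bernstein polynomial is monotone increasing."""
    gaps = np.log1p(np.exp(raw[1:]))              # softplus, strictly positive
    return np.concatenate(([raw[0]], raw[0] + np.cumsum(gaps)))


def bernstein_transform(x, coeffs):
    """Forward map y = T(x) and log|T'(x)| for x in (0, 1).

    T(x) = sum_k coeffs[k] * B_{k,n}(x); its derivative is
    T'(x) = n * sum_k (coeffs[k+1] - coeffs[k]) * B_{k,n-1}(x) > 0.
    """
    n = len(coeffs) - 1
    y = bernstein_basis(x, n) @ coeffs
    dydx = bernstein_basis(x, n - 1) @ (n * np.diff(coeffs))
    return y, np.log(dydx)


# toy usage: a degree-7 transform with random unconstrained parameters
rng = np.random.default_rng(0)
coeffs = increasing_coeffs(rng.normal(size=8))
x = rng.uniform(0.01, 0.99, size=5)
y, logdet = bernstein_transform(x, coeffs)

Composing several such coordinate-wise maps, with parameters predicted by a conditioner network, would yield a triangular flow; again, this is only a sketch of the general construction, not the specific architecture of the paper.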
Approximation Theory and Related Applications
In recent years, we have seen a growing interest in various aspects of approximation theory. This is due to the increasing complexity of mathematical models that require computer calculations and to the development of the theoretical foundations of approximation theory. Approximation theory has broad and important applications in many areas of mathematics, including functional analysis, differential equations, dynamical systems theory, mathematical physics, control theory, probability theory and mathematical statistics, and others. It is also of great practical importance, as approximate methods and estimation of approximation errors are used in physics, economics, chemistry, signal theory, neural networks and many other areas. This book presents the works published in the Special Issue "Approximation Theory and Related Applications". The research of the world's leading scientists presented in this book reflects new trends in approximation theory and related topics.
On incorporating inductive biases into deep neural networks
A machine learning (ML) algorithm can be interpreted as a system that learns to capture patterns in data distributions. Before the modern "deep learning era", emulating the human brain, the use of structured representations and strong inductive bias was prevalent in building ML models, partly due to expensive computational resources and the limited availability of data. On the contrary, armed with increasingly cheap hardware and abundant data, deep learning has made unprecedented progress during the past decade, showcasing remarkable performance on a diverse set of ML tasks. In contrast to classical ML models, deep models seek to minimize structured representations and inductive bias when learning, implicitly favoring the flexibility of learning over manual intervention. Despite this impressive performance, attention is being drawn towards enhancing the (relatively) weaker areas of deep models, such as learning with limited resources, robustness, minimal overhead to realize simple relationships, and the ability to generalize learned representations beyond the training conditions, which were (arguably) the forte of classical ML. Consequently, a recent hybrid trend is surfacing that aims to blend structured representations and substantial inductive bias into deep models, with the hope of improving them.
Based on the above motivation, this thesis investigates methods to improve the performance of deep models using inductive bias and structured representations across multiple problem domains. To this end, we inject a priori knowledge into deep models in the form of enhanced feature extraction techniques, geometrical priors, engineered features, and optimization constraints. In particular, we show that by leveraging prior knowledge about the task at hand and the structure of the data, the performance of deep learning models can be significantly elevated.
We begin by exploring equivariant representation learning. In general, real-world observations are prone to fundamental transformations (e.g., translation, rotation), and deep models typically demand expensive data augmentations and a high number of filters to tackle such variance. In comparison, carefully designed equivariant filters possess this ability by nature. Hence, we propose a novel volumetric convolution operation that can convolve arbitrary functions in the unit ball while preserving rotational equivariance, by projecting the input data onto the Zernike basis. We conduct extensive experiments and show that our formulations can be used to construct significantly cheaper ML models.
Next, we study generative modeling of 3D objects and propose a principled approach to synthesize 3D point clouds in the spectral domain by obtaining a structured representation of 3D points as functions on the unit sphere. Using prior knowledge about the spectral moments and the output data manifold, we design an architecture that can maximally utilize the information in the inputs and generate high-resolution point clouds with minimal computational overhead.
Finally, we propose a framework to build normalizing flows (NFs) based on increasing triangular maps and Bernstein-type polynomials.
Compared to existing NF approaches, our framework offers characteristics favorable for fusing inductive bias into the model, i.e., theoretical upper bounds for the approximation error, robustness, higher interpretability, suitability for compactly supported densities, and the ability to employ higher-degree polynomials without training instability. Most importantly, we present a constructive universality proof, which permits us to analytically derive the optimal model coefficients for known transformations without training.
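To make the last point concrete with a standard fact about Bernstein polynomials (not necessarily the exact construction used in the thesis): if \(T\colon[0,1]\to[0,1]\) is a known continuous increasing transformation, choosing the coefficients \(\theta_k = T(k/n)\) yields the Bernstein approximant

\[
  (B_n T)(x) \;=\; \sum_{k=0}^{n} T\!\left(\tfrac{k}{n}\right) \binom{n}{k} x^{k} (1-x)^{n-k},
\]

which is itself increasing and converges uniformly to \(T\) as \(n \to \infty\). Coefficients of this kind can therefore be written down analytically, with no training required.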