
    On concentration in vortex sheets

    The question of energy concentration in approximate solution sequences $u^\epsilon$, as $\epsilon \to 0$, of the two-dimensional incompressible Euler equations with vortex-sheet initial data is revisited. Building on a novel identity for the structure function in terms of vorticity, the vorticity maximal function is proposed as a quantitative tool to detect concentration effects in approximate solution sequences. This tool is applied to numerical experiments based on the vortex-blob method, where vortex-sheet initial data without distinguished sign are considered, as introduced in [R. Krasny, J. Fluid Mech. 167:65-93 (1986)]. Numerical evidence suggests that no energy concentration appears in the limit of zero blob regularization $\epsilon \to 0$, for the considered initial data.
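
    To fix ideas, here is a minimal sketch of a vortex-blob computation of the kind underlying such experiments: point vortices interact through a Biot-Savart kernel smoothed by the blob parameter $\epsilon$. The initial sheet, circulation density, and plain Euler time stepping below are illustrative placeholders, not the cited paper's exact setup.

        import numpy as np

        def blob_velocity(x, gamma, eps):
            # Blob-regularized Biot-Savart sum: the kernel denominator
            # |z|^2 is replaced by |z|^2 + eps^2, removing the singularity.
            dx = x[:, 0][:, None] - x[:, 0][None, :]
            dy = x[:, 1][:, None] - x[:, 1][None, :]
            r2 = dx**2 + dy**2 + eps**2
            u = -np.sum(gamma[None, :] * dy / r2, axis=1) / (2.0 * np.pi)
            v = np.sum(gamma[None, :] * dx / r2, axis=1) / (2.0 * np.pi)
            return np.stack([u, v], axis=1)

        # Flat sheet with a sign-changing circulation density (hypothetical
        # data); shrinking eps probes the zero-regularization limit.
        n, eps, dt = 400, 0.05, 0.005
        s = np.linspace(-1.0, 1.0, n)
        x = np.stack([s, np.zeros(n)], axis=1)
        gamma = np.sin(2.0 * np.pi * s) / n
        for _ in range(200):
            x = x + dt * blob_velocity(x, gamma, eps)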

    Statistical solutions of hyperbolic conservation laws I: Foundations

    We seek to define statistical solutions of hyperbolic systems of conservation laws as time-parametrized probability measures on $p$-integrable functions. To do so, we prove the equivalence between probability measures on $L^p$ spaces and infinite families of correlation measures. Each member of this family, termed a correlation marginal, is a Young measure on a finite-dimensional tensor product domain and provides information about multi-point correlations of the underlying integrable functions. We also prove that any probability measure on an $L^p$ space is uniquely determined by certain moments (correlation functions) of the equivalent correlation measure. We utilize this equivalence to define statistical solutions of multi-dimensional conservation laws in terms of an infinite set of equations, each evolving a moment of the correlation marginal. These evolution equations can be interpreted as augmenting entropy measure-valued solutions with additional information about the evolution of all possible multi-point correlation functions. Our concept of statistical solutions can accommodate uncertain initial data as well as possibly non-atomic solutions even for atomic initial data. For multi-dimensional scalar conservation laws we impose additional entropy conditions and prove that the resulting entropy statistical solutions exist, are unique, and are stable with respect to the $1$-Wasserstein metric on probability measures on $L^1$.
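
    To make the notion of correlation-function moments concrete, here is a hedged numerical sketch: a probability measure on an $L^p$ space is represented by a finite ensemble of sample functions (synthetic random-phase waves, purely illustrative), and the first moments of the first two correlation marginals are estimated as the mean function and the two-point correlation $E[u(x)\,u(y)]$.

        import numpy as np

        # Hypothetical ensemble: M sample functions on an N-point grid,
        # standing in for draws from a probability measure on L^p.
        rng = np.random.default_rng(0)
        M, N = 1000, 64
        xgrid = np.linspace(0.0, 1.0, N)
        phases = rng.uniform(0.0, 2.0 * np.pi, size=M)
        samples = np.sin(2.0 * np.pi * xgrid[None, :] + phases[:, None])

        # Mean function E[u(x)] and two-point correlation E[u(x) u(y)]:
        # the first moments of the one- and two-point correlation marginals.
        mean_u = samples.mean(axis=0)            # shape (N,)
        two_point = samples.T @ samples / M      # shape (N, N)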

    Operator learning with PCA-Net: upper and lower complexity bounds

    PCA-Net is a recently proposed neural operator architecture which combines principal component analysis (PCA) with neural networks to approximate operators between infinite-dimensional function spaces. The present work develops approximation theory for this approach, improving and significantly extending previous work in this direction. First, a novel universal approximation result is derived, under minimal assumptions on the underlying operator and the data-generating distribution. Then, two potential obstacles to efficient operator learning with PCA-Net are identified and made precise through lower complexity bounds; the first relates to the complexity of the output distribution, measured by a slow decay of the PCA eigenvalues. The other obstacle relates to the inherent complexity of the space of operators between infinite-dimensional input and output spaces, resulting in a rigorous and quantifiable statement of a "curse of parametric complexity", an infinite-dimensional analogue of the well-known curse of dimensionality encountered in high-dimensional approximation problems. In addition to these lower bounds, upper complexity bounds are derived. A suitable smoothness criterion is shown to ensure an algebraic decay of the PCA eigenvalues. Furthermore, it is shown that PCA-Net can overcome the general curse for specific operators of interest arising from the Darcy flow and the Navier-Stokes equations.
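
    The architecture itself is simple to state. The sketch below uses assumed synthetic data and, for brevity, replaces the interior neural network with a linear least-squares map; it shows the PCA-Net pattern of PCA encoding of input functions, a learned map between coefficient spaces, and PCA decoding of output functions.

        import numpy as np

        def pca_basis(snapshots, d):
            # Leading d PCA modes of (num_samples, num_points) snapshots.
            mean = snapshots.mean(axis=0)
            _, _, vt = np.linalg.svd(snapshots - mean, full_matrices=False)
            return mean, vt[:d]

        # Hypothetical training pairs (u_k, S(u_k)) on a fixed grid; the
        # "operator" S is a placeholder running average.
        rng = np.random.default_rng(1)
        U = rng.standard_normal((500, 128))
        V = np.cumsum(U, axis=1) / 128.0
        mu_in, phi_in = pca_basis(U, d=16)       # PCA encoder (inputs)
        mu_out, phi_out = pca_basis(V, d=16)     # PCA decoder (outputs)

        A = (U - mu_in) @ phi_in.T               # input PCA coefficients
        B = (V - mu_out) @ phi_out.T             # target output coefficients

        # PCA-Net proper trains a deep network between coefficient spaces;
        # a linear least-squares map stands in for it here.
        W, *_ = np.linalg.lstsq(A, B, rcond=None)

        def predict(u):
            return ((u - mu_in) @ phi_in.T) @ W @ phi_out + mu_out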

    Computation of measure-valued solutions for the incompressible Euler equations

    We combine the spectral (viscosity) method and ensemble averaging to propose an algorithm that computes admissible measure-valued solutions of the incompressible Euler equations. The resulting approximate Young measures are proved to converge (with increasing numerical resolution) to a measure-valued solution. We present numerical experiments demonstrating the robustness and efficiency of the proposed algorithm, as well as the appropriateness of measure-valued solutions as a solution framework for the Euler equations. Furthermore, we report an extensive computational study of the two-dimensional vortex sheet, which indicates that the computed measure-valued solution is non-atomic and implies possible non-uniqueness of the weak solutions constructed by Delort.
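
    The ensemble-averaging step can be sketched in a few lines. Below, the spectral viscosity solver is reduced to a stand-in function (it simply returns its input, so only the averaging machinery is shown); perturbed initial data are propagated and the empirical distribution of point values approximates the one-point Young measure at a fixed location.

        import numpy as np

        def solve_euler(u0):
            # Stand-in for a spectral (viscosity) solve of 2-D Euler;
            # it returns its input so only the averaging step is shown.
            return u0

        rng = np.random.default_rng(2)
        num_samples, num_points = 256, 64
        base = np.sin(2.0 * np.pi * np.arange(num_points) / num_points)
        ensemble = [solve_euler(base + 1e-2 * rng.standard_normal(num_points))
                    for _ in range(num_samples)]
        values = np.array([u[num_points // 2] for u in ensemble])

        # Empirical one-point Young measure: histogram of solution values.
        hist, edges = np.histogram(values, bins=30, density=True)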

    Seamless Integration of RESTful Services into the Web of Data

    We live in an era of ever-increasing abundance of data. To cope with the information overload we suffer from every single day, more sophisticated methods are required to access, manipulate, and analyze these humongous amounts of data. By embracing the heterogeneity, which is unavoidable at such a scale, and accepting the fact that data quality and meaning are fuzzy, more adaptable, flexible, and extensible systems can be built. RESTful services combined with Semantic Web technologies could prove to be a viable path to achieve that. Their combination allows data integration on an unprecedented scale and solves some of the problems Web developers are continuously struggling with. This paper introduces a novel approach to create machine-readable descriptions for RESTful services as a first step towards this ambitious goal. It also shows how these descriptions, along with an algorithm to translate SPARQL queries to HTTP requests, can be used to integrate RESTful services into a global read-write Web of Data.
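
    A minimal sketch of the translation idea follows; the service description format, endpoint (api.example.org), field mapping, and resolve helper are entirely hypothetical and illustrative, not the paper's vocabulary. A single SPARQL-style triple pattern is answered by dereferencing the described REST resource via HTTP GET.

        import json
        import urllib.request

        # Hypothetical machine-readable description of one RESTful
        # resource: a URI template plus a mapping from RDF predicates
        # to response fields.
        SERVICE = {
            "template": "https://api.example.org/users/{id}",
            "fields": {"http://xmlns.com/foaf/0.1/name": "name"},
        }

        def resolve(subject_id, predicate):
            # Translate the triple pattern (subject, predicate, ?o)
            # into an HTTP GET and read off the object value.
            url = SERVICE["template"].format(id=subject_id)
            with urllib.request.urlopen(url) as resp:
                record = json.load(resp)
            return record[SERVICE["fields"][predicate]]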

    The curse of dimensionality in operator learning

    Neural operator architectures employ neural networks to approximate operators mapping between Banach spaces of functions; they may be used to accelerate model evaluations via emulation, or to discover models from data. Consequently, the methodology has received increasing attention over recent years, giving rise to the rapidly growing field of operator learning. The first contribution of this paper is to prove that for general classes of operators which are characterized only by their $C^r$- or Lipschitz-regularity, operator learning suffers from a curse of dimensionality, defined precisely here in terms of representations of the infinite-dimensional input and output function spaces. The result is applicable to a wide variety of existing neural operators, including PCA-Net, DeepONet, and the FNO. The second contribution of the paper is to prove that the general curse of dimensionality can be overcome for solution operators defined by the Hamilton-Jacobi equation; this is achieved by leveraging additional structure in the underlying solution operator, going beyond regularity. To this end, a novel neural operator architecture is introduced, termed HJ-Net, which explicitly takes into account characteristic information of the underlying Hamiltonian system. Error and complexity estimates are derived for HJ-Net which show that this architecture can provably beat the curse of dimensionality related to the infinite-dimensional input and output function spaces.
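
    The structure being exploited can be sketched directly. For $u_t + H(\nabla u) = 0$ with $H$ depending only on the gradient, solutions are transported along characteristics on which $p = \nabla u$ is constant; the snippet below (one space dimension, with a quadratic Hamiltonian chosen purely for illustration) propagates initial data this way, with the learned reconstruction step of an HJ-Net-style architecture left as a comment.

        import numpy as np

        H = lambda p: 0.5 * p**2      # illustrative Hamiltonian H(p)
        dH = lambda p: p              # its derivative

        def characteristics(x0, u0, p0, t):
            # Along characteristics p is constant, x moves with dH(p),
            # and u evolves as du/dt = p*dH(p) - H(p).
            x = x0 + t * dH(p0)
            u = u0 + t * (p0 * dH(p0) - H(p0))
            return x, u, p0

        x0 = np.linspace(-1.0, 1.0, 101)
        u0 = np.cos(np.pi * x0)
        p0 = -np.pi * np.sin(np.pi * x0)   # initial gradient u0_x
        xt, ut, pt = characteristics(x0, u0, p0, t=0.1)
        # A learned interpolation step (the "Net" part) would map the
        # scattered values (xt, ut) back onto a fixed output grid.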

    Error Bounds for Learning with Vector-Valued Random Features

    This paper provides a comprehensive error analysis of learning with vector-valued random features (RF). The theory is developed for RF ridge regression in a fully general infinite-dimensional input-output setting, but nonetheless applies to and improves existing finite-dimensional analyses. In contrast to comparable work in the literature, the approach proposed here relies on a direct analysis of the underlying risk functional and completely avoids the explicit RF ridge regression solution formula in terms of random matrices. This removes the need for concentration results in random matrix theory or their generalizations to random operators. The main results established in this paper include strong consistency of vector-valued RF estimators under model misspecification and minimax optimal convergence rates in the well-specified setting. The parameter complexity (number of random features) and sample complexity (number of labeled data) required to achieve such rates are comparable with Monte Carlo intuition and free from logarithmic factors. (Comment: 25 pages, 1 table.)
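
    For orientation, here is a scalar special case of the RF ridge regression estimator under study, with random Fourier features and synthetic data; the paper's setting is vector-valued and infinite-dimensional, and its analysis deliberately avoids the closed-form solve used below.

        import numpy as np

        rng = np.random.default_rng(3)
        n, d, m, lam = 200, 5, 300, 1e-3   # samples, dims, features, ridge

        X = rng.standard_normal((n, d))
        y = np.sin(X.sum(axis=1))          # placeholder targets

        # Random Fourier feature map x -> sqrt(2/m) cos(Wx + b).
        W = rng.standard_normal((d, m))
        b = rng.uniform(0.0, 2.0 * np.pi, m)
        feats = lambda x: np.cos(x @ W + b) * np.sqrt(2.0 / m)

        # Ridge solution in feature space: (Z^T Z + n*lam*I) c = Z^T y.
        Z = feats(X)
        c = np.linalg.solve(Z.T @ Z + n * lam * np.eye(m), Z.T @ y)
        predict = lambda x: feats(x) @ c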

    The Nonlocal Neural Operator: Universal Approximation

    Neural operator architectures approximate operators between infinite-dimensional Banach spaces of functions. They are gaining increased attention in computational science and engineering, due to their potential both to accelerate traditional numerical methods and to enable data-driven discovery. A popular variant of neural operators is the Fourier neural operator (FNO). Previous analysis proving universal operator approximation theorems for FNOs resorts to use of an unbounded number of Fourier modes and limits the basic form of the method to problems with periodic geometry. Prior work relies on intuition from traditional numerical methods, and interprets the FNO as a nonstandard and highly nonlinear spectral method. The present work challenges this point of view in two ways: (i) the work introduces a new broad class of operator approximators, termed nonlocal neural operators (NNOs), which allow for operator approximation between functions defined on arbitrary geometries, and includes the FNO as a special case; and (ii) analysis of the NNOs shows that, provided this architecture includes computation of a spatial average (corresponding to retaining only a single Fourier mode in the special case of the FNO), it benefits from universal approximation. It is demonstrated that this theoretical result unifies the analysis of a wide range of neural operator architectures. Furthermore, it sheds new light on the role of nonlocality, and its interaction with nonlinearity, thereby paving the way for a more systematic exploration of nonlocality, both through the development of new operator learning architectures and the analysis of existing and new architectures.
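
    The averaging mechanism at the heart of the universality result is easy to exhibit. Below is a hedged sketch of a single NNO-style hidden layer: a pointwise linear map plus a spatial-mean term (the one retained Fourier mode in the FNO case), followed by a ReLU. All shapes and weights are illustrative.

        import numpy as np

        def nonlocal_layer(v, W, A, b):
            # v: (num_points, channels) samples of the hidden function.
            # Pointwise linear map plus a spatial-average (nonlocal) term.
            mean = v.mean(axis=0, keepdims=True)
            return np.maximum(v @ W.T + mean @ A.T + b, 0.0)

        rng = np.random.default_rng(4)
        num_points, channels = 64, 8
        v = rng.standard_normal((num_points, channels))
        W = rng.standard_normal((channels, channels))
        A = rng.standard_normal((channels, channels))
        b = rng.standard_normal(channels)
        h = nonlocal_layer(v, W, A, b)   # same shape as v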