
    Part I. The Cosmological Vacuum from a Topological Perspective

    This article examines how the physical presence of field energy and particulate matter can be interpreted in terms of the topological properties of space-time. The theory is developed in terms of vector and matrix equations of exterior differential systems, which are not constrained by tensor diffeomorphic equivalences. The first postulate defines the field properties (a vector space continuum) of the Cosmological Vacuum in terms of matrices of basis functions that map exact differentials into neighborhoods of exterior differential 1-forms (potentials). The second postulate requires that the field equations satisfy the First Law of Thermodynamics, created dynamically in terms of the Lie differential with respect to a process direction field acting on the exterior differential forms that encode the thermodynamic system. The vector space of infinitesimals need not be global, and its complement is used to define particle properties as topological defects embedded in the field vector space. The potentials, as exterior differential 1-forms, are not (necessarily) uniquely integrable: the fibers can be twisted, leading to possible chiral matrix arrays of certain 3-forms defined as Topological Torsion and Topological Spin. A significant result demonstrates how the coefficients of Affine Torsion are related to the concept of field excitations (mass and charge); another demonstrates how thermodynamic evolution can describe the emergence of topological defects in the physical vacuum. Comment: 70 pages, 5 figures
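    The dynamical First Law invoked above rests on the standard Cartan (homotopy) formula for the Lie differential. As a minimal sketch, with A a 1-form of potentials and V a process direction field (the identification of the terms with work, internal energy, and heat follows the abstract's thermodynamic reading, and A \wedge dA is the Topological Torsion 3-form it names):

```latex
% Cartan's formula decomposes the Lie differential of the 1-form A along V:
\mathcal{L}_V A \;=\; i_V\,\mathrm{d}A \;+\; \mathrm{d}(i_V A)
\;=\; W \;+\; \mathrm{d}U \;=\; Q .
% W = i_V dA (work 1-form), U = i_V A (internal energy), Q (heat 1-form);
% the 3-form A \wedge dA is the Topological Torsion referred to above.
```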

    Learning to Approximate a Bregman Divergence

    Bregman divergences generalize measures such as the squared Euclidean distance and the KL divergence, and arise throughout many areas of machine learning. In this paper, we focus on the problem of approximating an arbitrary Bregman divergence from supervision, and we provide a well-principled approach to analyzing such approximations. We develop a formulation and algorithm for learning arbitrary Bregman divergences based on approximating their underlying convex generating function via a piecewise linear function. We provide theoretical approximation bounds using our parameterization and show that the generalization error O_p(m^{-1/2}) for metric learning using our framework matches the known generalization error in the strictly less general Mahalanobis metric learning setting. We further demonstrate empirically that our method performs well in comparison to existing metric learning methods, particularly for clustering and ranking problems. Comment: 19 pages, 4 figures
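    A minimal sketch of the divergence family being approximated. The two generators below recover the squared Euclidean distance and the KL divergence named in the abstract; the max-affine surrogate is a generic piecewise-linear convex fit of the kind the paper learns, not its actual parameterization.

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """Bregman divergence D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

# phi(x) = ||x||^2 yields the squared Euclidean distance.
sq = lambda x: np.dot(x, x)
grad_sq = lambda x: 2.0 * x

# Negative entropy phi(p) = sum_i p_i log p_i yields KL(p || q) on the simplex.
negent = lambda p: np.sum(p * np.log(p))
grad_negent = lambda p: np.log(p) + 1.0

x = np.array([0.2, 0.8])
y = np.array([0.5, 0.5])
d_euc = bregman(sq, grad_sq, x, y)          # equals ||x - y||^2
d_kl = bregman(negent, grad_negent, x, y)   # equals KL(x || y)

# A max-affine function max_k (a_k . x + b_k) is a piecewise-linear convex
# surrogate for the generating function phi.
def max_affine(x, A, b):
    return np.max(A @ x + b)
```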

    High-order DG solvers for under-resolved turbulent incompressible flows: A comparison of L^2 and H(div) methods

    The accurate numerical simulation of turbulent incompressible flows is a challenging topic in computational fluid dynamics. For discretisation methods to be robust in the under-resolved regime, mass conservation and energy stability are key ingredients. Recently, two approaches have been proposed in the context of high-order discontinuous Galerkin (DG) discretisations that address these aspects differently. On the one hand, standard L^2-based DG discretisations enforce mass conservation and energy stability weakly by the use of additional stabilisation terms. On the other hand, pointwise divergence-free H(div)-conforming approaches ensure exact mass conservation and energy stability by the use of tailored finite element function spaces. The present work raises the question of whether and to what extent these two approaches are equivalent when applied to under-resolved turbulent flows. This comparative study highlights similarities and differences of the two approaches. The numerical results emphasise that both discretisation strategies are promising for under-resolved simulations of turbulent flows due to their inherent dissipation mechanisms. Comment: 24 pages, 13 figures
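    The link between mass conservation and energy stability can be seen from the incompressible Navier-Stokes equations themselves; as a sketch in standard notation (not specific to either discretisation):

```latex
\partial_t u + (u \cdot \nabla)u + \nabla p - \nu \Delta u = f,
\qquad \nabla \cdot u = 0,
\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\,\tfrac{1}{2}\|u\|_{L^2}^2
  = -\nu \|\nabla u\|_{L^2}^2 + (f, u).
```

    The energy identity uses the divergence constraint to cancel the convective and pressure contributions, which is why a scheme that enforces mass conservation only weakly must recover energy stability through added stabilisation terms, while exactly divergence-free H(div)-conforming spaces obtain it directly.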

    On Degrees of Freedom of Projection Estimators with Applications to Multivariate Nonparametric Regression

    In this paper, we consider the nonparametric regression problem with multivariate predictors. We provide a characterization of the degrees of freedom and divergence for estimators of the unknown regression function, which are obtained as outputs of linearly constrained quadratic optimization procedures, namely, minimizers of the least squares criterion with linear constraints and/or quadratic penalties. As special cases of our results, we derive explicit expressions for the degrees of freedom in many nonparametric regression problems, e.g., bounded isotonic regression, multivariate (penalized) convex regression, and additive total variation regularization. Our theory also yields, as special cases, known results on the degrees of freedom of many well-studied estimators in the statistics literature, such as ridge regression, the Lasso, and the generalized Lasso. Our results can be readily used to choose the tuning parameter(s) involved in the estimation procedure by minimizing Stein's unbiased risk estimate. As a by-product of our analysis we derive an interesting connection between bounded isotonic regression and isotonic regression on a general partially ordered set, which is of independent interest. Comment: 72 pages, 7 figures, Journal of the American Statistical Association (Theory and Methods), 201
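    Ridge regression, cited above as a special case, makes the degrees-of-freedom calculation and its role in SURE-based tuning concrete; a minimal sketch with illustrative variable names:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 50, 5, 2.0
X = rng.standard_normal((n, p))

# Ridge is a linear smoother y_hat = H y with H = X (X'X + lam I)^{-1} X'.
# Its degrees of freedom equals the divergence of the fit, here simply tr(H).
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
df = np.trace(H)

# Equivalent closed form via the singular values d_j of X:
d = np.linalg.svd(X, compute_uv=False)
df_svd = np.sum(d**2 / (d**2 + lam))

# With df in hand, Stein's unbiased risk estimate for choosing lam is
#   SURE(lam) = ||y - y_hat||^2 + 2 * sigma^2 * df - n * sigma^2.
```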

    A data driven equivariant approach to constrained Gaussian mixture modeling

    Maximum likelihood estimation of Gaussian mixture models with different class-specific covariance matrices is known to be problematic. This is due to the unboundedness of the likelihood, together with the presence of spurious maximizers. Existing methods to bypass this obstacle are based on the fact that unboundedness is avoided if the eigenvalues of the covariance matrices are bounded away from zero. This can be done by imposing constraints on the covariance matrices, i.e. by incorporating a priori information on the covariance structure of the mixture components. The present work introduces a constrained equivariant approach, where the class conditional covariance matrices are shrunk towards a pre-specified matrix Psi. Data-driven choices of the matrix Psi, when a priori information is not available, and the optimal amount of shrinkage are investigated. The effectiveness of the proposal is evaluated on the basis of a simulation study and an empirical example.
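    A minimal sketch of the shrinkage idea, assuming a simple linear combination toward the target Psi; the paper's constrained equivariant estimator acts on the covariance structure more carefully and is only approximated here.

```python
import numpy as np

def shrink_covariances(covs, Psi, gamma):
    """Shrink each class covariance S_k toward a target matrix Psi.

    Illustrative linear shrinkage (1 - gamma) * S_k + gamma * Psi; with
    gamma > 0 and Psi positive definite, every eigenvalue is bounded
    away from zero, which removes the likelihood's unboundedness.
    """
    return [(1.0 - gamma) * S + gamma * Psi for S in covs]

covs = [np.diag([1e-8, 4.0]), np.diag([2.0, 0.5])]  # first is near-singular
Psi = np.eye(2)
shrunk = shrink_covariances(covs, Psi, gamma=0.3)
# Smallest eigenvalue is now at least gamma * lambda_min(Psi) = 0.3.
```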

    The Price equation program: simple invariances unify population dynamics, thermodynamics, probability, information and inference

    The fundamental equations of various disciplines often seem to share the same basic structure. Natural selection increases information in the same way that Bayesian updating increases information. Thermodynamics and the forms of common probability distributions express maximum increase in entropy, which appears mathematically as loss of information. Physical mechanics follows paths of change that maximize Fisher information. The information expressions typically have analogous interpretations as the Newtonian balance between force and acceleration, representing a partition between direct causes of change and opposing changes in the frame of reference. This web of vague analogies hints at a deeper common mathematical structure. I suggest that the Price equation expresses that underlying universal structure. The abstract Price equation describes dynamics as the change between two sets. One component of dynamics expresses the change in the frequency of things, holding constant the values associated with things. The other component of dynamics expresses the change in the values of things, holding constant the frequency of things. The separation of frequency from value generalizes Shannon's separation of the frequency of symbols from the meaning of symbols in information theory. The Price equation's generalized separation of frequency and value reveals a few simple invariances that define universal geometric aspects of change. For example, the conservation of total frequency, although a trivial invariance by itself, creates a powerful constraint on the geometry of change. That constraint plus a few others seem to explain the common structural forms of the equations in different disciplines. 
From that abstract perspective, interpretations such as selection, information, entropy, force, acceleration, and physical work arise from the same underlying geometry expressed by the Price equation. Comment: Version 3: added figure illustrating geometry; added table of symbols and two tables summarizing mathematical relations; this version accepted for publication in Entropy
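    The two components of change described above are exactly the two terms of the standard Price equation; with w fitness, z the value associated with things, and bars denoting population averages:

```latex
\bar{w}\,\Delta\bar{z}
  \;=\; \underbrace{\operatorname{Cov}(w, z)}_{\substack{\text{change in frequencies,}\\ \text{values held constant}}}
  \;+\; \underbrace{\operatorname{E}(w\,\Delta z)}_{\substack{\text{change in values,}\\ \text{frequencies held constant}}} .
```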