6 research outputs found

    The fundamental theorem of calculus for Lipschitz functions

    Get PDF
    Every smooth function in several variables with a Lipschitz derivative, when considered on a compact convex set, is the difference of a convex function and a convex quadratic function. We use this result to decompose antiderivatives of continuous Lipschitz functions and augment the fundamental theorem of calculus. The augmentation makes it possible to convexify and monotonize ordinary differential equations and to obtain possibly new results for integrals of scalar functions and for line integrals. The result is also used in linear algebra, where new bounds for the determinant and the spectral radius of symmetric matrices are obtained.
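    A short derivation of the convexity step may make the stated decomposition concrete; the Lipschitz constant L, the set K, and the auxiliary function g below are illustrative names introduced here, not notation taken from the paper. Suppose \nabla f is L-Lipschitz on a compact convex set K and define g(x) = f(x) + \tfrac{L}{2}\lVert x \rVert^2. For any x, y \in K,

        \langle \nabla g(x) - \nabla g(y),\, x - y \rangle
          = \langle \nabla f(x) - \nabla f(y),\, x - y \rangle + L \lVert x - y \rVert^2
          \ge -L \lVert x - y \rVert^2 + L \lVert x - y \rVert^2 = 0,

    so \nabla g is monotone and g is convex on K. Hence f(x) = g(x) - \tfrac{L}{2}\lVert x \rVert^2 exhibits f as the difference of a convex function and a convex quadratic, matching the decomposition stated in the abstract.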

    Bayesian inference of ODEs with Gaussian processes

    Full text link
    Recent machine learning advances have proposed black-box estimation of unknown continuous-time system dynamics directly from data. However, earlier works are based on approximate ODE solutions or point estimates. We propose a novel Bayesian nonparametric model that uses Gaussian processes to infer posteriors of unknown ODE systems directly from data. We derive sparse variational inference with decoupled functional sampling to represent vector field posteriors. We also introduce a probabilistic shooting augmentation to enable efficient inference from arbitrarily long trajectories. The method demonstrates the benefit of computing vector field posteriors, with predictive uncertainty scores outperforming alternative methods on multiple ODE learning tasks.
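    As a rough illustration of the general idea (not the authors' sparse variational method), the sketch below fits an exact Gaussian-process posterior to finite-difference estimates of a one-dimensional vector field; the kernel, noise level, and toy dynamics are assumptions made here for demonstration only.

import numpy as np

# Toy trajectory from dx/dt = f(x) with f(x) = -x (unknown to the model).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 60)
x = np.exp(-t) + 0.01 * rng.standard_normal(t.size)

# Crude derivative observations via finite differences: regression targets for f(x).
dxdt = np.gradient(x, t)

def rbf(a, b, ell=0.3, sf=1.0):
    """Squared-exponential kernel k(a, b) between two 1-D arrays of states."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

# Exact GP regression of the vector field at observed states (noise variance assumed).
noise = 0.05**2
K = rbf(x, x) + noise * np.eye(x.size)
xs = np.linspace(x.min(), x.max(), 100)            # test states
Ks = rbf(xs, x)
alpha = np.linalg.solve(K, dxdt)
mean = Ks @ alpha                                   # posterior mean of f at xs
cov = rbf(xs, xs) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))     # predictive uncertainty of f

print(mean[:5])
print(std[:5])

    The point of the toy example is the last line: the posterior over the vector field comes with per-state uncertainty, which is what the paper argues is useful downstream.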

    The BG News November 12, 1999

    Get PDF
    The BGSU campus student newspaper, November 12, 1999. Volume 84, Issue 57.

    An Ode to an ODE

    No full text
    We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d). This nested system of two flows, where the parameter flow is constrained to lie on the compact manifold, provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem that is intrinsically related to training deep neural network architectures such as Neural ODEs. Consequently, it leads to better downstream models, as we show by training reinforcement learning policies with evolution strategies and, in the supervised learning setting, by comparing with previous SOTA baselines. We provide strong convergence results for our proposed mechanism that are independent of the depth of the network, supporting our empirical studies. Our results show an intriguing connection between the theory of deep neural networks and the field of matrix flows on compact manifolds.
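    A minimal numerical sketch of the nested-flow idea follows; it is illustrative only, and the Cayley-transform parameterization, the fixed skew-symmetric generator, the step size, and the tanh main flow are assumptions made here rather than the paper's construction.

import numpy as np

rng = np.random.default_rng(0)
d = 4

def cayley(a):
    """Map a skew-symmetric matrix to an orthogonal matrix via the Cayley transform."""
    i = np.eye(a.shape[0])
    return np.linalg.solve(i - a, i + a)

# Parameter flow on the orthogonal group O(d): start orthogonal, stay orthogonal.
W = np.linalg.qr(rng.standard_normal((d, d)))[0]
B = rng.standard_normal((d, d))
A = 0.1 * (B - B.T)              # skew-symmetric generator (fixed here; learned in practice)

# Main flow: dx/dt = tanh(W(t) x), integrated with explicit Euler steps.
x = rng.standard_normal(d)
h = 0.05
for _ in range(200):
    x = x + h * np.tanh(W @ x)   # main (hidden-state) flow
    W = cayley(h * A) @ W        # nested parameter flow, an orthogonal update of W

# W remains numerically orthogonal, which is the property the paper exploits
# to keep gradients from vanishing or exploding.
print(np.linalg.norm(W.T @ W - np.eye(d)))
print(x)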