
    A probabilistic interpretation of the parametrix method

    In this article, we introduce the parametrix technique in order to construct fundamental solutions as a general method based on semigroups and their generators. This leads to a probabilistic interpretation of the parametrix method that is amenable to Monte Carlo simulation. We consider the explicit examples of continuous diffusions and jump-driven stochastic differential equations with Hölder continuous coefficients.
    Comment: Published at http://dx.doi.org/10.1214/14-AAP1068 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
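
    A minimal sketch of the Monte Carlo setting the abstract refers to, not the paper's parametrix estimator itself: plain Euler–Maruyama estimation of E[f(X_T)] for a one-dimensional SDE dX_t = b(X_t) dt + sigma(X_t) dW_t with a Hölder-continuous diffusion coefficient. The functions b, sigma and f below are illustrative assumptions; the parametrix representation in the paper turns this biased, discretization-based baseline into an unbiased estimator.

```python
import numpy as np

def euler_maruyama_mc(b, sigma, f, x0, T, n_steps=200, n_paths=100_000, seed=0):
    """Plain Euler-Maruyama Monte Carlo estimate of E[f(X_T)] for
    dX_t = b(X_t) dt + sigma(X_t) dW_t, X_0 = x0 (illustrative baseline)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + b(x) * dt + sigma(x) * dw
    vals = f(x)
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n_paths)

if __name__ == "__main__":
    # Illustrative coefficients: sigma is Hölder (not Lipschitz) at 0.
    est, err = euler_maruyama_mc(
        b=lambda x: -0.5 * x,
        sigma=lambda x: 0.2 + 0.1 * np.sqrt(np.abs(x)),
        f=lambda x: np.maximum(x - 1.0, 0.0),   # call-type payoff
        x0=1.0, T=1.0,
    )
    print(f"E[f(X_T)] ~ {est:.5f} +/- {err:.5f}")
```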

    Counting connected hypergraphs via the probabilistic method

    In 1990 Bender, Canfield and McKay gave an asymptotic formula for the number of connected graphs on $[n]$ with $m$ edges, whenever $n$ and the nullity $m-n+1$ tend to infinity. Asymptotic formulae for the number of connected $r$-uniform hypergraphs on $[n]$ with $m$ edges and so nullity $t=(r-1)m-n+1$ were proved by Karoński and Łuczak for the case $t=o(\log n/\log\log n)$, and by Behrisch, Coja-Oghlan and Kang for $t=\Theta(n)$. Here we prove such a formula for any fixed $r\ge 3$ and any $t=t(n)$ satisfying $t=o(n)$ and $t\to\infty$ as $n\to\infty$. This leaves open only the (much simpler) case $t/n\to\infty$, which we will consider in future work (arXiv:1511.04739). Our approach is probabilistic. Let $H^r_{n,p}$ denote the random $r$-uniform hypergraph on $[n]$ in which each edge is present independently with probability $p$. Let $L_1$ and $M_1$ be the numbers of vertices and edges in the largest component of $H^r_{n,p}$. We prove a local limit theorem giving an asymptotic formula for the probability that $L_1$ and $M_1$ take any given pair of values within the `typical' range, for any $p=p(n)$ in the supercritical regime, i.e., when $p=p(n)=(1+\epsilon(n))(r-2)!\,n^{-r+1}$ where $\epsilon^3 n\to\infty$ and $\epsilon\to 0$; our enumerative result then follows easily. Taking as a starting point the recent joint central limit theorem for $L_1$ and $M_1$, we use smoothing techniques to show that `nearby' pairs of values arise with about the same probability, leading to the local limit theorem. Behrisch et al. used similar ideas in a very different way, which does not seem to work in our setting. Independently, Sato and Wormald have recently proved the special case $r=3$, with an additional restriction on $t$. They use complementary, more enumerative methods, which seem to have a more limited scope, but to give additional information when they do work.
    Comment: Expanded; asymptotics clarified - no significant mathematical changes. 67 pages (including appendix)
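
    A minimal sketch of the random model in the abstract, under illustrative assumptions ($r=3$, small $n$, brute-force edge enumeration): sample $H^r_{n,p}$ in the supercritical regime $p=(1+\epsilon)(r-2)!\,n^{-r+1}$ and report the pair $(L_1, M_1)$ of vertex and edge counts of the largest component, the quantities whose joint local limit theorem the paper proves.

```python
import math
import random
from itertools import combinations

def largest_component(n, r, p, seed=0):
    """Sample H^r_{n,p} by brute force over all C(n, r) potential edges and
    return (L1, M1): vertices and edges in the largest component."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    def union(u, v):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv

    edges = []
    for e in combinations(range(n), r):
        if rng.random() < p:                # each edge present independently
            edges.append(e)
            for u in e[1:]:
                union(e[0], u)

    # Tally vertices and edges per component root.
    vert_count, edge_count = {}, {}
    for v in range(n):
        root = find(v)
        vert_count[root] = vert_count.get(root, 0) + 1
    for e in edges:
        root = find(e[0])
        edge_count[root] = edge_count.get(root, 0) + 1

    big = max(vert_count, key=vert_count.get)
    return vert_count[big], edge_count.get(big, 0)

if __name__ == "__main__":
    n, r, eps = 200, 3, 0.3                 # illustrative values only
    p = (1 + eps) * math.factorial(r - 2) / n ** (r - 1)   # supercritical regime
    print(largest_component(n, r, p))       # -> (L1, M1)
```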

    Rigid abelian groups and the probabilistic method

    The construction of torsion-free abelian groups with prescribed endomorphism rings, starting with Corner's seminal work, is a well-studied subject in the theory of abelian groups. Usually these constructions work by adding elements from a (topological) completion in order to get rid of (kill) unwanted homomorphisms. The critical part is to actually prove that every unwanted homomorphism can be killed by adding a suitable element. We will demonstrate that some of those constructions can be significantly simplified by choosing the elements at random. As a result, the endomorphism ring will be almost surely prescribed, i.e., with probability one.
    Comment: 12 pages, submitted to the special volume of Contemporary Mathematics for the proceedings of the conference Group and Model Theory, 201

    Probabilistic Linear Solvers: A Unifying View

    Several recent works have developed a new, probabilistic interpretation of numerical algorithms for solving linear systems, in which the solution is inferred in a Bayesian framework, either directly or by inferring the unknown action of the matrix inverse. These approaches have typically focused on replicating the behavior of the conjugate gradient method as a prototypical iterative method. In this work, surprisingly general conditions for the equivalence of these disparate methods are presented. We also describe connections between probabilistic linear solvers and projection methods for linear systems, providing a probabilistic interpretation of a far more general class of iterative methods. In particular, this provides such an interpretation of the generalised minimum residual method. A probabilistic view of preconditioning is also introduced. These developments unify the literature on probabilistic linear solvers, and provide foundational connections to the literature on iterative solvers for linear systems.
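
    A minimal sketch of a solution-based probabilistic linear solver of the kind the abstract surveys, under assumed choices (a Gaussian prior on the solution, noise-free projections of $Ax=b$ along a handful of search directions); with suitable priors and directions the posterior mean reproduces classical iterates such as conjugate gradients, which is the equivalence the paper formalises. All names and parameter choices below are illustrative.

```python
import numpy as np

def probabilistic_linear_solver(A, b, S, x0=None, Sigma0=None):
    """Place a Gaussian prior x ~ N(x0, Sigma0) on the solution of A x = b
    and condition on the noise-free observations S^T A x = S^T b, where the
    columns of S are search directions. Returns posterior mean and covariance."""
    n = A.shape[0]
    x0 = np.zeros(n) if x0 is None else x0
    Sigma0 = np.eye(n) if Sigma0 is None else Sigma0

    M = A.T @ S                       # observation operator: (S^T A) x = M^T x
    residual = S.T @ b - M.T @ x0     # information not explained by the prior mean
    G = M.T @ Sigma0 @ M              # Gram matrix of the observations
    K = Sigma0 @ M @ np.linalg.solve(G, np.eye(S.shape[1]))  # gain

    mean = x0 + K @ residual
    cov = Sigma0 - K @ M.T @ Sigma0
    return mean, cov

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 6, 3
    Q = rng.normal(size=(n, n))
    A = Q @ Q.T + n * np.eye(n)       # symmetric positive definite test matrix
    b = rng.normal(size=n)
    S = rng.normal(size=(n, m))       # m random search directions (m < n)
    mean, cov = probabilistic_linear_solver(A, b, S)
    print("posterior mean:", mean)
    print("exact solution:", np.linalg.solve(A, b))
```

    With m = n linearly independent directions the posterior mean coincides with the exact solution and the covariance collapses to zero; with fewer directions the covariance quantifies the remaining uncertainty, which is the "probabilistic" content of such solvers.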

    Global convergence rate analysis of unconstrained optimization methods based on probabilistic models

    Full text link
    We present global convergence rates for a line-search method that is based on random first-order models and directions whose quality is ensured only with a certain probability. We show that, in terms of the order of the accuracy, the evaluation complexity of such a method is the same as that of its counterparts that use deterministic, accurate models; the use of probabilistic models only increases the complexity by a constant, which depends on the probability of the models being good. We particularize and improve these results in the convex and strongly convex cases. We also analyze a probabilistic cubic regularization variant that allows approximate probabilistic second-order models and show improved complexity bounds compared to probabilistic first-order methods; again, as a function of the accuracy, the probabilistic cubic regularization bounds are of the same (optimal) order as in the deterministic case.
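
    A minimal sketch in the spirit of the abstract, using a direct-search simplification rather than the paper's line-search method: each iteration draws a random direction whose quality (being a descent direction) holds only with some probability, accepts the step under a sufficient-decrease condition, and shrinks the step size otherwise. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def probabilistic_descent_search(f, x0, max_iter=2000, alpha0=1.0,
                                 c=1e-4, expand=2.0, shrink=0.5, seed=0):
    """Direct search with random directions: draw a random unit direction d,
    accept x + alpha*d if it gives sufficient decrease f < f(x) - c*alpha^2,
    otherwise reduce alpha. The direction is a 'good' one only with some
    probability, mirroring the probabilistic-model setting."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    alpha = alpha0
    for _ in range(max_iter):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)
        trial = x + alpha * d
        f_trial = f(trial)
        if f_trial < fx - c * alpha ** 2:   # sufficient decrease: successful step
            x, fx = trial, f_trial
            alpha *= expand
        else:                               # unsuccessful step: shrink step size
            alpha *= shrink
    return x, fx

if __name__ == "__main__":
    rosen = lambda z: (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
    x_best, f_best = probabilistic_descent_search(rosen, x0=[-1.2, 1.0])
    print(x_best, f_best)
```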