    Risk Bounds for Infinitely Divisible Distribution

    In this paper, we study risk bounds for samples independently drawn from an infinitely divisible (ID) distribution. In particular, based on a martingale method, we develop two deviation inequalities for a sequence of random variables following an ID distribution with zero Gaussian component. By applying these deviation inequalities, we obtain covering-number-based risk bounds for the ID distribution. Finally, we analyze the asymptotic convergence of the risk bound derived from one of the two deviation inequalities and show that its convergence rate is faster than the corresponding result for the generic i.i.d. empirical process (Mendelson, 2003).
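    The abstract does not restate the paper's bounds. For orientation only, a classical covering-number risk bound for i.i.d. samples and a function class \(\mathcal{F}\) uniformly bounded by \(M\) (a Pollard-type bound, given here as an illustrative baseline rather than the paper's result) takes the form:

    ```latex
    \Pr\left\{ \sup_{f \in \mathcal{F}} \left| \frac{1}{n}\sum_{i=1}^{n} f(z_i) - \mathbb{E} f \right| > \varepsilon \right\}
    \le 8\, \mathbb{E}\!\left[ \mathcal{N}\!\left(\tfrac{\varepsilon}{8}, \mathcal{F}, L_1(P_n)\right) \right]
    \exp\!\left( -\frac{n \varepsilon^{2}}{128 M^{2}} \right),
    ```

    where \(\mathcal{N}(\varepsilon, \mathcal{F}, L_1(P_n))\) is the covering number of \(\mathcal{F}\) at scale \(\varepsilon\) in the empirical \(L_1\) metric. The paper's contribution is to derive bounds of this flavor for ID distributions via martingale-based deviation inequalities rather than the standard i.i.d. machinery.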

    Risk bounds of learning processes for Lévy processes

    Lévy processes are a class of stochastic processes that includes, for example, Poisson processes and Brownian motion, and they play an important role in stochastic analysis and machine learning. It is therefore essential to study risk bounds of the learning process for time-dependent samples drawn from a Lévy process (briefly, the learning process for Lévy processes). Notably, the samples in this learning process are not independently and identically distributed (i.i.d.), so results from traditional statistical learning theory are not applicable (or at least cannot be applied directly), because they are obtained under the sample-i.i.d. assumption. In this paper, we study risk bounds of the learning process for time-dependent samples drawn from a Lévy process, and then analyze the asymptotic behavior of the learning process. In particular, we first develop deviation inequalities and a symmetrization inequality for the learning process. Using the resulting inequalities, we then obtain risk bounds based on the covering number. Finally, based on these risk bounds, we study the asymptotic convergence and the rate of convergence of the learning process for Lévy processes. We also compare our results with the related results obtained under the sample-i.i.d. assumption. © 2013 Chao Zhang and Dacheng Tao
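    To make the sample structure concrete, the following is a minimal sketch (not taken from the paper) of simulating a simple Lévy process as Brownian motion plus a compound Poisson jump part; the time-dependent samples the abstract refers to would be observations of such a path, whose increments are independent and stationary but whose values along the path are not i.i.d. The function name and parameters are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def levy_path(T=1.0, n=1000, sigma=1.0, lam=5.0, jump=1.0):
        """Simulate a simple Lévy process on a grid of n steps over [0, T]:
        a Brownian component with volatility sigma, plus a compound Poisson
        component with jump rate lam and fixed jump size `jump`."""
        dt = T / n
        # Gaussian (Brownian) increments: independent N(0, sigma^2 * dt)
        dW = sigma * np.sqrt(dt) * rng.normal(size=n)
        # Poisson jump counts per step, each jump contributing size `jump`
        dN = jump * rng.poisson(lam * dt, size=n)
        # The path is the cumulative sum of independent stationary increments,
        # started at 0; successive path values are dependent (non-i.i.d.).
        return np.concatenate([[0.0], np.cumsum(dW + dN)])

    path = levy_path()
    ```

    Sampling the path at fixed times yields exactly the kind of dependent, time-indexed data for which the i.i.d. assumption of classical risk bounds fails, motivating the inequalities developed in the paper.
    
    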