6 research outputs found

    Rayleigh quotient with Bolzano booster for faster convergence of dominant eigenvalues

    Ranking algorithms are widely used in several informatics fields. One of them is the PageRank algorithm, used by the most popular search engine globally. Many researchers have improved the ranking algorithm in order to get better results. Recent research using the Rayleigh quotient to speed up PageRank can guarantee convergence of the dominant eigenvalue, which serves as the key value for stopping the computation. Bolzano's method converges on a linear function by repeatedly dividing an interval into two subintervals. This research aims to embed the Bolzano algorithm into the Rayleigh quotient computation for faster convergence. It produces an algorithm, tested and validated by mathematicians, that shows a speed-up of at most 7.08% compared to the Rayleigh-only approach. Statistical analysis of the computation results shows that the slope of the curve for the new algorithm, Rayleigh with Bolzano booster (RB), is positive and larger than that of the original method; in other words, the linear trend indicates that subsequent computations will always be faster than with the previous method.
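
    The abstract does not spell out how the two methods are fused, so the following is only a minimal, hypothetical sketch of the two ingredients it names: power iteration with a Rayleigh-quotient estimate of the dominant eigenvalue, and Bolzano's interval-halving (bisection) used to refine that estimate. Function names, tolerances, and the example matrix are illustrative assumptions, not the paper's RB algorithm.

    # Hypothetical sketch, not the paper's exact RB algorithm.
    import numpy as np

    def rayleigh_power_iteration(A, tol=1e-10, max_iter=1000):
        """Estimate the dominant eigenvalue of a symmetric matrix A by power
        iteration, using the Rayleigh quotient as the stopping value."""
        x = np.random.default_rng(0).standard_normal(A.shape[0])
        lam = lam_old = 0.0
        for _ in range(max_iter):
            x = A @ x
            x /= np.linalg.norm(x)
            lam = x @ A @ x               # Rayleigh quotient (x is unit-norm)
            if abs(lam - lam_old) < tol:  # convergence of the dominant eigenvalue
                break
            lam_old = lam
        return lam, x

    def bolzano_bisection(f, a, b, tol=1e-10):
        """Bolzano's method: halve [a, b] until the sign change of f is localized.
        Assumes f(a) and f(b) have opposite signs."""
        fa = f(a)
        while b - a > tol:
            m = 0.5 * (a + b)
            if fa * f(m) <= 0:
                b = m
            else:
                a, fa = m, f(m)
        return 0.5 * (a + b)

    if __name__ == "__main__":
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        lam, _ = rayleigh_power_iteration(A)
        # Refine the estimate as a root of the characteristic polynomial
        # on an interval bracketing the power-iteration value.
        charpoly = lambda t: np.linalg.det(A - t * np.eye(A.shape[0]))
        print(lam, bolzano_bisection(charpoly, lam - 0.5, lam + 0.5))

    Here bisection refines the eigenvalue as a root of the characteristic polynomial on a bracket around the power-iteration estimate; the paper's actual RB algorithm may interleave the two steps differently.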

    $\ell_1$-regression with Heavy-tailed Distributions

    In this paper, we consider the problem of linear regression with heavy-tailed distributions. Different from previous studies that use the squared loss to measure the performance, we choose the absolute loss, which is capable of estimating the conditional median. To address the challenge that both the input and output could be heavy-tailed, we propose a truncated minimization problem, and demonstrate that it enjoys an $\widetilde{O}(\sqrt{d/n})$ excess risk, where $d$ is the dimensionality and $n$ is the number of samples. Compared with traditional work on $\ell_1$-regression, the main advantage of our result is that we achieve a high-probability risk bound without exponential moment conditions on the input and output. Furthermore, if the input is bounded, we show that the classical empirical risk minimization is competent for $\ell_1$-regression even when the output is heavy-tailed.
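
    As context for the absolute-loss objective, the sketch below fits a plain least-absolute-deviations ($\ell_1$) linear model by subgradient descent under heavy-tailed noise. It illustrates only the empirical risk minimization baseline mentioned in the last sentence of the abstract, not the paper's truncated minimization procedure; all names and parameter choices are assumptions.

    # Minimal l1-regression (least absolute deviations) sketch;
    # not the paper's truncated estimator.
    import numpy as np

    def l1_regression(X, y, lr=0.01, n_iter=5000):
        """Minimize the empirical absolute loss (1/n) * sum_i |y_i - x_i @ w|."""
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_iter):
            residual = y - X @ w
            # Subgradient of |r_i| with respect to w is -sign(r_i) * x_i.
            grad = -(X.T @ np.sign(residual)) / n
            w -= lr * grad
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 3))
        w_true = np.array([1.0, -2.0, 0.5])
        # Heavy-tailed output noise: Student's t with 2 degrees of freedom.
        y = X @ w_true + rng.standard_t(df=2, size=500)
        print(l1_regression(X, y))

    Because the absolute loss targets the conditional median, the fit is far less sensitive to the heavy-tailed noise above than a squared-loss fit would be.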

    Fast rates for general unbounded loss functions: From ERM to Generalized Bayes

    We present new excess risk bounds for general unbounded loss functions including log loss and squared loss, where the distribution of the losses may be heavy-tailed. The bounds hold for general estimators, but they are optimized when applied to η-generalized Bayesian, MDL, and empirical risk minimization estimators. In the case of log loss, the bounds imply convergence rates for generalized Bayesian inference under misspecification in terms of a generalization of the Hellinger metric as long as the learning rate η is set correctly. For general loss functions, our bounds rely on two separate conditions: the v-GRIP (generalized reversed information projection) conditions, which control the lower tail of the excess loss; and the newly introduced witness condition, which controls the upper tail. The parameter v in the v-GRIP conditions determines the achievable rate and is akin to the exponent in the Tsybakov margin condition and the Bernstein condition for bounded losses, which the v-GRIP conditions generalize; favorable v in combination with small model complexity leads to Õ(1/n) rates. The witness condition allows us to connect the excess risk to an “annealed” version thereof, by which we generalize several previous results connecting Hellinger and Rényi divergence to KL divergence.
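
    For reference, the η-generalized Bayesian (Gibbs) posterior that the learning rate η in the abstract refers to has the standard form below; the notation (prior π, loss ℓ, observations z_i) is ours rather than the paper's.

    \pi_{n,\eta}(\theta \mid z_1, \dots, z_n) \;\propto\; \pi(\theta)\, \exp\!\Bigl(-\eta \sum_{i=1}^{n} \ell(\theta, z_i)\Bigr)

    Taking ℓ to be the log loss with η = 1 recovers standard Bayesian inference; smaller η tempers the likelihood, which is how a correctly set learning rate guards against misspecification in this line of work.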