60,030 research outputs found

    Bias-Variance Trade-offs Analysis Using Uniform CR Bound for Images

    Full text link
    We apply a uniform Cramer-Rao (CR) bound to study the bias-variance trade-offs in parameter estimation. The uniform CR bound is used to specify achievable and unachievable regions in the bias-variance trade-off plane. The applications considered are: (1) a two-dimensional single photon emission computed tomography (SPECT) system, and (2) one-dimensional edge localization.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/85969/1/Fessler131.pd
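
    The uniform CR bound lower-bounds the variance of any estimator whose bias-gradient norm is at most δ; sweeping δ traces the boundary between the achievable and unachievable regions of the trade-off plane. Below is a minimal scalar sketch of that boundary, assuming a hypothetical example of n i.i.d. Poisson(θ) measurements (so the Fisher information is F(θ) = n/θ); it is an illustration only, not the paper's SPECT setup.

        import numpy as np

        # Hypothetical scalar example: n i.i.d. Poisson(theta) observations,
        # so the Fisher information is F(theta) = n / theta.
        theta, n = 10.0, 100
        F = n / theta

        # Biased CR bound: var >= (1 + b'(theta))^2 / F, where b' is the bias gradient.
        # The uniform CR bound minimizes this over all bias gradients with |b'| <= delta,
        # giving the lower boundary of the achievable region in the (delta, variance) plane.
        delta = np.linspace(0.0, 1.0, 101)
        var_bound = np.maximum(1.0 - delta, 0.0) ** 2 / F

        for d, v in zip(delta[::20], var_bound[::20]):
            print(f"bias-gradient norm <= {d:.1f}  ->  variance >= {v:.4f}")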

    Uniform CR Bound: Implementation Issues and Applications

    Full text link
    The authors apply a uniform Cramer-Rao (CR) bound (A.O. Hero, 1992) to study the bias-variance trade-offs in single photon emission computed tomography (SPECT) image reconstruction. The uniform CR bound is used to specify achievable and unachievable regions in the bias-variance trade-off plane. The image reconstruction algorithms considered here are: 1) space-alternating generalized EM and 2) penalized weighted least-squares.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/85905/1/Fessler128.pd
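
    To see how a regularization parameter moves a penalized weighted least-squares estimator along the bias-variance plane, here is a hypothetical sketch on a toy linear Gaussian model (random system matrix, identity penalty, made-up noise level); it is not the paper's SPECT reconstruction.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy linear model y = A x + noise (not the SPECT system from the paper).
        A = rng.normal(size=(40, 10))
        x_true = rng.normal(size=10)
        noise_var = 0.5
        W = np.eye(40) / noise_var            # inverse noise covariance as weights
        R = np.eye(10)                        # identity roughness penalty (assumed)

        for beta in [0.0, 1.0, 10.0]:
            # PWLS estimator: xhat = argmin (y - Ax)' W (y - Ax) + beta * x' R x
            M = np.linalg.solve(A.T @ W @ A + beta * R, A.T @ W)   # linear reconstructor
            bias = M @ A @ x_true - x_true                          # mean error (bias)
            cov = noise_var * M @ M.T                               # estimator covariance
            print(f"beta={beta:5.1f}  ||bias||={np.linalg.norm(bias):.3f}  "
                  f"total variance={np.trace(cov):.3f}")

    Increasing beta shrinks the variance while growing the bias, which is exactly the trade-off the uniform CR bound benchmarks.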

    Constant Step Size Least-Mean-Square: Bias-Variance Trade-offs and Optimal Sampling Distributions

    Get PDF
    We consider the least-squares regression problem and provide a detailed asymptotic analysis of the performance of averaged constant-step-size stochastic gradient descent (a.k.a. least-mean-squares). In the strongly convex case, we provide an asymptotic expansion up to explicit exponentially decaying terms. Our analysis leads to new insights into stochastic approximation algorithms: (a) it gives a tighter bound on the allowed step-size; (b) the generalization error may be divided into a variance term which decays as O(1/n), independently of the step-size γ, and a bias term that decays as O(1/(γ²n²)); (c) when allowing non-uniform sampling, the choice of a good sampling density depends on whether the variance or the bias term dominates. In particular, when the variance term dominates, optimal sampling densities do not lead to much gain, while when the bias term dominates, we can choose larger step-sizes that lead to significant improvements.
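
    A minimal sketch of averaged constant-step-size SGD (least-mean-squares with Polyak-Ruppert averaging) on a synthetic least-squares problem; the dimensions, noise level and step-size below are assumptions chosen for illustration, not the constants analyzed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic least-squares problem (illustrative; sizes and noise are assumptions).
        d, n = 10, 100_000
        w_star = rng.normal(size=d)
        X = rng.normal(size=(n, d))
        y = X @ w_star + 0.5 * rng.normal(size=n)

        gamma = 0.01                      # constant step-size
        w = np.zeros(d)
        w_bar = np.zeros(d)               # Polyak-Ruppert average of the iterates

        for t in range(n):
            x_t, y_t = X[t], y[t]
            w -= gamma * (x_t @ w - y_t) * x_t        # LMS / SGD step on one sample
            w_bar += (w - w_bar) / (t + 1)            # running average of the iterates

        print("error of averaged iterate:", np.linalg.norm(w_bar - w_star))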

    Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression

    Get PDF
    We consider the optimization of a quadratic objective function whose gradients are only accessible through a stochastic oracle that returns the gradient at any given point plus a zero-mean finite-variance random error. We present the first algorithm that achieves jointly the optimal prediction error rates for least-squares regression, both in terms of forgetting of initial conditions in O(1/n²), and in terms of dependence on the noise and dimension d of the problem, as O(d/n). Our new algorithm is based on averaged accelerated regularized gradient descent, and may also be analyzed through finer assumptions on initial conditions and the Hessian matrix, leading to dimension-free quantities that may still be small while the "optimal" terms above are large. In order to characterize the tightness of these new bounds, we consider an application to non-parametric regression and use the known lower bounds on the statistical performance (without computational limits), which happen to match our bounds obtained from a single pass on the data and thus show optimality of our algorithm in a wide variety of particular trade-offs between bias and variance.
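
    For intuition on why acceleration improves the initial-condition ("bias") term from O(1/n) to O(1/n²), here is a minimal deterministic sketch comparing plain and Nesterov-accelerated gradient descent on an ill-conditioned quadratic; it illustrates only the forgetting of initial conditions, not the paper's averaged accelerated regularized stochastic algorithm, and all constants are assumptions.

        import numpy as np

        # Ill-conditioned quadratic f(w) = 0.5 * w' H w (noiseless), to illustrate how fast
        # initial conditions are forgotten: plain gradient descent forgets them as O(1/n),
        # Nesterov-accelerated gradient descent as O(1/n^2).
        d = 50
        H = np.diag(1.0 / np.arange(1, d + 1) ** 2)     # eigenvalues 1, 1/4, 1/9, ...
        w0 = np.ones(d)
        gamma = 1.0                                      # step-size 1/L with L = 1

        def f(w):
            return 0.5 * w @ H @ w

        # Plain gradient descent.
        w = w0.copy()
        for _ in range(1000):
            w -= gamma * H @ w

        # Nesterov acceleration with the standard (t - 1)/(t + 2) momentum schedule.
        x, x_prev = w0.copy(), w0.copy()
        for t in range(1, 1001):
            eta = x + (t - 1) / (t + 2) * (x - x_prev)
            x_prev = x
            x = eta - gamma * H @ eta

        print(f"gradient descent:  f = {f(w):.3e}")
        print(f"Nesterov:          f = {f(x):.3e}")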

    Optimizing the noise versus bias trade-off for Illumina whole genome expression BeadChips

    Get PDF
    Five strategies for pre-processing intensities from Illumina expression BeadChips are assessed from the point of view of precision and bias. The strategies include a popular variance stabilizing transformation and model-based background corrections that either use or ignore the control probes. Four calibration data sets are used to evaluate precision, bias and false discovery rate (FDR). The original algorithms are shown to have operating characteristics that are not easily comparable. Some tend to minimize noise while others minimize bias. Each original algorithm is shown to have an innate intensity offset, by which unlogged intensities are bounded away from zero, and the size of this offset determines its position on the noise–bias spectrum. By adding extra offsets, a continuum of related algorithms with different noise–bias trade-offs is generated, allowing direct comparison of the performance of the strategies on equivalent terms. Adding a positive offset is shown to decrease the FDR of each original algorithm. The potential of each strategy to generate an algorithm with an optimal noise–bias trade-off is explored by finding the offset that minimizes its FDR. The use of control probes as part of the background correction and normalization strategy is shown to achieve the lowest FDR for a given bias.
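
    A toy sketch of the offset idea: adding a positive offset before log-transforming damps the noise of low-intensity probes but compresses (biases) log-fold-changes toward zero. The probe signal, noise level and offsets below are invented for illustration and are not taken from the calibration data sets.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustration (not the paper's data): a low-expressed probe with true signal 30,
        # measured with additive background noise of sd 20 after background correction.
        true_signal = 30.0
        intensities = true_signal + rng.normal(0.0, 20.0, size=10_000)
        intensities = np.clip(intensities, 1.0, None)   # keep unlogged intensities positive

        true_fold_change = 2.0                          # a hypothetical 2-fold change
        for offset in [0.0, 16.0, 64.0]:
            logged = np.log2(intensities + offset)
            # Noise: spread of the log-intensities.  Bias: the observed log-fold-change
            # is compressed toward zero as the offset grows.
            observed_lfc = (np.log2(true_signal * true_fold_change + offset)
                            - np.log2(true_signal + offset))
            print(f"offset={offset:5.1f}  log2 sd={logged.std():.3f}  "
                  f"observed log-FC={observed_lfc:.3f} (true {np.log2(true_fold_change):.3f})")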

    Black-Litterman, Bayesian Shrinkage, and Factor Models in Portfolio Selection: You Can Have It All

    Full text link
    Mean-variance analysis is widely used in portfolio management to identify the best portfolio that makes an optimal trade-off between expected return and volatility. Yet, this method has its limitations, notably its vulnerability to estimation errors and its reliance on historical data. While shrinkage estimators and factor models have been introduced to improve estimation accuracy through bias-variance trade-offs, and the Black-Litterman model has been developed to integrate investor opinions, a unified framework combining all three approaches has been lacking. Our study debuts a Bayesian blueprint that fuses shrinkage estimation with view inclusion, conceptualizing both as Bayesian updates. This model is then applied within the context of Fama-French factor models, thereby integrating the advantages of each methodology. Finally, through a comprehensive empirical study in the US equity market spanning a decade, we show that the model outperforms both the simple 1/N portfolio and the optimal portfolios based on sample estimators.
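
    A schematic Black-Litterman-style update, shown only to illustrate how a prior mean and an investor view can be blended as precision-weighted Bayesian updates before computing mean-variance weights; the covariance, prior returns, view, τ and risk aversion below are invented numbers, and this is the generic textbook form rather than the paper's combined shrinkage and factor-model framework.

        import numpy as np

        # Made-up three-asset example (not the paper's US equity data).
        Sigma = np.array([[0.04, 0.01, 0.00],
                          [0.01, 0.09, 0.02],
                          [0.00, 0.02, 0.16]])          # asset covariance
        pi = np.array([0.05, 0.07, 0.10])               # prior (e.g. equilibrium) mean returns
        tau = 0.05                                      # prior uncertainty scaling

        # One investor view: asset 2 will outperform asset 1 by 4%, with view variance 0.01.
        P = np.array([[-1.0, 1.0, 0.0]])
        q = np.array([0.04])
        Omega = np.array([[0.01]])

        # Posterior mean: a precision-weighted (Bayesian) blend of the prior and the view.
        A = np.linalg.inv(tau * Sigma) + P.T @ np.linalg.inv(Omega) @ P
        b = np.linalg.inv(tau * Sigma) @ pi + P.T @ np.linalg.inv(Omega) @ q
        mu_post = np.linalg.solve(A, b)

        # Unconstrained mean-variance weights w = (risk_aversion * Sigma)^{-1} mu_post.
        w = np.linalg.solve(3.0 * Sigma, mu_post)
        print("posterior mean returns:", np.round(mu_post, 4))
        print("mean-variance weights: ", np.round(w, 3))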