
    Inferences from prior-based loss functions

    Inferences that arise from loss functions determined by the prior are considered, and it is shown that these lead to limiting Bayes rules that are closely connected with likelihood. The procedures obtained via these loss functions are invariant under reparameterizations and are Bayesian unbiased or limits of Bayesian unbiased inferences. These inferences serve as well-supported alternatives to MAP-based inferences.

    Are analysts' loss functions asymmetric?

    Recent research by Gu and Wu (2003) and Basu and Markov (2004) suggests that the well-known optimism bias in analysts' earnings forecasts is attributable to analysts minimizing symmetric, linear loss functions when the distribution of forecast errors is skewed. An alternative explanation for forecast bias is that analysts have asymmetric loss functions. We test this alternative explanation. Theory predicts that if loss functions are asymmetric then forecast error bias depends on forecast error variance, but not necessarily on forecast error skewness. Our results confirm that the ex ante forecast error variance is a significant determinant of forecast error and that, after controlling for variance, the sign of the coefficient on forecast error skewness is opposite to that found in prior research. Our results are consistent with financial analysts having asymmetric loss functions. Further analysis reveals that forecast bias varies systematically across style portfolios formed on book-to-price and market capitalization. These firm characteristics capture systematic variation in forecast error variance and skewness. Within style portfolios, forecast error variance continues to play a dominant role in explaining forecast error.
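The mechanism this abstract tests can be illustrated with the classic piecewise-linear ("lin-lin") asymmetric loss, whose optimal point forecast is a quantile rather than the mean, so a biased forecast can be rational even when errors are symmetric. A minimal numerical sketch (the cost parameters and grid search are illustrative, not taken from the paper):

```python
import numpy as np

def linlin_loss(error, c_under=2.0, c_over=1.0):
    """Piecewise-linear asymmetric ('lin-lin') loss: under-forecasts
    (error > 0) cost c_under per unit, over-forecasts cost c_over."""
    return np.where(error >= 0, c_under * error, -c_over * error)

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, 100_000)  # stand-in predictive distribution

# Grid-search the point forecast minimizing average lin-lin loss.
grid = np.linspace(-1.0, 2.0, 301)
best_forecast = grid[np.argmin([linlin_loss(sample - f).mean() for f in grid])]

# Standard result: the minimizer is the c_under/(c_under + c_over) quantile,
# here the 2/3 quantile -- an upward-biased forecast despite symmetric errors.
theoretical = np.quantile(sample, 2.0 / 3.0)
print(best_forecast, theoretical)
```

The positive bias here comes purely from the loss asymmetry, not from skewness of the error distribution, which is the distinction the paper's tests exploit.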

    Efficient Optimization for Rank-based Loss Functions

    The accuracy of information retrieval systems is often measured using complex loss functions such as the average precision (AP) or the normalized discounted cumulative gain (NDCG). Given a set of positive and negative samples, the parameters of a retrieval system can be estimated by minimizing these loss functions. However, the non-differentiability and non-decomposability of these loss functions do not allow for simple gradient-based optimization algorithms. This issue is generally circumvented by either optimizing a structured hinge-loss upper bound to the loss function or by using asymptotic methods like the direct-loss minimization framework. Yet, the high computational complexity of loss-augmented inference, which is necessary for both frameworks, prohibits their use on large training data sets. To alleviate this deficiency, we present a novel quicksort-flavored algorithm for a large class of non-decomposable loss functions. We provide a complete characterization of the loss functions that are amenable to our algorithm, and show that it includes both AP- and NDCG-based loss functions. Furthermore, we prove that no comparison-based algorithm can asymptotically improve upon the computational complexity of our approach. We demonstrate the effectiveness of our approach in the context of optimizing the structured hinge-loss upper bound of the AP and NDCG losses when learning models for a variety of vision tasks. We show that our approach provides significantly better results than simpler decomposable loss functions, while requiring a comparable training time.
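Evaluating AP for a fixed ranking is straightforward; the difficulty the abstract describes arises only when optimizing it, because AP does not decompose over individual samples. A naive O(n log n) evaluation sketch (this is just the textbook AP definition, not the paper's quicksort-flavored loss-augmented inference):

```python
import numpy as np

def average_precision(scores, labels):
    """AP for binary labels under a real-valued ranking: the mean, over
    each positive, of the precision at that positive's rank."""
    order = np.argsort(-np.asarray(scores))      # sort by descending score
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)                     # positives seen up to rank k
    precisions = hits / np.arange(1, len(labels) + 1)
    return precisions[labels == 1].mean()

scores = [0.9, 0.8, 0.7, 0.6, 0.5]
labels = [1, 0, 1, 1, 0]
# Positives sit at ranks 1, 3, 4 -> precisions 1/1, 2/3, 3/4.
print(average_precision(scores, labels))
```

Note the non-decomposability: the contribution of each sample depends on the labels of everything ranked above it, which is why per-sample gradient surrogates fall short.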

    Focus Is All You Need: Loss Functions For Event-based Vision

    Event cameras are novel vision sensors that output pixel-level brightness changes ("events") instead of traditional video frames. These asynchronous sensors offer several advantages over traditional cameras, such as high temporal resolution, very high dynamic range, and no motion blur. To unlock the potential of such sensors, motion compensation methods have recently been proposed. We present a collection and taxonomy of twenty-two objective functions to analyze event alignment in motion compensation approaches (Fig. 1). We call them Focus Loss Functions since they have strong connections with functions used in traditional shape-from-focus applications. The proposed loss functions allow bringing mature computer vision tools to the realm of event cameras. We compare the accuracy and runtime performance of all loss functions on a publicly available dataset, and conclude that the variance and the gradient and Laplacian magnitudes are among the best loss functions. The applicability of the loss functions is shown on multiple tasks: rotational motion, depth, and optical flow estimation. The proposed focus loss functions unlock the outstanding properties of event cameras.
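The core idea can be sketched in one dimension: warp each event by a candidate motion, accumulate the warped events into an "image", and score its sharpness, e.g. with the variance, one of the losses the paper ranks highly. A toy illustration on synthetic events (the moving-edge model, noise level, and histogram settings are invented for the example and are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
true_velocity = 3.0
t = rng.uniform(0.0, 1.0, 5000)                            # event timestamps
x = 10.0 + true_velocity * t + rng.normal(0, 0.1, t.size)  # event positions

def focus_score(warp_velocity, bins=64):
    """Warp events back to t = 0, accumulate them into a 1-D 'image of
    warped events', and score sharpness by the variance of its pixels."""
    warped = x - warp_velocity * t
    image, _ = np.histogram(warped, bins=bins, range=(5.0, 15.0))
    return image.var()

# The correct motion aligns all events into a few pixels, maximizing variance.
candidates = np.linspace(0.0, 6.0, 121)
best = candidates[np.argmax([focus_score(w) for w in candidates])]
print(best)
```

When the candidate velocity is wrong, the events smear across many bins and the variance drops, so maximizing the focus score recovers the motion parameter.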

    Forecasting Nonlinear Functions of Returns Using LINEX Loss Functions

    This paper applies LINEX loss functions to forecasting nonlinear functions of variance. We derive the optimal one-step-ahead LINEX forecast for various volatility models using data transformations such as ln(y_t^2), where y_t is the return of the asset. Our results suggest that the LINEX loss function is particularly well-suited to many of these forecasting problems and can give better forecasts than conventional loss functions such as mean square error (MSE).
    Keywords: LINEX loss function, forecasting, volatility
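For context, the LINEX loss penalizes errors of one sign exponentially and the other only linearly, and for a Gaussian forecast distribution its optimal point forecast shifts away from the mean by a*sigma^2/2 (a standard result often attributed to Zellner, 1986). A sketch of that property (parameters illustrative; this is not the paper's volatility setup):

```python
import numpy as np

def linex_loss(error, a=0.5, b=1.0):
    """LINEX loss: b * (exp(a*e) - a*e - 1). For a > 0 it penalizes
    positive errors (under-prediction) exponentially, negative ones linearly."""
    return b * (np.exp(a * error) - a * error - 1.0)

rng = np.random.default_rng(2)
mu, sigma, a = 1.0, 2.0, 0.5
y = rng.normal(mu, sigma, 100_000)  # quantity to be forecast

# Grid-search the point forecast minimizing average LINEX loss.
grid = np.linspace(mu - 1.0, mu + 4.0, 501)
best = grid[np.argmin([linex_loss(y - f, a=a).mean() for f in grid])]

# For Gaussian y the optimum is mu + a*sigma^2/2: the forecast shifts
# away from the mean to hedge against the exponentially costly error sign.
print(best, mu + a * sigma**2 / 2)
```

This hedging away from the mean is what makes LINEX attractive for variance forecasts, where under- and over-prediction typically carry very different costs.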