
    The economic and statistical value of forecast combinations under regime switching: an application to predictable U.S. returns

    We address one interesting case, the predictability of excess U.S. asset returns from macroeconomic factors within a flexible regime switching VAR framework, in which the presence of regimes may lead to superior forecasting performance from forecast combinations. After documenting that forecast combinations provide statistically significant gains in prediction accuracy, we show that combinations may also substantially improve portfolio selection. We find that the best performing forecast combinations are those that either avoid estimating the pooling weights or minimize the need for such estimation; in practice, the best performing combination schemes are based on the principle of relative past forecasting performance. The economic gains from combining forecasts in portfolio management applications appear to be large, stable over time, and robust to the introduction of realistic transaction costs.
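    A minimal sketch of one combination scheme of the kind found to work well here, weighting each model by the inverse of its recent mean squared forecast error; the rolling window length and the inverse-MSE rule are illustrative assumptions rather than the paper's exact specification.

        import numpy as np

        def inverse_mse_weights(forecasts, actuals, window=60):
            """Combine model forecasts with weights proportional to the inverse of
            each model's mean squared error over the most recent `window` periods.

            forecasts : (T, K) array of K models' one-step-ahead forecasts
            actuals   : (T,) array of realized values
            Returns the (K,) weight vector for the next combined forecast.
            """
            errors = forecasts[-window:] - actuals[-window:, None]
            mse = np.mean(errors ** 2, axis=0)
            w = 1.0 / mse
            return w / w.sum()

        # Combined forecast for the next period from the latest individual forecasts:
        # combined = next_forecasts @ inverse_mse_weights(past_forecasts, past_actuals)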

    Structural Nested Models and G-estimation: The Partially Realized Promise

    Structural nested models (SNMs) and the associated method of G-estimation were first proposed by James Robins over two decades ago as approaches to modeling and estimating the joint effects of a sequence of treatments or exposures. The models and estimation methods have since been extended to deal with a broader range of problems, and have considerable advantages over the other methods developed for estimating such joint effects. Despite these advantages, the application of these methods in applied research has been relatively infrequent; we view this as unfortunate. To remedy this, we provide an overview of the models and estimation methods as developed, primarily by Robins, over the years. We provide insight into their advantages over other methods, and consider some possible reasons for the failure of the methods to be more broadly adopted, as well as possible remedies. Finally, we consider several extensions of the standard models and estimation methods. Comment: Published at http://dx.doi.org/10.1214/14-STS493 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
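    For intuition, here is a hedged sketch of g-estimation in the simplest possible setting: a single binary treatment A, measured confounders L, and a one-parameter structural nested mean model E[Y(a) - Y(0) | A = a, L] = psi * a. The grid search and the logistic treatment model are illustrative choices of this sketch; the full methodology for sequences of treatments is considerably richer.

        import numpy as np
        import statsmodels.api as sm

        def g_estimate(Y, A, L, psi_grid=np.linspace(-5, 5, 501)):
            """Grid-search g-estimation for a one-parameter structural nested mean
            model with a single binary treatment A and confounders L.

            For each candidate psi, form H(psi) = Y - psi*A and fit a logistic model
            for A on (L, H(psi)); under the model and no unmeasured confounding, the
            true psi makes the coefficient on H(psi) zero, so we return the candidate
            whose coefficient is closest to zero.
            """
            best_psi, best_score = None, np.inf
            X_L = sm.add_constant(L)
            for psi in psi_grid:
                H = Y - psi * A
                X = np.column_stack([X_L, H])
                fit = sm.Logit(A, X).fit(disp=0)
                score = abs(fit.params[-1])   # |coefficient on H(psi)|
                if score < best_score:
                    best_psi, best_score = psi, score
            return best_psi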

    Data-driven prognostics based on evolving fuzzy degradation models for power semiconductor devices

    The increasing application of power converter systems based on semiconductor devices such as Insulated-Gate Bipolar Transistors (IGBTs) has motivated the investigation of strategies for their prognostics and health management. However, physics-based degradation modelling for semiconductors is usually complex and depends on uncertain parameters, which motivates the use of data-driven approaches. This paper addresses the problem of data-driven prognostics of IGBTs based on evolving fuzzy models learned from degradation data streams. The model depends on two classes of degradation features: one group of features that is very sensitive to the degradation stages is used as the premise variable of the fuzzy model, and another group that provides good trendability and monotonicity is used for the auto-regressive consequent of the fuzzy model for degradation prediction. This strategy yields interpretable degradation models that are refined as more degradation data are obtained from the Unit Under Test (UUT) in real time. Furthermore, the fuzzy-based Remaining Useful Life (RUL) prediction is equipped with an uncertainty quantification mechanism to better aid decision-makers. The proposed approach is then applied to RUL prediction on an accelerated aging IGBT dataset from the NASA Ames Research Center.
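    To make the model form concrete, the sketch below shows one-step prediction with a small Takagi-Sugeno fuzzy model of the kind described: Gaussian memberships on a stage-sensitive premise feature and auto-regressive linear consequents on a trendable feature. The rule parameters (centers, sigmas, ar_coeffs, intercepts) are assumed given; the evolving, online learning of these parameters from the data stream, and the uncertainty quantification, are not shown.

        import numpy as np

        def ts_fuzzy_predict(premise_x, lagged_y, centers, sigmas, ar_coeffs, intercepts):
            """One-step degradation prediction with a Takagi-Sugeno fuzzy model.

            premise_x : current value of the degradation-stage-sensitive feature
                        (premise variable)
            lagged_y  : recent values of the trendable feature, newest last, shape (p,)
            Each rule r has a Gaussian membership on premise_x and an auto-regressive
            consequent  y_hat_r = intercepts[r] + ar_coeffs[r] @ lagged_y.
            The output is the normalized membership-weighted average of rule outputs.
            """
            memberships = np.exp(-0.5 * ((premise_x - centers) / sigmas) ** 2)
            weights = memberships / memberships.sum()
            rule_outputs = intercepts + ar_coeffs @ lagged_y
            return float(weights @ rule_outputs)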

    Overviews of Optimization Techniques for Geometric Estimation

    We summarize techniques for optimal geometric estimation from noisy observations for computer vision applications. We first discuss the interpretation of optimality and point out that geometric estimation differs from standard statistical estimation. We also describe our noise modeling and a theoretical accuracy limit called the KCR lower bound. Then, we formulate estimation techniques based on minimization of a given cost function: least squares (LS), maximum likelihood (ML), which includes reprojection error minimization as a special case, and Sampson error minimization. We describe bundle adjustment and the FNS scheme for numerically solving them, and the hyperaccurate correction that improves the accuracy of ML. Next, we formulate estimation techniques not based on minimization of any cost function: iterative reweight, renormalization, and hyper-renormalization. Finally, we show numerical examples to demonstrate that hyper-renormalization has higher accuracy than ML, which has widely been regarded as the most accurate method of all. We conclude that hyper-renormalization is robust to noise and is currently the best method.
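    As a reference point for the cost-based methods mentioned, the Sampson error for a constraint of the form xi(x_alpha)^T theta = 0 is commonly written (this sketch assumes Kanatani's notation, which the abstract does not spell out) as

        J(\theta) \;=\; \sum_{\alpha=1}^{N}
          \frac{(\xi_\alpha^\top \theta)^2}{\theta^\top V_0[\xi_\alpha]\,\theta},
        \qquad \|\theta\| = 1,

    where xi_alpha collects the data terms of the alpha-th observation and V_0[xi_alpha] is its normalized covariance propagated from the image noise; FNS is one numerical scheme for minimizing a cost of this form.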

    Censored Quantile Regression Redux

    Quantile regression for censored survival (duration) data offers a more flexible alternative to the Cox proportional hazard model for some applications. We describe three estimation methods for such applications that have been recently incorporated into the R package quantreg: the Powell (1986) estimator for fixed censoring, and two methods for random censoring, one introduced by Portnoy (2003), and the other by Peng and Huang (2008). The Portnoy and Peng-Huang estimators can be viewed, respectively, as generalizations to regression of the Kaplan-Meier and Nelson-Aalen estimators of univariate quantiles for censored observations. Some asymptotic and simulation comparisons are made to highlight advantages and disadvantages of the three methods.
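    For the fixed-censoring case, the Powell (1986) estimator can be written as a quantile-regression problem with the linear predictor censored inside the check-function objective; the version below is a sketch for observations censored from below at known points c_i.

        \hat{\beta}(\tau) \;=\; \arg\min_{\beta}\;
          \sum_{i=1}^{n} \rho_\tau\!\left( y_i - \max\{c_i,\; x_i^\top \beta\} \right),
        \qquad \rho_\tau(u) = u\,\bigl(\tau - \mathbf{1}\{u < 0\}\bigr).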

    Incremental online learning in high dimensions

    this article, however, is problematic, as it requires a careful selection of initial ridge regression parameters to stabilize the highly rank-deficient full covariance matrix of the input data, and it is easy to create too much bias or too little numerical stabilization initially, which can trap the local distance metric adaptation in local minima. While the LWPR algorithm takes only about a factor of 10 longer for the 20D experiment in comparison to the 2D experiment, RFWR requires a 1000-fold increase in computation time, thus rendering that algorithm unsuitable for high-dimensional regression. In order to compare LWPR's results to other popular regression methods, we evaluated the 2D, 10D, and 20D cross data sets with Gaussian process (GP) regression and support vector (SVM) regression in addition to our LWPR method. It should be noted that neither the SVM nor the GP method is incremental, although they can be considered state-of-the-art for batch regression with relatively small numbers of training data and reasonable input dimensionality; their computational complexity is prohibitively high for real-time applications. The GP algorithm (Gibbs & MacKay, 1997) used a generic covariance function and optimized over the hyperparameters. The SVM regression was performed using a standard available package (Saunders et al., 1998) and optimized over kernel choices. Figure 6 compares the performance of LWPR and Gaussian processes for the above-mentioned data sets using 100, 300, and 500 training data points.
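    The sketch below is a modern stand-in for such a batch comparison, using scikit-learn's GP and SVM regressors rather than the original implementations of Gibbs and MacKay (1997) and Saunders et al. (1998); it only illustrates the kind of baseline fit against which the incremental learner is compared, with kernel and hyperparameter choices that are assumptions of this sketch.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel
        from sklearn.svm import SVR

        def batch_baselines(X_train, y_train, X_test):
            """Fit GP and SVM regression baselines on a (small) batch training set
            and return their test predictions, for comparison against an
            incremental learner such as LWPR."""
            gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                          normalize_y=True)
            gp.fit(X_train, y_train)

            svr = SVR(kernel="rbf", C=10.0, epsilon=0.01)
            svr.fit(X_train, y_train)

            return gp.predict(X_test), svr.predict(X_test)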

    Incremental Online Learning in High Dimensions

    Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models.
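    A much-simplified sketch of the locally weighted regression idea underlying LWPR is given below: a single Gaussian-weighted, ridge-regularized linear model fit around a query point. The bandwidth and ridge values are arbitrary illustrative choices, and the sketch omits what makes LWPR itself scale to high dimensions, namely many local models learned incrementally with partial-least-squares projections and distance-metric adaptation.

        import numpy as np

        def lwr_predict(query, X, y, bandwidth=1.0, ridge=1e-6):
            """Predict at `query` with one locally weighted linear model: weight the
            training points by a Gaussian kernel centred at the query and solve a
            ridge-regularized weighted least-squares problem."""
            d2 = np.sum((X - query) ** 2, axis=1)
            w = np.exp(-0.5 * d2 / bandwidth ** 2)
            Xb = np.column_stack([X, np.ones(len(X))])      # add bias term
            W = np.diag(w)
            A = Xb.T @ W @ Xb + ridge * np.eye(Xb.shape[1])
            b = Xb.T @ W @ y
            beta = np.linalg.solve(A, b)
            return float(np.append(query, 1.0) @ beta)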

    A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning

    We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to obtain a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments (active user modelling with preferences, and hierarchical reinforcement learning), and a discussion of the pros and cons of Bayesian optimization based on our experiences.
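    A minimal sketch of the loop the tutorial describes, assuming a GP prior with a Matern kernel and the expected-improvement acquisition; the use of scikit-learn and of random candidate search to maximize the acquisition are implementation choices of this sketch, not prescriptions of the paper.

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        def expected_improvement(X_cand, gp, y_best):
            """Expected improvement acquisition for maximization."""
            mu, sigma = gp.predict(X_cand, return_std=True)
            sigma = np.maximum(sigma, 1e-9)
            z = (mu - y_best) / sigma
            return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

        def bayes_opt(f, bounds, n_init=5, n_iter=25, n_cand=2000, rng=None):
            """Maximize an expensive black-box function f over a box.

            bounds : (d, 2) array of [low, high] per dimension.
            The prior over f is a GP with a Matern kernel; each iteration refits the
            GP to all evaluations so far and evaluates f at the random candidate with
            the highest expected improvement."""
            rng = np.random.default_rng(rng)
            bounds = np.asarray(bounds, dtype=float)
            d = len(bounds)
            X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, d))
            y = np.array([f(x) for x in X])
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
            for _ in range(n_iter):
                gp.fit(X, y)
                X_cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_cand, d))
                ei = expected_improvement(X_cand, gp, y.max())
                x_next = X_cand[np.argmax(ei)]
                X = np.vstack([X, x_next])
                y = np.append(y, f(x_next))
            return X[np.argmax(y)], y.max()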