
    Forecasting in dynamic factor models using Bayesian model averaging

    This paper considers the problem of forecasting in dynamic factor models using Bayesian model averaging. Theoretical justifications for averaging across models, as opposed to selecting a single model, are given. Practical methods for implementing Bayesian model averaging with factor models are described. These methods involve algorithms which simulate from the space defined by all possible models. We discuss how these simulation algorithms can also be used to select the model with the highest marginal likelihood (or highest value of an information criterion) in an efficient manner. We apply these methods to the problem of forecasting GDP and inflation using quarterly U.S. data on 162 time series. For both GDP and inflation, we find that the models which contain factors do out-forecast an AR(p), but only by a relatively small amount and only at short horizons. We attribute these findings to the presence of structural instability and the fact that lags of the dependent variable seem to contain most of the information relevant for forecasting. Relative to the small forecasting gains provided by including factors, the gains provided by using Bayesian model averaging over forecasting methods based on a single model are appreciable.
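
    The averaging step itself is straightforward once each candidate model's marginal likelihood is in hand. Below is a minimal sketch of that final combination step; the function name and toy numbers are illustrative, and the paper's simulation over the model space is not shown.

```python
import numpy as np

def bma_forecast(forecasts, log_marginal_likelihoods):
    """Combine per-model point forecasts with posterior model weights.

    Weights are proportional to each model's marginal likelihood,
    assuming equal prior probabilities across models.
    """
    logml = np.asarray(log_marginal_likelihoods, dtype=float)
    w = np.exp(logml - logml.max())  # subtract max for numerical stability
    w /= w.sum()
    return float(np.dot(w, np.asarray(forecasts, dtype=float)))

# Toy usage: three candidate models, their point forecasts and log-ML values.
print(bma_forecast([1.9, 2.1, 2.4], [-100.2, -99.7, -103.5]))
```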

    Experimental Design for Sensitivity Analysis, Optimization and Validation of Simulation Models

    This chapter gives a survey on the use of statistical designs for what-if analysis in simulation, including sensitivity analysis, optimization, and validation/verification. Sensitivity analysis is divided into two phases. The first phase is a pilot stage, which consists of screening or searching for the important factors among (say) hundreds of potentially important factors. A novel screening technique is presented, namely sequential bifurcation. The second phase uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as a metamodel or a response surface. Regression analysis gives better results when the simulation experiment is well designed, using either classical statistical designs (such as fractional factorials) or optimal designs (such as pioneered by Fedorov, Kiefer, and Wolfowitz). To optimize the simulated system, the analysts may apply Response Surface Methodology (RSM); RSM combines regression analysis, statistical designs, and steepest-ascent hill-climbing. To validate a simulation model, again regression analysis and statistical designs may be applied. Several numerical examples and case studies illustrate how statistical techniques can reduce the ad hoc character of simulation; that is, these statistical techniques can make simulation studies give more general results, in less time. Appendix 1 summarizes confidence intervals for expected values, proportions, and quantiles, in terminating and steady-state simulations. Appendix 2 gives details on four variance reduction techniques, namely common pseudorandom numbers, antithetic numbers, control variates or regression sampling, and importance sampling. Appendix 3 describes jackknifing, which may give robust confidence intervals.
    Keywords: least squares; distribution-free; non-parametric; stopping rule; run-length; Von Neumann; median; seed; likelihood ratio
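
    Of the variance reduction techniques summarized in Appendix 2, antithetic numbers are the simplest to illustrate. The sketch below is a minimal example, assuming a toy response that is monotone in the driving uniform numbers (the case in which antithetic pairing is guaranteed to reduce variance); note that each antithetic estimate costs two response evaluations.

```python
import numpy as np

rng = np.random.default_rng(42)

def response(u):
    # Toy simulation response, monotone in each driving uniform number.
    return np.exp(u).mean(axis=-1)

n, dim = 1000, 5
u = rng.random((n, dim))

crude = response(u)                                    # plain Monte Carlo
antithetic = 0.5 * (response(u) + response(1.0 - u))   # pair U with 1 - U

# Estimated variance of the sample-mean estimator under each scheme.
print("crude:     ", crude.var(ddof=1) / n)
print("antithetic:", antithetic.var(ddof=1) / n)
```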

    SLOPE - Adaptive variable selection via convex optimization

    We introduce a new estimator for the vector of coefficients $\beta$ in the linear model $y = X\beta + z$, where $X$ has dimensions $n \times p$ with $p$ possibly larger than $n$. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to $$\min_{b \in \mathbb{R}^p} \frac{1}{2}\Vert y - Xb \Vert_{\ell_2}^2 + \lambda_1 \vert b \vert_{(1)} + \lambda_2 \vert b \vert_{(2)} + \cdots + \lambda_p \vert b \vert_{(p)},$$ where $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0$ and $\vert b \vert_{(1)} \ge \vert b \vert_{(2)} \ge \cdots \ge \vert b \vert_{(p)}$ are the decreasing absolute values of the entries of $b$. This is a convex program and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical $\ell_1$ procedures such as the Lasso. Here, the regularizer is a sorted $\ell_1$ norm, which penalizes the regression coefficients according to their rank: the higher the rank (that is, the stronger the signal), the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300] procedure (BH), which compares more significant $p$-values with more stringent thresholds. One notable choice of the sequence $\{\lambda_i\}$ is given by the BH critical values $\lambda_{\mathrm{BH}}(i) = z(1 - i \cdot q/2p)$, where $q \in (0,1)$ and $z(\alpha)$ is the quantile of a standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with $\lambda_{\mathrm{BH}}$ provably controls FDR at level $q$. Moreover, it also appears to have appreciable inferential properties under more general designs $X$ while having substantial power, as demonstrated in a series of experiments running on both simulated and real data.
    Comment: Published at http://dx.doi.org/10.1214/15-AOAS842 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
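
    The $\lambda_{\mathrm{BH}}$ sequence and the sorted-$\ell_1$ penalty can be written down in a few lines; a full SLOPE solver additionally needs the proximal operator of the sorted $\ell_1$ norm, which is not sketched here. Function names below are illustrative.

```python
import numpy as np
from scipy.stats import norm

def bh_lambdas(p, q):
    """BH critical values lambda_BH(i) = z(1 - i*q/(2p)),
    where z(alpha) is the standard normal quantile."""
    i = np.arange(1, p + 1)
    return norm.isf(i * q / (2 * p))  # isf(a) = ppf(1 - a)

def sorted_l1(b, lam):
    """Sorted-l1 penalty: sum_i lam_i * |b|_(i), |b| sorted decreasingly."""
    return float(np.sum(lam * np.sort(np.abs(b))[::-1]))

lam = bh_lambdas(p=5, q=0.1)
print(lam)                                          # a decreasing sequence
print(sorted_l1([0.3, -2.0, 0.0, 1.1, -0.4], lam))
```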

    Proceedings of Abstracts Engineering and Computer Science Research Conference 2019

    © 2019 The Author(s). This is an open-access work distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. For further details please see https://creativecommons.org/licenses/by/4.0/. Note: the keynote "Fluorescence visualisation to evaluate effectiveness of personal protective equipment for infection control" is © 2019 Crown copyright and so is licensed under the Open Government Licence v3.0. Under this licence users are permitted to copy, publish, distribute and transmit the Information; adapt the Information; and exploit the Information commercially and non-commercially, for example by combining it with other Information or by including it in your own product or application. Where you do any of the above, you must acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
    This book is the record of abstracts submitted and accepted for presentation at the Inaugural Engineering and Computer Science Research Conference held 17th April 2019 at the University of Hertfordshire, Hatfield, UK. This conference is a local event aiming to bring together research students, staff, and eminent external guests to celebrate Engineering and Computer Science Research at the University of Hertfordshire. The ECS Research Conference aims to showcase the broad landscape of research taking place in the School of Engineering and Computer Science. The 2019 conference was articulated around three topical cross-disciplinary themes: Make and Preserve the Future; Connect the People and Cities; and Protect and Care.

    Regression Models and Experimental Designs: A Tutorial for Simulation Analysts

    This tutorial explains the basics of linear regression models, especially low-order polynomials, and the corresponding statistical designs, namely designs of resolution III, IV, V, and Central Composite Designs (CCDs). This tutorial assumes 'white noise', which means that the residuals of the fitted linear regression model are normally, independently, and identically distributed with zero mean. The tutorial gathers statistical results that are scattered throughout the literature on mathematical statistics, and presents these results in a form that is understandable to simulation analysts.
    Keywords: metamodels; fractional factorial designs; Plackett-Burman designs; factor interactions; validation; cross-validation
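
    As a concrete instance of the designs covered, the sketch below fits a second-order polynomial metamodel on a Central Composite Design for two factors in coded units. The simulation response is a hypothetical stand-in for a real simulation run.

```python
import numpy as np

# CCD for k = 2 factors (coded units): 2^2 factorial corners,
# axial points at +/- alpha, and a center point.
alpha = np.sqrt(2.0)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],               # factorial part
              [-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha],  # axial part
              [0, 0]])                                           # center point

def simulate(x1, x2):
    # Stand-in for the simulation model's input/output transformation.
    return 5 + 2 * x1 - x2 + 0.5 * x1 * x2 + 0.8 * x1 ** 2

y = simulate(X[:, 0], X[:, 1])

# Second-order polynomial regressors: 1, x1, x2, x1*x2, x1^2, x2^2.
Z = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(np.round(beta, 3))  # estimated metamodel coefficients
```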

    Analyzing Extreme Cases: How Quantile Regression can Enhance Our Ability to Identify Productivity Stars

    Recent research suggests that individual productivity may not be normally distributed and is best modeled by a power law, a form of heavy-tailed distribution in which extreme cases on the right side of the distribution affect the mean and skew the probability distribution. These extreme cases, commonly referred to as “star performers” or “productivity stars,” provide a disproportionately positive impact on organizations. Yet the field of industrial-organizational psychology has failed to uncover effective techniques to accurately identify them during selection. Limiting factors in the identification of star performers are the traditional methods (e.g., Pearson correlation, ordinary least squares regression) used to establish criterion-related validity and inform selection battery design (i.e., determine which assessments should be retained and how those assessments should be weighted). Pearson correlation and ordinary least squares regression do not perform well (i.e., do not provide accurate estimates) when data are highly skewed and contain outliers. Thus, the purpose of this dissertation was to investigate whether an alternative method, specifically the quantile regression model (QRM), outperforms traditional approaches during criterion-related validation and selection battery design. Across three unique samples, results suggest that although the QRM provides a much more detailed understanding of predictor-criterion relationships, the practical usefulness of the QRM in selection assessment battery design is similar to that of OLS regression.
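
    To see the contrast the dissertation investigates, the sketch below compares an OLS slope with a 90th-percentile quantile-regression slope on simulated heavy-tailed productivity data. All names and numbers are illustrative, and statsmodels stands in for the dissertation's own analyses.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
score = rng.normal(size=n)  # hypothetical selection-assessment score
# Lognormal noise gives the heavy right tail typical of star performers.
productivity = 1.0 + 0.5 * score + rng.lognormal(mean=0.0, sigma=1.0, size=n)
df = pd.DataFrame({"productivity": productivity, "score": score})

ols = smf.ols("productivity ~ score", df).fit()
q90 = smf.quantreg("productivity ~ score", df).fit(q=0.90)

print("OLS slope (conditional mean):", round(ols.params["score"], 3))
print("QR slope (90th percentile): ", round(q90.params["score"], 3))
```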

    Determinants of Spread and Creditworthiness for Emerging Market Sovereign Debt:A Panel Data Study

    This study uses a panel-data framework to identify the determinants of the spread over US Treasuries of emerging market sovereign issues, as well as of the creditworthiness of the issuers, where the latter is represented by the Institutional Investor's creditworthiness index. We use a sample of 16 emerging market economies, together with time series data for the period 1998 to 2002 when analysing the spread, and from 1987 to 2001 when analysing the creditworthiness. The results suggest that for both the spread and the creditworthiness, significant explanatory variables include the economic growth rate, the debt-to-GDP ratio, the reserves-to-GDP ratio, and the debt-to-exports ratio. In addition, the spread is also determined by the exports-to-GDP ratio and the debt-service-to-GDP ratio, while the creditworthiness is influenced by the inflation rate and a default dummy variable.
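
    The abstract does not spell out the exact panel estimator, so the sketch below uses one common choice, a within (country-demeaned) fixed-effects regression, on made-up data with two of the listed determinants.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
countries, years = 16, 5

# Hypothetical panel: spread (basis points) on two macro determinants.
df = pd.DataFrame({
    "country": np.repeat(np.arange(countries), years),
    "growth": rng.normal(3.0, 1.0, countries * years),
    "debt_gdp": rng.normal(50.0, 10.0, countries * years),
})
df["spread"] = (400 - 30 * df["growth"] + 5 * df["debt_gdp"]
                + rng.normal(0.0, 20.0, len(df)))

# Within transformation: demean every variable by country, then run OLS
# (no constant is needed after demeaning).
cols = ["spread", "growth", "debt_gdp"]
within = df.groupby("country")[cols].transform(lambda s: s - s.mean())
print(sm.OLS(within["spread"], within[["growth", "debt_gdp"]]).fit().params)
```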

    Heuristic model selection for leading indicators in Russia and Germany

    Business tendency survey indicators are widely recognized as a key instrument for business cycle forecasting. Their leading indicator property is assessed with regard to forecasting industrial production in Russia and Germany. For this purpose, vector autoregressive (VAR) models are specified and estimated to construct forecasts. As the potential number of lags included is large, we compare fully specified VAR models with subset models obtained using a Genetic Algorithm enabling 'holes' in multivariate lag structures. The problem is complicated by the fact that a structural break and seasonal variation of indicators have to be taken into account. The models allow for a comparison of the dynamic adjustment and the forecasting performance of the leading indicators for both countries, revealing marked differences between Russia and Germany.
    Keywords: leading indicators; business cycle forecasts; VAR; model selection; genetic algorithms
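
    A minimal sketch of the idea, scaled down from the paper's VAR setting to a single-equation autoregression: each chromosome is a bit vector over candidate lags ('holes' allowed), and fitness is the BIC of the restricted OLS fit. All implementation details are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_bic(y, lags):
    """OLS fit of an autoregression restricted to the given lags; returns BIC."""
    p = max(lags)
    Y = y[p:]
    X = np.column_stack([np.ones(len(Y))] +
                        [y[p - l:len(y) - l] for l in lags])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    n, k = len(Y), X.shape[1]
    return n * np.log(resid @ resid / n) + k * np.log(n)

def ga_select(y, max_lag=12, pop=30, gens=40, pmut=0.05):
    """Genetic search over lag subsets; 'holes' in the lag structure allowed."""
    P = rng.random((pop, max_lag)) < 0.3  # random initial chromosomes
    for _ in range(gens):
        fit = np.array([fit_bic(y, np.flatnonzero(c) + 1) if c.any() else np.inf
                        for c in P])
        elite = P[np.argsort(fit)[:pop // 2]]  # keep the better half
        # Uniform crossover between random elite parents, then bit-flip mutation.
        pa = elite[rng.integers(len(elite), size=pop - len(elite))]
        pb = elite[rng.integers(len(elite), size=pop - len(elite))]
        kids = np.where(rng.random(pa.shape) < 0.5, pa, pb)
        kids ^= rng.random(kids.shape) < pmut
        P = np.vstack([elite, kids])
    best = min((c for c in P if c.any()),
               key=lambda c: fit_bic(y, np.flatnonzero(c) + 1))
    return np.flatnonzero(best) + 1

# Toy quarterly-style series with true lags 1 and 4.
y = np.zeros(400)
eps = rng.normal(size=400)
for t in range(4, 400):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 4] + eps[t]
print(ga_select(y))  # frequently recovers lags {1, 4}
```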