
    Forecasting substantial data revisions in the presence of model uncertainty

    A recent revision to the preliminary measurement of GDP(E) growth for 2003Q2 caused considerable press attention, provoked a public enquiry and prompted a number of reforms to UK statistical reporting procedures. In this article, we compute the probability of 'substantial revisions' that are greater (in absolute value) than the controversial 2003 revision. The predictive densities are derived from Bayesian model averaging over a wide set of forecasting models, including linear, structural break and regime-switching models with and without heteroscedasticity. Ignoring the nonlinearities and model uncertainty yields misleading predictive densities and obscures recent improvements in the quality of preliminary UK macroeconomic measurements.
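    As a rough illustration of the Bayesian model averaging step (not the authors' model set or estimates), the predictive probability of a 'substantial revision' can be computed by weighting each model's tail probability by its posterior model probability. All weights, means, standard deviations and the threshold in the sketch below are hypothetical.

```python
from scipy import stats

# Hypothetical posterior model probabilities and Gaussian predictive densities
# for the forthcoming revision to preliminary GDP growth (illustrative only).
models = {
    "linear":        {"weight": 0.40, "mean": 0.05, "sd": 0.20},
    "struct_break":  {"weight": 0.35, "mean": 0.10, "sd": 0.30},
    "regime_switch": {"weight": 0.25, "mean": 0.00, "sd": 0.45},
}

threshold = 0.7  # |revision| beyond which it counts as "substantial" (made up)

# BMA predictive probability: weight each model's two-sided tail probability
# P(|revision| > threshold) by its posterior model probability and sum.
prob_substantial = sum(
    m["weight"] * (stats.norm.sf(threshold, m["mean"], m["sd"])
                   + stats.norm.cdf(-threshold, m["mean"], m["sd"]))
    for m in models.values()
)
print(f"BMA probability of a substantial revision: {prob_substantial:.3f}")
```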

    Methodological and empirical challenges in modelling residential location choices

    The modelling of residential locations is a key element in land use and transport planning. There are significant empirical and methodological challenges inherent in such modelling, however, despite recent advances both in the availability of spatial datasets and in computational and choice modelling techniques. One of the most important of these challenges concerns spatial aggregation. The housing market is characterised by the fact that it offers spatially and functionally heterogeneous products; as a result, if residential alternatives are represented as aggregated spatial units (as in conventional residential location models), the variability of dwelling attributes is lost, which may limit the predictive ability and policy sensitivity of the model.

    This thesis presents a modelling framework for residential location choice that addresses three key challenges: (i) the development of models at the dwelling-unit level, (ii) the treatment of spatial structure effects in such dwelling-unit level models, and (iii) problems associated with estimation in such modelling frameworks in the absence of disaggregated dwelling unit supply data. The proposed framework is applied to the residential location choice context in London.

    Another important challenge in the modelling of residential locations is the choice set formation problem. Most models of residential location choice have been developed on the assumption that households consider all available alternatives when making location choices. Due to the high search costs associated with the housing market, however, and the limited capacity of households to process information, the validity of this assumption has been an ongoing debate among researchers. There have been some attempts in the literature to incorporate the cognitive capacities of households within discrete choice models of residential location: for instance, by modelling households’ choice sets exogenously based on simplifying assumptions regarding their spatial search behaviour (e.g., an anchor-based search strategy) and their characteristics. By undertaking an empirical comparison of alternative models within the context of residential location choice in the Greater London area, this thesis investigates the feasibility and practicality of applying deterministic choice set formation approaches to capture the underlying search process of households. The thesis also investigates the uncertainty of choice sets in residential location choice modelling and proposes a simplified probabilistic choice set formation approach to model choice sets and choices simultaneously.

    The dwelling-level modelling framework proposed in this research is practice-ready and can be used to estimate residential location choice models at the level of dwelling units without requiring independent and disaggregated dwelling supply data. The empirical comparison of alternative exogenous choice set formation approaches provides a guideline for modellers and land use planners to avoid inappropriate choice set formation approaches in practice. Finally, the proposed simplified choice set formation model can be applied to model the behaviour of households in online real estate environments.
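    As a minimal sketch of a dwelling-level choice model with a simplified probabilistic choice set of the kind discussed above (attribute names, coefficients and the consideration function are all hypothetical, not the thesis specification):

```python
import numpy as np

# Hypothetical dwelling-level alternatives: [price, size_m2, dist_to_work_km]
X = np.array([
    [350.0, 55.0, 4.0],
    [280.0, 48.0, 9.0],
    [420.0, 70.0, 2.5],
    [300.0, 60.0, 12.0],
])
beta = np.array([-0.01, 0.05, -0.15])  # illustrative taste coefficients

# Deterministic utilities of a standard multinomial logit.
V = X @ beta

# Simplified probabilistic choice set: each dwelling is considered with some
# probability (here a made-up function of search distance); exp(V) is weighted
# by the consideration probability, in the spirit of implicit availability /
# perception logit formulations.
consider = 1.0 / (1.0 + np.exp(0.3 * (X[:, 2] - 8.0)))

expV = consider * np.exp(V)
P = expV / expV.sum()
for i, p in enumerate(P):
    print(f"dwelling {i}: choice probability {p:.3f}")
```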

    Survey Expectations

    This paper focuses on survey expectations and discusses their uses for testing and modeling of expectations. Alternative models of expectations formation are reviewed and the importance of allowing for heterogeneity of expectations is emphasized. A weak form of the rational expectations hypothesis which focuses on average expectations rather than individual expectations is advanced. Other models of expectations formation, such as the adaptive expectations hypothesis, are briefly discussed. Testable implications of rational and extrapolative models of expectations are reviewed and the importance of the loss function for the interpretation of the test results is discussed. The paper then provides an account of the various surveys of expectations, reviews alternative methods of quantifying the qualitative surveys, and discusses the use of aggregate and individual survey responses in the analysis of expectations and for forecasting.
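    One classic way of quantifying qualitative survey responses is the Carlson–Parkin probability approach; below is a minimal sketch under a normality assumption, with made-up response shares and an illustrative indifference threshold (the paper surveys such methods, but these numbers are not from it).

```python
from scipy.stats import norm

def carlson_parkin(up_share, down_share, delta=0.5):
    """Probability method for quantifying qualitative survey balances.

    up_share / down_share: fractions of respondents expecting a rise / fall.
    delta: indifference threshold, the smallest change a respondent would
    report as "up" or "down" (illustrative value).
    Returns the implied mean expected change under normality.
    """
    a = norm.ppf(1.0 - up_share)  # standardised cut-off implied by the "up" share
    b = norm.ppf(down_share)      # standardised cut-off implied by the "down" share
    return -delta * (a + b) / (a - b)

# Hypothetical survey wave: 45% expect a rise, 20% expect a fall.
print(f"Implied mean expected change: {carlson_parkin(0.45, 0.20):.2f}")
```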

    Abandon Statistical Significance

    We discuss problems the null hypothesis significance testing (NHST) paradigm poses for replication and more broadly in the biomedical and social sciences as well as how these problems remain unresolved by proposals involving modified p-value thresholds, confidence intervals, and Bayes factors. We then discuss our own proposal, which is to abandon statistical significance. We recommend dropping the NHST paradigm--and the p-value thresholds intrinsic to it--as the default statistical paradigm for research, publication, and discovery in the biomedical and social sciences. Specifically, we propose that the p-value be demoted from its threshold screening role and instead, treated continuously, be considered along with currently subordinate factors (e.g., related prior evidence, plausibility of mechanism, study design and data quality, real world costs and benefits, novelty of finding, and other factors that vary by research domain) as just one among many pieces of evidence. We have no desire to "ban" p-values or other purely statistical measures. Rather, we believe that such measures should not be thresholded and that, thresholded or not, they should not take priority over the currently subordinate factors. We also argue that it seldom makes sense to calibrate evidence as a function of p-values or other purely statistical measures. We offer recommendations for how our proposal can be implemented in the scientific publication process as well as in statistical decision making more broadly.
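    A toy numerical illustration (not from the paper) of why thresholding can mislead: two studies with nearly identical standardized effects can land on opposite sides of the 0.05 line and receive opposite verdicts.

```python
from scipy.stats import norm

# Two hypothetical studies with almost identical standardised effect estimates.
z_values = {"study A": 1.97, "study B": 1.94}

for name, z in z_values.items():
    p = 2 * norm.sf(z)  # two-sided p-value
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{name}: z = {z:.2f}, p = {p:.3f} -> {verdict} at the 0.05 threshold")
# The dichotomy flips on a trivial difference in evidence, which is the kind of
# threshold artefact the authors argue against.
```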

    Evaluating internal credit rating systems depending on bank size

    Under a new Basel capital accord, bank regulators might use quantitative measures when evaluating the eligibility of internal credit rating systems for the internal ratings based approach. Based on data from Deutsche Bundesbank and using a simulation approach, we find that it is possible to identify strongly inferior rating systems out-of-time based on statistics that measure either the quality of ranking borrowers from good to bad, or the quality of individual default probability forecasts. Banks do not significantly improve system quality if they use credit scores instead of ratings, or logistic regression default probability estimates instead of historical data. Banks that are not able to discriminate between high- and low-risk borrowers increase their average capital requirements due to the concavity of the capital requirements function.
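    As a rough illustration of the two kinds of quantitative measures referred to above, the sketch below computes an accuracy ratio (quality of ranking borrowers from good to bad) and a Brier score (quality of individual default probability forecasts) on simulated data; the portfolio, rating noise and sample size are made up and unrelated to the Bundesbank data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated portfolio: true default probabilities, realised defaults, and the
# rating system's predicted PDs (true PDs distorted by multiplicative noise).
n = 5000
true_pd = rng.uniform(0.005, 0.10, size=n)
defaults = rng.random(n) < true_pd
pred_pd = np.clip(true_pd * rng.lognormal(0.0, 0.3, size=n), 1e-4, 1.0)

def auc(scores, events):
    """Area under the ROC curve via the rank-sum formula: the probability
    that a randomly chosen defaulter is ranked riskier than a non-defaulter."""
    order = np.argsort(scores)
    ranks = np.empty_like(scores)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = events.sum()
    neg = len(scores) - pos
    return (ranks[events].sum() - pos * (pos + 1) / 2) / (pos * neg)

accuracy_ratio = 2 * auc(pred_pd, defaults) - 1     # ranking quality (Gini)
brier = np.mean((pred_pd - defaults) ** 2)          # PD forecast quality

print(f"accuracy ratio = {accuracy_ratio:.3f}, Brier score = {brier:.4f}")
```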

    A point process framework for modeling electrical stimulation of the auditory nerve

    Model-based studies of auditory nerve responses to electrical stimulation can provide insight into the functioning of cochlear implants. Ideally, these studies can identify limitations in sound processing strategies and lead to improved methods for providing sound information to cochlear implant users. To accomplish this, models must accurately describe auditory nerve spiking while avoiding excessive complexity that would preclude large-scale simulations of populations of auditory nerve fibers and obscure insight into the mechanisms that influence neural encoding of sound information. In this spirit, we develop a point process model of the auditory nerve that provides a compact and accurate description of neural responses to electric stimulation. Inspired by the framework of generalized linear models, the proposed model consists of a cascade of linear and nonlinear stages. We show how each of these stages can be associated with biophysical mechanisms and related to models of neuronal dynamics. Moreover, we derive a semi-analytical procedure that uniquely determines each parameter in the model on the basis of fundamental statistics from recordings of single fiber responses to electric stimulation, including threshold, relative spread, jitter, and chronaxie. The model also accounts for refractory and summation effects that influence the responses of auditory nerve fibers to high pulse rate stimulation. Throughout, we compare model predictions to published physiological data and explain differences in auditory nerve responses to high and low pulse rate stimulation. We close by performing an ideal observer analysis of simulated spike trains in response to sinusoidally amplitude modulated stimuli and find that carrier pulse rate does not affect modulation detection thresholds.
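    A toy sketch in the spirit of the cascade described above (not the authors' fitted model): per-pulse spiking probability from an integrated-Gaussian firing-efficiency curve parameterised by threshold and relative spread, scaled by a simple refractory recovery term. All parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical single-fiber parameters (illustrative, not fitted values).
theta = 1.0         # threshold current (arbitrary units)
rel_spread = 0.06   # relative spread: slope of the firing-efficiency curve
t_abs = 0.5e-3      # absolute refractory period (s)
tau_rel = 0.4e-3    # relative refractory recovery time constant (s)

def refractory_factor(dt):
    """Multiplicative recovery of excitability after the last spike."""
    if dt < t_abs:
        return 0.0
    return 1.0 - np.exp(-(dt - t_abs) / tau_rel)

# Constant-amplitude pulse train, slightly above threshold.
pulse_rate = 1000.0                         # pulses per second
times = np.arange(0.0, 0.1, 1.0 / pulse_rate)
amplitude = 1.02 * theta

last_spike = -np.inf
spikes = []
for t in times:
    # Integrated-Gaussian firing efficiency, scaled by refractory recovery.
    p = norm.cdf((amplitude - theta) / (rel_spread * theta))
    p *= refractory_factor(t - last_spike)
    if rng.random() < p:
        spikes.append(t)
        last_spike = t

print(f"{len(spikes)} spikes to {len(times)} pulses "
      f"(firing efficiency {len(spikes) / len(times):.2f})")
```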

    Disturbing Extremal Behavior of Spot Rate Dynamics

    This paper presents a study of extreme interest rate movements in the U.S. Federal Funds market over almost a half century of daily observations from the mid-1950s through the end of 2000. We analyze the fluctuations of the maximal and minimal changes in short term interest rates and test the significance of time-varying paths followed by the mean and volatility of extremes. We formally determine the relevance of introducing trend and serial correlation in the mean, and of incorporating the level and GARCH effects in the volatility of extreme changes in the federal funds rate. The empirical findings indicate the existence of volatility clustering in the standard deviation of extremes, and a significantly positive relationship between the level and the volatility of extremes. The results point to the presence of an autoregressive process in the means of both local maxima and local minima values. The paper proposes a conditional extreme value approach to calculating value at risk by specifying the location and scale parameters of the generalized Pareto distribution as a function of past information. Based on the estimated VaR thresholds, the statistical theory of extremes is found to provide more accurate estimates of the rate of occurrence and the size of extreme observations.
    Keywords: extreme value theory, volatility, interest rates, value at risk
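    A rough sketch of the peaks-over-threshold machinery behind such a VaR calculation (unconditional here, without the time-varying location and scale proposed in the paper); the rate changes are simulated and the threshold choice is illustrative.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)

# Simulated daily changes in a short-term rate (a stand-in for federal funds data).
changes = rng.standard_t(df=4, size=10_000) * 0.05

# Peaks over threshold: fit a generalized Pareto distribution to exceedances
# over a high threshold (here the 95th percentile of the changes).
u = np.quantile(changes, 0.95)
exceedances = changes[changes > u] - u
xi, _, beta = genpareto.fit(exceedances, floc=0.0)

# Tail quantile (VaR) at level q from the standard POT formula:
# VaR_q = u + (beta / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1)
n, n_u, q = len(changes), len(exceedances), 0.99
var_q = u + (beta / xi) * (((n / n_u) * (1.0 - q)) ** (-xi) - 1.0)
print(f"threshold u = {u:.4f}, 99% VaR of daily rate changes = {var_q:.4f}")
```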