
    Evaluation of value-at-risk models using historical data

    We study the effect of restrictions on dual trading in futures contracts. Previous studies have found that dual trading restrictions can have a positive, negative, or neutral effect on market liquidity. In this paper, we propose that trader heterogeneity may explain these conflicting empirical results. We find that, for contracts affected by restrictions, the change in market activity following restrictions differs between contracts. More important, the effect of a restriction varies among dual traders in the same market. For example, dual traders who ceased trading the S&P 500 index futures following restrictions had the highest personal trading skills prior to restrictions. However, realized bid-ask spreads for customers did not increase following restrictions. Our results imply that securities regulation may adversely affect customers, but in ways not captured by broad-based liquidity measures, such as the bid-ask spread.
    Keywords: Econometric models; Investments; Risk
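    The abstract's key liquidity measure is the realized bid-ask spread paid by customers. As a minimal sketch (not taken from the paper), the realized spread is often computed as twice the signed difference between the trade price and a midquote observed a short time later; the column names and the five-minute horizon below are illustrative assumptions.

```python
import pandas as pd

def realized_spread(trades: pd.DataFrame, quotes: pd.DataFrame,
                    horizon: str = "5min") -> pd.Series:
    """Per-trade realized spread: 2 * direction * (price - midquote `horizon` later).

    Assumed columns (illustrative, not from the paper):
      trades: time, price, direction (+1 buyer-initiated, -1 seller-initiated)
      quotes: time, bid, ask
    """
    quotes = quotes.sort_values("time").copy()
    quotes["mid"] = (quotes["bid"] + quotes["ask"]) / 2.0

    trades = trades.sort_values("time").copy()
    trades["match_time"] = trades["time"] + pd.Timedelta(horizon)

    # For each trade, take the last midquote observed at or before t + horizon.
    merged = pd.merge_asof(
        trades, quotes[["time", "mid"]],
        left_on="match_time", right_on="time",
        direction="backward", suffixes=("", "_quote"),
    )
    return 2.0 * merged["direction"] * (merged["price"] - merged["mid"])
```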

    Studying historical occupational careers with multilevel growth models

    In this article we propose to study occupational careers with historical data by using multilevel growth models. Historical career data are often characterized by a lack of information on the timing of occupational changes and by differing numbers of observed occupations per individual. Growth models can handle these specificities, whereas standard methods, such as event history analysis, cannot. We illustrate the use of growth models by studying the career success of men and women, using data from the Historical Sample of the Netherlands. The results show that the method is applicable to male careers, but runs into problems when analyzing female careers.
    Keywords: careers, growth models, historical data
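    To make the modelling approach concrete, here is a minimal sketch of a multilevel growth model fit with statsmodels: a random intercept and a random slope for career age per individual, so persons with different numbers of observed occupations at irregular times can still be pooled. The variable names and the gender interaction are illustrative assumptions, not the article's exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per observed occupation; persons contribute unequal numbers of
# observations at irregular career ages. Column names are illustrative.
careers = pd.read_csv("careers.csv")  # person_id, career_age, status, female

# Growth model: fixed effects for career age and gender, plus a random
# intercept and a random career-age slope that vary across persons.
model = smf.mixedlm(
    "status ~ career_age * female",
    data=careers,
    groups=careers["person_id"],
    re_formula="~career_age",
)
result = model.fit()
print(result.summary())
```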

    Evaluating alternative methods for testing asset pricing models with historical data

    We follow the correct Jagannathan and Wang (2002) framework for comparing the estimates and specification tests of the classical Beta and Stochastic Discount Factor/Generalized Method of Moments (SDF/GMM) methods. We extend previous studies by considering not only single-factor but also multifactor models, and by taking into account some of the prescriptions for improving empirical tests suggested by Lewellen, Nagel and Shanken (2009). Our results reveal that SDF/GMM first-stage estimators lead to lower pricing errors than OLS, while SDF/GMM second-stage estimators display higher pricing errors than the classical Beta GLS method. While Jagannathan and Wang (2002) and Cochrane (2005) conclude that there are no differences between estimating and testing by the Beta and SDF/GMM methods for the CAPM, we show that their conclusion does not extend to multifactor models. Moreover, the Beta method (OLS and GLS) seems to dominate the SDF/GMM (first- and second-stage) procedure in terms of estimators' properties. These results are consistent across benchmark portfolios and sample periods.
    Keywords: Beta Pricing Models; Stochastic Discount Factor; Pricing Errors; Evaluation of Factor Models
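    For reference, here is a minimal sketch (not the paper's code) of the classical two-pass Beta method whose pricing errors are compared against the SDF/GMM estimates: time-series regressions to estimate betas, then a cross-sectional OLS regression of mean excess returns on those betas. The data shapes are assumptions.

```python
import numpy as np

def beta_method_ols(excess_returns: np.ndarray, factors: np.ndarray):
    """Two-pass Beta estimation (OLS cross-sectional step).

    excess_returns: T x N portfolio excess returns.
    factors:        T x K factor realizations.
    Returns estimated risk premia (K,) and pricing errors (N,).
    """
    T, N = excess_returns.shape

    # Pass 1: time-series regressions, one per portfolio, to estimate betas.
    X = np.column_stack([np.ones(T), factors])            # T x (K+1)
    coefs, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    betas = coefs[1:].T                                    # N x K

    # Pass 2: cross-sectional OLS of average excess returns on betas.
    mean_ret = excess_returns.mean(axis=0)                 # N
    Z = np.column_stack([np.ones(N), betas])               # N x (K+1)
    gammas, *_ = np.linalg.lstsq(Z, mean_ret, rcond=None)
    pricing_errors = mean_ret - Z @ gammas                 # per-portfolio alphas
    return gammas[1:], pricing_errors
```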

    The Visualization of Historical Structures and Data in a 3D Virtual City

    Google Earth is a powerful tool that allows users to navigate through 3D representations of many cities and places all over the world. Google Earth has a huge collection of 3D models, and it continues to grow as users all over the world contribute new models. As new buildings are built, new models are also created. But what happens when a new building replaces another? The same thing that happens in reality also happens in Google Earth: old models are replaced with new models. While Google Earth shows the most current data, many users would also benefit from being able to view historical data. Google Earth has acknowledged this with the ability to view historical imagery by manipulating a time slider. However, this feature does not apply to 3D models of buildings, which remain in the environment even when viewing a time before their existence. I would like to build upon this concept by proposing a system that stores 3D models of historical buildings that have been demolished and replaced by new developments. People may want to view the old cities they grew up in, which have undergone major development over the years; old neighborhoods may be completely transformed with new roads and buildings. In addition to viewing historical buildings, users may want to view statistics for a given area. Users can view such data in raw form, but 3D visualizations of statistical data allow for a greater understanding and appreciation of historical changes. I propose to enhance the visualization of the 3D world by allowing users to graphically view statistical data such as population, ethnic groups, education, crime, and income. With this feature, users will be able to see not only physical changes in the environment but also statistical changes over time.
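    One way the proposed time-aware models could be expressed (an illustrative sketch, not the project's implementation) is with KML: attaching a TimeSpan to a Placemark that carries a 3D Model makes Google Earth's time slider hide the building outside its historical lifetime. The coordinates, dates, and .dae path below are invented placeholders.

```python
def historical_building_kml(name, dae_href, lon, lat, begin, end):
    """Return a KML Placemark whose <TimeSpan> limits the 3D model to the
    years the building existed, so the time slider hides it afterwards."""
    return f"""
    <Placemark>
      <name>{name}</name>
      <TimeSpan><begin>{begin}</begin><end>{end}</end></TimeSpan>
      <Model>
        <Location>
          <longitude>{lon}</longitude>
          <latitude>{lat}</latitude>
        </Location>
        <Link><href>{dae_href}</href></Link>
      </Model>
    </Placemark>"""

# Hypothetical demolished building, shown only between 1910 and 1987.
print(historical_building_kml(
    "Demolished warehouse", "models/warehouse_1910.dae",
    -122.4194, 37.7749, "1910-01-01", "1987-06-30"))
```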

    ON COMPLETENESS OF HISTORICAL RELATIONAL DATA MODELS

    Several proposals for extending the relational data model to incorporate the temporal dimension of data have appeared in the past several years. These proposals have differed considerably in the way the temporal dimension is incorporated, both into the structure of the extended relations defined as part of these extended models and into the operations of the extended relational algebra or calculus component of the models. Because of these differences it has been difficult to compare the proposed models and to judge which of them is "better" or, indeed, the "best." In this paper we propose a notion of historical relational completeness, analogous to Codd's notion of relational completeness, and examine several historical relational proposals in light of this standard.
    Information Systems Working Papers Series
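    As an illustration of the kind of construct such proposals compare (not the paper's own formalism), a historical relation can be modelled as tuples carrying a valid-time interval, with a timeslice operator recovering the ordinary snapshot relation that held at a given instant.

```python
from typing import NamedTuple

class HistoricalTuple(NamedTuple):
    key: str
    value: str
    valid_from: int   # year the fact became true
    valid_to: int     # exclusive; use a large sentinel for "still true"

def timeslice(relation: list[HistoricalTuple], instant: int) -> set[tuple[str, str]]:
    """Project the historical relation onto the snapshot valid at `instant`."""
    return {(t.key, t.value)
            for t in relation
            if t.valid_from <= instant < t.valid_to}

# Hypothetical employee/role history.
emp = [
    HistoricalTuple("alice", "clerk",   1900, 1910),
    HistoricalTuple("alice", "manager", 1910, 9999),
    HistoricalTuple("bob",   "smith",   1905, 1920),
]
print(timeslice(emp, 1912))   # {('alice', 'manager'), ('bob', 'smith')}
```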

    Assessing Simulations of Imperial Dynamics and Conflict in the Ancient World

    The development of models to capture large-scale dynamics in human history is one of the core contributions of cliodynamics. Most often, these models are assessed by their predictive capability on some macro-scale, aggregated measure and compared to manually curated historical data. In this report, we consider the model from Turchin et al. (2013), where the evaluation is done on the prediction of "imperial density": the relative frequency with which a geographical area belonged to large-scale polities over a certain time window. We implement the model and release both code and data for reproducibility. We then assess its behaviour against three historical data sets: the relative size of simulated polities vs historical ones; the spatial correlation of simulated imperial density with historical population density; and the spatial correlation of simulated conflict vs historical conflict. At the global level, we show good agreement with population density (R^2 < 0.75), and some agreement with historical conflict in Europe (R^2 < 0.42). The model instead fails to reproduce the historical shape of individual polities. Finally, we tweak the model to behave greedily by having polities preferentially attack weaker neighbours. Results significantly degrade, suggesting that random attacks are a key trait of the original model. We conclude by proposing a way forward: matching the probabilistic imperial strength from simulations to networked communities inferred from real settlement data.
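    The assessment hinges on two quantities: the imperial density of each grid cell (the fraction of the time window in which it belongs to a "large" polity) and the R^2 of its fit against a historical reference grid. A minimal sketch of both is below; the array shapes and the size threshold are assumptions, not the report's exact definitions.

```python
import numpy as np

def imperial_density(membership: np.ndarray, polity_sizes: np.ndarray,
                     large_threshold: int = 10) -> np.ndarray:
    """Fraction of time steps in which each cell belongs to a 'large' polity.

    membership:   (T, H, W) integer polity id per cell per time step.
    polity_sizes: (T, n_polities) number of cells held by each polity.
    """
    T = membership.shape[0]
    density = np.zeros(membership.shape[1:], dtype=float)
    for t in range(T):
        # Cells whose polity holds at least `large_threshold` cells count as imperial.
        density += polity_sizes[t, membership[t]] >= large_threshold
    return density / T

def r_squared(simulated: np.ndarray, historical: np.ndarray) -> float:
    """R^2 of a simple linear fit of the historical grid on the simulated one."""
    x, y = simulated.ravel(), historical.ravel()
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()
```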

    A fully objective Bayesian approach for the Behrens-Fisher problem using historical studies

    For in vivo research experiments with small sample sizes and available historical data, we propose a sequential Bayesian method for the Behrens-Fisher problem. We treat it as a model choice question with two models in competition: one in which the two expectations are equal and one in which they are different. The choice between the two models is performed through a Bayesian analysis, based on a robust choice of combined objective and subjective priors, set on the parameter space and on the model space. Three steps are necessary to evaluate the posterior probability of each model, using two historical datasets similar to the one of interest. Starting from the Jeffreys prior, a posterior based on the first historical dataset is derived and used to calibrate the Normal-Gamma informative priors for the analysis of the second historical dataset, together with a uniform prior on the model space. From this second step, a new posterior on the parameter space and the model space can be used as the objective informative prior for the final Bayesian analysis. Bayesian and frequentist methods have been compared on simulated and real data. In accordance with FDA recommendations, control of type I and type II error rates has been evaluated. The proposed method controls them even when the historical experiments are not completely similar to the one of interest.
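    The sequential idea behind the three steps can be sketched with conjugate Normal-Gamma updates (an illustration only, not the paper's full procedure): vague Jeffreys-like hyperparameters are updated on the first historical dataset, the resulting posterior hyperparameters become the informative prior for the second, and that posterior in turn informs the model-choice analysis on the experiment of interest. The hyperparameter values and simulated historical data below are placeholders.

```python
import numpy as np

def normal_gamma_update(data, mu0, kappa0, alpha0, beta0):
    """Conjugate Normal-Gamma update for a Normal mean and precision.

    Returns posterior hyperparameters, reusable as the informative prior
    for the next dataset in the sequence.
    """
    x = np.asarray(data, dtype=float)
    n, xbar = len(x), x.mean()
    ss = ((x - xbar) ** 2).sum()

    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2.0
    beta_n = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n)
    return mu_n, kappa_n, alpha_n, beta_n

rng = np.random.default_rng(0)
historical_1 = rng.normal(1.0, 0.5, size=8)   # placeholder historical control data
historical_2 = rng.normal(1.1, 0.5, size=8)   # placeholder historical control data

# Step 1: near-flat starting hyperparameters updated on the first historical dataset.
prior = normal_gamma_update(historical_1, mu0=0.0, kappa0=1e-3, alpha0=1e-3, beta0=1e-3)
# Step 2: that posterior is the informative prior for the second historical dataset.
prior = normal_gamma_update(historical_2, *prior)
# Step 3: `prior` would now inform the model-choice analysis on the new experiment.
print(prior)
```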

    As-built design specification for historical daily data bases for testing advanced models

    There are no author-identified significant results in this report
    • …