
    Measuring economic complexity of countries and products: which metric to use?

    Evaluating the economies of countries and their relations with products in the global market is a central problem in economics, with far-reaching implications for our theoretical understanding of international trade as well as for practical applications such as policy making and financial investment planning. The recent Economic Complexity approach aims to quantify the competitiveness of countries and the quality of exported products based on the empirical observation that the most competitive countries have diversified exports, whereas developing countries export only a few low-quality products -- typically those exported by many other countries. Two different metrics, Fitness-Complexity and the Method of Reflections, have been proposed to measure country and product scores in the Economic Complexity framework. We use international trade data and a recent ranking evaluation measure to quantitatively compare the ability of the two metrics to rank countries and products according to their importance in the network. The results show that the Fitness-Complexity metric outperforms the Method of Reflections in both the ranking of products and the ranking of countries. We also investigate a generalization of the Fitness-Complexity metric and show that it can produce improved rankings provided that the input data are reliable.
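    For concreteness, below is a minimal sketch of the Fitness-Complexity iteration the abstract refers to (the nonlinear map of Tacchella et al., 2012), applied to a toy binary country-product export matrix; the matrix and iteration count are illustrative, not taken from the paper.

```python
import numpy as np

def fitness_complexity(M, n_iter=100):
    """Iterate the Fitness-Complexity map on a binary country-product
    matrix M (M[c, p] = 1 if country c competitively exports product p).
    Returns country fitness F and product complexity Q."""
    n_countries, n_products = M.shape
    F = np.ones(n_countries)   # country fitness
    Q = np.ones(n_products)    # product complexity
    for _ in range(n_iter):
        # Fitness: sum of the complexities of a country's exports.
        F_new = M @ Q
        # Complexity: dominated by the least-fit exporting countries.
        Q_new = 1.0 / (M.T @ (1.0 / F))
        # Normalize each vector to unit mean at every step.
        F = F_new / F_new.mean()
        Q = Q_new / Q_new.mean()
    return F, Q

# Toy example: country 0 exports everything, country 2 exports only
# one ubiquitous product, so the iteration yields F[0] > F[1] > F[2]
# and ranks the rarely exported product 2 as most complex.
M = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 0]], dtype=float)
F, Q = fitness_complexity(M)
print(F, Q)
```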

    Fast and Robust Rank Aggregation against Model Misspecification

    In rank aggregation, preferences from different users are summarized into a total order under a homogeneous-data assumption. In real data this assumption often fails, so model misspecification arises, and rank aggregation methods account for it through explicit noise models. However, these methods all rely on particular noise-model assumptions and cannot handle agnostic noise in the real world. In this paper, we propose CoarsenRank, which rectifies the underlying data distribution directly and aligns it with the homogeneous-data assumption without involving any noise model. To this end, we define a neighborhood of the data distribution over which the Bayesian inference of CoarsenRank is performed, so that the resulting posterior is robust to model misspecification. Further, we derive a tractable closed-form solution for CoarsenRank, making it computationally efficient. Experiments on real-world datasets show that CoarsenRank is fast and robust, achieving consistent improvements over baseline methods.
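    The abstract does not spell out CoarsenRank's model, so the sketch below only illustrates the general coarsened-inference idea it builds on: conditioning on the data lying in a neighborhood of the model, which for relative-entropy neighborhoods amounts to tempering the likelihood by a factor zeta in (0, 1]. It is applied here to a toy Bradley-Terry pairwise-preference model with a Gamma prior; the model, prior, and all parameter values are illustrative stand-ins, not the paper's method.

```python
import numpy as np

def coarsened_bradley_terry(wins, zeta=0.5, a=1.5, b=1.0, n_iter=200):
    """MAP item scores for a Bradley-Terry model under a tempered
    ("coarsened") likelihood -- a toy stand-in for CoarsenRank.

    wins[i, j] = number of times item i was preferred to item j.
    zeta < 1 downweights the data relative to the Gamma(a, b) prior,
    which is how coarsened posteriors gain robustness to
    misspecified preference data."""
    n = wins.shape[0]
    n_matches = wins + wins.T          # comparisons per pair
    total_wins = wins.sum(axis=1)
    pi = np.ones(n)                    # item scores
    for _ in range(n_iter):
        # Hunter-style MM update, with the data counts scaled by
        # zeta and the Gamma prior acting as pseudo-counts.
        denom = (n_matches / (pi[:, None] + pi[None, :])).sum(axis=1)
        pi = (zeta * total_wins + a - 1.0) / (zeta * denom + b)
    return pi / pi.sum()

# Toy data: item 0 mostly beats item 1, which mostly beats item 2,
# plus a few noisy upsets.
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]], dtype=float)
print(coarsened_bradley_terry(wins))
```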

    Measuring the Behavioural Component of the S&P 500 and its Relationship to Financial Stress and Aggregated Earnings Surprises

    Scholars in management and economics have shown increasing interest in isolating the behavioural dimension of market evolution. By improving forecast accuracy and precision, this exercise would help firms anticipate economic fluctuations, leading to more profitable business and investment strategies. Yet how to extract the behavioural component from real market data remains an open question. Using monthly data on the returns of the constituents of the S&P 500 index, we propose a Bayesian methodology to measure the extent to which market data conform to what is predicted by prospect theory (the behavioural perspective), relative to the baseline of (standard) subjective expected utility theory. We document a significant behavioural component that peaks during recession periods and is correlated, with the expected sign, with measures of financial volatility, market sentiment, and financial stress. Moreover, the behavioural component decreases around macroeconomic corporate earnings news, while it reacts positively to the number of surprising announcements.
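    The paper's Bayesian measurement procedure is not given in the abstract; the sketch below merely contrasts the two valuation rules at issue -- a prospect-theory value with loss aversion versus a standard expected-utility benchmark -- using the published Tversky-Kahneman (1992) parameter estimates, which are assumptions here rather than values estimated in the paper.

```python
import numpy as np

# Tversky-Kahneman (1992) median parameter estimates (assumed here,
# not taken from the paper above).
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def pt_value(returns):
    """Average prospect-theory value of a sample of returns,
    with gains and losses measured against a zero reference point."""
    r = np.asarray(returns, dtype=float)
    gains = np.clip(r, 0, None)
    losses = np.clip(-r, 0, None)
    # Concave over gains, convex and loss-averse over losses.
    v = gains**ALPHA - LAMBDA * losses**BETA
    return v.mean()

def eu_value(returns, gamma=2.0):
    """Expected-utility benchmark: CRRA utility over gross returns."""
    w = 1.0 + np.asarray(returns, dtype=float)  # gross return on $1
    return ((w**(1 - gamma) - 1) / (1 - gamma)).mean()

# Simulated monthly returns: loss aversion makes the prospect-theory
# value react much more strongly to downside months than the EU
# benchmark does.
rng = np.random.default_rng(0)
r = rng.normal(0.006, 0.045, size=600)
print(pt_value(r), eu_value(r))
```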

    The Z-index: A geometric representation of productivity and impact which accounts for information in the entire rank-citation profile

    We present a simple generalization of Hirsch's h-index, Z = \sqrt{h^2 + C}/\sqrt{5}, where C is the total number of citations. Z is aimed at correcting the potentially excessive penalty imposed by h on a scientist's highly cited papers: for the majority of scientists analyzed, we find the excess citation fraction (C - h^2)/C to be distributed closely around the value 0.75, meaning that 75 percent of the author's impact is neglected. Additionally, Z is less sensitive to local changes in a scientist's citation profile, namely perturbations which increase h while only marginally affecting C. Using real career data for 476 physicists and 488 biologists, we analyze both the distribution of Z and the rank stability of Z with respect to the Hirsch index h and the Egghe index g. We analyze careers distributed across a wide range of total impact, including top-cited physicists and biologists for benchmark comparison. In practice, the Z-index requires the same information needed to calculate h and could be effortlessly incorporated within career profile databases such as Google Scholar and ResearcherID. Because Z incorporates information from the entire publication profile while being more robust than h and g to local perturbations, we argue that Z is better suited for ranking comparisons in academic decision-making scenarios comprising large numbers of scientists.
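    Since Z depends only on h and the total citation count C, it can be computed directly from a citation list. A minimal sketch, with an invented example profile:

```python
import numpy as np

def h_index(citations):
    """Hirsch's h: the largest h such that h papers have >= h citations."""
    c = np.sort(np.asarray(citations))[::-1]
    return int((c >= np.arange(1, len(c) + 1)).sum())

def z_index(citations):
    """Z = sqrt(h^2 + C) / sqrt(5), with C the total citation count,
    as defined in the abstract above."""
    h = h_index(citations)
    C = int(np.sum(citations))
    return np.sqrt(h**2 + C) / np.sqrt(5)

# Example: the two highly cited papers barely affect h, but Z
# credits them through C.
cites = [250, 90, 40, 12, 8, 5, 3, 1, 0]
print(h_index(cites), z_index(cites))  # h = 5, C = 409, Z ~ 9.32
```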

    Forecasting Issues: Ideas of Decomposition and Combination

    Combination techniques and decomposition procedures have been applied to time series forecasting to enhance prediction accuracy and to facilitate the analysis of data, respectively. However, the complexity of some combination techniques, and the difficulty of applying decomposition results to the extrapolation of data -- due mainly to the large variability of economic and financial time series -- have limited their application and compromised their development. This paper re-examines the benefits and limitations of decomposition and combination techniques in forecasting, and contributes a new forecasting methodology to the field. The methodology is based on disaggregating the time series components through the STL decomposition procedure, extrapolating linear combinations of the disaggregated sub-series, and reaggregating the extrapolations to obtain an estimate for the global series. Applied to the NN3 and M1 Competition series, the results suggest that the methodology can outperform other competing statistical techniques. The power of the method lies in its ability to perform consistently well, irrespective of the characteristics, underlying structure and level of noise of the data.

    Keywords: ARIMA models, combining forecasts, decomposition, error measures, evaluating forecasts, forecasting competitions, time series
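    A minimal sketch of the decompose-extrapolate-reaggregate pipeline described above, using statsmodels' STL; the per-component models (Holt's linear trend, a seasonal-naive repeat, zero remainder) and the toy series are illustrative choices, not the paper's linear-combination scheme.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.holtwinters import Holt

def stl_forecast(series, horizon, period=12):
    """Decompose with STL, extrapolate each component separately,
    then reaggregate the component forecasts."""
    res = STL(series, period=period).fit()
    # Trend: Holt's linear exponential smoothing.
    trend_fc = Holt(res.trend).fit().forecast(horizon)
    # Seasonality: repeat the last observed seasonal cycle.
    last_cycle = res.seasonal.iloc[-period:].to_numpy()
    seas_fc = np.resize(last_cycle, horizon)
    # Remainder: treated as mean-zero noise and forecast as zero.
    return trend_fc.to_numpy() + seas_fc

# Usage on a toy monthly series with trend, seasonality and noise.
rng = np.random.default_rng(1)
t = np.arange(120)
y = pd.Series(0.5 * t + 10 * np.sin(2 * np.pi * t / 12)
              + rng.normal(0, 2, 120),
              index=pd.date_range("2010-01", periods=120, freq="MS"))
print(stl_forecast(y, horizon=12))
```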