12,161 research outputs found

    Application of Stationary Wavelet Support Vector Machines for the Prediction of Economic Recessions

    Get PDF
    This paper examines the efficiency of various approaches to the classification and prediction of economic expansion and recession periods in the United Kingdom. Four approaches are applied. The first is discrete choice models using Logit and Probit regressions, while the second is a Markov Switching Regime (MSR) model with time-varying transition probabilities. The third approach relies on Support Vector Machines (SVM), while the fourth approach, proposed in this study, is Stationary Wavelet SVM (SW-SVM) modelling. The findings show that SW-SVM and MSR present the best forecasting performance in the out-of-sample period. In addition, forecasts for the period 2012-2015 are provided using all approaches.
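
    The abstract does not spell out the feature construction, so the sketch below only illustrates the general SW-SVM recipe on synthetic data: decompose a series with the stationary (undecimated) wavelet transform, whose coefficient series keep the original length and can therefore be stacked as per-observation features, and feed them to an SVM classifier. The series, labels, wavelet choice, and hyperparameters are all illustrative assumptions, not the paper's specification.

```python
# Minimal SW-SVM sketch on simulated data; not the authors' exact setup.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical monthly activity series and recession labels (1 = recession).
n = 512  # must be divisible by 2**level for pywt.swt
x = rng.normal(0.2, 1.0, n).cumsum() * 0.01 + rng.normal(0, 0.5, n)
y = (np.convolve(np.diff(x, prepend=x[0]), np.ones(6) / 6, mode="same") < 0).astype(int)

# Stationary wavelet transform: undecimated, so every level yields a
# coefficient series of the original length, stackable as features.
coeffs = pywt.swt(x, "db4", level=3)            # [(cA3, cD3), ..., (cA1, cD1)]
features = np.column_stack([c for pair in coeffs for c in pair])

X_train, X_test, y_train, y_test = train_test_split(
    features, y, test_size=0.25, shuffle=False   # hold out a final block
)
clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print("out-of-sample accuracy:", clf.score(X_test, y_test))
```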

    Banking and Currency Crises: How Common Are Twins?

    Get PDF
    The coincidence of banking and currency crises associated with the Asian financial crisis has drawn renewed attention to causal and common factors linking the two phenomena. In this paper, we analyze the incidence and underlying causes of banking and currency crises in 90 industrial and developing countries over the 1975-97 period. We measure the individual and joint ("twin") occurrence of bank and currency crises and assess the extent to which each type of crisis provides information about the likelihood of the other. We find that the twin crisis phenomenon is most common in financially liberalized emerging markets. The strong contemporaneous correlation between currency and bank crises in emerging markets is robust, even after controlling for a host of macroeconomic and financial structure variables and possible simultaneity bias. We also find that the occurrence of banking crises provides a good leading indicator of currency crises in emerging markets. The converse does not hold, however, as currency crises are not a useful leading indicator of the onset of future banking crises. We conjecture that the openness of emerging markets to international capital flows, combined with a liberalized financial structure, makes them particularly vulnerable to twin crises.
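
    The abstract describes a crisis-incidence analysis with macroeconomic controls. A minimal sketch of that style of estimation, on simulated placeholder data rather than the authors' 90-country panel, is a probit of currency-crisis onset on a lagged banking-crisis dummy plus controls; the variable names and coefficients below are invented for illustration.

```python
# Hedged sketch: probit of currency crises on a lagged banking-crisis dummy.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000  # hypothetical country-year observations

bank_crisis_lag = rng.binomial(1, 0.08, n)   # banking crisis in t-1
credit_growth = rng.normal(0.05, 0.1, n)     # illustrative macro controls
reserves_m2 = rng.normal(0.3, 0.1, n)

# Simulate the "twin" link: lagged bank crises raise currency-crisis risk.
latent = -1.8 + 1.2 * bank_crisis_lag + 2.0 * credit_growth - 1.0 * reserves_m2
currency_crisis = (latent + rng.standard_normal(n) > 0).astype(int)

X = sm.add_constant(np.column_stack([bank_crisis_lag, credit_growth, reserves_m2]))
probit = sm.Probit(currency_crisis, X).fit(disp=False)
print(probit.summary(xname=["const", "bank_crisis_lag", "credit_growth", "reserves_m2"]))
```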

    Photometric redshifts with Quasi Newton Algorithm (MLPQNA). Results in the PHAT1 contest

    Get PDF
    Context. Since the advent of modern multiband digital sky surveys, photometric redshifts (photo-z's) have become relevant if not crucial to many fields of observational cosmology, from the characterization of cosmic structures to weak and strong lensing. Aims. We describe an application to an astrophysical context, namely the evaluation of photometric redshifts, of MLPQNA, a machine learning method based on the Quasi Newton Algorithm. Methods. Theoretical methods for photo-z evaluation are based on the interpolation of a priori knowledge (spectroscopic redshifts or SED templates) and represent an ideal comparison ground for methods based on neural networks. The MultiLayer Perceptron with Quasi Newton learning rule (MLPQNA) described here is a computationally effective implementation of neural networks, exploited for the first time to solve regression problems in the astrophysical context, and is offered to the community through the DAMEWARE (DAta Mining & Exploration Web Application REsource) infrastructure. Results. The PHAT contest (Hildebrandt et al. 2010) provides a standard dataset for testing old and new methods of photometric redshift evaluation, together with a set of statistical indicators that allow a straightforward comparison among different methods. The MLPQNA model was applied to the whole PHAT1 dataset of 1984 objects after an optimization of the model performed using the 515 available spectroscopic redshifts as the training set. When applied to the PHAT1 dataset, MLPQNA obtains the best bias accuracy (0.0006) and very competitive accuracies in terms of scatter (0.056) and outlier percentage (16.3%), scoring as the second most effective empirical method among those that have so far participated in the contest. MLPQNA shows better generalization capabilities than most other empirical methods, especially in the presence of underpopulated regions of the Knowledge Base.
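
    MLPQNA itself is distributed through the DAMEWARE infrastructure; as an illustrative stand-in, scikit-learn's MLPRegressor with the L-BFGS quasi-Newton solver captures the same idea: a multilayer perceptron trained with a quasi-Newton rule to regress redshift on photometry. The simulated magnitudes, the toy redshift relation, and the 2N+1 hidden-layer rule of thumb below are assumptions, not the paper's configuration.

```python
# Quasi-Newton MLP regression as a stand-in for MLPQNA, on simulated photometry.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 515  # mimic the size of the PHAT1 spectroscopic training set

mags = rng.uniform(18, 26, size=(n, 5))   # hypothetical 5-band magnitudes
# Invented smooth color-redshift relation plus noise, for illustration only.
z_spec = np.clip(0.1 * (mags[:, 3] - mags[:, 1]) + 0.02 * mags[:, 0] - 0.2
                 + rng.normal(0, 0.02, n), 0, None)

X_tr, X_te, z_tr, z_te = train_test_split(mags, z_spec, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(2 * 5 + 1,),  # 2N+1 rule of thumb
                     solver="lbfgs",                   # quasi-Newton training
                     max_iter=5000, random_state=0).fit(X_tr, z_tr)

resid = (model.predict(X_te) - z_te) / (1 + z_te)      # standard photo-z residual
print("bias:", resid.mean(), "scatter:", resid.std())
```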

    Sparse learning of stochastic dynamic equations

    Full text link
    With the rapid increase of available data for complex systems, there is great interest in the extraction of physically relevant information from massive datasets. Recently, a framework called Sparse Identification of Nonlinear Dynamics (SINDy) has been introduced to identify the governing equations of dynamical systems from simulation data. In this study, we extend SINDy to stochastic dynamical systems, which are frequently used to model biophysical processes. We prove the asymptotic correctness of stochastic SINDy in the infinite-data limit, both in the original and projected variables. We discuss algorithms to solve the sparse regression problem arising from the practical implementation of SINDy, and show that cross-validation is an essential tool to determine the right level of sparsity. We demonstrate the proposed methodology on two test systems, namely the diffusion in a one-dimensional potential and the projected dynamics of a two-dimensional diffusion process.
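
    A minimal sketch of the stochastic-SINDy idea, under simplifying assumptions: simulate a 1D overdamped diffusion dX = f(X)dt + s dW, estimate the drift pointwise from increments (a Kramers-Moyal estimate), build a polynomial library, and recover a sparse drift by sequentially thresholded least squares. The double-well drift, library, and threshold are illustrative choices, not the paper's test cases.

```python
# Stochastic SINDy sketch: recover a sparse drift from simulated SDE data.
import numpy as np

rng = np.random.default_rng(3)
dt, n, s = 1e-3, 100_000, 0.5
x = np.empty(n); x[0] = 0.0
for i in range(n - 1):                  # Euler-Maruyama, true drift x - x^3
    x[i + 1] = x[i] + (x[i] - x[i]**3) * dt + s * np.sqrt(dt) * rng.standard_normal()

# Kramers-Moyal estimate of the drift from increments.
drift = (x[1:] - x[:-1]) / dt

# Polynomial library up to cubic terms.
lib = np.column_stack([np.ones(n - 1), x[:-1], x[:-1]**2, x[:-1]**3])

# Sequentially thresholded least squares (the SINDy regression step).
coef = np.linalg.lstsq(lib, drift, rcond=None)[0]
for _ in range(10):
    small = np.abs(coef) < 0.2
    coef[small] = 0.0
    big = ~small
    coef[big] = np.linalg.lstsq(lib[:, big], drift, rcond=None)[0]
print("recovered drift coefficients [1, x, x^2, x^3]:", coef.round(2))
```

    In practice the threshold 0.2 would be chosen by cross-validation, which the abstract identifies as essential for setting the sparsity level.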

    When did the 2001 recession really start?

    Get PDF
    The paper develops a non-parametric, non-stationary framework for business-cycle dating based on an innovative statistical methodology known as Adaptive Weights Smoothing (AWS). The methodology is used both for the study of the individual macroeconomic time series relevant to the dating of the business cycle and for the estimation of their joint dynamics. Since the business cycle is defined as the common dynamics of some set of macroeconomic indicators, its estimation depends fundamentally on the group of series monitored. We apply our dating approach to two sets of US economic indicators, including the monthly series of industrial production, nonfarm payroll employment, real income, wholesale-retail trade, and gross domestic product (GDP). We find evidence of a change in the methodology of the NBER's Business-Cycle Dating Committee: in the dating of the last recession, an extended set of five monthly macroeconomic indicators replaced the set of indicators the Committee had emphasized in recent decades. This change seems to seriously affect the continuity of the dating of the business cycle: had the dating been done on the traditional set of indicators, the last recession would have lasted a year and a half longer. We find that, independent of the set of coincident indicators monitored, the last economic contraction began in November 2000, four months before the date chosen by the NBER's Business-Cycle Dating Committee.
    Keywords: business cycle, non-parametric smoothing, non-stationarity
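
    Adaptive Weights Smoothing (Polzehl and Spokoiny's procedure) is considerably more elaborate than what follows; the toy below only conveys the core idea behind the dating exercise: smooth a coincident indicator with weights that are damped where neighboring values differ by more than the noise level, so the estimate stays flat within a regime without blurring the break, then read the turning point off the smoothed series. The data, kernels, and tuning constants are all invented for illustration.

```python
# Toy adaptive-weights smoother locating a simulated downturn; not full AWS.
import numpy as np

rng = np.random.default_rng(4)
growth = np.r_[rng.normal(0.3, 0.4, 70), rng.normal(-0.4, 0.4, 30)]  # break at t=70

def adaptive_smooth(y, bandwidth=12, lam=2.0, sigma=0.4):
    """One AWS-style pass: kernel weights are damped where |y[j] - y[i]| is
    large relative to the noise level, so averaging respects regime breaks."""
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        j = np.arange(max(0, i - bandwidth), min(n, i + bandwidth + 1))
        loc = np.exp(-0.5 * ((j - i) / bandwidth) ** 2)        # locality kernel
        stat = np.exp(-((y[j] - y[i]) / (lam * sigma)) ** 2)   # adaptation kernel
        w = loc * stat
        out[i] = np.sum(w * y[j]) / np.sum(w)
    return out

smoothed = adaptive_smooth(growth)
turn = int(np.argmax(smoothed < 0))   # first month the smoothed series turns negative
print("estimated turning point: month", turn)
```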

    Challenges of Big Data Analysis

    Full text link
    Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This article gives an overview of the salient features of Big Data and of how these features affect the paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set and point out that the exogeneity assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity; they can lead to wrong statistical inferences and consequently wrong scientific conclusions.
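
    A quick simulation makes one of the named challenges, spurious correlation, concrete: when there are far more variables than observations, some predictor that is independent of the response by construction will nonetheless correlate strongly with it by chance alone. The dimensions below are arbitrary illustrative choices.

```python
# Spurious correlation in high dimensions: max |corr| grows with p even
# though every predictor is independent of y by construction.
import numpy as np

rng = np.random.default_rng(5)
n, p = 60, 6000
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Sample correlation of y with every column of X.
Xc = (X - X.mean(0)) / X.std(0)
yc = (y - y.mean()) / y.std()
corr = Xc.T @ yc / n
print("max |spurious correlation|:", np.abs(corr).max().round(2))  # around 0.5
```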


    Leading Indicators of Inflation for Brazil

    Get PDF
    The goal of this project is to construct leading indicators that anticipate inflation-cycle turning points on a real-time monitoring basis. As a first step, turning points of IPCA inflation are determined using a periodic stochastic Markov switching model. These turning points are the events that the leading indicators should anticipate. A dynamic factor model is then used to extract common cyclical movements in a set of variables that display predictive content for inflation. The leading indicators are designed to serve as practical tools to assist real-time monitoring of monetary policy on a month-to-month basis. Thus, the indicators are built and ranked according to their out-of-sample forecasting performance. The leading indicators are found to be an informative tool for signaling future phases of the inflation cycle out-of-sample, even in real time when only preliminary and unrevised data are available.
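
    A hedged sketch of the first step described above: fit a two-regime Markov switching model to an inflation series and date turning points where the smoothed probability of the high-inflation regime crosses 0.5. The series is simulated, and the plain statsmodels specification below does not implement the paper's periodic transition probabilities.

```python
# Two-regime Markov switching model for dating inflation-cycle turning points.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
infl = np.r_[rng.normal(0.4, 0.15, 60),    # low-inflation phase
             rng.normal(1.0, 0.25, 40),    # high-inflation phase
             rng.normal(0.4, 0.15, 60)]

model = sm.tsa.MarkovRegression(infl, k_regimes=2, switching_variance=True)
res = model.fit()

# Regime labels are arbitrary after estimation; check res.params to see
# which regime carries the higher mean before reading off probabilities.
p_high = res.smoothed_marginal_probabilities[:, 1]
turns = np.flatnonzero(np.diff((p_high > 0.5).astype(int)))
print("turning points at months:", turns)
```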