
    Automated Forecasts of Asia-Pacific Economic Activity

    This paper reports quarterly ex ante forecasts of macroeconomic activity for the U.S.A., Japan and Australia for the period 1995-1997. The forecasts are based on automated time series models of vector autoregressions (VAR's), reduced rank regressions (RRR's), error correction models (ECM's) and Bayesian vector autoregressions (BVAR's). The models are automated by using an asymptotic predictive form of the model selection criterion PIC to determine autoregressive lag order, cointegrating rank and trend degree in the VAR's, RRR's and ECM's. The same criterion is used to find optimal values of the hyperparameters in the BVAR's. The forecasts are graphed and tabulated. In the case of the U.S.A., the results are compared with forecasts from the Fair model, a structural econometric model of the U.S. economy.
    Keywords: automated time series model, cointegration, model selection, nonstationarity, posterior information criterion (PIC), PIC'ed model, stochastic trend, unit root
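    As a rough illustration of the automated selection step described above, the sketch below picks a VAR lag order by an information criterion and a cointegrating rank by Johansen's trace test using statsmodels. PIC itself is not shipped in standard libraries, so BIC serves here as a stand-in criterion, and the `macro` array is simulated placeholder data rather than the paper's series.

```python
# A minimal sketch of automated lag-order and cointegrating-rank selection
# for a VAR; BIC stands in for the paper's PIC criterion, and the data
# are simulated placeholders.
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import select_coint_rank

rng = np.random.default_rng(0)
macro = rng.standard_normal((200, 3)).cumsum(axis=0)  # toy I(1) series

# 1. Autoregressive lag order by criterion minimization.
p = VAR(macro).select_order(maxlags=8).selected_orders["bic"]

# 2. Cointegrating rank, given the chosen order, via the Johansen trace test.
rank = select_coint_rank(macro, det_order=0, k_ar_diff=max(p - 1, 1),
                         method="trace", signif=0.05).rank
print(f"selected lag order p={p}, cointegrating rank r={rank}")
```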

    A retrieval-based dialogue system utilizing utterance and context embeddings

    Finding semantically rich and computer-understandable representations for textual dialogues, utterances and words is crucial for dialogue systems (or conversational agents), as their performance mostly depends on understanding the context of conversations. Recent research aims at finding distributed vector representations (embeddings) for words, such that semantically similar words are relatively close within the vector space. Encoding the "meaning" of text into vectors is a current trend, and text can range from words, phrases and documents to actual human-to-human conversations. In recent research approaches, responses have been generated utilizing a decoder architecture, given the vector representation of the current conversation. In this paper, the utilization of embeddings for answer retrieval is explored by using Locality-Sensitive Hashing Forest (LSH Forest), an Approximate Nearest Neighbor (ANN) model, to find similar conversations in a corpus and rank possible candidates. Experimental results on the well-known Ubuntu Corpus (in English) and a customer service chat dataset (in Dutch) show that, in combination with a candidate selection method, retrieval-based approaches outperform generative ones and reveal promising future research directions towards the usability of such a system.
    Comment: A shorter version is accepted at the ICMLA 2017 conference; acknowledgement added; typos corrected.
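    The retrieval pattern described above (embed contexts, index them, rank nearest candidates) can be sketched as follows. scikit-learn's LSHForest, available up to version 0.20, exposed the same fit/kneighbors interface as NearestNeighbors, which is used here in its place; the toy corpus and the TF-IDF "embedding" are placeholder assumptions, not the paper's models.

```python
# A minimal sketch of retrieval-based response selection: embed conversation
# contexts, index them, and rank nearest candidates.  NearestNeighbors stands
# in for the deprecated sklearn LSHForest; corpus and embeddings are toys.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

corpus_contexts = ["how do i restart the network service",
                   "my screen resolution is wrong after the update",
                   "apt-get fails with a lock error"]
corpus_responses = ["try: sudo systemctl restart NetworkManager",
                    "check Settings > Displays, or xrandr from a terminal",
                    "another package manager may be running; remove the lock"]

vec = TfidfVectorizer().fit(corpus_contexts)
index = NearestNeighbors(n_neighbors=2, metric="cosine").fit(
    vec.transform(corpus_contexts))

query = "network service will not restart"
_, idx = index.kneighbors(vec.transform([query]))
ranked = [corpus_responses[i] for i in idx[0]]  # candidate responses, ranked
print(ranked[0])
```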

    Comparison of procedures for fitting the autoregressive order of a vector error correction model

    This paper investigates the lag length selection problem of a vector error correction model by using a convergent information criterion and tools based on the Box-Pierce methodology recently proposed in the literature. The performances of these approaches for selecting the optimal lag length are compared via Monte Carlo experiments. The effects of a misspecified deterministic trend or cointegrating rank on the lag length selection are studied. Since such processes often exhibit nonlinearities, both iid and conditionally heteroscedastic errors are considered. Strategies that can avoid misleading situations are proposed.
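    A minimal sketch of the two strategies being compared might look as follows: pick the VECM lag length with an information criterion, then cross-check the chosen order with a portmanteau test on the residuals. The Ljung-Box variant (a small-sample refinement of Box-Pierce) is used via statsmodels; the simulated data and the lag-10 choice are illustrative assumptions.

```python
# A minimal sketch: criterion-based VECM lag selection, then a portmanteau
# (Ljung-Box) residual check at the chosen order.  Data are simulated.
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)
y = rng.standard_normal((300, 2)).cumsum(axis=0)  # toy I(1) series

p = VAR(y).select_order(maxlags=10).selected_orders["bic"]
res = VECM(y, k_ar_diff=max(p - 1, 1), coint_rank=1).fit()

# Is any residual autocorrelation left at the selected lag length?
for i in range(res.resid.shape[1]):
    lb = acorr_ljungbox(res.resid[:, i], lags=[10], return_df=True)
    print(f"eq {i}: Ljung-Box p-value = {lb['lb_pvalue'].iloc[0]:.3f}")
```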

    Model Determination and Macroeconomic Activity

    The subject of this paper is modelling, estimation, inference and prediction for economic time series. Bayesian and classical approaches are considered. The paper has three main parts. The first is concerned with Bayesian model determination, forecast evaluation and the construction of evolving sequences of models that can adapt in dimension and form (including the way in which any nonstationarity in the data is modelled) as new characteristics in the data become evident. This part of the paper continues some recent work on Bayesian asymptotics by the author and Werner Ploberger, develops embedding techniques for vector martingales that justify the role of a general class of exponential densities in model selection and forecast evaluation, and implements the modelling ideas in a multivariate regression framework that includes Bayesian vector autoregressions (BVAR’s) and reduced rank regressions (RRR’s). It is shown how the theory in the paper can be used: (i) to construct optimized BVAR’s with data-determined hyperparameters; (ii) to compare models such as BVAR’s, optimized BVAR’s and RRR’s; (iii) to perform joint order selection of cointegrating rank, lag length and trend degree in a VAR; and (iv) to discard data that may be irrelevant and thereby help determine the “lifetime” of an econometric model. Simulations are conducted to study the forecasting performance of these model determination procedures in some multiple time series models with cointegration. The final part of the paper reports an empirical application of these ideas and methods to US and UK macroeconomic data.
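    Item (i), data-determined hyperparameters, can be caricatured as below: choose the shrinkage strength of a regularized VAR by one-step-ahead forecast performance on a hold-out window. Ridge shrinkage is used here as a crude, explicitly swapped-in stand-in for a Minnesota-prior tightness parameter, and the data are simulated placeholders.

```python
# A minimal sketch of data-determined shrinkage: ridge penalty chosen by
# hold-out forecast MSE, as a crude stand-in for BVAR hyperparameter tuning.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
y = rng.standard_normal((250, 3)).cumsum(axis=0)
p = 2  # fixed lag order for the sketch

X = np.hstack([y[p - k - 1:-k - 1] for k in range(p)])  # lagged regressors
Y = y[p:]
split = 200

best = None
for lam in [0.01, 0.1, 1.0, 10.0, 100.0]:
    model = Ridge(alpha=lam).fit(X[:split], Y[:split])
    mse = np.mean((model.predict(X[split:]) - Y[split:]) ** 2)
    if best is None or mse < best[1]:
        best = (lam, mse)
print(f"selected shrinkage lambda = {best[0]} (hold-out MSE {best[1]:.3f})")
```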

    STOCK MARKET TREND PREDICTION USING SUPPORT VECTOR MACHINES

    The aim of the paper was to outline a trend prediction model for the BELEX15 stock market index of the Belgrade stock exchange based on Support Vector Machines (SVMs). The feature selection was carried out through the analysis of technical and macroeconomic indicators. In addition, the SVM method was compared with a "similar" one, least squares support vector machines (LS-SVMs), to analyze their classification precision and complexity. The test results indicate that the SVMs outperform benchmarking models and are suitable for short-term stock market trend predictions.
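    The basic setup can be sketched as below: classify next-day index movement (up/down) with an RBF-kernel SVM on simple technical features. The random-walk "prices" and the two features (momentum, volatility) are placeholder assumptions standing in for the BELEX15 series and the paper's technical and macroeconomic indicators.

```python
# A minimal sketch of SVM trend classification: label = sign of the next
# daily log return, features = simple technical indicators.  Data are toys.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
price = 100 * np.exp(rng.normal(0, 0.01, 600).cumsum())
ret = np.diff(np.log(price))

# Features: 5-day momentum and 10-day volatility; label: next day's sign.
mom = np.array([ret[t - 5:t].sum() for t in range(10, len(ret) - 1)])
vol = np.array([ret[t - 10:t].std() for t in range(10, len(ret) - 1)])
X = np.column_stack([mom, vol])
yy = (ret[10:-1] > 0).astype(int)

split = 450
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:split], yy[:split])
print("hold-out accuracy:", clf.score(X[split:], yy[split:]))
```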

    Bayes Methods for Trending Multiple Time Series with an Empirical Application to the US Economy

    Multiple time series models with stochastic regressors are considered and primary attention is given to vector autoregressions (VAR’s) with trending mechanisms that may be stochastic, deterministic or both. In a Bayesian framework, the data density in such a system implies the existence of a time series “Bayes model” and “Bayes measure” of the data. These are predictive models and measures for the next period observation given the historical trajectory to the present. Issues of model selection, hypothesis testing and forecast evaluation are all studied within the context of these models and the measures are used to develop selection criteria, test statistics and encompassing tests within the compass of the same statistical methodology. Of particular interest in applications are lag order and trend degree, causal effects, the presence and number of unit roots in the system, and for integrated series the presence of cointegration and the rank of the cointegration space, which can be interpreted as an order selection problem. In data where there is evidence of mildly explosive behavior we also wish to allow for the presence of co-motion among variables even though they are individually not modelled as integrated series. The paper develops a statistical framework for addressing these features of trending multiple time series and reports an extended empirical application of the methodology to a model of the US economy that sets out to explain the behavior of and to forecast interest rates, unemployment, money stock, prices and income. The performance of a data-based, evolving “Bayes model” of these series is evaluated against some rival fixed format VAR’s, VAR’s with Minnesota priors (BVARM’s) and univariate models. The empirical results show that fixed format VAR’s and BVARM’s all perform poorly in forecasting exercises in comparison with evolving “Bayes models” that explicitly adapt in form as new data becomes available.
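    One ingredient of this programme, selecting the trend degree by comparing posterior model probabilities, can be approximated asymptotically with BIC differences. The sketch below does this for a single simulated series over constant, linear and quadratic trend specifications; the BIC-weight approximation is a standard shortcut, not the paper's exact Bayes measure.

```python
# A minimal sketch: approximate posterior probabilities over trend degrees
# via BIC (an asymptotic Bayes-factor approximation).  Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T = 200
y = 0.05 * np.arange(T) + rng.standard_normal(T).cumsum()

bics = {}
for degree in (0, 1, 2):  # constant, linear trend, quadratic trend
    X = np.vander(np.arange(T) / T, degree + 1, increasing=True)
    bics[degree] = sm.OLS(y, X).fit().bic

# Convert BIC differences to approximate posterior model probabilities.
b = np.array(list(bics.values()))
w = np.exp(-0.5 * (b - b.min()))
w /= w.sum()
for d, p in zip(bics, w):
    print(f"trend degree {d}: approx. posterior probability {p:.2f}")
```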

    Machine learning survival analysis on couple time-to-divorce data

    Marriage life does not always last harmoniously and occasionally can lead to divorce. Over the three years since 2019, divorce cases in Palangka Raya have fluctuated and have recently been increasing. This research used a machine learning method called the Survival Support Vector Machine on the divorce dataset in Palangka Raya. This research developed a feature selection technique using backward elimination to determine the factors influencing a couple's decision to have their divorce registered in the religious court. The backward elimination method yielded the variables contributing to divorce: the number of children, the defendant's occupation, the plaintiff's age at marriage, the cause of divorce, and the defendant's education. Based on a comparison of survival model performance between the Cox proportional hazards model and the Survival Support Vector Machine, the latter was found to be better, with a higher concordance index (61.24%) and a hazard ratio of 0.54. Thus, 61.24% of divorce cases were classified precisely by SUR-SVM in terms of the time sequence of events. Moreover, the hazard ratio of 0.54 indicated that the divorce rate of couples with censored status was 0.54 times that of couples with failed/endpoint status.
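    The model comparison described above can be sketched with scikit-survival: fit a Cox proportional hazards model and a survival SVM on right-censored data and compare concordance indices. The simulated covariates, event times and censoring below are placeholders for the Palangka Raya divorce dataset.

```python
# A minimal sketch of Cox PH vs. survival SVM on right-censored data,
# compared by concordance index.  Requires scikit-survival; data are toys.
import numpy as np
from sksurv.util import Surv
from sksurv.svm import FastSurvivalSVM
from sksurv.linear_model import CoxPHSurvivalAnalysis

rng = np.random.default_rng(5)
n = 300
X = rng.standard_normal((n, 5))           # e.g. age at marriage, children, ...
risk = X @ np.array([0.5, -0.3, 0.2, 0.0, 0.4])
time = rng.exponential(np.exp(-risk))     # event times depend on covariates
event = rng.random(n) < 0.7               # roughly 30% right-censored
y = Surv.from_arrays(event=event, time=time)

cox = CoxPHSurvivalAnalysis().fit(X, y)
svm = FastSurvivalSVM(alpha=1.0, max_iter=100, random_state=0).fit(X, y)

# .score returns the concordance index (the paper reports 61.24% for SUR-SVM).
print("Cox c-index:", round(cox.score(X, y), 3))
print("SVM c-index:", round(svm.score(X, y), 3))
```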

    River discharge simulation using variable parameter McCarthy–Muskingum and wavelet-support vector machine methods

    In this study, an extended version of the variable parameter McCarthy–Muskingum (VPMM) method originally proposed by Perumal and Price (J Hydrol 502:89–102, 2013) was compared with widely used data-based models, namely the support vector machine (SVM) and the hybrid wavelet-support vector machine (WASVM), to simulate hourly discharge in the Neckar River, where significant lateral flow contributed by intermediate catchment rainfall prevails during flood wave movement. Discharge data from 1999 to 2002 were used. The extended VPMM method was used to simulate 9 flood events of the year 2002, and the results were compared with the SVM and WASVM models. The analysis of statistical and graphical results suggests that the extended VPMM method predicted the flood wave movement better than the SVM and WASVM models. A model complexity analysis was also conducted, which suggests that the two-parameter extended VPMM method is less complex than the three-parameter SVM and WASVM models. Further, the model selection criteria also give the highest values for VPMM in 7 out of 9 flood events. The simulation of flood events suggested that both approaches were able to capture the underlying physics and reproduced target values close to the observed hydrograph. However, the VPMM models were slightly more efficient and accurate than the SVM and WASVM models, which are based only on antecedent discharge data. The study captures the current trend in flood forecasting studies and shows the importance of both approaches (physical and data-based modelling). The analysis suggested that these approaches complement each other and can be used for accurate yet less computationally intensive flood forecasting.
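    The WASVM idea can be sketched as below: decompose the discharge series with a discrete wavelet transform, suppress the finest detail coefficients, and regress the next value on antecedent smoothed discharges with an SVR. It assumes PyWavelets and scikit-learn, and the synthetic flood-wave "discharge" series stands in for the Neckar River data.

```python
# A minimal sketch of wavelet-SVM (WASVM) discharge simulation: wavelet
# smoothing plus SVR on antecedent discharges.  Data are synthetic.
import numpy as np
import pywt
from sklearn.svm import SVR

rng = np.random.default_rng(6)
t = np.arange(500)
q = 50 + 30 * np.exp(-((t - 250) / 40.0) ** 2) + rng.normal(0, 1, t.size)

# Wavelet smoothing: zero the finest detail coefficients, reconstruct.
coeffs = pywt.wavedec(q, "db4", level=3)
coeffs[-1] = np.zeros_like(coeffs[-1])
q_smooth = pywt.waverec(coeffs, "db4")[: q.size]

# Features: three antecedent (smoothed) discharges; target: next discharge.
lags = 3
X = np.column_stack([q_smooth[k:-(lags - k)] for k in range(lags)])
yy = q[lags:]

split = 400
svr = SVR(C=10.0, epsilon=0.1).fit(X[:split], yy[:split])
print("hold-out R^2:", round(svr.score(X[split:], yy[split:]), 3))
```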

    Modelling short-term interest rate spreads in the euro money market

    In the framework of a new money market econometric model, we assess the degree of precision achieved by the European Central Bank (ECB) in meeting its operational target for the short-term interest rate and the impact of the U.S. sub-prime credit crisis on the euro money market during the second half of 2007. This is done in two steps. Firstly, the long-term behaviour of interest rates with one-week maturity is investigated by testing for co-breaking and for homogeneity of spreads against the minimum bid rate (MBR, the key policy rate). These tests capture the idea that successful steering of very short-term interest rates is inconsistent with the existence of more than one common trend driving the one-week interest rates and/or with nonstationarity of the spreads among interest rates of the same maturity (or measured against the MBR). Secondly, the impact of several shocks to the spreads (e.g. interest rate expectations, volumes of open market operations, interest rate volatility, policy interventions, and credit risk) is assessed by jointly modelling their behaviour. We show that, after August 2007, euro area commercial banks started paying a premium to participate in the ECB liquidity auctions. This puzzling phenomenon can be understood through the interplay between, on the one hand, adverse selection in the interbank market and, on the other hand, the broad range of collateral accepted by the ECB. We also show that after August 2007 the ECB steered the “risk-free” rate close to the policy rate, but did not fully offset the impact of the credit events on other money market rates.
    JEL Classification: C32, E43, E50, E58, G15. Keywords: co-breaking, credit risk, euro area, fractional co-integration, fractionally integrated factor vector autoregressive model, liquidity risk, long memory, money market interest rates, structural change, sub-prime credit crisis
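    The first step, checking stationarity of a spread against the policy rate, can be sketched with a standard ADF unit-root test as a simple stand-in for the paper's co-breaking and fractional cointegration machinery. The simulated stepwise policy rate and the `eonia_1w` series below are hypothetical placeholders.

```python
# A minimal sketch: ADF test for stationarity of a money market spread
# against the policy rate.  Simulated rates stand in for the real data.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(7)
T = 400
policy_rate = 4.0 + 0.25 * (rng.random(T) < 0.02).cumsum()   # stepwise MBR
eonia_1w = policy_rate + 0.05 + rng.normal(0, 0.02, T)       # stationary spread

spread = eonia_1w - policy_rate
stat, pvalue, *_ = adfuller(spread)
print(f"ADF statistic {stat:.2f}, p-value {pvalue:.3f} "
      "(a small p-value rejects a unit root in the spread)")
```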