Variable Selection using Non-Standard Optimisation of Information Criteria
The question of variable selection in a regression model is a major open research topic in econometrics. Traditionally, two broad classes of methods have been used: sequential testing and information criteria. The advent of large datasets used by institutions such as central banks has exacerbated this model selection problem. This paper provides a new solution in the context of information criteria. The solution rests on the judicious selection of a subset of models for consideration, using nonstandard optimisation algorithms for information criterion minimisation. In particular, simulated annealing and genetic algorithms are considered. Both a Monte Carlo study and an empirical forecasting application to UK CPI inflation suggest that the new methods are worthy of further consideration.
Keywords: Simulated Annealing, Genetic Algorithms, Information Criteria, Model Selection, Forecasting, Inflation
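The subset-search idea in this abstract can be sketched with a plain simulated annealing loop over variable-inclusion vectors. This is a minimal illustration, not the authors' implementation; `toy_ic` is an invented stand-in for a real AIC/BIC computed from a fitted regression.

```python
import math
import random

def simulated_annealing_ic(criterion, n_vars, n_iter=5000, t0=2.0, cooling=0.999, seed=0):
    """Minimise an information criterion over 0/1 variable-inclusion vectors."""
    rng = random.Random(seed)
    current = tuple(rng.randint(0, 1) for _ in range(n_vars))
    cur_val = criterion(current)
    best, best_val = current, cur_val
    t = t0
    for _ in range(n_iter):
        # Propose a neighbour by flipping one randomly chosen inclusion flag.
        i = rng.randrange(n_vars)
        cand = current[:i] + (1 - current[i],) + current[i + 1:]
        cand_val = criterion(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cand_val < cur_val or rng.random() < math.exp((cur_val - cand_val) / t):
            current, cur_val = cand, cand_val
            if cur_val < best_val:
                best, best_val = current, cur_val
        t *= cooling  # geometric cooling schedule
    return best, best_val

def toy_ic(flags, true_set=(0, 2), penalty=2.0):
    # Hypothetical criterion: omitting a truly relevant variable costs heavily
    # (poor fit), and every included variable pays the parsimony penalty.
    miss = sum(1 for i in true_set if not flags[i])
    return 10.0 * miss + penalty * sum(flags)

if __name__ == "__main__":
    print(simulated_annealing_ic(toy_ic, n_vars=8))
```

The appeal over exhaustive search is that the annealer evaluates only a few thousand candidate models rather than all 2^k subsets, which is what makes the approach feasible on the large datasets the abstract mentions.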
Choosing the Optimal Set of Instruments from Large Instrument Sets
It is well known that instrumental variables (IV) estimation is sensitive to the choice of instruments, both in small samples and asymptotically. Recently, Donald and Newey (2001) suggested a simple method for choosing the instrument set. The method involves minimising the approximate mean square error (MSE) of a given IV estimator, where the MSE is obtained using refined asymptotic theory. An issue with the work of Donald and Newey (2001) is that, when considering large sets of valid instruments, it is not clear how to order the instruments when choosing which ones ought to be included in the estimation. The present paper provides a possible solution to the problem using nonstandard optimisation algorithms. The properties of the algorithms are discussed. A Monte Carlo study illustrates the potential of the new method.
Keywords: Instrumental Variables, MSE, Simulated Annealing, Genetic Algorithms
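Because the instruments need not be ordered, the search is over arbitrary subsets, which a genetic algorithm handles naturally. Below is a hedged sketch of that idea: `toy_mse` is an invented proxy for the approximate MSE (the real criterion comes from the refined asymptotics in Donald and Newey, 2001), and the GA itself is a generic bitstring minimiser, not the paper's exact algorithm.

```python
import random

def ga_minimise(objective, n_bits, pop_size=30, n_gen=80, p_mut=0.05, seed=0):
    """Minimise an objective over 0/1 instrument-inclusion vectors with a plain
    genetic algorithm (tournament selection, uniform crossover, bit-flip mutation)."""
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(n_bits)) for _ in range(pop_size)]

    def tournament():
        a, b = rng.choice(pop), rng.choice(pop)
        return a if objective(a) <= objective(b) else b

    best = min(pop, key=objective)
    for _ in range(n_gen):
        nxt = [best]  # elitism: the incumbent always survives
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            # Uniform crossover, then independent bit-flip mutation.
            child = tuple(p1[i] if rng.random() < 0.5 else p2[i] for i in range(n_bits))
            child = tuple(1 - b if rng.random() < p_mut else b for b in child)
            nxt.append(child)
        pop = nxt
        best = min(pop, key=objective)
    return best, objective(best)

def toy_mse(flags, strong=(0, 1, 2)):
    # Invented stand-in for the approximate MSE: dropping a strong instrument
    # costs 5 (bias), each instrument kept beyond the strong ones costs 1 (variance).
    miss = sum(1 for i in strong if not flags[i])
    extra = sum(flags) - (len(strong) - miss)
    return 5.0 * miss + 1.0 * extra
```

Crossover lets the GA combine good partial instrument sets found in different candidates, which is exactly the advantage over any method that requires a prior ordering of the instruments.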
Prediction of claims in export credit finance: a comparison of four machine learning techniques
This study evaluates four machine learning (ML) techniques (Decision Trees (DT), Random Forests (RF), Neural Networks (NN) and Probabilistic Neural Networks (PNN)) on their ability to accurately predict export credit insurance claims. Additionally, we compare the performance of the ML techniques against a simple benchmark (BM) heuristic. The analysis is based on a dataset provided by the Berne Union, which is the most comprehensive collection of export credit insurance data and has been used in only two scientific studies so far. All ML techniques performed relatively well in predicting whether or not claims would be incurred and, with limitations, in predicting the order of magnitude of the claims. No satisfactory results were achieved in predicting actual claim ratios. RF performed significantly better than DT, NN and PNN across all prediction tasks, and most reliably carried its validation performance forward to test performance.
Prediction of progression in idiopathic pulmonary fibrosis using CT scans at baseline: A quantum particle swarm optimization - random forest approach
Idiopathic pulmonary fibrosis (IPF) is a fatal lung disease characterized by an unpredictable progressive decline in lung function. The natural history of IPF is unknown and the prediction of disease progression at the time of diagnosis is notoriously difficult. High resolution computed tomography (HRCT) has been used for the diagnosis of IPF, but not generally for monitoring purposes. The objective of this work is to develop a novel predictive model for the radiological progression pattern at the voxel-wise level using only baseline HRCT scans. There are two main challenges: (a) obtaining a data set of features for regions of interest (ROIs) on baseline HRCT scans and their follow-up status; and (b) simultaneously selecting important features from a high-dimensional space and optimizing the prediction performance. We resolved the first challenge by implementing a study design and having an expert radiologist contour ROIs on baseline scans, depending on their progression status in follow-up visits. For the second challenge, we integrated feature selection with prediction by developing an algorithm using a wrapper method that combines quantum particle swarm optimization, to select a small number of features, with random forest, to classify early patterns of progression. We applied our proposed algorithm to analyze anonymized HRCT images from 50 IPF subjects from a multi-center clinical trial. We showed that it yields a parsimonious model with 81.8% sensitivity, 82.2% specificity and an overall accuracy rate of 82.1% at the ROI level. These results are superior to other popular feature selection and classification methods, in that our method produces higher accuracy in prediction of progression and more balanced sensitivity and specificity with a smaller number of selected features. Our work is the first approach to show that it is possible to use only baseline HRCT scans to predict progressive ROIs at 6 months to 1 year follow-ups using artificial intelligence.
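The wrapper idea described here, a swarm proposes feature masks and a classifier's performance scores them, can be sketched with a standard binary PSO (a simplification of the quantum-behaved variant the authors use). `toy_fitness` is an invented integer-scaled proxy for the random-forest accuracy that would score masks in the real pipeline.

```python
import math
import random

def binary_pso_select(fitness, n_feats, n_particles=20, n_iter=50,
                      w=0.7, c1=1.5, c2=1.5, seed=0):
    """Wrapper feature selection with a plain binary PSO; `fitness` maps a 0/1
    feature mask to a score to maximise (e.g. classifier accuracy minus a
    small cost per selected feature)."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_feats)] for _ in range(n_particles)]
    vel = [[0.0] * n_feats for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(tuple(p)) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(n_feats):
                # Velocity is pulled toward the personal and global bests ...
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # ... and a sigmoid transfer turns it into a bit probability.
                pos[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-vel[i][d])) else 0
            val = fitness(tuple(pos[i]))
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return tuple(gbest), gbest_val

def toy_fitness(mask, informative=(0, 3)):
    # Invented accuracy proxy (integer-scaled): each informative feature
    # adds 20, every selected feature costs 2, baseline 50.
    return 50 + 20 * sum(mask[i] for i in informative) - 2 * sum(mask)
```

The per-feature selection cost in the fitness is what steers the swarm toward the parsimonious masks the abstract emphasises; without it, the optimum would simply select everything informative plus free riders.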
Effect of Penalty Function Parameter in Objective Function of System Identification
The evaluation of an objective function for a particular model allows one to determine the optimality of a model structure with the aim of selecting an adequate model in system identification. Recently, an objective function was introduced that, besides evaluating predictive accuracy, includes a logarithmic penalty function to achieve a suitable balance between predictive accuracy and model parsimony. However, the parameter value in the penalty function was chosen arbitrarily. This paper presents a study on the effect of the penalty function parameter in model structure selection in system identification on a number of simulated models. The search was done using genetic algorithms. A representation of the sensitivity of the penalty function parameter value in model structure selection is given, along with a proposed mathematical function that defines it. A recommendation is made regarding how a suitable penalty function parameter value can be determined.
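The sensitivity being studied can be illustrated with an assumed objective of the shape the abstract describes: a fit term plus a logarithmic penalty whose weight is a parameter. Both the functional form and the toy error curve below are invented for illustration; they are not the paper's objective.

```python
import math

def objective(mse, n_params, alpha, n_obs):
    # Assumed illustrative form: log fit error plus a logarithmic
    # parsimony penalty whose weight is the parameter alpha under study.
    return math.log(mse) + alpha * n_params * math.log(n_obs) / n_obs

def select_size(mse_curve, alpha, n_obs):
    """Model size minimising the penalised objective."""
    return min(mse_curve, key=lambda p: objective(mse_curve[p], p, alpha, n_obs))

# Toy error curve: fit improves quickly, then plateaus as terms are added.
mse = {p: 1.0 / p + 0.05 for p in range(1, 21)}

print(select_size(mse, 0.5, 100))  # weak penalty: a large model is chosen
print(select_size(mse, 5.0, 100))  # strong penalty: a parsimonious model is chosen
```

Even on this toy curve the selected model size swings from the largest candidate to a handful of terms as the penalty parameter grows, which is why an arbitrarily chosen value is a genuine problem for structure selection.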
Purchasing Power Parity: The Irish Experience Re-visited
This paper looks at issues surrounding the testing of purchasing power parity using Irish data. Potential difficulties in placing the analysis in an I(1)/I(0) framework are highlighted. Recent tests for fractional integration and nonlinearity are discussed and used to investigate the behaviour of the Irish exchange rate against the United Kingdom and Germany. Little evidence of fractionality is found but there is strong evidence of nonlinearity from a variety of tests. Importantly, when the nonlinearity is modelled using a random field regression, the data conform well to purchasing power parity theory, in contrast to the findings of previous Irish studies, whose results were very mixed.
Threshold effects in multivariate error correction models
In this paper we propose a testing procedure for assessing the presence of threshold effects in nonstationary vector autoregressive models with or without cointegration. Our approach involves first testing whether the long run impact matrix characterising the VECM-type representation of the VAR switches according to the magnitude of some threshold variable; the test is valid regardless of whether the system is purely I(1), I(1) with cointegration, or stationary. Once the potential presence of threshold effects is established, we subsequently evaluate the cointegrating properties of the system in each regime through a model selection based approach whose asymptotic and finite sample properties are also established. This in turn allows us to introduce a novel nonlinear permanent and transitory decomposition of the vector process of interest.
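The core regime-switching idea, far simpler than the paper's multivariate VECM setting, can be sketched in a scalar threshold regression: grid-search the threshold variable for the split that minimises the two-regime sum of squared residuals. The data-generating process below is invented, and this sketch is not the paper's test statistic.

```python
import random

def ssr_through_origin(pairs):
    """OLS slope through the origin and the resulting sum of squared residuals."""
    sxx = sum(x * x for x, _ in pairs)
    b = sum(x * y for x, y in pairs) / sxx
    return sum((y - b * x) ** 2 for x, y in pairs)

def threshold_search(data, grid, min_obs=10):
    """Pick the threshold on q minimising the two-regime SSR.
    data: list of (q, x, y) triples; grid: candidate threshold values."""
    best_g, best_ssr = None, float("inf")
    for g in grid:
        low = [(x, y) for q, x, y in data if q <= g]
        high = [(x, y) for q, x, y in data if q > g]
        if len(low) < min_obs or len(high) < min_obs:
            continue  # trimming: keep enough observations in each regime
        ssr = ssr_through_origin(low) + ssr_through_origin(high)
        if ssr < best_ssr:
            best_g, best_ssr = g, ssr
    return best_g, best_ssr

def simulate(n=300, seed=0):
    # Invented DGP: slope 0.5 when the threshold variable q <= 0, slope 2.0 above.
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        q = rng.uniform(-2.0, 2.0)
        x = rng.gauss(0.0, 1.0)
        slope = 0.5 if q <= 0 else 2.0
        data.append((q, x, slope * x + rng.gauss(0.0, 0.05)))
    return data
```

Trimming the grid so each regime keeps a minimum number of observations mirrors standard practice in threshold testing, where splits that leave one regime nearly empty would make the fit degenerate.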