Voting features based classifier with feature construction and its application to predicting financial distress
Voting features based classifiers (VFC for short) have been shown to perform well on most real-world data sets. They are robust to irrelevant features and missing feature values. In this paper, we introduce an extension to VFC, called the voting features based classifier with feature construction (VFCC for short), and show its application to the problem of predicting whether a bank will encounter financial distress by analyzing its current financial statements. The previously developed VFC learns a set of rules whose antecedents contain a single condition based on a single feature. The VFCC algorithm proposed in this work, on the other hand, constructs rules whose antecedents may contain conjunctions of conditions on several features. Experimental results on recent financial ratios of banks in Turkey show that the VFCC algorithm achieves better accuracy than other well-known rule-learning classification algorithms. © 2009 Elsevier Ltd. All rights reserved.
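The voting idea the abstract describes can be sketched in a few lines. This is a minimal sketch, not the paper's actual rule format: the `feature_votes` table below is a hypothetical stand-in for the per-feature rules a VFC would learn from training data.

```python
from collections import defaultdict

def vfc_predict(example, feature_votes):
    """Aggregate per-feature class votes, voting-features style.

    feature_votes maps (feature_index, feature_value) -> {class: vote_weight}.
    Features with no matching entry (e.g. missing values) simply abstain,
    which illustrates why the scheme is robust to missing feature values.
    """
    totals = defaultdict(float)
    for i, value in enumerate(example):
        for cls, weight in feature_votes.get((i, value), {}).items():
            totals[cls] += weight
    # Predicted class is the one with the largest total vote.
    return max(totals, key=totals.get) if totals else None

# Hypothetical toy rules: feature 0 = liquidity band, feature 1 = leverage band.
votes = {
    (0, "low"):  {"distress": 0.8, "healthy": 0.2},
    (0, "high"): {"distress": 0.1, "healthy": 0.9},
    (1, "high"): {"distress": 0.7, "healthy": 0.3},
}
print(vfc_predict(["low", "high"], votes))  # "distress"
```

A VFCC-style rule with a conjunctive antecedent would correspond to keying the vote table on a tuple of several (feature, value) conditions rather than a single one.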
Predicting business failure using artificial intelligence system
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Predicting business insolvency is considered one of the main supportive sources of information for decision making for financial institutions, investors, creditors, and other participants in the business market. Financial reporting systems provide relevant information that can be used to assess the financial position of firms. It is crucial to have classification and prediction models that can analyse this financial information and provide accurate assurance for users about business health. Recent studies have explored the use of machine learning tools as substitutes for traditional statistical methods in developing models that classify firm insolvency according to financial statement information. However, none of these models is an ideal classifier, since each produces a certain percentage of wrong outputs, which is a crucial consideration: every percentage of wrong responses can mean massive financial losses for stakeholders. Therefore, this study proposes new insolvency classification and prediction models based on machine learning modelling techniques to develop an improved classifier.
Individual modelling techniques using statistical methods and machine learning were used to develop the classification model of business insolvency. The results showed that the machine learning methods outperformed the statistical methods. Deep Learning (DPL) achieved the highest performance on all performance measurements used in the study and was the best individual classifier, with an average accuracy of 97.2% using the all-years dataset. The Ensemble-Boosted Decision Tree classifier ranked second, followed by the Decision Tree classifier. Thus, the DPL modelling approach has been shown to be useful for business insolvency classification.
A key contribution to enhancing individual classifier outputs is the use of traditional combining methods alongside two aggregation methods new to business insolvency (Fuzzy Logic and the Consensus Approach). The Consensus Approach showed the best improvement over the results of all individual classifiers, with an average accuracy of 97.7%, and is considered the best classification method in comparison not only with the individual classifiers but also with the traditional combiners.
This study pioneers the development of a time-series business insolvency prediction model with Big Data for UK businesses. The aim of the model is to provide early prediction of a business's health. Three prediction models were developed, based on Nonlinear Autoregressive with Exogenous Input models (NARX), the Nonlinear Autoregressive Neural Network (NAR), and a Deep Learning Time-series model (DPL-SA); they achieved average accuracy rates of 83.6%, 89.5%, and 91.35%, respectively. The results show relatively high performance in comparison with the best individual classifier (deep learning).
Ensemble of Example-Dependent Cost-Sensitive Decision Trees
Several real-world classification problems are example-dependent cost-sensitive in nature, where the costs due to misclassification vary between examples and not only between classes. However, standard classification methods do not take these costs into account and assume a constant cost for misclassification errors. In previous works, some methods that take the financial costs into account during the training of different algorithms have been proposed, with the example-dependent cost-sensitive decision tree algorithm being the one that gives the highest savings. In this paper we propose a new framework of ensembles of example-dependent cost-sensitive decision trees. The framework consists of creating different example-dependent cost-sensitive decision trees on random subsamples of the training set and then combining them using three different combination approaches. Moreover, we propose two new cost-sensitive combination approaches: cost-sensitive weighted voting and cost-sensitive stacking, the latter being based on the cost-sensitive logistic regression method. Finally, using five different databases from four real-world applications (credit card fraud detection, churn modeling, credit scoring, and direct marketing), we evaluate the proposed method against state-of-the-art example-dependent cost-sensitive techniques, namely cost-proportionate sampling, Bayes minimum risk, and cost-sensitive decision trees. The results show that the proposed algorithms achieve better results for all databases, in the sense of higher savings.
Comment: 13 pages, 6 figures. Submitted for possible publication.
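The "savings" criterion this abstract evaluates against can be illustrated with a small sketch. This is an assumption-laden toy, not the paper's implementation: the cost vectors are hypothetical, and savings is measured here against the cheaper of the two trivial all-positive/all-negative policies.

```python
def total_cost(y_true, y_pred, cost_fp, cost_fn):
    """Sum example-dependent misclassification costs: each example carries
    its own false-positive and false-negative cost (e.g. the transaction
    amount in fraud detection), rather than one cost per class."""
    cost = 0.0
    for t, p, cfp, cfn in zip(y_true, y_pred, cost_fp, cost_fn):
        if p == 1 and t == 0:
            cost += cfp  # false positive
        elif p == 0 and t == 1:
            cost += cfn  # false negative
    return cost

def savings(y_true, y_pred, cost_fp, cost_fn):
    """Savings = 1 - cost(model) / cost(best trivial constant policy)."""
    n = len(y_true)
    base = min(total_cost(y_true, [0] * n, cost_fp, cost_fn),
               total_cost(y_true, [1] * n, cost_fp, cost_fn))
    return 1.0 - total_cost(y_true, y_pred, cost_fp, cost_fn) / base

# Hypothetical data: positives 0 and 2 carry example-specific FN costs.
y_true  = [1, 0, 1, 0]
cost_fp = [10, 10, 10, 10]
cost_fn = [100, 5, 20, 5]
print(savings(y_true, [1, 0, 1, 0], cost_fp, cost_fn))  # 1.0 (perfect)
```

Under this measure a classifier can have high accuracy yet negative savings if it misclassifies the few expensive examples, which is the motivation for example-dependent training.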
Prediction of Banks Financial Distress
In this research we conduct a comprehensive review of the existing literature on prediction techniques that have been used to assist in predicting bank distress. We categorized the review results into groups depending on the prediction technique used. Our categorization began with the time factor of the literature found: we mark the literature from the period 1990-2010 as the history of prediction techniques and the literature from after this period until 2013 as recent prediction techniques, and then present the strengths and weaknesses of both. We conclude that no specific type of technique fits every bank distress problem, although we found that intelligent hybrid techniques are considered the strongest candidate methods in terms of accuracy and reputation.
Ensemble Committees for Stock Return Classification and Prediction
This paper considers a portfolio trading strategy formulated by algorithms in the field of machine learning. The profitability of the strategy is measured by the algorithm's capability to consistently and accurately identify stock indices with positive or negative returns, and to generate a preferred portfolio allocation on the basis of a learned model. Stocks are characterized by time-series data sets consisting of technical variables that reflect market conditions in a previous time interval, which are utilized to produce binary classification decisions in subsequent intervals. The learned model is constructed as a committee of random forest classifiers, a non-linear support vector machine classifier, a relevance vector machine classifier, and a constituent ensemble of k-nearest neighbors classifiers. The Global Industry Classification Standard (GICS) is used to explore the ensemble model's efficacy within the context of various fields of investment, including Energy, Materials, Financials, and Information Technology. Data from 2006 to 2012, inclusive, are considered, chosen for providing a range of market circumstances for evaluating the model. The model is observed to achieve an accuracy of approximately 70% when predicting stock price returns three months in advance.
Comment: 15 pages, 4 figures. Neukom Institute Computational Undergraduate Research prize, second place.
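The simplest form of the committee combination described above is an unweighted majority vote over heterogeneous members. In this sketch the threshold-rule members are hypothetical stand-ins for the paper's random forest, SVM, RVM, and k-NN components; any callable mapping a feature vector to a +1/-1 decision fits the same interface.

```python
from collections import Counter

def committee_predict(x, members):
    """Combine member classifiers by unweighted majority vote.

    members: callables mapping a feature vector to +1 (positive return
    expected) or -1 (negative). The committee returns the most common vote.
    """
    votes = Counter(member(x) for member in members)
    return votes.most_common(1)[0][0]

# Hypothetical members: threshold rules on a single momentum feature x[0].
members = [
    lambda x: 1 if x[0] > 0.0 else -1,
    lambda x: 1 if x[0] > 0.1 else -1,
    lambda x: 1 if x[0] > -0.1 else -1,
]
print(committee_predict([0.05], members))  # 1 (two of three members vote +1)
```

A committee like this only helps when the members' errors are not perfectly correlated, which is why the paper mixes classifier families rather than duplicating one.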
The Superiority of the Ensemble Classification Methods: A Comprehensive Review
Modern technologies, characterized by cyber-physical systems and the Internet of Things, expose organizations to big data, which can in turn be processed to derive actionable knowledge. Machine learning techniques have been employed extensively in both supervised and unsupervised settings in an effort to develop systems capable of making feasible decisions in light of past data. To enhance the accuracy of supervised learning algorithms, various classification-based ensemble methods have been developed. Herein, we review the superiority exhibited by ensemble learning algorithms, based on the research that has been carried out over the years. Moreover, we compare and discuss the common classification-based ensemble methods, with an emphasis on the boosting and bagging ensemble-learning models. We conclude by setting out the superiority of the ensemble learning models over individual base learners. Keywords: Ensemble, supervised learning, Ensemble model, AdaBoost, Bagging, Randomization, Boosting, Strong learner, Weak learner, classifier fusion, classifier selection, Classifier combination. DOI: 10.7176/JIEA/9-5-05. Publication date: August 31st, 2019.
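Bagging, one of the two ensemble families the review emphasizes, can be shown in miniature: fit the same weak base learner on bootstrap resamples of the training set, then majority-vote the predictions. The decision-stump base learner and toy data below are illustrative assumptions, not taken from the review.

```python
import random
from collections import Counter

def fit_stump(sample):
    """A deliberately weak base learner: a one-feature decision stump
    that predicts class 1 above the sample's mean feature value."""
    threshold = sum(x for x, _ in sample) / len(sample)
    return lambda x: 1 if x > threshold else 0

def bagging_predict(train, test_x, fit=fit_stump, n_models=25, seed=0):
    """Bagging: train n_models copies of the base learner on bootstrap
    resamples (sampling with replacement), then majority-vote at test time."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    models = []
    for _ in range(n_models):
        resample = [rng.choice(train) for _ in train]  # bootstrap resample
        models.append(fit(resample))
    votes = Counter(model(test_x) for model in models)
    return votes.most_common(1)[0][0]

# Toy labelled data: (feature, class), separable around 0.5.
train = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]
print(bagging_predict(train, 0.9))  # 1
```

Boosting differs in that the resamples are not independent: each round reweights the training examples toward those the previous learners got wrong, and the final vote is weighted by each learner's accuracy.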