Handling Uncertainty in Social Lending Credit Risk Prediction with a Choquet Fuzzy Integral Model
As one of the main business models in the financial technology field,
peer-to-peer (P2P) lending has disrupted traditional financial services by
providing an online platform for lending money that has remarkably reduced
financial costs. However, the inherent uncertainty in P2P loans can result in
huge financial losses for P2P platforms. Therefore, accurate risk prediction is
critical to the success of P2P lending platforms. Indeed, even a small
improvement in credit risk prediction would be of benefit to P2P lending
platforms. This paper proposes an innovative credit risk prediction framework
that fuses base classifiers based on a Choquet fuzzy integral. Choquet integral
fusion improves creditworthiness evaluation by synthesizing the prediction
results of multiple classifiers and finding the greatest consistency among their
conflicting and concordant outputs. The proposed model was
validated through experimental analysis on a real-world dataset from a
well-known P2P lending marketplace. The empirical results indicate that the
combination of multiple classifiers based on fuzzy Choquet integrals
outperforms the best base classifiers used in credit risk prediction to date.
In addition, the proposed methodology is superior to several conventional
combination techniques.
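The discrete Choquet integral aggregates the base classifiers' scores against a fuzzy measure defined on coalitions of classifiers, so that agreeing subsets can count for more (or less) than the sum of their parts. A minimal sketch of the aggregation step, assuming scores in [0, 1] and a user-supplied fuzzy measure (the classifier names and example measure below are illustrative, not from the paper):

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of per-classifier scores w.r.t. a fuzzy measure.

    scores: dict mapping classifier name -> score in [0, 1]
    mu:     dict mapping frozenset of classifier names -> measure in [0, 1];
            assumed monotone with mu(full set) = 1 (not checked here)
    """
    order = sorted(scores, key=scores.get)   # ascending by score
    total, prev = 0.0, 0.0
    for i, name in enumerate(order):
        coalition = frozenset(order[i:])     # classifiers scoring >= current one
        total += (scores[name] - prev) * mu[coalition]
        prev = scores[name]
    return total
```

With an additive measure the integral collapses to a plain weighted average; it is the non-additive measures that let the fusion reward consistency among subsets of classifiers.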
Fuzzy Logic and Its Uses in Finance: A Systematic Review Exploring Its Potential to Deal with Banking Crises
The major success of fuzzy logic in the field of remote control opened the door to its application in many other fields, including finance. However, there has been no updated, comprehensive literature review of the uses of fuzzy logic in the financial field. For that reason, this study critically examines fuzzy logic as an effective, useful method for financial research and, particularly, for the management of banking crises. The data sources were Web of Science and Scopus, followed by an assessment of the records according to pre-established criteria and an arrangement of the information along two main axes: financial markets and corporate finance. A major finding of this analysis is that fuzzy logic has not yet been used to address banking crises or as an alternative to ensure the resolvability of banks while minimizing the impact on the real economy. We therefore consider this article relevant for supervisory and regulatory bodies, as well as for banks and academic researchers, since it opens the door to several new research axes on banking crisis analysis using artificial intelligence techniques.
Proceedings of the First Karlsruhe Service Summit Workshop - Advances in Service Research, Karlsruhe, Germany, February 2015 (KIT Scientific Reports; 7692)
Since April 2008, KSRI has fostered interdisciplinary research in order to support and advance progress in the service domain. KSRI brings together academia and industry while serving as a European research hub for service science. For the KSS2015 Research Workshop, we invited submissions of theoretical and empirical research dealing with relevant topics in the context of services, including energy, mobility, health care, social collaboration, and web technologies.
Predicting financial distress using corporate efficiency and corporate governance measures
Credit models are essential for controlling credit risk, and accurately predicting
bankruptcy and financial distress has become even more necessary after the recent
global financial crisis. Although accounting and financial information have been the main
variables in corporate credit models for decades, academics continue searching for
new attributes to model the probability of default. This thesis investigates the use of
corporate efficiency and corporate governance measures in standard statistical credit
models using cross-sectional and hazard models.
Relative efficiency, as calculated by Data Envelopment Analysis (DEA), can be used
in prediction, but most previous studies using such variables have failed to respect
the assumptions of Variable Returns to Scale and sample homogeneity, so efficiency
may not have been measured correctly. This research builds industry-specific models
to incorporate DEA efficiency scores for different industries, and it is the first to
decompose overall Technical Efficiency into Pure Technical Efficiency and Scale
Efficiency in the context of modelling financial distress. Efficiency measures are
found to improve predictive accuracy, and Scale Efficiency proves a more important
measure of efficiency than the others.
Furthermore, as no previous study has attempted a panel analysis of DEA scores to
predict distress, this research extends the cross-sectional analysis to a survival
analysis by using Malmquist DEA and discrete hazard models. Results show that dynamic
efficiency scores calculated with reference to the global efficiency frontier have the
best discriminant power to classify distressed and non-distressed companies.
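Under constant returns to scale with a single input and a single output, DEA efficiency reduces to each unit's output-per-input ratio divided by the best ratio in the sample; the decomposition used in the thesis then reads Technical Efficiency = Pure Technical Efficiency × Scale Efficiency. A toy sketch under those simplifying assumptions (real DEA solves a linear program per firm, so this is illustrative only):

```python
def dea_crs_scores(inputs, outputs):
    """Single-input/single-output DEA under constant returns to scale (CRS).

    Each unit's technical efficiency is its output-per-input ratio divided
    by the best ratio in the sample, so frontier units score exactly 1.
    """
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# The decomposition referred to above (computed with full CRS and VRS models):
#   Scale Efficiency = TE_CRS / TE_VRS, isolating losses due to scale alone.
```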
Four groups of corporate governance measures (board composition, ownership
structure, management compensation, and director and manager characteristics) are
incorporated in the hazard models to predict financial distress. It has been found that
state control, institutional ownership, salaries to independent directors, the Chair’s
age, the CEO’s education, the work location of independent directors and the
concurrent position of the CEO have significant associations with the risk of
financial distress. The best predictive accuracy is achieved by the model that
combines governance measures, financial ratios, and macroeconomic variables.
Policy implications are offered for the regulatory commission.
Isotonic Distributional Regression
Distributional regression estimates the probability distribution of a response variable conditional on covariates. The estimated conditional distribution comprehensively summarizes the available information on the response variable and allows one to derive all statistical quantities of interest, such as the conditional mean, threshold exceedance probabilities, or quantiles.
This thesis develops isotonic distributional regression, a method for estimating conditional distributions under the assumption of a monotone relationship between covariates and a response variable. The response variable is univariate and real-valued, and the covariates lie in a partially ordered set. The monotone relationship is formulated in terms of stochastic order constraints, that is, the response variable increases in a stochastic sense as the covariates increase in the partial order. This assumption alone yields a shape-constrained non-parametric estimator, which does not involve any tuning parameters.
The estimation of distributions under stochastic order restrictions has already been studied for various stochastic orders, but so far only with totally ordered covariates. Apart from considering more general partially ordered covariates, the first main contribution of this thesis lies in a shift of focus from estimation to prediction. Distributional regression is the backbone of probabilistic forecasting, which aims at quantifying the uncertainty about a future quantity of interest comprehensively in the form of probability distributions. When analyzed with respect to predominant criteria for probabilistic forecast quality, isotonic distributional regression is shown to have desirable properties. In addition, this thesis develops an efficient algorithm for the computation of isotonic distributional regression, and proposes an estimator under a weaker, previously not thoroughly studied stochastic order constraint.
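In the simplest totally ordered case (a single real covariate, distinct covariate values), the IDR estimate of the conditional CDF at a fixed threshold t is the antitonic least-squares fit of the indicators 1{y_i ≤ t} against x_i, computable by the pool-adjacent-violators algorithm. A sketch under those assumptions (function names are illustrative; ties in the covariate would first be pooled):

```python
def pava_decreasing(values):
    """Least-squares fit constrained to be non-increasing (pool-adjacent-violators)."""
    blocks = []                              # each block: [mean, weight]
    for v in values:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and blocks[-2][0] < blocks[-1][0]:
            m2, w2 = blocks.pop()            # pool the violating pair of blocks
            m1, w1 = blocks[-1]
            blocks[-1] = [(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2]
    return [m for m, w in blocks for _ in range(w)]

def idr_cdf_at(x, y, t):
    """IDR estimate of F_{x_i}(t): the CDF must be non-increasing in x when
    the response is stochastically increasing in the covariate."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    fit = pava_decreasing([1.0 if y[i] <= t else 0.0 for i in order])
    out = [0.0] * len(x)
    for pos, i in enumerate(order):
        out[i] = fit[pos]
    return out
```

Note the absence of tuning parameters: the monotonicity constraint alone determines the fit, which is the point made in the abstract.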
A main application of isotonic distributional regression is the uncertainty quantification for point forecasts. Such point forecasts sometimes stem from external sources, like physical models or expert surveys, but often they are generated with statistical models. The second contribution of this thesis is the extension of isotonic distributional regression to allow covariates that are point predictions from a regression model, which may be trained on the same data to which isotonic distributional regression is to be applied. This combination yields a so-called distributional index model. Asymptotic consistency is proved under suitable assumptions, and real data applications demonstrate the usefulness of the method.
Isotonic distributional regression provides a benchmark in forecasting problems, as it allows one to quantify the merits of a specific, tailored model for the application at hand over a generic method that relies only on monotonicity. In such comparisons it is vital to assess the significance of forecast superiority or of forecast misspecification. The third contribution of this thesis is the development of new, safe methods for forecast evaluation, which require no or minimal assumptions on the data-generating processes.
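Comparisons of probabilistic forecasts are typically scored with a proper scoring rule such as the continuous ranked probability score (CRPS), which for an empirical (ensemble) forecast has a closed form. A small sketch of that form (the pairwise term is O(n²), fine for illustration; this is the standard identity, not a method specific to the thesis):

```python
def crps_ensemble(members, obs):
    """CRPS of an empirical forecast given by ensemble members:
    E|X - y| - 0.5 * E|X - X'|, with X, X' drawn uniformly from the members."""
    n = len(members)
    t1 = sum(abs(m - obs) for m in members) / n
    t2 = sum(abs(a - b) for a in members for b in members) / (2 * n * n)
    return t1 - t2
```

For a single-member (point) forecast the second term vanishes and the CRPS reduces to absolute error, which is why it is a natural bridge between point and distributional evaluation.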
A comparison of the CAR and DAGAR spatial random effects models with an application to diabetics rate estimation in Belgium
When hierarchically modelling an epidemiological phenomenon on a finite collection of sites in space, one must always take a latent spatial effect into account in order to capture the correlation structure that links the phenomenon to the territory. In this work, we compare two autoregressive spatial models that can be used for this purpose: the classical CAR model and the more recent DAGAR model. Unlike the former, the latter has a desirable property: its ρ parameter can be naturally interpreted as the average neighbor-pair correlation and, in addition, this parameter can be directly estimated when the effect is modelled using a DAGAR rather than a CAR structure. As an application, we model the diabetics rate in Belgium in 2014 and show the adequacy of these models in predicting the response variable when no covariates are available.
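The two models differ in how they turn an adjacency structure into a precision matrix. A sketch for small graphs, assuming a 0/1 adjacency matrix and, for DAGAR, that the vertices are already in the chosen DAG order (the b and f coefficients below follow the published DAGAR construction, with n_i the number of directed neighbours of vertex i):

```python
def car_precision(W, rho, tau=1.0):
    """Proper CAR precision: Q = tau * (D - rho * W), with D = diag(row sums)."""
    n = len(W)
    return [[tau * ((sum(W[i]) if i == j else 0.0) - rho * W[i][j])
             for j in range(n)] for i in range(n)]

def dagar_precision(W, rho):
    """DAGAR precision Q = (I - B)^T F (I - B); the directed neighbours of
    vertex i are its undirected neighbours with a smaller index."""
    n = len(W)
    B = [[0.0] * n for _ in range(n)]
    F = [0.0] * n
    for i in range(n):
        nbrs = [j for j in range(i) if W[i][j]]
        k = len(nbrs)
        F[i] = (1 + (k - 1) * rho ** 2) / (1 - rho ** 2)  # = 1 when k == 0
        for j in nbrs:
            B[i][j] = rho / (1 + (k - 1) * rho ** 2)
    A = [[(1.0 if i == j else 0.0) - B[i][j] for j in range(n)]
         for i in range(n)]                                # A = I - B
    return [[sum(A[m][i] * F[m] * A[m][j] for m in range(n))
             for j in range(n)] for i in range(n)]         # A^T diag(F) A
```

In the DAGAR construction the same ρ plays the role of the average neighbor-pair correlation, which is the interpretability advantage the abstract highlights; in the CAR model ρ has no such direct reading.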