Can a Loan Valuation Adjustment (LVA) Approach Immunize Collateralized Debt from Defaults?
This study focuses on structuring tangible asset-backed loans to inhibit their endemic option to default. We adapt the pragmatic approach of a margin loan in configuring collateralized debt to yield a quasi-default-free facility, and we link this practical method to the current Basel III (2017) regulatory framework. Our new concept of the Loan Valuation Adjustment (LVA), together with a novel method to minimize it, converts the risky loan into a quasi-risk-free loan and achieves value maximization for the lending financial institution. As a result, entrepreneurial activities are promoted and economic growth is invigorated. Information asymmetry, costly bailouts and the resulting financial fragility are reduced, while depositors are endowed with a safety net equivalent to deposit insurance but without the associated moral hazard between risk-averse lenders and borrowers.
The Bivariate Normal Copula
We collect well-known and less-known facts about the bivariate normal
distribution and translate them into copula language. In addition, we prove a
very general formula for the bivariate normal copula, we compute Gini's gamma,
and we provide improved bounds and approximations on the diagonal.
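As a minimal illustration (not the general formula derived in the paper), the bivariate normal copula C_rho(u, v) = Phi_2(Phi^{-1}(u), Phi^{-1}(v); rho) can be evaluated directly from the bivariate normal CDF; the sketch below uses SciPy, and the function name and parameter values are illustrative assumptions.

```python
# Minimal sketch: evaluate the bivariate normal (Gaussian) copula
#   C_rho(u, v) = Phi_2(Phi^{-1}(u), Phi^{-1}(v); rho)
# via SciPy's bivariate normal CDF. Illustrative only.
from scipy.stats import norm, multivariate_normal

def bivariate_normal_copula(u, v, rho):
    """C_rho(u, v) for u, v in (0, 1) and correlation rho in (-1, 1)."""
    x, y = norm.ppf(u), norm.ppf(v)              # Phi^{-1}(u), Phi^{-1}(v)
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([x, y])

if __name__ == "__main__":
    # On the diagonal u = v this gives C(u, u), the section of the copula
    # for which the paper discusses bounds and approximations.
    for u in (0.1, 0.5, 0.9):
        print(u, bivariate_normal_copula(u, u, rho=0.5))
```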
Least Dependent Component Analysis Based on Mutual Information
We propose to use precise estimators of mutual information (MI) to find least
dependent components in a linearly mixed signal. On the one hand this seems to
lead to better blind source separation than with any other presently available
algorithm. On the other hand, it has the advantage, compared to other
implementations of 'independent' component analysis (ICA), some of which are
based on crude approximations for MI, that the numerical values of the MI can
be used for:
(i) estimating residual dependencies between the output components;
(ii) estimating the reliability of the output, by comparing the pairwise MIs
with those of re-mixed components;
(iii) clustering the output according to the residual interdependencies.
For the MI estimator we use a recently proposed k-nearest neighbor based
algorithm. For time sequences we combine this with delay embedding, in order to
take into account non-trivial time correlations. After several tests with
artificial data, we apply the resulting MILCA (Mutual Information based Least
dependent Component Analysis) algorithm to a real-world dataset, the ECG of a
pregnant woman.
The software implementation of the MILCA algorithm is freely available at
http://www.fz-juelich.de/nic/cs/software
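A hedged sketch of steps (i) and (ii) above, using scikit-learn's FastICA and its k-nearest-neighbour mutual information estimator as stand-ins for the MILCA components; the toy signals, mixing matrix and neighbour count are illustrative assumptions, not the authors' setup.

```python
# Sketch: estimate residual pairwise dependencies between ICA output
# components with a k-nearest-neighbour MI estimator (stand-in for MILCA).
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)

# Toy linear mixture of three roughly independent sources.
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t)), rng.laplace(size=t.size)]
A = rng.normal(size=(3, 3))                      # mixing matrix
X = S @ A.T                                      # observed mixed signals

Y = FastICA(n_components=3, random_state=0).fit_transform(X)

# Pairwise MI matrix of the recovered components (kNN-based estimator).
n = Y.shape[1]
mi = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            mi[i, j] = mutual_info_regression(
                Y[:, [i]], Y[:, j], n_neighbors=6, random_state=0
            )[0]

print(np.round(mi, 3))   # near-zero entries indicate nearly independent outputs
```

Large off-diagonal entries would flag residual dependence, and the matrix can feed a hierarchical clustering of the outputs as in step (iii).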
Moody's Correlated Binomial Default Distributions for Inhomogeneous Portfolios
This paper generalizes Moody's correlated binomial default distribution for
homogeneous (exchangeable) credit portfolios, introduced by Witt, to
the case of inhomogeneous portfolios. As inhomogeneous portfolios, we consider
two cases. In the first case, we treat a portfolio whose assets have uniform
default correlation and non-uniform default probabilities. We obtain the
default probability distribution and study the effect of the inhomogeneity on
it. The second case corresponds to a portfolio with inhomogeneous default
correlation. Assets are categorized in several different sectors and the
inter-sector and intra-sector correlations are not the same. We construct the
joint default probabilities and obtain the default probability distribution. We
show that as the number of assets in each sector decreases, inter-sector
correlation becomes more important than intra-sector correlation. We study the
maximum values of the inter-sector default correlation. Our generalization
method can be applied to any correlated binomial default distribution model
which has explicit relations to the conditional default probabilities or
conditional default correlations, e.g. CreditRisk+, implied default
distributions. We also compare some popular CDO pricing models from the
viewpoint of the range of the implied tranche correlation.
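To illustrate the kind of exchangeable correlated binomial default distribution discussed here, the sketch below uses a beta-binomial mixture calibrated to a common default probability p and pairwise default correlation rho; this is a simple stand-in rather than Witt's/Moody's construction, and the portfolio parameters are illustrative.

```python
# Stand-in for an exchangeable correlated binomial default distribution:
# a beta-binomial mixture with per-asset default probability p and pairwise
# default correlation rho, showing how correlation fattens the loss tail.
import numpy as np
from scipy.stats import betabinom, binom

def default_distribution(n_assets, p, rho):
    """P(k defaults) for k = 0..n_assets under the beta-binomial mixture."""
    if rho == 0.0:
        return binom.pmf(np.arange(n_assets + 1), n_assets, p)
    a = p * (1.0 - rho) / rho          # Beta(a, b) mixing distribution:
    b = (1.0 - p) * (1.0 - rho) / rho  # mean p, pairwise correlation rho
    return betabinom.pmf(np.arange(n_assets + 1), n_assets, a, b)

if __name__ == "__main__":
    n, p = 50, 0.05
    for rho in (0.0, 0.05, 0.20):
        dist = default_distribution(n, p, rho)
        tail = dist[10:].sum()          # P(10 or more defaults)
        print(f"rho={rho:.2f}  P(>=10 defaults)={tail:.4f}")
```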
Estimating Mutual Information
We present two classes of improved estimators for mutual information M(X, Y),
from samples of random points distributed according to some joint probability
density μ(x, y). In contrast to conventional estimators based on binnings, they
are based on entropy estimates from k-nearest neighbour distances. This means
that they are data efficient (with k = 1 we resolve structures down to the
smallest possible scales), adaptive (the resolution is higher where data are
more numerous), and have minimal bias. Indeed, the bias of the underlying
entropy estimates is mainly due to non-uniformity of the density at the
smallest resolved scale, giving typically systematic errors which scale as
functions of k/N for N points. Numerically, we find that both families become
exact for independent distributions, i.e. the estimator vanishes (up to
statistical fluctuations) if μ(x, y) = μ(x)μ(y). This holds for all tested
marginal distributions and for all dimensions of x and y. In addition, we give
estimators for redundancies between more than 2 random variables. We compare
our algorithms in detail with existing algorithms. Finally, we demonstrate the
usefulness of our estimators for assessing the actual independence of
components obtained from independent component analysis (ICA), for improving
ICA, and for estimating the reliability of blind source separation.
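A minimal sketch of the first k-nearest-neighbour estimator described above (digamma form with Chebyshev-norm neighbour counts), written against SciPy; details such as tie handling differ from the authors' reference implementation, and the test parameters are illustrative assumptions.

```python
# Minimal sketch of the first KSG estimator:
#   I(X;Y) ~ psi(k) + psi(N) - < psi(n_x + 1) + psi(n_y + 1) >,
# where n_x(i), n_y(i) count points within the i-th point's k-th neighbour
# distance (Chebyshev norm) in each marginal space. Illustrative only.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mutual_information(x, y, k=4):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x = x.reshape(len(x), -1)
    y = y.reshape(len(y), -1)
    n = len(x)
    xy = np.hstack([x, y])

    # Distance to the k-th nearest neighbour in the joint space (self excluded).
    eps = cKDTree(xy).query(xy, k=k + 1, p=np.inf)[0][:, -1]

    # Neighbour counts strictly within eps in each marginal space (self excluded).
    nx = cKDTree(x).query_ball_point(x, eps - 1e-12, p=np.inf, return_length=True) - 1
    ny = cKDTree(y).query_ball_point(y, eps - 1e-12, p=np.inf, return_length=True) - 1

    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Independent samples: the estimate should fluctuate around zero.
    print(ksg_mutual_information(rng.normal(size=5000), rng.normal(size=5000)))
    # Correlated Gaussians, rho = 0.8: true MI = -0.5 * log(1 - rho**2), about 0.51 nats.
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=5000)
    print(ksg_mutual_information(z[:, 0], z[:, 1]))
```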
A joint scoring model for peer-to-peer and traditional lending: A bivariate model with copula dependence
We analyse the dependence between defaults in peer-to-peer lending and defaults recorded by credit bureaus. To achieve this, we propose a new flexible bivariate regression model that is suitable for binary imbalanced samples. We use different copula functions to model the dependence structure between defaults in the two credit markets. We implement the model in the R package BivGEV and explore the empirical properties of the proposed fitting procedure through a Monte Carlo study. Applying this proposal to a comprehensive data set provided by Lending Club shows a significant level of dependence between defaults in the peer-to-peer and credit bureau markets. Finally, we find that our model outperforms the bivariate probit and univariate logit models in predicting peer-to-peer default, and in estimating the value at risk and the expected shortfall.
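To make the copula-dependence idea concrete, the sketch below joins two illustrative marginal default probabilities with a Clayton copula, one of several copula families one could choose; the paper's BivGEV model additionally uses GEV margins and a full regression structure, so this is only a toy illustration with assumed parameter values.

```python
# Toy sketch: joint default probability in two credit markets under a Clayton
# copula, P(both default) = C_theta(p1, p2); theta > 0 adds lower-tail dependence.
def clayton_copula(u, v, theta):
    """Clayton copula C_theta(u, v) = (u**-theta + v**-theta - 1)**(-1/theta)."""
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

def joint_default_probability(p_p2p, p_bureau, theta):
    # Default in market i is the event {U_i <= p_i} for uniform margins U_i.
    return clayton_copula(p_p2p, p_bureau, theta)

if __name__ == "__main__":
    p1, p2 = 0.08, 0.05          # illustrative marginal default rates
    print("independence:", p1 * p2)
    for theta in (0.5, 2.0, 5.0):
        print(f"theta={theta}:", joint_default_probability(p1, p2, theta))
```

As theta grows, the joint default probability rises from the independence value p1*p2 toward min(p1, p2), which is the effect a positive default dependence between the two markets produces.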
The Term Structure of Interest Rates and its Impact on the Liability Adequacy Test for Insurance Companies in Brazil
The Brazilian regulation for applying the Liability Adequacy Test (LAT) to technical provisions in insurance companies requires that the current estimate be discounted by a term structure of interest rates (hereafter TSIR). This article analyzes the LAT results derived from using various models to build the TSIR: the cubic spline interpolation technique, Svensson's model (adopted by the regulator) and Vasicek's model. To achieve this objective, the exchange rates of BM&FBOVESPA trading days were used to model the TSIR and, consequently, to discount the insurance company's cash flow. The results indicate that: (i) the LAT is sensitive to the choice of the model used to build the TSIR; (ii) this sensitivity increases with cash flow longevity; (iii) the adoption of an ultimate forward rate (UFR) for the Brazilian insurance market should be evaluated by the regulator, in order to stabilize the trajectory of the yield curve at longer maturities. The technical provision is among the main solvency items of insurance companies, and the LAT result is a significant indicator of the quality of this provision, as it evaluates its sufficiency or insufficiency. Thus, this article bridges a gap in the Brazilian actuarial literature by introducing the main methodologies available for modeling the yield curve and a practical application to analyze the impact of this choice on the LAT.
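For reference, a minimal sketch of the Svensson spot-rate curve (the model adopted by the regulator) is given below; the parameter values are purely illustrative and not a calibrated Brazilian curve.

```python
# Minimal sketch of the Svensson (1994) spot-rate curve used to build a TSIR.
# beta0 is the long-run level; beta0 + beta1 is the short end of the curve.
import numpy as np

def svensson_spot_rate(t, beta0, beta1, beta2, beta3, tau1, tau2):
    """Spot rate r(t) for maturity t > 0 (in years)."""
    t = np.asarray(t, dtype=float)
    x1, x2 = t / tau1, t / tau2
    f1 = (1.0 - np.exp(-x1)) / x1
    f2 = f1 - np.exp(-x1)
    f3 = (1.0 - np.exp(-x2)) / x2 - np.exp(-x2)
    return beta0 + beta1 * f1 + beta2 * f2 + beta3 * f3

if __name__ == "__main__":
    maturities = np.array([1.0, 5.0, 10.0, 30.0])
    rates = svensson_spot_rate(maturities, beta0=0.11, beta1=-0.02,
                               beta2=0.01, beta3=0.015, tau1=2.0, tau2=10.0)
    # The liability cash flows would then be discounted at these rates,
    # e.g. with factors exp(-r * t).
    print(dict(zip(maturities, np.round(rates, 4))))
```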