Can a Loan Valuation Adjustment (LVA) Approach Immunize Collateralized Debt from Defaults?
This study focuses on structuring tangible-asset-backed loans to inhibit their endemic option to default. We adapt the pragmatic approach of a margin loan to the configuration of collateralized debt to yield a quasi-default-free facility. We link our practical method to the current Basel III (2017) regulatory framework. Our new concept of the Loan Valuation Adjustment (LVA) and a novel method to minimize it convert the risky loan into a quasi-risk-free loan and achieve value maximization for the lending financial institution. As a result, entrepreneurial activities are promoted and economic growth is invigorated. Information asymmetry, costly bailouts, and the resulting financial fragility are reduced, while depositors are endowed with a safety net equivalent to deposit insurance but without the associated moral hazard between risk-averse lenders and borrowers.
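The margin-loan mechanism the abstract borrows can be sketched as a simple collateral maintenance rule. This is an illustration only, not the paper's LVA method, and the maintenance ratio below is a hypothetical assumption:

```python
# Illustrative sketch (NOT the paper's LVA method): a margin-loan style
# maintenance rule for a collateralized loan. The 1.25 maintenance ratio
# is a hypothetical assumption for illustration.

def required_topup(loan_balance, collateral_value, maintenance_ratio=1.25):
    """Return the extra collateral needed to restore the maintenance ratio.

    A margin call is triggered when collateral_value falls below
    maintenance_ratio * loan_balance; topping the collateral back up to
    that level keeps the lender (quasi) protected against further value
    drops before liquidation, which is the sense in which the facility
    approaches default-free.
    """
    shortfall = maintenance_ratio * loan_balance - collateral_value
    return max(0.0, shortfall)

print(required_topup(100.0, 120.0))  # collateral below 125 -> call for 5.0
print(required_topup(100.0, 130.0))  # collateral above 125 -> 0.0, no call
```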
Moody's Correlated Binomial Default Distributions for Inhomogeneous Portfolios
This paper generalizes Moody's correlated binomial default distribution for a homogeneous (exchangeable) credit portfolio, introduced by Witt, to the case of inhomogeneous portfolios. We consider two cases of inhomogeneity. In the first, we treat a portfolio whose assets have uniform default correlation but non-uniform default probabilities; we obtain the default probability distribution and study the effect of the inhomogeneity on it. The second case corresponds to a portfolio with inhomogeneous default correlation: assets are categorized into several sectors, and the inter-sector and intra-sector correlations differ. We construct the joint default probabilities and obtain the default probability distribution. We show that as the number of assets in each sector decreases, inter-sector correlation becomes more important than intra-sector correlation. We also study the maximum attainable values of the inter-sector default correlation. Our generalization method can be applied to any correlated binomial default distribution model that has explicit relations to the conditional default probabilities or conditional default correlations, e.g. CreditRisk+ and implied default distributions. Finally, we compare some popular CDO pricing models from the viewpoint of the range of the implied tranche correlation.
Comment: 29 pages, 17 figures and 1 table
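The two-sector setting with distinct intra- and inter-sector correlation can be illustrated with a Monte Carlo sketch. Note this uses a Gaussian one-factor/sector-factor model, not the paper's correlated-binomial construction, and every parameter value is an assumption for illustration:

```python
# Illustrative Monte Carlo sketch of a two-sector portfolio with distinct
# intra- and inter-sector default correlation, using a Gaussian factor
# model (NOT the paper's correlated-binomial construction; all parameter
# values are assumptions chosen for illustration).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_per_sector, p, n_paths = 20, 0.05, 50_000
rho_intra, rho_inter = 0.30, 0.10   # asset-value correlations

# Factor loadings: a common market factor produces the inter-sector
# correlation; a sector factor adds the extra intra-sector correlation.
b_market = np.sqrt(rho_inter)
b_sector = np.sqrt(rho_intra - rho_inter)
threshold = norm.ppf(p)             # default when asset value falls below

market = rng.standard_normal((n_paths, 1))
losses = np.zeros(n_paths)
for _ in range(2):                  # two sectors
    sector = rng.standard_normal((n_paths, 1))
    idio = rng.standard_normal((n_paths, n_per_sector))
    assets = (b_market * market + b_sector * sector
              + np.sqrt(1 - rho_intra) * idio)   # unit-variance asset values
    losses += (assets < threshold).sum(axis=1)

print("mean defaults :", losses.mean())          # ~ 40 * 0.05 = 2
print("P(no default) :", (losses == 0).mean())
```

Binning `losses` gives the portfolio default distribution; shrinking `n_per_sector` shows the growing weight of the inter-sector (market) factor relative to the sector factors.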
Least Dependent Component Analysis Based on Mutual Information
We propose to use precise estimators of mutual information (MI) to find the least dependent components in a linearly mixed signal. On the one hand, this seems to lead to better blind source separation than any other presently available algorithm. On the other hand, compared to other implementations of 'independent' component analysis (ICA), some of which are based on crude approximations of MI, it has the advantage that the numerical values of the MI can be used for:
(i) estimating residual dependencies between the output components;
(ii) estimating the reliability of the output, by comparing the pairwise MIs with those of re-mixed components;
(iii) clustering the output according to the residual interdependencies.
For the MI estimator we use a recently proposed k-nearest-neighbour based algorithm. For time sequences we combine this with delay embedding, in order to take into account non-trivial time correlations. After several tests with artificial data, we apply the resulting MILCA (Mutual Information based Least dependent Component Analysis) algorithm to a real-world dataset, the ECG of a pregnant woman.
The software implementation of the MILCA algorithm is freely available at http://www.fz-juelich.de/nic/cs/software
Comment: 18 pages, 20 figures, Phys. Rev. E (in press)
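The core idea, choosing the unmixing that minimizes an MI estimate, can be sketched for a two-channel rotation. MILCA itself uses a k-nearest-neighbour MI estimator; for brevity this sketch substitutes a crude histogram estimate, and the mixing angle and sample sizes are assumptions:

```python
# Minimal sketch of the idea behind MILCA: undo a linear mixing of two
# sources by scanning for the rotation angle that minimizes an MI
# estimate. MILCA uses a k-nearest-neighbour MI estimator; this sketch
# substitutes a crude histogram (binned) estimate for brevity.
import numpy as np

def binned_mi(x, y, bins=32):
    """Histogram estimate of the mutual information between x and y (nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of x
    py = pxy.sum(axis=0, keepdims=True)          # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
s = rng.uniform(-1, 1, size=(2, 5000))           # two independent sources
theta_mix = 0.6                                  # assumed mixing angle
c, si = np.cos(theta_mix), np.sin(theta_mix)
x = np.array([[c, -si], [si, c]]) @ s            # rotate (mix) the sources

# Scan candidate unmixing rotations and keep the least dependent one.
angles = np.linspace(0, np.pi / 2, 91)
mis = [binned_mi(*(np.array([[np.cos(a), np.sin(a)],
                             [-np.sin(a), np.cos(a)]]) @ x)) for a in angles]
best = angles[int(np.argmin(mis))]
print("recovered angle:", best)                  # should be near theta_mix
```

Because the estimated MI is a usable number, the same scan also quantifies the residual dependence left in the outputs, which is points (i)-(iii) of the abstract.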
Estimating Mutual Information
We present two classes of improved estimators for the mutual information M(X,Y), from samples of random points distributed according to some joint probability density mu(x,y). In contrast to conventional estimators based on binnings, they are based on entropy estimates from k-nearest-neighbour distances. This means that they are data efficient (with k = 1 we resolve structures down to the smallest possible scales), adaptive (the resolution is higher where data are more numerous), and have minimal bias. Indeed, the bias of the underlying entropy estimates is mainly due to non-uniformity of the density at the smallest resolved scale, giving typically systematic errors which scale as functions of k/N for N points. Numerically, we find that both families become exact for independent distributions, i.e. the estimator vanishes (up to statistical fluctuations) if mu(x,y) = mu(x)mu(y). This holds for all tested marginal distributions and for all dimensions of x and y. In addition, we give estimators for redundancies between more than two random variables. We compare our algorithms in detail with existing algorithms. Finally, we demonstrate the usefulness of our estimators for assessing the actual independence of components obtained from independent component analysis (ICA), for improving ICA, and for estimating the reliability of blind source separation.
Comment: 16 pages, including 18 figures
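The first of the two estimator families described above (often called KSG algorithm 1) can be sketched directly: I(X,Y) is approximated by psi(k) + psi(N) - <psi(n_x + 1) + psi(n_y + 1)>, where n_x and n_y count marginal neighbours strictly within each point's distance to its k-th neighbour in the joint space. The test values and sample sizes below are assumptions for illustration:

```python
# Sketch of the k-nearest-neighbour MI estimator (KSG algorithm 1):
#   I(X,Y) ~= psi(k) + psi(N) - < psi(n_x + 1) + psi(n_y + 1) >,
# where n_x (n_y) counts the neighbours of each point lying strictly
# within its distance to the k-th joint-space neighbour (max-norm).
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mi(x, y, k=3):
    x = x.reshape(len(x), -1)
    y = y.reshape(len(y), -1)
    n = len(x)
    joint = np.hstack([x, y])
    # Distance to the k-th neighbour in the joint space (Chebyshev norm);
    # k+1 because the query point is its own nearest neighbour.
    d, _ = cKDTree(joint).query(joint, k=k + 1, p=np.inf)
    eps = d[:, -1]
    # Count marginal neighbours strictly closer than eps (exclude self).
    nx = cKDTree(x).query_ball_point(x, eps - 1e-12, p=np.inf,
                                     return_length=True) - 1
    ny = cKDTree(y).query_ball_point(y, eps - 1e-12, p=np.inf,
                                     return_length=True) - 1
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))

rng = np.random.default_rng(2)
a = rng.standard_normal(2000)
b = rng.standard_normal(2000)                    # independent of a
print("independent pair:", ksg_mi(a, b))         # vanishes up to fluctuations
print("dependent pair  :", ksg_mi(a, a + 0.5 * b))  # clearly positive
```

The vanishing estimate for the independent pair illustrates the "exact for independent distributions" property claimed in the abstract; binned estimators typically retain a positive bias in the same situation.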
A Power Market Forward Curve with Hydrology Dependence - An Approach Based on Artificial Neural Networks
- …