Decomposition approaches to integration without a measure
Extending the idea of Even and Lehrer [3], we discuss a general approach to
integration based on a given decomposition system equipped with a weighting
function, and a decomposition of the integrated function. We distinguish two
types of decompositions: sub-decomposition-based integrals (linked in
economics with optimization problems maximizing the attainable profit) and
super-decomposition-based integrals (linked with cost minimization). We
provide several examples (both theoretical and realistic) to stress that our
approach generalizes that of Even and Lehrer [3] and also covers problems of
linear programming and combinatorial optimization. Finally, we introduce some
new types of integrals related to optimization tasks. Comment: 15 pages
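As a concrete illustration (not taken from the paper itself), the Choquet integral is a classical decomposition-based integral in which the admissible decompositions are chains of level sets weighted by a monotone measure. A minimal sketch, with an arbitrary toy capacity:

```python
# Sketch: Choquet integral of a function f on a finite set, a classical
# example of a chain-decomposition-based integral. The capacity (monotone
# measure) below is an arbitrary illustration, not additive.

def choquet_integral(f, mu):
    """f: dict element -> nonnegative value; mu: capacity on frozensets."""
    # Sort elements by decreasing value of f; level sets form a chain.
    elems = sorted(f, key=f.get, reverse=True)
    values = [f[e] for e in elems] + [0.0]
    total = 0.0
    for i in range(len(elems)):
        level_set = frozenset(elems[: i + 1])
        total += (values[i] - values[i + 1]) * mu[level_set]
    return total

# Toy capacity on {a, b}: monotone but not additive.
mu = {
    frozenset(): 0.0,
    frozenset({"a"}): 0.5,
    frozenset({"b"}): 0.4,
    frozenset({"a", "b"}): 1.0,
}
f = {"a": 3.0, "b": 1.0}
print(choquet_integral(f, mu))  # (3-1)*mu({a}) + (1-0)*mu({a,b}) = 2.0
```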
On the Diversification Effect in Solvency II for Extremely Dependent Risks
In this article, we investigate the validity of the diversification effect under extreme-value copulas when the marginal risks of the portfolio are identically distributed, with a distribution that either has a finite endpoint or belongs to one of the three maximum domains of attraction. We show that Value-at-Risk (V@R) under extreme-value copulas is asymptotically subadditive for marginal risks with finite mean, while it is asymptotically superadditive for risks with infinite mean. Our major findings enrich and supplement the second fundamental theorem of quantitative risk management in the existing literature, which states that the V@R of a portfolio is typically non-subadditive for non-elliptically distributed risk vectors. In particular, we pin down when V@R is superadditive or subadditive depending on the heaviness of the marginal tail risk. According to our results, one can take advantage of the diversification effect for marginal risks with finite mean. This justifies the standard formula for calculating the capital requirement under Solvency II, in which imperfect correlations are used for various risk exposures.
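The finite-mean/infinite-mean dichotomy can be illustrated with a standard textbook example (an assumption of this sketch, not the paper's copula setting): independent Pareto margins with P(X > x) = x^(-alpha) for x >= 1, where first-order tail asymptotics give V@R_q(X1 + X2) ~ (2/(1-q))^(1/alpha):

```python
# Sketch (assumed setting: independent standard Pareto margins,
# P(X > x) = x**-alpha for x >= 1). First-order tail asymptotics give
#   VaR_q(X1 + X2) ~ (2 / (1 - q)) ** (1 / alpha),
# to be compared with the undiversified sum 2 * VaR_q(X).

def var_single(q, alpha):
    """Quantile of a single standard Pareto(alpha) risk."""
    return (1.0 - q) ** (-1.0 / alpha)

def var_sum_asymptotic(q, alpha):
    """Asymptotic quantile of the sum of two iid Pareto(alpha) risks."""
    return (2.0 / (1.0 - q)) ** (1.0 / alpha)

q = 0.999
for alpha in (0.8, 2.0):  # infinite mean vs finite mean
    diversified = var_sum_asymptotic(q, alpha)
    stand_alone = 2.0 * var_single(q, alpha)
    label = "super" if diversified > stand_alone else "sub"
    print(f"alpha={alpha}: asymptotically {label}additive")
```

Since 2^(1/alpha) > 2 exactly when alpha < 1 (infinite mean), diversification hurts in that regime and helps when alpha > 1, matching the dichotomy stated in the abstract.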
Measurement and Pricing of Risk in Insurance Markets
The theory and practice of risk measurement provides a point of intersection between risk management, economic theories of choice under risk, financial economics, and actuarial pricing theory. This article provides a review of these interrelationships, from the perspective of an insurance company seeking to price the risks that it underwrites. We examine three distinct approaches to insurance risk pricing, all contingent on the concept of risk measures. Risk measures can be interpreted as representations of risk orderings, as well as absolute (monetary) quantifiers of risk. The first approach can be called an “axiomatic” one, whereby the price for risks is calculated according to a functional determined by a set of desirable properties. The price of a risk is directly interpreted as a risk measure and may be induced by an economic theory of price under risk. The second approach consists in contextualizing the considerations of the risk bearer by embedding them in the market where risks are traded. Prices are calculated by equilibrium arguments, where each economic agent's optimization problem follows from the minimization of a risk measure. Finally, in the third approach, weaknesses of the equilibrium approach are addressed by invoking alternative valuation techniques, the leading paradigm among which is arbitrage pricing. Such models move the focus from individual decision-takers to abstract market price systems and are thus more parsimonious in the amount of information that they require. In this context, risk measures, instead of characterizing individual agents, are used for determining the set of price systems that would be viable in a market.
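A minimal sketch of the "axiomatic" viewpoint: a premium computed by applying a risk measure directly to the loss distribution. Expected Shortfall is one standard example of such a functional; the loss scenarios below are an arbitrary illustration, not market data:

```python
# Sketch: pricing a risk by applying a risk measure to its loss distribution.
# Expected Shortfall (ES) is one standard example of an "axiomatic" premium
# functional. The empirical loss scenarios below are purely illustrative.

def expected_shortfall(losses, level):
    """Average of the worst (1 - level) fraction of the empirical losses."""
    ordered = sorted(losses, reverse=True)
    k = max(1, int(round((1.0 - level) * len(ordered))))
    return sum(ordered[:k]) / k

losses = [0, 0, 0, 0, 0, 0, 10, 20, 50, 100]  # ten equally likely scenarios
premium = expected_shortfall(losses, level=0.8)
print(premium)  # mean of the worst 20%: (100 + 50) / 2 = 75.0
```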
The Geometry and Calculus of Losses
Statistical decision problems lie at the heart of statistical machine
learning. The simplest problems are binary and multiclass classification and
class probability estimation. Central to their definition is the choice of loss
function, which is the means by which the quality of a solution is evaluated.
In this paper we systematically develop the theory of loss functions for such
problems from a novel perspective whose basic ingredients are convex sets with
a particular structure. The loss function is defined as the subgradient of the
support function of the convex set. It is consequently automatically proper
(calibrated for probability estimation). This perspective provides three novel
opportunities. First, it enables the development of a fundamental relationship
between losses and (anti)-norms that appears not to have been noticed before.
Second,
it enables the development of a calculus of losses induced by the calculus of
convex sets which allows the interpolation between different losses, and thus
is a potentially useful design tool for tailoring losses to particular problems.
In doing this we build upon, and considerably extend, existing results on
M-sums of convex sets. Third, the perspective leads to a natural theory of
``polar'' loss functions, which are derived from the polar dual of the convex
set defining the loss, and which form a natural universal substitution function
for Vovk's aggregating algorithm. Comment: 65 pages, 17 figures
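A small numerical illustration of the properness property the abstract mentions (this demonstrates the general concept with the log loss, not the paper's convex-set construction): the expected loss is minimized by reporting the true class probability.

```python
# Sketch: "proper" means the expected loss is minimized by reporting the
# true class probability. A brute-force check for the log loss on a binary
# problem; the true probability q below is arbitrary.

import math

def log_loss(p, y):
    """Loss of predicting probability p for class 1 when the label is y."""
    return -math.log(p) if y == 1 else -math.log(1.0 - p)

def expected_loss(p, q):
    """Expected log loss when the true probability of class 1 is q."""
    return q * log_loss(p, 1) + (1.0 - q) * log_loss(p, 0)

q = 0.3
grid = [i / 100 for i in range(1, 100)]
best = min(grid, key=lambda p: expected_loss(p, q))
print(best)  # the grid minimizer is the true probability, 0.3
```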
Fuzzy Esscher changes of measure and copula invariance in Lévy markets
In the context of a multidimensional exponential Lévy market, we focus on the Esscher change of measure and suggest a more flexible tool allowing for a fuzzy version of the standard Esscher transform. Motivated both by the empirical incompatibility of market data with the analytical form of the standard Esscher transform (see [8]) and by the desire to introduce a pricing technique under incompleteness conditions, we examine the impact of fuzziness on the measure change function and on contingent claims' pricing. In a multidimensional setting the fuzzy Esscher transform is a copula whose invariance, under margin transformations induced by a change of measure, is investigated and connected to the absence of arbitrage opportunities. We highlight how the Esscher transform, primarily used in pricing techniques, preserves the invariance of the aggregation operator and can be generalized to the fuzzy version, assuming that the measurable functions defining the Choquet marginal integrals are increasing. Furthermore, the empirical evidence suggests that a weaker concept of invariance may be more suitable, namely ε-measure invariance, coherent with the fuzzy Esscher copula tool. An empirical experiment makes clear how this blurring technique fits the market data.
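For orientation, here is the crisp (non-fuzzy) Esscher change of measure on a discrete distribution, the classical transform that the paper generalizes: probabilities are exponentially tilted and renormalized. The tilting parameter h below is arbitrary.

```python
# Sketch: the classical (crisp) Esscher change of measure on a discrete
# distribution. Weights are exponentially tilted by e^{h x} and renormalized;
# the tilting parameter h is an arbitrary illustrative choice.

import math

def esscher(pmf, h):
    """pmf: dict outcome -> probability. Returns the h-tilted distribution."""
    weights = {x: p * math.exp(h * x) for x, p in pmf.items()}
    z = sum(weights.values())  # normalizing constant E[e^{hX}]
    return {x: w / z for x, w in weights.items()}

pmf = {-1: 0.5, 1: 0.5}
tilted = esscher(pmf, h=1.0)
print(tilted[1] > tilted[-1])  # positive tilt shifts mass to larger outcomes
```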
Quantum Loewner Evolution
What is the scaling limit of diffusion limited aggregation (DLA) in the
plane? This is an old and famously difficult question. One can generalize the
question in two ways: first, one may consider the {\em dielectric breakdown
model} η-DBM, a generalization of DLA in which particle locations are
sampled from the η-th power of harmonic measure, instead of harmonic
measure itself. Second, instead of restricting attention to deterministic
lattices, one may consider η-DBM on random graphs known or believed to
converge in law to a Liouville quantum gravity (LQG) surface with parameter
γ ∈ [0,2].
In this generality, we propose a scaling limit candidate called quantum
Loewner evolution, QLE(γ², η). QLE is defined in terms of the radial
Loewner equation like radial SLE, except that it is driven by a measure valued
diffusion ν_t derived from LQG rather than a multiple of a standard
Brownian motion. We formalize the dynamics of ν_t using an SPDE. For each
γ ∈ (0,2], there are two or three special values of η for which
we establish the existence of a solution to these dynamics and explicitly
describe the stationary law of ν_t.
We also explain discrete versions of our construction that relate DLA to
loop-erased random walk and the Eden model to percolation. A certain
"reshuffling" trick (in which concentric annular regions are rotated randomly,
like slot machine reels) facilitates explicit calculation.
We propose QLE(2,1) as a scaling limit for DLA on a random
spanning-tree-decorated planar map, and QLE(8/3,0) as a scaling limit for the
Eden model on a random triangulation. We propose using QLE(8/3,0) to endow
pure LQG with a distance function, by interpreting the region explored by a
branching variant of QLE(8/3,0), up to a fixed time, as a metric ball in a
random metric space. Comment: 132 pages, approximately 100 figures and computer simulations
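The discrete mechanism behind the opening question can be sketched directly: in lattice DLA, each new particle performs a random walk "from infinity" until it sticks to the cluster, so particle locations are sampled from (discrete) harmonic measure. All parameters below (launch radius, kill radius, cluster size) are arbitrary toy choices.

```python
# Sketch: lattice DLA. Each walker is launched from a circle just outside
# the cluster, performs a simple random walk, sticks when it becomes adjacent
# to the cluster, and is relaunched if it wanders too far away.

import math
import random

def dla(n_particles, seed=0):
    rng = random.Random(seed)
    cluster = {(0, 0)}
    radius = 0
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    while len(cluster) < n_particles:
        # Launch a walker from a circle just outside the current cluster.
        theta = rng.uniform(0, 2 * math.pi)
        x = round((radius + 2) * math.cos(theta))
        y = round((radius + 2) * math.sin(theta))
        while True:
            if (x, y) in cluster:  # pathological overlap; relaunch
                break
            # Stick if adjacent to the cluster.
            if any((x + dx, y + dy) in cluster for dx, dy in steps):
                cluster.add((x, y))
                radius = max(radius, round(math.hypot(x, y)))
                break
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
            if math.hypot(x, y) > radius + 10:  # wandered too far; relaunch
                theta = rng.uniform(0, 2 * math.pi)
                x = round((radius + 2) * math.cos(theta))
                y = round((radius + 2) * math.sin(theta))
    return cluster

cluster = dla(30)
print(len(cluster))  # 30
```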