
    Capital allocation rules and generalized collapse to the mean

    In the context of capital allocation principles for (not necessarily coherent) risk measures, we derive, under mild conditions, representation results as a "collapse to the mean" in a generalized sense. This approach is related to the well-known gradient allocation and allows us to extend a result of Kalkbrener (Theorem 4.3 in [27]) to a non-differentiable setting as well as to more general capital allocation rules and risk measures.
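The gradient allocation mentioned in this abstract is commonly computed, in the differentiable case, as the Euler allocation. A minimal sketch under Expected Shortfall, where each unit's capital is its average loss on the scenarios in which the portfolio loss exceeds its VaR; the function name and the scenario-matrix layout are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def euler_allocation_es(losses, alpha=0.99):
    """Euler (gradient) allocation under Expected Shortfall.

    losses : array of shape (n_scenarios, n_units), losses per unit per scenario.
    Returns one capital figure per unit; the figures sum to the ES of the
    total portfolio loss estimated on the same scenarios.
    """
    losses = np.asarray(losses, dtype=float)
    total = losses.sum(axis=1)                 # portfolio loss per scenario
    var = np.quantile(total, alpha)            # empirical VaR_alpha of the total
    tail = total >= var                        # scenarios in the alpha-tail
    return losses[tail].mean(axis=0)           # average unit loss on tail scenarios
```

By construction the allocations add up to the portfolio's Expected Shortfall, which is the "full allocation" property these papers require.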

    Economic Capital Analysis with Portfolios of Dependent and Heavy-Tailed Risks

    In today's climate of prudent risk management, the problem of determining aggregate risk capital in financial entities has been studied intensively for a long time. As a result, canonical methods have been developed and even embedded in regulatory accords. While applauded by some and questioned by others, these methods provide a much-desired standard benchmark for everyone. The situation is very different when the aggregate risk capital needs to be allocated to the business units of a financial entity: there are overwhelmingly many ways to conduct the allocation exercise, and there is arguably no standard method on the horizon. Two overarching approaches to allocating the aggregate risk capital stand out. The top-down approach entails that the allocation exercise is imposed by the corporate centre, while the bottom-up approach implies that the allocation of the aggregate risk to business units is informed by those units. Briefly, top-down allocations start with the aggregate risk capital, which is then apportioned among business units according to the views of the centre, thus limiting the inputs from the business units. The bottom-up approach does start with the business units, but it is, as a rule, too granular, and so may miss the wood for the trees.

    The first chapter of this dissertation is concerned with the bottom-up approach to allocating the aggregate risk capital. Namely, we put forward a general theoretical framework for the multiplicative background risk model that allows for arbitrarily distributed idiosyncratic and systemic risk factors. We reveal links between this general structure and the one with exponentially distributed idiosyncratic risk factors (a key player in modern actuarial modelling), study relevant theoretical properties of the new structure, and discuss important special cases. We also construct realistic numerical examples borrowed from the context of the determination and allocation of economic capital. The examples suggest that a small departure from exponentiality can have substantial impacts on the outcome of risk analysis.

    In the second chapter, we question the way risk allocation is conducted in the current state of the art and present an alternative that comes from the context of distributions defined on the multidimensional simplex. More specifically, we put forward a new family of mixed-scaled Dirichlet distributions that contain the classical Dirichlet distribution as a special case, exhibit a multitude of desirable closure properties, and emerge naturally within the multivariate risk analysis context. As a by-product, our construction revisits the proportional allocation rule that is often used in applications. Interestingly, we are able to unify the top-down and bottom-up approaches to allocating the aggregate risk capital into one encompassing method.

    During the study underlying this dissertation, we rediscovered certain problems of the standard deviation as the ubiquitous measure of variability. In particular, the standard deviation is frequently infinite for insurance risks in the Property and Casualty lines of business, and so it cannot be used to quantify variability there. The standard deviation is also a questionable measure of variability when non-normal distributions are considered, and normality is rarely a reasonable assumption in insurance practice. Therefore, in the third chapter, we turn to an alternative measure of variability: the Gini Mean Difference, which is finite whenever the mean is, and is suitable for measuring variability of non-normal risks. Nevertheless, the Gini Mean Difference is far less common in actuarial science than the standard deviation, and one of the main reasons lies in criticism of the computability of the Gini. We reveal convenient ways to compute the Gini Mean Difference explicitly and often effortlessly. The thrust of our approach is a link, which we discover, between the Gini and the notion of size-biased sampling. This link not only opens up advantageous computational routes for the Gini, but also yields an alternative interpretation for it.
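The Gini Mean Difference discussed in this abstract, GMD = E|X − X′| for independent copies X, X′, is indeed cheap to estimate from a sample: after sorting, the classical order-statistics identity gives an O(n log n) computation. A small sketch (the function name is illustrative; the dissertation's own size-biasing formulas are not reproduced here):

```python
import numpy as np

def gini_mean_difference(x):
    """Unbiased sample estimate of GMD = E|X - X'|.

    Uses the identity, for sorted x_(1) <= ... <= x_(n):
        sum_{i<j} (x_(j) - x_(i)) = sum_j (2j - n - 1) * x_(j),
    so the pairwise sum needs no O(n^2) double loop.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    weights = 2 * np.arange(1, n + 1) - n - 1   # 1-based rank weights
    return 2.0 * np.dot(weights, x) / (n * (n - 1))
```

For example, for the sample {1, 2, 3} the three pairwise distances are 1, 2, 1, so the estimate is 4/3.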

    Capital Allocation in the Insurance Sector: Theoretical and Practical Approaches

    The subject of this dissertation is internal capital allocation as applied in the financial sector. Capital allocation is the process of dividing the capital needed to cover unexpected risks among business lines, portfolio elements, or otherwise defined units. In its practical treatment of capital allocation, the study concentrates on the insurance sector within financial firms, since it is in this sector that the most sophisticated capital allocation methods are applied. The occurrence of insurance events is stochastic, so even with the most advanced statistical methods it can happen that the collected premiums and reserves do not cover the claims against the insurer. In such cases the solvency capital ensures that the insurer can still meet its obligations; put differently, the solvency capital serves to cover losses arising from unexpected, adverse developments. Although this solvency capital protects equally against losses suffered by any business line, it is nevertheless important, for several reasons, to know the extent to which individual business lines contribute to the insurer's capital requirement. Holding capital is costly, and allocating this cost is a key factor in evaluating the performance of business lines and product portfolios, in product pricing, and in certain strategic decisions (acquisitions, mergers, launching a new business line or winding up an existing one). In the insurance sector, capital allocation has been gaining importance in Europe in parallel with the introduction of the Solvency II regulation, while in the USA the regulator made capital allocation mandatory for insurers even earlier.

    The capital allocation problem has also long occupied the academic world. This is hardly surprising, since it is a mathematically well-posed, far from trivial problem that can be examined from several angles: with the tools of game theory (e.g. Denault, 2001; Csóka et al., 2009; Csóka and Pintér, 2016), from an option-pricing perspective (e.g. Myers and Read, 2001; Sherris, 2006; Kim and Hardy, 2007), or in other statistical frameworks (e.g. Kalkbrener, 2005; Homburg and Scherpereel, 2008; Buch and Dorfleitner, 2008). Practitioners have countless methods at their disposal for solving the capital allocation problem, yet partly for this very reason the gap between theoretical research and practical application remains rather wide. The dissertation examines capital allocation from a perspective rare in the literature (Vrieze and Brehm, 2003; Zec, 2014; and, focusing on banks, Balogh, 2006), one that addresses both the theoretical and the practical side of capital allocation: alongside methodological questions, it also offers guidance on the practical use of the methods. The methodological part evaluates seven methods, frequently encountered in the literature or applied in practice, against ten important requirements, both analytically and by simulation. In the practical part, building heavily on the results of the theoretical part, I translate the highly abstract formulation of the capital allocation problem customary in the literature into the practical questions that arise during implementation at insurers, thereby helping readers navigate among the theoretically available methods. Finally, a fictitious case study illustrates a possible application of capital allocation.

    The center of a convex set and capital allocation

    A capital allocation scheme is suggested for a company that has a random total profit Y and uses a coherent risk measure ρ. The scheme returns a unique real number Λρ*(X,Y), which determines the capital that should be allocated to the company's subsidiary with random profit X. The resulting capital allocation is linear and diversifying as defined by Kalkbrener (2005). The problem reduces to selecting the "center" of a non-empty, convex, weakly compact subset of a Banach space, and the solution to the latter problem proposed by Lim (1981) is used. Our scheme can also be applied to selecting the unique Pareto optimal allocation in a wide class of optimal risk sharing problems.