Pension schemes versus real estate
The demographic, economic and social changes that have characterized recent decades, and the dramatic financial crisis that began in 2008, have led to a demand for structural changes in the pension sector and a growing interest in individual pension products. Hence the need, for many elderly people, to liquidate their fixed assets, which are usually the homes in which they live. This has brought attention to products such as reverse mortgages and home reversion plans. Within this context, we propose a contractual scheme in which an immediate life annuity is obtained by paying a single premium in the form of real estate rights (RERs), for example by transferring to an insurer the property title of a house or similar realty, while keeping its usufruct or a restricted bundle of rights. The level of the installments depends on the fair value of the transferred RER at the contract's issue, the life expectancy of the insured, and the expected growth rate of the real estate market value. The contract design is developed by considering the control of the financial risk inherent in the contract itself, arising from prospective changes in the value of the RERs and the level of the insurer's leverage. Finally, we provide some numerical evidence for the proposed contractual structure, comparing the level of the installments according to house return forecasts in different European countries.
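The pricing logic described above can be sketched as follows: the installment is the single premium (the fair value of the transferred RER) divided by a life annuity factor. All inputs below are hypothetical placeholders, and the expected real-estate growth rate is folded into a single net discount rate for simplicity; the paper's actual model also tracks RER value dynamics and the insurer's leverage.

```python
# Illustrative sketch only: hypothetical survival probabilities and rates.

def annuity_factor(survival_probs, discount_rate):
    """Present value of a unit life annuity: sum over t of t-year survival * v^t."""
    v = 1.0 / (1.0 + discount_rate)
    return sum(p * v ** (t + 1) for t, p in enumerate(survival_probs))

def installment(rer_fair_value, survival_probs, discount_rate):
    """Level installment R solving: fair RER value = R * annuity factor."""
    return rer_fair_value / annuity_factor(survival_probs, discount_rate)

# Toy inputs: 20-year horizon, flat 96% annual survival, 2% net discount rate.
survival = [0.96 ** (t + 1) for t in range(20)]
annual_payment = installment(200_000.0, survival, 0.02)
```

A higher expected growth of the real-estate market lowers the net discount rate, raising the annuity factor and thus lowering the installment for a given premium.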
The dependency premium based on a multifactor model for dependent mortality data
As shown in the literature, the dependence structure in mortality data cannot be ignored when projecting future trends, in particular for a group of similar populations characterized by common long-run relationships. We propose a new multifactor model for capturing common and specific features of the trend over time. We implement the model and investigate its impact on actuarial valuations through the introduction of the concept of the dependency premium.
Early handling and repeated cross-fostering have opposite effects on mouse emotionality
Early life events have a crucial role in programming the individual phenotype, and exposure to traumatic experiences during infancy can increase later risk for a variety of neuropsychiatric conditions, including mood and anxiety disorders. Animal models of postnatal stress have been developed in rodents to explore the molecular mechanisms responsible for the observed short- and long-lasting neurobiological effects of such manipulations. The main aim of this study was to compare the behavioral and hormonal phenotypes of young and adult animals exposed to different postnatal treatments. Outbred mice were exposed to (i) the classical Handling protocol (H: 15 min/day of separation from the mother from day 1 to 14 of life) or to (ii) a Repeated Cross-Fostering protocol (RCF: adoption of litters from day 1 to 4 of life by different dams). Handled mice received more maternal care in infancy and showed the previously described reduced emotionality at adulthood. Repeatedly cross-fostered animals did not differ in the maternal care they received, but showed enhanced sensitivity to separation from the mother in infancy and an altered respiratory response to 6% CO2 in breathing air in comparison with controls. Abnormal respiratory responses to hypercapnia are commonly found among humans with panic disorder (PD), and point to RCF-induced instability of the early environment as a valid developmental model for PD. The comparisons between short- and long-term effects of postnatal handling vs. RCF indicate that different types of early adversity are associated with different behavioral profiles and evoke psychopathologies that can be distinguished according to the neurobiological systems disrupted by early-life manipulation.
Artificial Intelligence Algorithms in Precision Medicine: A New Approach in Clinical Decision-Making
The US National Institutes of Health described precision medicine as 'an emerging approach for disease treatment and prevention that takes into account individual variability in genes, environment and lifestyle for each person.' In other words, on the basis of this definition, precision medicine allows patients to be treated on the basis of their genetic, lifestyle, and environmental data. Nevertheless, the complexity and growth of healthcare data arising from cheap genome sequencing, advanced biotechnology, health sensors patients use at home, and the collection of information about patients' journeys through healthcare with hand-held devices unquestionably require a suitable toolkit and advanced analytics for processing this huge volume of information. Artificial intelligence (AI) algorithms can remarkably improve the ability to use big data to make predictions by reducing the cost of making predictions. The advantages of AI algorithms have been extensively discussed in the medical literature. In this paper, based on the collection of data relevant to the health of a given individual and the inference obtained by AI, we provide a simulation environment for understanding and suggesting the best actions to be performed to improve the individual's health. Such simulation modelling can help improve clinical decision-making and the fundamental understanding of the healthcare system and clinical processes.
Editorial: Recent Advances in Seismic Risk Assessment and Its Applications
This special issue discusses recent advances in seismic risk assessment, with particular attention to the development and validation of new procedures capable of assessing failure modes and the fragility curves of existing buildings. The studies presented also have a probabilistic background and show the importance of typological characteristics in the seismic response of a building. Furthermore, non-linear numerical analyses have confirmed the importance of implementing specific models in order to design appropriate interventions aimed at reducing the seismic risk of a specific construction.
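For readers unfamiliar with fragility curves, the snippet below sketches their standard lognormal form, widely used in seismic risk assessment; the median capacity and dispersion values are purely illustrative, not taken from the special issue.

```python
# Lognormal fragility curve: P(damage state reached | intensity x)
# = Phi(ln(x / theta) / beta), where theta is the median capacity and
# beta the logarithmic dispersion. Parameter values below are illustrative.
from math import erf, log, sqrt

def fragility(intensity, theta, beta):
    """Exceedance probability of the damage state at the given intensity."""
    z = log(intensity / theta) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# At the median capacity the exceedance probability is 50% by construction.
p_median = fragility(0.4, theta=0.4, beta=0.6)  # -> 0.5
```

Estimating theta and beta per building typology is precisely where the typological characteristics mentioned above enter the analysis.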
Detecting Common Longevity Trends by a Multiple Population Approach
Recently, interest in the development of cross-country longevity risk models has been growing. The investigation of long-run equilibrium relationships could provide valuable information about the factors driving changes in mortality, in particular across ages and across countries. In order to investigate cross-country common longevity trends, tools to quantify, compare, and model the strength of dependence become essential. On the one hand, it is necessary to take into account both the dependence between adjacent age groups and the dependence structure across time in a single-population setting, a sort of intradependence structure. On the other hand, the dependence across multiple populations, which we describe as interdependence, can be explored to capture common long-run relationships between countries. The objective of our work is to produce longevity projections by taking into account the presence of various forms of cross-sectional and temporal dependence in the error processes of multiple populations, considering mortality data from different countries. The algorithm that we propose combines model-based predictions in the Lee-Carter (LC) framework with a bootstrap procedure for dependent data, so that both the historical parametric structure and the intragroup error correlation structure are preserved. We introduce a model that applies a sieve bootstrap to the residuals of the LC model and is able to reproduce, in the sampling, the dependence structure of the data under consideration. The algorithm is applied to a pool of populations using ideas from panel data; we refer to this new algorithm as the Multiple Lee-Carter Panel Sieve (MLCPS). We are interested in estimating the relationship between populations of similar socioeconomic conditions. The empirical results show that the MLCPS approach works well in the presence of dependence.
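The sieve-bootstrap idea at the core of MLCPS can be sketched in a single-population toy version: fit a low-order autoregression to the residuals, collect its innovations, and resample them to rebuild series that preserve the serial dependence. Everything below (the AR(1) order, the data, the seeds) is a simplified assumption, not the paper's full multi-population algorithm.

```python
import random

def fit_ar1(x):
    """Least-squares AR(1) coefficient for a (roughly) zero-mean series."""
    num = sum(a * b for a, b in zip(x[1:], x[:-1]))
    den = sum(a * a for a in x[:-1])
    return num / den

def sieve_bootstrap(residuals, n_boot, seed=0):
    """Generate bootstrap series that keep the residuals' temporal dependence:
    fit AR(1), extract innovations, then resample them with replacement."""
    rng = random.Random(seed)
    phi = fit_ar1(residuals)
    innov = [x1 - phi * x0 for x0, x1 in zip(residuals[:-1], residuals[1:])]
    samples = []
    for _ in range(n_boot):
        x, series = residuals[0], []
        for _ in residuals:
            x = phi * x + rng.choice(innov)
            series.append(x)
        samples.append(series)
    return samples
```

In the multi-population setting the resampling is done jointly across countries, which is how the cross-sectional (interdependence) structure is preserved alongside the serial one.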
The Poisson Log-Bilinear Lee-Carter Model: Applications of Efficient Bootstrap Methods to Annuity Analyses
Life insurance companies deal with two fundamental types of risk when issuing annuity contracts: financial risk and demographic risk. Recent work on the latter has focused on modeling the trend in mortality as a stochastic process. A popular method for modeling death rates is the Lee-Carter model. This methodology has become widely used, and various extensions and modifications have been proposed to obtain a broader interpretation and to capture the main features of the dynamics of mortality rates. In order to improve the measurement of uncertainty in survival probability estimates, in particular for older ages, the paper proposes an extension based on simulation procedures and on the bootstrap methodology. It aims to obtain more reliable and accurate mortality projections, based on the idea of achieving acceptable accuracy of the estimates by means of variance-reduction techniques. In this way the forecasting procedure becomes more efficient. The longevity question constitutes a critical element in the solvency appraisal of pension annuities. The demographic models used for the cash flow distributions in a portfolio impact the mathematical reserve and surplus calculations and affect the risk management choices for a pension plan. The paper extends the investigation of the impact of survival uncertainty to life annuity portfolios and to a guaranteed annuity option in the case where interest rates are stochastic. In a framework in which insurance companies need to use internal models for risk management purposes and for determining their solvency capital requirement, the authors consider the surplus value, calculated as the ratio of the market value of the projected assets to that of the liabilities, as a meaningful measure of the company's financial position, expressing the degree to which the liabilities are covered by the assets.
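The bootstrap idea for survival-probability uncertainty can be illustrated with a deliberately tiny sketch: resample death counts as Poisson variates around the observed counts, recompute central death rates, and collect the implied survival curves. The data and the pure-Python Poisson sampler are hypothetical simplifications of the paper's procedure, which works inside the full Lee-Carter framework.

```python
import math, random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for moderate lambda)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

def bootstrap_survival(deaths, exposures, n_boot=200, seed=1):
    """Bootstrap curves of cumulative survival from Poisson-resampled death
    counts, under a piecewise-constant force of mortality m_x = D_x / E_x."""
    rng = random.Random(seed)
    curves = []
    for _ in range(n_boot):
        rates = [poisson(d, rng) / e for d, e in zip(deaths, exposures)]
        surv, s = [], 1.0
        for m in rates:
            s *= math.exp(-m)
            surv.append(s)
        curves.append(surv)
    return curves

# Toy data: three ages with observed deaths and central exposures.
curves = bootstrap_survival(deaths=[10, 12, 15], exposures=[1000, 1000, 1000])
```

Percentiles of the resulting curves give the confidence bands on survival probabilities that feed the annuity and surplus calculations.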
Fair graph representation learning: Empowering NIFTY via Biased Edge Dropout and Fair Attribute Preprocessing
The increasing complexity and amount of data available in modern applications strongly demand Trustworthy Learning algorithms that can be fed directly with large and complex graph data. On the one hand, machine learning models must meet high technical standards (e.g., high accuracy with limited computational requirements); at the same time, they must not discriminate against subgroups of the population (e.g., based on gender or ethnicity). Graph Neural Networks (GNNs) are currently the most effective solution to meet the technical requirements, even though it has been demonstrated that they inherit and amplify the biases contained in the data as a reflection of societal inequities. In fact, when dealing with graph data, these biases can be hidden not only in the node attributes but also in the connections between entities. Several fair GNNs have been proposed in the literature, with uNIfying Fairness and stabiliTY (NIFTY) (Agarwal et al., 2021) being one of the most effective. In this paper, we empower NIFTY's fairness with two new strategies. The first is a Biased Edge Dropout: we drop graph edges to balance homophilous and heterophilous sensitive connections, mitigating the bias induced by subgroup node cardinality. The second is Attribute Preprocessing, the process of learning a fair transformation of the original node attributes. The effectiveness of our proposal is tested on a series of datasets with increasingly challenging scenarios. These scenarios deal with different levels of knowledge about the entire graph, i.e., how much of the graph is known and which sub-portion is labelled during the training and forward phases.
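A minimal version of the biased edge dropout idea can be sketched as follows; the exact sampling scheme in the paper differs, and this deterministic balance-to-the-minority rule is only an assumption for illustration.

```python
import random

def biased_edge_dropout(edges, sensitive, seed=0):
    """Balance homophilous edges (same sensitive attribute at both endpoints)
    and heterophilous edges (different attributes) by subsampling the majority
    type down to the minority's size, mitigating subgroup-cardinality bias."""
    rng = random.Random(seed)
    homo = [(u, v) for u, v in edges if sensitive[u] == sensitive[v]]
    hetero = [(u, v) for u, v in edges if sensitive[u] != sensitive[v]]
    minority, majority = sorted([homo, hetero], key=len)
    return minority + rng.sample(majority, len(minority))

# Toy graph: node -> sensitive attribute; edges are mostly homophilous.
attrs = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b"}
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (0, 3), (1, 4)]
kept = biased_edge_dropout(edges, attrs)
```

After the dropout, the message passing of the GNN sees equally many same-group and cross-group connections, which is the mechanism intended to reduce the amplified bias.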
Investigating the genetic basis of salt-tolerance in common bean: a genome-wide association study at the early vegetative stage
Salinity poses a significant challenge to global crop productivity, affecting approximately 20% of cultivated and 33% of irrigated farmland, and the issue is on the rise. The negative impact of salinity on plant development and metabolism leads to physiological and morphological alterations, mainly due to high ion concentrations in tissues and reduced water and nutrient uptake. Common bean (Phaseolus vulgaris L.), a staple food crop accounting for a substantial portion of the grain legumes consumed worldwide, is highly susceptible to salt stress, showing noticeable reductions in dry matter gain in roots and shoots even at low salt concentrations. In this study we screened a common bean diversity panel of 192 homozygous genotypes for salt tolerance at the seedling stage. Phenotypic data were leveraged to identify genomic regions involved in salt stress tolerance in the species through a genome-wide association study (GWAS). We detected seven significant associations between shoot dry weight and SNP markers. The candidate genes, in linkage with the regions associated with salt tolerance or harbouring the detected SNPs, showed strong homology with genes known to be involved in salt tolerance in Arabidopsis. Our findings provide valuable insights into the genetic control of salt tolerance in common bean, represent a first contribution to addressing the challenge of salinity-induced yield losses in this species, and lay the groundwork for breeding salt-tolerant common bean varieties.
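At its core, a single-marker GWAS test regresses the phenotype on genotype dosage. The toy version below (which ignores kinship correction, covariates, and the multiple-testing control any real analysis needs) is only meant to make the "association between shoot dry weight and SNP markers" concrete; the data are invented.

```python
def snp_association(dosages, phenotypes):
    """Additive allele effect (regression slope) and R^2 for one SNP,
    with genotypes coded as 0/1/2 copies of the minor allele."""
    n = len(dosages)
    mg = sum(dosages) / n
    mp = sum(phenotypes) / n
    sxy = sum((g - mg) * (p - mp) for g, p in zip(dosages, phenotypes))
    sxx = sum((g - mg) ** 2 for g in dosages)
    syy = sum((p - mp) ** 2 for p in phenotypes)
    beta = sxy / sxx                    # estimated effect per extra allele
    r2 = sxy * sxy / (sxx * syy)        # variance explained by the marker
    return beta, r2

# Perfectly additive toy data: each extra allele adds 2.0 to shoot dry weight.
beta, r2 = snp_association([0, 1, 2, 0, 1, 2], [1.0, 3.0, 5.0, 1.0, 3.0, 5.0])
```

Markers whose test statistic survives a genome-wide significance threshold then point to candidate regions such as the seven reported above.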
Vine copula modeling dependence among cyber risks: A dangerous regulatory paradox
Dependence among different cyber risk classes is a fundamentally underexplored topic in the literature. However, disregarding the dependence structure in cyber risk management leads to inconsistent estimates of potential unintended losses. To bridge this gap, this article adopts a regulatory perspective and develops vine copulas to capture dependence. In quantifying the solvency capital requirement gradient for cyber risk measurement according to Solvency II, a dangerous paradox emerges: an insurance company tends not to provide cyber risk hedging products, as they are excessively expensive and would require premiums so high that it would not be possible to find policyholders.
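The effect driving the paradox, namely dependence inflating tail capital, can be illustrated with a far simpler model than the paper's vines: a single Gaussian copula between two lognormal cyber loss classes, with all parameters hypothetical.

```python
# Illustrative sketch (not the paper's vine-copula model): compare a high
# quantile of aggregate losses under independence vs. strong positive
# dependence induced by a Gaussian copula between two loss classes.
import math, random

def simulate_total_losses(rho, n=50_000, seed=7):
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho ** 2) * z2  # correlate the normals
        totals.append(math.exp(z1) + math.exp(z2))    # lognormal severities
    return sorted(totals)

def value_at_risk(sorted_totals, level=0.995):
    """Empirical quantile at the Solvency II-style 99.5% level."""
    return sorted_totals[int(level * len(sorted_totals))]

v_indep = value_at_risk(simulate_total_losses(rho=0.0))
v_dep = value_at_risk(simulate_total_losses(rho=0.9))
# Ignoring dependence understates the capital requirement: v_dep > v_indep.
```

Vine copulas generalize this pairwise construction to many risk classes with flexible tail behaviour, which is why the resulting capital, and hence the premium, grows so sharply.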