26 research outputs found

    Physiological differences for distinct somatic sensory modalities and sweating among the donor sites of cutaneous and fasciocutaneous free flaps

    Differences in sensation and sweating among the typical donor sites of cutaneous and fasciocutaneous flaps (scapular, lateral arm, radial forearm, groin and dorsalis pedis) were assessed in 30 healthy volunteers (20 males and 10 females) aged 17-62 years (mean 38.2 years). Standard clinical methods were used: Semmes-Weinstein monofilaments to test the light touch threshold, a discriminator and blunt caliper to evaluate static and dynamic two-point discrimination, and the Marstock quantitative method to assess normative values of the warm-cold difference limen and the heat and cold pain thresholds. Spontaneous sweat secretion was observed and documented with the ninhydrin test. We established various physiological differences for distinct somatic sensory modalities and sweating among the body regions (donor sites of cutaneous and fasciocutaneous free flaps).

    Forecasting mortality for small populations by mixing mortality data

    In this paper we address the problem of projecting mortality when data are severely affected by random fluctuations, due in particular to a small sample size, or when data are scanty. Such situations may emerge when dealing with small populations, such as small countries (possibly previously part of a larger country), a specific geographic area of a (large) country, a life annuity portfolio or a pension fund, or when the investigation is restricted to the oldest ages. The critical issues arising from the volatility of data due to the small sample size (especially at the highest ages) may be made worse by missing records; this is the case, for example, for a small country previously part of a larger country, or a specific geographic area of a country, given that in some periods mortality data may have been collected only at an aggregate level. We suggest ‘replicating’ the mortality of the small population by appropriately mixing mortality data obtained from other populations. We design a two-step procedure. First, we obtain the average mortality of ‘neighboring’ populations. Three alternative approaches are tested for assessing the average mortality; the identification and weighting of the neighboring populations are obtained through (standard) optimization techniques. Then, following a sort of credibility approach, we mix the original mortality data of the small population with the average mortality of the neighboring populations. In principle, the approach described in the paper could be adopted for any population, whatever its size, aiming to improve mortality projections through information collected from other groups. Through backtesting, we show that the procedure we suggest is convenient for small populations, but not necessarily for large populations, nor for populations that do not show noticeable erratic effects in their data. This finding can be explained as follows: while replicating the original data increases the sample size, it also smooths the data, with a possible loss of information specific to the group in question. In the case of small populations showing major erratic movements in mortality data, the advantages gained from the larger sample size outweigh the disadvantages of the smoothing effect.
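The two-step procedure in this abstract can be sketched in a few lines. Everything below is a hypothetical illustration, not the paper's calibrated method: the Gompertz-type rates, the noise levels, the equal neighbor weights (which the paper instead obtains by optimization) and the credibility factor `z` are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
ages = np.arange(60, 90)

# Hypothetical Gompertz-type "underlying" death rates for illustration.
q_true = 0.005 * np.exp(0.09 * (ages - 60))

# Small population: rates distorted by strong random fluctuations.
q_small = q_true * rng.lognormal(0.0, 0.25, ages.size)

# Three "neighboring" populations with much milder noise.
q_neigh = np.stack([q_true * rng.lognormal(0.0, 0.05, ages.size)
                    for _ in range(3)])

# Step 1: average mortality of the neighbors (equal weights here;
# the paper identifies and weights the neighbors by optimization).
w = np.full(3, 1.0 / 3.0)
q_avg = w @ q_neigh

# Step 2: credibility-style mix of the small population's own data
# with the neighbors' average (z is a hypothetical credibility factor).
z = 0.3
q_mix = z * q_small + (1.0 - z) * q_avg

# The mixed rates should track the underlying pattern more closely
# than the raw small-population rates do.
err_raw = float(np.mean((q_small - q_true) ** 2))
err_mix = float(np.mean((q_mix - q_true) ** 2))
```

The mix trades a larger effective sample (lower variance) against smoothing away group-specific features, which is exactly the trade-off the abstract's closing sentences describe.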

    Computation of convex bounds for present value functions with random payments

    In this contribution we study the distribution of the present value function of a series of random payments in a stochastic financial environment. Such distributions occur naturally in a wide range of applications within the fields of insurance and finance. We obtain accurate approximations by developing upper and lower bounds, in the convex-order sense, for present value functions. Technically speaking, our methodology is an extension of the results of Dhaene et al. [Insur. Math. Econom. 31(1) (2002) 3–33; Insur. Math. Econom. 31(2) (2002) 133–161] to the case of scalar products of mutually independent random vectors.
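A rough illustration of a convex-order upper bound of the kind this line of work studies: for a present value S = Σ_t c_t·exp(−Y_t) with normally distributed accumulated log-returns Y_t, the comonotonic upper bound S^c has additive quantiles, F⁻¹_{S^c}(p) = Σ_t c_t·F⁻¹_{exp(−Y_t)}(p). The payment stream and return parameters below are invented for the sketch and are not taken from the paper.

```python
import math
from statistics import NormalDist

# Assumed yearly log-return drift and volatility (illustrative only).
mu, sigma = 0.04, 0.12
payments = [100.0] * 10          # ten payments of 100 at t = 1..10
N = NormalDist()

def upper_bound_quantile(p: float) -> float:
    """p-quantile of the comonotonic upper bound of the present value.

    Y_t ~ Normal(mu*t, sigma*sqrt(t)); since exp(-Y_t) is decreasing
    in Y_t, its p-quantile uses the (1 - p)-quantile of Y_t. For the
    comonotonic sum, the quantiles of the terms simply add up.
    """
    total = 0.0
    for t, c in enumerate(payments, start=1):
        y = mu * t + sigma * math.sqrt(t) * N.inv_cdf(1.0 - p)
        total += c * math.exp(-y)
    return total

q50 = upper_bound_quantile(0.50)   # median of the bound
q95 = upper_bound_quantile(0.95)   # tail quantile, relevant for risk
```

The appeal of the bound is visible here: the exact distribution of a sum of dependent lognormal terms is intractable, while the comonotonic bound is a one-line quantile computation.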

    An explicit option-based strategy that outperforms dollar cost averaging

    Dollar cost averaging (DCA) is a widely employed investment strategy in financial markets. At the same time, it is well documented that such a gradual policy is sub-optimal from the point of view of risk-averse decision makers with a fixed investment horizon T > 0. However, an explicit strategy that would be preferred by all risk-averse decision makers had not yet appeared in the literature. In this paper, we give a novel proof of the suboptimality of DCA when (log) returns are governed by Lévy processes, and we construct a dominating strategy explicitly. The optimal strategy we propose is static and consists in purchasing a suitable portfolio of path-independent options. Next, we discuss a market governed by a Brownian motion in more detail. We show that the dominating strategy amounts to setting up a portfolio of power options. We provide evidence that the relative performance of DCA becomes worse in volatile markets, but also give some motivation to support its use. We also analyse DCA in the presence of a minimal guarantee, explore the continuous setting and discuss the (non-)uniqueness of the dominating strategy. © 2012 World Scientific Publishing Company
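A minimal Monte Carlo sketch of the DCA setting can make the abstract's framing concrete. The code below only simulates DCA terminal wealth under i.i.d. Gaussian log-returns (a Brownian special case of the Lévy setting) against a lump-sum benchmark; it does not reproduce the paper's dominating option portfolio, and the drift/volatility parameters are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
T, n_paths = 12, 20_000            # twelve monthly purchases
mu, sigma = 0.005, 0.06            # assumed monthly log-return parameters

# Price paths with S_0 = 1: prices[:, t-1] is S_t for t = 1..T.
log_ret = rng.normal(mu, sigma, (n_paths, T))
prices = np.exp(np.cumsum(log_ret, axis=1))

# DCA: invest 1/T at each date t = 0..T-1, buying at S_0..S_{T-1}.
buy_prices = np.concatenate([np.ones((n_paths, 1)), prices[:, :-1]], axis=1)
units = (1.0 / T) / buy_prices
wealth_dca = units.sum(axis=1) * prices[:, -1]

# Lump-sum benchmark: invest everything at time 0, wealth = S_T / S_0.
wealth_ls = prices[:, -1]

std_dca, std_ls = wealth_dca.std(), wealth_ls.std()
```

Since later DCA contributions spend less time exposed to the market, the terminal-wealth distribution is less dispersed than the lump sum's; the paper's point is that this gradual exposure can nonetheless be dominated, for every risk-averse investor, by an explicit static portfolio of path-independent options.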
