
    Partially Observable Risk-Sensitive Stopping Problems in Discrete Time

    In this paper we consider stopping problems with partial observation under a general risk-sensitive optimization criterion, for problems with finite and infinite time horizons. Our aim is to maximize the certainty equivalent of the stopping reward. We develop a general theory and discuss the Bayesian risk-sensitive house selling problem as a special example. In particular, we are able to study the influence of the decision maker's attitude towards risk on the optimal stopping rule.
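    To illustrate how such a risk-sensitive stopping rule can be computed, the following sketch runs backward induction for a house-selling problem with exponential utility. The offer distribution, horizon, and parameter values are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def house_selling_thresholds(gamma, horizon=20, n_mc=100_000, seed=0):
        """Backward induction for a risk-sensitive house-selling problem:
        i.i.d. offers, exponential utility U(y) = -exp(-gamma*y), and the
        goal of maximizing the certainty equivalent of the accepted offer.
        Returns the acceptance threshold for each stage (earliest first).
        The Exp(1) offer law and all parameters are illustrative only."""
        rng = np.random.default_rng(seed)
        offers = rng.exponential(scale=1.0, size=n_mc)  # assumed offer law

        def U(y):
            return -np.exp(-gamma * y)

        def U_inv(u):
            return -np.log(-u) / gamma

        V = U(offers).mean()             # at the horizon, the last offer must be taken
        thresholds = []
        for _ in range(horizon):
            thresholds.append(U_inv(V))  # accept any offer above this level
            V = np.maximum(U(offers), V).mean()
        return thresholds[::-1]          # reorder so the earliest stage comes first
    ```

    Raising the risk-aversion parameter gamma lowers every threshold: a more risk-averse seller accepts sooner, which is exactly the kind of dependence of the stopping rule on risk attitude the paper studies.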

    Extremal Behavior of Long-Term Investors with Power Utility

    We consider a Bayesian financial market with one bond and one stock where the aim is to maximize the expected power utility from terminal wealth. The solution of this problem is known; however, there are some conjectures in the literature about the long-term behavior of the optimal strategy. In this paper we prove that for a positive coefficient in the power utility the long-term investor is very optimistic and behaves as if the best drift had been realized. In case the coefficient in the power utility is negative, the long-term investor is very pessimistic and behaves as if the worst drift had been realized.

    Partially Observable Risk-Sensitive Markov Decision Processes

    We consider the problem of minimizing a certainty equivalent of the total or discounted cost over a finite and an infinite time horizon which is generated by a Partially Observable Markov Decision Process (POMDP). The certainty equivalent is defined by $U^{-1}(E[U(Y)])$, where $U$ is an increasing function. In contrast to a risk-neutral decision maker, this optimization criterion takes the variability of the cost into account. It contains as a special case the classical risk-sensitive optimization criterion with an exponential utility. We show that this optimization problem can be solved by embedding it into a completely observable Markov Decision Process with extended state space, and we give conditions under which an optimal policy exists. The state space has to be extended by the joint conditional distribution of the current unobserved state and the accumulated cost. In the case of an exponential utility, the problem simplifies considerably and we rediscover what has been named the information state in previous literature. However, since we do not use any change-of-measure techniques here, our approach is simpler. A small numerical example, namely the classical repeated casino game with unknown success probability, is considered to illustrate the influence of the certainty equivalent and its parameters.
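    To make the criterion concrete, here is a minimal sketch of the certainty equivalent $U^{-1}(E[U(Y)])$ for a cost $Y$ with the exponential utility, which recovers the classical entropic risk measure and penalizes variability. The sample distributions and parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def certainty_equivalent(samples, U, U_inv):
        """Certainty equivalent U^{-1}(E[U(Y)]) of a sampled cost Y."""
        return U_inv(np.mean(U(samples)))

    # Exponential utility for a cost: risk aversion gamma > 0 yields the
    # entropic risk measure (1/gamma) * log E[exp(gamma * Y)].
    gamma = 0.5
    U = lambda y: np.exp(gamma * y)
    U_inv = lambda u: np.log(u) / gamma

    rng = np.random.default_rng(1)
    flat = np.full(100_000, 2.0)             # deterministic cost of 2
    noisy = rng.normal(2.0, 1.0, 100_000)    # same mean cost, but variable
    ```

    A risk-neutral decision maker is indifferent between the two cost streams, since their means agree; the certainty equivalent charges the noisy one roughly gamma*sigma^2/2 extra, which is the variability penalty mentioned in the abstract.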

    Optimal Risk Allocation in Reinsurance Networks

    In this paper we consider reinsurance or risk sharing from a macroeconomic point of view. Our aim is to find socially optimal reinsurance treaties. In our setting we assume that there are n insurance companies, each bearing a certain risk, and one representative reinsurer. The optimization problem is to minimize the sum of all capital requirements of the insurers, where we assume that all insurance companies use a form of Range-Value-at-Risk. We show that in case all insurers use Value-at-Risk and the reinsurer's premium principle satisfies monotonicity, layer reinsurance treaties are socially optimal. For this result we do not need any dependence structure between the risks. In the general setting with Range-Value-at-Risk we again obtain the optimality of layer reinsurance treaties under further assumptions, in particular under the assumption that the individual risks are positively dependent through the stochastic ordering. Finally, we discuss the difference between socially optimal reinsurance treaties and individually optimal ones by looking at a number of special cases.
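    A layer treaty and its effect on the cedent's Value-at-Risk can be sketched as follows; the loss distribution and layer bounds are assumptions chosen for illustration, not values from the paper.

    ```python
    import numpy as np

    def layer(x, a, b):
        """Layer (excess-of-loss) treaty: the reinsurer covers the part
        of the loss between the attachment point a and the exit point b."""
        return np.clip(x - a, 0.0, b - a)

    def value_at_risk(sample, alpha):
        """Empirical Value-at-Risk at confidence level alpha."""
        return np.quantile(sample, alpha)

    rng = np.random.default_rng(2)
    loss = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # assumed loss law
    retained = loss - layer(loss, a=1.0, b=5.0)              # insurer keeps the rest
    ```

    Ceding the layer reduces the insurer's capital requirement: at confidence levels above the attachment probability, the VaR of the retained loss drops by exactly the part of the layer that is used.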

    Stochastic Optimal Growth Model with Risk Sensitive Preferences

    This paper studies a one-sector optimal growth model with i.i.d. productivity shocks that are allowed to be unbounded. The utility function is assumed to be non-negative and unbounded from above. The novel feature in our framework is that the agent has risk sensitive preferences in the sense of Hansen and Sargent (1995). Under mild assumptions imposed on the productivity and utility functions, we prove that the maximal discounted non-expected utility in the infinite time horizon satisfies the optimality equation and that the agent possesses a stationary optimal policy. A new ingredient in our analysis is an inequality for so-called associated random variables. We also establish the Euler equation that incorporates the solution to the optimality equation.
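    As a hedged sketch of the optimality equation in this setting (the notation here is ours, not taken from the paper): with discount factor $\beta \in (0,1)$, risk-sensitivity parameter $\gamma > 0$ and i.i.d. shocks $\xi$, the risk-sensitive Bellman equation replaces the usual expectation by an entropic recursion,

    ```latex
    v(x) \;=\; \max_{0 \le a \le x}
        \Big\{ u(x - a) \;-\; \frac{\beta}{\gamma}
               \log \mathbb{E}\big[\exp\big(-\gamma\, v(f(a,\xi))\big)\big] \Big\},
    ```

    where $a$ is the amount invested, $f$ the production function and $u$ the period utility; letting $\gamma \to 0$ recovers the risk-neutral equation $v(x) = \max\{u(x-a) + \beta\, \mathbb{E}[v(f(a,\xi))]\}$.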

    Zero-sum Risk-Sensitive Stochastic Games

    In this paper we consider two-person zero-sum risk-sensitive stochastic dynamic games with Borel state and action spaces and bounded reward. The term risk-sensitive refers to the fact that instead of the usual risk-neutral optimization criterion we consider the exponential certainty equivalent. The discounted reward case on a finite and an infinite time horizon is considered, as well as the ergodic reward case. Under continuity and compactness conditions, we prove that the value of the game exists and solves the Shapley equation, and we show the existence of optimal (non-stationary) strategies. In the ergodic reward case we work with a local minorization property and a Lyapunov condition and show that the value of the game solves the Poisson equation. Moreover, we prove the existence of optimal stationary strategies. A simple example highlights the influence of the risk-sensitivity parameter. Our results generalize findings in Basu/Ghosh (2014) and answer an open question posed there.

    Risk-Sensitive Dividend Problems

    We consider a discrete-time version of the popular optimal dividend pay-out problem in risk theory. The novel aspect of our approach is that we allow for a risk averse insurer, i.e., instead of maximising the expected discounted dividends until ruin we maximise the expected utility of discounted dividends until ruin. This task was proposed as an open problem in H. Gerber and E. Shiu (2004). The model in a continuous-time Brownian motion setting with the exponential utility function was analysed in P. Grandits, F. Hubalek, W. Schachermayer and M. Zigo (2007); nevertheless, a complete solution was not provided. In this work we instead solve the problem in a discrete-time setup for the exponential and the power utility functions and give the structure of optimal history-dependent dividend policies. We make use of certain ideas studied earlier in N. Bäuerle and U. Rieder (2013), where Markov decision processes with general utility functions were treated. Our analysis, however, includes new aspects, since the reward functions in this case are not bounded.

    Portfolio Optimization in Fractional and Rough Heston Models

    We consider a fractional version of the Heston volatility model which is inspired by [16]. Within this model we treat portfolio optimization problems for power utility functions. Using a suitable representation of the fractional part, followed by a reasonable approximation, we show that it is possible to cast the problem into the classical stochastic control framework. This approach is generic for fractional processes. We derive explicit solutions and obtain as a by-product the Laplace transform of the integrated volatility. In order to get rid of some undesirable features, we introduce a new model for the rough-path scenario which is based on the Marchaud fractional derivative. We provide a numerical study to underline our results.

    Optimal Dividend Payout Model with Risk Sensitive Preferences

    We consider a discrete-time dividend payout problem with risk sensitive shareholders. It is assumed that they are equipped with a risk aversion coefficient and construct their discounted payoff with the help of the exponential premium principle. This leads to a non-expected recursive utility of the dividends. Within such a framework, not only the expected value of the dividends is taken into account but also their variability. Our approach is motivated by a remark in Gerber and Shiu (2004). We deal with the finite and infinite time horizon problems and prove that, even in a general setting, the optimal dividend policy is a band policy. We also show that the policy improvement algorithm can be used to obtain the optimal policy and the corresponding value function. Next, an explicit example is provided in which an optimal policy of barrier type is shown to exist. Finally, we present some numerical studies and discuss the influence of the risk-sensitivity parameter on the optimal dividend policy.
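    The recursive utility described above can be sketched as follows (our notation, an illustrative sketch rather than the paper's exact construction): if $d_t$ is the dividend paid at time $t$, $\beta$ a discount factor and $\gamma > 0$ the risk aversion coefficient, the shareholders value the dividend stream through the exponential-premium recursion

    ```latex
    J_t \;=\; d_t \;-\; \frac{\beta}{\gamma}
          \log \mathbb{E}_t\big[\exp\big(-\gamma\, J_{t+1}\big)\big],
    ```

    so that, beyond the conditional mean of $J_{t+1}$, its variability enters with a negative sign; letting $\gamma \to 0$ recovers the risk-neutral expected discounted dividends.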

    Optimal Control of Partially Observable Piecewise Deterministic Markov Processes

    In this paper we consider a control problem for a Partially Observable Piecewise Deterministic Markov Process of the following type: after a jump of the process the controller receives a noisy signal about the state, and the aim is to control the process continuously in time in such a way that the expected discounted cost of the system is minimized. We solve this optimization problem by reducing it to a discrete-time Markov Decision Process, which includes the derivation of a filter for the unobservable state. Imposing sufficient continuity and compactness assumptions, we are able to prove the existence of optimal policies and show that the value function satisfies a fixed point equation. A generic application is given to illustrate the results.