
    Option pricing in affine generalized Merton models

    In this article we consider affine generalizations of the Merton jump diffusion model [Merton, J. Fin. Econ., 1976] and the corresponding pricing of European options. On the one hand, the Brownian motion part of the Merton model may be generalized to a log-Heston model; on the other hand, the jump part may be generalized to an affine process with possibly state-dependent jumps. While the characteristic function of the log-Heston component is known in closed form, the characteristic function of the second component may not be known explicitly. For the latter component we propose an approximation procedure based on the method introduced in [Belomestny et al., J. Func. Anal., 2009]. We conclude with some numerical examples.
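
    For readers unfamiliar with characteristic-function pricing, the sketch below prices a European call in the plain Merton jump-diffusion case (not the affine/log-Heston generalization of the article) by Gil-Pelaez-style Fourier inversion; the quadrature and all parameter values are illustrative assumptions, not the authors' method.

```python
# Minimal sketch: European call under the plain Merton jump-diffusion model,
# priced from its characteristic function by numerical Fourier inversion.
# Swapping `merton_cf` for another model's characteristic function is the only change needed.
import numpy as np

def merton_cf(u, S0, r, T, sigma, lam, muJ, sigJ):
    """Risk-neutral characteristic function of ln(S_T) in the Merton model."""
    kappa = np.exp(muJ + 0.5 * sigJ**2) - 1.0            # mean relative jump size
    drift = np.log(S0) + (r - 0.5 * sigma**2 - lam * kappa) * T
    diff  = -0.5 * sigma**2 * u**2 * T                    # Brownian part
    jump  = lam * T * (np.exp(1j * u * muJ - 0.5 * sigJ**2 * u**2) - 1.0)
    return np.exp(1j * u * drift + diff + jump)

def call_price(S0, K, r, T, sigma, lam, muJ, sigJ, u_max=200.0, n=4000):
    """C = S0*P1 - K*exp(-rT)*P2, with P1 and P2 from numerical inversion."""
    u = np.linspace(1e-6, u_max, n)
    cf  = lambda v: merton_cf(v, S0, r, T, sigma, lam, muJ, sigJ)
    lnK = np.log(K)
    p2_int = np.real(np.exp(-1j * u * lnK) * cf(u) / (1j * u))
    p1_int = np.real(np.exp(-1j * u * lnK) * cf(u - 1j) / (1j * u * cf(-1j)))
    P1 = 0.5 + np.trapz(p1_int, u) / np.pi
    P2 = 0.5 + np.trapz(p2_int, u) / np.pi
    return S0 * P1 - K * np.exp(-r * T) * P2

# Illustrative parameters only.
print(call_price(S0=100, K=100, r=0.05, T=1.0, sigma=0.2, lam=0.3, muJ=-0.1, sigJ=0.15))
```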

    Optimal Investment in the Development of Oil and Gas Field

    Suppose an oil and gas field consists of clusters, in each of which an investor can launch at most one project. For every candidate project all characteristics are known in advance, including annual production volumes, required investment, and profit. The total amount of investment the investor can spend on developing the field over the entire planning period is also known. The task is to determine which project to implement in each cluster so that, within the total investment budget, the profit over the entire planning period is maximized. The problem under consideration is NP-hard; however, it can be solved by dynamic programming in pseudopolynomial time. In practice there are additional constraints that prevent solving the problem with acceptable accuracy in a reasonable time; one such restriction is the limit on annual production volumes. In this paper we consider only the upper bounds dictated by pipeline capacity. For the investment optimization problem with these additional restrictions, we obtain qualitative results, propose an approximate algorithm, and investigate its properties. A numerical experiment indicates that the developed algorithm builds a solution close (in terms of the objective function) to the optimal one.
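
    Read as a multiple-choice knapsack over clusters and the total investment budget, the pseudopolynomial dynamic programme mentioned above might look like the sketch below; this is an assumed reading with integer investment units and without the annual production (pipeline) constraints that make the real problem hard, and the instance data are made up.

```python
# Hypothetical sketch of the pseudopolynomial DP: each cluster contributes at most
# one project and the state is the available total investment budget.
def max_profit(clusters, budget):
    """clusters: list of lists of (cost, profit) pairs, one inner list per cluster;
    budget: total available investment in integer units."""
    dp = [0] * (budget + 1)                  # dp[b] = best profit achievable with budget b
    for projects in clusters:
        new_dp = dp[:]                       # option: launch no project in this cluster
        for cost, profit in projects:
            for b in range(cost, budget + 1):
                cand = dp[b - cost] + profit
                if cand > new_dp[b]:
                    new_dp[b] = cand
        dp = new_dp                          # running time O(budget * number of projects)
    return dp[budget]

# Toy instance: two clusters, three candidate projects each, budget of 10 units.
clusters = [[(4, 7), (6, 9), (3, 4)], [(5, 8), (2, 3), (7, 10)]]
print(max_profit(clusters, 10))              # -> 15 (e.g. projects (4, 7) and (5, 8))
```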

    Competitive portfolio selection using stochastic predictions

    We study a portfolio selection problem where a player attempts to maximise a utility function that represents the growth rate of wealth. We show that, given some stochastic predictions of the asset prices in the next time step, a sublinear expected regret is attainable against an optimal greedy algorithm, subject to a trade-off against the "accuracy" of such predictions, which learn (or improve) over time. We also study the effects of introducing transaction costs into the model.
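
    As a rough illustration of the greedy benchmark that the regret is measured against, the sketch below picks, at each step, the portfolio weights on the simplex that maximise expected log-growth under a handful of sampled price-relative predictions; the sampling model, the scipy optimiser, and all numbers are assumptions, not the authors' algorithm.

```python
# Minimal sketch: one-step greedy portfolio choice from stochastic predictions.
import numpy as np
from scipy.optimize import minimize

def greedy_weights(predicted_relatives):
    """predicted_relatives: (n_samples, n_assets) array of predicted price relatives."""
    n_assets = predicted_relatives.shape[1]
    def neg_log_growth(w):
        # expected log-growth of wealth under the sampled predictions
        return -np.mean(np.log(predicted_relatives @ w + 1e-12))
    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * n_assets
    w0 = np.full(n_assets, 1.0 / n_assets)
    res = minimize(neg_log_growth, w0, bounds=bounds, constraints=cons)
    return res.x

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=0.05, size=(500, 3))    # toy predictions
w = greedy_weights(samples)
realized = rng.lognormal(mean=0.0, sigma=0.05, size=3)          # toy realised relatives
print(w, np.log(w @ realized))   # chosen weights and realised log-growth
```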

    Optimal leverage from non-ergodicity

    In modern portfolio theory, the balancing of expected returns on investments against uncertainties in those returns is aided by the use of utility functions. The Kelly criterion offers another approach, rooted in information theory, that always implies logarithmic utility. The two approaches seem incompatible, too loosely or too tightly constraining investors' risk preferences from their respective perspectives. The conflict can be understood on the basis that the multiplicative models used in both approaches are non-ergodic, which leads to ensemble-average returns differing from time-average returns in single realizations. The classic treatments, from the very beginning of probability theory, use ensemble averages, whereas the Kelly result is obtained by considering time averages. Maximizing the time-average growth rate for an investment defines an optimal leverage, whereas growth rates derived from ensemble-average returns depend linearly on leverage. The latter measure can thus incentivize investors to maximize leverage, which is detrimental to time-average growth and overall market stability. The Sharpe ratio is insensitive to leverage; its relation to optimal leverage is discussed. A better understanding of the significance of time irreversibility and non-ergodicity and the resulting bounds on leverage may help policy makers in reshaping financial risk controls.
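
    For a concrete picture of the argument, the sketch below simulates a single long trajectory of leveraged geometric Brownian motion and compares the ensemble-average return, which rises linearly in leverage, with the realised time-average growth rate, which peaks at l* = (mu - r)/sigma^2; the GBM assumption and all parameter values are illustrative.

```python
# Sketch: ensemble-average vs time-average growth of a leveraged GBM investment.
# Under GBM the time-average growth rate is g(l) = r + l*(mu - r) - 0.5*(l*sigma)**2,
# maximised at l_opt = (mu - r)/sigma**2, while the expected (ensemble-average)
# return r + l*(mu - r) keeps increasing with leverage l.
import numpy as np

mu, r, sigma = 0.08, 0.02, 0.20              # illustrative drift, risk-free rate, volatility
l_opt = (mu - r) / sigma**2                  # here 1.5

rng = np.random.default_rng(1)
T, dt = 200.0, 1.0 / 52                      # one long trajectory, weekly steps
n = int(T / dt)
dW = rng.normal(0.0, np.sqrt(dt), n)

for lev in np.linspace(0.0, 4.0, 9):
    # exact log-wealth increments of a constantly rebalanced portfolio with leverage lev
    dlogx = (r + lev * (mu - r) - 0.5 * (lev * sigma) ** 2) * dt + lev * sigma * dW
    time_avg = dlogx.sum() / T               # realised time-average growth rate
    ensemble_avg = r + lev * (mu - r)        # expected (ensemble-average) return
    print(f"l={lev:.1f}  ensemble={ensemble_avg:.3f}  time-avg={time_avg:.3f}")
print("optimal leverage l* =", l_opt)
```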

    A numerical study on the evolution of portfolio rules

    In this paper we computationally test the performance of CAPM in an evolutionary setting. In particular, we study the stability of the distribution of wealth in a financial market where some traders invest as prescribed by CAPM and others behave according to different portfolio rules. Our study is motivated by recent analytical results showing that, whenever a logarithmic utility maximiser enters the market, CAPM traders vanish in the long run. Our analysis provides further insights and extends these results. We simulate a sequence of trades in a financial market and, first, address the issue of how long the long run is in different parametric settings; second, we study the effect of heterogeneous savings behaviour on asymptotic wealth shares. We find that CAPM is particularly “unfit” for highly risky environments.
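
    A stripped-down simulation of this kind, two traders repeatedly splitting their wealth across Arrow securities with prices clearing at the wealth-weighted average of their rules, already reproduces the headline effect that the log-optimal ("bet your beliefs") rule absorbs the market's wealth; the two-state market, the rival rule, and all numbers below are illustrative assumptions, not the paper's CAPM traders or parameterisation.

```python
# Sketch of an evolutionary portfolio-rule experiment with two fixed-mix traders.
import numpy as np

p = np.array([0.6, 0.4])                     # true state probabilities
rules = np.array([[0.6, 0.4],                # trader 0: log-optimal rule (bets the truth)
                  [0.8, 0.2]])               # trader 1: some other fixed-mix rule
shares = np.array([0.5, 0.5])                # initial wealth shares

rng = np.random.default_rng(42)
for t in range(5000):
    prices = shares @ rules                  # market-clearing prices of the two securities
    s = rng.choice(2, p=p)                   # realised state
    shares = shares * rules[:, s] / prices[s]  # only holders of security s are paid

print("final wealth shares:", np.round(shares, 4))   # trader 0 tends to 1 in the long run
```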

    Dynamic input demand functions and resource adjustment for US agriculture: state evidence

    The paper presents an econometric model of dynamic agricultural input demand functions that include research-based technical change and autoregressive disturbances, and fits the model to annual data for a set of state aggregates pooled over 1950–1982. The methodological approach is one of developing a theoretical foundation for a dynamic input demand system and accepting state aggregate behavior as approximated by nonlinear adjustment costs and long-term profit maximization. Although other studies have largely ignored autocorrelation in dynamic input demand systems, the results show shorter adjustment lags with autocorrelation than without. Dynamic input demand own-price elasticities for the six input groups are inelastic, and the demand functions possess significant cross-price and research effects.
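
    The finding that modelling autocorrelation shortens estimated adjustment lags has a textbook mechanism behind it: in a partial-adjustment equation with a lagged dependent variable, positively autocorrelated errors bias the OLS coefficient on the lag upward, i.e. toward slower adjustment. The toy simulation below illustrates only that mechanism; it is not the paper's model, and all parameter values are made up.

```python
# Sketch: partial-adjustment input demand x_t = lam*x_{t-1} + b*z_t + u_t with AR(1)
# disturbances u_t = rho*u_{t-1} + e_t. OLS that ignores the autocorrelation
# overstates lam (implying longer adjustment lags than the true process has).
import numpy as np

rng = np.random.default_rng(7)
T, lam, b, rho = 5000, 0.4, 1.0, 0.6
z = rng.normal(size=T)
u = np.zeros(T)
x = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + rng.normal(scale=0.5)
    x[t] = lam * x[t - 1] + b * z[t] + u[t]

# OLS of x_t on (x_{t-1}, z_t), ignoring the AR(1) error structure
X = np.column_stack([x[:-1], z[1:]])
coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
print("true lambda:", lam, " OLS lambda ignoring AR(1):", round(coef[0], 3))
```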

    Estimating bank default with generalised extreme value regression models

    The paper proposes a novel model for the prediction of bank failures, on the basis of both macroeconomic and bank-specific microeconomic factors. As bank failures are rare, we apply a regression method for binary data based on extreme value theory, which turns out to be more effective than classical logistic regression models, as it better leverages the information in the tail of the default distribution. The application of this model to the occurrence of bank defaults in a highly bank-dependent economy (Italy) shows that, while microeconomic factors as well as regulatory capital are significant in explaining failures proper, macroeconomic factors are relevant only when failures are defined not only in terms of actual defaults but also in terms of mergers and acquisitions. In terms of predictive accuracy, the model based on extreme value theory outperforms classical logistic regression models.
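
    The core idea, replacing the symmetric logistic link with an asymmetric generalised extreme value link so that rare events in the tail carry more weight, can be sketched as a maximum-likelihood fit; the parameterisation below follows the common GEV-link formulation, and the simulated data, starting values, and optimiser settings are illustrative assumptions rather than the paper's specification.

```python
# Sketch: binary regression with a GEV link for rare defaults,
# P(default | x) = exp(-(1 + xi * x'beta)^(-1/xi)), fitted by maximum likelihood.
import numpy as np
from scipy.optimize import minimize

def gev_prob(eta, xi):
    z = np.maximum(1.0 + xi * eta, 1e-10)        # support constraint 1 + xi*eta > 0
    return np.exp(-z ** (-1.0 / xi))

def neg_loglik(params, X, y):
    xi, beta = params[0], params[1:]
    p = np.clip(gev_prob(X @ beta, xi), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Toy data: intercept, one macro factor, one bank-specific factor; defaults are rare.
rng = np.random.default_rng(3)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
true_p = gev_prob(X @ np.array([-2.0, 0.5, 0.8]), xi=-0.25)
y = rng.binomial(1, true_p)

start = np.concatenate([[-0.2], np.zeros(X.shape[1])])
fit = minimize(neg_loglik, start, args=(X, y), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
print("estimated xi and beta:", np.round(fit.x, 3))
```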

    Problems with Using the Normal Distribution – and Ways to Improve Quality and Efficiency of Data Analysis

    Background: The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by x̄ ± SD, or with the standard error of the mean, x̄ ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Methodology/Principal Findings: Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the “95% range check”, their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are in general far more important than additive ones, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similarly to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both a new sign, ×/ (“times-divide”), and the corresponding notation. Analogous to x̄ ± SD, it connects the multiplicative (or geometric) mean x* and the multiplicative standard deviation s* in the form x* ×/ s*, which is advantageous and recommended. Conclusions/Significance: The corresponding shift from the symmetric to the asymmetric view will substantially increase the quality and efficiency of data analysis.
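
    A small numeric illustration of the x* ×/ s* notation: the geometric mean and multiplicative standard deviation are computed on the log scale and reported back on the original scale, where dividing and multiplying by s* gives the asymmetric 68%-type range. The simulated log-normal sample below is purely illustrative.

```python
# Sketch: multiplicative summary x* (times/divide) s* for positive, skewed data.
import numpy as np

rng = np.random.default_rng(0)
data = rng.lognormal(mean=1.0, sigma=0.5, size=1000)   # positive, right-skewed sample

logs = np.log(data)
x_star = np.exp(logs.mean())           # geometric (multiplicative) mean x*
s_star = np.exp(logs.std(ddof=1))      # multiplicative standard deviation s*

print(f"x* = {x_star:.2f}, s* = {s_star:.2f}")
print("68% range:", (x_star / s_star, x_star * s_star))          # x* ×/ s*
print("95% range:", (x_star / s_star**2, x_star * s_star**2))    # x* ×/ (s*)^2
```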