82 research outputs found

    Contrasting two approaches in real options valuation: contingent claims versus dynamic programming

    This paper compares two well-known approaches for valuing a risky investment using real options theory: contingent claims (CC) with risk-neutral valuation and dynamic programming (DP) using a constant risk-adjusted discount rate. Both approaches have been used in valuing forest assets. A proof is presented which shows that, except under certain restrictive assumptions, DP using a constant discount rate and CC will not yield the same answers for investment value. A few special cases are considered for which CC and DP with a constant discount rate are consistent with each other. An optimal tree harvesting example is presented to illustrate that the values obtained using the two approaches can differ when we depart from these special cases to a more realistic scenario. Further, the implied risk-adjusted discount rate calculated from CC is found to vary with the stochastic state variable and stand age. We conclude that for real options problems the CC approach should be used.
    Keywords: optimal tree harvesting, real options, contingent claims, dynamic programming
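
    The gap between the two valuations can be made concrete with a one-period binomial example. The following sketch, with purely hypothetical numbers not taken from the paper, prices the same option-like investment payoff once by contingent claims (risk-neutral expectation discounted at the risk-free rate) and once by dynamic programming (actual expectation discounted at a constant risk-adjusted rate).

```python
# Minimal one-period binomial sketch (all numbers hypothetical) contrasting
# contingent-claims (risk-neutral) valuation with dynamic programming under a
# constant risk-adjusted discount rate.

S0, u, d = 100.0, 1.25, 0.80      # current asset value and up/down multipliers
r, rho, p = 0.05, 0.10, 0.60      # risk-free rate, risk-adjusted rate, actual up-probability
K = 100.0                         # exercise (investment) cost

payoff = lambda s: max(s - K, 0.0)  # option-like payoff, nonlinear in the state

# Contingent claims: discount the risk-neutral expectation at the risk-free rate.
q = ((1 + r) - d) / (u - d)
v_cc = (q * payoff(S0 * u) + (1 - q) * payoff(S0 * d)) / (1 + r)

# Dynamic programming: discount the actual expectation at a constant risk-adjusted rate.
v_dp = (p * payoff(S0 * u) + (1 - p) * payoff(S0 * d)) / (1 + rho)

print(f"CC value: {v_cc:.2f}, DP value: {v_dp:.2f}")  # generally not equal
```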

    Inference about Clustering and Parametric Assumptions in Covariance Matrix Estimation

    Selecting an estimator for the variance-covariance matrix is an important step in hypothesis testing. From less robust to more robust, the available choices include: Eicker/White heteroskedasticity-robust standard errors, Newey and West heteroskedasticity-and-autocorrelation-robust standard errors, and cluster-robust standard errors. The rationale for using a less robust covariance matrix estimator is that tests conducted with it can have better power properties. This motivates tests that examine the appropriate level of robustness in covariance matrix estimation. We propose a new robustness testing strategy, and show that it can dramatically improve inference about the proper level of robustness in covariance matrix estimation. Our main focus is on inference about clustering, although the proposed robustness testing strategy can also improve inference about parametric assumptions in covariance matrix estimation, which we demonstrate for the case of testing for heteroskedasticity. We also show why the existing clustering test and other applications of the White (1980) robustness testing approach perform poorly, which to our knowledge has not been well understood. The insight into why this existing testing approach performs poorly is also the basis for the proposed robustness testing strategy.
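
    As a point of reference for the estimators the abstract ranks, the sketch below (simulated data, not the paper's proposed test) fits one OLS regression with statsmodels and reports classical, Eicker/White heteroskedasticity-robust, and cluster-robust standard errors for the same slope.

```python
# Sketch (simulated data) of the covariance estimators the abstract compares:
# classical, heteroskedasticity-robust (HC1), and cluster-robust standard errors.
# This illustrates the estimators only, not the paper's proposed robustness test.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_clusters, per_cluster = 50, 20
groups = np.repeat(np.arange(n_clusters), per_cluster)
x = rng.normal(size=groups.size)
cluster_effect = rng.normal(size=n_clusters)[groups]   # within-cluster dependence
y = 1.0 + 0.5 * x + cluster_effect + rng.normal(size=groups.size)

X = sm.add_constant(x)
ols = sm.OLS(y, X)

fit_classic = ols.fit()                                 # homoskedastic covariance
fit_hc1 = ols.fit(cov_type="HC1")                       # Eicker/White robust
fit_cluster = ols.fit(cov_type="cluster", cov_kwds={"groups": groups})

for name, f in [("classical", fit_classic), ("HC1", fit_hc1), ("cluster", fit_cluster)]:
    print(f"{name:9s} se(slope) = {f.bse[1]:.4f}")
```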

    A Simple Model of the Nominal Term Structure of Interest Rates

    This paper presents a simple two-factor model of the nominal term structure of interest rates, in which the log-price kernel has an autoregressive drift process and a nonlinear GARCH volatility process. With these two state-variable processes, closed-form solutions are derived for zero-coupon bond prices as well as the yield to maturity for a given time to maturity.
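
    The abstract does not reproduce the closed forms, but the quantity they ultimately deliver follows the standard mapping from a zero-coupon bond price to its continuously compounded yield to maturity; a minimal sketch with hypothetical prices is shown below.

```python
import numpy as np

def yield_to_maturity(price: float, tau: float) -> float:
    """Continuously compounded yield implied by a zero-coupon bond price.

    P(tau) = exp(-y * tau)  =>  y = -ln(P(tau)) / tau
    """
    return -np.log(price) / tau

# Hypothetical prices at 1, 5, and 10 years, as a model's closed form would produce.
for tau, p in [(1.0, 0.97), (5.0, 0.85), (10.0, 0.70)]:
    print(f"tau = {tau:4.1f}y  P = {p:.2f}  y = {yield_to_maturity(p, tau):.4f}")
```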

    An Empirical Characteristic Function Approach to VaR under a Mixture of Normal Distribution with Time-Varying Volatility

    This paper considers Value at Risk measures constructed under a discrete mixture of normal distributions on the innovations with time-varying volatility, or MN-GARCH, model. We adopt an approach based on the continuous empirical characteristic function to estimate the parameters of the model using several daily foreign exchange rate return series. This approach has several advantages as a method for estimating the MN-GARCH model. In particular, under certain weighting measures, a closed-form objective distance function for estimation is obtained. This reduces the computational burden considerably. In addition, the characteristic function, unlike its likelihood function counterpart, is always uniformly bounded over the parameter space due to the Fourier transformation. To evaluate the VaR estimates obtained from alternative specifications, we construct several measures, such as the number of violations, the average size of violations, the sum of squared violations and the expected size of violations. Based on these measures, we find that the VaR measures obtained from the MN-GARCH model outperform those obtained from other competing models.
    Keywords: Value at Risk; Mixture of Normals; GARCH; Characteristic Function.
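
    The evaluation measures listed in the abstract are straightforward to compute from a series of VaR forecasts and realized returns. The sketch below uses simulated data and one plausible definition of the expected size of violations; it illustrates the backtesting measures only, not the MN-GARCH model or the characteristic function estimator itself.

```python
import numpy as np

def var_backtest(returns: np.ndarray, var_forecasts: np.ndarray) -> dict:
    """Violation-based measures for VaR forecasts.

    `var_forecasts` are quoted as positive loss thresholds, so a violation
    occurs when the realized return falls below -VaR.
    """
    shortfall = -var_forecasts - returns          # positive where return < -VaR
    violations = shortfall > 0
    sizes = shortfall[violations]
    return {
        "n_violations": int(violations.sum()),
        "avg_size": float(sizes.mean()) if sizes.size else 0.0,
        "sum_sq": float((sizes ** 2).sum()),
        # one possible definition: violation size averaged over all days
        "expected_size": float(sizes.sum() / returns.size),
    }

rng = np.random.default_rng(1)
rets = rng.standard_t(df=5, size=1000) * 0.01     # hypothetical daily returns
var99 = np.full(rets.size, 0.025)                 # hypothetical constant 99% VaR forecast
print(var_backtest(rets, var99))
```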

    The Proportion of Females in the Establishment: Discrimination, Preferences and Technology

    This paper examines determinants of the proportion of females in the establishment, as this variable can affect the male-female wage gap in an important way. Our search for the determinants is guided by two views of the labour market, namely discrimination and coincidence of needs between firms and workers. Results suggest that establishments have a higher proportion of females when employment is higher during the school year and employment turnover is higher; when the demand for the output is more stable; when the proportion of white collar employees is higher; and when the local labour market is smaller. This suggests that public policy based on one view of how the labour market works may produce unintended results that will not necessarily improve the welfare of the very groups targeted.
    Keywords: Gender Wage Gap, Wage Decomposition Techniques, Determinants of the Proportion of Females in the Establishment

    The Impact of Stochastic Convenience Yield on Long-term Forestry Investment Decisions

    This paper investigates whether convenience yield is an important factor in determining optimal decisions for a forestry investment. The Kalman filter method is used to estimate three different models of lumber prices: a mean reverting model, a simple geometric Brownian motion and the two-factor price model due to Schwartz (1997). In the latter model there are two correlated stochastic factors: spot price and convenience yield. The two-factor model is shown to provide a reasonable fit of the term structure of lumber futures prices. The impact of convenience yield on a forestry investment decision is examined using the Schwartz (1997) long-term model which transforms the two-factor price model into a single factor model with a composite price. Using the long-term model an optimal harvesting problem is analyzed, which requires the numerical solution of an impulse control problem formulated as a Hamilton-Jacobi-Bellman Variational Inequality. We compare the results for the long-term model to those from single-factor mean reverting and geometric Brownian motion models. The inclusion of convenience yield through the long-term model is found to have a significant impact on land value and optimal harvesting decisions.
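
    To illustrate the estimation machinery, here is a minimal one-dimensional Kalman filter for a discretised mean-reverting log-price observed with noise, the simplest of the three models the abstract estimates. The parameters and simulated data are hypothetical, and the two-factor Schwartz (1997) filter is not reproduced.

```python
import numpy as np

def kalman_filter_ou(y, kappa, mu, sigma, obs_sd, dt=1/52):
    """One-dimensional Kalman filter for a discretised mean-reverting state.

    State:       x_t = x_{t-1} + kappa*(mu - x_{t-1})*dt + sigma*sqrt(dt)*eps
    Observation: y_t = x_t + obs_sd*eta   (e.g. a noisy log lumber price)
    Returns the filtered state means.
    """
    a = 1.0 - kappa * dt                 # state transition coefficient
    c = kappa * mu * dt                  # state intercept
    q = sigma ** 2 * dt                  # state noise variance
    r = obs_sd ** 2                      # observation noise variance

    x, p = y[0], 1.0                     # initialise at the first observation
    filtered = []
    for obs in y:
        # predict
        x_pred = a * x + c
        p_pred = a * a * p + q
        # update
        k = p_pred / (p_pred + r)        # Kalman gain
        x = x_pred + k * (obs - x_pred)
        p = (1.0 - k) * p_pred
        filtered.append(x)
    return np.array(filtered)

# Illustrative use on simulated weekly log-prices (hypothetical parameters).
rng = np.random.default_rng(2)
true_x = 4.5 + np.cumsum(rng.normal(scale=0.02, size=200))
obs = true_x + rng.normal(scale=0.05, size=200)
print(kalman_filter_ou(obs, kappa=1.5, mu=4.5, sigma=0.3, obs_sd=0.05)[:5])
```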

    Modeling Asymmetric Volatility Clusters Using Copulas and High Frequency Data

    Volatility clustering is a well-known stylized feature of financial asset returns. In this paper, we investigate the asymmetric pattern of volatility clustering on both the stock and foreign exchange rate markets. To this end, we employ copula-based semi-parametric univariate time-series models that accommodate the clusters of both large and small volatilities in the analysis. Using daily realized volatilities of individual company stocks, stock indices and foreign exchange rates constructed from high frequency data, we find that volatility clustering is strongly asymmetric in the sense that clusters of large volatilities tend to be much stronger than those of small volatilities. In addition, the asymmetric pattern of volatility clusters continues to be visible even when the clusters are allowed to change over time, and the volatility clusters themselves remain persistent even after forty days.
    Keywords: Volatility clustering, Copulas, Realized volatility, High-frequency data.
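
    The realized volatility series that feed the copula models are built from intraday returns in the usual way; a minimal sketch of that construction, using hypothetical 5-minute prices, is shown below. The copula-based time-series model itself is not reproduced.

```python
import numpy as np

def realized_volatility(intraday_prices: np.ndarray) -> float:
    """Daily realized volatility: square root of the sum of squared intraday log-returns."""
    log_returns = np.diff(np.log(intraday_prices))
    return float(np.sqrt(np.sum(log_returns ** 2)))

# Illustrative day of 5-minute prices (hypothetical random-walk data).
rng = np.random.default_rng(3)
prices = 100.0 * np.exp(np.cumsum(rng.normal(scale=0.0005, size=78)))
print(f"realized volatility: {realized_volatility(prices):.4f}")
```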

    Asymmetric Stochastic Conditional Duration Model -- A Mixture of Normals Approach

    This paper extends the stochastic conditional duration model by imposing mixtures of bivariate normal distributions on the innovations of the observation and latent equations of the duration process. This extension allows the model not only to capture the asymmetric behavior of the expected duration but also to easily accommodate a richer dependence structure between the two innovations. In addition, it proposes a novel estimation methodology based on the empirical characteristic function. A set of Monte Carlo experiments as well as empirical applications based on the IBM and Boeing transaction data are provided to assess and illustrate the performance of the proposed model and the estimation method. One main empirical finding in this paper is that there is a significantly positive "leverage effect" under both the contemporaneous and lagged inter-temporal dependence structures for the IBM and Boeing duration data.
    Keywords: Stochastic Conditional Duration model; Leverage Effect; Discrete Mixtures of Normal; Empirical Characteristic Function
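
    The estimation method rests on matching a model characteristic function to its empirical counterpart over a grid of points. The sketch below shows a generic weighted characteristic-function distance of that kind; the standard normal characteristic function stands in as a placeholder for the SCD mixture model, whose exact form is not given in the abstract.

```python
import numpy as np

def ecf(data: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Empirical characteristic function evaluated at the points in t."""
    return np.exp(1j * np.outer(t, data)).mean(axis=1)

def cf_distance(data: np.ndarray, model_cf, t: np.ndarray, weights: np.ndarray) -> float:
    """Weighted squared distance between empirical and model characteristic functions."""
    diff = ecf(data, t) - model_cf(t)
    return float(np.sum(weights * np.abs(diff) ** 2))

# Illustration: standard normal CF as a stand-in model, evaluated on simulated data.
rng = np.random.default_rng(4)
sample = rng.normal(size=5000)
t_grid = np.linspace(-5, 5, 101)
w = np.exp(-t_grid ** 2)                     # exponential weighting measure (one common choice)
normal_cf = lambda t: np.exp(-0.5 * t ** 2)  # CF of N(0, 1)
print(f"CF distance: {cf_distance(sample, normal_cf, t_grid, w):.6f}")
```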