
    Risk-based Selection of Forest Regeneration Methods

    A stochastic optimization model is developed to make a selection between the planting method and the seed-tree method, taking into account the uncertainty of, and the legal requirement on, the stocking level of the established seedlings in a given year after the regeneration action. Uncertainty is quantified as the variation of the mortality rate of planted seedlings for the planting method, and as the prediction error for the seed-tree method. The objective of the forest landowner is assumed to be maximizing the expected net present value (NPV). Numerical simulations show that the owner should prefer the seed-tree method to the planting method for a Scots pine stand. However, if the risk-free selection model is used, it overestimates the expected NPV by about 2%. Sensitivity analysis shows that a less restrictive forest act could improve the expected net present value for both the planting method and the seed-tree method. Sensitivity analysis also shows that decreasing the level of variation of the mortality rate (or prediction error) increases the expected NPV.
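
    As a rough, self-contained illustration of the kind of risk-based comparison described above, the Python sketch below draws a random mortality rate for the planting method and a random stocking prediction error for the seed-tree method, charges a supplementary cost whenever the simulated stocking falls below a legal minimum, and compares expected NPVs by Monte Carlo. All costs, revenues, thresholds and distributions are hypothetical placeholders, not the model's calibrated values.

        # Hypothetical Monte Carlo comparison of two regeneration methods by expected NPV.
        # All figures (costs, revenues, thresholds, distributions) are illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)
        N = 100_000                      # Monte Carlo draws
        r = 0.03                         # discount rate
        T = 80                           # years to final harvest
        legal_min = 1500                 # legally required seedlings per hectare

        def npv_planting():
            cost = 1200.0                                         # planting cost per ha
            mortality = rng.beta(2, 8, N)                         # uncertain mortality rate
            stocking = 2000 * (1 - mortality)                     # surviving seedlings per ha
            penalty = np.where(stocking < legal_min, 600.0, 0.0)  # supplementary planting
            harvest = 9000.0 / (1 + r) ** T                       # discounted final revenue
            return harvest - cost - penalty / (1 + r) ** 5

        def npv_seed_tree():
            cost = 300.0                                          # site preparation cost per ha
            pred_error = rng.normal(0.0, 400.0, N)                # stocking prediction error
            stocking = 1800 + pred_error
            penalty = np.where(stocking < legal_min, 600.0, 0.0)
            harvest = 8500.0 / (1 + r) ** T
            return harvest - cost - penalty / (1 + r) ** 5

        print("E[NPV] planting :", npv_planting().mean())
        print("E[NPV] seed tree:", npv_seed_tree().mean())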

    Risk-Adjusted Performance Attribution and Portfolio Optimisation under Tracking-Error Constraints for SIAS Canadian Equity Fund

    This thesis is inspired by the article "Risk-adjusted performance attribution and portfolio optimizations under tracking-error constraints" by Bertrand (2008), together with some hands-on experience gained through managing a portfolio worth over $10 million CAD of the Simon Fraser University endowment fund for one year. This paper explores the theories of attributing portfolio risk, in the form of tracking-error volatility, into asset allocation attributes and stock selection effects in accordance with the arithmetic performance attribution method. It then applies the same attribution method to calculate the risk-adjusted return (information ratio) for a normal portfolio and compares this to a TEV-optimal portfolio. We apply the information ratio and tracking-error variance model to the SIAS Canadian Equity portfolio, with approximately $4 million CAD in value, to test the following: if the SIAS Canadian Equity portfolio sector weights remain the same, what is the expected information ratio, and will it be improved by optimizing the sector weights according to the tracking-error variance frontier? We then test the robustness of our findings by changing the time period and performing a sensitivity analysis on the estimated expected returns. We also compare the results with those derived from mean-variance optimization, by applying mean-variance optimal weights and recalculating the expected information ratio. The findings are as follows: the TEV-optimized weights do improve the expected information ratio for a portfolio. This finding is further verified since it gives the same result with different time periods. The sensitivity analysis gives an interval within which the optimized sector weights will lie with 95% probability. Moreover, the comparison to the mean-variance-optimized portfolio shows that the tracking-error variance optimization gives less extreme results and is easier to implement, while maintaining a positive expected excess return compared to the benchmark.
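
    For concreteness, the sketch below computes the two quantities at the heart of the analysis, ex-ante tracking-error volatility and the information ratio, for a set of sector weights measured against a benchmark. The returns, weights and covariance estimates are randomly generated placeholders rather than the SIAS data, and no TEV-frontier optimization is performed here.

        # Illustrative computation of ex-ante tracking error and information ratio
        # for portfolio sector weights versus a benchmark (synthetic data, not SIAS data).
        import numpy as np

        rng = np.random.default_rng(1)
        n_sectors, n_periods = 10, 120
        sector_returns = rng.normal(0.006, 0.04, size=(n_periods, n_sectors))  # monthly returns

        w_bench = np.full(n_sectors, 1.0 / n_sectors)        # benchmark sector weights
        w_port = w_bench + rng.normal(0, 0.02, n_sectors)    # active tilts
        w_port /= w_port.sum()

        mu = sector_returns.mean(axis=0)                     # estimated expected sector returns
        cov = np.cov(sector_returns, rowvar=False)           # sector covariance matrix

        active = w_port - w_bench
        expected_excess = active @ mu                        # expected active return
        tev = np.sqrt(active @ cov @ active)                 # tracking-error volatility
        information_ratio = expected_excess / tev

        print(f"expected excess return: {expected_excess:.4%}")
        print(f"tracking error:         {tev:.4%}")
        print(f"information ratio:      {information_ratio:.3f}")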

    LISA Data Analysis using MCMC methods

    The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low-frequency gravitational wave signals. This presents a data analysis challenge that is very different from the one encountered in ground-based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50,000. Markov Chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analyses and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we super-cool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions.
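
    The abstract describes the generic MCMC machinery (likelihood evaluation, simulated annealing, super-cooling) rather than spelling out the F-statistic itself. The toy sketch below shows only a Metropolis-Hastings sampler with an annealing temperature on a two-parameter Gaussian likelihood, as an illustration of the approach, not the authors' LISA pipeline.

        # Toy Metropolis-Hastings sampler with simulated annealing on the likelihood,
        # illustrating the generic MCMC machinery described in the abstract
        # (the actual LISA analysis uses an F-statistic likelihood over ~50,000 parameters).
        import numpy as np

        rng = np.random.default_rng(2)
        data = rng.normal(loc=[1.0, -0.5], scale=0.3, size=(200, 2))  # synthetic "signal"

        def log_likelihood(theta):
            # Gaussian log-likelihood of the toy data given two location parameters.
            return -0.5 * np.sum((data - theta) ** 2) / 0.3**2

        def mcmc(n_steps=20_000, step=0.05):
            theta = np.zeros(2)
            logL = log_likelihood(theta)
            chain = np.empty((n_steps, 2))
            for i in range(n_steps):
                # Annealing temperature: start hot (flattened likelihood), cool to T = 1.
                T = max(1.0, 10.0 * (1 - i / (0.5 * n_steps)))
                proposal = theta + rng.normal(0, step, 2)
                logL_prop = log_likelihood(proposal)
                if np.log(rng.uniform()) < (logL_prop - logL) / T:  # heated acceptance rule
                    theta, logL = proposal, logL_prop
                chain[i] = theta
            return chain

        chain = mcmc()
        print("posterior mean estimate:", chain[10_000:].mean(axis=0))  # after burn-in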

    A Semiparametric Model Selection Criterion with Applications to the Marginal Structural Model

    Estimators for the parameter of interest in semiparametric models often depend on a guessed model for the nuisance parameter. The choice of the model for the nuisance parameter can affect both the finite sample bias and the efficiency of the resulting estimator of the parameter of interest. In this paper we propose a finite sample criterion based on cross-validation that can be used to select a nuisance parameter model from a list of candidate models. We show that the expected value of this criterion is minimized by the nuisance parameter model that yields the estimator of the parameter of interest with the smallest mean-squared error relative to the expected value of an initial consistent reference estimator. In a simulation study, we examine the performance of this criterion for selecting a model for a treatment mechanism in a marginal structural model (MSM) of point treatment data. For situations where all possible models cannot be evaluated, we outline a forward/backward model selection algorithm based on the cross-validation criterion proposed in this paper and show how it can be used to select models for multiple nuisance parameters. We evaluate the performance of this algorithm in a simulation study of the one-step estimator of the parameter of interest in an MSM where models for both a treatment mechanism and a conditional expectation of the response need to be selected. Finally, we apply the forward model selection algorithm to an MSM analysis of the relationship between boiled water use and gastrointestinal illness in HIV-positive men.
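
    As a schematic of the cross-validation idea, the sketch below scores candidate treatment-mechanism (propensity score) models by how far the IPTW effect estimate they produce on validation folds falls from an initial reference estimate on the full sample, and picks the model with the smallest average squared deviation. The data, candidate models and IPTW estimator are simplified stand-ins, not the paper's exact criterion or one-step estimator.

        # Simplified cross-validation selection of a treatment-mechanism (nuisance) model:
        # each candidate is scored by the squared deviation of its fold-wise IPTW estimate
        # from a reference estimate on the full sample (illustrative, synthetic data).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(3)
        n = 2000
        W = rng.normal(size=(n, 3))                       # baseline covariates
        p_true = 1 / (1 + np.exp(-(0.8 * W[:, 0] - 0.5 * W[:, 1])))
        A = rng.binomial(1, p_true)                       # treatment indicator
        Y = 1.0 * A + W[:, 0] + rng.normal(size=n)        # outcome, true effect = 1.0

        def iptw_estimate(A, Y, p_hat):
            # Inverse-probability-of-treatment-weighted estimate of the treatment effect.
            return np.mean(A * Y / p_hat) - np.mean((1 - A) * Y / (1 - p_hat))

        candidates = {"W0 only": [0], "W0+W1": [0, 1], "all": [0, 1, 2]}
        full_fit = LogisticRegression().fit(W, A)
        reference = iptw_estimate(A, Y, np.clip(full_fit.predict_proba(W)[:, 1], 0.01, 0.99))

        scores = {}
        for name, cols in candidates.items():
            errs = []
            for train, valid in KFold(n_splits=5, shuffle=True, random_state=0).split(W):
                fit = LogisticRegression().fit(W[train][:, cols], A[train])
                p_hat = np.clip(fit.predict_proba(W[valid][:, cols])[:, 1], 0.01, 0.99)
                errs.append((iptw_estimate(A[valid], Y[valid], p_hat) - reference) ** 2)
            scores[name] = np.mean(errs)

        print("selected model:", min(scores, key=scores.get), scores)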

    A generalized method for the transient analysis of Markov models of fault-tolerant systems with deferred repair

    Randomization is an attractive alternative for the transient analysis of continuous-time Markov models. The main advantages of the method are numerical stability, well-controlled computation error, and the ability to specify the computation error in advance. However, the fact that the method can be computationally expensive limits its applicability. Recently, a variant of the (standard) randomization method, called split regenerative randomization, has been proposed for the efficient analysis of reliability-like models of fault-tolerant systems with deferred repair. In this paper, we generalize that method so that it covers more general reward measures: the expected transient reward rate and the expected averaged reward rate. The generalized method has the same good properties as the standard randomization method and, for large models and large values of the time t at which the measure has to be computed, can be significantly less expensive. The method requires the selection of a subset of states and a regenerative state satisfying some conditions. For a class of continuous-time Markov models, class C'_2, including typical failure/repair reliability models with exponential failure and repair time distributions and deferred repair, natural selections for the subset of states and the regenerative state exist, and results are available assessing approximately the computational cost of the method in terms of "visible" model characteristics. Using a large class C'_2 example model, we illustrate the performance of the method and show that it can be significantly faster than previously proposed randomization-based methods.
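
    For context, the sketch below implements plain (standard) randomization, also known as uniformization, for the expected transient reward rate of a small continuous-time Markov model, truncating the Poisson series once the accumulated weight reaches 1 - eps. The split regenerative variant and its generalization are not reproduced; the generator matrix, reward vector and tolerance are made-up examples.

        # Standard randomization (uniformization) for the expected transient reward rate
        # of a small CTMC; the paper's split regenerative variant is not shown here.
        # Generator matrix Q and reward vector are illustrative.
        import numpy as np

        def expected_reward_rate(Q, pi0, reward, t, eps=1e-10):
            """E[reward rate at time t] via uniformization with truncation error eps."""
            Lambda = np.max(-np.diag(Q)) * 1.001       # uniformization rate >= max exit rate
            P = np.eye(Q.shape[0]) + Q / Lambda        # DTMC of the randomized chain
            v = pi0.copy()                             # pi0 @ P^k, updated iteratively
            weight = np.exp(-Lambda * t)               # Poisson(Lambda*t) pmf at k = 0
            acc, total, k = weight * v, weight, 0
            while total < 1.0 - eps:                   # truncate the Poisson series
                k += 1
                v = v @ P
                weight *= Lambda * t / k
                acc += weight * v
                total += weight
            return float(acc @ reward)

        # 3-state failure/repair example: 0 = up, 1 = degraded (deferred repair), 2 = down.
        Q = np.array([[-0.02,  0.02,  0.00],
                      [ 0.00, -0.01,  0.01],
                      [ 0.50,  0.00, -0.50]])
        pi0 = np.array([1.0, 0.0, 0.0])
        reward = np.array([1.0, 1.0, 0.0])             # reward 1 while operational
        print("expected reward rate at t=100:", expected_reward_rate(Q, pi0, reward, 100.0))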

    Identification of Heat Exchanger by Neural Network Autoregressive with Exogenous Input Model

    This chapter presents the performance of the neural network autoregressive with exogenous input (NNARX) model structure and evaluates which training data provide a robust model on a fresh data set. The neural network type used is a backpropagation neural network, also known as a multilayer perceptron (MLP). The system under test is a heat exchanger, QAD Model BDT921. The real input-output data collected from the heat exchanger are used for comparison with the model structure. The model was estimated by means of the prediction error method, with the Levenberg-Marquardt algorithm used to train the neural networks. It is expected that training data covering the full operating condition will be the optimum training data. For each data set, the model is randomly selected, and the selection is based on the ARX structure. The model was validated by residual analysis and model fit, and the validation results are presented and discussed. The simulation results show that neural network system identification is able to identify a good model of the heat exchanger.
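
    A minimal sketch of the NNARX idea: a one-hidden-layer MLP fitted to lagged input-output regressors as a one-step-ahead predictor. The data are synthetic, standing in for the QAD Model BDT921 measurements, and plain batch gradient descent replaces the Levenberg-Marquardt training used in the chapter; the fit metric mirrors the normalized model-fit percentage commonly reported in identification tools.

        # Minimal NNARX-style one-step-ahead predictor: one-hidden-layer MLP on lagged
        # inputs/outputs, trained with plain gradient descent (stand-in for the chapter's
        # Levenberg-Marquardt training); the data here are synthetic, not BDT921 logs.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 1000
        u = rng.uniform(-1, 1, n)                         # excitation input
        y = np.zeros(n)
        for t in range(2, n):                             # unknown "plant" to identify
            y[t] = 0.6 * y[t-1] - 0.1 * y[t-2] + 0.5 * u[t-1] + 0.1 * np.tanh(u[t-2])

        # ARX-style regressors: [y(t-1), y(t-2), u(t-1), u(t-2)] -> target y(t)
        X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
        T = y[2:]

        h = 8                                             # hidden units
        W1, b1 = rng.normal(0, 0.5, (4, h)), np.zeros(h)
        W2, b2 = rng.normal(0, 0.5, h), 0.0
        lr = 0.05
        for epoch in range(2000):                         # batch gradient descent on MSE
            Z = np.tanh(X @ W1 + b1)
            pred = Z @ W2 + b2
            err = pred - T
            gW2 = Z.T @ err / len(T); gb2 = err.mean()
            dZ = np.outer(err, W2) * (1 - Z**2)
            gW1 = X.T @ dZ / len(T); gb1 = dZ.mean(axis=0)
            W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

        # Normalized one-step-ahead model fit (100% = perfect prediction).
        fit = 100 * (1 - np.linalg.norm(T - pred) / np.linalg.norm(T - T.mean()))
        print(f"model fit: {fit:.1f}%")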