Linear state models for volatility estimation and prediction
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. This thesis concerns the calibration and estimation of linear state models for forecasting stock return volatility. In the first two chapters I present aspects of financial modelling theory and practice that are of particular relevance to the theme of the present work. In addition, I review the literature concerning these aspects, with a particular emphasis on dynamic volatility models. These chapters set the scene and lay the foundations for the subsequent empirical work, and are a contribution in themselves. The models employed in the application chapters 4, 5 and 6 have the state-space structure; alternatively, they are known as unobserved components models. In the literature these models have been applied to the estimation of volatility for both high-frequency and low-frequency data. In contrast to what has been carried out in the literature, I propose the use of these models with Gaussian components, and suggest implementing them on high-frequency data for short- and medium-term forecasting. I then demonstrate the calibration of these models and compare medium-term forecasting performance across different forecasting methods and model variations, as well as against GARCH and constant-volatility models. I then introduce implied volatility measurements, leading to two-state models, and verify whether this derivative-based information improves forecasting performance. In chapter 6 I compare the specification and forecasting performance of different unobserved components models. The appendices contain the extensive workings of the standard error calculations for the parameter estimates.
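The unobserved components models described above can be illustrated with a minimal example. The sketch below is not taken from the thesis: it runs a Kalman filter on a local-level (random-walk) state-space model for a latent log-volatility series, and the data, the parameter values q and r, and the volatility proxy are all illustrative assumptions.

```python
import numpy as np

# Minimal Kalman filter for a local-level (random-walk) state-space model:
#   state:       h_t = h_{t-1} + w_t,  w_t ~ N(0, q)
#   observation: y_t = h_t + v_t,      v_t ~ N(0, r)
# Here y_t stands in for a volatility proxy (e.g. log realized variance);
# all numerical values below are illustrative.

def kalman_filter(y, q, r, h0=0.0, p0=1.0):
    h, p = h0, p0
    filtered = []
    for obs in y:
        p = p + q                 # predict: state variance grows by q
        k = p / (p + r)           # Kalman gain
        h = h + k * (obs - h)     # update state estimate with new observation
        p = (1.0 - k) * p         # update state variance
        filtered.append(h)
    return np.array(filtered)

rng = np.random.default_rng(0)
true_h = np.cumsum(rng.normal(0, 0.1, 200))   # synthetic latent log-volatility path
y = true_h + rng.normal(0, 0.5, 200)          # noisy proxy observations
est = kalman_filter(y, q=0.01, r=0.25)
# the filtered path tracks the latent state more closely than the raw proxy
print(np.mean((est - true_h) ** 2), np.mean((y - true_h) ** 2))
```

With the filter parameters matched to the simulation variances, the filtered mean-squared error is well below that of the raw observations, which is the sense in which such models "estimate" volatility from a noisy proxy.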
Safety system design optimisation
This thesis investigates the efficiency of a design optimisation scheme that is
appropriate for systems which require a high likelihood of functioning on demand.
Traditional approaches to the design of safety-critical systems follow the preliminary
design, analysis, appraisal and redesign stages until what is regarded as an acceptable
design is achieved. For safety systems whose failure could result in loss of life, it is
imperative that the best use is made of the available resources and that a system which is
optimal, not just adequate, is produced.
The objective of the design optimisation problem is to minimise system unavailability
through manipulation of the design variables, such that limitations placed on them by
constraints are not violated.
Commonly, with a mathematical optimisation problem, there will be an explicit
objective function which defines how the characteristic to be minimised is related to
the variables. For the safety system problem, an explicit objective function
cannot be formulated, and so system performance is assessed using the fault tree
method. Through the use of house events, a single fault tree is constructed to represent the
failure causes of every potential design, avoiding the time-consuming task of
constructing a separate fault tree for each design investigated during the optimisation
procedure. Once the fault tree has been constructed for the design in question, it is
converted to a binary decision diagram (BDD) for analysis.
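As a rough illustration of the fault tree to BDD route described above, the sketch below builds a (non-shared) BDD for a toy fault tree by Shannon expansion, uses a house event to switch a design variant into the model, and computes the top-event probability from the BDD. The tree structure and all event probabilities are hypothetical, not from the thesis.

```python
# A BDD node is a tuple (var, low, high); terminals are booleans.
# No node sharing is implemented, for clarity.

def build_bdd(f, n, i=0, env=()):
    """Shannon-expand boolean function f over ordered variables x_i..x_{n-1}."""
    if i == n:
        return f(env)
    low = build_bdd(f, n, i + 1, env + (False,))
    high = build_bdd(f, n, i + 1, env + (True,))
    return low if low == high else (i, low, high)   # reduce if branches agree

def prob(node, p):
    """Top-event probability from the BDD, events independent with P(x_i) = p[i]."""
    if node is True:
        return 1.0
    if node is False:
        return 0.0
    var, low, high = node
    return (1 - p[var]) * prob(low, p) + p[var] * prob(high, p)

# Hypothetical fault tree: TOP = A OR (H AND B AND C), where H is a house
# event that switches an extra failure branch (B AND C) into the model,
# representing one design variant.
def top(events):
    h, a, b, c = events
    return a or (h and b and c)

bdd = build_bdd(top, 4)
p = [1.0, 0.01, 0.02, 0.03]   # house event H "on", then P(A), P(B), P(C)
print(prob(bdd, p))           # system unavailability for this variant
```

Setting the house-event probability to 0 or 1 selects the design variant being analysed, so a single BDD serves every design considered during the optimisation.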
A genetic algorithm is first employed to perform the system optimisation; the
practicality of this approach is demonstrated initially through application to a
High-Integrity Protection System (HIPS) and subsequently to a more complex
Firewater Deluge System (FDS).
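A minimal genetic algorithm of the kind described can be sketched as follows. The unavailability, cost, and penalty functions here are hypothetical stand-ins (in the thesis setting, fitness would come from the fault tree / BDD analysis), and all GA parameters are illustrative.

```python
import random

random.seed(1)
N_BITS = 10   # bit string encoding hypothetical design choices

def unavailability(bits):
    # hypothetical: more active design features -> lower unavailability
    return 0.1 / (1 + sum(bits))

def cost(bits):
    return sum(bits)   # hypothetical resource usage

def fitness(bits, budget=6):
    penalty = max(0, cost(bits) - budget)   # penalise constraint violation
    return unavailability(bits) + 0.05 * penalty

def evolve(pop_size=30, gens=50, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(pop_size)]
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)            # tournament selection
            parent1 = min(a, b, key=fitness)
            a, b = random.sample(pop, 2)
            parent2 = min(a, b, key=fitness)
            cut = random.randrange(1, N_BITS)       # one-point crossover
            child = parent1[:cut] + parent2[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # mutation
            new.append(child)
        pop = new
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The penalty term is one common way to fold the resource constraint into the fitness; the thesis's actual constraint handling may differ.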
An alternative optimisation scheme achieves the final design specification by solving
a sequence of optimisation problems. Each of these problems is defined by
assuming some form of the objective function and specifying a sub-region of the
design space over which this function will be representative of the system
unavailability.
The thesis concludes with attention to various optimisation techniques which possess
features able to address difficulties in the optimisation of safety-critical systems.
Specifically, consideration is given to the use of a statistically designed experiment
and a logical search approach.
Uncertainty analysis in a shipboard integrated power system using multi-element polynomial chaos
Thesis (Ph.D. in Ocean Engineering)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2007. Errata, dated Oct. 30, 2007, inserted between pages 3 and 4 of text. Includes bibliographical references (p. 301-307).

The integrated power system has become increasingly important in electric ships because of its ability to supply high-power equipment, for example electromagnetic rail guns and advanced radar systems. Several parameters of the shipboard power system are uncertain, owing to measurement difficulties, temperature dependence, and random fluctuations of the environment. To date, there have been few, if any, studies that account for these stochastic effects in the large and complex shipboard power system from either an analytical or a numerical perspective. Furthermore, all insensitive parameters must be identified so that stochastic analysis with the reduced parameter dimension can accelerate the process. This thesis is therefore focused on two main issues for the shipboard power system: stochastic analysis and sensitivity analysis.

Stochastic analysis of large and complex nonlinear systems with non-Gaussian random variables or processes, in their initial states or parameters, is analytically intractable and very time-consuming with the brute-force Monte Carlo method. Numerical stochastic solutions of these systems can instead be obtained efficiently by the generalized Polynomial Chaos (gPC) and the Probabilistic Collocation Method (PCM). In the case of long-time integration and discontinuities in the stochastic solutions, the multi-element technique of PCM, which refines the solution in random space, can significantly improve the solutions' accuracy. Furthermore, a hybrid gPC+PCM is developed to extend the ability of gPC to handle systems with nonlinear non-polynomial functions.

We then systematically establish the convergence rate and compare the convergence performance of all the numerical stochastic algorithms on various systems with both continuous and discontinuous solutions, as a function of the random dimension and of the parameters governing each algorithm's accuracy. To identify the most significant parameters in large-scale complex systems, we propose new sensitivity analysis techniques - the Monte Carlo Sampling, Collocation, Variance, and Inverse Variance methods - for static functions, and show that they agree well with the Morris method, one of the existing sensitivity analysis techniques for functions with large input dimensions. In addition, we extend the Sampling, Collocation, Variance, and Morris methods to study both parameter sensitivities and parameter interactions in ordinary differential equations. For each approach, the strengths and limitations of the sensitivity ranking accuracy and the convergence performance are emphasized. The convergence rates of the Collocation and Variance methods are more than an order of magnitude faster than those of the Morris and Sampling methods for low and medium parameter dimensions. Finally, we successfully apply both the stochastic and the sensitivity analysis techniques to the integrated shipboard power system, with both open- and closed-loop control of the propulsion system, to study the propagation of uncertainties and to rank parameters in order of their importance.

by Pradya Prempraneerach. Ph.D. in Ocean Engineering.
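The collocation idea behind PCM can be illustrated in one random dimension: propagate a standard Gaussian uncertain input through a nonlinear model using Gauss-Hermite quadrature nodes, and compare with brute-force Monte Carlo. The model g and all numerical values below are illustrative assumptions, not the shipboard system.

```python
import numpy as np

def g(z):
    return np.exp(0.5 * z)   # hypothetical nonlinear response to uncertain input z

# Collocation: nodes/weights of probabilists' Hermite quadrature (weight e^{-z^2/2})
nodes, weights = np.polynomial.hermite_e.hermegauss(10)
weights = weights / np.sqrt(2 * np.pi)        # normalise to the N(0,1) density
mean_pcm = np.sum(weights * g(nodes))         # E[g(Z)] from 10 model evaluations
var_pcm = np.sum(weights * g(nodes) ** 2) - mean_pcm ** 2

# Brute-force Monte Carlo comparison: 100,000 model evaluations
rng = np.random.default_rng(0)
samples = g(rng.standard_normal(100_000))
print(mean_pcm, samples.mean())
```

For this smooth model the 10-node collocation estimate is accurate to many digits (the exact mean is exp(1/8)), whereas Monte Carlo still carries sampling error after 100,000 evaluations; this efficiency gap is what motivates gPC/PCM for expensive system models.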
Optimal approximation of SDE's with additive fractional noise
Abstract: We study pathwise approximation of scalar stochastic differential equations with additive fractional Brownian noise of Hurst parameter H > 1/2, considering the mean-square L2-error criterion. By means of the Malliavin calculus we derive the exact rate of convergence of the Euler scheme, also for non-equidistant discretizations. Moreover, we establish a sharp lower error bound that holds for arbitrary methods which use a fixed number of bounded linear functionals of the driving fractional Brownian motion. The Euler scheme based on a discretization which reflects the local smoothness properties of the equation matches this lower error bound up to the factor 1.39.
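A minimal version of the Euler scheme for an SDE with additive fractional noise can be sketched as follows. The drift, the Hurst parameter, and the (equidistant) grid are illustrative choices, and the fBm path is sampled exactly via a Cholesky factorisation of its covariance, which is adequate for small grids but O(n^3) in general.

```python
import numpy as np

# Euler scheme for a scalar SDE with additive fractional noise,
#   dX_t = a(X_t) dt + dB^H_t,   H > 1/2,
# on an equidistant grid 0 = t_0 < t_1 < ... < t_n = T.

def fbm_path(n, T, H, rng):
    """Exact sample of B^H at t_1..t_n via Cholesky of the fBm covariance
    E[B_s B_t] = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2."""
    t = np.linspace(T / n, T, n)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def euler(a, x0, n, T, H, rng):
    dt = T / n
    B = np.concatenate(([0.0], fbm_path(n, T, H, rng)))   # B^H at t_0..t_n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        # drift step plus the exact fBm increment over [t_k, t_{k+1}]
        x[k + 1] = x[k] + a(x[k]) * dt + (B[k + 1] - B[k])
    return x

rng = np.random.default_rng(0)
path = euler(lambda x: -x, x0=1.0, n=256, T=1.0, H=0.7, rng=rng)
print(path[-1])
```

The non-equidistant discretizations discussed in the abstract would replace the uniform grid with one concentrated where the solution is locally rougher; the recursion itself is unchanged.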