17 research outputs found

    An Essay on Currency Risks and Portfolio Strategies

    The carry trade is a zero net investment strategy that borrows in low-yielding currencies and subsequently invests in high-yielding currencies. It has been identified as a highly profitable FX strategy delivering significant excess returns with high Sharpe ratios. This work shows that these excess returns are compensation for bearing FX variance and crash risk. In addition, factor risks tied to changes in foreign money, changes in foreign inflation, changes in a newly developed Carry Trade Activity Index, and changes in the VIX index, as a proxy for global risk aversion, make up the carry trade's risk anatomy. Furthermore, this study investigates an efficient parametric portfolio policy model to improve the return distribution of the currency carry trade investment strategy. This is done by modeling the optimal weight as a function of the carry trade's risk characteristics. In particular, when using global FX option-implied variance risk, as well as global consumer price inflation and commodity prices, as background risk factors, the model delivers highly efficient out-of-sample results with annualized mean returns of up to 8.4% from 2007 to 2015, accompanied by a low standard deviation and positively skewed returns, leading to Sharpe ratios around unity after transaction costs. The last part examines the relationship between a currency option's implied skewness and its future realized skewness, where the difference is known as the skewness risk premium (SRP). The SRP indicates whether investors pay a premium to be insured against future crash risk. Past investigations of implied and realized skewness in currency markets showed that both measures are loosely connected or even exhibit a negative relationship that cannot be rationalized by no-arbitrage arguments. It is shown that this phenomenon can be explained by investors' position-induced demand pressure and FX momentum effects. To exploit this disconnection of skewness, a simple skew swap trading strategy proposed by Schneider and Trojani (2015) has been set up. The resulting skew swap returns are relatively high, but the return distribution is extremely fat-tailed. To appropriately compare different skew swap strategy returns, this work proposes a Higher Moment Sharpe Ratio that takes higher moments into account.
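    The parametric portfolio policy mentioned above models the position weight directly as a function of the trade's risk characteristics. The sketch below illustrates that general idea in the spirit of Brandt, Santa-Clara and Valkanov (2009); the CRRA objective, the toy data, and the choice of characteristics are illustrative assumptions, not the thesis's actual specification or data.

```python
# Minimal sketch of a parametric portfolio policy: the weight on each currency
# position is an affine function of its standardized risk characteristics, and
# the coefficients are chosen to maximize average in-sample CRRA utility.
import numpy as np
from scipy.optimize import minimize

def policy_weights(theta, X, base_weights):
    """Weights w_it = base_it + theta' x_it / N for each position i at time t."""
    T, N, K = X.shape
    return base_weights + X @ theta / N

def negative_avg_crra_utility(theta, X, R, base_weights, gamma=5.0):
    """Negative average CRRA utility of the resulting portfolio return."""
    W = policy_weights(theta, X, base_weights)          # (T, N)
    port_ret = 1.0 + (W * R).sum(axis=1)                # gross portfolio return
    port_ret = np.clip(port_ret, 1e-6, None)            # guard against ruin
    util = port_ret ** (1.0 - gamma) / (1.0 - gamma)
    return -util.mean()

# Toy data: T months, N currency positions, K characteristics (e.g. implied
# variance, inflation, commodity prices -- placeholders here).
rng = np.random.default_rng(0)
T, N, K = 120, 10, 3
X = rng.standard_normal((T, N, K))                      # standardized characteristics
R = 0.002 + 0.03 * rng.standard_normal((T, N))          # excess returns
base = np.full((T, N), 1.0 / N)                         # equal-weight benchmark

res = minimize(negative_avg_crra_utility, x0=np.zeros(K),
               args=(X, R, base), method="Nelder-Mead")
print("estimated policy coefficients:", res.x)
```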

    Optimal Trading of a Storable Commodity via Forward Markets

    A commodity market participant trading via her inventory has access to both spot and forward markets. To liquidate her inventory, she can sell at the spot price, take a short forward position, or do a combination of both. A trade is proposed in which there is always a hedging forward contract, which can be considered a dynamic cash-and-carry arbitrage. The trader can adjust the maturity of the forward contract dynamically until the inventory is depleted or a time constraint is reached. In the first setup, the storage contract (to carry inventory) is assumed to have a constant cost and a flexible duration. The risk and return characteristics of an Approximate Dynamic Programming (ADP) solution and a Forward Dynamic Optimization solution are compared. The trade is contrasted with an optimal spot sale, among other alternative liquidation strategies. Independently of the underlying stochastic forward price model, it is proved and verified numerically that a partial sale strategy is not optimal. The optimally selected forward maturities are limited to the subset comprising the immediate, next, and last timesteps. Under a more realistic storage contract, which assumes a stochastic cost and a fixed duration, a new ADP approach is developed. The optimal policy shows that the tanker rental decision is accompanied by a buy order, since the loss from an empty tanker exceeds the gain from renting it cheaply yet early. Given the non-adjustable duration of the rental contract, a longer contract generates a higher value by benefiting from a tanker refill option.
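    The sketch below illustrates the static cash-and-carry comparison that underlies the proposed trade: for each available forward maturity, compare the forward price with the cost of carrying inventory and pick the maturity with the largest discounted carry profit. The constant storage cost, the flat interest rate, and the single-decision setting are simplifying assumptions; the thesis solves the full dynamic problem with ADP.

```python
# Minimal sketch of a one-shot cash-and-carry maturity choice under assumed
# constant financing (r) and storage costs, not the thesis's dynamic model.
import numpy as np

def best_forward_maturity(spot, forwards, maturities, r=0.03, storage=0.01):
    """forwards[i] is the forward price for delivery at maturities[i] (in years)."""
    maturities = np.asarray(maturities, dtype=float)
    forwards = np.asarray(forwards, dtype=float)
    carry_cost = spot * np.exp((r + storage) * maturities)      # cost of holding to delivery
    profit = (forwards - carry_cost) * np.exp(-r * maturities)  # discounted carry profit
    best = int(np.argmax(profit))
    return maturities[best], profit[best]

tau, pnl = best_forward_maturity(spot=70.0,
                                 forwards=[70.9, 71.8, 73.5],
                                 maturities=[1/12, 2/12, 6/12])
print(f"hedge with the {tau:.2f}-year forward, discounted carry profit {pnl:.2f}")
```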

    A study in the financial valuation of a topping oil refinery

    Oil refineries underpin modern-day economics, finance and engineering; without their refined products the world would stand still: vehicles would have no petrol, planes would be grounded without kerosene, and homes would go unheated without heating oil. In this thesis I study the refinery as a financial asset; it is not too dissimilar to a chemical plant in this respect. There are a number of reasons for this research: over recent years there have been legal disputes based on a refiner's value, investors and entrepreneurs are interested in purchasing refineries, and the research in this arena is sparse. In this thesis I utilise knowledge and techniques within finance, optimisation, stochastic mathematics and commodities to build programs that obtain a financial value for an oil refinery. In chapter one I introduce the background of crude oil and the significance of the refinery in the oil value chain. In chapter two I construct a traditional discounted cash flow valuation often applied within practical finance. In chapter three I program an extensive piecewise non-linear optimisation over the entire state space, leveraging a simulation of the refined products using a set of single-factor Schwartz (1997) stochastic equations often applied to commodities. In chapter four I program an optimisation using an approximation on crack spread option data with the aim of reducing the computation time of the solution found in chapter three; this is achieved by utilising a two-factor Hull & White sub-trinomial tree-based numerical scheme; see the Hull & White (1994) articles I & II for a thorough description. I obtain realistic and accurate numbers for a topping oil refinery using financial market contracts and other real data for the Vadinar refinery based in Gujarat, India.
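    The single-factor Schwartz (1997) model named in chapter three takes the log spot price as an Ornstein-Uhlenbeck process, which can be simulated with its exact discretization, as in the sketch below. The parameter values and time grid are illustrative assumptions, not calibrated to the refined-product or Vadinar data used in the thesis.

```python
# Minimal sketch of simulating a single-factor Schwartz (1997) mean-reverting
# spot price: dS = kappa*(mu - ln S)*S dt + sigma*S dW, so X = ln S is an
# Ornstein-Uhlenbeck process with long-run mean alpha = mu - sigma^2/(2*kappa).
import numpy as np

def simulate_schwartz_one_factor(s0, kappa, mu, sigma, dt, n_steps, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    alpha = mu - sigma**2 / (2.0 * kappa)          # long-run mean of ln S
    x = np.full(n_paths, np.log(s0))
    decay = np.exp(-kappa * dt)
    vol = sigma * np.sqrt((1.0 - np.exp(-2.0 * kappa * dt)) / (2.0 * kappa))
    paths = np.empty((n_steps + 1, n_paths))
    paths[0] = s0
    for t in range(1, n_steps + 1):                # exact OU step for the log price
        x = x * decay + alpha * (1.0 - decay) + vol * rng.standard_normal(n_paths)
        paths[t] = np.exp(x)
    return paths

paths = simulate_schwartz_one_factor(s0=80.0, kappa=1.5, mu=np.log(85.0),
                                     sigma=0.35, dt=1/52, n_steps=104, n_paths=5000)
print("mean simulated spot after two years:", paths[-1].mean().round(2))
```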

    Exact Bayesian inference for diffusion-based models

    We develop methods to carry out Bayesian inference for diffusion-based continuous time models, formulated as stochastic differential equations (SDEs). The transition density implied by such SDEs is intractable, which complicates likelihood-based inference from discrete observations. In spite of this obstacle, we seek methods that are exact in the sense that they target the correct posterior distribution, in contrast to prevailing discretization approaches. We begin by discussing the main approaches to likelihood-based inference under intractability, and their application to diffusion-based models. This discussion is followed by a presentation of the fundamental inference algorithms for ordinary Itô diffusion inference, of computational difficulties they meet in practice, and of recent improvements motivated by our research on more complex diffusion-based models. These include Markov switching diffusions and stochastic volatility models, where a latent continuous time process modifies the dynamics of an observable diffusion process. We follow up by developing Markov chain Monte Carlo (MCMC) and Monte Carlo Expectation Maximization (MCEM) inference algorithms for the more complex settings, and evaluate them systematically. We close with a discussion of practical hurdles to adoption of exact algorithms, and propose solutions to overcome those hurdles.
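    For context, the sketch below shows the discretization-based baseline that such exact methods are contrasted with: a random-walk Metropolis sampler whose likelihood replaces the intractable transition density with the Euler-Maruyama Gaussian approximation. The OU-type drift, fixed diffusion coefficient, and flat prior are illustrative assumptions; the exact algorithms developed in the thesis avoid this discretization bias altogether.

```python
# Euler-Maruyama approximation: p(x_{k+1} | x_k) ~ N(x_k + b(x_k; theta)*dt, sigma^2*dt).
import numpy as np

def euler_log_likelihood(theta, x, dt, sigma=1.0):
    """Approximate log-likelihood for dX = theta*(1 - X) dt + sigma dW."""
    drift = theta * (1.0 - x[:-1])
    increments = x[1:] - x[:-1]
    var = sigma**2 * dt
    return -0.5 * np.sum((increments - drift * dt) ** 2 / var + np.log(2 * np.pi * var))

def metropolis(x, dt, n_iter=5000, step=0.2, seed=1):
    rng = np.random.default_rng(seed)
    theta, ll = 1.0, euler_log_likelihood(1.0, x, dt)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal()
        ll_prop = euler_log_likelihood(prop, x, dt)
        if np.log(rng.uniform()) < ll_prop - ll:   # flat prior: likelihood ratio only
            theta, ll = prop, ll_prop
        draws[i] = theta
    return draws

# Toy data: simulate the same OU-type process on a fine grid, then run the sampler.
rng = np.random.default_rng(0)
dt, n, true_theta = 0.01, 2000, 2.5
x = np.empty(n); x[0] = 0.0
for k in range(n - 1):
    x[k + 1] = x[k] + true_theta * (1.0 - x[k]) * dt + np.sqrt(dt) * rng.standard_normal()
print("posterior mean of theta:", metropolis(x, dt)[1000:].mean().round(2))
```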

    Enhanced Machine Learning Engine Engineering Using Innovative Blending, Tuning, and Feature Optimization

    Motivated by Ensemble Machine Learning (ML) techniques, this thesis contributes to addressing performance, consistency, and integrity issues such as overfitting, underfitting, predictive errors, the accuracy paradox, and poor generalization in ML models. Ensemble ML methods have shown promising outcomes when a single algorithm fails to approximate the true prediction function. Using meta-learning, a super learner is engineered by combining weak learners. Generally, several Supervised Learning (SL) methods are evaluated to find the best fit to the underlying data and predictive analytics (i.e., the relevance of the "No Free Lunch" theorem). This thesis addresses three main challenges: i) determining the optimum blend of algorithms/methods for enhanced SL ensemble models; ii) engineering the selection and grouping of features that aggregate to the highest possible predictive and non-redundant value in the training data set; and iii) addressing performance integrity issues such as the accuracy paradox. To this end, an enhanced Machine Learning Engine Engineering (eMLEE) is constructed via built-in parallel processing and specially designed novel constructs for error and gain functions that optimally score the classifier elements for an improved training experience and validation procedure. eMLEE, based on stochastic thinking, is built on i) one centralized unit, the Logical Table unit (LT); ii) two explicit units, enhanced Algorithm Blend and Tuning (eABT) and enhanced Feature Engineering and Selection (eFES); and iii) two implicit constructs, enhanced Weighted Performance Metric (eWPM) and enhanced Cross Validation and Split (eCVS). It thus proposes an enhancement to the internals of SL ensemble approaches. Motivated by nature-inspired metaheuristic algorithms (such as GA, PSO, and ACO), the feedback mechanisms are improved by introducing a specialized function, Learning from the Mistakes (LFM), to mimic the human learning experience. LFM has shown significant improvement in refining predictive accuracy on the testing data by using the computational processing of wrong predictions to increase the weighting scores of the weak classifiers and features. LFM further ensures that the training layer experiences maximum mistakes (i.e., errors) for optimum tuning; with this designed into the engine, stochastic modeling/thinking is implicitly implemented. Motivated by the OOP paradigm in high-level programming, eMLEE provides an interface infrastructure using LT objects for the main units (i.e., Unit A and Unit B) to use functions on demand during the classifier learning process. This approach also allows the eMLEE API to be used by outside real-world applications for predictive modeling, to further customize the classifier learning process and the tuning-element trade-offs, subject to the data type and the end model in mind. Motivated by higher-dimensional (i.e., 3D) processing and analysis for improved analytics and learning mechanics, eMLEE incorporates 3D modeling of fitness metrics, such as x for overfit, y for underfit, and z for optimum fit, and then creates logical cubes using LT handles to locate the optimum space during the ensemble process. This approach ensures fine tuning of the ensemble learning process with an improved accuracy metric.
To support the construction and implementation of the proposed scheme, mathematical models (i.e., definitions, lemmas, rules, and procedures), the governing algorithms' definitions (and pseudo-code), and the necessary illustrations (to help elaborate the concepts) are provided. Diverse data sets are used to improve the generalization of the engine and to tune the underlying constructs during the development and testing phases. To show the practicality and stability of the proposed scheme, several results are presented with a comprehensive analysis of the outcomes for the engine's metrics (i.e., via integrity, corroboration, and quantification). Two approaches are followed to corroborate the engine: i) testing the inner layers (i.e., the internal constructs Unit-A, Unit-B, and C-Unit) to stabilize and test the fundamentals, and ii) testing the outer layer (i.e., the engine as a black box) against standard measurement metrics for real-world endorsement. Comparisons with various existing techniques in the state of the art are also reported. Based on the extensive literature review, the research undertaken, the investigative approach, the engine construction and tuning, the validation approach, the experimental study, and the visualization of the results, eMLEE is found to outperform existing techniques most of the time in terms of classifier learning, generalization, metrics trade-off, optimum fitness, feature engineering, and validation.
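    For orientation, the sketch below shows the baseline idea that eMLEE enhances: a stacking "super learner" that blends several weak base learners through a meta-learner trained on out-of-fold predictions. This is the generic scikit-learn stacking pattern under assumed toy data, not the eMLEE engine itself (LT, eABT, eFES, eWPM, and eCVS are internal constructs of the thesis and are not reproduced here).

```python
# Generic stacking ensemble: base learners feed a meta-learner (super learner)
# via cross-validated out-of-fold predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

base_learners = [
    ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
    ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=15)),
]
super_learner = StackingClassifier(estimators=base_learners,
                                   final_estimator=LogisticRegression(),
                                   cv=5)  # out-of-fold predictions feed the meta-learner
super_learner.fit(X_tr, y_tr)
print("held-out accuracy:", round(super_learner.score(X_te, y_te), 3))
```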

    A quantitative real options method for aviation technology decision-making in the presence of uncertainty

    The development of new technologies for commercial aviation involves significant risk for technologists, as these programs are often driven by fixed assumptions regarding future airline needs while being subject to many uncertainties at the technical and market levels. To prioritize these developments, technologists must assess their economic viability, even though the standard methods used for capital budgeting are not well suited to handle the overwhelming uncertainty surrounding such developments. This research proposes a framework featuring real options to overcome this challenge. It is motivated by three observations: disregarding the value of managerial flexibility undervalues long-term research and development (R&D) programs; windows of opportunity emerge and disappear, and manufacturers can derive significant value by exploiting their upside potential; and integrating competitive aspects early in the design ensures that development programs are robust with respect to moves by the competition. Real options analyses have been proposed to address some of these points, but adoption has been slow, hindered by constraining frameworks. A panel of academics and practitioners has identified a set of requirements, known as the Georgetown Challenge, that real options analyses must meet to gain more traction amongst practitioners in the industry. In a bid to meet some of these requirements, this research proposes a novel methodology, cross-fertilizing techniques from financial engineering, actuarial science, and statistics, to evaluate and study the timing of technology developments under uncertainty. It aims at substantiating decision making for R&D while having a wider domain of application and an improved ability to handle a complex reality compared to more traditional approaches. The method, named FLexible AViation Investment Analysis (FLAVIA), first uses Monte Carlo techniques to simulate the evolution of the uncertainties driving the value of technology developments. A non-parametric Esscher transform is then applied to perform a change of probability measure and express these evolutions under the equivalent martingale measure. A bootstrap technique is suggested next to construct new non-weighted evolutions of the technology development value under the new measure. A regression-based technique is finally used to analyze the technology development program and to discover trigger boundaries that help define when the technology development program should be launched. Verification of the method on several canonical examples indicates good accuracy and competitive execution time. The method is next applied to the analysis of a performance improvement package (PIP) development using the Integrated Cost And Revenue Estimation method (i-CARE) developed as part of this research. The PIP can be retrofitted to currently operating turbofan engines to mitigate the impact of the aging process on their operating costs. The PIP is subject to market uncertainties, such as the evolution of jet-fuel prices and the possible taxation of carbon emissions. The profitability of the PIP development is investigated, and the value of managerial flexibility and timing flexibility is highlighted.
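    The sketch below illustrates the regression-based optimal-stopping step that methods like FLAVIA build on, in the style of Longstaff-Schwartz: simulate the program value, then work backwards, regressing continuation values on the simulated state to decide when launching is worth more than waiting. The GBM value dynamics, fixed launch cost, and polynomial basis are illustrative assumptions; the thesis's Esscher change of measure and bootstrap step are not shown.

```python
# Regression-based valuation of the option to time a technology launch
# (Longstaff-Schwartz style backward induction on simulated paths).
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt, r = 20000, 40, 0.25, 0.04
cost = 100.0                                            # assumed cost of launching the program
V = np.empty((n_steps + 1, n_paths))
V[0] = 100.0
for t in range(n_steps):                                # GBM proxy for program value
    z = rng.standard_normal(n_paths)
    V[t + 1] = V[t] * np.exp((r - 0.5 * 0.3**2) * dt + 0.3 * np.sqrt(dt) * z)

disc = np.exp(-r * dt)
cash = np.maximum(V[-1] - cost, 0.0)                    # launch value at the horizon
for t in range(n_steps - 1, 0, -1):
    itm = V[t] > cost                                   # only regress where launching pays off
    X = np.vander(V[t, itm], 3)                         # quadratic polynomial basis
    beta = np.linalg.lstsq(X, disc * cash[itm], rcond=None)[0]
    continuation = X @ beta                             # estimated value of waiting
    exercise = V[t, itm] - cost
    cash = disc * cash                                  # discount all paths one step
    cash[itm] = np.where(exercise > continuation, exercise, cash[itm])
print("estimated value of the launch-timing option:", round(disc * cash.mean(), 2))
```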

    A Study of Myoelectric Signal Processing

    This dissertation on various aspects of electromyogram (EMG: muscle electrical activity) signal processing comprises two projects in which I was the lead investigator and two team projects in which I participated. The first investigator-led project was a study of reconstructing continuous EMG discharge rates from neural impulses. Related methods for calculating neural firing rates in other contexts were adapted and applied to the intramuscular motor unit action potential train firing rate. Statistical results based on simulation and clinical data suggest that the performance of spline-based methods is superior to that of conventional filter-based methods in the absence of decomposition error, but it degrades unacceptably in the presence of even the smallest decomposition errors present in real EMG data, which are typically around 3-5%. Optimal parameters for each method are found, and, at normal decomposition error rates, rankings of these methods with their optimal parameters are given. Overall, Hanning filtering and Berger methods exhibit consistent and significant advantages over the other methods. In the second investigator-led project, the technique of signal whitening was applied prior to motion classification of upper limb surface EMG signals previously collected from the forearm muscles of intact and amputee subjects. The motions classified consisted of 11 hand and wrist actions pertaining to prosthesis control. Theoretical models and experimental data showed that whitening increased EMG signal bandwidth by 65-75% and reduced the coefficients of variation of temporal features computed from the EMG. As a result, a consistent classification accuracy improvement of 3-5% was observed for all subjects at short analysis durations (< 100 ms). In the first team-based project, advanced modeling methods for the constant-posture EMG-torque relationship about the elbow were studied: whitened and multi-channel EMG signals, training set duration, regularized model parameter estimation, and nonlinear models. Combined, these methods reduced error to less than a quarter of that of standard techniques. In the second team-based project, a study related biceps-triceps surface EMG to elbow torque at seven joint angles during constant-posture contractions. Models accounting for co-contraction estimated individual flexion muscle torques that were much higher than those from models that did not account for co-contraction.
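    As an illustration of the whitening step in the second project, the sketch below fits an autoregressive model to a calibration segment and applies the corresponding inverse (whitening) filter, which flattens the spectrum and increases the effective signal bandwidth. The AR order and the synthetic "EMG" are illustrative assumptions, not the dissertation's recordings or its exact whitening algorithm.

```python
# AR-model-based whitening: estimate x[n] = sum_k a[k]*x[n-k] + e[n], then
# inverse-filter the signal so the residual e[n] (the whitened signal) remains.
import numpy as np
from scipy.signal import lfilter

def fit_ar_coefficients(x, order=6):
    """Least-squares fit of the AR prediction coefficients a[1..order]."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a

def whiten(x, a):
    """Inverse filter: e[n] = x[n] - sum_k a[k]*x[n-k]."""
    return lfilter(np.concatenate(([1.0], -a)), [1.0], x)

# Synthetic correlated noise standing in for band-limited surface EMG.
rng = np.random.default_rng(0)
raw = lfilter([1.0], [1.0, -1.2, 0.45], rng.standard_normal(20000))
a = fit_ar_coefficients(raw[:5000])          # calibrate on an initial segment
white = whiten(raw, a)
print("lag-1 autocorrelation before/after whitening:",
      round(np.corrcoef(raw[:-1], raw[1:])[0, 1], 3),
      round(np.corrcoef(white[:-1], white[1:])[0, 1], 3))
```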

    Guide to Discrete Mathematics
