
    Closed-Form Approximations for Spread Option Prices and Greeks

    We develop a new closed-form approximation method for pricing spread options. Numerical analysis shows that our method is more accurate than existing analytical approximations. It is also extremely fast, with computing times more than two orders of magnitude shorter than one-dimensional numerical integration. We also develop closed-form approximations for the Greeks of spread options. In addition, we analyze the price sensitivities of spread options and provide lower and upper bounds for digital spread options. Our method enables accurate pricing of large volumes of spread options with different specifications in real time, which offers traders a potential edge in financial markets. The closed-form approximations of the Greeks serve as valuable tools in financial applications such as dynamic hedging and value-at-risk calculations.
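The abstract does not reproduce the paper's approximation itself. As a point of reference, Kirk's (1995) approximation is the standard closed-form baseline that such methods are benchmarked against; a minimal sketch, with all parameter names illustrative and stdlib-only:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def kirk_spread(F1, F2, K, sigma1, sigma2, rho, r, T):
    """Kirk's closed-form approximation for a European spread option
    paying max(F1_T - F2_T - K, 0), with lognormal forward legs F1, F2,
    leg volatilities sigma1/sigma2, correlation rho, rate r, expiry T."""
    w = F2 / (F2 + K)                          # weight of the short leg
    sig = math.sqrt(sigma1**2 - 2.0 * rho * sigma1 * sigma2 * w
                    + (sigma2 * w)**2)         # effective spread volatility
    d1 = (math.log(F1 / (F2 + K)) + 0.5 * sig**2 * T) / (sig * math.sqrt(T))
    d2 = d1 - sig * math.sqrt(T)
    return math.exp(-r * T) * (F1 * norm_cdf(d1) - (F2 + K) * norm_cdf(d2))
```

Because the price is an explicit formula, Greeks follow by differentiating it in closed form, which is the sense in which closed-form Greeks become available for hedging and value-at-risk use.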

    An Optimization model for selecting the economical cutting parameters in an external forward turning operation

    This study develops software that provides optimum lathe cutting conditions for specific materials and parameters. This is accomplished by developing a model, based on empirical and analytical relationships, which estimates the optimum cutting conditions (i.e., spindle speed, feed rate and depth of cut) for a single-pass, external turning operation. These parameters are optimized to yield the minimum production cost while satisfying constraints imposed by workpiece specifications and equipment limitations. The equipment limitations considered include the available machine power, the maximum workholding force of the lathe, the ranges of spindle speed and feed rate of the lathe and tool, and the range of depth of cut of the tool. The workpiece constraints are the surface finish specification for the part and the maximum allowable deflection of the part at each cross section. The effects of each of these limitations are discussed.
    The developed analytical model introduces manufacturing economics, along with the above constraints, into a decision-making process which heretofore relied primarily on the lathe operator's experience and standard handbooks. Typically, the determination of metal cutting conditions is based on the machinist's experience. This method of specifying cutting conditions tends to emphasize the requirements of the workpiece specifications - i.e., surface finish and dimensional tolerances - to the exclusion of economic considerations. On the other hand, a pure rate-of-production analysis would maximize the ratio of actual cutting time to total machining time without considering workpiece specifications or the implications of operating the equipment at the maximum production rate. While minimizing production time generally reduces production costs, there is a trade-off to be considered: a higher production rate requires an increase in spindle speed or tool feed rate, resulting in decreased tool life. This reduction in tool life adds the costs of additional tools and tool-changing time to the production cost. Hence, it is necessary to find the operating conditions which minimize cost while considering all aspects of the manufacturing process. With the increasing use of CNC lathes, which involve large capital expenditures, an economic analysis combined with technical considerations becomes imperative for minimizing overall production cost. Further, an effective optimization procedure allows low-volume runs on many different part numbers, with the first part being both cost effective and fit for function.
    The methodology used to develop the model was based on published literature, experimentation, and several well-known and widely accepted equations defining tool-life and tool-workpiece relationships. Through the use of a statistically designed experiment, data were obtained and a set of equations was determined to estimate the cutting forces generated in the turning operation. The data compared favorably with the published cutting-force equations used in this model. The parameters for this experiment, which was conducted on an instrumented lathe at Rensselaer Polytechnic Institute, were feed rate and depth of cut. An additional experiment was conducted to determine the tool life corresponding to the maximum allowable tool flank wear for several feed rates. These values are unique to a tool-workpiece material combination. For the purpose of applying the model, the experimentation was restricted to a coated carbide tool insert and a free-machining stainless steel, AISI 416. The work was based on the actual needs and production tooling of a major company. The determination of the empirical constants for other tool-workpiece material combinations would extend the model's application. The optimization procedure is incorporated into a computer program to calculate the economical machining parameters in a finishing operation.
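The abstract names the ingredients of the model (a Taylor-type tool-life relationship, cutting-force equations, a cost objective, finish and power constraints) without reproducing them. A minimal sketch of that kind of constrained cost minimisation, using the textbook Taylor tool-life law and ideal-geometry surface-finish and power constraints; every constant below is illustrative, not from the thesis:

```python
import math

def unit_cost(V, f, *, D=50.0, L=200.0,
              Co=1.0, Ct=8.0, t_ch=1.5, t_h=0.5,
              C_tl=350.0, n=0.25):
    """Production cost per part ($) for one external turning pass.
    V: cutting speed (m/min); f: feed (mm/rev); D, L: part dia./length (mm);
    Co: machine+operator rate ($/min); Ct: cost per cutting edge ($);
    t_ch: tool-change time (min); t_h: handling time (min);
    C_tl, n: Taylor tool-life constants (V * T**n = C_tl)."""
    t_m = math.pi * D * L / (1000.0 * V * f)   # cutting time per part, min
    T_life = (C_tl / V) ** (1.0 / n)           # tool life at speed V, min
    changes = t_m / T_life                     # tool changes per part
    return Co * (t_m + t_h) + (Co * t_ch + Ct) * changes

def optimise(Ra_max=3.2, r_nose=0.8, Ks=2000.0, depth=1.0, P_max=5.0):
    """Grid-search the (V, f) plane for the minimum-cost feasible point,
    subject to a surface-finish limit (um) and a machine-power limit (kW)."""
    best = None
    for i in range(25):                        # V = 60 .. 300 m/min
        V = 60.0 + 10.0 * i
        for j in range(26):                    # f = 0.05 .. 0.30 mm/rev
            f = 0.05 + 0.01 * j
            Ra = f * f * 1000.0 / (32.0 * r_nose)  # ideal-geometry Ra, um
            P = Ks * f * depth * V / 60000.0       # cutting power, kW
            if Ra > Ra_max or P > P_max:
                continue
            c = unit_cost(V, f)
            if best is None or c < best[0]:
                best = (c, V, f)
    return best
```

The trade-off the abstract describes is visible in `unit_cost`: raising V shrinks the machining-time term but inflates the tool-change term through the Taylor law, so the feasible cost surface has an interior minimum rather than favouring the maximum production rate.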

    A study in the financial valuation of a topping oil refinery

    Oil refineries underpin modern-day economics, finance and engineering – without their refined products the world would stand still: vehicles would have no petrol, planes would be grounded without kerosene, and homes would go unheated without heating oil. In this thesis I study the refinery as a financial asset; in this respect it is not too dissimilar to a chemical plant. There are a number of reasons for this research: over recent years there have been legal disputes based on a refinery's value; investors and entrepreneurs are interested in purchasing refineries; and the research in this arena is sparse. I utilise knowledge and techniques from finance, optimisation, stochastic mathematics and commodities to build programs that obtain a financial value for an oil refinery. In chapter one I introduce the background of crude oil and the significance of the refinery in the oil value chain. In chapter two I construct a traditional discounted cash flow valuation, as often applied in practical finance. In chapter three I program an extensive piecewise non-linear optimisation over the entire state space, leveraging a simulation of the refined products using a set of single-factor Schwartz (1997) stochastic equations often applied to commodities. In chapter four I program an optimisation using an approximation on crack spread option data, with the aim of reducing the run time of the solution found in chapter three; this is achieved by utilising a two-factor Hull & White sub-trinomial-tree-based numerical scheme; see the Hull & White (1994) articles I & II for a thorough description. I obtain realistic and accurate numbers for a topping oil refinery using financial market contracts and other real data for the Vadinar refinery in Gujarat, India.
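The simulation layer in chapter three rests on the single-factor Schwartz (1997) model, in which the log spot price follows a mean-reverting Ornstein-Uhlenbeck process, d ln S = κ(α − ln S) dt + σ dW. A minimal sketch of that simulation step using the exact discretisation of the OU process (all parameter values illustrative, not from the thesis):

```python
import math
import random

def schwartz_paths(s0, kappa, alpha, sigma, T, n_steps, n_paths, seed=0):
    """Simulate spot-price paths under the one-factor Schwartz (1997) model:
    d ln S = kappa * (alpha - ln S) dt + sigma dW.
    Uses the exact transition of the OU process in log-price, so the step
    size introduces no discretisation bias."""
    rng = random.Random(seed)
    dt = T / n_steps
    decay = math.exp(-kappa * dt)                      # mean-reversion factor
    sd = sigma * math.sqrt((1.0 - decay * decay) / (2.0 * kappa))
    paths = []
    for _ in range(n_paths):
        x = math.log(s0)
        path = [s0]
        for _ in range(n_steps):
            x = x * decay + alpha * (1.0 - decay) + sd * rng.gauss(0.0, 1.0)
            path.append(math.exp(x))
        paths.append(path)
    return paths
```

Long-dated values revert toward exp(α) at rate κ, which is why this family of models is the usual choice for refined-product prices: cashflow simulations driven by it do not drift off to implausible levels the way geometric Brownian motion does.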

    High linearity analog and mixed-signal integrated circuit design

    Linearity is one of the most important specifications in electrical circuits.
    In Chapter 1, a ladder-based transconductance network is adopted for the first time to build low-distortion analog filters for low-frequency applications. This new technique eliminates the limitation that traditional passive resistors impose on low-frequency applications. Based on this relationship, a strategy for designing highly linear analog continuous-time filters is developed. Following this strategy, a prototype analog integrated filter was designed and fabricated in the AMI05 0.5 um standard CMOS process. Experimental results proved that this technique can provide excellent linearity within a very limited active area.
    In Chapter 2, the relationships between the transconductance networks and the major circuit specifications are explored. The analysis reveals the trade-off between the silicon area saved by the transconductance networks and other important specifications of the overall circuit, such as linearity, noise level and sensitivity to process variations. Experimental results from a discrete-component circuit matched our analytical predictions of the changes in linearity and noise performance associated with different transconductance networks very well.
    Chapter 3 contains the analysis and mathematical proofs of the optimum passive-area allocation for several of the most popular active analog filters. Because the total area is now manageable with the technique introduced in Chapter 1, further reduction of the total area is important for efficient use of silicon, especially given today's fast-growing area efficiency of high-density digital circuits. This study presents the mathematical conclusion that the minimum passive area is achieved when the resistor and capacitor areas are equalized.
    In Chapter 4, a well-recognized and highly honored current division circuit is studied. Although it was claimed to be inherently linear, and more than 60 published works have reported high linearity based on this technique, our study discovered that this current division circuit achieves only limited linearity unless proper circuit conditions are maintained, and that the experimentally verified performance actually rests on a more general circuit principle. Notwithstanding this limitation, we invented a novel current division digital-to-analog converter (DAC) based on this technique. Benefiting from its simple circuit structure and moderately good linearity, a prototype 8-bit DAC was designed in the TSMC018 0.18 um CMOS process, and post-layout simulations exhibited good linearity with very low power consumption and an extremely small active area.
    As part of the study of the output stage for the current division DAC discussed in Chapter 4, a current mirror is expected to amplify the output current to drive a low-resistance load. The strategy for achieving the optimum bandwidth of a cascode current mirror with a fixed total current gain is discussed in Chapter 5.
    Improving the linearity of pipeline ADCs has been one of the hottest and hardest topics in the solid-state circuit community for a decade. In Chapter 6, a comprehensive study of the existing calibration algorithms for pipeline ADCs is presented. The benefits and limitations of different calibration algorithms are discussed. Building on those reported works, a new model-based calibration is delivered. The simulation results demonstrate that model-based algorithms are sensitive to model accuracy, and this weakness is very hard to remove. From there, we predict future developments of calibration algorithms that can break the linearity limitations of pipelined ADCs. (Abstract shortened by UMI.)
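The linearity claims for the prototype DAC would normally be quantified with the standard static metrics DNL and INL, which the abstract does not define. A generic, illustrative computation for a binary-weighted DAC (nothing below is taken from the dissertation):

```python
def dac_levels(bit_weights):
    """Output level for every input code of a binary-weighted DAC,
    where bit_weights[i] is the (possibly mismatched) weight of bit i."""
    n = len(bit_weights)
    return [sum(w for i, w in enumerate(bit_weights) if (code >> i) & 1)
            for code in range(1 << n)]

def dnl_inl(levels):
    """Static linearity metrics in LSB, using an end-point-fit LSB:
    DNL[k] = (step k)/LSB - 1;  INL[k] = (level k - fit line)/LSB."""
    lsb = (levels[-1] - levels[0]) / (len(levels) - 1)
    dnl = [(levels[k] - levels[k - 1]) / lsb - 1.0
           for k in range(1, len(levels))]
    inl = [(levels[k] - levels[0]) / lsb - k for k in range(len(levels))]
    return dnl, inl

# An ideal 8-bit DAC has zero DNL and INL everywhere.
ideal = [1 << i for i in range(8)]
dnl, inl = dnl_inl(dac_levels(ideal))
```

Injecting random mismatch into `bit_weights` shows how element matching bounds achievable static linearity, which is the kind of limitation that motivates the calibration techniques surveyed in Chapter 6.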

    Physical, social and intellectual landscapes in the Neolithic: contextualizing Scottish and Irish Megalithic architecture

    The broad aim of this study is to examine the way in which people build worlds which are liveable and which make sense; to explore the means by which a social and intellectual order particular to time and place is embedded within the material universe. The phenomenon of monumentality is considered in the context of changing narratives of place and biographies of person and landscape, which are implicated in the making of self and society and in the perception of being in place. Three groups of megalithic mortuary monuments of quite different formal characteristics, constructed and used predominantly during the fourth and third millennia BC, are analyzed in detail within their landscape settings: a series of Clyde tombs on the Isle of Arran in southwest Scotland; a group of cairns on the Black Isle peninsula in the northeast of the country, which belong primarily to the Orkney-Cromarty tradition; and a passage tomb complex situated in east-central Ireland, among the Loughcrew hills. Individual studies are presented for each of these distinct and diverse landscapes, which consider the ways in which natural and built form interact through the medium of the human body, how megalithic architecture operated as part of local strategies for creating a workable scheme to 'place' humanity in relation to a wider cosmos, and how the interrelation of physical, social and intellectual landscapes may have engendered particular understandings of the world. An attempt is made to write regionalized, localized neolithics which challenge some of the traditional frameworks of the discipline - in particular those concerned with morphological, chronological and economic classification - and modes of representation which, by removing subject and monument from a specific material context, establish a spurious objectivity.

    Mapping genes for birth weight in a wild population of red deer (Cervus elaphus)


    The emulation of nations: William Robertson and the International Order
