
    Approximation of Jump Diffusions in Finance and Economics

    In finance and economics the key dynamics are often specified via stochastic differential equations (SDEs) of jump-diffusion type. The class of jump-diffusion SDEs that admits explicit solutions is rather limited. Consequently, discrete time approximations are required. In this paper we give a survey of strong and weak numerical schemes for SDEs with jumps. Strong schemes provide pathwise approximations and therefore can be employed in scenario analysis, filtering or hedge simulation. Weak schemes are appropriate for problems such as derivative pricing or the evaluation of risk measures and expected utilities; here, only an approximation of the probability distribution of the jump-diffusion process is needed. As a framework for applications of these methods in finance and economics we use the benchmark approach. Strong approximation methods are illustrated by scenario simulations. Numerical results on the pricing of options on an index are presented using weak approximation methods.
    Keywords: jump-diffusion processes; discrete time approximation; simulation; strong convergence; weak convergence; benchmark approach; growth optimal portfolio
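
    As a concrete illustration of the simplest strong scheme covered by such surveys, an Euler-type scheme for a jump-diffusion with multiplicative noise can be sketched as follows (a minimal sketch; the drift, volatility, jump intensity and fixed relative jump size are illustrative assumptions, not values from the paper):

        import numpy as np

        def euler_jump_diffusion(x0, mu, sigma, lam, jump, T, n_steps, rng):
            # Strong Euler scheme for dX = mu*X dt + sigma*X dW + jump*X dN,
            # where N is a Poisson process with intensity lam.
            dt = T / n_steps
            x = np.empty(n_steps + 1)
            x[0] = x0
            for n in range(n_steps):
                dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment
                dN = rng.poisson(lam * dt)          # jumps in this step
                x[n + 1] = x[n] * (1.0 + mu * dt + sigma * dW + jump * dN)
            return x

        rng = np.random.default_rng(0)
        path = euler_jump_diffusion(1.0, 0.05, 0.2, 0.5, -0.1, 1.0, 250, rng)

    Averaging a payoff over many such paths targets only the distribution (a weak approximation), while pathwise comparison against a finer discretisation is what strong convergence measures.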

    Performance of a parallel code for the Euler equations on hypercube computers

    The performance of hypercube computers was evaluated on a computational fluid dynamics problem, and the parallel-environment issues that must be addressed were considered, such as algorithm changes, implementation choices, programming effort, and programming environment. The evaluation focuses on a widely used fluid dynamics code, FLO52, which solves the two-dimensional steady Euler equations describing flow around an airfoil. The code development experience is described, including interacting with the operating system, utilizing the message-passing communication system, and the code modifications necessary to increase parallel efficiency. Results from two hypercube parallel computers (a 16-node iPSC/2 and a 512-node NCUBE/ten) are discussed and compared. In addition, a mathematical model of the execution time was developed as a function of several machine and algorithm parameters. This model accurately predicts the actual run times obtained and is used to explore the performance of the code in interesting yet physically realizable regions of the parameter space. Based on this model, predictions about future hypercubes are made.
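
    The paper's execution-time model is specific to FLO52 and the machines studied; a generic compute-plus-communication sketch of the same kind (all parameter names here are hypothetical, not the paper's) might look like:

        import math

        def predicted_time(n_cells, n_procs, t_flop, flops_per_cell,
                           t_startup, t_byte, bytes_per_cell):
            # Per-iteration time for a 2-D grid split into square subgrids:
            # computation scales with the local subgrid area, communication
            # with its perimeter plus a per-message startup cost.
            local = n_cells / n_procs
            compute = t_flop * flops_per_cell * local
            perimeter = 4.0 * math.sqrt(local)
            communicate = 4.0 * t_startup + t_byte * bytes_per_cell * perimeter
            return compute + communicate

    In such a model, doubling the processor count roughly halves the compute term but shrinks messages only by a factor of sqrt(2), which is what limits parallel efficiency and is the kind of trade-off that extrapolation to future machines rests on.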

    Multinomial Inverse Regression for Text Analysis

    Text data, including speeches, stories, and other document forms, are often connected to sentiment variables that are of interest for research in marketing, economics, and elsewhere. Such data are also very high dimensional and difficult to incorporate into statistical analyses. This article introduces a straightforward framework of sentiment-preserving dimension reduction for text data. Multinomial inverse regression is introduced as a general tool for simplifying predictor sets that can be represented as draws from a multinomial distribution, and we show that logistic regression of phrase counts onto document annotations can be used to obtain low-dimensional document representations that are rich in sentiment information. To facilitate this modeling, a novel estimation technique is developed for multinomial logistic regression with a very high-dimensional response. In particular, independent Laplace priors with unknown variance are assigned to each regression coefficient, and we detail an efficient routine for maximization of the joint posterior over coefficients and their prior scale. This "gamma-lasso" scheme yields stable and effective estimation for general high-dimensional logistic regression, and we argue that it will be superior to current methods in many settings. Guidelines for prior specification are provided, algorithm convergence is detailed, and estimator properties are outlined from the perspective of the literature on non-concave likelihood penalization. Related work on sentiment analysis from statistics, econometrics, and machine learning is surveyed and connected. Finally, the methods are applied in two detailed examples, and we provide out-of-sample prediction studies to illustrate their effectiveness.
    Comment: Published in the Journal of the American Statistical Association 108, 2013, with discussion (rejoinder is here: http://arxiv.org/abs/1304.4200). Software is available in the textir package for R.
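
    A minimal sketch of the sufficient-reduction idea follows; the paper's gamma-lasso MAP estimator is replaced here by a crude smoothed log-odds plug-in, and the counts are toy data, so this only illustrates the shape of the computation:

        import numpy as np

        # Toy phrase-count matrix X (documents x phrases), binary sentiment y.
        X = np.array([[3., 0., 1., 0.],
                      [0., 2., 0., 1.],
                      [4., 1., 0., 0.],
                      [0., 3., 1., 2.]])
        y = np.array([1, 0, 1, 0])

        # Crude plug-in for the inverse-regression loadings phi: smoothed
        # log-odds of each phrase's frequency between the sentiment classes.
        # The paper instead estimates phi by MAP under independent Laplace
        # priors (the gamma-lasso).
        c1 = X[y == 1].sum(axis=0) + 0.5
        c0 = X[y == 0].sum(axis=0) + 0.5
        phi = np.log((c1 / c1.sum()) / (c0 / c0.sum()))

        # Sufficient-reduction step: project each document's normalised
        # counts onto phi, giving one sentiment-preserving score per document.
        z = X @ phi / X.sum(axis=1)
        print(z)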

    Multiscale derivation, analysis and simulation of collective dynamics models: geometrical aspects and applications

    This thesis is a contribution to the study of swarming phenomena from the point of view of mathematical kinetic theory. This multiscale approach starts from stochastic individual-based (or particle) models and aims at the derivation of partial differential equation models for statistical quantities as the number of particles tends to infinity. The latter class of models is better suited to mathematical analysis, revealing and explaining large-scale emerging phenomena observed in various biological systems such as flocks of birds or swarms of bacteria. Within this objective, a large part of this thesis is dedicated to the study of a body-attitude coordination model and, through this example, of the influence of geometry on self-organisation.
    The first part of the thesis deals with the rigorous derivation of partial differential equation models from particle systems with mean-field interactions. After a review of the literature, in particular on the notion of propagation of chaos, a rigorous convergence result is proved for a large class of geometrically enriched piecewise deterministic particle models towards local BGK-type equations. In addition, the method developed is applied to the design and analysis of a new particle-based algorithm for sampling. This first part also addresses the question of the efficient simulation of particle systems using recent GPU routines.
    The second part of the thesis is devoted to kinetic and fluid models for body-oriented particles. The kinetic model is rigorously derived as the mean-field limit of a particle system. In the spatially homogeneous case, a phase transition phenomenon is investigated which discriminates, depending on the parameters of the model, between a "disordered" dynamics and a self-organised "ordered" dynamics. The fluid (or macroscopic) model was derived as the hydrodynamic limit of the kinetic model a few years ago by Degond et al. The analytical and numerical study of this model reveals the existence of new self-organised phenomena, which are confirmed and quantified using particle simulations. Finally, a generalisation of this model to arbitrary dimension is presented.
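
    The body-attitude model itself lives on the rotation group SO(3); as a much simpler stand-in for the individual-based models the thesis starts from, a 2-D Vicsek-type alignment step (a minimal sketch, not the thesis's model) can be written as:

        import numpy as np

        def vicsek_step(pos, theta, v0, dt, radius, noise, box, rng):
            # One step of a minimal 2-D Vicsek-type alignment model: each
            # particle moves at speed v0 and resets its heading to the mean
            # heading of its neighbours, perturbed by angular noise.
            new_theta = np.empty_like(theta)
            for i in range(len(theta)):
                d = pos - pos[i]
                d -= box * np.round(d / box)              # periodic distances
                nb = np.hypot(d[:, 0], d[:, 1]) < radius  # neighbours (incl. self)
                mean_dir = np.arctan2(np.sin(theta[nb]).mean(),
                                      np.cos(theta[nb]).mean())
                new_theta[i] = mean_dir + noise * rng.uniform(-np.pi, np.pi)
            step = v0 * dt * np.column_stack((np.cos(new_theta), np.sin(new_theta)))
            return (pos + step) % box, new_theta

        rng = np.random.default_rng(0)
        pos = rng.uniform(0.0, 10.0, size=(200, 2))
        theta = rng.uniform(-np.pi, np.pi, size=200)
        for _ in range(100):
            pos, theta = vicsek_step(pos, theta, 0.3, 1.0, 1.0, 0.1, 10.0, rng)

    The ordered/disordered phase transition mentioned in the abstract is of the kind such models exhibit as the noise parameter is varied.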

    Resource allocation and adaptive scheduling for scalable video streaming

    Recent advances in areas such as video compression and network architectures allow for the deployment of novel video distribution applications, which have the potential to provide ubiquitous media access to end users. In recent years, applications based on audio and video streaming have turned out to be immensely popular, and the Internet has become the most widely used vector for media content distribution, owing to its high availability and connectivity. However, the Internet infrastructure is not adapted to the specific characteristics of multimedia traffic, which presents a certain tolerance to losses but strict delay and high bandwidth requirements. In this thesis, our goal is to improve the efficiency of media delivery over the existing network architecture. To do so, we consider the delivery of scalable video in three main scenarios: one-to-one client-server architectures, one-to-many broadcasting architectures, and many-to-one distributed streaming architectures.
    First, we propose a distributed media-friendly rate allocation algorithm for the delivery of both finely and coarsely scalable video streams. Unlike existing solutions, our algorithm explicitly takes the characteristics of media streams into consideration. As a result, it provides rate allocations that better fit the heterogeneous characteristics of media streams. We outline an implementation that is robust to random feedback delays and permits a scalable deployment of the algorithm. The rate allocation computed by our algorithm achieves network stability and high bandwidth utilization, and moreover maximizes the average received quality over all streams delivered in the network. For the transmission of coarsely layered streams, we derive conditions on the encoding rates of the video layers. These conditions depend on the allowed end-to-end delay and on the rate allocation algorithm that controls the sending rates, and they allow us to take full advantage of the allocated transmission rates.
    Second, we investigate the problem of jointly addressing the needs of multiple receivers that consume different versions of a layered media stream in a broadcasting scenario. We provide optimal scheduling algorithms that jointly optimize the playback delay and the buffer occupancy at all of these receivers when the channel is known. Furthermore, we analyze low-complexity, heuristics-based optimization techniques, which provide close-to-optimal results when only limited channel knowledge is available.
    Finally, we explore the possibility of exploiting the inherent network diversity provided by the Internet infrastructure. In particular, we consider media delivery schemes where multiple senders are available for the transmission of a scalable video stream to a single client. Such an architecture is referred to as a distributed streaming architecture. It has the benefit of aggregating multiple unreliable channels into a single, more robust channel with high availability. Through the use of Fountain codes, we are able to transform the distributed streaming problem into a rate allocation problem of lower complexity. The solution to this problem is shown to depend not only on the average packet loss rate, but also on the average length of the packet loss bursts observed on each of the available channels. The coding scheme that we suggest enables our system to adapt the streamed content to the network characteristics, as well as to the needs of the receiving client.
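
    The distributed algorithm itself is beyond the scope of an abstract, but the flavour of media-aware rate allocation can be sketched with a toy closed-form problem: minimise total distortion subject to a shared capacity, under an assumed per-stream distortion model D_i(r) = a_i / r (the model and the numbers are assumptions, not the thesis's):

        import numpy as np

        def allocate_rates(a, capacity):
            # Toy media-aware allocation: stream i has distortion a_i / r_i;
            # minimising sum_i a_i / r_i subject to sum_i r_i = capacity
            # gives r_i proportional to sqrt(a_i) (set the derivative of the
            # Lagrangian to zero), so more demanding streams get more rate.
            s = np.sqrt(np.asarray(a, dtype=float))
            return capacity * s / s.sum()

        print(allocate_rates([1.0, 4.0, 9.0], capacity=6.0))  # -> [1. 2. 3.]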

    Toward sparse and geometry adapted video approximations

    Video signals are sequences of natural images, where images are often modeled as piecewise-smooth signals. Hence, video can be seen as a 3D piecewise-smooth signal made of piecewise-smooth regions that move through time. Based on the piecewise-smooth model and on related theoretical work on the rate-distortion performance of wavelet and oracle-based coding schemes, one can better analyze the coding strategies that adaptive video codecs need to implement in order to be efficient. Efficient video representations for coding purposes require the use of adaptive signal decompositions able to capture appropriately the structure and redundancy appearing in video signals. Adaptivity must allow for proper modeling of signals, in order to represent them at the lowest possible coding cost. Video is a very structured signal with high geometric content, including temporal geometry (normally represented by motion information) as well as spatial geometry. Clearly, most past and present strategies used to represent video signals do not properly exploit its spatial geometry. As in the case of images, a very interesting approach is the decomposition of video using large over-complete libraries of basis functions able to represent salient geometric features of the signal. In the framework of video, these features should model 2D geometric video components as well as their temporal evolution, forming spatio-temporal 3D geometric primitives. Throughout this PhD dissertation, different aspects of the use of adaptivity in video representation are studied, looking toward exploiting both aspects of video: its piecewise nature and its geometry.
    The first part of this work studies the use of localized temporal adaptivity in subband video coding, considering two transformation schemes used for video coding: 3D wavelet representations and motion-compensated temporal filtering. A theoretical R-D analysis as well as empirical results demonstrate how temporal adaptivity improves the coding performance of moving edges in 3D transform based video coding (without motion compensation), while equally exploiting redundancy in non-moving video areas. The analogy between motion-compensated video and 1D piecewise-smooth signals is studied as well. This motivates the introduction of local length adaptivity within frame-adaptive motion-compensated lifted wavelet decompositions, which yields optimal rate-distortion performance when video motion trajectories are shorter than the transformation "Group Of Pictures", or when efficient motion compensation cannot be ensured.
    After studying temporal adaptivity, the second part of this thesis is dedicated to understanding how temporal and spatial geometry can be jointly exploited. This work builds on previous results that considered the representation of spatial geometry in video (but not temporal geometry, i.e., without motion). In order to obtain flexible and efficient (sparse) signal representations using redundant dictionaries, highly non-linear decomposition algorithms, like Matching Pursuit, are required. General signal representation using these techniques is still quite unexplored. For this reason, prior to the study of video representation, some aspects of non-linear decomposition algorithms and the efficient decomposition of images using Matching Pursuit and a geometric dictionary are investigated. Part of this investigation concerns the influence of using a priori models within non-linear approximation algorithms. Dictionaries with high internal coherence have difficulty yielding optimally sparse signal representations when used with Matching Pursuit. It is proved, theoretically and empirically, that inserting a priori models into the algorithm improves its capacity to obtain sparse signal approximations, mainly when coherent dictionaries are used. Another point discussed in this preliminary study concerns the approach used in this work for the decomposition of video frames and images. The technique proposed in this thesis improves on previous work, where the authors had to resort to sub-optimal Matching Pursuit strategies (using Genetic Algorithms) given the size of the function library. In this work the use of full-search strategies is made possible, while approximation efficiency is significantly improved and computational complexity is reduced. Finally, a-priori-based Matching Pursuit geometric decompositions are investigated for geometric video representations. Regularity constraints are taken into account to recover the temporal evolution of spatial geometric signal components. The results obtained for coding and multi-modal (audio-visual) signal analysis clarify many unknowns and are promising, encouraging further research on the subject.
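
    The greedy step at the heart of these decompositions is plain Matching Pursuit, sketched below for any dictionary with unit-norm atoms as columns (the geometric dictionary and the a priori models of the thesis are not reproduced here):

        import numpy as np

        def matching_pursuit(signal, dictionary, n_atoms):
            # Greedy Matching Pursuit: pick the atom most correlated with
            # the current residual, subtract its contribution, repeat.
            residual = signal.astype(float).copy()
            atoms, coeffs = [], []
            for _ in range(n_atoms):
                corr = dictionary.T @ residual          # correlation with every atom
                k = int(np.argmax(np.abs(corr)))        # best-matching atom
                atoms.append(k)
                coeffs.append(corr[k])
                residual -= corr[k] * dictionary[:, k]  # peel off its contribution
            return atoms, coeffs, residual

    An a priori model, in the spirit of the thesis, would reweight corr before the argmax so that selection is biased toward geometrically plausible atoms, which is what helps when the dictionary is highly coherent.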

    Global optimisation in process design

    This thesis concerns the development of rigorous global optimisation techniques and their application to process engineering problems. Many process engineering optimisation problems are nonlinear, and local optimisation approaches may not provide global solutions if the problems are nonconvex. The global optimisation approach utilised in this work is based on interval branch and bound algorithms. The interval global optimisation approach is extended to take advantage of information about the structure of the problem and to facilitate the efficient solution of constrained NLPs using interval analysis. This is achieved by reformulating the interval lower-bounding procedure as a convex programming problem, which allows the inclusion of convex constraints in the lower-bounding problem. The approach is applied to a number of standard constrained test problems, indicating that the algorithm retains the wide applicability of interval methods while allowing efficient solution of constrained problems. A new approach to the construction of modular flowsheets is also developed, allowing flowsheets to be built from linked unit models in a way that enables the application of a number of global optimisation algorithms. The modular flowsheets are constructed with 'generic' unit operations which provide interval bounds, linear bounds, derivatives and derivative bounds using extended numerical types; this genericity means that new 'extended types' can be devised and used without rewriting the unit-operation models. The new interval global optimisation algorithm is applied to the generic modular flowsheet: using interval analysis and automatic differentiation as the arithmetic types, lower-bounding linear programs are constructed and used in a branch and bound framework to globally optimise the modular flowsheet.
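
    The flavour of interval branch and bound can be sketched on a 1-D problem; here naive interval enclosures stand in for the convex lower-bounding LPs developed in the thesis, and the objective is a made-up example:

        import heapq

        def interval_bnb(f_bounds, lo, hi, tol=1e-6):
            # Minimal interval branch and bound for 1-D global minimisation.
            # f_bounds(a, b) must return a rigorous (lower, upper) enclosure
            # of f over [a, b].
            lb0, best_ub = f_bounds(lo, hi)
            heap = [(lb0, lo, hi)]
            while heap:
                lb, a, b = heapq.heappop(heap)
                if lb > best_ub or b - a < tol:
                    continue                          # fathomed or converged
                m = 0.5 * (a + b)                     # bisect the box
                for c, d in ((a, m), (m, b)):
                    box_lb, box_ub = f_bounds(c, d)
                    best_ub = min(best_ub, box_ub)    # tighten the incumbent
                    if box_lb <= best_ub:
                        heapq.heappush(heap, (box_lb, c, d))
            return best_ub

        # Example: f(x) = x^2 - x on [-1, 2] via naive interval arithmetic.
        def f_bounds(a, b):
            sq_lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
            sq_hi = max(a * a, b * b)
            return sq_lo - b, sq_hi - a               # [a,b]^2 - [a,b]

        print(interval_bnb(f_bounds, -1.0, 2.0))      # global minimum is -0.25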