An Optimal Query Assignment for Wireless Sensor Networks
A trade-off between two QoS requirements of wireless sensor networks, query
waiting time and validity (age) of the data feeding the queries, is
investigated. We propose a continuous-time Markov decision process with a drift
that trades off the two QoS requirements by assigning incoming queries
to the wireless sensor network or to the database. To compute an optimal
assignment policy, we derive, by means of non-standard uniformization, a
discrete-time Markov decision process that is stochastically equivalent to the
initial continuous-time process. We determine an optimal query assignment
policy for the discrete-time process by means of dynamic programming. Next, we
assess the performance of the optimal policy numerically and show that it
outperforms, in terms of average assignment costs, three other heuristics
commonly used in practice. Lastly, the optimality of our model is also
confirmed in the case of real query traffic, where our proposed policy achieves
significant cost savings compared to the heuristics.
Comment: 27 pages, 20 figures
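The uniformization step described in the abstract can be illustrated on a toy example. The generator, stage costs, and discount factor below are hypothetical, and the fixed-policy evaluation is a simplified stand-in for the paper's full controlled (MDP) setting:

```python
import numpy as np

# Hypothetical 3-state CTMC generator Q (off-diagonal rates, rows sum to zero).
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

# Uniformization: pick a rate Lam >= max_i |Q[i, i]| and set
# P = I + Q / Lam, which is a valid discrete-time transition matrix
# stochastically equivalent (at the uniformized jump times) to the CTMC.
Lam = np.max(-np.diag(Q))
P = np.eye(3) + Q / Lam

# Dynamic programming on the uniformized chain: iterative evaluation of a
# discounted cost for a fixed policy, with assumed per-state stage costs.
c = np.array([1.0, 4.0, 2.0])   # hypothetical stage costs
beta = 0.9                      # discount factor
V = np.zeros(3)
for _ in range(500):
    V = c + beta * P @ V        # converges to (I - beta*P)^{-1} c
```

In the controlled setting of the paper, the update would take a minimum over assignment actions instead of evaluating a single fixed policy.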
Fast drift approximated pricing in the BGM model
This paper shows that the forward rates process discretized by a single time step, together with a separability assumption on the volatility function, allows for representation by a low-dimensional Markov process. This in turn leads to efficient pricing by, for example, finite differences. We then develop a discretization based on the Brownian bridge, especially designed to have high accuracy for single time stepping. The scheme is proven to converge weakly with order 1. We compare the single time step method for pricing on a grid with multi-step Monte Carlo simulation for a Bermudan swaption, reporting a computational speed increase of a factor of 10 while pricing remains sufficiently accurate.
Keywords: BGM model, predictor-corrector, Brownian bridge, Markov processes, separability, Feynman-Kac, Bermudan swaption
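The Brownian-bridge construction mentioned above can be sketched generically. This is a plain Brownian bridge sampler (draw the endpoint first, then fill in interior points conditionally), not the paper's tailored single-time-step discretization of the forward rates:

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_bridge(T, n, w0=0.0, wT=None, rng=rng):
    """Sample a Brownian path on [0, T] at n+1 equidistant points by first
    drawing the endpoint, then filling in interior points with the bridge's
    conditional Gaussian distribution."""
    if wT is None:
        wT = w0 + np.sqrt(T) * rng.standard_normal()
    t = np.linspace(0.0, T, n + 1)
    w = np.empty(n + 1)
    w[0], w[-1] = w0, wT
    for i in range(1, n):
        dt = t[i] - t[i - 1]
        rem = T - t[i - 1]
        # W(t_i) | W(t_{i-1}), W(T) is Gaussian with this mean and variance:
        mean = w[i - 1] + dt / rem * (wT - w[i - 1])
        var = dt * (rem - dt) / rem
        w[i] = mean + np.sqrt(var) * rng.standard_normal()
    return t, w
```

Conditioning on the endpoint first is what makes bridge-based schemes attractive for single-time-step methods: the terminal value is exact by construction.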
An optimal query assignment for wireless sensor networks
With the increased use of large-scale real-time embedded sensor networks, new control mechanisms are needed to avoid congestion and meet required Quality of Service (QoS) levels. In this paper, we propose a Markov Decision Problem (MDP) to prescribe an optimal query assignment strategy that achieves a trade-off between two QoS requirements: query response time and data validity. Query response time is the time that queries spend in the sensor network until they are solved. Data validity (freshness) indicates the time elapsed between data acquisition and query response and whether that time period exceeds a predefined tolerance. We assess the performance of the proposed model by means of a discrete event simulation. Compared with three other heuristics, derived from practical assignment strategies, the proposed policy performs better in terms of average assignment costs. Also in the case of real query traffic simulations, results show that the proposed policy achieves cost gains compared with the other heuristics considered. The results provide useful insight into deriving simple assignment strategies that can be easily used in practice.
A Semi-Lagrangian scheme for a modified version of the Hughes model for pedestrian flow
In this paper we present a Semi-Lagrangian scheme for a regularized version
of the Hughes model for pedestrian flow. Hughes originally proposed a coupled
nonlinear PDE system describing the evolution of a large pedestrian group
trying to exit a domain as fast as possible. The original model corresponds to
a system of a conservation law for the pedestrian density and an Eikonal
equation to determine the weighted distance to the exit. We consider this model
in the presence of small diffusion and discuss the numerical analysis of the
proposed Semi-Lagrangian scheme. Furthermore, we illustrate the effect of small
diffusion on the exit time with various numerical experiments.
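A minimal semi-Lagrangian step can be illustrated on 1D linear advection. The grid, velocity, and time step below are assumed for illustration; this toy setup is not the authors' pedestrian-flow discretization:

```python
import numpy as np

def sl_step(u, x, a, dt):
    """One semi-Lagrangian step for u_t + a*u_x = 0: trace characteristics
    backward to the departure points and interpolate the old solution there.
    The scheme is unconditionally stable in dt (no CFL restriction)."""
    xd = x - a * dt                 # departure points of the characteristics
    return np.interp(xd, x, u)      # linear interpolation of u at xd

x = np.linspace(0.0, 1.0, 201)
u = np.exp(-200 * (x - 0.3) ** 2)   # initial Gaussian bump centered at 0.3
for _ in range(50):
    u = sl_step(u, x, a=0.5, dt=0.004)
# After 50 steps the bump has been transported by a*dt*50 = 0.1, to x = 0.4.
```

The freedom to take large time steps is the usual motivation for semi-Lagrangian schemes in transport-dominated problems such as pedestrian flow; the interpolation step introduces a small numerical diffusion.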
Optimal dividend policies with random profitability
We study an optimal dividend problem under a bankruptcy constraint. Firms
face a trade-off between potential bankruptcy and extraction of profits. In
contrast to previous works, general cash flow drifts, including
Ornstein--Uhlenbeck and CIR processes, are considered. We provide rigorous
proofs of continuity of the value function, whence the dynamic programming
principle, as well as a comparison principle between sub- and supersolutions of
the Hamilton--Jacobi--Bellman equation, and we provide an efficient and
convergent numerical scheme for finding the solution. The value function is
given by a nonlinear PDE with a gradient constraint from below in one
dimension. We find that the optimal strategy is both a barrier and a band
strategy and that it includes voluntary liquidation in parts of the state
space. Finally, we present and numerically study extensions of the model,
including equity issuance and credit lines.
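The barrier-type strategy discussed above can be illustrated by a Monte Carlo sketch under a constant cash flow drift, which is a simplifying assumption (the paper treats general drifts such as Ornstein--Uhlenbeck and CIR); all parameters below are hypothetical:

```python
import numpy as np

def barrier_value(x0, b, mu, sigma, r, dt=0.01, T=20.0, n_paths=2000, seed=0):
    """Monte Carlo value of a barrier dividend strategy: the reserve follows
    arithmetic Brownian motion, any excess above the barrier b is paid out as
    dividends, and the firm is bankrupt once the reserve hits zero."""
    rng = np.random.default_rng(seed)
    disc_div = np.zeros(n_paths)
    x = np.full(n_paths, float(x0))
    alive = np.ones(n_paths, dtype=bool)
    t = 0.0
    while t < T and alive.any():
        dx = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        x = np.where(alive, x + dx, x)      # freeze bankrupt paths
        excess = np.clip(x - b, 0.0, None)  # dividends: excess above barrier
        disc_div += np.exp(-r * t) * excess * alive
        x -= excess                          # reserve reflected at the barrier
        alive &= x > 0.0                     # bankruptcy at zero reserve
        t += dt
    return disc_div.mean()
```

Comparing such values across barrier levels is one crude way to see the trade-off between profit extraction and bankruptcy risk that the paper resolves rigorously via the HJB equation.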
Operator Methods, Abelian Processes and Dynamic Conditioning
A mathematical framework for Continuous Time Finance based on operator algebraic
methods offers a new, direct, and entirely constructive perspective on the field. It also
leads to new numerical analysis techniques which can take advantage of the emerging massively parallel GPU architectures, which are uniquely suited to executing large matrix manipulations.
This is partly a review paper, as it covers and expands on the mathematical framework underlying a series of more applied articles. In addition, this article also presents a few key new theorems that make the treatment self-contained. Stochastic processes with continuous time and continuous space variables are defined constructively by establishing new convergence estimates for Markov chains on simplicial sequences. We emphasize high-precision computability by numerical linear algebra methods as opposed to the ability to arrive at analytically closed-form expressions in terms of special functions. Path-dependent processes adapted to a given Markov filtration are associated to an operator algebra. If this algebra is commutative, the corresponding process is named Abelian, a concept which provides a far-reaching extension of the notion of stochastic integral. We recover the classic Cameron-Dyson-Feynman-Girsanov-Ito-Kac-Martin theorem as a particular case of a broadly general block-diagonalization algorithm. This technique has many applications, ranging from the problem of pricing cliquets to target-redemption notes and volatility derivatives. Non-Abelian processes are also relevant and appear in several important applications, for instance snowballs and soft calls. We show that in these cases one can effectively use block-factorization algorithms. Finally, we discuss the method of dynamic conditioning, which allows one to dynamically correlate possibly even hundreds of processes in a numerically noiseless framework while preserving marginal distributions.
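The matrix-based Feynman-Kac viewpoint behind such operator methods can be sketched on a small chain. The birth-death generator below is a hypothetical stand-in for the paper's simplicial constructions:

```python
import numpy as np
from scipy.linalg import expm

# Build a generator matrix Q for a reflecting birth-death chain on 5 states.
# In the operator-method view, conditional expectations E[f(X_T) | X_0 = i]
# are obtained by applying the matrix exponential of Q*T to the payoff
# vector -- Feynman-Kac in matrix form, well suited to dense linear algebra.
n = 5
Q = np.zeros((n, n))
for i in range(n):
    if i > 0:
        Q[i, i - 1] = 1.0       # assumed unit down-rate
    if i < n - 1:
        Q[i, i + 1] = 1.0       # assumed unit up-rate
    Q[i, i] = -Q[i].sum()       # rows of a generator sum to zero

payoff = np.arange(n, dtype=float)   # toy payoff f(x) = x on states 0..4
price = expm(Q * 1.0) @ payoff       # E[f(X_1) | X_0 = i], one entry per state
```

Computing prices for all initial states at once via one matrix exponential is the kind of large, regular matrix operation the article argues maps well onto GPU architectures.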