    Dynkin games with Poisson random intervention times

    This paper introduces a new class of Dynkin games in which the two players are allowed to make their stopping decisions only at a sequence of exogenous Poisson arrival times. The value function and the associated optimal stopping strategy are characterized by the solution of a backward stochastic differential equation. The paper further applies the model to study the optimal conversion and calling strategies of convertible bonds, and their asymptotics as the Poisson intensity goes to infinity.
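    As a rough sketch of how the Poisson constraint enters (the notation here is assumed, not taken from the paper): write $L \le U$ for the lower and upper payoff processes, $\xi$ for the terminal payoff at time $T$, and $\lambda$ for the intensity of the intervention times $T_1 < T_2 < \dots$. At the arrival times one expects the dynamic-programming recursion

        V_{T_k} \;=\; \min\Bigl( U_{T_k},\ \max\bigl( L_{T_k},\ \mathbb{E}[\, V_{T_{k+1}} \mid \mathcal{F}_{T_k}\,] \bigr) \Bigr),

    and, between arrivals, a penalized BSDE of the form

        Y_t \;=\; \xi \;+\; \int_t^T \lambda \Bigl( \min\bigl( U_s,\ \max(L_s,\, Y_s) \bigr) - Y_s \Bigr)\, ds \;-\; \int_t^T Z_s\, dW_s, \qquad 0 \le t \le T,

    is the natural candidate for the characterization mentioned in the abstract; the precise driver and filtration should be taken from the paper itself.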

    Modeling continuous-time financial markets with capital gains taxes

    We formulate a model of a continuous-time financial market consisting of a bank account with constant interest rate and one risky asset subject to capital gains taxes. We consider the problem of maximizing expected utility from future consumption over an infinite horizon. This is the continuous-time version of the model introduced by Dammon, Spatt and Zhang [11]. The taxation rule is linear, so it allows for tax credits when capital gains losses are experienced. In this context, wash sales are optimal. Our main contribution is to derive lower and upper bounds on the value function in terms of the corresponding value in a tax-free and frictionless model. While the upper bound corresponds to the value function in a tax-free model, the lower bound is a consequence of wash sales. As an important implication of these bounds, we derive an explicit first-order expansion of our value function for small interest rate and tax rate coefficients. To examine the accuracy of this approximation, we provide a characterization of the value function in terms of the associated dynamic programming equation, and we suggest a numerical approximation scheme based on finite differences and the Howard algorithm. The numerical results show that the first-order Taylor expansion is reasonably accurate for realistic market data.
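    The abstract's numerical scheme couples a finite-difference discretization of the dynamic programming equation with the Howard algorithm. The sketch below illustrates only the Howard (policy iteration) step on a generic discretized problem; the transition matrices P, rewards r, and discount factor beta are placeholder data, not quantities from the paper.

        import numpy as np

        def howard_policy_iteration(P, r, beta, max_iter=500):
            """Howard (policy) iteration for a discounted finite problem.

            P[a] : (n, n) transition matrix under action a
            r[a] : (n,)  reward vector under action a
            beta : discount factor in (0, 1)
            """
            n_actions, n_states = len(P), P[0].shape[0]
            policy = np.zeros(n_states, dtype=int)
            for _ in range(max_iter):
                # Policy evaluation: solve (I - beta * P_pi) v = r_pi exactly.
                P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
                r_pi = np.array([r[policy[s]][s] for s in range(n_states)])
                v = np.linalg.solve(np.eye(n_states) - beta * P_pi, r_pi)
                # Policy improvement: take the greedy action in every state.
                q = np.array([r[a] + beta * P[a] @ v for a in range(n_actions)])
                new_policy = q.argmax(axis=0)
                if np.array_equal(new_policy, policy):
                    return v, policy
                policy = new_policy
            return v, policy

        # Toy two-state, two-action data, chosen arbitrarily for illustration.
        P = [np.array([[0.9, 0.1], [0.2, 0.8]]), np.array([[0.5, 0.5], [0.6, 0.4]])]
        r = [np.array([1.0, 0.0]), np.array([0.5, 0.8])]
        v_opt, pi_opt = howard_policy_iteration(P, r, beta=0.95)

    In the paper's setting the discretized states would come from the finite-difference grid and the actions from the discretized controls; the iteration itself is unchanged.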

    Certified Reinforcement Learning with Logic Guidance

    This paper proposes the first model-free Reinforcement Learning (RL) framework to synthesise policies for unknown, continuous-state Markov Decision Processes (MDPs) such that a given linear temporal property is satisfied. We convert the given property into a Limit Deterministic Büchi Automaton (LDBA), namely a finite-state machine expressing the property. Exploiting the structure of the LDBA, we shape a synchronous reward function on the fly, so that an RL algorithm can synthesise a policy resulting in traces that probabilistically satisfy the linear temporal property. This probability (certificate) is also calculated in parallel with policy learning when the state space of the MDP is finite: as such, the RL algorithm produces a policy that is certified with respect to the property. Under the assumption of a finite state space, theoretical guarantees are provided on the convergence of the RL algorithm to an optimal policy maximising the above probability. We also show that our method produces "best available" control policies when the logical property cannot be satisfied. In the general case of a continuous state space, we propose a neural network architecture for RL and we empirically show that the algorithm finds satisfying policies, if such policies exist. The performance of the proposed framework is evaluated via a set of numerical examples and benchmarks, where we observe an improvement of one order of magnitude in the number of iterations required for policy synthesis, compared to existing approaches whenever available. (Comment: this article draws from arXiv:1801.08099 and arXiv:1809.0782.)
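    As a loose illustration of the product construction that this kind of reward shaping relies on (everything below, including the automaton and the reward rule, is a simplified toy and not the paper's LDBA construction): the agent learns on pairs (MDP state, automaton state) and is rewarded when the automaton makes progress towards acceptance.

        import random
        from collections import defaultdict

        # Toy MDP: states 0..3 on a line, actions 0 = left, 1 = right, with slip.
        N_STATES, ACTIONS = 4, (0, 1)

        def mdp_step(s, a):
            move = 1 if a == 1 else -1
            if random.random() < 0.1:            # slip: the action has no effect
                move = 0
            return min(max(s + move, 0), N_STATES - 1)

        # Hypothetical two-state automaton for "eventually visit state 3":
        # q = 0 is non-accepting, q = 1 is accepting and absorbing.
        def automaton_step(q, s_next):
            return 1 if (q == 1 or s_next == N_STATES - 1) else 0

        def q_learning(episodes=2000, horizon=30, alpha=0.1, gamma=0.95, eps=0.1):
            Q = defaultdict(float)               # keyed by ((s, q), a)
            for _ in range(episodes):
                s, q = 0, 0
                for _ in range(horizon):
                    a = random.choice(ACTIONS) if random.random() < eps else \
                        max(ACTIONS, key=lambda a_: Q[((s, q), a_)])
                    s2 = mdp_step(s, a)
                    q2 = automaton_step(q, s2)
                    r = 1.0 if (q2 == 1 and q == 0) else 0.0  # reward on acceptance
                    best_next = max(Q[((s2, q2), a_)] for a_ in ACTIONS)
                    Q[((s, q), a)] += alpha * (r + gamma * best_next - Q[((s, q), a)])
                    s, q = s2, q2
            return Q

    The paper's construction is richer (limit-deterministic Büchi acceptance and a certified satisfaction probability computed alongside learning); the toy above only shows the mechanical coupling between environment state and automaton state.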

    Optimal control of continuous-time Markov chains with noise-free observation

    We consider an infinite horizon optimal control problem for a continuous-time Markov chain $X$ taking values in a finite set $I$, with noise-free partial observation. The observation process is defined as $Y_t = h(X_t)$, $t \geq 0$, where $h$ is a given map defined on $I$. The observation is noise-free in the sense that the only source of randomness is the process $X$ itself. The aim is to minimize a discounted cost functional and study the associated value function $V$. After transforming the control problem with partial observation into one with complete observation (the separated problem) using filtering equations, we provide a link between the value function $v$ associated with the latter control problem and the original value function $V$. We then present two different characterizations of $v$ (and indirectly of $V$): on the one hand as the unique fixed point of a suitably defined contraction mapping, and on the other hand as the unique constrained viscosity solution (in the sense of Soner) of an HJB integro-differential equation. Under suitable assumptions, we finally prove the existence of an optimal control.
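    A minimal discrete-time sketch of the filtering step that underlies the separated problem, assuming an illustrative generator Q, observation map h, and step size dt (none of these are from the paper): because the observation is noise-free, the conditional law of the chain is propagated through the dynamics and then restricted to the level set $h^{-1}(y)$ of the current observation.

        import numpy as np

        # Hypothetical 3-state chain with generator Q; states 0 and 1 share the
        # same observation label, so they are indistinguishable through h.
        Q = np.array([[-1.0, 0.7, 0.3],
                      [0.5, -1.2, 0.7],
                      [0.2, 0.8, -1.0]])
        h = np.array([0, 0, 1])              # observation label of each state
        dt = 0.01
        P = np.eye(3) + Q * dt               # one-step transition probabilities

        def filter_step(pi, y):
            """One predict/correct step of the noise-free filter."""
            pred = pi @ P                    # prediction through the chain dynamics
            pred[h != y] = 0.0               # correction: restrict to h^{-1}(y)
            return pred / pred.sum()

        rng = np.random.default_rng(0)
        x, pi = 0, np.array([1.0, 0.0, 0.0])
        for _ in range(500):                 # simulate the chain and run the filter
            x = rng.choice(3, p=P[x])
            pi = filter_step(pi, h[x])
        print(pi)                            # conditional law given the observations

    The value function $v$ of the separated problem is then a function of such a conditional law, and the contraction-mapping characterization in the abstract can be read as a fixed-point statement about the corresponding Bellman operator.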