Markovian Nash equilibrium in financial markets with asymmetric information and related forward-backward systems
This paper develops a new methodology for studying continuous-time Nash
equilibrium in a financial market with asymmetrically informed agents. This
approach allows us to lift the restriction of risk neutrality imposed on market
makers by the current literature. It turns out that, when the market makers are
risk averse, the optimal strategies of the agents are solutions of a
forward-backward system of partial and stochastic differential equations. In
particular, the price set by the market makers solves a nonstandard "quadratic"
backward stochastic differential equation. The main result of the paper is the
existence of a Markovian solution to this forward-backward system on an
arbitrary time interval, which is obtained via a fixed-point argument on the
space of absolutely continuous distribution functions. Moreover, the
equilibrium obtained in this paper is able to explain several stylized facts
which are not captured by the current asymmetric information models.
Comment: Published at http://dx.doi.org/10.1214/15-AAP1138 in the Annals of
Applied Probability (http://www.imstat.org/aap/) by the Institute of
Mathematical Statistics (http://www.imstat.org/)
Computation Alignment: Capacity Approximation without Noise Accumulation
Consider several source nodes communicating across a wireless network to a
destination node with the help of several layers of relay nodes. Recent work by
Avestimehr et al. has approximated the capacity of this network up to an
additive gap. The communication scheme achieving this capacity approximation is
based on compress-and-forward, resulting in noise accumulation as the messages
traverse the network. As a consequence, the approximation gap increases
linearly with the network depth.
This paper develops a computation alignment strategy that can approach the
capacity of a class of layered, time-varying wireless relay networks up to an
approximation gap that is independent of the network depth. This strategy is
based on the compute-and-forward framework, which enables relays to decode
deterministic functions of the transmitted messages. Alone, compute-and-forward
is insufficient to approach the capacity as it incurs a penalty for
approximating the wireless channel with complex-valued coefficients by a
channel with integer coefficients. Here, this penalty is circumvented by
carefully matching channel realizations across time slots to create
integer-valued effective channels that are well-suited to compute-and-forward.
Unlike prior constant gap results, the approximation gap obtained in this paper
also depends closely on the fading statistics, which are assumed to be i.i.d.
Rayleigh.
Comment: 36 pages, to appear in IEEE Transactions on Information Theory
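The integer-approximation penalty described above can be made concrete with the Nazer-Gastpar computation rate. The sketch below uses the real-valued version of that rate formula at a fixed SNR; the function names and the brute-force search over small coefficient vectors are illustrative choices, not part of the paper:

```python
# Illustrative sketch: the real-valued Nazer-Gastpar computation rate,
# showing the penalty compute-and-forward pays when the channel vector h
# is not close to any integer coefficient vector a.
import itertools
import math

def computation_rate(h, a, snr):
    """Rate at which a relay can decode the integer combination a of the
    transmitted lattice codewords over channel h (real-valued formula)."""
    norm_a = sum(ai * ai for ai in a)
    inner = sum(hi * ai for hi, ai in zip(h, a))
    norm_h = sum(hi * hi for hi in h)
    # Effective noise after MMSE scaling; strictly positive for nonzero a.
    denom = norm_a - snr * inner * inner / (1.0 + snr * norm_h)
    return max(0.0, 0.5 * math.log2(1.0 / denom))

def best_rate(h, snr, max_coeff=3):
    """Best computation rate over small nonzero integer coefficient vectors."""
    coeffs = range(-max_coeff, max_coeff + 1)
    return max(
        computation_rate(h, a, snr)
        for a in itertools.product(coeffs, repeat=len(h))
        if any(a)
    )

# An exactly integer channel can be matched perfectly by a = h, while a
# nearby non-integer channel caps the rate well below that.
print(best_rate([1.0, 2.0], snr=100.0))   # no approximation penalty
print(best_rate([1.0, 1.73], snr=100.0))  # penalty from rounding h to integers
```

Computation alignment sidesteps exactly this penalty by pairing time slots whose fades combine into integer-valued effective channels.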
Decision Problems for Petri Nets with Names
We prove several decidability and undecidability results for nu-PN, an
extension of P/T nets with pure name creation and name management. We give a
simple proof of undecidability of reachability, by reducing reachability in
nets with inhibitor arcs to it. Thus, the expressive power of nu-PN strictly
surpasses that of P/T nets. We prove that nu-PN are Well Structured Transition
Systems. In particular, we obtain decidability of coverability and termination,
so that the expressive power of Turing machines is not reached. Moreover, they
are strictly Well Structured, so that the boundedness problem is also
decidable. We consider two properties, width-boundedness and depth-boundedness,
that factorize boundedness. Width-boundedness has already been proven to be
decidable. We prove here undecidability of depth-boundedness. Finally, we
obtain Ackermann-hardness results for all our decidable decision problems.
Comment: 20 pages, 7 figures
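The decidability of coverability that the Well Structured Transition System property buys comes from the generic backward-coverability scheme. A minimal sketch for plain P/T nets follows (the nu-PN case layers name management on top of the same scheme; the marking encoding as tuples of token counts is an illustrative choice):

```python
# Sketch: generic WSTS backward-coverability, shown for plain P/T nets.
# A marking is a tuple of per-place token counts; a transition is a pair
# (pre, post) of vectors: firing from x (with x >= pre) yields x - pre + post.

def pred_basis(m, pre, post):
    """Minimal marking from which firing (pre, post) covers m:
    componentwise max(m - post + pre, pre)."""
    return tuple(max(mi - q + p, p) for mi, p, q in zip(m, pre, post))

def leq(a, b):
    return all(x <= y for x, y in zip(a, b))

def minimize(markings):
    """Keep only minimal elements: a finite basis of the upward-closed set."""
    return {m for m in markings
            if not any(leq(n, m) and n != m for n in markings)}

def coverable(init, target, transitions):
    """Decide whether some marking >= target is reachable from init.
    The loop terminates by Dickson's lemma (tuples of naturals are a wqo)."""
    basis = {tuple(target)}
    while True:
        new = set(basis)
        for m in basis:
            for pre, post in transitions:
                new.add(pred_basis(m, pre, post))
        new = minimize(new)
        if new == basis:
            break
        basis = new
    return any(leq(m, tuple(init)) for m in basis)

# One transition: consume a token from place 0, produce two in place 1.
t = [((1, 0), (0, 2))]
print(coverable((1, 0), (0, 2), t))  # two tokens in place 1 are coverable
print(coverable((1, 0), (0, 3), t))  # three are not
```

The "strictly Well Structured" property mentioned above is what additionally makes boundedness decidable; this sketch covers only the coverability half.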
Tracking Stopping Times Through Noisy Observations
A novel quickest detection setting is proposed which is a generalization of
the well-known Bayesian change-point detection model. Suppose
\{(X_i,Y_i)\}_{i\geq 1} is a sequence of pairs of random variables, and that S
is a stopping time with respect to \{X_i\}_{i\geq 1}. The problem is to find a
stopping time T with respect to \{Y_i\}_{i\geq 1} that optimally tracks S, in
the sense that T minimizes the expected reaction delay E(T-S)^+, while keeping
the false-alarm probability P(T<S) below a given threshold \alpha \in [0,1].
This problem formulation applies in several areas, such as in communication,
detection, forecasting, and quality control.
Our results relate to the situation where the X_i's and Y_i's take values in
finite alphabets and where S is bounded by some positive integer \kappa. By
using elementary methods based on the analysis of the tree structure of
stopping times, we exhibit an algorithm that computes the optimal average
reaction delays for all \alpha \in [0,1], and constructs the associated optimal
stopping times T. Under certain conditions on \{(X_i,Y_i)\}_{i\geq 1} and S,
the algorithm running time is polynomial in \kappa.
Comment: 19 pages, 4 figures, to appear in IEEE Transactions on Information
Theory
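The delay/false-alarm tradeoff can be seen on a toy instance. The sketch below is not the paper's tree-based algorithm: it tracks S = first success of a Bernoulli sequence (capped at \kappa) with a simple posterior-threshold rule, computing the posterior by brute-force enumeration over X sequences; all constants are illustrative assumptions.

```python
# Toy illustration (not the paper's algorithm): track the stopping time
# S = first success in a Bernoulli sequence X, capped at KAPPA, through
# noisy observations Y.  T stops once P(S <= n | Y^n) crosses a threshold,
# trading reaction delay E(T-S)^+ against false-alarm probability P(T < S).
import itertools
import random

KAPPA, P_SUCCESS, FLIP = 5, 0.4, 0.2

def stop_time(x):
    """S = first index i (1-based) with x_i = 1, capped at KAPPA."""
    return next((i + 1 for i, xi in enumerate(x) if xi), KAPPA)

def prior(x):
    p = 1.0
    for xi in x:
        p *= P_SUCCESS if xi else 1.0 - P_SUCCESS
    return p

def likelihood(y, x):
    p = 1.0
    for yi, xi in zip(y, x):
        p *= 1.0 - FLIP if yi == xi else FLIP
    return p

def tracker(y, threshold):
    """First n with posterior P(S <= n | y_1..y_n) >= threshold, else KAPPA."""
    seqs = list(itertools.product([0, 1], repeat=KAPPA))
    for n in range(1, KAPPA + 1):
        num = den = 0.0
        for x in seqs:  # brute-force posterior over all X sequences
            w = prior(x) * likelihood(y[:n], x[:n])
            den += w
            if stop_time(x) <= n:
                num += w
        if num / den >= threshold:
            return n
    return KAPPA

def simulate(threshold, trials=1000, seed=0):
    """Monte Carlo estimates of (E(T-S)^+, P(T < S))."""
    rng = random.Random(seed)
    delay = alarms = 0
    for _ in range(trials):
        x = [int(rng.random() < P_SUCCESS) for _ in range(KAPPA)]
        y = [xi ^ int(rng.random() < FLIP) for xi in x]
        s, t = stop_time(x), tracker(y, threshold)
        delay += max(t - s, 0)
        alarms += int(t < s)
    return delay / trials, alarms / trials

# A higher threshold stops later: fewer false alarms, more delay.
print(simulate(0.5))
print(simulate(0.9))
```

Raising the threshold can only postpone T for any fixed observation sequence, which is why the tradeoff is monotone here; the paper's algorithm instead computes the exact optimal frontier over all stopping times on Y.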
Universal Estimation of Directed Information
Four estimators of the directed information rate between a pair of jointly
stationary ergodic finite-alphabet processes are proposed, based on universal
probability assignments. The first one is a Shannon--McMillan--Breiman type
estimator, similar to those used by Verd\'u (2005) and Cai, Kulkarni, and
Verd\'u (2006) for estimation of other information measures. We show the almost
sure and L_1 convergence properties of the estimator for any underlying
universal probability assignment. The other three estimators map universal
probability assignments to different functionals, each exhibiting relative
merits such as smoothness, nonnegativity, and boundedness. We establish the
consistency of these estimators in almost sure and L_1 senses, and derive
near-optimal rates of convergence in the minimax sense under mild conditions.
These estimators carry over directly to estimating other information measures
of stationary ergodic finite-alphabet processes, such as entropy rate and
mutual information rate, with near-optimal performance and provide alternatives
to classical approaches in the existing literature. Guided by these theoretical
results, the proposed estimators are implemented using the context-tree
weighting algorithm as the universal probability assignment. Experiments on
synthetic and real data are presented, demonstrating the potential of the
proposed schemes in practice and the utility of directed information estimation
in detecting and measuring causal influence and delay.
Comment: 23 pages, 10 figures, to appear in IEEE Transactions on Information
Theory
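As a contrast to the universal (context-tree weighting) estimators above, a naive first-order plug-in estimator shows directed information estimation in its simplest form; the function names and the order-1 context choice are illustrative assumptions, not the paper's construction:

```python
# Naive plug-in sketch (not the paper's CTW-based estimators): estimate the
# directed information rate I(X -> Y) for binary sequences from first-order
# empirical counts:  H(Y_t | Y_{t-1}) - H(Y_t | Y_{t-1}, X_t).
from collections import Counter
import math
import random

def cond_entropy(pairs):
    """Empirical H(target | context) in bits from (context, target) pairs."""
    joint = Counter(pairs)
    ctx = Counter(c for c, _ in pairs)
    n = len(pairs)
    return -sum(k / n * math.log2(k / ctx[c]) for (c, _), k in joint.items())

def directed_info_rate(x, y):
    """Plug-in I(X -> Y) rate estimate; nonnegative because conditioning on
    more context never raises empirical conditional entropy."""
    h_y = cond_entropy([(y[t - 1], y[t]) for t in range(1, len(y))])
    h_y_given_x = cond_entropy(
        [((y[t - 1], x[t]), y[t]) for t in range(1, len(y))]
    )
    return h_y - h_y_given_x

rng = random.Random(1)
x = [rng.randint(0, 1) for _ in range(5000)]
noise = [rng.randint(0, 1) for _ in range(5000)]
r_copy = directed_info_rate(x, x)       # Y copies X: about 1 bit per symbol
r_indep = directed_info_rate(x, noise)  # independent sequences: near 0
print(r_copy, r_indep)
```

The universal estimators in the paper replace these raw counts with CTW probability assignments, which is what yields consistency for general stationary ergodic processes rather than fixed-order Markov ones.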
Lower Bounds for Symbolic Computation on Graphs: Strongly Connected Components, Liveness, Safety, and Diameter
Symbolic algorithms are a model of computation that is widely used in the formal
analysis of reactive systems. In this model the access to the input graph is
restricted to consist of symbolic operations, which are expensive in comparison
to the standard RAM operations. We give lower bounds on the number of symbolic
operations for basic graph problems such as the computation of the strongly
connected components and of the approximate diameter as well as for fundamental
problems in model checking such as safety, liveness, and co-liveness. Our lower
bounds are linear in the number of vertices of the graph, even for
constant-diameter graphs. For none of these problems were lower bounds on the
number of symbolic operations known before. The lower bounds show an interesting
separation of these problems from the reachability problem, which can be solved
with O(D) symbolic operations, where D is the diameter of the graph.
Additionally we present an approximation algorithm for the graph diameter
which requires O(n \sqrt{D}) symbolic steps to achieve a
(1+\epsilon)-approximation for any constant \epsilon > 0. This compares to
O(n \cdot D) symbolic steps for the (naive) exact algorithm and O(D)
symbolic steps for a 2-approximation. Finally we also give a refined analysis
of the strongly connected components algorithm of Gentilini et al., showing
that it uses an optimal number of symbolic steps that is proportional to the
sum of the diameters of the strongly connected components.
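To make the cost model concrete, here is a minimal sketch of the symbolic setting under an assumed toy interface (the real model operates on BDD-like set representations): the algorithm may touch the graph only through set-valued one-step operations, each charged as one symbolic step, and forward reachability needs one such step per BFS level, i.e. on the order of the diameter.

```python
# Toy sketch of the symbolic cost model (the interface is an assumption, not
# the paper's): the input graph is accessible only via set-valued one-step
# operations, each charged as one symbolic step.

class SymbolicGraph:
    def __init__(self, edges):
        self.edges = set(edges)
        self.steps = 0  # symbolic operations charged so far

    def post(self, s):
        """One symbolic step: the set of successors of the vertex set s."""
        self.steps += 1
        return {v for u, v in self.edges if u in s}

def reach(g, source):
    """Forward reachability: one Post per BFS level, hence O(D) symbolic
    steps on a graph of diameter D -- the cheap baseline that the linear
    lower bounds separate SCCs, safety, and liveness from."""
    r = {source}
    while True:
        nxt = r | g.post(r)
        if nxt == r:
            return r
        r = nxt

g = SymbolicGraph([(0, 1), (1, 2), (2, 3)])
print(reach(g, 0), g.steps)  # path of diameter 3 -> 4 symbolic steps
```

On a constant-diameter graph this loop finishes in O(1) symbolic steps, which is exactly why the Omega(n) lower bounds for the other problems are a genuine separation.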