A central limit theorem for temporally non-homogenous Markov chains with applications to dynamic programming
We prove a central limit theorem for a class of additive processes that arise
naturally in the theory of finite horizon Markov decision problems. The main
theorem generalizes a classic result of Dobrushin (1956) for temporally
non-homogeneous Markov chains, and the principal innovation is that here the
summands are permitted to depend on both the current state and a bounded number
of future states of the chain. We show through several examples that this added
flexibility gives one a direct path to asymptotic normality of the optimal
total reward of finite horizon Markov decision problems. The same examples also
explain why such results are not easily obtained by alternative Markovian
techniques such as enlargement of the state space. (27 pages, 1 figure)
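As a back-of-the-envelope illustration of the kind of statement proved here, the sketch below (with a hypothetical time-varying kernel and reward, not those of the paper) simulates an additive functional of a temporally non-homogeneous two-state chain whose summands depend on the current state and one future state, and checks that the standardized totals look approximately Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, t, n, rng):
    # Temporally non-homogeneous chain: the "stay" probability drifts
    # with time (a hypothetical kernel chosen purely for illustration).
    p_stay = 0.3 + 0.4 * t / n
    return x if rng.random() < p_stay else 1 - x

def additive_functional(n, rng):
    # The summand g(X_t, X_{t+1}) depends on the current state AND one
    # future state -- the extra flexibility the theorem permits.
    x, total = 0, 0.0
    for t in range(n):
        x_next = step(x, t, n, rng)
        total += (x + 1) * (x_next - 0.5)   # hypothetical reward
        x = x_next
    return total

n, reps = 500, 1000
samples = np.array([additive_functional(n, rng) for _ in range(reps)])
z = (samples - samples.mean()) / samples.std()
frac_central = np.mean(np.abs(z) < 1.96)   # near 0.95 if approximately normal
```

Under the central limit behavior the theorem describes, roughly 95% of the standardized totals should fall inside the interval (−1.96, 1.96).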
Twitter event networks and the Superstar model
Condensation phenomenon is often observed in social networks such as Twitter
where one "superstar" vertex gains a positive fraction of the edges, while the
remaining empirical degree distribution still exhibits a power law tail. We
formulate a mathematically tractable model for this phenomenon that provides a
better fit to empirical data than the standard preferential attachment model
across an array of networks observed in Twitter. Using embeddings in an
equivalent continuous time version of the process, and adapting techniques from
the stable age-distribution theory of branching processes, we prove limit
results for the proportion of edges that condense around the superstar, the
degree distribution of the remaining vertices, maximal nonsuperstar degree
asymptotics, and the height of these random trees in the large-network limit. Published at http://dx.doi.org/10.1214/14-AAP1053 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
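The condensation around the superstar is easy to see in a toy simulation. The attachment rule below (probability p of attaching to the superstar, otherwise preferential attachment among the remaining vertices) is a simplified stand-in for the paper's model; under it, the fraction of edges that condense around the superstar settles near p:

```python
import numpy as np

def superstar_edge_fraction(n, p, seed=1):
    # Grow a random tree on n vertices; vertex 0 is the superstar.
    # Each new vertex attaches to the superstar with probability p and
    # otherwise to a non-superstar vertex chosen proportionally to degree
    # (a simplified stand-in for the model analyzed in the paper).
    rng = np.random.default_rng(seed)
    superstar_deg = 1          # the initial edge 0-1
    pool = [1]                 # non-superstar endpoints, listed once per degree
    for v in range(2, n):
        if rng.random() < p:
            superstar_deg += 1
        else:
            pool.append(pool[rng.integers(len(pool))])  # preferential choice
        pool.append(v)         # the newcomer enters with degree 1
    return superstar_deg / (n - 1)   # fraction of all edges at the superstar

frac = superstar_edge_fraction(50_000, 0.5)
```

For n = 50,000 and p = 0.5 the observed fraction concentrates tightly around 0.5, in line with the limit result for the proportion of condensed edges.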
Quickest Online Selection of an Increasing Subsequence of Specified Size
Given a sequence of independent random variables with a common continuous
distribution, we consider the online decision problem where one seeks to
minimize the expected value of the time that is needed to complete the
selection of a monotone increasing subsequence of a prespecified length.
This problem is dual to some online decision problems that have been considered
earlier, and this dual problem has some notable advantages. In particular, the
recursions and equations of optimality lead with relative ease to asymptotic
formulas for the mean and variance of the minimal selection time. (17 pages)
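For a rough feel for the objective, one can simulate a naive greedy heuristic (an illustrative assumption, not the optimal policy obtained from the paper's recursions): with j selections still needed and last accepted value y, accept any observation falling in (y, y + (1 − y)/j], so the remaining range is split evenly among the remaining selections.

```python
import numpy as np

def selection_time(k, rng):
    # Time needed to build an increasing subsequence of length k from
    # i.i.d. Uniform(0,1) observations under a greedy even-split heuristic.
    y, j, t = 0.0, k, 0
    while j > 0:
        t += 1
        x = rng.random()
        if y < x <= y + (1.0 - y) / j:   # accept: splits (y, 1] evenly
            y, j = x, j - 1
    return t

rng = np.random.default_rng(2)
mean_time = np.mean([selection_time(5, rng) for _ in range(4000)])
```

The heuristic's expected completion time is finite and easy to estimate by Monte Carlo; the paper's optimality recursions characterize how much better the truly optimal policy can do.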
Optimal Online Selection of a Monotone Subsequence: a Central Limit Theorem
Consider a sequence of independent random variables with a common
continuous distribution, and consider the task of choosing an increasing
subsequence where the observations are revealed sequentially and where an
observation must be accepted or rejected when it is first revealed. There is a
unique selection policy that is optimal in the sense that it
maximizes the expected number of selected observations. We investigate the
distribution of this number; in particular, we obtain a central limit theorem
for it and a detailed understanding of its mean and variance for large sample
sizes. Our results and methods are complementary to the work of Bruss and
Delbaen (2004), where an analogous central limit theorem is found for monotone
increasing selections from a finite sequence whose cardinality is a Poisson
random variable that is independent of the sequence. (26 pages)
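For intuition about the scale of the selection count, the sketch below runs a simple adaptive-threshold heuristic (an assumed stand-in, not the paper's optimal policy): accept the next value x if it exceeds the last selection y by at most min(1 − y, sqrt(2/m)), where m is the number of observations still to come. Its mean selection count grows on the order of the square root of the sample size, the same scale as the optimal mean:

```python
import numpy as np

def selections(n, rng):
    # Number of items an adaptive-threshold heuristic selects from n
    # i.i.d. Uniform(0,1) observations revealed one at a time.
    y, count = 0.0, 0
    for m in range(n, 0, -1):            # m = observations still to come
        x = rng.random()
        delta = min(1.0 - y, np.sqrt(2.0 / m))
        if y < x <= y + delta:           # accept small increments only
            y, count = x, count + 1
    return count

rng = np.random.default_rng(3)
n = 10_000
mean_count = np.mean([selections(n, rng) for _ in range(100)])
```

The square-root scale arises because accepting increments of size about delta yields roughly n·delta acceptances but only about 1/delta selections fit below the ceiling of 1, and balancing the two gives delta of order n^(−1/2).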
The Bruss-Robertson Inequality: Elaborations, Extensions, and Applications
The Bruss-Robertson inequality gives a bound on the maximal number of elements of a random sample whose sum is less than a specified value, and the extension of that inequality which is given here neither requires the independence of the summands nor requires the equality of their marginal distributions. A review is also given of the applications of the Bruss-Robertson inequality, especially the applications to problems of combinatorial optimization such as the sequential knapsack problem and the sequential monotone subsequence selection problem.
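The quantity the inequality bounds, the maximal number of sample elements whose sum stays below a budget s, is computed exactly by a greedy rule: sort the sample in ascending order and take the longest feasible prefix. A minimal sketch (the function name is ours, chosen for illustration):

```python
def max_count_under_budget(xs, s):
    # N(s): the largest k such that some k of the sample values sum to <= s.
    # The k smallest values always attain it, so a sorted prefix scan works.
    total, k = 0.0, 0
    for x in sorted(xs):
        if total + x > s:
            break
        total += x
        k += 1
    return k

k = max_count_under_budget([0.5, 0.2, 0.9, 0.1], 0.8)   # picks 0.1, 0.2, 0.5
```

This is the random variable whose expectation the Bruss-Robertson inequality controls, and it is exactly the relaxation that makes the sequential knapsack and monotone subsequence applications tractable.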
Fisher Information and Detection of a Euclidean Perturbation of an Independent Stationary Process
An independent stationary process {X_i : i ≥ 1} in ℝ^n is perturbed by a sequence of Euclidean motions to obtain a new process {Y_i : i ≥ 1}. Criteria are given for the singularity or equivalence of these processes. When the distribution of the X process has finite Fisher information, the criteria are necessary and sufficient. Moreover, it is proved that it is exactly under the condition of finite Fisher information that the criteria are necessary and sufficient.
Families of Sample Means Converge Slowly
The uniform empirical integral differences of Sethuraman's large deviation theorem are proved to converge arbitrarily slowly.
Growth Rates of Euclidean Minimal Spanning Trees With Power Weighted Edges
Let X_i, 1 ≤ i < ∞, denote independent random variables with values in R^d, d ≥ 2, and let M_n denote the cost of a minimal spanning tree of a complete graph with vertex set {X_1, X_2, . . . , X_n}, where the cost of an edge (X_i, X_j) is given by ψ(|X_i − X_j|). Here |X_i − X_j| denotes the Euclidean distance between X_i and X_j and ψ is a monotone function. For bounded random variables and 0 < a < d, it is proved that as n → ∞ one has M_n ~ c(a, d) n^{(d−a)/d} ∫_{R^d} f(x)^{(d−a)/d} dx with probability 1, provided ψ(x) ~ x^a as x → 0. Here f(x) is the density of the absolutely continuous part of the distribution of the {X_i}.
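The growth rate is easy to probe numerically. The sketch below draws n uniform points in the unit square (so d = 2, f ≡ 1, and the density integral equals 1), computes the MST cost with Prim's algorithm for the power weight a = 1, and normalizes by n^{(d−a)/d}; the ratio should stabilize near the constant c(a, d). The implementation is a generic Prim sketch, not code from the paper:

```python
import numpy as np

def mst_cost(points, a=1.0):
    # Prim's algorithm on the complete graph with edge weight |x_i - x_j|^a.
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1) ** a
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = d[0].copy()       # cheapest connection of each vertex to the tree
    total = 0.0
    for _ in range(n - 1):
        best[in_tree] = np.inf
        j = int(np.argmin(best))   # closest vertex outside the tree
        total += best[j]
        in_tree[j] = True
        best = np.minimum(best, d[j])
    return total

rng = np.random.default_rng(4)
n, dim, a = 1500, 2, 1.0
pts = rng.random((n, dim))
ratio = mst_cost(pts, a) / n ** ((dim - a) / dim)   # ≈ c(a, d) for large n
```

Repeating this for increasing n shows the normalized cost flattening out, which is the almost-sure convergence the theorem asserts.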
Optimal Triangulation of Random Samples in the Plane
Let T_n denote the length of the minimal triangulation of n points chosen independently and uniformly from the unit square. It is proved that T_n/√n converges almost surely to a positive constant. This settles a conjecture of György Turán.
An Efron-Stein Inequality for Nonsymmetric Statistics
If S(x_1, x_2, …, x_n) is any function of n variables and if X_i, X̂_i, 1 ≤ i ≤ n, are 2n i.i.d. random variables, then var S ≤ ½ E Σ_{i=1}^{n} (S − S_i)², where S = S(X_1, X_2, …, X_n) and S_i is given by replacing the i-th observation with X̂_i, so S_i = S(X_1, X_2, …, X̂_i, …, X_n). This is applied to sharpen known variance bounds in the longest common subsequence problem.
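The inequality is easy to verify by Monte Carlo for a concrete statistic; here S is the maximum of n uniforms (a simple choice for illustration, since the bound requires no symmetry of S):

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 10, 20_000
S = np.max    # the statistic; the bound holds for any function of n variables

var_samples, bound_samples = [], []
for _ in range(reps):
    x = rng.random(n)
    xhat = rng.random(n)          # the independent copies X̂_1, ..., X̂_n
    s = S(x)
    var_samples.append(s)
    sq = 0.0
    for i in range(n):
        xi = x.copy()
        xi[i] = xhat[i]           # S_i: i-th observation replaced by X̂_i
        sq += (s - S(xi)) ** 2
    bound_samples.append(0.5 * sq)

var_S = np.var(var_samples)       # left side of the inequality
bound = np.mean(bound_samples)    # right side, ½ E Σ (S - S_i)²
```

For the maximum the bound is not tight (equality holds for sums), so the Monte Carlo estimate of the right side comfortably exceeds the sample variance.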