
On Estimation and Optimization of Probability

By Xinjia Chen

Abstract

In this paper, we develop a general approach for probabilistic estimation and optimization. An explicit formula is derived for controlling the reliability of probabilistic estimation based on a mixed criterion of absolute and relative errors. By employing the Chernoff bound and the concept of sampling, the minimization of a probabilistic function is transformed into an optimization problem amenable to gradient descent algorithms.

1 Estimation of Probability

It is a ubiquitous problem to estimate the probability of an event. Such a probability can be interpreted as the expectation, E[X], of a Bernoulli random variable X. More generally, if X is a random variable bounded in the interval [0,1] with mean E[X] = µ ∈ (0,1), we can draw n i.i.d. samples X1, · · · , Xn of X and estimate µ as $\hat{\mu} = \frac{\sum_{i=1}^{n} X_i}{n}$. Since $\hat{\mu}$ is of a random nature, it is crucial to control the statistical error. For this purpose, we have

Theorem 1. Let $\varepsilon_a \in (0,1)$ and $\varepsilon_r \in (0,1)$ be real numbers such that $\frac{\varepsilon_a}{\varepsilon_r + \varepsilon_a} \leq \frac{1}{2}$. Let $\delta \in (0,1)$. Let $X_1, \ldots, X_n$ be i.i.d. random variables defined in probability space $(\Omega, \mathcal{F}, \Pr)$ such that $0 \leq X_i \leq 1$ and $\mathbb{E}[X_i] = \mu \in (0,1)$ for $i = 1, \ldots, n$. Let $\hat{\mu} = \frac{\sum_{i=1}^{n} X_i}{n}$. Then $\Pr\left\{ |\hat{\mu} - \mu| < \varepsilon_a \ \text{or}\ \left|\frac{\hat{\mu} - \mu}{\mu}\right| < \varepsilon_r \right\} \ldots$
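
To make the mixed error criterion above concrete, the following is a minimal simulation sketch in Python; it is not taken from the paper. It draws i.i.d. Bernoulli samples, forms the estimator $\hat{\mu} = \frac{\sum_{i=1}^{n} X_i}{n}$, and estimates how often the event $\{|\hat{\mu} - \mu| < \varepsilon_a \text{ or } |\hat{\mu} - \mu|/\mu < \varepsilon_r\}$ occurs. The helper name estimate_coverage and the values of µ, n, εa, εr, and the trial count are illustrative assumptions; the paper's explicit sample-size formula is not reproduced here.

# A minimal simulation sketch (not from the paper): draw i.i.d. Bernoulli
# samples, form mu_hat = (1/n) * sum(X_i), and empirically check how often
# the mixed absolute/relative error criterion
#     |mu_hat - mu| < eps_a   or   |mu_hat - mu| / mu < eps_r
# holds. All parameter values below are illustrative assumptions.
import random

def estimate_coverage(mu=0.3, n=1000, eps_a=0.02, eps_r=0.1,
                      trials=2000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # mu_hat = (sum_{i=1}^n X_i) / n with X_i ~ Bernoulli(mu)
        mu_hat = sum(1 if rng.random() < mu else 0 for _ in range(n)) / n
        err = abs(mu_hat - mu)
        # mixed criterion: absolute error OR relative error is small enough
        if err < eps_a or err / mu < eps_r:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    print("empirical coverage of the mixed criterion:", estimate_coverage())

Increasing n drives the empirical coverage toward 1; the reliability guarantee in Theorem 1 quantifies how large n must be so that this coverage exceeds 1 − δ.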
