
    A polynomial time approximation scheme for computing the supremum of Gaussian processes

    We give a polynomial time approximation scheme (PTAS) for computing the supremum of a Gaussian process. That is, given a finite set of vectors $V \subseteq \mathbb{R}^d$, we compute a $(1+\varepsilon)$-factor approximation to $\mathbb{E}_{X \leftarrow \mathcal{N}^d}[\sup_{v \in V} |\langle v, X \rangle|]$ deterministically in time $\operatorname{poly}(d) \cdot |V|^{O_{\varepsilon}(1)}$. Previously, only a constant-factor deterministic polynomial time approximation algorithm was known, due to the work of Ding, Lee and Peres [Ann. of Math. (2) 175 (2012) 1409-1471]. This answers an open question of Lee (2010) and Ding [Ann. Probab. 42 (2014) 464-496]. The study of suprema of Gaussian processes is of considerable importance in probability, with applications in functional analysis, convex geometry, and, in light of the recent breakthrough work of Ding, Lee and Peres [Ann. of Math. (2) 175 (2012) 1409-1471], random walks on finite graphs. As such, our result could be of use elsewhere. In particular, combined with the work of Ding [Ann. Probab. 42 (2014) 464-496], our result yields a PTAS for computing the cover time of bounded-degree graphs. Previously, such algorithms were known only for trees. Along the way, we also give an explicit oblivious estimator for semi-norms in Gaussian space with optimal query complexity. Our algorithm and its analysis are elementary in nature, using two classical comparison inequalities, Slepian's lemma and Kanter's lemma. Comment: Published at http://dx.doi.org/10.1214/13-AAP997 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
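    The expectation being approximated is easy to sanity-check numerically. The sketch below (plain NumPy; the function name and sample count are our own choices) is an ordinary randomized Monte Carlo baseline for $\mathbb{E}_{X}[\sup_{v \in V} |\langle v, X \rangle|]$; it is not the paper's deterministic PTAS, which instead relies on the comparison inequalities cited in the abstract.

        # Monte Carlo baseline for E_{X ~ N(0, I_d)}[ sup_{v in V} |<v, X>| ].
        # Illustrative only: the paper's contribution is a *deterministic*
        # (1 + eps)-approximation, which this randomized sketch does not implement.
        import numpy as np

        def mc_sup_gaussian(V: np.ndarray, n_samples: int = 100_000, seed: int = 0) -> float:
            """V has shape (|V|, d); returns the empirical mean of sup_v |<v, X>|."""
            rng = np.random.default_rng(seed)
            X = rng.standard_normal((n_samples, V.shape[1]))   # i.i.d. N(0, I_d) rows
            sups = np.abs(X @ V.T).max(axis=1)                 # sup over V of |<v, X>| per draw
            return float(sups.mean())

        # Example: V = standard basis of R^3, so the supremum is max_i |X_i|.
        print(mc_sup_gaussian(np.eye(3)))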

    Implications of Anticipated Regret and Endogenous Beliefs for Equilibrium Asset Prices: A Theoretical Framework

    This paper builds upon Suryanarayanan (2006a) and further investigates the implications for equilibrium asset pricing of the model of Anticipated Regret and endogenous beliefs based on the Savage (1951) Minmax Regret Criterion. A decision maker chooses an action with state-contingent consequences but cannot precisely assess the true probability distribution of the state. She distrusts her prior about the true distribution and surrounds it with a set of alternative but plausible probability distributions. The decision maker minimizes the worst expected regret over all plausible probability distributions and alternative actions, where regret is the loss experienced when the decision maker compares an action to a counterfactual feasible alternative for a given realization of the state. We first study the Merton portfolio problem and illustrate the effects of anticipated regret on the sensitivity of portfolio rules to asset returns. We then embed the model in a version of the Lucas (1978) economy. We characterize asset prices with distorted Euler equations and analyze the implications for the volatility and Euler pricing error puzzles.
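    Written out in generic notation (the action set $A$, state space $S$, utility $u$, and set $Q$ of plausible distributions are our own labels, not the paper's), the criterion described above takes the form:

        % Hedged sketch of the Minmax Regret criterion described in the abstract.
        \min_{a \in A} \; \max_{q \in Q,\; a' \in A} \;
          \mathbb{E}_{s \sim q}\big[ u(a', s) - u(a, s) \big]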

    Learning Graphical Models Using Multiplicative Weights

    We give a simple, multiplicative-weight update algorithm for learning undirected graphical models or Markov random fields (MRFs). The approach is new, and for the well-studied case of Ising models or Boltzmann machines, we obtain an algorithm that uses a nearly optimal number of samples and has quadratic running time (up to logarithmic factors), subsuming and improving on all prior work. Additionally, we give the first efficient algorithm for learning Ising models over general alphabets. Our main application is an algorithm for learning the structure of t-wise MRFs with nearly optimal sample complexity (up to polynomial losses in necessary terms that depend on the weights) and running time $n^{O(t)}$. In addition, given $n^{O(t)}$ samples, we can also learn the parameters of the model and generate a hypothesis that is close in statistical distance to the true MRF. All prior work runs in time $n^{\Omega(d)}$ for graphs of bounded degree d and does not generate a hypothesis close in statistical distance even for t=3. We observe that our runtime has the correct dependence on n and t assuming the hardness of learning sparse parities with noise. Our algorithm, the Sparsitron, is easy to implement (it has only one parameter) and works in the online setting. Its analysis applies a regret bound from Freund and Schapire's classic Hedge algorithm. It also gives the first solution to the problem of learning sparse Generalized Linear Models (GLMs).
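    To make the multiplicative-weight idea concrete, here is a rough Hedge-style sketch for a sparse generalized linear model with features in [-1, 1]; the per-coordinate loss, step size, and normalization are our own simplifications, so this should be read as an illustration of the update style rather than the Sparsitron as analyzed in the paper.

        # Hedge-style multiplicative-weights sketch for y ~ sigma(<w*, x>), x in [-1, 1]^n.
        # Illustrative only; the Sparsitron's exact loss, parameter, and guarantees are in the paper.
        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def hedge_glm(samples, n, scale=1.0, beta=0.9):
            """samples: iterable of (x, y) with x in [-1, 1]^n and y in [0, 1]."""
            w = np.ones(2 * n)                     # weights for +/- copies of each feature
            for x, y in samples:
                x2 = np.concatenate([x, -x])       # signed-coefficient trick
                p = sigmoid(scale * (w @ x2) / w.sum())
                loss = (1.0 + (p - y) * x2) / 2.0  # per-coordinate losses in [0, 1]
                w *= beta ** loss                  # multiplicative (Hedge) update
            v = scale * w / w.sum()
            return v[:n] - v[n:]                   # recover a signed weight vector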

    A Model of Anticipated Regret and Endogenous Beliefs

    This paper clarifies and extends the model of anticipated regret and endogenous beliefs based on the Savage (1951) Minmax Regret Criterion developed in Suryanarayanan (2006a). A decision maker chooses an action with state-contingent consequences but cannot precisely assess the true probability distribution of the state. She distrusts her prior about the true distribution and surrounds it with a set of alternative but plausible probability distributions. The decision maker minimizes the worst expected regret over all plausible probability distributions and alternative actions, where regret is the loss experienced when the decision maker compares an action to a counterfactual feasible alternative for a given realization of the state. Preliminary theoretical results provide a systematic algorithm for finding the solution to the decision problem and show how models of Minmax Regret differ from models of ambiguity aversion and expected utility. In particular, the solution to the decision problem can always be represented as a saddle point of an equivalent zero-sum game. This new problem jointly produces the solution to the Anticipated Regret problem and the endogenous belief. We then use the endogenous belief to define the implicit certainty equivalent and to build an infinite-horizon, time-consistent problem for a decision maker minimizing her lifetime worst expected regrets.
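    The saddle-point reading mentioned in the abstract can be sketched in the same generic notation as above (the payoff $R$ and the symbols $A$, $Q$, $u$ are our own labels, not the paper's): the decision maker picks the action, an adversary picks the plausible distribution, and the maximizing distribution is read as the endogenous belief.

        % Hedged sketch of the zero-sum game / saddle-point representation.
        R(a, q) \;=\; \max_{a' \in A} \mathbb{E}_{s \sim q}\big[ u(a', s) - u(a, s) \big],
        \qquad
        R(a^{*}, q) \;\le\; R(a^{*}, q^{*}) \;\le\; R(a, q^{*})
        \quad \text{for all } a \in A,\ q \in Q,

    where $(a^{*}, q^{*})$ denotes the saddle point and $q^{*}$ the endogenous belief.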