
    Quantitative estimates for the flux of TASEP with dilute site disorder

    We prove that the flux function of the totally asymmetric simple exclusion process (TASEP) with site disorder exhibits a flat segment for sufficiently dilute disorder. For high dilution, we obtain an accurate description of the flux. The result is established under a decay assumption on the maximum current in finite boxes, which is implied in particular by a sufficiently slow power-tail assumption on the disorder distribution near its minimum. To circumvent the absence of explicit invariant measures, we use an original renormalization procedure and some ideas inspired by homogenization.
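    The flux (stationary current) studied here can be illustrated with a toy Monte Carlo simulation. The sketch below is not the paper's renormalization argument: it simulates a TASEP on a ring with random-sequential updates, where a dilute fraction of sites (parameter `dilution`) has a reduced hop rate (`slow_rate`), and it estimates the current at a few densities. All parameter values and function names are illustrative.

```python
import numpy as np

def tasep_flux(L=1000, density=0.5, dilution=0.02, slow_rate=0.3,
               attempts=500_000, burn_in=100_000, seed=0):
    """Estimate the stationary current of a ring TASEP with dilute site disorder:
    a fraction `dilution` of sites lets particles hop out at rate `slow_rate` < 1,
    all other sites at rate 1 (random-sequential updating)."""
    rng = np.random.default_rng(seed)
    occ = np.zeros(L, dtype=bool)
    occ[rng.choice(L, int(density * L), replace=False)] = True   # initial particles
    rate = np.where(rng.random(L) < dilution, slow_rate, 1.0)    # quenched disorder
    jumps = 0
    for t in range(attempts):
        i = int(rng.integers(L))            # pick a random site
        j = (i + 1) % L
        if occ[i] and not occ[j] and rng.random() < rate[i]:
            occ[i], occ[j] = False, True    # hop to the right
            if t >= burn_in:
                jumps += 1
    # Each attempt advances time by 1/L; summing jumps over all L bonds,
    # the current per bond per unit time is jumps / (counted attempts).
    return jumps / (attempts - burn_in)

if __name__ == "__main__":
    for rho in (0.3, 0.5, 0.7):
        print(f"density {rho}: estimated flux {tasep_flux(density=rho):.4f}")
```

    Without disorder the estimate should track the homogeneous TASEP flux rho(1 - rho); with slow sites one expects a depressed, flattened profile near density 1/2, which is the regime the paper quantifies.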

    Multilinear Superhedging of Lookback Options

    In a pathbreaking paper, Cover and Ordentlich (1998) solved a max-min portfolio game between a trader (who picks an entire trading algorithm, $\theta(\cdot)$) and "nature," who picks the matrix $X$ of gross returns of all stocks in all periods. Their (zero-sum) game has the payoff kernel $W_\theta(X)/D(X)$, where $W_\theta(X)$ is the trader's final wealth and $D(X)$ is the final wealth that would have accrued to a \$1 deposit into the best constant-rebalanced portfolio (or fixed-fraction betting scheme) determined in hindsight. The resulting "universal portfolio" compounds its money at the same asymptotic rate as the best rebalancing rule in hindsight, thereby beating the market asymptotically under extremely general conditions. Smitten with this (1998) result, the present paper solves the most general tractable version of Cover and Ordentlich's (1998) max-min game. This obtains for performance benchmarks (read: derivatives) that are separately convex and homogeneous in each period's gross-return vector. For completely arbitrary (even non-measurable) performance benchmarks, we show how the axiom of choice can be used to "find" an exact maximin strategy for the trader. Comment: 41 pages, 3 figures
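    To make the objects in the payoff kernel $W_\theta(X)/D(X)$ concrete, the sketch below computes, for a two-asset example on synthetic gross returns, the hindsight-optimal constant-rebalanced-portfolio wealth $D(X)$ (by grid search over the simplex) and the wealth of Cover's universal portfolio, whose final wealth equals the uniform average of the CRP wealths. The data, grid resolution and function names are illustrative and not taken from the paper.

```python
import numpy as np

def crp_wealth(X, b):
    """Final wealth of a constant-rebalanced portfolio with weights b,
    given the T x n matrix X of gross returns (row t = period t)."""
    return float(np.prod(X @ b))

def hindsight_wealth(X, grid=200):
    """D(X): best CRP wealth in hindsight (2 assets, grid search over weights)."""
    weights = np.linspace(0.0, 1.0, grid + 1)
    return max(crp_wealth(X, np.array([w, 1.0 - w])) for w in weights)

def universal_wealth(X, grid=200):
    """Cover's universal portfolio in the 2-asset case: its final wealth is the
    average of the CRP wealths under the uniform prior on the simplex."""
    weights = np.linspace(0.0, 1.0, grid + 1)
    return float(np.mean([crp_wealth(X, np.array([w, 1.0 - w])) for w in weights]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # 50 periods, 2 assets, synthetic lognormal-ish gross returns (illustrative)
    X = np.exp(rng.normal(0.0, 0.1, size=(50, 2)))
    W, D = universal_wealth(X), hindsight_wealth(X)
    print("universal wealth W(X) =", round(W, 4))
    print("hindsight CRP   D(X) =", round(D, 4))
    print("payoff kernel   W/D  =", round(W / D, 4))
```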

    Optimal Bounds on Approximation of Submodular and XOS Functions by Juntas

    We investigate the approximability of several classes of real-valued functions by functions of a small number of variables (juntas). Our main results are tight bounds on the number of variables required to approximate a function $f:\{0,1\}^n \rightarrow [0,1]$ within $\ell_2$-error $\epsilon$ over the uniform distribution: 1. If $f$ is submodular, then it is $\epsilon$-close to a function of $O(\frac{1}{\epsilon^2} \log \frac{1}{\epsilon})$ variables. This is an exponential improvement over previously known results. We note that $\Omega(\frac{1}{\epsilon^2})$ variables are necessary even for linear functions. 2. If $f$ is fractionally subadditive (XOS) it is $\epsilon$-close to a function of $2^{O(1/\epsilon^2)}$ variables. This result holds for all functions with low total $\ell_1$-influence and is a real-valued analogue of Friedgut's theorem for boolean functions. We show that $2^{\Omega(1/\epsilon)}$ variables are necessary even for XOS functions. As applications of these results, we provide learning algorithms over the uniform distribution. For XOS functions, we give a PAC learning algorithm that runs in time $2^{poly(1/\epsilon)} poly(n)$. For submodular functions we give an algorithm in the more demanding PMAC learning model (Balcan and Harvey, 2011) which requires a multiplicative $1+\gamma$ factor approximation with probability at least $1-\epsilon$ over the target distribution. Our uniform distribution algorithm runs in time $2^{poly(1/(\gamma\epsilon))} poly(n)$. This is the first algorithm in the PMAC model that over the uniform distribution can achieve a constant approximation factor arbitrarily close to 1 for all submodular functions. As follows from the lower bounds in (Feldman et al., 2013) both of these algorithms are close to optimal. We also give applications for proper learning, testing and agnostic learning with value queries of these classes. Comment: Extended abstract appears in proceedings of FOCS 201
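    To make "$\epsilon$-close to a junta" concrete, the sketch below brute-forces, for a tiny $n$, the best $\ell_2$ approximation of a function $f:\{0,1\}^n \rightarrow [0,1]$ by a function of a chosen variable set $J$ (the optimizer is the conditional expectation of $f$ given the coordinates in $J$), and reports the best $k$-junta error for each $k$. The example function and sizes are illustrative; the paper's results concern the asymptotic dependence on $\epsilon$, which this toy computation does not reproduce.

```python
import itertools
import numpy as np

def best_junta_error(f_vals, n, J):
    """l2-error (uniform distribution) of the best approximation of
    f : {0,1}^n -> R by a function of the variables in J; the optimal
    junta is the conditional expectation of f given the coordinates in J."""
    f = np.asarray(f_vals, dtype=float)          # f[x], x encoded as an integer
    xs = np.arange(2 ** n)
    key = np.zeros_like(xs)                      # restriction of x to J
    for pos, i in enumerate(sorted(J)):
        key |= ((xs >> i) & 1) << pos
    g = np.zeros_like(f)
    for k in np.unique(key):
        mask = key == k
        g[mask] = f[mask].mean()                 # average f over the other coordinates
    return float(np.sqrt(np.mean((f - g) ** 2)))

if __name__ == "__main__":
    n = 6
    # illustrative submodular example: a concave function of |x|, f(x) = min(|x|, 3)/3
    f_vals = [min(bin(x).count("1"), 3) / 3 for x in range(2 ** n)]
    for k in range(n + 1):
        err = min(best_junta_error(f_vals, n, J)
                  for J in itertools.combinations(range(n), k))
        print(f"best {k}-junta l2-error: {err:.4f}")
```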

    Hierarchical testing designs for pattern recognition

    We explore the theoretical foundations of a "twenty questions" approach to pattern recognition. The object of the analysis is the computational process itself rather than probability distributions (Bayesian inference) or decision boundaries (statistical learning). Our formulation is motivated by applications to scene interpretation in which there are a great many possible explanations for the data, one ("background") is statistically dominant, and it is imperative to restrict intensive computation to genuinely ambiguous regions. The focus here is then on pattern filtering: Given a large set $Y$ of possible patterns or explanations, narrow down the true one $Y$ to a small (random) subset $\hat Y \subset Y$ of "detected" patterns to be subjected to further, more intense, processing. To this end, we consider a family of hypothesis tests for $Y \in A$ versus the nonspecific alternatives $Y \in A^c$. Each test has null type I error and the candidate sets $A \subset Y$ are arranged in a hierarchy of nested partitions. These tests are then characterized by scope ($|A|$), power (or type II error) and algorithmic cost. We consider sequential testing strategies in which decisions are made iteratively, based on past outcomes, about which test to perform next and when to stop testing. The set $\hat Y$ is then taken to be the set of patterns that have not been ruled out by the tests performed. The total cost of a strategy is the sum of the "testing cost" and the "postprocessing cost" (proportional to $|\hat Y|$) and the corresponding optimization problem is analyzed. Comment: Published at http://dx.doi.org/10.1214/009053605000000174 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
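    A minimal coarse-to-fine sketch of the sequential testing idea, under simplifying assumptions not taken from the paper: tests are modeled with zero type I error and a fixed type II error `beta`, the nested partition is a balanced binary split of the pattern set, and the strategy descends into any cell whose test accepts. It returns the detected set Y_hat and the number of tests performed (the testing cost); all names and parameters are illustrative.

```python
import random

def cft_detect(patterns, true_y, beta=0.2, seed=0):
    """Coarse-to-fine 'twenty questions' detection sketch.
    Each cell A of a nested binary partition carries a test for Y in A vs Y in A^c
    with zero type I error (it never rejects a cell containing the true pattern)
    and type II error beta (probability of accepting a cell that excludes it)."""
    rng = random.Random(seed)
    detected, cost = [], 0

    def test(cell):
        nonlocal cost
        cost += 1
        if true_y in cell:              # null type I error: always accept
            return True
        return rng.random() < beta      # false alarm with probability beta

    def recurse(cell):
        if not test(cell):
            return                      # the whole cell is ruled out at once
        if len(cell) == 1:
            detected.append(cell[0])    # survives all tests on its chain of cells
            return
        mid = len(cell) // 2
        recurse(cell[:mid])             # refine: descend into the nested partition
        recurse(cell[mid:])

    recurse(list(patterns))
    return detected, cost

if __name__ == "__main__":
    y_hat, cost = cft_detect(patterns=list(range(1024)), true_y=137)
    print("true pattern detected:", 137 in y_hat)
    print("|Y_hat| =", len(y_hat), " tests performed =", cost)
```

    The trade-off the paper optimizes is visible here: a smaller beta (more powerful, presumably costlier tests) shrinks Y_hat and the postprocessing cost, while the hierarchy keeps the number of tests far below testing every pattern individually.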