
    Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso

    We present exponential, finite-sample, nonasymptotic deviation inequalities for the SAA estimator's near-optimal solution set over the class of stochastic optimization problems with heavy-tailed random \emph{convex} functions in the objective and constraints. Such a setting is better suited to problems where a sub-Gaussian data-generating distribution is not expected, e.g., in stochastic portfolio optimization. One of our contributions is to exploit \emph{convexity} of the perturbed objective and the perturbed constraints as a property which entails \emph{localized} deviation inequalities for joint feasibility and optimality guarantees. This means that our bounds are significantly tighter in terms of diameter and metric entropy, since they depend only on the near-optimal solution set rather than on the whole feasible set. As a result, we obtain a much sharper sample complexity estimate than for a general nonconvex problem. In our analysis, we derive localized deterministic perturbation error bounds for convex optimization problems which are of independent interest. To obtain our results, we only assume a metrically regular convex feasible set, possibly not satisfying the Slater condition and possibly not having a metrically regular solution set. In this general setting, joint near feasibility and near optimality are guaranteed. If, in addition, the set satisfies the Slater condition, we obtain finite-sample simultaneous \emph{exact} feasibility and near optimality guarantees (for a sufficiently small tolerance). Another contribution of our work is to present, as a proof of concept for our localized techniques, a persistence result for a variant of the LASSO estimator under very weak assumptions on the data-generating distribution.
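    As a rough illustration only (not the estimator or analysis of the paper), the sketch below shows the basic sample average approximation (SAA) idea in a stochastic portfolio setting with heavy-tailed data: the expectation in the objective is replaced by an empirical average over n i.i.d. samples, and the resulting convex program is solved over the simplex. The Student-t returns, the mean-variance objective, and the scipy solver are illustrative assumptions.

```python
# Minimal SAA sketch under the assumptions stated above; not the paper's method.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, n = 5, 2000
# Heavy-tailed returns: Student-t with 3 degrees of freedom (finite variance,
# but not sub-Gaussian), shifted so assets have slightly different means.
returns = rng.standard_t(df=3, size=(n, d)) + np.linspace(0.0, 0.5, d)

def saa_objective(w):
    # Sample-average surrogate of the expected objective:
    # negative mean return plus a variance penalty (convex in w).
    portfolio = returns @ w
    return -portfolio.mean() + 0.5 * portfolio.var()

# Feasible set: the probability simplex (long-only, fully invested).
constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
bounds = [(0.0, 1.0)] * d
w0 = np.full(d, 1.0 / d)

res = minimize(saa_objective, w0, method="SLSQP", bounds=bounds,
               constraints=constraints)
print("SAA portfolio weights:", np.round(res.x, 3))
```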

    Proteomic and epigenomic markers of sepsis-induced delirium (SID)

    In the elderly population, sepsis is one of the leading causes of intensive care unit (ICU) admissions in the United States. Sepsis-induced delirium (SID) is the most frequent cause of delirium in the ICU (Martin et al., 2010). Together, delirium and SID represent under-recognized public health problems which place an increasing financial burden on the US health care system, currently estimated at 143-152 billion dollars per year (Leslie et al., 2008). Interest in SID was recently reignited when it was demonstrated that, contrary to prior beliefs, the cognitive deficits induced by this condition may be irreversible and lead to dementia (Pandharipande et al., 2013; Brummel et al., 2014). Conversely, it is construed that diagnosing SID early or mitigating its full-blown manifestations may preempt geriatric cognitive disorders. Biological markers specific to sepsis and SID would facilitate the development of potential therapies, help monitor the disease process, and enable elderly individuals to make better-informed decisions regarding surgeries that may pose a risk of complications, including sepsis and delirium. This article proposes a battery of peripheral blood markers to be used for diagnostic and prognostic purposes in sepsis and SID. Though each individual marker may not be specific enough on its own, we believe that together, as a battery, they may achieve the necessary accuracy to answer two important questions: who may be vulnerable to the development of sepsis, and who may develop SID and irreversible cognitive deficits following sepsis?

    Further Evidence for a Gravitational Fixed Point

    A theory of gravity with a generic action functional and minimally coupled to N matter fields has a nontrivial fixed point in the leading large-N approximation. At this fixed point, the cosmological constant and Newton's constant are nonzero and UV relevant; the curvature-squared terms are asymptotically free with marginal behaviour; all higher-order terms are irrelevant and can be set to zero by a suitable choice of cutoff function.

    Estimating graph parameters with random walks

    An algorithm observes the trajectories of random walks over an unknown graph $G$, starting from the same vertex $x$, as well as the degrees along the trajectories. For all finite connected graphs, one can estimate the number of edges $m$ up to a bounded factor in $O\left(t_{\mathrm{rel}}^{3/4}\sqrt{m/d}\right)$ steps, where $t_{\mathrm{rel}}$ is the relaxation time of the lazy random walk on $G$ and $d$ is the minimum degree in $G$. Alternatively, $m$ can be estimated in $O\left(t_{\mathrm{unif}} + t_{\mathrm{rel}}^{5/6}\sqrt{n}\right)$ steps, where $n$ is the number of vertices and $t_{\mathrm{unif}}$ is the uniform mixing time on $G$. The number of vertices $n$ can then be estimated up to a bounded factor in an additional $O\left(t_{\mathrm{unif}}\frac{m}{n}\right)$ steps. Our algorithms are based on counting the number of intersections of random walk paths $X, Y$, i.e. the number of pairs $(t,s)$ such that $X_t = Y_s$. This improves on previous estimates which only consider collisions (i.e., times $t$ with $X_t = Y_t$). We also show that the complexity of our algorithms is optimal, even when restricting to graphs with a prescribed relaxation time. Finally, we show that, given either $m$ or the mixing time of $G$, we can compute the "other parameter" with a self-stopping algorithm.
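    As a toy sketch only, the code below illustrates the intersection-counting primitive described in the abstract: run two independent lazy random walks from the same vertex and count the pairs $(t,s)$ with $X_t = Y_s$. The networkx Erdős–Rényi graph, the fixed walk length T, and the bare intersection count are illustrative assumptions; the paper's degree weighting, constants, and stopping rules are omitted.

```python
# Illustrative sketch under the assumptions stated above; not the paper's estimator.
import random
from collections import Counter
import networkx as nx

def lazy_walk(G, start, T, rng):
    """Return a lazy random walk of length T (stay put with probability 1/2)."""
    path = [start]
    v = start
    for _ in range(T):
        if rng.random() < 0.5:
            nbrs = list(G.neighbors(v))
            if nbrs:
                v = rng.choice(nbrs)
        path.append(v)
    return path

def count_intersections(X, Y):
    """Count pairs (t, s) with X[t] == Y[s]."""
    counts = Counter(Y)
    return sum(counts[v] for v in X)

rng = random.Random(0)
G = nx.erdos_renyi_graph(200, 0.05, seed=1)  # illustrative graph
x = 0                                        # common start vertex
X = lazy_walk(G, x, 500, rng)
Y = lazy_walk(G, x, 500, rng)
print("intersections:", count_intersections(X, Y))
```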