Contamination Estimation via Convex Relaxations
Identifying anomalies and contamination in datasets is important in a wide
variety of settings. In this paper, we describe a new technique for estimating
contamination in large, discrete-valued datasets. Our approach considers the
normal condition of the data to be specified by a model consisting of a set of
distributions. Our key contribution is in our approach to contamination
estimation. Specifically, we develop a technique that identifies the minimum
number of data points that must be discarded (i.e., the level of contamination)
from an empirical data set in order to match the model to within a specified
goodness-of-fit, controlled by a p-value. Appealing to results from large
deviations theory, we show that a lower bound on the level of contamination is
obtained by solving a series of convex programs. Theoretical results guarantee
that the bound converges at a rate of , where p is the size of the empirical
data set.
Comment: To appear, ISIT 201
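The abstract's procedure can be sketched on a toy instance. Everything below is an illustrative assumption, not the paper's formulation: a single model distribution q stands in for the paper's set of distributions, and a fixed KL-divergence threshold tau stands in for the p-value-controlled goodness-of-fit. The convex program then asks for the least probability mass to discard from the empirical distribution so the renormalized remainder fits the model.

```python
import numpy as np
from scipy.optimize import minimize

def min_contamination(p_hat, q, tau):
    """Smallest total mass to discard from the empirical distribution p_hat
    so that, after renormalization, the remainder is within KL divergence
    tau of the model distribution q (a hypothetical stand-in program)."""
    n = len(p_hat)

    def kl_after_removal(r):
        kept = np.clip(p_hat - r, 1e-12, None)   # mass kept in each cell
        tilde = kept / kept.sum()                # renormalized distribution
        return float(np.sum(tilde * np.log(tilde / q)))

    res = minimize(
        lambda r: r.sum(),                       # discard as little as possible
        x0=np.zeros(n),
        bounds=[(0.0, float(pi)) for pi in p_hat],
        constraints=[{"type": "ineq",
                      "fun": lambda r: tau - kl_after_removal(r)}],
        method="SLSQP",
    )
    return float(res.x.sum())
```

With a uniform model over four symbols and excess empirical mass on one symbol, the solver discards mass from the over-represented cell only; if the empirical distribution already fits, nothing is discarded.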
Probabilistic models of planetary contamination
Likely fundamental inadequacies in the model of planetary contamination advanced by Sagan and Coleman are discussed. It is shown that a relatively minor modification of the basic Sagan-Coleman formula yields approximations that are generally adequate for data in the range of interest. This approximation formula differs from the original Sagan-Coleman version only through an initial conditioning on landing outcome. It always yields an upper (conservative) bound for the total probability of contamination; this appealing feature is lost if the conditioning on landing outcome is deleted.
Distributionally Robust Optimization: A Review
The concepts of risk-aversion, chance-constrained optimization, and robust
optimization have developed significantly over the last decade. The
statistical learning community has also witnessed rapid theoretical and
applied growth by relying on these concepts. A modeling framework, called
distributionally robust optimization (DRO), has recently received significant
attention in both the operations research and statistical learning
communities. This paper surveys the main concepts and contributions to DRO
and its relationships with robust optimization, risk-aversion,
chance-constrained optimization, and function regularization.
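The DRO idea can be made concrete with a toy ambiguity set of my own choosing (nothing here comes from the survey): over finitely many scenarios, maximize the expected loss over all distributions within total-variation radius rho/2 of a nominal distribution. This worst case has a greedy solution: shift mass from the cheapest scenarios to the single worst one.

```python
import numpy as np

def worst_case_expectation(losses, p, rho):
    """Maximize q @ losses over distributions q with ||q - p||_1 <= rho.
    Since additions must equal removals, at most rho/2 of mass moves:
    it is drained from low-loss scenarios and piled on the worst one."""
    losses = np.asarray(losses, dtype=float)
    q = np.asarray(p, dtype=float).copy()
    worst = int(np.argmax(losses))        # scenario receiving shifted mass
    budget = rho / 2.0
    for i in np.argsort(losses):          # drain cheapest scenarios first
        if i == worst or budget <= 0.0:
            continue
        move = min(q[i], budget)
        q[i] -= move
        q[worst] += move
        budget -= move
    return float(q @ losses)
```

With losses (1, 2, 3), uniform nominal weights, and rho = 0.2, mass 0.1 moves from the loss-1 scenario to the loss-3 scenario, raising the expectation from 2.0 to the robust value 2.2.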
Output analysis for approximated stochastic programs
Because of incomplete information, and also for the sake of numerical tractability, one mostly solves an approximated stochastic program instead of the underlying "true" decision problem. However, without additional analysis, the obtained output (the optimal value and optimal solutions of the approximated stochastic program) should not be used to replace the sought solution of the "true" problem. Methods of output analysis have to be tailored to the structure of the problem, and they should also reflect the source, character, and precision of the input data. The scope of various approaches, based on results from asymptotic and robust statistics, from the moment problem, and from general results of parametric programming, will be discussed from the point of view of their applicability and possible extensions.
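The warning above can be illustrated with a sample-average approximation (SAA), my own example rather than anything from the paper: approximating min_x E[(x - xi)^2] with xi ~ N(0,1) by a finite sample gives an optimizer (the sample mean) and an optimal value (the sample variance about that mean) that deviate from the true values 0 and 1, and the reported optimal value is systematically biased low.

```python
import numpy as np

rng = np.random.default_rng(0)

def saa(n):
    """Sample-average approximation of min_x E[(x - xi)^2], xi ~ N(0,1).
    True optimizer: x* = 0; true optimal value: Var(xi) = 1."""
    xi = rng.normal(size=n)
    x_hat = xi.mean()                    # SAA optimizer: the sample mean
    v_hat = np.mean((x_hat - xi) ** 2)   # SAA optimal value: biased low
    return x_hat, v_hat

for n in (10, 1000, 100000):
    x_hat, v_hat = saa(n)
    print(f"n={n:6d}  x_hat={x_hat:+.4f}  v_hat={v_hat:.4f}")
```

The gap to (0, 1) shrinks as the sample grows, but E[v_hat] = (n-1)/n < 1 at every sample size, which is exactly why the raw output needs the kind of analysis the abstract calls for.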
Chance constrained problems: penalty reformulation and performance of sample approximation technique
We explore the reformulation of nonlinear stochastic programs with several joint chance constraints by stochastic programs with suitably chosen penalty-type objectives. We show that the two problems are asymptotically equivalent. Simpler cases with one chance constraint and particular penalty functions were studied in [6,11]. The obtained problems with penalties and with a fixed set of feasible solutions are simpler to solve and analyze than the chance constrained programs. We discuss solving both problems using Monte Carlo simulation techniques for the cases when the set of feasible solutions is finite or infinite but bounded. The approach is applied to a financial optimization problem with a Value at Risk constraint, transaction costs, and integer allocations. We compare the ability to generate a feasible solution of the original chance constrained problem using the sample approximations of the chance constraints directly or via a sample approximation of the penalty-function objective.
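The two approximations being compared can be sketched on a one-dimensional toy problem (the normal model, the penalty weight mu, and all names are illustrative assumptions, not the paper's setting): for the chance constraint P(xi <= x) >= 1 - eps, either impose the sampled constraint directly and take the cheapest feasible x, or fold the estimated violation probability into the objective as a penalty.

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(size=5000)          # scenarios for xi ~ N(0,1)
EPS = 0.05                               # allowed violation probability

def violation_prob(x):
    """Monte Carlo estimate of P(xi > x)."""
    return float(np.mean(samples > x))

def penalized(x, mu=100.0):
    """Cost x plus a penalty on violation probability exceeding EPS."""
    return x + mu * max(violation_prob(x) - EPS, 0.0)

grid = np.linspace(0.0, 3.0, 301)

# direct sample approximation: cheapest x satisfying the sampled constraint
x_direct = float(min(x for x in grid if violation_prob(x) <= EPS))

# penalty reformulation: minimize the penalized objective
x_penalty = float(grid[np.argmin([penalized(x) for x in grid])])
```

Both solutions land near the true 95% quantile of N(0,1), about 1.645, and typically coincide, which is a small-scale echo of the asymptotic equivalence the abstract establishes.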
Solving joint chance constrained problems using regularization and Benders' decomposition
In this paper we investigate stochastic programs with joint chance constraints. We consider a discrete scenario set and reformulate the problem by adding auxiliary variables. Since the resulting problem has a difficult feasible set, we regularize it. To reduce the dependence on the number of scenarios, we propose a numerical method that iteratively solves a master problem while adding Benders cuts. We find the solution of the slave problem (generating the Benders cuts) in closed form and propose a heuristic method to decrease the number of cuts. We perform a numerical study by increasing the number of scenarios and compare our solution with a solution obtained by solving the same problem with a continuous distribution.
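The master-plus-cuts loop can be illustrated on a deterministic toy problem. This is Kelley's cutting-plane scheme on a one-dimensional convex function, standing in for the paper's master/slave structure (a grid search replaces the real master LP, and a gradient evaluation replaces the closed-form slave):

```python
import numpy as np

def cutting_plane(f, grad, lo=-10.0, hi=10.0, iters=30):
    """Minimize convex f by iteratively solving a master problem over
    accumulated linear cuts, Benders-style: each 'slave' evaluation
    f(x_k), f'(x_k) yields the cut theta >= f(x_k) + f'(x_k)*(x - x_k)."""
    xs = np.linspace(lo, hi, 2001)       # crude stand-in for the master LP
    cuts = []                            # (slope, intercept) pairs
    x = hi                               # arbitrary starting point
    for _ in range(iters):
        fx, g = f(x), grad(x)
        cuts.append((g, fx - g * x))     # add the new Benders-style cut
        lower = np.max([a * xs + b for a, b in cuts], axis=0)
        x = float(xs[np.argmin(lower)])  # master: minimize the cut envelope
    return x

x_opt = cutting_plane(lambda x: (x - 1.0) ** 2, lambda x: 2.0 * (x - 1.0))
```

Each iteration tightens the piecewise-linear lower envelope, so the master iterates converge to the minimizer x = 1; the same loop structure carries over when the cuts come from a stochastic slave problem.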