Scenario trees and policy selection for multistage stochastic programming using machine learning
We propose a hybrid algorithmic strategy for complex stochastic optimization
problems, which combines the use of scenario trees from multistage stochastic
programming with machine learning techniques for learning a policy in the form
of a statistical model, in the context of constrained vector-valued decisions.
Such a policy allows one to run out-of-sample simulations over a large number
of independent scenarios and thus to obtain an indication of the quality of the
approximation scheme used to solve the multistage stochastic program. We
propose to apply this fast simulation technique to choose the best tree from a
set of scenario trees. A solution scheme is introduced, where several scenario
trees with random branching structure are solved in parallel, and where the
tree from which the best policy for the true problem could be learned is
ultimately retained. Numerical tests show that excellent trade-offs can be
achieved between run times and solution quality.
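
As a hedged illustration of the selection loop described above (solve several randomly branched scenario trees, fit a statistical policy to each tree solution, score each policy by out-of-sample simulation, retain the best tree), here is a minimal sketch on a toy inventory problem. The demand model, the linear policy class, and the profit function are illustrative assumptions, not the paper's formulation, and solve_tree only stands in for an actual multistage solver.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scenarios(n, horizon=3):
    """Toy demand process: i.i.d. lognormal demands over the horizon."""
    return rng.lognormal(mean=1.0, sigma=0.4, size=(n, horizon))

def solve_tree(branching):
    """Stand-in for solving the multistage program on one scenario tree.
    A real implementation would build a tree with the given branching
    structure and solve its deterministic equivalent; here we only sample
    scenarios and apply a toy decision rule so the loop below is runnable."""
    n_scen = int(np.prod(branching))
    demands = sample_scenarios(n_scen)
    # toy 'tree solution': order the 80th percentile of sampled demand at each stage
    decisions = np.tile(np.quantile(demands, 0.8, axis=0), (n_scen, 1))
    return demands, decisions

def fit_policy(demands, decisions):
    """Learn a linear policy per stage mapping the demand history to the decision."""
    coefs = []
    for t in range(demands.shape[1]):
        X = np.hstack([np.ones((demands.shape[0], 1)), demands[:, :t]])
        w, *_ = np.linalg.lstsq(X, decisions[:, t], rcond=None)
        coefs.append(w)
    return coefs

def simulate_policy(coefs, n_sims=5000, price=3.0, cost=1.0):
    """Out-of-sample evaluation on fresh scenarios; returns average profit."""
    demands = sample_scenarios(n_sims)
    profit = np.zeros(n_sims)
    for t in range(demands.shape[1]):
        X = np.hstack([np.ones((n_sims, 1)), demands[:, :t]])
        order = np.clip(X @ coefs[t], 0.0, None)   # respect non-negativity of decisions
        profit += price * np.minimum(order, demands[:, t]) - cost * order
    return profit.mean()

# Solve several trees with random branching structures, learn a policy from each
# tree solution, and retain the tree whose policy simulates best out of sample.
candidate_branchings = [rng.integers(2, 5, size=3) for _ in range(5)]
scores = [simulate_policy(fit_policy(*solve_tree(b))) for b in candidate_branchings]
best = int(np.argmax(scores))
print("retained branching:", candidate_branchings[best], "mean profit:", round(scores[best], 3))
```

Because the out-of-sample evaluation only requires applying the learned policy to fresh scenarios, it is cheap relative to solving each tree, which is what makes this solve-and-compare scheme attractive to run over many candidate trees in parallel.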
Towards Machine Wald
The past century has seen a steady increase in the need to estimate and
predict complex systems and to make (possibly critical) decisions with
limited information. Although computers have made possible the numerical
evaluation of sophisticated statistical models, these models are still designed
\emph{by humans} because there is currently no known recipe or algorithm for
dividing the design of a statistical model into a sequence of arithmetic
operations. Indeed, enabling computers to \emph{think} as \emph{humans} do when
faced with uncertainty is challenging in several major ways:
(1) Finding optimal statistical models has yet to be formulated as a well-posed
problem when information on the system of interest is incomplete and comes in
the form of a complex combination of sample data, partial knowledge of
constitutive relations and a limited description of the distribution of input
random variables. (2) The space of admissible scenarios, along with the space of
relevant information, assumptions, and/or beliefs, tends to be infinite
dimensional, whereas calculus on a computer is necessarily discrete and finite.
To this end, this paper explores the foundations of a rigorous framework
for the scientific computation of optimal statistical estimators/models and
reviews their connections with Decision Theory, Machine Learning, Bayesian
Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty
Quantification, and Information-Based Complexity.
Bounding separable recourse functions with limited distribution information
The recourse function in a stochastic program with recourse can be approximated by separable functions of the original random variables or linear transformations of them. The resulting bound then involves summing simple integrals. These integrals may themselves be difficult to compute or may require more information about the random variables than is available. In this paper, we show that a special class of functions has an easily computable bound that achieves the best upper bound when only first and second moment constraints are available.
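
The paper's bound is specific to a special class of separable recourse functions and is not reproduced here; as a generic sketch of what a bound built from first- and second-moment information alone looks like, the snippet below evaluates the classical worst-case (Scarf-type) bound on E[(X - c)^+] for a given mean and variance and compares it with a Monte Carlo estimate under one distribution matching those moments. The piecewise-linear integrand and the normal test distribution are assumptions made only for illustration.

```python
import numpy as np

def moment_upper_bound(mu, sigma, c):
    """Classical tight upper bound on E[(X - c)^+] over all distributions
    with mean mu and standard deviation sigma (a Scarf-type bound)."""
    return 0.5 * (np.sqrt(sigma**2 + (c - mu)**2) - (c - mu))

# Compare the bound with a Monte Carlo estimate for one particular distribution
# that matches the first two moments (a normal, chosen only for illustration).
mu, sigma, c = 10.0, 2.0, 11.0
rng = np.random.default_rng(1)
x = rng.normal(mu, sigma, size=200_000)
mc_estimate = np.maximum(x - c, 0.0).mean()

print("moment-based upper bound:", moment_upper_bound(mu, sigma, c))  # ~0.618
print("Monte Carlo (normal)    :", mc_estimate)                       # ~0.396, below the bound
```

Any distribution with the stated mean and variance yields an expectation no larger than the printed bound, which is the sense in which such moment-based bounds can replace integrals that are hard to compute or only partially specified.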
- …