Validating Sample Average Approximation Solutions with Negatively Dependent Batches
Sample-average approximations (SAA) are a practical means of finding
approximate solutions of stochastic programming problems involving an extremely
large (or infinite) number of scenarios. SAA can also be used to find estimates
of a lower bound on the optimal objective value of the true problem which, when
coupled with an upper bound, provides confidence intervals for the true optimal
objective value and valuable information about the quality of the approximate
solutions. Specifically, the lower bound can be estimated by solving multiple
SAA problems (each obtained using a particular sampling method) and averaging
the obtained objective values. State-of-the-art methods for lower-bound
estimation generate batches of scenarios for the SAA problems independently. In
this paper, we describe sampling methods that produce negatively dependent
batches, thus reducing the variance of the sample-averaged lower bound
estimator and increasing its usefulness in defining a confidence interval for
the optimal objective value. We provide conditions under which the new sampling
methods can reduce the variance of the lower bound estimator, and present
computational results verifying that our scheme reduces the variance
significantly compared with the traditional Latin hypercube approach.
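The batched lower-bound estimator described above can be illustrated on a toy stochastic program, min_x E[(x - ξ)^2] with ξ ~ U(0,1), whose true optimal value is 1/12. This is a minimal sketch: each batch's SAA optimal value is its (biased) sample variance, and averaging over batches estimates a statistical lower bound. The Latin hypercube batching shown is the traditional baseline the paper improves on, not the paper's negatively dependent scheme; all names and parameter values here are illustrative.

```python
import numpy as np

def lhs_batch(n, rng):
    # One-dimensional Latin hypercube sample of size n on [0, 1):
    # exactly one point per stratum [i/n, (i+1)/n).
    return (rng.permutation(n) + rng.random(n)) / n

def saa_lower_bound(batches):
    # For min_x E[(x - xi)^2], the SAA optimum of a batch is the batch
    # mean, and the SAA optimal value is the (biased) sample variance.
    # Averaging batch optima estimates a lower bound on the true
    # optimal value 1/12, since E[SAA optimum] <= true optimum.
    return float(np.mean([np.var(b) for b in batches]))

rng = np.random.default_rng(0)
n, m = 20, 200  # scenarios per batch, number of batches
iid = saa_lower_bound([rng.random(n) for _ in range(m)])
lhs = saa_lower_bound([lhs_batch(n, rng) for _ in range(m)])
print(iid, lhs)  # both hover near (and in expectation below) 1/12
```

Within-batch stratification makes each batch's objective estimate far less variable than i.i.d. sampling, which is what the paper's negatively dependent batches push further across batches.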
Normalization factors for magnetic relaxation of small particle systems in non-zero magnetic field
We critically discuss relaxation experiments in magnetic systems that can be
characterized in terms of an energy barrier distribution, showing that proper
normalization of the relaxation data is needed whenever curves corresponding to
different temperatures are to be compared. We show how these normalization
factors can be obtained from experimental data by using the
scaling method without making any assumptions about the nature of the energy
barrier distribution. The validity of the procedure is tested using a
ferrofluid of Fe_3O_4 particles.Comment: 5 pages, 6 eps figures added in April 22, to be published in Phys.
Rev. B 55 (1 April 1997
A learning-based algorithm to quickly compute good primal solutions for Stochastic Integer Programs
We propose a novel approach using supervised learning to obtain near-optimal
primal solutions for two-stage stochastic integer programming (2SIP) problems
with constraints in the first and second stages. The goal of the algorithm is
to predict a "representative scenario" (RS) for the problem such that,
deterministically solving the 2SIP with the random realization equal to the RS,
gives a near-optimal solution to the original 2SIP. Predicting an RS, instead
of directly predicting a solution, ensures first-stage feasibility of the
solution. If the problem is known to have complete recourse, second-stage
feasibility is also guaranteed. For computational testing, we learn to find an
RS for a two-stage stochastic facility location problem with integer variables
and linear constraints in both stages and consistently provide near-optimal
solutions. Our computing times are very competitive with those of
general-purpose integer programming solvers achieving a similar solution
quality.
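The representative-scenario idea can be sketched on a toy problem: an integer newsvendor with uncertain demand, where solving the deterministic problem at a single well-chosen scenario recovers a near-optimal first-stage decision. This is a hypothetical illustration, not the paper's facility-location setup, and the RS value below is hand-picked rather than produced by a learned predictor.

```python
def cost(x, xi, p=4.0, h=1.0):
    # Second-stage cost of ordering x when demand is xi:
    # shortage penalty p per unit short, holding cost h per unit over.
    return p * max(xi - x, 0) + h * max(x - xi, 0)

def expected_cost(x, scenarios):
    # True objective: average second-stage cost over all scenarios.
    return sum(cost(x, s) for s in scenarios) / len(scenarios)

def solve_deterministic(rs, xmax=50):
    # Deterministic problem with the random demand fixed to the RS.
    return min(range(xmax + 1), key=lambda x: cost(x, rs))

scenarios = [3, 8, 10, 12, 14, 15, 17, 20, 25, 40]
# Solving the full problem by enumerating integer first-stage decisions:
full = min(range(51), key=lambda x: expected_cost(x, scenarios))
rs = 20  # hypothetical predicted representative scenario
x_rs = solve_deterministic(rs)
print(full, x_rs, expected_cost(x_rs, scenarios))
```

Because x_rs solves an ordinary deterministic integer program, it is first-stage feasible by construction, which mirrors the feasibility argument in the abstract.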