Identifying efficient solutions via simulation: myopic multi-objective budget allocation for the bi-objective case
Simulation optimisation offers great opportunities in the design and optimisation of complex systems. In the presence of multiple objectives, there is usually no single solution that performs best on all objectives. Instead, there are several Pareto-optimal (efficient) solutions with different trade-offs, which cannot be improved in any objective without sacrificing performance in another objective. For the case where alternatives are evaluated on multiple stochastic criteria, and the performance of an alternative can only be estimated via simulation, we consider the problem of efficiently identifying the Pareto-optimal designs out of a (small) given set of alternatives. We present a simple myopic budget allocation algorithm for multi-objective problems and propose several variants for different settings. In particular, this myopic method only allocates one simulation sample to one alternative in each iteration. This paper shows how the algorithm works in bi-objective problems under different settings. Empirical tests show that our algorithm can significantly reduce the necessary simulation budget.
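The myopic loop described above (estimate all alternatives, then spend one simulation sample per iteration, then classify the empirical Pareto set) can be sketched in Python. This is a toy illustration: the test bed `TRUE_MEANS`, the Gaussian noise level, and the selection rule (sample the least-certain alternative) are assumptions for the example, not the paper's actual allocation criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test bed (not from the paper): five alternatives with true
# bi-objective means, both objectives minimised; simulation adds Gaussian noise.
TRUE_MEANS = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 2.0], [5.0, 1.0], [4.0, 4.0]])

def simulate(i):
    """One noisy bi-objective observation of alternative i."""
    return TRUE_MEANS[i] + rng.normal(0.0, 0.5, size=2)

def pareto_mask(means):
    """Boolean mask of non-dominated rows (minimisation in both objectives)."""
    n = len(means)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(means[j] <= means[i]) and np.any(means[j] < means[i]):
                mask[i] = False
                break
    return mask

def myopic_allocate(budget, n0=5):
    """Spend `budget` samples one at a time; return the estimated Pareto mask."""
    k = len(TRUE_MEANS)
    samples = [[simulate(i) for _ in range(n0)] for i in range(k)]
    for _ in range(budget - n0 * k):
        sems = np.array([np.std(s, axis=0, ddof=1).sum() / np.sqrt(len(s))
                         for s in samples])
        # Myopic step: one sample to one alternative per iteration. The rule
        # used here (largest standard error of the mean) is a stand-in for
        # the paper's allocation criterion.
        target = int(np.argmax(sems))
        samples[target].append(simulate(target))
    means = np.array([np.mean(s, axis=0) for s in samples])
    return pareto_mask(means)

print(myopic_allocate(200))  # estimated set of efficient alternatives
```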
Optimal computing budget allocation for constrained optimization
Doctor of Philosophy (Ph.D.) thesis
Distribution on Warp Maps for Alignment of Open and Closed Curves
Alignment of curve data is an integral part of their statistical analysis,
and can be achieved using model- or optimization-based approaches. The
parameter space is usually the set of monotone, continuous warp maps of a
domain. The infinite-dimensional nature of the parameter space encourages
sampling-based approaches, which require a distribution on the set of warp maps.
Moreover, the distribution should also enable sampling in the presence of
important landmark information on the curves which constrain the warp maps. For
alignment of closed and open curves, possibly with landmark information, we
provide a constructive, point-process based definition of a distribution on
the set of warp maps of the unit interval and the unit circle that is (1)
simple to sample from, and (2) possesses the desiderata for decomposition of
the alignment problem with landmark constraints into multiple unconstrained
ones. For warp maps on the unit interval, the distribution is
related to the Dirichlet process. We demonstrate its utility by using it as a
prior distribution on warp maps in a Bayesian model for alignment of two
univariate curves, and as a proposal distribution in a stochastic algorithm
that optimizes a suitable alignment functional for higher-dimensional curves.
Several examples from simulated and real datasets are provided.
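The interval case and its landmark decomposition can be illustrated via the Dirichlet connection the abstract mentions: increments of the warp drawn from a Dirichlet distribution give a monotone map fixing 0 and 1, and landmark constraints split into independent unconstrained warps per sub-interval. A minimal sketch, with an assumed concentration parameter `conc`, not the paper's point-process construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_warp(grid, conc=50.0):
    """Random monotone warp of [0, 1] fixing the endpoints: cumulative sums
    of Dirichlet-distributed increments. Concentration proportional to the
    grid spacings centres the warp on the identity; a larger `conc`
    (an assumed tuning parameter) concentrates it more tightly."""
    incr = rng.dirichlet(conc * np.diff(grid))
    return np.concatenate([[0.0], np.cumsum(incr)])

def landmark_warp(landmarks_in, landmarks_out, pts_per_piece=20, conc=50.0):
    """Warp matching given landmarks: draw independent unconstrained warps
    on each sub-interval and rescale -- the decomposition of the constrained
    alignment problem into multiple unconstrained ones."""
    a = np.concatenate([[0.0], landmarks_in, [1.0]])
    b = np.concatenate([[0.0], landmarks_out, [1.0]])
    xs, ys = [], []
    for x0, x1, y0, y1 in zip(a[:-1], a[1:], b[:-1], b[1:]):
        g = np.linspace(0.0, 1.0, pts_per_piece)
        w = random_warp(g, conc)
        xs.append(x0 + (x1 - x0) * g)   # abscissae of this piece
        ys.append(y0 + (y1 - y0) * w)   # warped ordinates, landmarks fixed
    return np.concatenate(xs), np.concatenate(ys)
```

For example, `landmark_warp(np.array([0.3]), np.array([0.5]))` returns a monotone map of [0, 1] sending the landmark 0.3 to 0.5, built from two independent warps.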
Non-asymptotic confidence bounds for the optimal value of a stochastic program
We discuss a general approach to building non-asymptotic confidence bounds
for stochastic optimization problems. Our principal contribution is the
observation that a Sample Average Approximation of a problem supplies upper and
lower bounds for the optimal value of the problem which are essentially better
than the quality of the corresponding optimal solutions. At the same time, such
bounds are more reliable than "standard" confidence bounds obtained through the
asymptotic approach. We also discuss bounding the optimal value of MinMax
Stochastic Optimization and stochastically constrained problems. We conclude
with a simulation study illustrating the numerical behavior of the proposed
bounds.
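The key observation, that an SAA supplies both a lower bound (since the expected SAA optimal value understates the true optimum) and an upper bound (since any fixed candidate solution overstates it), can be sketched on a toy problem. Note this sketch uses the classical normal-approximation margins, i.e. the "standard" style of bound the abstract contrasts with, not the paper's non-asymptotic construction; the toy program and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stochastic program (assumed for illustration):
#   minimise f(x) = E[(x - xi)^2],  xi ~ N(0, 1),
# with true optimal value v* = 1 attained at x* = 0.

def saa_value(n):
    """Optimal value and minimiser of one n-scenario Sample Average
    Approximation; for this quadratic the SAA minimiser is the sample mean."""
    xi = rng.normal(size=n)
    return np.mean((xi - xi.mean()) ** 2), xi.mean()

def saa_bounds(n=200, m=25, n_eval=5000, z=2.0):
    # Lower bound: E[v_n] <= v*, so the average of m independent SAA optimal
    # values minus a sampling margin is a confidence lower bound on v*.
    vals, minimisers = zip(*(saa_value(n) for _ in range(m)))
    vals = np.array(vals)
    lower = vals.mean() - z * vals.std(ddof=1) / np.sqrt(m)
    # Upper bound: f(x_hat) >= v* for any fixed candidate x_hat, so an upper
    # confidence bound on f(x_hat), estimated on a fresh sample, bounds v*.
    x_hat = minimisers[0]
    f = (x_hat - rng.normal(size=n_eval)) ** 2
    upper = f.mean() + z * f.std(ddof=1) / np.sqrt(n_eval)
    return lower, upper

print(saa_bounds())  # an interval that should bracket v* = 1
```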
Coarse Grained Computations for a Micellar System
We establish, through coarse-grained computation, a connection between
traditional, continuum numerical algorithms (initial value problems as well as
fixed point algorithms) and atomistic simulations of the Larson model of
micelle formation. The procedure hinges on the (expected) evolution of a few
slow, coarse-grained mesoscopic observables of the MC simulation, and on
(computational) time scale separation between these and the remaining "slaved",
fast variables. Short bursts of appropriately initialized atomistic simulation
are used to estimate the (coarse-grained, deterministic) local dynamics of the
evolution of the observables. These estimates are then in turn used to
accelerate the evolution to computational stationarity through traditional
continuum algorithms (forward Euler integration, Newton-Raphson fixed point
computation). This "equation-free" framework, bypassing the derivation of
explicit, closed equations for the observables (e.g. equations of state) may
provide a computational bridge between direct atomistic / stochastic simulation
and the analysis of its macroscopic, system-level consequences.
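The equation-free loop (lift a coarse value to an ensemble, run a short fine-scale burst, estimate the coarse time derivative, take a large forward-Euler step) can be sketched on a toy model. The particle simulator below is an assumed stand-in for the Larson Monte Carlo simulation, with made-up relaxation and noise parameters; only the structure of the procedure follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

def fine_burst(m0, n_particles=4000, dt=0.01, n_steps=10):
    """Short burst of an assumed toy fine-scale model whose coarse
    observable, the ensemble mean, relaxes towards 1.
    Returns the observable's trajectory over the burst."""
    x = np.full(n_particles, m0) + rng.normal(0.0, 0.05, n_particles)  # lift
    traj = [x.mean()]                                                  # restrict
    for _ in range(n_steps):
        x += dt * (1.0 - x) + np.sqrt(dt) * 0.1 * rng.normal(size=n_particles)
        traj.append(x.mean())
    return np.array(traj), dt

def coarse_projective_euler(m0, big_step=0.5, n_outer=12):
    """Equation-free forward Euler: estimate the coarse time derivative from
    a short burst, then take a large extrapolation step in the observable."""
    m = m0
    for _ in range(n_outer):
        traj, dt = fine_burst(m)
        dmdt = (traj[-1] - traj[0]) / ((len(traj) - 1) * dt)  # coarse slope
        m += big_step * dmdt                                  # projective step
    return m

print(coarse_projective_euler(0.2))  # approaches the stationary value 1
```

The same slope estimates could instead feed a Newton-Raphson iteration for the coarse fixed point, the other continuum algorithm the abstract mentions.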
Noise reduction in coarse bifurcation analysis of stochastic agent-based models: an example of consumer lock-in
We investigate coarse equilibrium states of a fine-scale, stochastic
agent-based model of consumer lock-in in a duopolistic market. In the model,
agents decide on their next purchase based on a combination of their personal
preference and their neighbours' opinions. For agents with independent
identically-distributed parameters and all-to-all coupling, we derive an
analytic approximate coarse evolution-map for the expected average purchase. We
then study the emergence of coarse fronts when spatial segregation is present
in the relative perceived quality of products. We develop a novel Newton-Krylov
method that is able to compute accurately and efficiently coarse fixed points
when the underlying fine-scale dynamics is stochastic. The main novelty of the
algorithm is in the elimination of the noise that is generated when estimating
Jacobian-vector products using time-integration of perturbed initial
conditions. We present numerical results that demonstrate the convergence
properties of the numerical method, and use the method to show that macroscopic
fronts in this model destabilise at a coarse symmetry-breaking bifurcation.Comment: This version of the manuscript was accepted for publication on SIAD