Optimization-based Calibration of Simulation Input Models
Studies on simulation input uncertainty often build on the availability of
input data. In this paper, we investigate an inverse problem where, given only
the availability of output data, we nonparametrically calibrate the input
models and other related performance measures of interest. We propose an
optimization-based framework to compute statistically valid bounds on input
quantities. The framework utilizes constraints that connect the statistical
information of the real-world outputs with the input-output relation via a
simulable map. We analyze the statistical guarantees of this approach from the
viewpoint of data-driven robust optimization, and show how the guarantees relate to
the function complexity of the constraints arising in our framework. We
investigate an iterative procedure based on a stochastic quadratic penalty
method to approximately solve the resulting optimization. We conduct numerical
experiments to demonstrate the performance of our approach in bounding the
input models and related quantities.
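To make the framework concrete, here is a minimal sketch of the quadratic-penalty idea in Python: an input distribution on a small support is tuned so that a simulated output mean matches an observed target, and a bound on an input quantity is then read off. The simulable map, tolerance, objective E[X], and the crude finite-difference ascent are all illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: calibrate a discrete input distribution p on m support points so
# that the simulated output mean matches an observed target y_bar, then read
# off a bound on an input quantity (here E_p[X]).
m = 5
support = np.linspace(0.5, 2.5, m)
y_bar, delta = 1.25, 0.02             # observed output mean and tolerance

def sim_output_mean(p, n=1000):
    """Noisy simulation estimate of the output mean under input model p."""
    x = rng.choice(support, size=n, p=p)
    return np.mean(np.sqrt(x) + 0.05 * rng.standard_normal(n))

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def penalized(p, mu):
    """Objective E_p[X] minus a quadratic penalty on constraint violation."""
    viol = max(abs(sim_output_mean(p) - y_bar) - delta, 0.0)
    return p @ support - mu * viol ** 2

# Quadratic-penalty outer loop with stochastic finite-difference inner ascent.
p, mu, eps = np.full(m, 1.0 / m), 1.0, 1e-2
for outer in range(6):
    step = 0.02 / (outer + 1)
    for inner in range(50):
        f0 = penalized(p, mu)
        grad = np.zeros(m)
        for j in range(m):
            e = np.zeros(m); e[j] = eps
            grad[j] = (penalized(project_simplex(p + e), mu) - f0) / eps
        p = project_simplex(p + step * grad)
    mu *= 4.0                         # tighten the penalty each outer round

print("upper-bound estimate of E[X]:", round(float(p @ support), 3))
```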
Simulation optimization: A review of algorithms and applications
Simulation Optimization (SO) refers to the optimization of an objective
function subject to constraints, both of which can be evaluated through a
stochastic simulation. To address specific features of a particular
simulation---discrete or continuous decisions, expensive or cheap simulations,
single or multiple outputs, homogeneous or heterogeneous noise---various
algorithms have been proposed in the literature. As one can imagine, there
exist several competing algorithms for each of these classes of problems. This
document emphasizes the difficulties in simulation optimization as compared to
mathematical programming, makes reference to state-of-the-art algorithms in the
field, examines and contrasts the different approaches used, reviews some of
the diverse applications that have been tackled by these methods, and
speculates on future directions in the field.
Robust Analysis in Stochastic Simulation: Computation and Performance Guarantees
Any performance analysis based on stochastic simulation is subject to the
errors inherent in misspecifying the modeling assumptions, particularly the
input distributions. In situations with little support from data, we
investigate the use of worst-case analysis to analyze these errors, by
representing the partial, nonparametric knowledge of the input models via
optimization constraints. We study the performance and robustness guarantees of
this approach. We design and analyze a numerical scheme for solving a general
class of simulation objectives and uncertainty specifications. The key steps
involve a randomized discretization of the probability spaces, a simulable
unbiased gradient estimator using a nonparametric analog of the likelihood
ratio method, and a Frank-Wolfe (FW) variant of the stochastic approximation
(SA) method (which we call FWSA) run on the space of input probability
distributions. A convergence analysis for FWSA on non-convex problems is
provided. We test the performance of our approach via several numerical
examples.
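The Frank-Wolfe step itself is simple to illustrate. The sketch below runs a stochastic Frank-Wolfe iteration over a discretized space of input distributions; the toy objective and its crude coordinate-wise gradient estimator stand in for the paper's simulation objective and its nonparametric likelihood-ratio gradient estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal Frank-Wolfe stochastic approximation (FWSA) sketch on the
# probability simplex over a discretized input space.
support = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
m = len(support)

def stoch_grad(p, n=500):
    """Estimate dZ/dp, where Z(p) = E_p[c(X)] = sum_i p_i c(x_i), so the
    i-th partial derivative is c(x_i); here c is observed only through
    noisy simulation draws."""
    idx = rng.choice(m, size=n, p=p)
    cost = (support[idx] - 1.2) ** 2 + 0.1 * rng.standard_normal(n)
    g = np.full(m, np.inf)            # coordinates with no draws stay unattractive
    for i in range(m):
        hits = idx == i
        if hits.any():
            g[i] = cost[hits].mean()
    return g

p = np.full(m, 1.0 / m)
for k in range(1, 201):
    g = stoch_grad(p)
    v = np.zeros(m)
    v[np.argmin(g)] = 1.0             # linear minimization over the simplex
    gamma = 2.0 / (k + 2)             # classic Frank-Wolfe step size
    p = (1 - gamma) * p + gamma * v

print("optimized input distribution (mass concentrates near x = 1.2):",
      np.round(p, 3))
```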
Granger Causality Networks for Categorical Time Series
We present a new framework for learning Granger causality networks for
multivariate categorical time series, based on the mixture transition
distribution (MTD) model. Traditionally, MTD is plagued by a nonconvex
objective, non-identifiability, and the presence of many local optima. To
circumvent these problems, we recast inference in the MTD as a convex problem.
The new formulation facilitates the application of MTD to high-dimensional
multivariate time series. As a baseline, we also formulate a multi-output
logistic autoregressive model (mLTD) which, while a straightforward extension
of autoregressive Bernoulli generalized linear models, has not been previously
applied to the analysis of multivariate categorical time series. We develop
novel identifiability conditions for the MTD model and compare them to those for
mLTD. We further devise a novel and efficient optimization algorithm for the MTD
based on the new convex formulation, and compare the MTD and mLTD in both
simulated and real data experiments. Our approach simultaneously provides a
comparison of methods for network inference in categorical time series and
opens the door to modern, regularized inference with the MTD model.
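As a rough illustration of the baseline idea, the sketch below fits an l1-penalized multinomial logistic autoregression to a toy categorical time series and reads off influence from grouped coefficient magnitudes. The data-generating process and regularization settings are assumptions; this shows the flavor of mLTD, not the paper's estimator or its convex MTD formulation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Toy data: d categorical series with K categories; series 0 is made to
# depend on series 1's past, so a Granger link 1 -> 0 should be recovered.
T, d, K = 2000, 3, 3
X = rng.integers(0, K, size=(T, d))
for t in range(1, T):
    if X[t - 1, 1] == 2 and rng.random() < 0.8:
        X[t, 0] = 0

def one_hot(x, K):
    return np.eye(K)[x]

# Features: one-hot lags of all series at t-1; response: series 0 at t.
lagged = one_hot(X[:-1], K).reshape(T - 1, d * K)
target = X[1:, 0]

model = LogisticRegression(penalty="l1", solver="saga", C=0.5,
                           max_iter=2000).fit(lagged, target)
# Group coefficients by source series; a large block suggests influence on
# series 0 (expect the block for series 1 to dominate).
W = model.coef_.reshape(-1, d, K)
print("per-source coefficient magnitudes:", np.abs(W).sum(axis=(0, 2)).round(2))
```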
Derivative-free optimization methods
In many optimization problems arising from scientific, engineering and
artificial intelligence applications, objective and constraint functions are
available only as the output of a black-box or simulation oracle that does not
provide derivative information. Such settings necessitate the use of methods
for derivative-free, or zeroth-order, optimization. We provide a review and
perspectives on developments in these methods, with an emphasis on highlighting
recent developments and on unifying treatment of such problems in the
non-linear optimization and machine learning literature. We categorize methods
based on assumed properties of the black-box functions, as well as features of
the methods. We first overview the primary setting of deterministic methods
applied to unconstrained, non-convex optimization problems where the objective
function is defined by a deterministic black-box oracle. We then discuss
developments in randomized methods, methods that assume some additional
structure about the objective (including convexity, separability and general
non-smooth compositions), methods for problems where the output of the
black-box oracle is stochastic, and methods for handling different types of
constraints.
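One of the simplest members of this family is randomized two-point gradient estimation, sketched below on a noisy quadratic oracle; the smoothing radius and step schedule are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# A minimal zeroth-order method: random-direction two-point gradient
# estimation for a stochastic black-box oracle.
def oracle(x):
    """Noisy black box: only function values are available, no derivatives."""
    return np.sum((x - 1.0) ** 2) + 0.01 * rng.standard_normal()

x = np.zeros(5)
step, mu = 0.1, 1e-2                  # step size and smoothing radius (assumed)
for k in range(2000):
    u = rng.standard_normal(x.size)
    # Two-point estimator: (f(x + mu*u) - f(x)) / mu * u is, in expectation,
    # the gradient of a Gaussian-smoothed version of the objective.
    g = (oracle(x + mu * u) - oracle(x)) / mu * u
    x -= step / (k + 1) ** 0.5 * g    # diminishing steps average out the noise

print("zeroth-order solution:", x.round(2))  # should approach the optimum at 1
```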
Improvement of PSO algorithm by memory based gradient search - application in inventory management
Advanced inventory management in complex supply chains requires effective and
robust nonlinear optimization due to the stochastic nature of supply and demand
variations. Applying estimated gradients can speed up the convergence of the
Particle Swarm Optimization (PSO) algorithm, but classical gradient calculation
cannot be applied to stochastic and uncertain systems. In these situations,
Monte-Carlo (MC) simulation can be used to determine the gradient. We
developed a memory-based algorithm in which, instead of generating and
evaluating new simulated samples, the stored and shared past function
evaluations of the particles are sampled to estimate the gradients by locally
weighted least-squares regression. The performance of the resulting regional
gradient-based PSO is verified on several benchmark problems and in a complex application example
where optimal reorder points of a supply chain are determined.
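The core regression step is easy to illustrate: given a memory of past (position, fitness) pairs, a locally weighted least-squares fit around a query point yields a gradient estimate without spending new simulations. The objective, memory contents, and kernel width below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def estimate_gradient(mem_x, mem_f, x0, h=0.5):
    """Local gradient estimate from stored evaluations via weighted
    least-squares fit of a linear model around x0 (Gaussian kernel weights)."""
    w = np.exp(-np.sum((mem_x - x0) ** 2, axis=1) / (2 * h ** 2))
    A = np.hstack([np.ones((len(mem_x), 1)), mem_x - x0])  # local linear model
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * A, sw * mem_f, rcond=None)
    return beta[1:]                   # slope coefficients = gradient estimate

# Toy memory of past particle evaluations of a noisy objective f(x) = ||x||^2.
X = rng.uniform(-2, 2, size=(200, 2))
f = np.sum(X ** 2, axis=1) + 0.05 * rng.standard_normal(200)

g = estimate_gradient(X, f, np.array([1.0, -0.5]))
print("estimated gradient:", g.round(2), "vs. true gradient [2.0, -1.0]")
```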
Simultaneous Perturbation Methods for Adaptive Labor Staffing in Service Systems
Service systems are labor intensive due to the large variation in the tasks
required to address service requests from multiple customers. Aligning the
staffing levels to the forecasted workloads adaptively in such systems is
nontrivial because of a large number of parameters and operational variations
leading to a huge search space. A challenging problem here is to optimize the
staffing while maintaining the system in steady state and compliant with
aggregate service level agreement (SLA) constraints. Further, because these
parameters change on a weekly basis, the optimization should not take longer
than a few hours. We formulate this problem as a constrained Markov cost
process parameterized by the (discrete) staffing levels. We propose novel
simultaneous perturbation stochastic approximation (SPSA) based SASOC (Staff
Allocation using Stochastic Optimization with Constraints) algorithms for
solving the above problem. The algorithms include both first order as well as
second order methods and incorporate SPSA based gradient estimates in the
primal, with dual ascent for the Lagrange multipliers. Both of the proposed
algorithms are online, incremental, and easy to implement. Further, they involve
a certain generalized smooth projection operator, which is essential to project
the continuous-valued worker parameter tuned by SASOC algorithms onto the
discrete set. We validated our algorithms on five real-life service systems and
compared them with OptQuest, a state-of-the-art optimization toolkit. Being 25
times faster than OptQuest, our algorithms are particularly suitable for
adaptive labor staffing. Also, we observe that our algorithms guarantee
convergence and find better solutions than OptQuest in many cases.
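The SPSA estimate at the heart of these algorithms needs only two simulations per iteration, regardless of the parameter dimension. Below is a minimal sketch on a toy staffing cost; the full SASOC machinery (SLA constraints, Lagrange multipliers, the smooth projection onto discrete levels) is not reproduced, and the cost function and gain constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulated_cost(theta):
    """Stand-in for a simulated long-run cost of a staffing vector (noisy)."""
    return np.sum((theta - np.array([12.0, 7.0, 9.0])) ** 2) \
        + rng.standard_normal()

theta = np.array([20.0, 20.0, 20.0])
for k in range(1, 3001):
    a_k = 0.5 / (k + 50) ** 0.602     # standard SPSA gain sequences
    c_k = 1.0 / k ** 0.101
    delta = rng.choice([-1.0, 1.0], size=theta.size)  # Rademacher perturbation
    # Two simulations give a full gradient estimate in every coordinate.
    g = (simulated_cost(theta + c_k * delta)
         - simulated_cost(theta - c_k * delta)) / (2 * c_k * delta)
    theta -= a_k * g

print("staffing estimate:", np.round(theta))  # ~[12, 7, 9]; rounding mimics a
                                              # projection onto integer levels
```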
Constrained Bayesian Optimization with Noisy Experiments
Randomized experiments are the gold standard for evaluating the effects of
changes to real-world systems. Data in these tests may be difficult to collect
and outcomes may have high variance, resulting in potentially large measurement
error. Bayesian optimization is a promising technique for efficiently
optimizing multiple continuous parameters, but existing approaches degrade in
performance when the noise level is high, limiting their applicability to many
randomized experiments. We derive an expression for expected improvement under
greedy batch optimization with noisy observations and noisy constraints, and
develop a quasi-Monte Carlo approximation that allows it to be efficiently
optimized. Simulations with synthetic functions show that our method
outperforms existing approaches on noisy, constrained problems. We
further demonstrate the effectiveness of the method with two real-world
experiments conducted at Facebook: optimizing a ranking system, and optimizing
server compiler flags.
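The key construction can be sketched with plain Monte Carlo in place of quasi-Monte Carlo: sample the GP posterior jointly at the observed points and a candidate, so that the incumbent best value is itself uncertain, and average the improvement. The kernel, data, and noise level below are toy choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(6)

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

X = np.array([0.1, 0.4, 0.7, 0.9])                 # observed inputs
y = np.sin(3 * X) + 0.1 * rng.standard_normal(4)   # noisy observed values
noise = 0.1 ** 2

def noisy_ei(x_cand, n_mc=2000):
    """Monte-Carlo expected improvement of a candidate under noisy data."""
    pts = np.append(X, x_cand)
    K_obs = rbf(X, X) + noise * np.eye(len(X))
    K_cross = rbf(pts, X)
    mean = K_cross @ np.linalg.solve(K_obs, y)
    cov = rbf(pts, pts) - K_cross @ np.linalg.solve(K_obs, K_cross.T)
    cov += 1e-9 * np.eye(len(pts))                 # jitter for numerical safety
    draws = rng.multivariate_normal(mean, cov, size=n_mc)
    # Improvement of the candidate over the sampled best latent value at the
    # observed points (the incumbent is uncertain because data are noisy).
    imp = np.maximum(draws[:, -1] - draws[:, :-1].max(axis=1), 0.0)
    return imp.mean()

grid = np.linspace(0.0, 1.0, 21)
scores = [noisy_ei(x) for x in grid]
print("next point to evaluate:", grid[int(np.argmax(scores))])
```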
Using models to improve optimizers for variational quantum algorithms
Variational quantum algorithms are a leading candidate for early applications
on noisy intermediate-scale quantum computers. These algorithms depend on a
classical optimization outer-loop that minimizes some function of a
parameterized quantum circuit. In practice, finite sampling error and gate
errors make this a stochastic optimization with unique challenges that must be
addressed at the level of the optimizer. The sharp trade-off between precision
and sampling time in conjunction with experimental constraints necessitates the
development of new optimization strategies to minimize overall wall clock time
in this setting. In this work, we introduce two optimization methods and
numerically compare their performance with common methods in use today. The
methods are surrogate model-based algorithms designed to improve reuse of
collected data. They do so by utilizing a least-squares quadratic fit of
sampled function values within a moving trusted region to estimate the gradient
or a policy gradient. To make fair comparisons between optimization methods, we
develop experimentally relevant cost models designed to balance efficiency in
testing and accuracy with respect to cloud quantum computing systems. The
results here underscore the need to both use relevant cost models and optimize
hyperparameters of existing optimization methods for competitive performance.
The methods introduced here have several practical advantages in realistic
experimental settings, and we have used one of them successfully in a
separately published experiment on Google's Sycamore device.
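The trust-region surrogate step is straightforward to sketch: fit a least-squares quadratic to noisy samples in a region around the current parameters and descend along the model's gradient, reusing cheap local samples instead of finite differences. The two-parameter objective, region radius, and step size below are illustrative stand-ins for a sampled circuit energy.

```python
import numpy as np

rng = np.random.default_rng(7)

def noisy_energy(theta):
    """Stand-in for a sampled variational-circuit energy (finite-shot noise)."""
    return np.cos(theta[0]) + np.cos(theta[1]) + 0.05 * rng.standard_normal()

theta, radius = np.array([0.4, -0.3]), 0.5
for it in range(30):
    # Sample points inside the trusted region around the current parameters.
    pts = theta + radius * rng.uniform(-1, 1, size=(25, 2))
    vals = np.array([noisy_energy(p) for p in pts])
    d = pts - theta
    # Least-squares quadratic model with features [1, d1, d2, d1^2, d2^2, d1*d2].
    Phi = np.column_stack([np.ones(25), d[:, 0], d[:, 1],
                           d[:, 0] ** 2, d[:, 1] ** 2, d[:, 0] * d[:, 1]])
    coef, *_ = np.linalg.lstsq(Phi, vals, rcond=None)
    grad = coef[1:3]                  # model gradient at the region center
    theta -= 0.5 * grad

print("parameters:", theta.round(2))  # minima of cos lie at odd multiples of pi
```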
Efficient Approximation of Channel Capacities
We propose an iterative method for approximately computing the capacity of
discrete memoryless channels, possibly under additional constraints on the
input distribution. Based on duality of convex programming, we derive explicit
upper and lower bounds for the capacity. The presented method requires
$O(M^2 N \sqrt{\log N}/\varepsilon)$ operations to provide an estimate of the
capacity to within $\varepsilon$, where $N$ and $M$ denote the input and
output alphabet sizes; a single iteration has a complexity of $O(MN)$. We also
show how to approximately
compute the capacity of memoryless channels having a bounded continuous input
alphabet and a countable output alphabet under some mild assumptions on the
decay rate of the channel's tail. It is shown that discrete-time Poisson
channels fall into this problem class. As an example, we compute sharp upper
and lower bounds for the capacity of a discrete-time Poisson channel with a
peak-power input constraint.
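For orientation alongside the paper's duality-based bounds, the classical Blahut-Arimoto iteration computes the same unconstrained discrete capacity. The sketch below is that textbook baseline, not the authors' method, and it handles neither input constraints nor continuous input alphabets.

```python
import numpy as np

def blahut_arimoto(P, iters=200):
    """P[x, y] = channel law p(y|x); returns capacity (bits), optimal input."""
    n_in = P.shape[0]
    p = np.full(n_in, 1.0 / n_in)
    for _ in range(iters):
        q = p @ P                                  # induced output distribution
        # d[x] = D(P[x, :] || q) in bits, with 0 log 0 treated as 0.
        d = np.sum(P * np.log2(P / q, where=P > 0,
                               out=np.zeros_like(P)), axis=1)
        p = p * np.exp2(d)                         # multiplicative update
        p /= p.sum()
    q = p @ P
    d = np.sum(P * np.log2(P / q, where=P > 0, out=np.zeros_like(P)), axis=1)
    return float(p @ d), p

# Binary symmetric channel with crossover 0.1: capacity = 1 - H(0.1) ≈ 0.531.
P = np.array([[0.9, 0.1], [0.1, 0.9]])
C, p_opt = blahut_arimoto(P)
print(f"capacity ≈ {C:.4f} bits, optimal input {p_opt.round(3)}")
```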