202 research outputs found
Parallel Deterministic and Stochastic Global Minimization of Functions with Very Many Minima
The optimization of three problems with high dimensionality and many local minima is investigated
under five different optimization algorithms: DIRECT, simulated annealing, Spall's SPSA algorithm, the KNITRO
package, and QNSTOP, a new algorithm developed at Indiana University
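Of the five, SPSA is notable for estimating the full gradient from only two noisy loss evaluations per iteration, regardless of dimension. Below is a minimal sketch of the standard SPSA recursion (textbook form with illustrative gain constants and a hypothetical noisy quadratic, not code or settings from this paper):

```python
import numpy as np

def spsa_minimize(loss, theta0, iters=2000, a=0.1, c=0.1, A=100,
                  alpha=0.602, gamma=0.101, seed=0):
    """Basic SPSA: two loss evaluations per iteration, any dimension."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(iters):
        ak = a / (k + 1 + A) ** alpha                       # step-size gain
        ck = c / (k + 1) ** gamma                           # perturbation gain
        delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher draw
        # Simultaneous-perturbation gradient estimate from two measurements.
        ghat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * ghat
    return theta

# Illustrative use on a 10-dimensional noisy quadratic with minimum at the origin.
rng = np.random.default_rng(1)
noisy_quadratic = lambda x: float(np.sum(x ** 2) + 0.01 * rng.normal())
est = spsa_minimize(noisy_quadratic, np.ones(10))
```

The gain exponents 0.602 and 0.101 are the values commonly recommended in the SPSA literature for finite-sample performance.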
Newton based Stochastic Optimization using q-Gaussian Smoothed Functional Algorithms
We present the first q-Gaussian smoothed functional (SF) estimator of the
Hessian and the first Newton-based stochastic optimization algorithm that
estimates both the Hessian and the gradient of the objective function using
q-Gaussian perturbations. Our algorithm requires only two system simulations
(regardless of the parameter dimension) and estimates both the gradient and the
Hessian at each update epoch using these. We also present a proof of
convergence of the proposed algorithm. In a related recent work (Ghoshdastidar
et al., 2013), we presented gradient SF algorithms based on the q-Gaussian
perturbations. Our work extends prior work on smoothed functional algorithms by
generalizing the class of perturbation distributions: most distributions
reported in the literature for which SF algorithms are known to work turn
out to be special cases of the q-Gaussian distribution. Besides studying the
convergence properties of our algorithm analytically, we also show the results
of several numerical simulations on a model of a queuing network, which
illustrate the significance of the proposed method. In particular, we observe
that our algorithm performs better in most cases, over a wide range of
q-values, in comparison to Newton SF algorithms with the Gaussian (Bhatnagar,
2007) and Cauchy perturbations, as well as the gradient q-Gaussian SF
algorithms (Ghoshdastidar et al., 2013). Comment: This is a longer version of the paper with the same title accepted
in Automatica
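To make the smoothed-functional idea concrete, here is a sketch of the two-simulation estimators in the special Gaussian case (q → 1), where Stein's identity yields both the gradient and the Hessian of the smoothed objective from the same pair of evaluations. The quadratic test function and all parameter values are illustrative assumptions, not the paper's q-Gaussian construction:

```python
import numpy as np

def sf_estimates(loss_batch, theta, beta, n, rng):
    """Monte-Carlo average of two-simulation Gaussian SF estimators.

    Each sample needs only two evaluations, loss(theta + beta*eta) and
    loss(theta); Stein's identity turns their difference into unbiased
    estimates of the gradient and Hessian of the Gaussian-smoothed loss."""
    p = theta.size
    eta = rng.standard_normal((n, p))                     # N(0, I) perturbations
    df = loss_batch(theta + beta * eta) - loss_batch(theta[None, :])
    grad = (eta * df[:, None] / beta).mean(axis=0)        # E[eta * df] / beta
    outer = eta[:, :, None] * eta[:, None, :] - np.eye(p)
    hess = (outer * df[:, None, None] / beta**2).mean(axis=0)
    return grad, hess

# Quadratic test case: the smoothed gradient is 2*A@theta and the smoothed
# Hessian is 2*A exactly, so the Monte-Carlo averages should match them.
A = np.array([[1.0, 0.3], [0.3, 2.0]])
quad = lambda X: np.einsum('...i,ij,...j->...', X, A, X)
theta = np.array([0.5, -0.5])
g, H = sf_estimates(quad, theta, beta=0.5, n=500_000, rng=np.random.default_rng(0))
```

The averaging over many samples is only for verifying unbiasedness; in the stochastic-approximation setting each update epoch would use a single pair of simulations with decaying gains.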
Distributed self-tuning of sensor networks
This work is motivated by the need for an ad hoc sensor network to autonomously optimise its performance for given task objectives and constraints. Arguing that communication is the main bottleneck for distributed computation in a sensor network, we formulate two approaches for optimisation of computing rates. The first is a team problem for maximising the minimum communication throughput of the sensors; the second is a game problem in which the cost for each sensor is a measure of its communication time with its neighbours. We investigate adaptive algorithms with which sensors can tune to the optimal channel attempt rates in a distributed fashion. For the team problem, the adaptive scheme is a stochastic gradient algorithm derived from the augmented Lagrangian formulation of the optimisation problem. The game formulation not only leads to an explicit characterisation of the Nash equilibrium but also to a simple iterative scheme by which sensors can learn the equilibrium attempt probabilities using only estimates of transmission and reception times from their local measurements. Our approach is promising and should be seen as a step towards developing optimally self-organising architectures for sensor networks.
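As a concrete illustration of the team (max-min throughput) formulation, the sketch below runs a noisy max-min scheme on a toy slotted-ALOHA throughput model. The model, gains, and the supergradient-following scheme are illustrative assumptions standing in for the paper's augmented-Lagrangian algorithm:

```python
import numpy as np

def maximin_attempt_rates(n=4, iters=20000, noise=0.005, seed=0):
    """Noisy max-min scheme: at each step we follow the gradient of the
    (measured) worst sensor's throughput, a supergradient of min_i T_i."""
    rng = np.random.default_rng(seed)
    p = np.full(n, 0.5)                        # initial attempt probabilities
    for k in range(iters):
        # Toy slotted-ALOHA throughputs: T_i = p_i * prod_{j != i} (1 - p_j)
        T = p * np.array([np.prod(np.delete(1 - p, i)) for i in range(n)])
        m = int(np.argmin(T + noise * rng.standard_normal(n)))  # measured worst
        grad = np.array([np.prod(np.delete(1 - p, m)) if i == m
                         else -p[m] * np.prod(np.delete(1 - p, [m, i]))
                         for i in range(n)])   # analytic d T_m / d p_i
        step = 0.05 / (1 + k / 1000)
        p = np.clip(p + step * grad, 0.01, 0.99)
    return p

# For this symmetric toy model the max-min optimum is p_i = 1/n.
p_star = maximin_attempt_rates()
```

The worst sensor raises its attempt rate while the others back off, which equalises throughputs and drives the rates toward the symmetric optimum p_i = 1/n.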
Symmetric confidence regions and confidence intervals for normal map formulations of stochastic variational inequalities
Stochastic variational inequalities (SVI) model a large class of equilibrium
problems subject to data uncertainty, and are closely related to stochastic
optimization problems. The SVI solution is usually estimated by a solution to a
sample average approximation (SAA) problem. This paper considers the normal map
formulation of an SVI, and proposes a method to build asymptotically exact
confidence regions and confidence intervals for the solution of the normal map
formulation, based on the asymptotic distribution of SAA solutions. The
confidence regions are single ellipsoids with high probability. We also discuss
the computation of simultaneous and individual confidence intervals.
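For intuition, in the simplest unconstrained smooth case the normal-map machinery reduces to the classical asymptotics of SAA estimators, where a plug-in sandwich variance yields the interval. The toy problem below (minimising E[(x − ξ)²], our illustrative choice, not an example from the paper) shows the construction:

```python
import numpy as np

# Toy stochastic optimization min_x E[(x - xi)^2]: an unconstrained, smooth
# special case where the SAA solution is asymptotically normal with
# covariance H^{-1} Sigma H^{-1} / N (the "sandwich" formula).
rng = np.random.default_rng(0)
xi = rng.normal(3.0, 2.0, size=10_000)           # one sample of size N

# SAA solution: minimizer of the sample-average objective (here, the mean).
x_hat = xi.mean()

# Plug-in sandwich variance: H = E[d^2F/dx^2] = 2 for this objective,
# Sigma = Var of the gradient dF/dx = 2*(x_hat - xi) at the SAA solution.
H = 2.0
Sigma = np.var(2.0 * (x_hat - xi), ddof=1)
se = np.sqrt(Sigma / H**2 / xi.size)

ci = (x_hat - 1.96 * se, x_hat + 1.96 * se)       # asymptotic 95% interval
```

Here the true solution is x* = 3, and the interval is asymptotically exact in the same sense as the paper's construction, though the paper's normal-map formulation also handles the constrained, piecewise-smooth SVI case.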
IPA-based tuning of queue admission control under imperfect information
Includes bibliographical references (p. 14-16). Supported by the NSF under grant ECS-8552419. Daniel Chonghwan Lee.
Stochastic Dynamic Cache Partitioning for Encrypted Content Delivery
In-network caching is an appealing solution to cope with the increasing
bandwidth demand of video, audio and data transfer over the Internet.
Nonetheless, an increasing share of content delivery services adopt encryption
through HTTPS, which is not compatible with traditional ISP-managed approaches
like transparent and proxy caching. This raises the need for solutions
involving both Internet Service Providers (ISP) and Content Providers (CP): by
design, the solution should preserve business-critical CP information (e.g.,
content popularity, user preferences) on the one hand, while allowing for a
deeper integration of caches in the ISP architecture (e.g., in 5G femto-cells)
on the other hand.
In this paper we address this issue by considering a content-oblivious
ISP-operated cache. The ISP allocates the cache storage to various content
providers so as to maximize the bandwidth savings provided by the cache: the
main novelty lies in the fact that, to protect business-critical information,
ISPs only need to measure the aggregated miss rates of the individual CPs and
do not need to be aware of the objects that are requested, as in classic
caching. We propose a cache allocation algorithm based on a perturbed
stochastic subgradient method, and prove that the algorithm converges close to
the allocation that maximizes the overall cache hit rate. We use extensive
simulations to validate the algorithm and to assess its convergence rate under
stationary and non-stationary content popularity. Our results (i) attest to the
feasibility of content-oblivious caches and (ii) show that the proposed
algorithm achieves within 10% of the global optimum in our evaluation.
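To illustrate the flavour of such a scheme, the sketch below splits a cache budget across three content providers using a projected stochastic (finite-difference) gradient method driven only by noisy aggregate hit-rate measurements. The concave log hit-rate model, weights, and gains are illustrative assumptions, not the paper's algorithm or traces:

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.array([1.0, 2.0, 3.0])    # hypothetical per-CP popularity weights
C = 9.0                          # total cache budget to split across CPs

def measured_hit_rate(s, k):
    """Noisy aggregate hit-rate measurement for CP k at allocation s.
    The concave W[k]*log(1+s_k) curve is a toy stand-in: the ISP sees only
    this number, never which objects were requested."""
    return W[k] * np.log1p(s[k]) + 0.01 * rng.standard_normal()

s = np.full(3, C / 3)            # start from an even split
delta = 0.1                      # perturbation used for the gradient estimate
for it in range(5000):
    g = np.empty(3)
    for k in range(3):           # perturbed measurement per CP
        s_pert = s.copy()
        s_pert[k] += delta
        g[k] = (measured_hit_rate(s_pert, k) - measured_hit_rate(s, k)) / delta
    step = 50.0 / (it + 50)
    s = s + step * (g - g.mean())    # move along the budget hyperplane
    s = np.clip(s, 0.0, None)
    s *= C / s.sum()                 # re-impose sum(s) == C after clipping
# for these weights the water-filling optimum is s ≈ [1, 3, 5]
```

Subtracting the mean gradient keeps the allocation on the budget constraint, and the scheme equalises the marginal hit-rate gains across providers, the water-filling condition.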
Optimization with Discrete Simultaneous Perturbation Stochastic Approximation Using Noisy Loss Function Measurements
Discrete stochastic optimization considers the problem of minimizing (or
maximizing) loss functions defined on discrete sets, where only noisy
measurements of the loss functions are available. The discrete stochastic
optimization problem is widely applicable in practice, and many algorithms have
been considered to solve this kind of optimization problem. Motivated by the
efficient algorithm of simultaneous perturbation stochastic approximation
(SPSA) for continuous stochastic optimization problems, we introduce the middle
point discrete simultaneous perturbation stochastic approximation (DSPSA)
algorithm for the stochastic optimization of a loss function defined on a
p-dimensional grid of points in Euclidean space. We show that the sequence
generated by DSPSA converges to the optimal point under some conditions.
Consistent with other stochastic approximation methods, DSPSA formally
accommodates noisy measurements of the loss function. We also analyze the rate
of convergence of DSPSA by deriving an upper bound on the mean squared error of
the generated sequence. In order to compare the performance of DSPSA
with the other algorithms such as the stochastic ruler algorithm (SR) and the
stochastic comparison algorithm (SC), we set up a bridge between DSPSA and the
other two algorithms by comparing, in a big-O sense, the probability of not
achieving the optimal solution. We present theoretical and numerical
comparisons of DSPSA, SR, and SC. In addition, we consider an
application of DSPSA towards developing optimal public health strategies for
containing the spread of influenza given limited societal resources.
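A minimal sketch of the middle-point idea follows, with illustrative gain sequences and a hypothetical noisy quadratic on the integer grid (not the paper's numerical setup). Perturbing around the middle point pi(theta) = floor(theta) + 1/2 guarantees that both loss evaluations land on the discrete grid:

```python
import numpy as np

def dspsa_minimize(loss, theta0, iters=5000, a=0.5, A=100, alpha=0.602, seed=0):
    """Middle-point DSPSA sketch: the two perturbed points around
    pi(theta) = floor(theta) + 1/2 are always integer grid points."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(iters):
        ak = a / (k + 1 + A) ** alpha
        pi = np.floor(theta) + 0.5                      # middle point
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        y_plus = loss(pi + delta / 2)                   # integer grid point
        y_minus = loss(pi - delta / 2)                  # integer grid point
        ghat = (y_plus - y_minus) / delta               # delta_i = +/-1
        theta = theta - ak * ghat
    return theta

# Noisy separable quadratic with integer minimizer at (2, -3, 5).
target = np.array([2.0, -3.0, 5.0])
rng = np.random.default_rng(1)
noisy = lambda x: float(np.sum((x - target) ** 2) + 0.1 * rng.normal())
theta = dspsa_minimize(noisy, np.zeros(3))
```

The continuous iterate hovers ever more tightly around the integer optimum as the gains decay, so rounding the final iterate recovers the discrete solution.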
- …