Adaptive grid semidefinite programming for finding optimal designs
We find optimal designs for linear models using a novel algorithm that iteratively combines a semidefinite programming (SDP) approach with adaptive grid techniques. The proposed algorithm is also adapted to find locally optimal designs for nonlinear models. The search space is first discretized, and SDP is applied to find the optimal design based on the initial grid. The points in the next grid set are points that maximize the dispersion function of the SDP-generated optimal design using nonlinear programming. The procedure is repeated until a user-specified stopping rule is reached. The proposed algorithm is broadly applicable, and we demonstrate its flexibility using (i) models with one or more variables and (ii) differentiable design criteria, such as A- and D-optimality, and non-differentiable criteria, such as E-optimality, including the mathematically more challenging case where the minimum eigenvalue of the information matrix of the optimal design has geometric multiplicity larger than 1. Our algorithm is computationally efficient because it is based on mathematical programming tools, so optimality is assured at each stage; it also exploits the convexity of the problems whenever possible. Using several linear and nonlinear models with one or more factors, we show that the proposed algorithm can efficiently find optimal designs
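The iterative idea in this abstract can be illustrated in miniature. The sketch below is not the authors' SDP code; as a stand-in for the SDP step it uses the classical multiplicative (Titterington-style) weight update on a fixed grid, which relies on the same dispersion (variance) function the abstract mentions. The model, grid, and iteration count are illustrative choices.

```python
# Sketch: approximate a D-optimal design for the linear model f(x) = (1, x)
# on a fixed grid via the multiplicative algorithm -- a simple stand-in for
# the SDP step described in the abstract, not the authors' method. The
# update reweights grid points by the dispersion function
# d(x, w) = f(x)^T M(w)^{-1} f(x); at the D-optimum, max_x d(x, w) = 2.

def info_matrix(grid, w):
    # M(w) = sum_i w_i f(x_i) f(x_i)^T for f(x) = (1, x), as a 2x2 matrix
    m00 = sum(w)
    m01 = sum(wi * x for wi, x in zip(w, grid))
    m11 = sum(wi * x * x for wi, x in zip(w, grid))
    return [[m00, m01], [m01, m11]]

def dispersion(x, M):
    # d(x, w) = f(x)^T M^{-1} f(x), using the closed-form 2x2 inverse
    det = M[0][0] * M[1][1] - M[0][1] ** 2
    inv = [[M[1][1] / det, -M[0][1] / det],
           [-M[0][1] / det, M[0][0] / det]]
    f = (1.0, x)
    return sum(f[i] * inv[i][j] * f[j] for i in range(2) for j in range(2))

def d_optimal_weights(grid, iters=1000):
    w = [1.0 / len(grid)] * len(grid)          # start from the uniform design
    for _ in range(iters):
        M = info_matrix(grid, w)
        d = [dispersion(x, M) for x in grid]
        w = [wi * di / 2.0 for wi, di in zip(w, d)]  # divide by dim(f) = 2
    return w

grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
w = d_optimal_weights(grid)
# For simple linear regression on [-1, 1], the D-optimal design puts
# weight 1/2 on each endpoint, so w[0] and w[-1] approach 0.5.
```

The update preserves the total weight (the weighted average of the dispersion function always equals the dimension of the model), so each iterate remains a valid design.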
Dispersion for Data-Driven Algorithm Design, Online Learning, and Private Optimization
Data-driven algorithm design, that is, choosing the best algorithm for a
specific application, is a crucial problem in modern data science.
Practitioners often optimize over a parameterized algorithm family, tuning
parameters based on problems from their domain. These procedures have
historically come with no guarantees, though a recent line of work studies
algorithm selection from a theoretical perspective. We advance the foundations
of this field in several directions: we analyze online algorithm selection,
where problems arrive one-by-one and the goal is to minimize regret, and
private algorithm selection, where the goal is to find good parameters over a
set of problems without revealing sensitive information contained therein. We
study important algorithm families, including SDP-rounding schemes for problems
formulated as integer quadratic programs, and greedy techniques for canonical
subset selection problems. In these cases, the algorithm's performance is a
volatile and piecewise Lipschitz function of its parameters, since tweaking the
parameters can completely change the algorithm's behavior. We give a sufficient
and general condition, dispersion, defining a family of piecewise Lipschitz
functions that can be optimized online and privately, which includes the
functions measuring the performance of the algorithms we study. Intuitively, a
set of piecewise Lipschitz functions is dispersed if no small region contains
many of the functions' discontinuities. We present general techniques for
online and private optimization of the sum of dispersed piecewise Lipschitz
functions. We improve over the best-known regret bounds for a variety of
problems, prove regret bounds for problems not previously studied, and give
matching lower bounds. We also give matching upper and lower bounds on the
utility loss due to privacy. Moreover, we uncover dispersion in auction design
and pricing problems
Online Pricing with Offline Data: Phase Transition and Inverse Square Law
This paper investigates the impact of pre-existing offline data on online
learning, in the context of dynamic pricing. We study a single-product dynamic
pricing problem over a finite selling horizon. The demand in each period is
determined by the price of the product according to a linear demand model with
unknown parameters. We assume that before the start of the selling horizon,
the seller already has some pre-existing offline data. The offline data set
contains historical samples, each of which is an input-output pair consisting
of a historical price and an associated demand observation. The seller wants
to utilize both the pre-existing offline data and the sequential online data
to minimize the regret of the online learning process.
We characterize the joint effect of the size, location, and dispersion of the
offline data on the optimal regret of the online learning process.
Specifically, the size, location, and dispersion of the offline data are
measured by the number of historical samples, the distance between the
average historical price and the optimal price, and the standard deviation of
the historical prices, respectively. We characterize the optimal regret rate
and design a learning algorithm based on the "optimism in the face of
uncertainty" principle whose regret is optimal up to a logarithmic factor.
Our results reveal surprising transformations of the optimal regret rate with
respect to the size of the offline data, which we refer to as phase
transitions. In addition, our results demonstrate that the location and
dispersion of the offline data also have an intrinsic effect on the optimal
regret, and we quantify this effect via the inverse-square law.

Comment: Forthcoming in Management Science
Considering Transmission Impairments in Wavelength Routed Networks
Abstract: We consider dynamically reconfigurable wavelength-routed networks in which lightpaths carrying IP traffic are established on demand. We face the Routing and Wavelength Assignment (RWA) problem, considering as constraints the physical impairments that arise in all-optical wavelength-routed networks. In particular, we study the impact of the physical layer when establishing a lightpath in a transparent optical network. Because no signal transformation and regeneration occurs at intermediate nodes, noise and signal distortions due to non-ideal transmission devices accumulate along the physical path and degrade the quality of the received signal. We propose a simple yet accurate model for the physical layer which considers both static and dynamic impairments, i.e., nonlinear effects depending on the actual wavelength/lightpath allocation. We then propose a novel algorithm to solve the RWA problem that explicitly considers the physical impairments. Simulation results show the effectiveness of our approach. Indeed, when transmission impairments come into play, an accurate selection of paths and wavelengths driven by physical considerations is mandatory.
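The structure of an impairment-aware RWA decision can be sketched generically. The code below is an illustrative toy, not the paper's model: it routes on the least-impairment path (Dijkstra over a static per-link impairment cost), admits the lightpath only if the accumulated impairment stays below a threshold, and then assigns a wavelength first-fit under the wavelength-continuity constraint. The topology, costs, and threshold are made up.

```python
import heapq

# Toy impairment-aware RWA sketch (illustrative, not the paper's model).
THRESHOLD = 10.0          # max tolerable accumulated impairment on a path
N_WAVELENGTHS = 4

# graph[u] = list of (neighbor, impairment_cost), e.g. accumulated ASE noise
graph = {
    "A": [("B", 3.0), ("C", 1.0)],
    "B": [("A", 3.0), ("D", 2.0)],
    "C": [("A", 1.0), ("D", 6.0)],
    "D": [("B", 2.0), ("C", 6.0)],
}
# busy[(u, v)] = set of wavelengths already in use on that directed link
busy = {("A", "B"): {0}, ("B", "A"): {0}}

def least_impairment_path(src, dst):
    # Dijkstra over accumulated impairment instead of hop count
    heap = [(0.0, src, [src])]
    seen = set()
    while heap:
        cost, u, path = heapq.heappop(heap)
        if u == dst:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, c in graph[u]:
            if v not in seen:
                heapq.heappush(heap, (cost + c, v, path + [v]))
    return float("inf"), None

def assign_lightpath(src, dst):
    cost, path = least_impairment_path(src, dst)
    if path is None or cost > THRESHOLD:
        return None                          # blocked: signal quality too low
    links = list(zip(path, path[1:]))
    for wl in range(N_WAVELENGTHS):          # first-fit, continuity constraint
        if all(wl not in busy.get(link, set()) for link in links):
            for link in links:
                busy.setdefault(link, set()).add(wl)
            return path, wl
    return None                              # blocked: no common wavelength

result = assign_lightpath("A", "D")
```

A real model, as the abstract notes, would also update dynamic (allocation-dependent) impairments such as crosstalk after each assignment; this sketch keeps only the static part.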
Optimal exact designs of experiments via Mixed Integer Nonlinear Programming
Optimal exact designs are problematic to find and study because there is no unified theory for determining them and studying their properties. Each has its own challenges, and when a method exists to confirm the design optimality, it is invariably applicable to the particular problem only. We propose a systematic approach to construct optimal exact designs by incorporating the Cholesky decomposition of the Fisher information matrix in a Mixed Integer Nonlinear Programming (MINLP) formulation. As examples, we apply the methodology to find D- and A-optimal exact designs for linear and nonlinear models using global or local optimizers. Our examples include design problems with constraints on the locations or the number of replicates at the optimal design points
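For tiny instances, the integer search underlying an exact-design problem can be made concrete by brute force. The snippet below is a toy stand-in for the MINLP formulation, not the authors' method: it enumerates all allocations of a fixed number of runs over a candidate set and picks the one maximizing the determinant of the Fisher information matrix (D-optimality). The model and candidate set are illustrative.

```python
from itertools import combinations_with_replacement

# Brute-force toy stand-in for the MINLP search: find a D-optimal *exact*
# design with N_RUNS runs for the linear model f(x) = (1, x) on a candidate
# set, by enumerating allocations and maximizing det of the Fisher
# information matrix. Real MINLP formulations scale far beyond this.

CANDIDATES = [-1.0, -0.5, 0.0, 0.5, 1.0]
N_RUNS = 4

def det_info(design):
    # Fisher information M = sum_x f(x) f(x)^T with f(x) = (1, x); 2x2 det
    m00 = len(design)
    m01 = sum(design)
    m11 = sum(x * x for x in design)
    return m00 * m11 - m01 * m01

best = max(combinations_with_replacement(CANDIDATES, N_RUNS), key=det_info)
# For this model the D-optimal exact design splits the runs evenly
# between the endpoints -1 and 1.
```

Enumeration grows combinatorially in the number of candidates and runs, which is exactly why the abstract's MINLP formulation, with optimality certified by the solver, is attractive.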
Min Max Generalization for Two-stage Deterministic Batch Mode Reinforcement Learning: Relaxation Schemes
We study the minmax optimization problem introduced in [22] for computing
policies for batch mode reinforcement learning in a deterministic setting.
First, we show that this problem is NP-hard. In the two-stage case, we provide
two relaxation schemes. The first relaxation scheme works by dropping some
constraints in order to obtain a problem that is solvable in polynomial time.
The second relaxation scheme, based on a Lagrangian relaxation where all
constraints are dualized, leads to a conic quadratic programming problem. We
also theoretically prove and empirically illustrate that both relaxation
schemes provide better results than those given in [22]
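The two relaxation ideas can be seen on a one-line toy problem. The example below is generic and not the paper's reinforcement-learning formulation: for minimizing x² subject to x ≥ 1, dropping the constraint gives one lower bound, while dualizing it (Lagrangian relaxation) gives a tighter one, here recovering the true optimum.

```python
# Toy illustration (not the paper's min-max RL problem) of the two
# relaxation schemes: for  min x^2  s.t.  x >= 1,
# (i) dropping the constraint and (ii) dualizing it both lower-bound the
# optimum, with the Lagrangian bound at least as tight.

def objective(x):
    return x * x

true_opt = objective(1.0)            # constrained optimum: x = 1, value 1

# (i) Drop the constraint: min over all x of x^2  ->  bound 0
drop_bound = 0.0

# (ii) Lagrangian dual: g(lam) = min_x [x^2 + lam * (1 - x)]
#      inner minimum at x = lam / 2, so g(lam) = lam - lam^2 / 4;
#      maximize g over lam >= 0 on a grid.
lagrangian_bound = max(lam - lam * lam / 4
                       for lam in (i / 100 for i in range(401)))
# g peaks at lam = 2, where the bound equals the true optimum.
```

This mirrors the abstract's finding qualitatively: the constraint-dropping scheme is cheaper but looser, while the dualized (conic quadratic, in the paper) scheme yields stronger bounds.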