Overcommitment in Cloud Services -- Bin packing with Chance Constraints
This paper considers a classical resource allocation problem: scheduling
jobs on machines. A prominent recent application is cloud computing, where jobs
arrive in an online fashion with capacity requirements and need to be
immediately scheduled on physical machines in data centers. It is often
observed that the requested capacities are not fully utilized, hence offering
an opportunity to employ an overcommitment policy, i.e., selling resources
beyond capacity. Setting the right overcommitment level can yield a
significant cost reduction for the cloud provider, while incurring only a very
low risk of violating capacity constraints. We introduce and study a model that
quantifies the value of overcommitment by modeling the problem as a bin packing
problem with chance constraints. We then propose an alternative formulation that
transforms each chance constraint into a submodular function. We show that our
model captures the risk pooling effect and can guide scheduling and
overcommitment decisions. We also develop a family of online algorithms that
are intuitive, easy to implement, and provide a constant-factor guarantee
relative to the optimum. Finally, we calibrate our model using realistic workload data, and
test our approach in a practical setting. Our analysis and experiments
illustrate the benefit of overcommitment in cloud services, and suggest a cost
reduction of 1.5% to 17%, depending on the provider's risk tolerance.
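As a rough illustration of the model class (a schematic sketch under simplifying assumptions, not necessarily the paper's exact formulation), a chance-constrained bin packing for overcommitment can be written as

\[
\min \sum_i y_i
\quad \text{s.t.} \quad
\mathbb{P}\Big(\sum_j X_j\, x_{ij} > C\Big) \le \epsilon \quad \forall i,
\qquad
\sum_i x_{ij} = 1 \quad \forall j,
\qquad
x_{ij} \le y_i,
\]

where x_{ij} assigns job j to machine i, y_i opens machine i, X_j is the random realized usage of job j, C is the physical capacity, and \epsilon is the provider's risk tolerance. If, for instance, usages are modeled as independent Gaussians X_j ~ N(\mu_j, \sigma_j^2), each chance constraint has the deterministic equivalent

\[
\sum_j \mu_j\, x_{ij} + z_{1-\epsilon} \sqrt{\sum_j \sigma_j^2\, x_{ij}} \;\le\; C,
\]

whose left-hand side is a submodular function of the set of jobs placed on machine i, and whose square-root term grows more slowly than the sum of the means, which is the risk-pooling effect that makes overcommitment attractive.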
Anytime Point-Based Approximations for Large POMDPs
The Partially Observable Markov Decision Process (POMDP) has long been recognized as
a rich framework for real-world planning and control problems, especially in
robotics. However, exact solutions in this framework are typically
computationally intractable for all but the smallest problems. A well-known
technique for speeding up POMDP solving involves performing value backups at
specific belief points, rather than over the entire belief simplex. The
efficiency of this approach, however, depends greatly on the selection of
points. This paper presents a set of novel techniques for selecting informative
belief points which work well in practice. The point selection procedure is
combined with point-based value backups to form an effective anytime POMDP
algorithm called Point-Based Value Iteration (PBVI). The first aim of this
paper is to introduce this algorithm and present a theoretical analysis
justifying the choice of belief selection technique. The second aim of this
paper is to provide a thorough empirical comparison between PBVI and other
state-of-the-art POMDP methods, in particular the Perseus algorithm, in an
effort to highlight their similarities and differences. Evaluation is performed
using both standard POMDP domains and realistic robotic tasks.
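To make the point-based backup concrete, the following is a minimal numpy sketch of one round of value backups over a fixed belief set (an illustrative reconstruction of the standard point-based backup, not code from the paper; the array layouts and all names are assumptions):

import numpy as np

def pbvi_backup(beliefs, Gamma, T, O, R, gamma):
    # Illustrative sketch only: array layouts and names are assumptions, not the paper's code.
    # beliefs: list of belief vectors b over states (each shape [S])
    # Gamma:   alpha-vectors from the previous iteration (each shape [S])
    # T[a][s, s'] = P(s' | s, a);  O[a][s', o] = P(o | s', a);  R[a][s] = immediate reward
    A, n_obs = len(T), O[0].shape[1]

    # Project every alpha-vector one step back through each (action, observation) pair:
    # g_{a,o,alpha}(s) = sum_{s'} T[a][s, s'] * O[a][s', o] * alpha(s')
    proj = {(a, o): [T[a] @ (O[a][:, o] * alpha) for alpha in Gamma]
            for a in range(A) for o in range(n_obs)}

    new_Gamma = []
    for b in beliefs:
        best_val, best_alpha = -np.inf, None
        for a in range(A):
            alpha_a = np.asarray(R[a], dtype=float)
            for o in range(n_obs):
                # At this belief point, keep only the best projected vector per observation.
                alpha_a = alpha_a + gamma * max(proj[(a, o)], key=lambda g: float(g @ b))
            if float(alpha_a @ b) > best_val:
                best_val, best_alpha = float(alpha_a @ b), alpha_a
        new_Gamma.append(best_alpha)  # one backed-up alpha-vector per belief point
    return new_Gamma

Interleaving this backup with an expansion step that adds reachable beliefs far from the current set is what gives the anytime behavior described above.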
Wasserstein Distributionally Robust Look-Ahead Economic Dispatch
We consider the problem of look-ahead economic dispatch (LAED) with uncertain
renewable energy generation. The goal of this problem is to minimize the cost
of conventional energy generation subject to uncertain operational constraints.
The risk of violating these constraints must be below a given threshold for a
family of probability distributions with characteristics similar to observed
past data or predictions. We present two data-driven approaches based on two
novel mathematical reformulations of this distributionally robust decision
problem. The first one is a tractable convex program in which the uncertain
constraints are defined via the distributionally robust
conditional-value-at-risk. The second one is a scalable robust optimization
program that yields an approximate distributionally robust chance-constrained
LAED. Numerical experiments on the IEEE 39-bus system with real solar
production data and forecasts illustrate the effectiveness of these approaches.
We discuss how system operators should tune these techniques to achieve the
desired robustness-performance trade-off, and we compare their computational
scalability.
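In schematic notation (chosen here for illustration rather than taken from the paper), the uncertain constraints take the distributionally robust chance-constrained form

\[
\inf_{\mathbb{Q} \in \mathcal{B}_\rho(\hat{\mathbb{P}}_N)} \mathbb{Q}\big( g(x,\xi) \le 0 \big) \;\ge\; 1-\epsilon,
\qquad
\mathcal{B}_\rho(\hat{\mathbb{P}}_N) = \big\{ \mathbb{Q} : W(\mathbb{Q}, \hat{\mathbb{P}}_N) \le \rho \big\},
\]

where x is the dispatch decision, \xi the uncertain renewable generation, g an operational constraint function, \hat{\mathbb{P}}_N the empirical distribution of N past observations or forecasts, W the Wasserstein distance, and \rho the radius of the ambiguity set. A standard convex and conservative surrogate, in the spirit of the first reformulation above, bounds the worst-case conditional value-at-risk instead of the worst-case violation probability:

\[
\sup_{\mathbb{Q} \in \mathcal{B}_\rho(\hat{\mathbb{P}}_N)} \mathrm{CVaR}^{\mathbb{Q}}_{\epsilon}\big( g(x,\xi) \big) \;\le\; 0.
\]

Tuning \rho and \epsilon is what trades robustness against dispatch cost.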
The State-of-the-Art Survey on Optimization Methods for Cyber-physical Networks
Cyber-Physical Systems (CPS) are increasingly complex and frequently
integrated into modern societies via critical infrastructure systems, products,
and services. Consequently, there is a need for reliable functionality of these
complex systems under various scenarios, from physical failures due to aging,
through to cyber attacks. Indeed, the development of effective strategies to
restore disrupted infrastructure systems continues to be a major challenge.
To date, an increasing number of papers have evaluated
cyber-physical infrastructures, yet a comprehensive review focusing on
mathematical modeling and different optimization methods is still lacking.
Thus, this review paper appraises the literature on optimization techniques for
CPS facing disruption, to synthesize key findings on the current methods in
this domain. A total of 108 relevant research papers are reviewed following an
extensive assessment of all major scientific databases. The main mathematical
modeling practices and optimization methods are identified for both
deterministic and stochastic formulations, categorizing them based on the
solution approach (exact, heuristic, meta-heuristic), objective function, and
network size. We also perform keyword clustering and bibliographic coupling
analyses to summarize the current research trends. Future research needs in
terms of the scalability of optimization algorithms are discussed. Overall,
there is a need to shift towards more scalable optimization solution
algorithms, empowered by data-driven methods and machine learning, to provide
reliable decision-support systems for decision-makers and practitioners.
Optimization for L1-Norm Error Fitting via Data Aggregation
We propose a data aggregation-based algorithm with monotonic convergence to a
global optimum for a generalized version of the L1-norm error fitting model
under an assumption on the fitting function. The proposed algorithm generalizes
the recent algorithm in the literature, aggregate and iterative disaggregate
(AID), which selectively solves three specific L1-norm error fitting problems.
With the proposed algorithm, any L1-norm error fitting model can be solved
to optimality, provided it matches the generalized problem form and
the fitting function satisfies the assumption. The proposed algorithm can also
solve multi-dimensional fitting problems with arbitrary constraints on the
fitting coefficients matrix. The generalized problem includes popular models
such as regression and the orthogonal Procrustes problem. The results of the
computational experiments show that the proposed algorithm is faster than the
state-of-the-art benchmarks for L1-norm regression subset selection and L1-norm
regression over a sphere. Further, the relative performance of the proposed
algorithm improves as data size increases.
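As a rough sketch of how aggregate-and-disaggregate works in the simplest special case mentioned above, L1-norm linear regression, the following code solves an aggregated problem over summed rows, checks residual signs within each cluster, and splits clusters with mixed signs. The clustering rule, solver, and all names are illustrative assumptions; this is not a reproduction of the paper's generalized algorithm.

import numpy as np
from scipy.optimize import linprog

def l1_fit(X, y):
    # min_beta sum_i |y_i - X_i beta|, solved as a linear program with
    # variables [beta (p), t (n)] and constraints -t <= y - X beta <= t.
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(n)])
    A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (p + n))
    return res.x[:p]

def aid_l1_regression(X, y, n_init_clusters=4, max_iter=100):
    # Aggregate-and-iteratively-disaggregate style sketch for L1 regression.
    n = len(y)
    clusters = [idx for idx in np.array_split(np.arange(n), n_init_clusters) if len(idx)]
    for _ in range(max_iter):
        # Aggregated problem: one summed row per cluster. Because
        # |sum of residuals| <= sum of |residuals|, its optimum lower-bounds the original.
        Xa = np.array([X[idx].sum(axis=0) for idx in clusters])
        ya = np.array([y[idx].sum() for idx in clusters])
        beta = l1_fit(Xa, ya)
        residuals = y - X @ beta
        # If residual signs agree within every cluster, the bound is tight and
        # beta is globally optimal for the original (disaggregated) problem.
        new_clusters, split = [], False
        for idx in clusters:
            pos, neg = idx[residuals[idx] >= 0], idx[residuals[idx] < 0]
            if len(pos) and len(neg):
                new_clusters.extend([pos, neg])  # disaggregate mixed-sign cluster
                split = True
            else:
                new_clusters.append(idx)
        clusters = new_clusters
        if not split:
            break
    return beta

In the worst case every cluster is eventually reduced to a single observation, at which point the aggregated and original problems coincide, which is what guarantees finite convergence of this kind of scheme.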