Learning-aided Stochastic Network Optimization with Imperfect State Prediction
We investigate the problem of stochastic network optimization in the presence
of imperfect state prediction and non-stationarity. Based on a novel
distribution-accuracy curve prediction model, we develop the predictive
learning-aided control (PLC) algorithm, which jointly utilizes historic and
predicted network state information for decision making. PLC is an online
algorithm that requires no a priori statistical information of the system, and
consists of three key components, namely sequential distribution estimation and
change detection, dual learning, and online queue-based control.
Specifically, we show that PLC simultaneously achieves good long-term
performance, short-term queue size reduction, accurate change detection, and
fast algorithm convergence. In particular, for stationary networks, PLC
achieves a near-optimal utility-delay tradeoff. For non-stationary networks,
PLC obtains a comparable utility-backlog tradeoff for distributions that last
long enough, where the required duration depends on the prediction accuracy
(the Backpressure algorithm \cite{neelynowbook} requires a longer interval for
the same utility performance, with a larger backlog). Moreover, PLC detects
distribution changes faster with high probability, by a margin that scales
with the prediction window size, and achieves a fast convergence time. Our
results demonstrate that state prediction (even imperfect) can help (i)
achieve faster detection and convergence, and (ii) obtain better utility-delay
tradeoffs.
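The online queue-based control component builds on classic drift-plus-penalty ideas. The toy simulation below illustrates that primitive only: a controller picks, each slot, the action minimizing a weighted sum of cost and (negative) queue drift. All numbers here (arrival rate, cost function, the weight V) are illustrative assumptions, not values from the paper.

```python
# Minimal drift-plus-penalty sketch: one queue, three service rates.
# Illustrative assumptions only; this is not the PLC algorithm itself.
import random

def drift_plus_penalty_step(Q, V, actions, cost, served):
    # Choose the action minimizing V*cost(a) - Q*served(a):
    # a long queue pushes toward serving more, a short one toward cheap actions.
    return min(actions, key=lambda a: V * cost(a) - Q * served(a))

def simulate(T=10000, V=20.0, seed=0):
    rng = random.Random(seed)
    actions = [0, 1, 2]            # available service rates per slot (assumed)
    cost = lambda a: a * a         # convex service cost (assumed)
    served = lambda a: a
    Q, total_cost = 0.0, 0.0
    for _ in range(T):
        a = drift_plus_penalty_step(Q, V, actions, cost, served)
        arrival = rng.random() < 0.8       # Bernoulli(0.8) arrivals (assumed)
        Q = max(Q + arrival - served(a), 0.0)
        total_cost += cost(a)
    return Q, total_cost / T

final_q, avg_cost = simulate()
```

With these numbers the queue hovers near V, illustrating the utility-delay tradeoff: larger V lowers average cost but lengthens the queue.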
Fast-Convergent Learning-aided Control in Energy Harvesting Networks
In this paper, we present a novel learning-aided energy management scheme for
multihop energy harvesting networks. Different from prior works on this
problem, our algorithm explicitly incorporates information learning into
system control via a step called \emph{perturbed dual learning}. The scheme
does not require any statistical information of the system dynamics for
implementation, and efficiently resolves the challenging energy outage
problem. We show that it achieves a near-optimal utility-delay tradeoff with
only small energy buffers. More interestingly, it possesses a fast
\emph{convergence time}, much faster than both pure queue-based techniques and
approaches that rely purely on learning the system statistics. This fast
convergence property makes the scheme more adaptive and efficient in resource
allocation in dynamic environments. The design and analysis demonstrate how
system control algorithms can be augmented by learning and what the benefits
are. The methodology and algorithm can also be applied to similar problems,
e.g., processing networks, where nodes require a nonzero amount of contents to
support their actions.
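One way to picture the energy outage issue is a perturbation threshold: a node only spends energy when its stored level sits safely above a margin, so it can never be caught mid-action without energy. The toy loop below shows that idea alone; the harvest rate, arrival rate, threshold, and buffer cap are all assumptions for illustration, and this is not the paper's scheme.

```python
# Toy energy-aware control with a perturbation threshold theta:
# transmit one packet only when stored energy exceeds theta,
# which rules out energy outage by construction. Assumed parameters.
import random

def simulate(T=20000, theta=5, seed=1):
    rng = random.Random(seed)
    Q, E = 0, 0            # data queue and energy store
    outages = 0
    for _ in range(T):
        p = 1 if (Q > 0 and E > theta) else 0   # spend only above threshold
        if p > E:                               # would be an outage...
            outages += 1
            p = 0                               # ...but cannot occur here
        E -= p
        Q = max(Q - p, 0)
        Q += 1 if rng.random() < 0.5 else 0     # data arrivals (assumed)
        E = min(E + (1 if rng.random() < 0.7 else 0), 50)  # harvest, finite buffer
    return Q, E, outages

Q, E, outages = simulate()
```

Because the threshold is checked before every transmission, the outage counter stays at zero while the queue remains served whenever energy allows.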
The Value-of-Information in Matching with Queues
We consider the problem of \emph{optimal matching with queues} in dynamic
systems and investigate the value-of-information. In such systems, the
operators match tasks and resources stored in queues, with the objective of
maximizing the system utility of the matching reward profile, minus the average
matching cost. This problem appears in many practical systems and the main
challenges are the no-underflow constraints, and the lack of matching-reward
information and system dynamics statistics. We develop two online matching
algorithms, Learning-aided Reward optimAl Matching (LRAM) and a dual-learning
variant of it, to effectively resolve both challenges. Both algorithms are
equipped with a learning module for estimating the matching-reward
information, while the dual-learning variant incorporates an additional module
for learning the system dynamics. We show that both algorithms achieve a
close-to-optimal utility performance, while the dual-learning variant achieves
a faster convergence speed and a better delay than LRAM; the resulting delay
and convergence bounds depend on the maximum estimation errors for the reward
and the system dynamics. Our results reveal that information about different
system components can play very different roles in algorithm performance and
provide a systematic way of designing joint learning-control algorithms for
dynamic systems.
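The distinctive constraint here is no-underflow: a (task, resource) pair can be matched only when both queues are non-empty. The sketch below shows a greedy matcher that respects that constraint while using estimated rewards; the reward matrix, arrival rates, and cost are assumptions for illustration, not the paper's LRAM algorithm.

```python
# Greedy queue-based matching under no-underflow constraints.
# Illustrative assumptions throughout; not the paper's algorithm.
import random

def match_step(tasks, resources, reward_est, cost):
    best, best_val = None, 0.0
    for i, qt in enumerate(tasks):
        for j, qr in enumerate(resources):
            if qt > 0 and qr > 0:              # no-underflow constraint
                val = reward_est[i][j] - cost  # estimated net reward
                if val > best_val:
                    best, best_val = (i, j), val
    return best

def simulate(T=5000, seed=2):
    rng = random.Random(seed)
    tasks, resources = [0, 0], [0, 0]
    reward_est = [[3.0, 1.0], [1.0, 2.0]]      # assumed reward estimates
    total_reward = 0.0
    for _ in range(T):
        tasks[rng.randrange(2)] += 1 if rng.random() < 0.6 else 0
        resources[rng.randrange(2)] += 1 if rng.random() < 0.6 else 0
        m = match_step(tasks, resources, reward_est, cost=0.5)
        if m:
            i, j = m
            tasks[i] -= 1
            resources[j] -= 1
            total_reward += reward_est[i][j] - 0.5
    return total_reward, tasks, resources

total, tasks, resources = simulate()
```

In a learning-aided version, `reward_est` would itself be refined online from observed rewards rather than fixed up front.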
Algorithms for Max-Min Share Fair Allocation of Indivisible Chores
We consider Max-min Share (MmS) fair allocations of indivisible chores (items with negative utilities). We show that allocation of chores and classical allocation of goods (items with positive utilities) have some fundamental connections but also differences which prevent a straightforward application of algorithms for goods in the chores setting and vice versa. We prove that an MmS allocation need not exist for chores and that computing an MmS allocation, if it exists, is strongly NP-hard. In view of these non-existence and complexity results, we present a polynomial-time 2-approximation algorithm for MmS fairness for chores. We then introduce a new fairness concept called optimal MmS that represents the best possible allocation in terms of MmS that is guaranteed to exist. We use connections to parallel machine scheduling to give (1) a polynomial-time approximation scheme for computing an optimal MmS allocation when the number of agents is fixed and (2) an effective and efficient heuristic with an ex-post worst-case analysis.
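The scheduling connection is easy to see in the identical-valuations case, where an optimal MmS allocation of chores amounts to makespan minimization on identical machines. Below is the classic Longest Processing Time (LPT) greedy for that scheduling problem, shown purely as an illustration of the connection; it is not the paper's 2-approximation algorithm.

```python
# LPT greedy for makespan minimization on identical machines:
# assign chores largest-first, each to the currently least-loaded agent.
# Illustrates the MmS-chores <-> scheduling connection for identical costs.
import heapq

def lpt_allocate(costs, n_agents):
    """Assign chores (positive costs) to agents, largest first,
    always to the currently least-loaded agent."""
    loads = [(0.0, a, []) for a in range(n_agents)]
    heapq.heapify(loads)
    for c in sorted(costs, reverse=True):
        load, a, items = heapq.heappop(loads)   # least-loaded agent
        items.append(c)
        heapq.heappush(loads, (load + c, a, items))
    return loads

# Six chores split between two agents; total cost 16, so 8 each is optimal.
loads = lpt_allocate([4, 3, 3, 2, 2, 2], n_agents=2)
max_load = max(l for l, _, _ in loads)
```

On this instance LPT happens to hit the optimum of 8; in general it is a well-known 4/3-approximation for makespan.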
Leximin Approximation: From Single-Objective to Multi-Objective
Leximin is a common approach to multi-objective optimization, frequently
employed in fair division applications. In leximin optimization, one first aims
to maximize the smallest objective value; subject to this, one maximizes the
second-smallest objective; and so on. Often, even the single-objective problem
of maximizing the smallest value cannot be solved accurately. What can we hope
to accomplish for leximin optimization in this situation? Recently, Henzinger
et al. (2022) defined a notion of \emph{approximate} leximin optimality. Their
definition, however, considers only an additive approximation.
In this work, we first generalize the notion of approximate leximin optimality
to allow both multiplicative and additive errors. We then show how to compute,
in polynomial time, such an approximate leximin solution, using an oracle that
finds an approximation to a single-objective problem. The approximation
factors of the two are closely related: an approximation for the
single-objective problem, with a multiplicative and an additive factor,
translates into an approximation for the multi-objective leximin problem with
closely related factors, regardless of the number of objectives.
Finally, we apply our algorithm to obtain an approximate leximin solution for
the problem of \emph{stochastic allocations of indivisible goods}. For this
problem, assuming submodular objective functions, the single-objective
egalitarian welfare can be approximated, with only a multiplicative error, to
the optimal factor w.h.p. We show how to extend this approximation to leximin,
over all the objective functions, with suitable multiplicative factors, both
w.h.p. and deterministically.
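To make the leximin objective concrete, here is a brute-force sketch that finds a leximin-optimal allocation of indivisible goods on a tiny instance. It is exact but exponential-time, so it plays the role of a perfect single-objective oracle with none of the paper's approximation machinery; the valuations are assumed for illustration.

```python
# Brute-force leximin over all allocations of goods to agents:
# compare sorted utility vectors lexicographically (maximize the minimum,
# then the second minimum, and so on). Tiny illustrative instance only.
from itertools import product

def leximin_best(valuations, n_goods):
    n_agents = len(valuations)
    best_vec, best_alloc = None, None
    for alloc in product(range(n_agents), repeat=n_goods):
        utils = [0] * n_agents
        for good, agent in enumerate(alloc):
            utils[agent] += valuations[agent][good]
        key = sorted(utils)            # leximin compares sorted vectors
        if best_vec is None or key > best_vec:
            best_vec, best_alloc = key, alloc
    return best_vec, best_alloc

# Two agents, three goods, additive valuations (assumed).
vals = [[6, 1, 3], [4, 4, 2]]
vec, alloc = leximin_best(vals, 3)
```

Here the leximin optimum gives agent 0 the first good and agent 1 the other two, yielding utilities (6, 6), whereas a pure utilitarian objective would accept far more lopsided vectors.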
A Bandit Approach to Online Pricing for Heterogeneous Edge Resource Allocation
Edge Computing (EC) offers a superior user experience by positioning cloud
resources in close proximity to end users. The challenge of allocating edge
resources efficiently while maximizing profit for the EC platform remains a
difficult problem, especially with the added complexity of the online
arrival of resource requests. To address this challenge, we propose to cast the
problem as a multi-armed bandit problem and develop two novel online pricing
mechanisms, the Kullback-Leibler Upper Confidence Bound (KL-UCB) algorithm and
the Min-Max Optimal algorithm, for heterogeneous edge resource allocation.
These mechanisms operate in real-time and do not require prior knowledge of
demand distribution, which can be difficult to obtain in practice. The proposed
posted pricing schemes allow users to select and pay for their preferred
resources, with the platform dynamically adjusting resource prices based on
observed historical data. Numerical results show the advantages of the proposed
mechanisms compared to several benchmark schemes derived from traditional
bandit algorithms, including the Epsilon-Greedy, basic UCB, and Thompson
Sampling algorithms.
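The posted-pricing-as-bandit view can be sketched compactly: each candidate price is an arm, the unknown quantity is the acceptance probability at that price, and the platform posts the price with the largest optimistic revenue, i.e. price times a KL-UCB upper bound on acceptance. The candidate prices and the demand curve below are assumptions for illustration; this is a generic KL-UCB posted-pricing sketch, not the paper's exact mechanism.

```python
# KL-UCB-style posted pricing with Bernoulli acceptance feedback.
# Prices and the true demand curve are illustrative assumptions.
import math
import random

def kl_bernoulli(p, q):
    p = min(max(p, 1e-9), 1 - 1e-9)
    q = min(max(q, 1e-9), 1 - 1e-9)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb(p_hat, n, t):
    # Largest q >= p_hat with n * KL(p_hat, q) <= log(t), via bisection.
    target = math.log(max(t, 2)) / n
    lo, hi = p_hat, 1.0
    for _ in range(30):
        mid = (lo + hi) / 2
        if kl_bernoulli(p_hat, mid) <= target:
            lo = mid
        else:
            hi = mid
    return lo

def simulate(T=20000, seed=3):
    rng = random.Random(seed)
    prices = [0.2, 0.5, 0.8]
    accept_prob = {0.2: 0.9, 0.5: 0.7, 0.8: 0.2}   # assumed demand curve
    counts = {p: 0 for p in prices}
    accepts = {p: 0 for p in prices}
    revenue = 0.0
    for t in range(1, T + 1):
        untried = [p for p in prices if counts[p] == 0]
        if untried:                       # post each price once first
            price = untried[0]
        else:                             # then maximize optimistic revenue
            price = max(prices,
                        key=lambda p: p * kl_ucb(accepts[p] / counts[p],
                                                 counts[p], t))
        sold = rng.random() < accept_prob[price]
        counts[price] += 1
        accepts[price] += sold
        revenue += price if sold else 0.0
    return revenue / T, counts

avg_rev, counts = simulate()
```

Under this demand curve the middle price has the highest expected revenue (0.5 x 0.7 = 0.35), and the mechanism concentrates its posts there without ever knowing the demand distribution in advance.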