2-Approximation for Prize-Collecting Steiner Forest
Approximation algorithms for the prize-collecting Steiner forest problem
(PCSF) have been a subject of research for over three decades, starting with
the seminal works of Agrawal, Klein, and Ravi and of Goemans and Williamson on
Steiner forest and prize-collecting problems. In this paper, we propose and
analyze a natural deterministic algorithm for PCSF that achieves a
2-approximate solution in polynomial time. This represents a significant
improvement compared to the previously best known algorithm with a
2.54-approximation factor developed by Hajiaghayi and Jain in 2006.
Furthermore, K{\"{o}}nemann, Olver, Pashkovich, Ravi, Swamy, and Vygen have
established an integrality gap of at least for the natural LP relaxation
for PCSF. However, we surpass this gap through the utilization of a
combinatorial algorithm and a novel analysis technique. Since 2 is the best
known approximation guarantee for the Steiner forest problem, which is a special
case of PCSF, our result matches this factor and closes the gap between the
Steiner forest problem and its generalized version, PCSF.
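To make the objective concrete, here is a minimal sketch of the standard PCSF formulation in Python (using networkx; the function and variable names are illustrative, and this is only the problem definition, not the paper's 2-approximation algorithm): a candidate forest pays the cost of its chosen edges plus the penalty of every demand pair it leaves disconnected.

```python
# Minimal sketch of the prize-collecting Steiner forest objective, assuming
# the standard formulation; an illustration of the problem definition only,
# not the 2-approximation algorithm from the paper. Requires networkx.
import networkx as nx

def pcsf_objective(graph, chosen_edges, penalties):
    """graph: nx.Graph with a 'cost' attribute on each edge.
    chosen_edges: iterable of (u, v) edges forming the candidate forest.
    penalties: dict mapping a demand pair (s, t) to its penalty."""
    forest = nx.Graph()
    forest.add_nodes_from(graph.nodes)
    forest.add_edges_from(chosen_edges)
    edge_cost = sum(graph[u][v]["cost"] for u, v in chosen_edges)
    penalty_cost = sum(pen for (s, t), pen in penalties.items()
                       if not nx.has_path(forest, s, t))
    return edge_cost + penalty_cost
```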
Dynamic Constrained Submodular Optimization with Polylogarithmic Update Time
Maximizing a monotone submodular function under cardinality constraint is
a core problem in machine learning and databases, with many basic applications,
including video and data summarization, recommendation systems, feature
extraction, exemplar clustering, and coverage problems. We study this classic
problem in the fully dynamic model where a stream of insertions and deletions
of elements of an underlying ground set is given and the goal is to maintain an
approximate solution using a fast update time.
A recent paper at NeurIPS'20 by Lattanzi, Mitrovic, Norouzi-Fard,
Tarnawski, Zadimoghaddam claims to obtain a dynamic algorithm for this problem
with a (1/2 - ε) approximation ratio and a query complexity
bounded by poly(log n, log k, 1/ε). However, as we
explain in this paper, the analysis has some important gaps. Having a dynamic
algorithm for the problem with polylogarithmic update time is even more
important in light of a recent result by Chen and Peng at STOC'22, who show a
matching lower bound for the problem -- any randomized algorithm with a
(1/2 + ε) approximation ratio must have an amortized query
complexity that is polynomial in n, the size of the ground set.
In this paper, we develop a simpler algorithm for the problem that maintains
a (1/2 - ε)-approximate solution for submodular maximization
under a cardinality constraint using polylogarithmic amortized update time.
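For reference, the sketch below (Python; illustrative names, not the dynamic algorithm from this paper) shows the textbook static greedy for the same problem on a coverage function, which is monotone and submodular. In the static setting greedy is (1 - 1/e)-approximate; the dynamic algorithms discussed above instead maintain a (1/2 - ε)-approximate solution while elements are inserted and deleted.

```python
# Static greedy baseline for monotone submodular maximization under a
# cardinality constraint, shown here on a coverage function. This is only
# context for the problem; it is not the paper's dynamic algorithm.

def coverage(selected_sets):
    """f(S): number of items covered by the chosen sets (monotone, submodular)."""
    covered = set()
    for s in selected_sets:
        covered |= s
    return len(covered)

def greedy_max_coverage(candidate_sets, k):
    """Pick up to k sets, each time adding the one with the largest marginal gain."""
    solution = []
    for _ in range(k):
        best, best_gain = None, 0
        for cand in candidate_sets:
            if cand in solution:
                continue
            gain = coverage(solution + [cand]) - coverage(solution)
            if gain > best_gain:
                best, best_gain = cand, gain
        if best is None:  # no remaining set adds coverage
            break
        solution.append(best)
    return solution

sets = [frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({4, 5, 6}), frozenset({1, 6})]
print(greedy_max_coverage(sets, k=2))  # picks {1, 2, 3} and then {4, 5, 6}
```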
A Novel Prediction Setup for Online Speed-Scaling
Given the rapid rise in energy demand by data centers and computing systems in general, it is fundamental to incorporate energy considerations when designing (scheduling) algorithms. Machine learning can be a useful approach in practice by predicting the future load of the system based on, for example, historical data. However, the effectiveness of such an approach depends heavily on the quality of the predictions and can be quite far from optimal when predictions are sub-par. On the other hand, while providing a worst-case guarantee, classical online algorithms can be pessimistic for large classes of inputs arising in practice.
This paper, in the spirit of the new area of machine-learning-augmented algorithms, attempts to obtain the best of both worlds for the classical deadline-based online speed-scaling problem: based on the introduction of a novel prediction setup, we develop algorithms that (i) obtain provably low energy consumption in the presence of adequate predictions, (ii) are robust against inadequate predictions, and (iii) are smooth, i.e., their performance degrades gradually as the prediction error increases.
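To make the energy/speed trade-off behind this problem concrete, the toy calculation below (Python; assuming the common convention that running at speed s consumes power s^α, with α a constant such as 3) computes the energy of finishing a single job at the slowest constant speed that still meets its deadline. It is a back-of-the-envelope illustration, not the prediction-augmented algorithm of the paper.

```python
# Toy illustration of the deadline-based speed-scaling energy model, under
# the assumed convention power = speed ** alpha. Not the paper's algorithm.

def constant_speed_energy(work, release, deadline, alpha=3.0):
    """Energy to finish `work` units at the slowest constant speed that still
    meets the deadline, i.e. speed = work / (deadline - release)."""
    speed = work / (deadline - release)
    return (speed ** alpha) * (work / speed)  # power * processing time

# Stretching the same work over a longer window lowers the required speed,
# and since power is convex in speed the energy drops sharply.
print(constant_speed_energy(work=4.0, release=0.0, deadline=2.0))  # speed 2 -> 16.0
print(constant_speed_energy(work=4.0, release=0.0, deadline=4.0))  # speed 1 -> 4.0
```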