A Modica-Mortola approximation for branched transport
The M^\alpha energy, which is usually minimized in branched transport problems
among singular 1-dimensional rectifiable vector measures with prescribed
divergence, is approximated (and convergence is proved) by means of a sequence
of elliptic energies defined on more regular vector fields. The procedure
recalls the Modica-Mortola one for approximating the perimeter, with the
double-well potential replaced by a concave power.
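Schematically, the analogy is with the classical Modica-Mortola functional, with the double-well potential traded for a concave power; the exponents \beta_1, \beta_2 and the power p below are illustrative placeholders, not values taken from the paper:

```latex
% Classical Modica-Mortola approximation of the perimeter:
F_\varepsilon(u) \;=\; \varepsilon \int |\nabla u|^2 \, dx
  \;+\; \frac{1}{\varepsilon} \int W(u) \, dx,
\qquad W(u) = u^2 (1-u)^2 \ \text{(double well)}.

% Branched-transport analogue sketched in the abstract: a vector field v
% replaces u, and the double well W is replaced by a concave power:
M_\varepsilon(v) \;=\; \varepsilon^{\beta_1} \int |\nabla v|^2 \, dx
  \;+\; \varepsilon^{-\beta_2} \int |v|^{p} \, dx,
\qquad 0 < p < 1 .
```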
Approximating Knapsack and Partition via Dense Subset Sums
Knapsack and Partition are two important additive problems whose fine-grained
complexities in the (1-\varepsilon)-approximation setting are not yet
settled. In this work, we make progress on both problems by giving improved
algorithms.
- Knapsack can be (1-\varepsilon)-approximated in \tilde{O}(n + (1/\varepsilon)^{2.2}) time, improving the previous \tilde{O}(n + (1/\varepsilon)^{2.25}) by Jin (ICALP'19). There is a known conditional
lower bound of (n + 1/\varepsilon)^{2-o(1)} based on the (min,+)-convolution
hypothesis.
- Partition can be (1-\varepsilon)-approximated in \tilde{O}(n + (1/\varepsilon)^{1.25}) time, improving the previous \tilde{O}(n + (1/\varepsilon)^{1.5}) by Bringmann and Nakos (SODA'21). There is a known
conditional lower bound of (1/\varepsilon)^{1-o(1)} based on the Strong
Exponential Time Hypothesis.
Both of our new algorithms apply the additive combinatorial results on dense
subset sums by Galil and Margalit (SICOMP'91), Bringmann and Wellnitz
(SODA'21). Such techniques have not been explored in the context of Knapsack
prior to our work. In addition, we design several new methods to speed up the
divide-and-conquer steps which naturally arise in solving additive problems.
Comment: To appear in SODA 2023. Corrects minor mistakes in Lemma 3.3 and
Lemma 3.5 in the proceedings version of this paper.
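For context on what (1-\varepsilon)-approximation means here, a sketch of the classical profit-scaling FPTAS for Knapsack (the textbook baseline, not the algorithm of this paper):

```python
def knapsack_fptas(items, budget, eps):
    """Classical profit-scaling FPTAS for 0-1 Knapsack.
    items: list of (weight, profit); returns a value that is at least
    (1 - eps) * OPT and at most OPT."""
    n = len(items)
    p_max = max(p for _, p in items)
    # Scale profits so the DP table size is polynomial in n and 1/eps.
    K = eps * p_max / n
    scaled = [max(1, int(p / K)) for _, p in items]
    P = sum(scaled)
    INF = float("inf")
    # best[s] = minimum weight needed to reach scaled profit exactly s.
    best = [0] + [INF] * P
    for (w, _), sp in zip(items, scaled):
        for s in range(P, sp - 1, -1):
            if best[s - sp] + w < best[s]:
                best[s] = best[s - sp] + w
    # Largest scaled profit reachable within the weight budget.
    s_best = max(s for s in range(P + 1) if best[s] <= budget)
    return s_best * K  # lower bound on the profit of the chosen subset
```

The improved algorithms in the paper avoid this quadratic-in-1/\varepsilon dynamic program by exploiting additive structure of dense subset sums.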
Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review
The paper characterizes classes of functions for which deep learning can be
exponentially better than shallow learning. Deep convolutional networks are a
special case of these conditions, though weight sharing is not the main reason
for their exponential advantage.
Bisection of Bounded Treewidth Graphs by Convolutions
In the Bisection problem, we are given as input an edge-weighted graph G. The task is to find a partition of V(G) into two parts A and B such that ||A| - |B|| <= 1 and the sum of the weights of the edges with one endpoint in A and the other in B is minimized. We show that the complexity of the Bisection problem on trees, and more generally on graphs of bounded treewidth, is intimately linked to the (min, +)-Convolution problem. Here the input consists of two sequences (a[i])^{n-1}_{i = 0} and (b[i])^{n-1}_{i = 0}, the task is to compute the sequence (c[i])^{n-1}_{i = 0}, where c[k] = min_{i=0,...,k}(a[i] + b[k - i]).
In particular, we prove that if (min, +)-Convolution can be solved in O(tau(n)) time, then Bisection of graphs of treewidth t can be solved in time O(8^t t^{O(1)} log n * tau(n)), assuming a tree decomposition of width t is provided as input. Plugging in the naive O(n^2) time algorithm for (min, +)-Convolution yields an O(8^t t^{O(1)} n^2 log n) time algorithm for Bisection. This improves over the (dependence on n of the) O(2^t n^3) time algorithm of Jansen et al. [SICOMP 2005] at the cost of a worse dependence on t. "Conversely", we show that if Bisection can be solved in time O(beta(n)) on edge-weighted trees, then (min, +)-Convolution can be solved in O(beta(n)) time as well. Thus, obtaining a sub-quadratic algorithm for Bisection on trees is extremely challenging, and could even be impossible. On the other hand, for unweighted graphs of treewidth t, by making use of a recent algorithm for Bounded Difference (min, +)-Convolution of Chan and Lewenstein [STOC 2015], we obtain a sub-quadratic algorithm for Bisection with running time O(8^t t^{O(1)} n^{1.864} log n).
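The (min, +)-Convolution primitive defined above, in its naive O(n^2) form:

```python
def min_plus_convolution(a, b):
    """Naive O(n^2) (min,+)-convolution of two length-n sequences:
    c[k] = min over i = 0..k of a[i] + b[k-i]."""
    n = len(a)
    assert len(b) == n
    return [min(a[i] + b[k - i] for i in range(k + 1)) for k in range(n)]
```

Any algorithm beating n^2 by a polynomial factor here would, by the reduction in the abstract, yield a correspondingly faster Bisection algorithm on trees.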
Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution
We revisit the classic 0-1-Knapsack problem, in which we are given n items
with their weights and profits as well as a weight budget W, and the goal is
to find a subset of items of total weight at most W that maximizes the total
profit. We study pseudopolynomial-time algorithms parameterized by the largest
profit of any item, p_\max, and the largest weight of any item, w_\max.
Our main results are algorithms for 0-1-Knapsack running in time
\tilde{O}(n\,w_\max\,p_\max^{2/3}) and \tilde{O}(n\,p_\max\,w_\max^{2/3}),
improving upon an algorithm in time O(n\,p_\max\,w_\max) by Pisinger [J.
Algorithms '99]. In the regime p_\max \approx w_\max \approx n (and W \approx OPT \approx n^2) our algorithms are the first to break the
cubic barrier n^3.
To obtain our result, we give an efficient algorithm to compute the min-plus
convolution of near-convex functions. More precisely, we say that a function f is \Delta-near convex with \Delta \geq 1, if
there is a convex function \breve{f} such that \breve{f}(i) \leq f(i) \leq \breve{f}(i) + \Delta for every i. We design an algorithm computing the
min-plus convolution of two \Delta-near convex functions in time
\tilde{O}(n\Delta). This tool can replace the usage of the prediction
technique of Bateni, Hajiaghayi, Seddighin and Stein [STOC '18] in all
applications we are aware of, and we believe it has wider applicability.
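A classical special case underlying such results: the (min,+)-convolution of two convex sequences reduces to a linear-time merge of their increment sequences. This is a standard fact, sketched below; it is not the paper's near-convex algorithm, which must additionally handle the \Delta-sized deviations from convexity:

```python
def min_plus_conv_convex(a, b):
    """(min,+)-convolution of two convex sequences. Because a and b are
    convex, their increment sequences are non-decreasing, and c's increments
    are exactly the sorted merge of the two; total time is O(len(a)+len(b))."""
    da = [a[i + 1] - a[i] for i in range(len(a) - 1)]
    db = [b[i + 1] - b[i] for i in range(len(b) - 1)]
    # Two-pointer merge of the already-sorted increment lists.
    merged, i, j = [], 0, 0
    while i < len(da) or j < len(db):
        if j == len(db) or (i < len(da) and da[i] <= db[j]):
            merged.append(da[i]); i += 1
        else:
            merged.append(db[j]); j += 1
    c = [a[0] + b[0]]
    for d in merged:
        c.append(c[-1] + d)
    return c  # length len(a) + len(b) - 1
```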
Distributed top-k aggregation queries at large
Top-k query processing is a fundamental building block for efficient ranking in a large number of applications. Efficiency is a central issue, especially in distributed settings, where the data is spread across different nodes in a network. This paper introduces novel optimization methods for top-k aggregation queries in such distributed environments. The optimizations can be applied to all algorithms that fall into the frameworks of the prior TPUT and KLEE methods. The optimizations address three degrees of freedom: 1) hierarchically grouping input lists into top-k operator trees and optimizing the tree structure, 2) computing data-adaptive scan depths for different input sources, and 3) data-adaptive sampling of a small subset of input sources in scenarios with hundreds or thousands of query-relevant network nodes. All optimizations are based on a statistical cost model that utilizes local synopses, e.g., in the form of histograms, efficiently computed convolutions, and estimators based on order statistics. The paper presents comprehensive experiments with three different real-life datasets, using the ns-2 network simulator for a packet-level simulation of a large Internet-style network.
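The TPUT framework referenced above rests on a uniform-threshold pruning step; a simplified sketch of that idea (function name and data layout are illustrative, and the real protocols add further phases and refinements):

```python
def tput_candidates(node_lists, k):
    """Simplified TPUT-style pruning for distributed top-k sum aggregation
    over non-negative scores. node_lists: one {item: score} dict per node.
    Returns a candidate set guaranteed to contain the true top-k items."""
    m = len(node_lists)
    # Phase 1: each node ships its local top-k; build partial sums.
    partial = {}
    for scores in node_lists:
        local_top = sorted(scores.items(), key=lambda kv: -kv[1])[:k]
        for item, s in local_top:
            partial[item] = partial.get(item, 0) + s
    # tau: k-th largest partial sum, a lower bound on the true k-th
    # aggregate score.
    tau = sorted(partial.values(), reverse=True)[:k][-1]
    # Phase 2: an item scoring below tau/m at every node has aggregate
    # below m * (tau/m) = tau, so it cannot be in the top-k; fetch the rest.
    candidates = set()
    for scores in node_lists:
        for item, s in scores.items():
            if s >= tau / m:
                candidates.add(item)
    return candidates
```

The optimizations in the paper tighten exactly such thresholds and scan depths using the statistical cost model described above.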
Fast Image Recovery Using Variable Splitting and Constrained Optimization
We propose a new fast algorithm for solving one of the standard formulations
of image restoration and reconstruction which consists of an unconstrained
optimization problem where the objective includes an \ell_2 data-fidelity
term and a non-smooth regularizer. This formulation allows both wavelet-based
(with orthogonal or frame-based representations) regularization and
total-variation regularization. Our approach is based on a variable splitting
to obtain an equivalent constrained optimization formulation, which is then
addressed with an augmented Lagrangian method. The proposed algorithm is an
instance of the so-called "alternating direction method of multipliers", for
which convergence has been proved. Experiments on a set of image restoration
and reconstruction benchmark problems show that the proposed algorithm is
faster than the current state-of-the-art methods.
Comment: Submitted; 11 pages, 7 figures, 6 tables.
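The splitting-plus-augmented-Lagrangian structure can be sketched on the simpler \ell_2-plus-\ell_1 problem min_x 0.5||Ax - y||^2 + \lambda||z||_1 subject to x = z. This is a generic ADMM illustration, not the paper's wavelet/total-variation solver:

```python
import numpy as np

def admm_l1(A, y, lam, rho=1.0, iters=200):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by variable splitting x = z
    and the alternating direction method of multipliers (ADMM)."""
    n = A.shape[1]
    AtA = A.T @ A + rho * np.eye(n)  # x-update system matrix (fixed)
    Aty = A.T @ y
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    for _ in range(iters):
        # x-update: smooth quadratic subproblem, a single linear solve
        x = np.linalg.solve(AtA, Aty + rho * (z - u))
        # z-update: proximal step of the l1 norm, i.e. soft-thresholding
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual (multiplier) update enforcing x = z
        u = u + x - z
    return z
```

The same pattern applies with other non-smooth regularizers whenever their proximal step is cheap, which is what makes the splitting attractive for image restoration.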