Prizing on Paths: A PTAS for the Highway Problem
In the highway problem, we are given an n-edge line graph (the highway), and
a set of paths (the drivers), each one with its own budget. For a given
assignment of edge weights (the tolls), the highway owner collects from each
driver the weight of the associated path, when it does not exceed the budget of
the driver, and zero otherwise. The goal is to choose the weights so as to
maximize the profit.
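The pricing objective above is simple to state; as a hedged illustration (not the paper's algorithm, and with function and variable names of our own choosing), a few lines of Python suffice to evaluate the owner's profit for a candidate toll assignment:

```python
def highway_profit(tolls, drivers):
    """Profit collected by the highway owner under a given toll assignment.

    tolls: tolls[e] is the nonnegative weight of edge e on the line graph.
    drivers: list of (path_edges, budget) pairs; path_edges is the set of
             edge indices forming the driver's subpath.
    Each driver pays the total toll of their path if it fits the budget,
    and nothing otherwise.
    """
    profit = 0
    for path_edges, budget in drivers:
        price = sum(tolls[e] for e in path_edges)
        if price <= budget:
            profit += price
    return profit

# A 3-edge highway: the first driver pays 2 + 3 = 5 (within budget 5),
# the second is priced out (3 + 1 = 4 > budget 3).
print(highway_profit([2, 3, 1], [({0, 1}, 5), ({1, 2}, 3)]))  # -> 5
```

Maximizing this objective over all toll vectors is exactly the (strongly NP-hard) optimization problem that the PTAS addresses.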
A lot of research has been devoted to this apparently simple problem. The
highway problem was shown to be strongly NP-hard only recently
[Elbassioni,Raman,Ray-'09]. The best-known approximation is O(log n/log log n)
[Gamzu,Segev-'10], which improves on the previous-best O(log n)
approximation [Balcan,Blum-'06].
In this paper we present a PTAS for the highway problem, hence closing the
complexity status of the problem. Our result is based on a novel randomized
dissection approach, which has some points in common with Arora's quadtree
dissection for Euclidean network design [Arora-'98]. The basic idea is
enclosing the highway in a bounding path, such that both the size of the
bounding path and the position of the highway in it are random variables. Then
we consider a recursive O(1)-ary dissection of the bounding path, in subpaths
of uniform optimal weight. Since the optimal weights are unknown, we construct
the dissection in a bottom-up fashion via dynamic programming, while computing
the approximate solution at the same time. Our algorithm can be easily
derandomized. We demonstrate the versatility of our technique by presenting
PTASs for two variants of the highway problem: the tollbooth problem with a
constant number of leaves and the maximum-feasibility subsystem problem on
interval matrices. In both cases the previous best approximation factors are
polylogarithmic [Gamzu,Segev-'10,Elbassioni,Raman,Ray,Sitters-'09]
Packing Cars into Narrow Roads: PTASs for Limited Supply Highway
In the Highway problem, we are given a path with n edges (the highway), and a set of m drivers, each one characterized by a subpath and a budget. For a given assignment of edge prices (the tolls), the highway owner collects from each driver the total price of the associated path when it does not exceed the driver's budget, and zero otherwise. The goal is to choose the prices to maximize the total profit. A PTAS is known for this (strongly NP-hard) problem [Grandoni,Rothvoss-SODA'11, SICOMP'16].
In this paper we study the limited supply generalization of Highway, that incorporates capacity constraints. Here the input also includes a capacity u_e >= 0 for each edge e; we need to select, among drivers that can afford the required price, a subset such that the number of drivers that use each edge e is at most u_e (and we get profit only from selected drivers). To the best of our knowledge, the only approximation algorithm known for this problem is a folklore O(log m) approximation based on a reduction to the related Unsplittable Flow on a Path problem (UFP). The main result of this paper is a PTAS for limited supply highway.
As a second contribution, we study a natural generalization of the problem where each driver i demands a different amount d_i of capacity. Using known techniques, it is not hard to derive a QPTAS for this problem. Here we present a PTAS for the case that drivers have uniform budgets. Finding a PTAS for non-uniform-demand limited supply highway is left as a challenging open problem
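To make the capacity constraints concrete, here is a hedged sketch (the names are ours, not from the paper) that checks whether a selected subset of drivers is feasible under the edge capacities u_e and, if so, returns the profit it yields:

```python
def limited_supply_profit(tolls, capacities, drivers, selected):
    """Profit of a selected driver subset in limited supply Highway.

    tolls[e]: price of edge e; capacities[e]: capacity u_e of edge e.
    drivers: list of (path_edges, budget); selected: driver indices.
    Every selected driver must afford their path price, and at most
    u_e selected drivers may use edge e (unit demands, as in the base
    problem). Returns the total profit, or None if infeasible.
    """
    usage = [0] * len(capacities)
    profit = 0
    for i in selected:
        path_edges, budget = drivers[i]
        price = sum(tolls[e] for e in path_edges)
        if price > budget:
            return None  # driver i is priced out, cannot be selected
        for e in path_edges:
            usage[e] += 1
            if usage[e] > capacities[e]:
                return None  # capacity u_e exceeded
        profit += price
    return profit
```

In the non-uniform-demand generalization, each selected driver i would add d_i (rather than 1) to the usage of every edge on their path.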
Experimental investigation of inter-element isolation in a medical array transducer at various manufacturing stages
This work presents an experimental investigation of the vibration maps of a linear array transducer with 192 piezoelements by means of a laser Doppler vibrometer at various manufacturing finishing steps, in air and in water. Over the years, many researchers have investigated cross-coupling in fabricated prototypes, but not in arrays at various manufacturing stages. Only the central element of the array was driven, at its working frequency of 5 MHz. The experimental results showed that the contributions of cross-coupling depend on the elements of the acoustic stack: Lead Zirconate Titanate (PZT), kerf, filler, matching layer, and lens. The oscillation amplitudes spanned from (6 ± 38%) nm to (110 ± 40%) nm when the energized element was tested in air, and from (6 ± 57%) nm to (80 ± 67%) nm when measurements were obtained under water. The best inter-element isolation of -22 dB was measured in air after cutting the kerfs, whereas the poorest isolation was -2 dB under water with an acoustic lens (complete acoustic stack). The vibration pattern in water showed a higher standard deviation on the displacement measurements than the one obtained in air, due to the influence of acousto-optic interactions; this contribution was estimated at 30% in water by comparison with the measurements in air. This work describes a valuable method for manufacturers to investigate the correspondence between the manufacturing process and the quantitative evaluations of the resulting effects
Parameterized Approximation Schemes for Independent Set of Rectangles and Geometric Knapsack
The area of parameterized approximation seeks to combine approximation and parameterized algorithms to obtain, e.g., (1+epsilon)-approximations in f(k,epsilon)n^O(1) time where k is some parameter of the input. The goal is to overcome lower bounds from either of the areas. We obtain the following results on parameterized approximability:
- In the maximum independent set of rectangles problem (MISR) we are given a collection of n axis-parallel rectangles in the plane. Our goal is to select a maximum-cardinality subset of pairwise non-overlapping rectangles. This problem is NP-hard and also W[1]-hard [Marx, ESA'05]. The best-known polynomial-time approximation factor is O(log log n) [Chalermsook and Chuzhoy, SODA'09] and it admits a QPTAS [Adamaszek and Wiese, FOCS'13; Chuzhoy and Ene, FOCS'16]. Here we present a parameterized approximation scheme (PAS) for MISR, i.e. an algorithm that, for any given constant epsilon>0 and integer k>0, in time f(k,epsilon)n^g(epsilon), either outputs a solution of size at least k/(1+epsilon), or declares that the optimum solution has size less than k.
- In the (2-dimensional) geometric knapsack problem (2DK) we are given an axis-aligned square knapsack and a collection of axis-aligned rectangles in the plane (items). Our goal is to translate a maximum cardinality subset of items into the knapsack so that the selected items do not overlap. In the version of 2DK with rotations (2DKR), we are allowed to rotate items by 90 degrees. Both variants are NP-hard, and the best-known polynomial-time approximation factor is 2+epsilon [Jansen and Zhang, SODA'04]. These problems admit a QPTAS for polynomially bounded item sizes [Adamaszek and Wiese, SODA'15]. We show that both variants are W[1]-hard. Furthermore, we present a PAS for 2DKR.
For all considered problems, getting time f(k,epsilon)n^O(1), rather than f(k,epsilon)n^g(epsilon), would give FPT time f'(k)n^O(1) exact algorithms by setting epsilon=1/(k+1), contradicting W[1]-hardness. Instead, for each fixed epsilon>0, our PASs give (1+epsilon)-approximate solutions in FPT time.
For both MISR and 2DKR our techniques also give rise to preprocessing algorithms that take n^g(epsilon) time and return a subset of at most k^g(epsilon) rectangles/items that contains a solution of size at least k/(1+epsilon) if a solution of size k exists. This is a special case of the recently introduced notion of a polynomial-size approximate kernelization scheme [Lokshtanov et al., STOC'17]
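For intuition on the MISR feasibility constraint, here is a hedged brute-force sketch (our own naming; the actual PAS is far more involved) of the pairwise non-overlap test for axis-parallel rectangles:

```python
from itertools import combinations

def overlaps(a, b):
    """Rectangles given as (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    We count only positive-area intersections as overlap, so rectangles
    that merely share a boundary are considered non-overlapping here
    (a modeling assumption)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def is_independent(rects):
    """True iff the given rectangles are pairwise non-overlapping,
    i.e. they form a feasible MISR solution."""
    return all(not overlaps(a, b) for a, b in combinations(rects, 2))

# A disjoint pair versus an overlapping pair.
print(is_independent([(0, 0, 2, 2), (3, 0, 5, 2)]))  # -> True
print(is_independent([(0, 0, 2, 2), (1, 1, 3, 3)]))  # -> False
```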
A mazing 2+ε approximation for unsplittable flow on a path
We study the problem of unsplittable flow on a path (UFP), which arises naturally in many applications such as bandwidth allocation, job scheduling, and caching. Here we are given a path with nonnegative edge capacities and a set of tasks, which are characterized by a subpath, a demand, and a profit. The goal is to find the most profitable subset of tasks whose total demand does not violate the edge capacities. Not surprisingly, this problem has received a lot of attention in the research community. If the demand of each task is at most a small-enough fraction δ of the capacity along its subpath (δ-small tasks), then it has been known for a long time [Chekuri et al., ICALP 2003] how to compute a solution of value arbitrarily close to the optimum via LP rounding. However, much remains unknown for the complementary case, that is, when the demand of each task is at least some fraction δ > 0 of the smallest capacity of its subpath (δ-large tasks). For this setting, a constant factor approximation is known, improving on an earlier logarithmic approximation [Bonsma et al., FOCS 2011]. In this article, we present a polynomial-time approximation scheme (PTAS) for δ-large tasks, for any constant δ > 0. Key to this result is a complex geometrically inspired dynamic program. Each task is represented as a segment underneath the capacity curve, and we identify a proper maze-like structure so that each corridor of the maze is crossed by only O(1) tasks in the optimal solution. The maze has a tree topology, which guides our dynamic program. Our result implies a 2 + ε approximation for UFP, for any constant ε > 0, improving on the previously best 7 + ε approximation by Bonsma et al. We remark that our improved approximation algorithm matches the best known approximation ratio for the considerably easier special case of uniform edge capacities
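As a concrete illustration of the UFP setting (a hedged sketch with names of our choosing, not the paper's dynamic program), the following checks a candidate task subset against the capacity curve and classifies a task as δ-large:

```python
def ufp_profit(capacities, tasks, chosen):
    """Total profit of a chosen task subset in Unsplittable Flow on a Path.

    capacities[e]: capacity of edge e; tasks: (edge_set, demand, profit).
    Returns the profit if the summed demands respect every edge capacity,
    or None if some edge is overloaded.
    """
    load = [0] * len(capacities)
    for i in chosen:
        edges, demand, _ = tasks[i]
        for e in edges:
            load[e] += demand
            if load[e] > capacities[e]:
                return None  # capacity curve violated on edge e
    return sum(tasks[i][2] for i in chosen)

def is_delta_large(capacities, task, delta):
    """A task is delta-large if its demand is at least a delta fraction
    of the smallest capacity along its subpath (its bottleneck edge)."""
    edges, demand, _ = task
    return demand >= delta * min(capacities[e] for e in edges)
```

The PTAS of the paper applies to instances in which every task is δ-large in this sense, for an arbitrary constant δ > 0.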
Incrementally Maintaining the Number of l-cliques
The main contribution of this paper is an incremental algorithm to update the number of l-cliques in which each node of a graph is contained, after the deletion of an arbitrary node. The initialization cost and the amortized updating cost are expressed in terms of the number of nodes n and of the exponents of square and rectangular fast matrix multiplication; plugging in the current best bounds on these exponents yields the concrete initialization and updating costs. An interesting application to constraint programming is also considered
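To pin down the quantity being maintained (a hedged, exponential-time sketch under our own naming; the paper's algorithm instead relies on fast matrix multiplication and supports efficient updates), one can count the l-cliques containing each node naively:

```python
from itertools import combinations

def cliques_per_node(adj, l):
    """For each node v, count the l-cliques of the graph containing v.

    adj: dict mapping each node to the set of its neighbors (undirected).
    Brute force over all l-subsets, hence exponential in l; the paper's
    incremental algorithm maintains these counts under node deletions
    far more efficiently.
    """
    count = {v: 0 for v in adj}
    for subset in combinations(sorted(adj), l):
        if all(u in adj[w] for u, w in combinations(subset, 2)):
            for v in subset:
                count[v] += 1
    return count

# Each vertex of a triangle lies in exactly one 3-clique.
print(cliques_per_node({0: {1, 2}, 1: {0, 2}, 2: {0, 1}}, 3))
```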
Experimental characterization of respiratory droplet emission
The droplet-laden air cloud exhaled by humans during different respiratory activities plays a major role in infectious disease transmission. That exhaled droplets contain pathogens has been well known in the scientific community since the 19th century. Unfortunately, pandemics such as COVID-19, SARS, and MERS have recently brought attention back to this issue, which is rather complex since multiple-scale phenomena and different disciplines (epidemiology, biology, fluid mechanics) are involved. Fluid mechanics plays a major role in the comprehension of droplet-laden air cloud dynamics and in the mitigation of the related risks. Indeed, the pathogens interact with fluids from their encapsulation within droplets in the airways to their inhalation by susceptible individuals. The prediction of the fate of the droplets after their emission has been widely improved, especially in the past three years, by means of experiments and models. However, a lack of knowledge of the air and droplet properties at the emission (mouth) emerges from the literature. Providing precise information on emission characteristics to numerical or theoretical models that predict droplet dispersion is of crucial importance to obtain reliable results. The present thesis aims to contribute to this field by improving the characterization of droplet emission, namely, the droplet size and velocity distributions. A series of laboratory experiments has been conducted considering different respiratory activities, namely, speaking, coughing and breathing. The Interferometric Laser Imaging for Droplet Sizing (ILIDS) technique has been used for data collection. Both the setup and the related data processing have been improved with respect to standard ILIDS applications in order to detect droplets with sizes down to 2 μm and to measure all three of their velocity components. Two experimental campaigns involving twenty-three volunteers have been carried out.
The effects of protection masks and the variability in the results obtained for the same volunteer repeating the tests are also assessed. Finally, droplet size and velocity distributions have been used as input data for Computational Fluid Dynamics simulations in order to analyse their role in the dispersion process following their emission