
### Restricted Max-Min Allocation: Approximation and Integrality Gap

Asadpour, Feige, and Saberi proved that the integrality gap of the configuration LP for the restricted max-min allocation problem is at most 4. Their proof, however, does not yield a polynomial-time approximation algorithm, and much effort has been devoted to designing an efficient algorithm whose approximation ratio matches this upper bound on the integrality gap. In ICALP 2018, we presented a $(6+\delta)$-approximation algorithm, where $\delta$ can be any positive constant, leaving a gap of roughly 2. In this paper, we narrow the gap significantly by proposing a $(4+\delta)$-approximation algorithm, where $\delta$ can be any positive constant. The approximation ratio is with respect to the optimal value of the configuration LP, and the running time is $\mathrm{poly}(m,n)\cdot n^{\mathrm{poly}(1/\delta)}$, where $n$ is the number of players and $m$ is the number of resources. We also improve the upper bound on the integrality gap of the configuration LP to $3 + \frac{21}{26} \approx 3.808$.

### Restricted Max-Min Fair Allocation

The restricted max-min fair allocation problem seeks an allocation of resources to players that maximizes the minimum total value obtained by any player. It is NP-hard to approximate the problem to within a ratio less than 2. Comparing the current best algorithm for estimating the optimal value with the current best for constructing an allocation, there is quite a gap between the ratios achievable in polynomial time: $4+\delta$ for estimation and $6 + 2\sqrt{10} + \delta \approx 12.325 + \delta$ for construction, where $\delta$ is an arbitrarily small positive constant. We propose an algorithm that constructs an allocation with value within a factor of $6+\delta$ of the optimum for any constant $\delta > 0$. The running time is polynomial in the input size for any constant $\delta$.
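To make the objective concrete, here is a minimal brute-force sketch of the max-min allocation objective on a toy restricted instance (each resource has an intrinsic value and each player is either interested in it or not). This is an exponential-time illustration of what the approximation algorithms above approximate, not any of the algorithms discussed; the instance and helper name are invented for the example.

```python
from itertools import product

def max_min_allocation(values, wanted):
    """Brute-force optimum of restricted max-min allocation on a toy
    instance (exponential time; for illustration only).

    values[j] -- intrinsic value of resource j
    wanted[i] -- set of resources player i is interested in
    Each resource goes to exactly one player; a player gains values[j]
    only for resources j it is interested in.
    """
    n, m = len(wanted), len(values)
    best = 0
    # Enumerate every assignment: resource j goes to player assign[j].
    for assign in product(range(n), repeat=m):
        gains = [0] * n
        for j, i in enumerate(assign):
            if j in wanted[i]:
                gains[i] += values[j]
        best = max(best, min(gains))  # max-min objective
    return best

# Toy instance: 3 resources, 2 players.
values = [3, 2, 2]
wanted = [{0}, {1, 2}]  # player 0 wants resource 0; player 1 wants 1 and 2
print(max_min_allocation(values, wanted))  # -> 3
```

Giving resource 0 to player 0 and resources 1 and 2 to player 1 yields gains of 3 and 4, so the minimum gain, 3, is optimal for this instance.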

### Initial data gluing in the asymptotically flat regime via solution operators with prescribed support properties

We give new proofs of general relativistic initial data gluing results on
unit-scale annuli based on explicit solution operators for the linearized
constraint equation around the flat case with prescribed support properties.
These results retrieve and optimize - in terms of positivity, regularity, size
and/or spatial decay requirements - a number of known theorems concerning
asymptotically flat initial data, including Kerr exterior gluing by
Corvino-Schoen and Chru\'sciel-Delay, interior gluing (or "fill-in") by
Bieri-Chru\'sciel, and obstruction-free gluing by Czimek-Rodnianski. In
particular, our proof of the strengthened obstruction-free gluing theorem
relies on purely spacelike techniques, rather than null gluing as in the
original approach.

### Separating the memory of reionization from cosmology in the Ly$\alpha$ forest power spectrum at the post-reionization era

It has been recently shown that the astrophysics of reionization can be
extracted from the Ly$\alpha$ forest power spectrum by marginalizing the memory
of reionization over cosmological information. This impact of cosmic
reionization on the Ly$\alpha$ forest power spectrum can survive cosmological
time scales because cosmic reionization, which is inhomogeneous, and subsequent
shocks from denser regions can heat the gas in low-density regions to $\sim
3\times10^4$ K and compress it to mean-density. Current approach of
marginalization over the memory of reionization, however, is not only
model-dependent, based on the assumption of a specific reionization model, but
also computationally expensive. Here we propose a simple analytical template
for the impact of cosmic reionization, thereby treating it as a broadband
systematic to be marginalized over for Bayesian inference of cosmological
information from the Ly$\alpha$ forest in a model-independent manner. This
template performs remarkably well with an error of $\leq 6 \%$ at large scales
$k \approx 0.19$ Mpc$^{-1}$ where the effect of the memory of reionization is
important, and reproduces the broadband effect of the memory of reionization in
the Ly$\alpha$ forest correlation function, as well as the expected bias of
cosmological parameters due to this systematic. The template successfully
recovers the morphology of the forecast errors in cosmological parameter space
expected when a specific reionization model is assumed for marginalization
purposes, with an overestimation of tens of per cent for the forecast
errors on the cosmological parameters. We further propose a similar template
for this systematic on the Ly$\alpha$ forest 1D power spectrum.

### Faster Algorithms for Bounded Knapsack and Bounded Subset Sum Via Fine-Grained Proximity Results

We investigate pseudopolynomial-time algorithms for Bounded Knapsack and
Bounded Subset Sum. Recent years have seen a growing interest in settling their
fine-grained complexity with respect to various parameters. For Bounded
Knapsack, the number of items $n$ and the maximum item weight $w_{\max}$ are
two of the most natural parameters that have been studied extensively in the
literature. The previous best running time in terms of $n$ and $w_{\max}$ is
$O(n + w^3_{\max})$ [Polak, Rohwedder, Wegrzycki '21]. There is a conditional
lower bound of $(n + w_{\max})^{2-o(1)}$ based on the $(\min,+)$-convolution
hypothesis [Cygan, Mucha, Wegrzycki, Wlodarczyk '17]. We narrow the gap
significantly by proposing a $\tilde{O}(n + w^{12/5}_{\max})$-time algorithm.
Note that in the regime where $w_{\max} \approx n$, our algorithm runs in
$\tilde{O}(n^{12/5})$ time, while all the previous algorithms require
$\Omega(n^3)$ time in the worst case.
For Bounded Subset Sum, we give two algorithms running in
$\tilde{O}(nw_{\max})$ and $\tilde{O}(n + w^{3/2}_{\max})$ time, respectively.
These results match the currently best running time for 0-1 Subset Sum. Prior
to our work, the best running times (in terms of $n$ and $w_{\max}$) for
Bounded Subset Sum were $\tilde{O}(n + w^{5/3}_{\max})$ [Polak, Rohwedder,
Wegrzycki '21] and $\tilde{O}(n + \mu_{\max}^{1/2}w_{\max}^{3/2})$ [implied by
Bringmann '19 and Bringmann, Wellnitz '21], where $\mu_{\max}$ refers to the
maximum multiplicity of item weights.
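As a correctness reference for the problem being parameterized, here is the textbook baseline for Bounded Subset Sum via binary decomposition of multiplicities. This is not the paper's algorithm, and its $\tilde{O}(n \cdot t)$-style running time (for target $t$) is far from the $w_{\max}$-parameterized bounds above; the function name and toy instance are invented for the example.

```python
def bounded_subset_sum(items, target):
    """Decide which sums in 0..target are attainable when each item
    (weight w, multiplicity mu) may be used at most mu times.

    Splitting each multiplicity into powers of two reduces the bounded
    problem to 0-1 Subset Sum passes (textbook baseline, not the
    w_max-parameterized algorithms of the abstract).
    """
    reachable = [False] * (target + 1)
    reachable[0] = True
    for w, mu in items:
        # Decompose mu into chunks 1, 2, 4, ..., remainder.
        k = 1
        while mu > 0:
            take = min(k, mu)
            mu -= take
            k *= 2
            step = w * take
            # Standard 0-1 pass, downward so each chunk is used once.
            for s in range(target, step - 1, -1):
                if reachable[s - step]:
                    reachable[s] = True
    return reachable

r = bounded_subset_sum([(3, 2), (5, 1)], 11)  # weight 3 twice, weight 5 once
print([s for s in range(12) if r[s]])  # -> [0, 3, 5, 6, 8, 11]
```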

### A Nearly Quadratic-Time FPTAS for Knapsack

We investigate polynomial-time approximation schemes for the classic 0-1
knapsack problem. The previous algorithm by Deng, Jin, and Mao (SODA'23) has
approximation factor $1+\varepsilon$ and running time $\widetilde{O}(n +
\frac{1}{\varepsilon^{2.2}})$. There is a lower bound of $(n +
\frac{1}{\varepsilon})^{2-o(1)}$ conditioned on the hypothesis that $(\min,+)$-convolution has no
truly subquadratic algorithm. We close the gap by proposing an approximation
scheme that runs in $\widetilde{O}(n + \frac{1}{\varepsilon^2})$ time.
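For readers unfamiliar with knapsack approximation schemes, here is the classic value-scaling FPTAS (the textbook $O(n^2/\varepsilon)$-style DP, not the near-quadratic scheme of this abstract): values are rounded down to multiples of $\varepsilon v_{\max}/n$, then a min-weight-per-value DP finds a $(1-\varepsilon)$-approximate solution. The function name and instance are invented for the example.

```python
def knapsack_fptas(items, capacity, eps):
    """Classic value-scaling FPTAS for 0-1 knapsack: returns a set of item
    indices with total weight <= capacity and total value >= (1-eps)*OPT.
    (Textbook baseline, not the near-quadratic scheme of the abstract.)"""
    n = len(items)
    vmax = max(v for v, w in items if w <= capacity)
    K = eps * vmax / n                       # scaling factor
    scaled = [int(v / K) for v, w in items]  # rounded-down values
    V = sum(scaled)
    INF = float("inf")
    # minw[u] = minimum weight achieving scaled value exactly u
    minw = [0] + [INF] * V
    choice = [set() for _ in range(V + 1)]
    for j, (v, w) in enumerate(items):
        for u in range(V, scaled[j] - 1, -1):  # downward: use item j once
            if minw[u - scaled[j]] + w < minw[u]:
                minw[u] = minw[u - scaled[j]] + w
                choice[u] = choice[u - scaled[j]] | {j}
    best = max(u for u in range(V + 1) if minw[u] <= capacity)
    return choice[best]

items = [(60, 10), (100, 20), (120, 30)]  # (value, weight)
picked = knapsack_fptas(items, capacity=50, eps=0.1)
print(sorted(picked), sum(items[j][0] for j in picked))  # -> [1, 2] 220
```

With $\varepsilon = 0.1$ the scheme here happens to recover the exact optimum (items 1 and 2, value 220 at weight 50); in general it only guarantees value at least $(1-\varepsilon)\cdot\mathrm{OPT}$.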

### The Hydraulics of Nature-Like Fishways

Nature-like fishway arrangements are commonly used because these structures imitate the characteristics of natural rivers and effectively allow fish to migrate past river sections blocked by hydraulic structures. In this paper, physical models were analyzed, and the velocity distributions of two different fishway structures (Types I and II) were compared. Results showed that the maximum mainstream velocity of the Type I structure was 5.3% lower than that of the Type II structure, while the average mainstream velocity of the Type I structure was 21.1% greater than that of the Type II structure. The total per-cycle length of the mainstream path in the Type II structure was 2.1 times that of the Type I structure, indicating that the length of the mainstream path is closely related to the average mainstream velocity. When the flow rate was kept constant, changes in the internal structure of the fishway that lengthened the flow path decreased the average velocity of the main flow, while decreases in the total length of the flow path led to increases in the average mainstream velocity. Due to frictional head loss along the fishway and local head loss, as well as the overlap between these factors, the overall flow rate gradually decreased every cycle, despite periodic fluctuations.


### Spatial Analysis of Poverty, Tourism, and Opportunity in North Carolina

This study collected online secondary data on tourism economic impact, human development, natural amenities, and self-employment income at the county level in the State of North Carolina and used GIS to conduct a spatial analysis of the distribution of and the interaction between tourism, poverty, and micro-entrepreneurship. It aims to identify the areas where tourism can be utilized to cope with poverty by creating employment and tax revenues, and where tourism micro-entrepreneurship might have an important role in enabling individuals to earn their way to equitable and sustainable prosperity. The results show variations across the counties and the four geographic regions of North Carolina. Tourism business startup factors, the ways to achieve success in business, and the potential of tourism micro-entrepreneurship as a strategy for enabling sustainable livelihoods at the state or national scale are also examined based on the findings.
