
    Solving Fr\'echet Distance Problems by Algebraic Geometric Methods

    We study several polygonal curve problems under the Fr\'{e}chet distance via algebraic geometric methods. Let $\mathbb{X}_m^d$ and $\mathbb{X}_k^d$ be the spaces of all polygonal curves of $m$ and $k$ vertices in $\mathbb{R}^d$, respectively. We assume that $k \leq m$. Let $\mathcal{R}^d_{k,m}$ be the set of ranges in $\mathbb{X}_m^d$ for all possible metric balls of polygonal curves in $\mathbb{X}_k^d$ under the Fr\'{e}chet distance. We prove a nearly optimal bound of $O(dk\log(km))$ on the VC dimension of the range space $(\mathbb{X}_m^d, \mathcal{R}_{k,m}^d)$, improving on the previous $O(d^2k^2\log(dkm))$ upper bound and approaching the current $\Omega(dk\log k)$ lower bound. Our upper bound also holds for the weak Fr\'{e}chet distance. We also obtain exact solutions that are hitherto unknown for curve simplification, range searching, nearest neighbor search, and distance oracles. Comment: To appear at SODA24; corrected some references
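    The range space above is built from metric balls under the Fr\'{e}chet distance. As a concrete illustration of the underlying metric, here is a minimal sketch of the *discrete* Fr\'{e}chet distance (the variant over vertex sequences, not the continuous distance studied in the paper), computed by the standard dynamic program of Eiter and Mannila:

    ```python
    import math

    def discrete_frechet(P, Q):
        """Discrete Frechet distance between point sequences P and Q
        (lists of coordinate tuples), via the O(|P||Q|) dynamic program."""
        n, m = len(P), len(Q)
        memo = [[0.0] * m for _ in range(n)]
        for i in range(n):
            for j in range(m):
                cost = math.dist(P[i], Q[j])
                if i == 0 and j == 0:
                    memo[i][j] = cost
                elif i == 0:
                    memo[i][j] = max(memo[0][j - 1], cost)
                elif j == 0:
                    memo[i][j] = max(memo[i - 1][0], cost)
                else:
                    # best of advancing on P, on Q, or on both
                    memo[i][j] = max(
                        min(memo[i - 1][j], memo[i][j - 1], memo[i - 1][j - 1]),
                        cost,
                    )
        return memo[n - 1][m - 1]
    ```

    For the parallel segments P = [(0,0), (1,0), (2,0)] and Q = [(0,1), (1,1), (2,1)], the distance is 1.0.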

    Curve Simplification and Clustering under Fr\'echet Distance

    We present new approximation results on curve simplification and clustering under the Fr\'echet distance. Let $T = \{\tau_i : i \in [n]\}$ be a set of polygonal curves in $\mathbb{R}^d$ of $m$ vertices each. Let $l$ be any integer from $[m]$. We study a generalized curve simplification problem: given error bounds $\delta_i > 0$ for $i \in [n]$, find a curve $\sigma$ of at most $l$ vertices such that $d_F(\sigma,\tau_i) \le \delta_i$ for $i \in [n]$. We present an algorithm that returns a null output or a curve $\sigma$ of at most $l$ vertices such that $d_F(\sigma,\tau_i) \le \delta_i + \epsilon\delta_{\max}$ for $i \in [n]$, where $\delta_{\max} = \max_{i \in [n]} \delta_i$. If the output is null, there is no curve of at most $l$ vertices within a Fr\'echet distance of $\delta_i$ from $\tau_i$ for $i \in [n]$. The running time is $\tilde{O}\bigl(n^{O(l)} m^{O(l^2)} (dl/\epsilon)^{O(dl)}\bigr)$. This algorithm yields the first polynomial-time bicriteria approximation scheme to simplify a curve $\tau$ to another curve $\sigma$, where the vertices of $\sigma$ can be anywhere in $\mathbb{R}^d$, so that $d_F(\sigma,\tau) \le (1+\epsilon)\delta$ and $|\sigma| \le (1+\alpha) \min\{|c| : d_F(c,\tau) \le \delta\}$ for any given $\delta > 0$ and any fixed $\alpha, \epsilon \in (0,1)$. The running time is $\tilde{O}\bigl(m^{O(1/\alpha)} (d/(\alpha\epsilon))^{O(d/\alpha)}\bigr)$. By combining our technique with some previous results in the literature, we obtain an approximation algorithm for $(k,l)$-median clustering. Given $T$, it computes a set $\Sigma$ of $k$ curves, each of $l$ vertices, such that $\sum_{i \in [n]} \min_{\sigma \in \Sigma} d_F(\sigma,\tau_i)$ is within a factor $1+\epsilon$ of the optimum with probability at least $1-\mu$ for any given $\mu, \epsilon \in (0,1)$. The running time is $\tilde{O}\bigl(n m^{O(kl^2)} \mu^{-O(kl)} (dkl/\epsilon)^{O((dkl/\epsilon)\log(1/\mu))}\bigr)$. Comment: 28 pages; corrected some wrong descriptions concerning related work
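    The algorithms above place the vertices of $\sigma$ anywhere in $\mathbb{R}^d$ and come with approximation guarantees; the basic flavor of $\delta$-simplification can nonetheless be illustrated by a much simpler greedy heuristic that keeps a vertex only when the current shortcut strays beyond $\delta$. This sketch restricts $\sigma$'s vertices to those of $\tau$, measures error by vertex-to-shortcut distance rather than the Fr\'echet distance, and offers no optimality guarantee:

    ```python
    import math

    def point_seg_dist(p, a, b):
        """Euclidean distance from 2D point p to segment ab."""
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        L2 = dx * dx + dy * dy
        if L2 == 0.0:
            return math.dist(p, a)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
        return math.dist(p, (ax + t * dx, ay + t * dy))

    def greedy_simplify(curve, delta):
        """Keep a vertex only when the shortcut from the last kept vertex
        would leave some skipped vertex farther than delta from it."""
        out = [curve[0]]
        anchor = 0
        for i in range(2, len(curve)):
            if any(point_seg_dist(curve[j], out[-1], curve[i]) > delta
                   for j in range(anchor + 1, i)):
                out.append(curve[i - 1])
                anchor = i - 1
        out.append(curve[-1])
        return out
    ```

    On the curve [(0,0), (1,0.1), (2,0), (3,2), (4,0)] with delta = 0.5, the near-collinear vertex (1,0.1) is dropped while the spike at (3,2) is kept.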

    Bounded incentives in manipulating the probabilistic serial rule

    The Probabilistic Serial mechanism is valued for its fairness and efficiency in addressing the random assignment problem. However, it lacks truthfulness, meaning it works well only when agents' stated preferences match their true ones. Significant utility gains from strategic actions may lead self-interested agents to manipulate the mechanism, undermining its practical adoption. To gauge the potential for manipulation, we explore an extreme scenario where a manipulator has complete knowledge of the other agents' reports and unlimited computational resources to find its best strategy. We establish tight bounds on the incentive ratio of the mechanism. Furthermore, we complement these worst-case guarantees by conducting experiments to assess an agent's average utility gain through manipulation. The findings reveal that the incentive for manipulation is very small. These results offer insights into the mechanism's resilience against strategic manipulation, moving beyond the recognition of its lack of incentive compatibility.
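    The Probabilistic Serial mechanism itself is easy to state: all agents simultaneously "eat" their most-preferred remaining item at unit speed until time 1, and the fraction of an item an agent has eaten is the probability of receiving it. A minimal sketch, assuming strict preferences and equally many agents and items:

    ```python
    def probabilistic_serial(prefs, eps=1e-12):
        """Probabilistic Serial (simultaneous eating) mechanism.
        prefs[i] is agent i's strict preference order over items; assumes
        as many items as agents. Returns shares[i][item] = probability."""
        items = {it for p in prefs for it in p}
        remaining = {it: 1.0 for it in items}
        shares = [dict.fromkeys(items, 0.0) for _ in prefs]
        t = 0.0
        while t < 1.0 - eps:
            # each agent eats its best item that still has supply left
            targets = [next(it for it in p if remaining[it] > eps) for p in prefs]
            eaters = {}
            for it in targets:
                eaters[it] = eaters.get(it, 0) + 1
            # advance until time 1 or until some eaten item runs out
            dt = min(1.0 - t, min(remaining[it] / k for it, k in eaters.items()))
            for i, it in enumerate(targets):
                shares[i][it] += dt
            for it, k in eaters.items():
                remaining[it] -= dt * k
            t += dt
        return shares
    ```

    With two agents who both rank item a over item b, each ends up with a 50/50 lottery; with opposed rankings, each gets its favorite with probability 1.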

    Cost Minimization for Equilibrium Transition

    In this paper, we delve into the problem of using monetary incentives to encourage players to shift from an initial Nash equilibrium to a more favorable one within a game. Our main focus is computing the minimum reward required to facilitate this equilibrium transition. The game involves a single row player with $m$ strategies and $k$ column players, each endowed with $n$ strategies. Our findings reveal that determining whether the minimum reward is zero is NP-complete, and computing the minimum reward is APX-hard. Nonetheless, we bring some positive news, as the problem can be handled efficiently if either $k$ or $n$ is a fixed constant. Furthermore, we devise a polynomial-time approximation algorithm with an additive error. Lastly, we explore a special case in which the utility functions are single-peaked, and we show that the optimal reward can be computed in polynomial time. Comment: To appear in the proceedings of AAAI 2024

    Learning Raw Image Denoising with Bayer Pattern Unification and Bayer Preserving Augmentation

    In this paper, we present new data pre-processing and augmentation techniques for DNN-based raw image denoising. Compared with traditional RGB image denoising, performing this task on direct camera sensor readings presents new challenges, such as how to effectively handle various Bayer patterns from different data sources, and subsequently how to perform valid data augmentation on raw images. To address the first problem, we propose a Bayer pattern unification (BayerUnify) method to unify different Bayer patterns. This allows us to fully utilize a heterogeneous dataset to train a single denoising model instead of training one model for each pattern. Furthermore, while it is essential to augment the dataset to improve model generalization and performance, we discovered that it is error-prone to modify raw images by adapting augmentation methods designed for RGB images. To this end, we present a Bayer preserving augmentation (BayerAug) method as an effective approach to raw image augmentation. Combining these data processing techniques with a modified U-Net, our method achieves a PSNR of 52.11 and an SSIM of 0.9969 in the NTIRE 2019 Real Image Denoising Challenge, demonstrating state-of-the-art performance. Our code is available at https://github.com/Jiaming-Liu/BayerUnifyAug. Comment: Accepted by CVPRW 2019
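    The four common Bayer patterns differ only in where the 2x2 color-filter block starts, so one pattern can be read as another by discarding a leading row and/or column. A minimal sketch of unification by cropping (the offsets are the standard CFA layouts; the paper's BayerUnify also includes a padding variant so no pixels are lost at inference):

    ```python
    import numpy as np

    # (row, col) offset that moves the red sample to position (0, 0)
    PATTERN_OFFSET = {
        'RGGB': (0, 0), 'GRBG': (0, 1), 'GBRG': (1, 0), 'BGGR': (1, 1),
    }

    def unify_to_rggb(raw, pattern):
        """Crop a single-channel Bayer mosaic so its pattern reads as RGGB."""
        dr, dc = PATTERN_OFFSET[pattern]
        h, w = raw.shape
        # drop the leading row/column, then trim so both sizes stay even
        return raw[dr:h - (h - dr) % 2, dc:w - (w - dc) % 2]
    ```

    For example, a GRBG mosaic (red at column 1) becomes RGGB after dropping its first column, at the cost of a slightly narrower image.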