18 research outputs found

    A Framework for Exponential-Time-Hypothesis-Tight Algorithms and Lower Bounds in Geometric Intersection Graphs

    We give an algorithmic and lower-bound framework that facilitates the construction of subexponential algorithms and matching conditional complexity bounds. It can be applied to intersection graphs of similarly sized fat objects, yielding algorithms with running time 2^{O(n^{1-1/d})} for any fixed dimension d ≥ 2 for many well-known graph problems, including Independent Set, r-Dominating Set for constant r, and Steiner Tree. For most problems, we get improved running times compared to prior work; in some cases, we give the first known subexponential algorithm in geometric intersection graphs. Additionally, most of the obtained algorithms are representation-agnostic, i.e., they work on the graph itself and do not require the geometric representation. Our algorithmic framework is based on a weighted separator theorem and various treewidth techniques. The lower-bound framework is based on a constructive embedding of graphs into d-dimensional grids, and it allows us to derive matching 2^{Ω(n^{1-1/d})} lower bounds under the Exponential Time Hypothesis even in the much more restricted class of d-dimensional induced grid graphs.
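    The stated running time is what one expects from a separator-based divide-and-conquer: branch over the subsets of a balanced separator of size O(n^{1-1/d}) and recurse on the two sides. A minimal numeric sketch (assuming, purely for illustration, a separator of size c·n^{1-1/d} and a 2/3-balanced split; the constants are not taken from the paper) checks that such a recurrence indeed solves to 2^{O(n^{1-1/d})}:

    ```python
    def runtime_exponent(n, d, c=1.0):
        """Solve T(n) <= 2^(c*n^(1-1/d)) * 2*T(2n/3), T(1) = 1,
        and return log2 T(n) (illustrative recurrence only)."""
        if n <= 1:
            return 0.0
        # one separator enumeration plus two recursive calls
        return c * n ** (1 - 1 / d) + 1 + runtime_exponent(2 * n / 3, d, c)

    # For d = 2, log2 T(n) should scale like n^(1-1/2) = sqrt(n):
    ratios = [runtime_exponent(n, 2) / n ** 0.5 for n in (10**3, 10**4, 10**5)]
    # the ratios approach a constant, so log2 T(n) = Theta(sqrt(n))
    ```

    The geometric decay of the separator sizes down the recursion tree is what keeps the exponent at O(n^{1-1/d}) rather than O(n^{1-1/d} log n).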

    Dynamic set cover : improved amortized and worst-case update time

    In the dynamic minimum set cover problem, a challenge is to minimize the update time while guaranteeing close to the optimal min(O(log n), f) approximation factor. (Throughout, m, n, f, and C are parameters denoting the maximum number of sets, the number of elements, the frequency, and the cost range.) In the high-frequency range, when f = Ω(log n), this was achieved by a deterministic O(log n)-approximation algorithm with O(f log n) amortized update time [Gupta et al. STOC'17]. In the low-frequency range, the line of work by Gupta et al. [STOC'17], Abboud et al. [STOC'19], and Bhattacharya et al. [ICALP'15, IPCO'17, FOCS'19] led to a deterministic (1 + ε)f-approximation algorithm with O(f log(Cn)/ε²) amortized update time. In this paper we improve the latter update time and provide the first bounds that subsume (and sometimes improve) the state-of-the-art dynamic vertex cover algorithms. We obtain: (1) a (1 + ε)f-approximation ratio in O(f log²(Cn)/ε³) worst-case update time: no non-trivial worst-case update time was previously known for dynamic set cover. Our bound subsumes and improves by a logarithmic factor the O(log³ n/poly(ε)) worst-case update time for unweighted dynamic vertex cover (i.e., when f = 2 and C = 1) by Bhattacharya et al. [SODA'17]. (2) a (1 + ε)f-approximation ratio in O((f²/ε³) + (f/ε²) log C) amortized update time: this result improves the previous O(f log(Cn)/ε²) update-time bound for most values of f in the low-frequency range, i.e., whenever f = o(log n). It is the first bound that is independent of m and n. It subsumes the constant amortized update time of Bhattacharya and Kulkarni [SODA'19] for unweighted dynamic vertex cover (i.e., when f = 2 and C = 1). These results are achieved by leveraging the approximate complementary slackness and background scheduler techniques, which were previously used in the local update scheme for dynamic vertex cover. Our main technical contribution is to adapt these techniques within the global update scheme of Bhattacharya et al. [FOCS'19] for the dynamic set cover problem.
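    For background on the f-approximation factor that these dynamic algorithms maintain, the static bound follows from complementary slackness in the set cover LP: raising the dual of each uncovered element until some containing set becomes tight, and taking all tight sets, costs at most f times the dual value and hence at most f·OPT. The sketch below is a hypothetical static illustration of that primal-dual argument, not the paper's dynamic algorithm:

    ```python
    def f_approx_set_cover(sets, costs, universe):
        """Static primal-dual f-approximation (illustrative sketch).

        sets:  dict mapping set name -> frozenset/set of elements
        costs: dict mapping set name -> nonnegative cost
        Returns a cover whose cost is at most f * OPT, where f is the
        maximum number of sets any element appears in.
        """
        slack = dict(costs)          # remaining (cost - dual load) per set
        cover, covered = [], set()
        for e in universe:
            if e in covered:
                continue
            containing = [s for s in sets if e in sets[s]]
            # raise the dual y_e until the cheapest containing set is tight
            y = min(slack[s] for s in containing)
            for s in containing:
                slack[s] -= y
                if slack[s] == 0 and s not in cover:
                    cover.append(s)
                    covered |= set(sets[s])
        return cover
    ```

    Each picked set is tight (its cost is fully paid by duals), and each dual y_e is charged to at most f sets, which is exactly the approximate-complementary-slackness accounting the abstract refers to.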

    LIPIcs, Volume 244, ESA 2022, Complete Volume


    Monte Carlo methods for combining sample approximations of distributions

    Combining several (sample approximations of) distributions, which we term sub-posteriors, into a single distribution proportional to their product is a common challenge in statistics and data science. For instance, this can occur in distributed `big data' problems, tempering problems, or when working under multi-party privacy constraints. Many existing approaches resort to approximating the individual sub-posteriors out of practical necessity, then finding either an analytical approximation or a sample approximation of the resulting (product-pooled) posterior. The quality of the posterior approximation for these approaches is poor when the sub-posteriors fall outside a narrow range of distributional forms, such as being approximately Gaussian. Recently, a Fusion approach has been proposed which finds a direct and exact Monte Carlo approximation of the posterior (as opposed to the sub-posteriors), circumventing the drawbacks of approximate approaches. Unfortunately, existing Fusion approaches have a number of computational limitations, particularly when unifying a large number of sub-posteriors or when the sub-posteriors exhibit large correlation. In this thesis, we generalise the theory underpinning existing Fusion approaches and embed the resulting methodology within a recursive divide-and-conquer sequential Monte Carlo paradigm. This ultimately leads to a competitive Fusion approach which is appreciably more robust and scalable in a variety of practical settings.
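    The approximately Gaussian case mentioned above is the one setting where product pooling has a simple closed form: the product of Gaussian densities is proportional to a Gaussian whose precision is the sum of the precisions. A minimal sketch (hypothetical helper, univariate case only) of what approximate approaches exploit:

    ```python
    def pool_gaussians(means, variances):
        """Exact product-pooling of univariate Gaussian sub-posteriors:
        N(m1, v1) * ... * N(mC, vC) is proportional to N(m, v) with
        precision 1/v = sum of precisions and precision-weighted mean m."""
        precisions = [1.0 / v for v in variances]
        v = 1.0 / sum(precisions)
        m = v * sum(p * mu for p, mu in zip(precisions, means))
        return m, v

    m, v = pool_gaussians([0.0, 2.0], [1.0, 1.0])
    # m == 1.0, v == 0.5: two unit-variance sub-posteriors pool to
    # their average with halved variance
    ```

    When the sub-posteriors are only sample approximations and non-Gaussian, no such closed form exists, which is precisely the gap the Fusion methodology targets.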

    Fictitious boundary and penalization methods for treatment of rigid objects in incompressible flows

    The Fictitious Boundary Method (FBM) and the Penalty Method (PM) for solving the incompressible Navier-Stokes equations, modeling steady or unsteady incompressible flow around solid, rigid, non-deformable objects, are presented, numerically analyzed, and compared in this thesis. The proposed methods are finite element methods to simulate incompressible flows with small-scale time-(in)dependent geometrical details. The FBM, described and already validated in [1, 43, 48], is based on a finite element background grid which covers the whole computational domain and is independent of the shape, number, and size of any solid obstacle contained inside. The fluid part is computed by a multigrid finite element solver, while the behavior of the solid part is governed by the mechanics principles governing motion and fluid-solid, solid-solid, and solid-wall collisions. A new treatment of imposing the Dirichlet boundary conditions for the case of immersed rigid boundary objects is proposed by using the penalization method as a more general framework than the FBM, but containing it as a special case. The new PM approach has a stronger mathematical background. In contrast to the FBM, the PM does not require a direct modification of, or artificial techniques applied to, the matrix of the system of equations. A pairing of the penalty method with multigrid solvers is used, while the computational domain is fixed and needs no re-meshing during the simulations. However, the degree of geometrical detail that the coarse mesh resolves has an impact on the numerical results, a fact which will be investigated and clarified in this thesis. The presented method is a finite element method, easy to incorporate into standard CFD codes, for simulating particulate flow or, in general, flows with immersed time-(in)dependent and complicated shaped objects.
The aim is to analyze and validate the penalty method and to compare it, qualitatively and quantitatively, with the already validated FBM regarding accuracy of the solution, efficiency, robustness, and solver behavior. Different techniques to avoid the numerical difficulties that arise when using the penalty method will be described and analyzed in detail.
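    The penalization idea can be illustrated on a 1D model problem: adding a large reaction term (1/ε)·χ_solid·u to the discrete operator drives the solution toward zero inside the obstacle without any re-meshing, since the same fixed grid covers fluid and solid alike. A minimal finite-difference sketch (illustrative parameters and obstacle, not taken from the thesis):

    ```python
    import numpy as np

    # 1D model: -u'' + (1/eps) * chi_solid * u = 1 on (0, 1), u(0) = u(1) = 0.
    # The penalty term enforces u ~ 0 inside the "solid" region [0.4, 0.6]
    # while the mesh itself never changes.
    n, eps = 200, 1e-8
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    solid = (x >= 0.4) & (x <= 0.6)          # indicator of the rigid obstacle

    # standard second-difference Laplacian with homogeneous Dirichlet BCs
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    A += np.diag(solid / eps)                # penalization of the solid cells
    u = np.linalg.solve(A, np.ones(n))
    # u is O(eps) inside the obstacle and a regular Poisson solution outside
    ```

    The trade-off the thesis investigates appears already here: the smaller ε is, the better the Dirichlet condition is enforced, but the worse the conditioning of the penalized system becomes, which motivates pairing the method with robust multigrid solvers.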

    Measurement based fault tolerant error correcting quantum codes on foliated cluster states


    In a network state of mind

    Scheltens, P. [Promotor]; Stam, C.J. [Promotor]; Flier, W.M. van der [Copromotor]

    Invariant Manifolds for Physical and Chemical Kinetics
