1,847 research outputs found

    Independent predictors of breast malignancy in screen-detected microcalcifications: biopsy results in 2545 cases

    Background: Mammographic microcalcifications are associated with many benign lesions, ductal carcinoma in situ (DCIS) and invasive cancer. Careful assessment criteria are required to minimise benign biopsies while optimising cancer diagnosis. We wished to evaluate the assessment outcomes of microcalcifications biopsied in the setting of population-based breast cancer screening. Methods: Between January 1992 and December 2007, biopsied cases in which microcalcifications were the only imaging abnormality were included. Patient demographics, imaging features and final histology were subjected to statistical analysis to determine independent predictors of malignancy. Results: In all, 2545 lesions, with a mean diameter of 21.8 mm (s.d. 23.8 mm) and observed in patients with a mean age of 57.7 years (s.d. 8.4 years), were included. Using the grading system adopted by the RANZCR, the grade was 3 in 47.7%, 4 in 28.3% and 5 in 24.0% of cases. After assessment, 1220 lesions (47.9%) were malignant (809 DCIS only, 411 DCIS with invasive cancer) and 1325 (52.1%) were non-malignant, including 122 (4.8%) premalignant lesions (lobular carcinoma in situ, atypical lobular hyperplasia and atypical ductal hyperplasia). Only 30.9% of the DCIS was of low grade. Mammographic extent of microcalcifications >15 mm, imaging grade, their pattern of distribution, presence of a palpable mass and detection after the first screening episode showed significant univariate associations with malignancy. On multivariate modeling, imaging grade, mammographic extent of microcalcifications >15 mm, palpable mass and screening episode were retained as independent predictors of malignancy. Radiological grade had the largest effect, with lesions of grade 4 and 5 being 2.2 and 3.3 times more likely to be malignant, respectively, than grade 3 lesions. Conclusion: The radiological grading scheme used throughout Australia and parts of Europe is validated as a useful system of stratifying microcalcifications into groups with significantly different risks of malignancy. Biopsy assessment of appropriately selected microcalcifications is an effective method of detecting invasive breast cancer and DCIS, particularly of non-low-grade subtypes.
    G Farshid, T Sullivan, P Downey, P G Gill and S Pieters
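
    As a generic illustration of how such independent predictors and adjusted odds ratios are typically obtained (not the authors' analysis or data; every variable name and value below is hypothetical), a multivariate logistic regression might look as follows.

```python
# Illustrative sketch only: how adjusted odds ratios for malignancy are
# typically derived from a multivariate logistic model.  The column names and
# the synthetic data below are hypothetical, not the authors' dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "malignant": rng.integers(0, 2, n),        # biopsy outcome (0/1)
    "grade": rng.choice([3, 4, 5], n),         # radiological (RANZCR) grade
    "extent_gt_15mm": rng.integers(0, 2, n),   # mammographic extent > 15 mm
    "palpable_mass": rng.integers(0, 2, n),
    "first_screen": rng.integers(0, 2, n),     # detected at first screening episode
})

model = smf.logit(
    "malignant ~ C(grade, Treatment(reference=3)) + extent_gt_15mm"
    " + palpable_mass + first_screen",
    data=df,
).fit(disp=False)

# Exponentiated coefficients are the adjusted odds ratios, e.g. grade 4 or 5
# versus the grade 3 reference (the study reports roughly 2.2x and 3.3x).
print(np.exp(model.params))
```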

    Optimal Data Reduction for Graph Coloring Using Low-Degree Polynomials

    The theory of kernelization can be used to rigorously analyze data reduction for graph coloring problems. Here, the aim is to reduce a q-Coloring input to an equivalent but smaller input whose size is provably bounded in terms of structural properties, such as the size of a minimum vertex cover. In this paper we settle two open problems about data reduction for q-Coloring. First, we use a recent technique of finding redundant constraints by representing them as low-degree polynomials to obtain a kernel of bitsize O(k^(q-1) log k) for q-Coloring parameterized by Vertex Cover for any q >= 3. This size bound is optimal up to k^(o(1)) factors assuming NP is not a subset of coNP/poly, and improves on the previous-best kernel of size O(k^q). Our second result shows that 3-Coloring does not admit non-trivial sparsification: assuming NP is not a subset of coNP/poly, the parameterization by the number of vertices n admits no (generalized) kernel of size O(n^(2-e)) for any e > 0. Previously, such a lower bound was only known for coloring with q >= 4 colors.
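
    The core trick (finding redundant constraints via low-degree polynomials) can be illustrated independently of the specific q-Coloring encoding used in the paper: write each constraint as a polynomial that evaluates to zero under every assignment satisfying it; a constraint whose polynomial is a linear combination of the retained ones is then automatically satisfied and can be dropped, and the number of monomials of degree at most d bounds how many constraints survive. The sketch below uses hypothetical polynomials and sympy; it is not the authors' construction.

```python
# Minimal sketch of the redundant-constraint idea, assuming every constraint is
# written as a low-degree polynomial that evaluates to 0 under each assignment
# satisfying it.  The polynomials below are hypothetical; this is not the
# paper's actual q-Coloring encoding.  A constraint whose coefficient vector is
# a linear combination of the kept ones is implied by them, so the number of
# monomials of degree <= d bounds how many constraints must be kept.
from sympy import symbols, Poly, Matrix

x, y, z = symbols("x y z")

constraints = [
    Poly(x*y - 1, x, y, z),
    Poly(y*z - 1, x, y, z),
    Poly(x*y + y*z - 2, x, y, z),   # sum of the first two, hence redundant
    Poly(x*z - 1, x, y, z),
]

# Turn each polynomial into a coefficient vector indexed by the occurring monomials.
monomials = sorted({m for p in constraints for m in p.as_dict()})
def coeff_vector(p):
    terms = p.as_dict()
    return [terms.get(m, 0) for m in monomials]

kept, rows = [], []
for p in constraints:
    if Matrix(rows + [coeff_vector(p)]).rank() > len(rows):   # linearly independent
        rows.append(coeff_vector(p))
        kept.append(p)

print(f"kept {len(kept)} of {len(constraints)} constraints:")
for p in kept:
    print("  ", p.as_expr())
```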

    Sparsification Upper and Lower Bounds for Graph Problems and Not-All-Equal SAT

    We present several sparsification lower and upper bounds for classic problems in graph theory and logic. For the problems 4-Coloring, (Directed) Hamiltonian Cycle, and (Connected) Dominating Set, we prove that there is no polynomial-time algorithm that reduces any n-vertex input to an equivalent instance, of an arbitrary problem, with bitsize O(n^{2-epsilon}) for any epsilon > 0, unless NP is a subset of coNP/poly and the polynomial-time hierarchy collapses. These results imply that existing linear-vertex kernels for k-Nonblocker and k-Max Leaf Spanning Tree (the parametric duals of (Connected) Dominating Set) cannot be improved to have O(k^{2-epsilon}) edges, unless NP is a subset of coNP/poly. We also present a positive result and exhibit a non-trivial sparsification algorithm for d-Not-All-Equal-SAT. We give an algorithm that reduces an n-variable input with clauses of size at most d to an equivalent input with O(n^{d-1}) clauses, for any fixed d. Our algorithm is based on a linear-algebraic proof of Lovász that bounds the number of hyperedges in critically 3-chromatic d-uniform n-vertex hypergraphs by (n choose d-1). We show that our kernel is tight under the assumption that NP is not a subset of coNP/poly.

    Optimal Sparsification for Some Binary CSPs Using Low-Degree Polynomials

    This paper analyzes to what extent it is possible to efficiently reduce the number of clauses in NP-hard satisfiability problems, without changing the answer. Upper and lower bounds are established using the concept of kernelization. Existing results show that if NP is not contained in coNP/poly, no efficient preprocessing algorithm can reduce n-variable instances of CNF-SAT with d literals per clause, to equivalent instances with O(n^{d-epsilon}) bits for any epsilon > 0. For the Not-All-Equal SAT problem, a compression to size tilde-O(n^{d-1}) exists. We put these results in a common framework by analyzing the compressibility of binary CSPs. We characterize constraint types based on the minimum degree of multivariate polynomials whose roots correspond to the satisfying assignments, obtaining (nearly) matching upper and lower bounds in several settings. Our lower bounds show that not just the number of constraints, but also the encoding size of individual constraints plays an important role. For example, for Exact Satisfiability with unbounded clause length it is possible to efficiently reduce the number of constraints to n+1, yet no polynomial-time algorithm can reduce to an equivalent instance with O(n^{2-epsilon}) bits for any epsilon > 0, unless NP is contained in coNP/poly
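
    The claim about Exact Satisfiability follows from a linear-algebra view: a clause demanding exactly one true literal is the equation sum(x_i) + sum(1 - x_j) = 1 over the rationals, and any 0/1 assignment satisfying a maximal linearly independent subset of these equations (at most n+1 of them) satisfies all the rest. The sketch below illustrates this on a tiny hypothetical instance; it is not the authors' code.

```python
# Sketch of the linear-algebra argument for Exact Satisfiability (hypothetical
# instance, DIMACS-style signed literals; not the authors' code).  A clause
# asking for exactly one true literal is the equation sum(x_i) + sum(1 - x_j) = 1
# over the rationals, so a maximal linearly independent subset of clauses -- at
# most n + 1 of them -- already implies every other clause on 0/1 assignments.
from sympy import Matrix

def clause_to_row(clause, n):
    """Row (a_1, ..., a_n, c) encoding sum_i a_i * x_i = c for one clause."""
    coeffs, rhs = [0] * n, 1
    for lit in clause:
        var = abs(lit) - 1
        if lit > 0:
            coeffs[var] += 1      # positive literal contributes x
        else:
            coeffs[var] -= 1      # negated literal contributes 1 - x
            rhs -= 1
    return coeffs + [rhs]

def sparsify_exact_sat(clauses, n):
    """Keep at most n + 1 clauses whose equations span all the others."""
    kept, rows = [], []
    for clause in clauses:
        row = clause_to_row(clause, n)
        if Matrix(rows + [row]).rank() > len(rows):
            rows.append(row)
            kept.append(clause)
    return kept

# Tiny hypothetical instance over variables x1..x4: [1, -3] is implied by the
# first two clauses and gets dropped.
clauses = [[1, 2], [2, 3], [1, -3], [3, 4], [1, 2, 3, 4]]
print(sparsify_exact_sat(clauses, n=4))
```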

    Polynomial Kernels for Hitting Forbidden Minors under Structural Parameterizations

    We investigate polynomial-time preprocessing for the problem of hitting forbidden minors in a graph, using the framework of kernelization. For a fixed finite set of graphs F, the F-Deletion problem is the following: given a graph G and integer k, is it possible to delete k vertices from G to ensure the resulting graph does not contain any graph from F as a minor? Earlier work by Fomin, Lokshtanov, Misra, and Saurabh [FOCS'12] showed that when F contains a planar graph, an instance (G,k) can be reduced in polynomial time to an equivalent one of size k^{O(1)}. In this work we focus on structural measures of the complexity of an instance, with the aim of giving nontrivial preprocessing guarantees for instances whose solutions are large. Motivated by several impossibility results, we parameterize the F-Deletion problem by the size of a vertex modulator whose removal results in a graph of constant treedepth eta. We prove that for each set F of connected graphs and constant eta, the F-Deletion problem parameterized by the size of a treedepth-eta modulator has a polynomial kernel. Our kernelization is fully explicit and does not depend on protrusion reduction or well-quasi-ordering, which are sources of algorithmic non-constructivity in earlier works on F-Deletion. Our main technical contribution is to analyze how models of a forbidden minor in a graph G with modulator X interact with the various connected components of G-X. Using the language of labeled minors, we analyze the fragments of potential forbidden minor models that can remain after removing an optimal F-Deletion solution from a single connected component of G-X. By bounding the number of different types of behavior that can occur by a polynomial in |X|, we obtain a polynomial kernel using a recursive preprocessing strategy. Our results extend earlier work for specific instances of F-Deletion such as Vertex Cover and Feedback Vertex Set, and generalize earlier preprocessing results for F-Deletion parameterized by a vertex cover, which is a treedepth-one modulator.

    Reduction of Coxiella burnetii prevalence by vaccination of goats and sheep, the Netherlands

    Recently, the number of human Q fever cases in the Netherlands increased dramatically. In response to this increase, dairy goats and dairy sheep were vaccinated against Coxiella burnetii. All pregnant dairy goats and dairy sheep in herds positive for Q fever were culled. We identified the effect of vaccination on bacterial shedding by small ruminants. On the day of culling, samples of uterine fluid, vaginal mucus, and milk were obtained from 957 pregnant animals in 13 herds. Prevalence and bacterial load were reduced in vaccinated animals compared with unvaccinated animals. These effects were most pronounced in animals during their first pregnancy. Results indicate that vaccination may reduce bacterial load in the environment and human exposure to C. burnetii

    Coarse Grained Molecular Dynamics Simulations of the Fusion of Vesicles Incorporating Water Channels

    As the dynamics of the cell membrane and the working mechanisms of proteins cannot be readily ascertained at a molecular level, many different hypotheses exist that try to predict and explain these processes, for instance vesicle fusion. We therefore use coarse-grained molecular dynamics simulations to elucidate the fusion mechanism of vesicles. The implementation of this method with hydrophilic and hydrophobic particles is known for its valid representation of bilayers. With a minimalistic approach, using only 3 atom types, 12 atoms per two-tailed phospholipid and incorporating only a bond potential and a Lennard-Jones potential, phospholipid bilayers and vesicles can be simulated with authentic dynamics. We have simulated the spontaneous full fusion of both tiny (6 nm diameter) and larger (13 nm diameter) vesicles. We showed that, without applying constraints to the vesicles, the initial contact between two fusing vesicles, the stalk, is initiated by a bridging lipid tail that extends from the membrane spontaneously. Subsequently, the evolution of the stalk is observed to proceed via two pathways, anisotropic and radial expansion, in accordance with the literature. Contrary to the spherical vesicles of in vitro experiments, the fused vesicles remain tubular, since their internal volume is too small compared to their membrane area. While the lipid bilayer has some permeability for water, it is not high enough to allow the large flux required to equilibrate the vesicle content in the time accessible to our simulations. To increase the membrane permeability, we incorporate proteinaceous water channels by applying the coarse-grained technique to aquaporin. Even though incorporating water channels in the vesicles does significantly increase water permeability, the vesicles do not become spherical; presumably the lipids have to be redistributed as well.
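
    For orientation, only two interaction terms enter such a minimalistic coarse-grained model: a bond potential between bonded beads and a Lennard-Jones potential between non-bonded beads. The sketch below is a generic illustration with hypothetical parameter values, not the force field used in this work.

```python
# Generic sketch of the two interaction terms such a minimalistic coarse-grained
# model uses: a harmonic bond potential between bonded beads and a Lennard-Jones
# potential between non-bonded beads.  Parameter values are hypothetical, not
# the force field used in this work.
import numpy as np

def harmonic_bond(r, k=5000.0, r0=0.47):
    """Bond energy 0.5 * k * (r - r0)^2 for bead separation r (nm)."""
    return 0.5 * k * (r - r0) ** 2

def lennard_jones(r, epsilon=1.0, sigma=0.47):
    """Pair energy 4 * eps * ((sigma/r)^12 - (sigma/r)^6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# Hydrophobicity is typically encoded by making unlike pairs (e.g. tail-water)
# less attractive than like pairs, here simply via a smaller epsilon.
r = np.linspace(0.4, 1.2, 5)
print(harmonic_bond(r))
print(lennard_jones(r, epsilon=1.0))   # like beads, e.g. tail-tail
print(lennard_jones(r, epsilon=0.3))   # unlike beads, e.g. tail-water (hypothetical)
```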

    Review of applications for SIMDEUM, a stochastic drinking water demand model with a small temporal and spatial scale

    Many researchers have developed drinking water demand models with various temporal and spatial scales. A limited number of models is available at a temporal scale of 1 s and a spatial scale of a single home. The reasons for building these models were described in the papers in which the models were introduced, along with a discussion on their potential applications. However, the predicted applications are seldom re-examined. SIMDEUM, a stochastic end-use model for drinking water demand, has often been applied in research and practice since it was developed. We are therefore re-examining its applications in this paper. SIMDEUM's original purpose was to calculate maximum demands in order to design self-cleaning networks. Yet, the model has been useful in many more applications. This paper gives an overview of the many fields of application for SIMDEUM and shows where this type of demand model is indispensable and where it has limited practical value. This overview also leads to an understanding of the requirements for demand models in various applications
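
    To make the "small temporal and spatial scale" concrete, the sketch below generates a toy per-household demand pattern at one-second resolution as a sum of rectangular use events. It only illustrates the kind of output a stochastic end-use model produces; the event statistics are hypothetical and far simpler than SIMDEUM's actual end-use and household models.

```python
# Toy illustration of the kind of output a stochastic end-use demand model
# produces: one household, one-second resolution, water use as a sum of
# rectangular pulses with random start times, durations and intensities.  The
# event statistics are hypothetical and far simpler than SIMDEUM's actual
# end-use and household models.
import numpy as np

rng = np.random.default_rng(42)
SECONDS_PER_DAY = 24 * 3600

# (mean events per day, mean duration [s], intensity [L/s]) per end use -- hypothetical
end_uses = {
    "toilet":  (5, 60, 0.04),
    "shower":  (1, 480, 0.10),
    "kitchen": (10, 30, 0.08),
}

demand = np.zeros(SECONDS_PER_DAY)
for events_per_day, mean_duration, intensity in end_uses.values():
    for _ in range(rng.poisson(events_per_day)):
        start = rng.integers(0, SECONDS_PER_DAY)
        duration = max(1, int(rng.exponential(mean_duration)))
        demand[start:start + duration] += intensity   # numpy clips the slice at day end

print(f"daily use: {demand.sum():.1f} L, peak demand: {demand.max():.3f} L/s")
```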