
    Parameterizing by the Number of Numbers

    The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility (ILPF) to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the following problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
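    The "number of numbers" parameter admits a compact illustration. The sketch below (a hypothetical helper, not the paper's algorithm) collapses a Subset Sum instance to its distinct values and multiplicities, so that feasibility becomes a search over a vector of bounded integer multiplicities -- the kind of instance the cited ILPF result handles in FPT time; the brute-force search here is only for illustration.

from collections import Counter
from itertools import product

def subset_sum_by_distinct_values(numbers, target):
    """Decide Subset Sum via the number-of-numbers view: group the input
    multiset into distinct values a_i with multiplicities m_i, then look for
    integers 0 <= x_i <= m_i with sum(x_i * a_i) == target. (Illustrative
    brute force; the paper instead relies on ILP feasibility.)"""
    counts = Counter(numbers)                 # distinct value -> multiplicity
    values = list(counts)
    for choice in product(*(range(counts[v] + 1) for v in values)):
        if sum(x * v for x, v in zip(choice, values)) == target:
            return True
    return False

# Example: six input numbers but only three distinct values.
print(subset_sum_by_distinct_values([5, 5, 5, 7, 7, 9], 19))  # True (5 + 5 + 9)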

    Band gap bowing in NixMg1-xO.

    Epitaxial transparent oxide NixMg1-xO (0 ≤ x ≤ 1) thin films were grown on MgO(100) substrates by pulsed laser deposition. High-resolution synchrotron X-ray diffraction and high-resolution transmission electron microscopy analysis indicate that the thin films are compositionally and structurally homogeneous, forming a completely miscible solid solution. Nevertheless, the composition dependence of the NixMg1-xO optical band gap shows a strong non-parabolic bowing with a discontinuity at dilute NiO concentrations of x  0.074 and account for the anomalously large band gap narrowing in the NixMg1-xO solid solution system
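    As background for the "bowing" terminology (the standard alloy band-gap relation, not a result of this study): the conventional single-parameter description of the composition-dependent band gap of the solid solution is parabolic in x,

    E_g(x) = (1 - x) E_g(MgO) + x E_g(NiO) - b x(1 - x),

    where b is the bowing parameter; "non-parabolic bowing" refers to a deviation of the measured E_g(x) from this form.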

    On the (non-)existence of polynomial kernels for P_l-free edge modification problems

    Given a graph G = (V,E) and an integer k, an edge modification problem for a graph property P consists in deciding whether there exists a set of edges F of size at most k such that the graph H = (V, E \vartriangle F), where \vartriangle denotes the symmetric difference, satisfies the property P. In the P edge-completion problem, the set F of edges is constrained to be disjoint from E; in the P edge-deletion problem, F is a subset of E; no constraint is imposed on F in the P edge-editing problem. A number of optimization problems can be expressed in terms of graph modification problems, which have been extensively studied in the context of parameterized complexity. When parameterized by the size k of the edge set F, it has been proved that if P is a hereditary property characterized by a finite set of forbidden induced subgraphs, then the three P edge-modification problems are FPT. It was then natural to ask whether these problems also admit a polynomial-size kernel. Using recent lower bound techniques, Kratsch and Wahlström answered this question negatively. However, the problem remains open on many natural graph classes characterized by forbidden induced subgraphs. Kratsch and Wahlström asked whether the result holds when the forbidden subgraphs are paths or cycles, and pointed out that the problem is already open in the case of P_4-free graphs (i.e., cographs). This paper provides positive and negative results in that line of research. We prove that parameterized cograph edge modification problems have cubic vertex kernels, whereas polynomial kernels are unlikely to exist for the P_l-free and C_l-free edge-deletion problems for large enough l.
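    As a concrete illustration of the three modification variants (a minimal sketch with hypothetical helper names, not taken from the paper): the modified graph keeps exactly the edges lying in one of E and F but not both, and the variants differ only in the constraint placed on F.

def apply_modification(E, F):
    """Edge set of H = (V, E △ F): the edges in exactly one of E and F."""
    return E ^ F                  # symmetric difference of two sets of frozenset pairs

def is_valid_completion(E, F):    # P edge-completion: F must be disjoint from E
    return E.isdisjoint(F)

def is_valid_deletion(E, F):      # P edge-deletion: F must be a subset of E
    return F <= E

# Edge-editing imposes no constraint on F; in every variant the question is whether
# some F with |F| <= k makes the modified graph satisfy the property P.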

    Towards Work-Efficient Parallel Parameterized Algorithms

    Parallel parameterized complexity theory studies how fixed-parameter tractable (fpt) problems can be solved in parallel. Previous theoretical work focused on parallel algorithms that are very fast in principle, but did not take into account that when we only have a small number of processors (between 2 and, say, 1024), it is more important that the parallel algorithms are work-efficient. In the present paper we investigate how work-efficient fpt algorithms can be designed. We review standard methods from fpt theory, like kernelization, search trees, and interleaving, and prove trade-offs for them between work efficiency and runtime improvements. This results in a toolbox for developing work-efficient parallel fpt algorithms. Comment: Prior full version of the paper that will appear in Proceedings of the 13th International Conference and Workshops on Algorithms and Computation (WALCOM 2019), February 27 - March 02, 2019, Guwahati, India. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-10564-8_2
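    One of the standard fpt techniques the paper revisits is the bounded search tree; the classic sequential Vertex Cover branching below (a textbook example, not the paper's parallel algorithm) is the kind of baseline whose total work, O(2^k poly(n)), a work-efficient parallelisation should not exceed by much.

def vertex_cover_branching(edges, k):
    """Depth-bounded search tree for Vertex Cover: any cover must contain an
    endpoint of every edge, so pick an uncovered edge {u, v} and branch on
    taking u or taking v. At most 2^k leaves are explored."""
    if k < 0:
        return False
    edge = next(iter(edges), None)
    if edge is None:                      # no uncovered edge remains
        return True
    u, v = edge
    for w in (u, v):                      # branch: put w into the cover
        remaining = {e for e in edges if w not in e}
        if vertex_cover_branching(remaining, k - 1):
            return True
    return False

# A triangle has no vertex cover of size 1 but has one of size 2.
triangle = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}
print(vertex_cover_branching(triangle, 1), vertex_cover_branching(triangle, 2))  # False True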

    COMPRESSIVE BEHAVIOR OF CONCRETE COLUMNS AXIALLY LOADED BEFORE CFRP-WRAPPING. REMARKS BY EXPERIMENTAL-NUMERICAL INVESTIGATION

    Strengthening of existing concrete columns with Fiber Reinforced Polymers (FRP) generally results in a satisfactory improvement of the structural member in terms of load and strain capacity. A reliable prediction of the capacity obtained by these reinforcement strategies requires proper knowledge of the load-strain response of the confined concrete elements. However, so far, the available design methods and technical codes do not consider the effect of the possible presence of service loads at the moment of application of the reinforcement, and therefore the compressive behavior of concrete confined under preload is still unclear. In this paper, the effect of sustained loads on the compressive behavior of concrete columns CFRP-confined while preloaded is analyzed. Experimental tests were performed on circular concrete columns confined under low, medium and high preload levels before wrapping and subsequently loaded until failure, observing the differences with respect to the standard compressive stress-strain response of FRP-confined concrete. A finite element (FE) model is also developed using ABAQUS software to simulate the physical scheme of the experimental tests. The accuracy of the model is validated through comparison with the experimental results.

    A New Lower Bound on the Maximum Number of Satisfied Clauses in Max-SAT and its Algorithmic Applications

    A pair of unit clauses is called conflicting if it is of the form (x), (\bar{x}). A CNF formula is unit-conflict free (UCF) if it contains no pair of conflicting unit clauses. Lieberherr and Specker (J. ACM 28, 1981) showed that for each UCF CNF formula with m clauses we can simultaneously satisfy at least \varphi m clauses, where \varphi = (\sqrt{5}-1)/2. We improve the Lieberherr-Specker bound by showing that for each UCF CNF formula F with m clauses we can find, in polynomial time, a subformula F' with m' clauses such that we can simultaneously satisfy at least \varphi m + (1-\varphi)m' + (2-3\varphi)n''/2 clauses (in F), where n'' is the number of variables in F which are not in F'. We consider two parameterized versions of MAX-SAT, where the parameter is the number of satisfied clauses above the bounds m/2 and m(\sqrt{5}-1)/2. The former bound is tight for general formulas, and the latter is tight for UCF formulas. Mahajan and Raman (J. Algorithms 31, 1999) showed that every instance of the first parameterized problem can be transformed, in polynomial time, into an equivalent one with at most 6k+3 variables and 10k clauses. We improve this to 4k variables and (2\sqrt{5}+4)k clauses. Mahajan and Raman conjectured that the second parameterized problem is fixed-parameter tractable (FPT). We show that the problem is indeed FPT by describing a polynomial-time algorithm that transforms any problem instance into an equivalent one with at most (7+3\sqrt{5})k variables. Our results are obtained using our improvement of the Lieberherr-Specker bound above.
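    The UCF bound can be checked directly on small instances. The brute-force sketch below (illustrative only, with hypothetical helper names; the paper's algorithms run in polynomial time) computes the maximum number of simultaneously satisfiable clauses and compares it with \varphi m.

from itertools import product

PHI = (5 ** 0.5 - 1) / 2   # the Lieberherr-Specker constant, about 0.618

def max_satisfied(clauses, n_vars):
    """Maximum number of simultaneously satisfiable clauses, by exhaustive search.
    A clause is a tuple of literals; literal +i means x_i and -i means its negation."""
    best = 0
    for assignment in product([False, True], repeat=n_vars):
        satisfied = sum(
            any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        best = max(best, satisfied)
    return best

# A small unit-conflict-free formula: its unit clauses (x1), (x2) do not conflict.
formula = [(1,), (2,), (-1, -2), (-1, 2), (1, -2)]
print(max_satisfied(formula, 2), ">=", PHI * len(formula))  # 4 >= 3.09...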

    Cell wall characteristics during sexual reproduction of Mougeotia sp. (Zygnematophyceae) revealed by electron microscopy, glycan microarrays and Raman spectroscopy

    Mougeotia spp. collected from field samples were investigated for their conjugation morphology by light-, fluorescence-, scanning- and transmission electron microscopy. During scalariform conjugation, the extragametangial zygospores were initially surrounded by a thin cell wall that developed into a multi-layered zygospore wall. Maturing zygospores turned dark brown and were filled with storage compounds such as lipids and starch. While M. parvula had a smooth surface, M. disjuncta had a punctate surface structure and a prominent suture. The zygospore wall consisted of a polysaccharide-rich endospore, followed by a thin layer with a lipid-like appearance, a massive electron-dense mesospore and a very thin exospore composed of polysaccharides. Glycan microarray analysis of zygospores of different developmental stages revealed the occurrence of pectins and hemicelluloses, mostly composed of homogalacturonan (HG), xyloglucans, xylans, arabinogalactan proteins and extensins. In situ localization with the probe OG7-13AF 488 labelled HG in young zygospore walls, vegetative filaments and, most prominently, conjugation tubes and cross walls. Raman imaging showed the distribution of proteins, lipids, carbohydrates and aromatic components of the mature zygospore with a spatial resolution of ~250 nm. The carbohydrate nature of the endo- and exospore was confirmed, and in between them an enrichment of lipids and aromatic components, probably algaenan or a sporopollenin-like material, was found. Taken together, these results indicate that during zygospore formation, reorganizations of the cell walls occurred, leading to a resistant and protective structure.

    Improved FPT algorithms for weighted independent set in bull-free graphs

    Very recently, Thomassé, Trotignon and Vušković [WG 2014] have given an FPT algorithm for Weighted Independent Set in bull-free graphs parameterized by the weight of the solution, running in time 2^{O(k^5)} \cdot n^9. In this article we improve this running time to 2^{O(k^2)} \cdot n^7. As a byproduct, we also improve the previous Turing-kernel for this problem from O(k^5) to O(k^2). Furthermore, for the subclass of bull-free graphs without holes of length at most 2p-1 for p \geq 3, we speed up the running time to 2^{O(k \cdot k^{\frac{1}{p-1}})} \cdot n^7. As p grows, this running time is asymptotically tight in terms of k, since we prove that for each integer p \geq 3, Weighted Independent Set cannot be solved in time 2^{o(k)} \cdot n^{O(1)} in the class of \{bull, C_4, \ldots, C_{2p-1}\}-free graphs unless the ETH fails. Comment: 15 pages

    Vertex Cover Kernelization Revisited: Upper and Lower Bounds for a Refined Parameter

    An important result in the study of polynomial-time preprocessing shows that there is an algorithm which, given an instance (G,k) of Vertex Cover, outputs an equivalent instance (G',k') in polynomial time with the guarantee that G' has at most 2k' vertices (and thus O((k')^2) edges) with k' <= k. Using the terminology of parameterized complexity we say that k-Vertex Cover has a kernel with 2k vertices. There is complexity-theoretic evidence that both 2k vertices and Theta(k^2) edges are optimal for the kernel size. In this paper we consider the Vertex Cover problem with a different parameter, the size fvs(G) of a minimum feedback vertex set for G. This refined parameter is structurally smaller than the parameter k associated with the vertex covering number vc(G), since fvs(G) <= vc(G) and the difference can be arbitrarily large. We give a kernel for Vertex Cover with a number of vertices that is cubic in fvs(G): an instance (G,X,k) of Vertex Cover, where X is a feedback vertex set for G, can be transformed in polynomial time into an equivalent instance (G',X',k') such that |V(G')| <= 2k and |V(G')| <= O(|X'|^3). A similar result holds when the feedback vertex set X is not given along with the input. In sharp contrast we show that the Weighted Vertex Cover problem does not have a polynomial kernel when parameterized by the cardinality of a given vertex cover of the graph unless NP is in coNP/poly and the polynomial hierarchy collapses to the third level. Comment: Published in "Theory of Computing Systems" as an Open Access publication.
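    For readers unfamiliar with kernelization, the classic Buss-style reduction for the standard parameter k (a generic textbook illustration, not the cubic-in-fvs(G) kernel constructed in this paper) already shows the flavour of such preprocessing: exhaustively apply two rules and either reject or end up with a provably small instance.

def buss_kernel(adj, k):
    """Buss reduction rules for Vertex Cover with parameter k.
    Rule 1: a vertex of degree > k lies in every cover of size <= k, so take it.
    Rule 2: isolated vertices are irrelevant and can be removed.
    Afterwards the maximum degree is <= k, so more than k*k remaining edges means "no"."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    changed = True
    while changed and k >= 0:
        changed = False
        for v in list(adj):
            if len(adj[v]) > k:                   # Rule 1: high-degree vertex
                for u in adj.pop(v):
                    adj[u].discard(v)
                k -= 1
                changed = True
                break
            if not adj[v]:                        # Rule 2: isolated vertex
                del adj[v]
                changed = True
    edges = sum(len(ns) for ns in adj.values()) // 2
    still_feasible = k >= 0 and edges <= k * k
    return adj, k, still_feasible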

    Results and recommendations from an intercomparison of six Hygroscopicity-TDMA systems

    The performance of six custom-built Hygroscopicity-Tandem Differential Mobility Analyser (H-TDMA) systems was investigated in the frame of an international calibration and intercomparison workshop held in Leipzig, February 2006. The goal of the workshop was to harmonise H-TDMA measurements and develop recommendations for atmospheric measurements and their data evaluation. The H-TDMA systems were compared in terms of the sizing of dry particles, relative humidity (RH) uncertainty, and consistency in the determination of number fractions of different hygroscopic particle groups. The experiments were performed in an air-conditioned laboratory using ammonium sulphate particles or an external mixture of ammonium sulphate and soot particles. The sizing of dry particles by the six H-TDMA systems was within 0.2 to 4.2% of the selected particle diameter, depending on the investigated size and the individual system. Measurements of ammonium sulphate aerosol found deviations equivalent to 4.5% RH from the set point of 90% RH compared to results from previous experiments in the literature. The number fractions of particles within the clearly separated growth factor modes of a laboratory-generated, externally mixed aerosol were also evaluated. The data from the H-TDMAs were analysed with a single fitting routine to investigate differences caused by the different data evaluation procedures used for each H-TDMA. The differences between the H-TDMAs were reduced from +12/-13% to +8/-6% when the same analysis routine was applied. We conclude that a common data evaluation procedure to determine number fractions of externally mixed aerosols will improve the comparability of H-TDMA measurements. It is recommended to ensure proper calibration of all flow, temperature and RH sensors in the systems. It is most important to thermally insulate the aerosol humidification unit and the second DMA and to monitor these temperatures to an accuracy of 0.2 degrees C. For the correct determination of external mixtures, it is necessary to take into account size-dependent losses due to diffusion in the plumbing between the DMAs and in the aerosol humidification unit. Peer reviewed
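    For context, the quantity distributed across the "growth factor modes" mentioned above is the hygroscopic growth factor measured by an H-TDMA (standard definition, not specific to this study):

    g(RH) = D_p(RH) / D_p(dry),

    the ratio of the mobility diameter selected after humidification to the dry diameter selected by the first DMA; an externally mixed aerosol shows several distinct modes in the measured g distribution.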