3,421 research outputs found

    Solving MaxSAT and #SAT on structured CNF formulas

    In this paper we propose a structural parameter of CNF formulas and use it to identify instances of weighted MaxSAT and #SAT that can be solved in polynomial time. Given a CNF formula, we say that a set of clauses is precisely satisfiable if there is some complete assignment satisfying these clauses only. Let the ps-value of the formula be the number of precisely satisfiable sets of clauses. Applying the notion of branch decompositions to CNF formulas and using the ps-value as cut function, we define the ps-width of a formula. For a formula given with a decomposition of polynomial ps-width, we show dynamic programming algorithms solving weighted MaxSAT and #SAT in polynomial time. Combining with results of 'Belmonte and Vatshelle, Graph classes with structured neighborhoods and algorithmic applications, Theor. Comput. Sci. 511: 54-65 (2013)', we get polynomial-time algorithms solving weighted MaxSAT and #SAT for some classes of structured CNF formulas. For example, we get O(m^2(m + n)s) algorithms for formulas F with m clauses, n variables, and size s, if F has a linear ordering of the variables and clauses such that for any variable x occurring in clause C, if x appears before C then any variable between them also occurs in C, and if C appears before x then x also occurs in any clause between them. Note that the class of incidence graphs of such formulas does not have bounded clique-width.
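
    To make the two problems concrete, here is a minimal brute-force sketch in Python that enumerates all complete assignments of a small hand-picked CNF formula and computes #SAT and weighted MaxSAT exactly. The formula, weights, and function names are illustrative assumptions; this exponential-time baseline is not the paper's polynomial-time dynamic programming over a decomposition of bounded ps-width.

    # Toy brute-force baseline for #SAT and weighted MaxSAT (assumed example).
    from itertools import product

    # A clause is a set of literals: a positive integer v means x_v, -v means NOT x_v.
    formula = [{1, -2}, {2, 3}, {-1, -3}]
    weights = [2.0, 1.0, 3.0]          # one weight per clause, used for weighted MaxSAT
    variables = sorted({abs(l) for clause in formula for l in clause})

    def satisfied(clause, assignment):
        """True if at least one literal of the clause is made true by the assignment."""
        return any((l > 0) == assignment[abs(l)] for l in clause)

    num_models = 0      # #SAT: number of assignments satisfying every clause
    best_weight = 0.0   # weighted MaxSAT: maximum total weight of satisfied clauses
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        sat = [satisfied(clause, assignment) for clause in formula]
        if all(sat):
            num_models += 1
        best_weight = max(best_weight, sum(w for w, s in zip(weights, sat) if s))

    print("#SAT models:", num_models, "MaxSAT weight:", best_weight)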

    Efficient Decomposed Learning for Structured Prediction

    Structured prediction is the cornerstone of several machine learning applications. Unfortunately, in structured prediction settings with expressive inter-variable interactions, exact inference-based learning algorithms, e.g. Structural SVMs, are often intractable. We present a new approach, Decomposed Learning (DecL), which performs efficient learning by restricting the inference step to a limited part of the structured output space. We provide characterizations based on the structure, target parameters, and gold labels, under which DecL is equivalent to exact learning. We then show that in real-world settings, where our theoretical assumptions may not completely hold, DecL-based algorithms are significantly more efficient than, and as accurate as, exact learning. Comment: ICML201
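
    The restricted-inference idea can be sketched as follows, under assumptions not taken from the paper: a structured-perceptron update rather than the Structural SVM, binary per-position labels, a toy feature map, and a Hamming-ball neighborhood around the gold labeling as the restricted part of the output space. Loss-augmented prediction then searches only this small neighborhood instead of the exponentially large full space.

    # Hedged sketch of DecL-style learning: inference restricted to a neighborhood
    # of the gold labeling. The model and all names are illustrative assumptions.
    import itertools
    import numpy as np

    def joint_features(x, y):
        """Toy joint feature map: an emission score plus a count of equal neighbors."""
        emit = sum(xi * (1 if yi == 1 else -1) for xi, yi in zip(x, y))
        trans = sum(1 for a, b in zip(y, y[1:]) if a == b)
        return np.array([emit, trans], dtype=float)

    def neighborhood(gold, k):
        """All binary labelings within Hamming distance k of the gold labeling."""
        n = len(gold)
        for flips in range(k + 1):
            for positions in itertools.combinations(range(n), flips):
                y = list(gold)
                for p in positions:
                    y[p] = 1 - y[p]
                yield tuple(y)

    def decl_perceptron(data, k=2, epochs=5):
        """Perceptron updates with loss-augmented inference over the neighborhood only."""
        w = np.zeros(2)
        for _ in range(epochs):
            for x, gold in data:
                pred = max(neighborhood(gold, k),
                           key=lambda y: w @ joint_features(x, y)
                                         + sum(a != b for a, b in zip(y, gold)))
                if pred != gold:
                    w += joint_features(x, gold) - joint_features(x, pred)
        return w

    data = [((0.5, -1.2, 0.3), (1, 0, 1)), ((-0.4, 0.9, 1.1), (0, 1, 1))]
    print(decl_perceptron(data))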

    Approximating Minimum Independent Dominating Sets in Wireless Networks

    We present the first polynomial-time approximation scheme (PTAS) for the Minimum Independent Dominating Set problem in graphs of polynomially bounded growth. Graphs of bounded growth are used to characterize wireless communication networks, and this class of graphs includes many models known from the literature, e.g. (Quasi) Unit Disk Graphs. An independent dominating set is a dominating set in a graph that is also independent. It thus combines the advantages of both structures, and many applications rely on these two structures, e.g. in the area of wireless ad hoc networks. The presented approach yields a robust algorithm, that is, the algorithm accepts any undirected graph as input and returns a (1+ε)-approximate minimum dominating set, or a certificate showing that the input graph does not reflect a wireless network.
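
    As a small illustration of the objects involved, the sketch below builds a random unit disk graph and extracts a maximal independent set greedily; any maximal independent set is automatically an independent dominating set, so this yields a feasible (though not necessarily near-optimal) solution. The graph construction, degree-ordering heuristic, and function names are assumptions for illustration; the PTAS itself, which obtains the (1+ε) guarantee by exploiting bounded growth, is more involved and not reproduced here.

    # Greedy maximal independent set = independent dominating set (illustrative only).
    import math
    import random

    def unit_disk_graph(points, radius=1.0):
        """Adjacency sets: two points are adjacent iff their Euclidean distance is <= radius."""
        adj = {i: set() for i in range(len(points))}
        for i, p in enumerate(points):
            for j, q in enumerate(points):
                if i < j and math.dist(p, q) <= radius:
                    adj[i].add(j)
                    adj[j].add(i)
        return adj

    def greedy_independent_dominating_set(adj):
        """Pick every vertex that is still uncovered; the result is independent and dominating."""
        chosen, covered = set(), set()
        for v in sorted(adj, key=lambda u: len(adj[u])):   # low-degree-first heuristic
            if v not in covered:
                chosen.add(v)
                covered.add(v)
                covered |= adj[v]
        return chosen

    random.seed(0)
    points = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(40)]
    graph = unit_disk_graph(points)
    print("independent dominating set size:", len(greedy_independent_dominating_set(graph)))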

    Learning parametric dictionaries for graph signals

    In sparse signal representation, the choice of a dictionary often involves a tradeoff between two desirable properties: the ability to adapt to specific signal data and a fast implementation of the dictionary. To sparsely represent signals residing on weighted graphs, an additional design challenge is to incorporate the intrinsic geometric structure of the irregular data domain into the atoms of the dictionary. In this work, we propose a parametric dictionary learning algorithm to design data-adapted, structured dictionaries that sparsely represent graph signals. In particular, we model graph signals as combinations of overlapping local patterns. We impose the constraint that each dictionary is a concatenation of subdictionaries, with each subdictionary being a polynomial of the graph Laplacian matrix, representing a single pattern translated to different areas of the graph. The learning algorithm adapts the patterns to a training set of graph signals. Experimental results on both synthetic and real datasets demonstrate that the dictionaries learned by the proposed algorithm are competitive with and often better than unstructured dictionaries learned by state-of-the-art numerical learning algorithms in terms of sparse approximation of graph signals. In contrast to the unstructured dictionaries, however, the dictionaries learned by the proposed algorithm feature localized atoms and can be implemented in a computationally efficient manner in signal processing tasks such as compression, denoising, and classification.
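
    A minimal sketch of the structural constraint described above, assuming a small hand-picked graph and fixed polynomial coefficients: each subdictionary is a polynomial of the graph Laplacian, and the dictionary is their concatenation, so every atom is a localized kernel centered at one vertex. The paper learns the coefficients from training signals and uses sparse coding; a plain least-squares fit stands in here for illustration.

    # Polynomial-of-Laplacian subdictionaries (coefficients assumed, not learned).
    import numpy as np

    def laplacian(adjacency):
        """Combinatorial graph Laplacian L = D - W."""
        return np.diag(adjacency.sum(axis=1)) - adjacency

    def polynomial_subdictionary(L, coeffs):
        """D_s = sum_k alpha_k L^k; its columns are atoms localized around each vertex."""
        D = np.zeros_like(L, dtype=float)
        power = np.eye(L.shape[0])
        for alpha in coeffs:
            D += alpha * power
            power = power @ L
        return D

    # Small example graph: a 4-cycle.
    W = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    L = laplacian(W)

    # Dictionary = concatenation of two polynomial subdictionaries.
    D = np.hstack([polynomial_subdictionary(L, [1.0, -0.5]),
                   polynomial_subdictionary(L, [0.2, 0.3, -0.1])])

    signal = np.array([1.0, 0.0, -1.0, 0.0])
    coefficients, *_ = np.linalg.lstsq(D, signal, rcond=None)  # least-squares stand-in for sparse coding
    print(D.shape, np.round(coefficients, 3))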