
    Systems of Linear Equations over $\mathbb{F}_2$ and Problems Parameterized Above Average

    In the problem Max Lin, we are given a system $Az=b$ of $m$ linear equations with $n$ variables over $\mathbb{F}_2$ in which each equation is assigned a positive weight, and we wish to find an assignment of values to the variables that maximizes the excess, which is the total weight of satisfied equations minus the total weight of falsified equations. Using an algebraic approach, we obtain a lower bound for the maximum excess. Max Lin Above Average (Max Lin AA) is a parameterized version of Max Lin introduced by Mahajan et al. (Proc. IWPEC'06 and J. Comput. Syst. Sci. 75, 2009). In Max Lin AA all weights are integral and we are to decide whether the maximum excess is at least $k$, where $k$ is the parameter. It is not hard to see that we may assume that no two equations in $Az=b$ have the same left-hand side and $n={\rm rank}\,A$. Using our maximum excess results, we prove that, under these assumptions, Max Lin AA is fixed-parameter tractable for a wide special case: $m\le 2^{p(n)}$ for an arbitrary fixed function $p(n)=o(n)$. Max $r$-Lin AA is a special case of Max Lin AA in which each equation has at most $r$ variables. In Max Exact $r$-SAT AA we are given a multiset of $m$ clauses on $n$ variables such that each clause has $r$ variables, and asked whether there is a truth assignment to the $n$ variables that satisfies at least $(1-2^{-r})m + k2^{-r}$ clauses. Using our maximum excess results, we prove that for each fixed $r\ge 2$, Max $r$-Lin AA and Max Exact $r$-SAT AA can be solved in time $2^{O(k \log k)}+m^{O(1)}$. This improves on the $2^{O(k^2)}+m^{O(1)}$-time algorithms for the two problems obtained by Gutin et al. (IWPEC 2009) and Alon et al. (SODA 2010), respectively.
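
    The excess of a concrete assignment is simple to evaluate directly from the definition above. The following is a minimal sketch in Python (the function name and the two-equation example are illustrative, not taken from the paper):

        # Evaluate the excess of an assignment z for a weighted system Az = b over F_2:
        # total weight of satisfied equations minus total weight of falsified ones.
        def excess(A, b, w, z):
            total = 0
            for row, rhs, weight in zip(A, b, w):
                lhs = sum(a * x for a, x in zip(row, z)) % 2  # evaluate the row over F_2
                total += weight if lhs == rhs else -weight
            return total

        # Two equations in two variables: x1 + x2 = 0 (weight 2) and x1 = 1 (weight 3).
        A, b, w = [[1, 1], [1, 0]], [0, 1], [2, 3]
        print(excess(A, b, w, [1, 1]))  # both equations satisfied: excess = 2 + 3 = 5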

    Parameterized Study of the Test Cover Problem

    We carry out a systematic study of a natural covering problem, used for identification across several areas, in the realm of parameterized complexity. In the {\sc Test Cover} problem we are given a set $[n]=\{1,\ldots,n\}$ of items together with a collection, ${\cal T}$, of distinct subsets of these items called tests. We assume that ${\cal T}$ is a test cover, i.e., for each pair of items there is a test in ${\cal T}$ containing exactly one of these items. The objective is to find a minimum size subcollection of ${\cal T}$ which is still a test cover. The generic parameterized version of {\sc Test Cover} is denoted by $p(k,n,|{\cal T}|)$-{\sc Test Cover}. Here, we are given $([n],{\cal T})$ and a positive integer parameter $k$ as input, and the objective is to decide whether there is a test cover of size at most $p(k,n,|{\cal T}|)$. We study four parameterizations of {\sc Test Cover} and obtain the following: (a) $k$-{\sc Test Cover} and $(n-k)$-{\sc Test Cover} are fixed-parameter tractable (FPT). (b) $(|{\cal T}|-k)$-{\sc Test Cover} and $(\log n+k)$-{\sc Test Cover} are W[1]-hard. Thus, it is unlikely that these problems are FPT.
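
    The test cover condition itself can be checked by brute force, which may help make the definition concrete. A minimal sketch (names and the example instance are illustrative, not from the paper):

        # Check whether a collection of tests is a test cover of {1, ..., n}:
        # every pair of items must be separated by some test containing exactly one of them.
        from itertools import combinations

        def is_test_cover(n, tests):
            for i, j in combinations(range(1, n + 1), 2):
                if not any((i in t) != (j in t) for t in tests):
                    return False  # no test separates items i and j
            return True

        # Three tests that separate every pair of the four items.
        print(is_test_cover(4, [{1, 2}, {1, 3}, {2, 3}]))  # True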

    Kernels for Below-Upper-Bound Parameterizations of the Hitting Set and Directed Dominating Set Problems

    In the {\sc Hitting Set} problem, we are given a collection ${\cal F}$ of subsets of a ground set $V$ and an integer $p$, and asked whether $V$ has a $p$-element subset that intersects each set in ${\cal F}$. We consider two parameterizations of {\sc Hitting Set} below tight upper bounds: $p=m-k$ and $p=n-k$. In both cases $k$ is the parameter. We prove that the first parameterization is fixed-parameter tractable, but has no polynomial kernel unless coNP $\subseteq$ NP/poly. The second parameterization is W[1]-complete, but the introduction of an additional parameter, the degeneracy of the hypergraph $H=(V,{\cal F})$, makes the problem not only fixed-parameter tractable, but also one with a linear kernel. Here the degeneracy of $H=(V,{\cal F})$ is the minimum integer $d$ such that for each $X\subset V$ the hypergraph with vertex set $V\setminus X$ and edge set containing all edges of ${\cal F}$ without vertices in $X$ has a vertex of degree at most $d$. In {\sc Nonblocker} ({\sc Directed Nonblocker}), we are given an undirected graph (a directed graph) $G$ on $n$ vertices and an integer $k$, and asked whether $G$ has a set $X$ of $n-k$ vertices such that for each vertex $y\not\in X$ there is an edge (arc) from a vertex in $X$ to $y$. {\sc Nonblocker} can be viewed as a special case of {\sc Directed Nonblocker} (replace an undirected graph by a symmetric digraph). Dehne et al. (Proc. SOFSEM 2006) proved that {\sc Nonblocker} has a linear-order kernel. We obtain a linear-order kernel for {\sc Directed Nonblocker}.
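
    Both solution concepts can be verified by brute force on small instances; the sketch below is purely illustrative (function names and example instances are assumptions, not from the paper):

        # Brute-force checks for the two problems above.
        from itertools import combinations

        def has_hitting_set(V, F, p):
            # Does V contain a p-element subset intersecting every set in F?
            return any(all(set(S) & f for f in F) for S in combinations(sorted(V), p))

        def is_directed_nonblocker(arcs, vertices, X):
            # Every vertex outside X must receive an arc from some vertex in X.
            return all(any((u, y) in arcs for u in X) for y in vertices - X)

        V, F = {1, 2, 3, 4}, [{1, 2}, {2, 3}, {3, 4}]
        print(has_hitting_set(V, F, 2))  # True, e.g. {2, 3} hits all three sets
        print(is_directed_nonblocker({(1, 2), (1, 3), (2, 4)}, V, {1, 2}))  # True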

    A Very Brief Introduction to Machine Learning With Applications to Communication Systems

    Given the unprecedented availability of data and computing resources, there is widespread renewed interest in applying data-driven machine learning methods to problems for which the development of conventional engineering solutions is challenged by modelling or algorithmic deficiencies. This tutorial-style paper starts by addressing the questions of why and when such techniques can be useful. It then provides a high-level introduction to the basics of supervised and unsupervised learning. For both supervised and unsupervised learning, illustrative applications to communication networks are discussed, distinguishing tasks carried out at the edge and at the cloud segments of the network, at different layers of the protocol stack.

    Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal

    The Odd Cycle Transversal problem (OCT) asks whether a given graph can be made bipartite by deleting at most $k$ of its vertices. In a breakthrough result, Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an $O(4^k kmn)$ time algorithm for it, the first algorithm with polynomial runtime of uniform degree for every fixed $k$. It is known that this implies a polynomial-time compression algorithm that turns OCT instances into equivalent instances of size at most $O(4^k)$, a so-called kernelization. Since then the existence of a polynomial kernel for OCT, i.e., a kernelization with size bounded polynomially in $k$, has turned into one of the main open questions in the study of kernelization. This work provides the first (randomized) polynomial kernelization for OCT. We introduce a novel kernelization approach based on matroid theory, where we encode all relevant information about a problem instance into a matroid with a representation of size polynomial in $k$. For OCT, the matroid is built to allow us to simulate the computation of the iterative compression step of the algorithm of Reed, Smith, and Vetta, applied (for only one round) to an approximate odd cycle transversal which it is aiming to shrink to size $k$. The process is randomized with one-sided error exponentially small in $k$: the result can contain false positives but no false negatives, and the size guarantee is cubic in the size of the approximate solution. Combined with an $O(\sqrt{\log n})$-approximation (Agarwal et al., STOC 2005), we get a reduction of the instance to size $O(k^{4.5})$, implying a randomized polynomial kernelization.
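
    Whether a given vertex set is an odd cycle transversal, i.e. whether deleting it leaves a bipartite graph, can be verified with a simple 2-colouring. The sketch below only illustrates this check and is not the paper's kernelization (names and the example instance are assumptions):

        # Verify that S is an odd cycle transversal of G, i.e. that G - S is bipartite.
        from collections import deque

        def is_oct(adj, S):
            colour = {}
            for start in adj:
                if start in S or start in colour:
                    continue
                colour[start] = 0
                queue = deque([start])
                while queue:  # BFS 2-colouring of G - S
                    u = queue.popleft()
                    for v in adj[u]:
                        if v in S:
                            continue
                        if v not in colour:
                            colour[v] = 1 - colour[u]
                            queue.append(v)
                        elif colour[v] == colour[u]:
                            return False  # an odd cycle survives the deletion
            return True

        # A triangle is not bipartite, but deleting any one vertex makes it so.
        triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
        print(is_oct(triangle, set()), is_oct(triangle, {3}))  # False True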