17 research outputs found

    New complexity results for the k-covers problem

    Get PDF
    The k-covers problem (kCP) asks us to compute a minimum-cardinality set of strings of given length k > 1 that covers a given string. It was shown in a recent paper, by reduction from 3-SAT, that the k-covers problem is NP-complete. In this paper we introduce a new problem, which we call the k-Bounded Relaxed Vertex Cover Problem (RVCPk), and show that it is equivalent to k-Bounded Set Cover (SCPk). We show further that kCP is a special case of RVCPk restricted to certain classes Gx,k of graphs that represent all strings x. Thus a minimum k-cover can be approximated to within a factor of k in polynomial time. We discuss approximate solutions of kCP, and we state a number of conjectures and open problems related to kCP and Gx,k.
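As a concrete illustration of the problem (not the paper's factor-k algorithm), a simple greedy set-cover heuristic for kCP can be sketched in a few lines of Python; the function name and representation are our own:

```python
def k_cover_greedy(x, k):
    """Greedy heuristic for the k-covers problem: repeatedly pick the
    length-k substring of x whose occurrences cover the most positions
    not yet covered. Returns a (not necessarily minimum) k-cover."""
    n = len(x)
    candidates = {x[i:i + k] for i in range(n - k + 1)}
    # positions of x covered by each candidate (over all its occurrences)
    cover = {c: set() for c in candidates}
    for i in range(n - k + 1):
        cover[x[i:i + k]].update(range(i, i + k))
    uncovered = set(range(n))
    solution = []
    while uncovered:
        best = max(candidates, key=lambda c: len(cover[c] & uncovered))
        solution.append(best)
        uncovered -= cover[best]
    return solution
```

Greedy set cover only guarantees a logarithmic approximation factor in general; the paper's point is that the graph-theoretic view yields the stronger factor-k guarantee.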

    Fast and Deterministic Approximations for k-Cut

    Get PDF
    In an undirected graph, a k-cut is a set of edges whose removal breaks the graph into at least k connected components. The minimum weight k-cut can be computed in n^O(k) time, but when k is treated as part of the input, computing the minimum weight k-cut is NP-hard [Goldschmidt and Hochbaum, 1994]. For poly(m,n,k)-time algorithms, the best possible approximation factor is essentially 2 under the small set expansion hypothesis [Manurangsi, 2017]. Saran and Vazirani [1995] showed that a (2 - 2/k)-approximately minimum weight k-cut can be computed via O(k) minimum cuts, which implies an O~(km) randomized running time via the nearly linear time randomized min-cut algorithm of Karger [2000]. Nagamochi and Kamidoi [2007] showed that a (2 - 2/k)-approximately minimum weight k-cut can be computed deterministically in O(mn + n^2 log n) time. These results prompt two basic questions. The first concerns the role of randomization: is there a deterministic algorithm for 2-approximate k-cuts matching the randomized running time of O~(km)? The second question qualitatively compares minimum cut to 2-approximate minimum k-cut: can 2-approximate k-cuts be computed as fast as the minimum cut, in O~(m) randomized time? We give a deterministic approximation algorithm that computes (2 + eps)-approximate minimum k-cuts in O(m log^3 n / eps^2) time, via a (1 + eps)-approximation for an LP relaxation of k-cut.
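The split-based idea behind the Saran–Vazirani bound can be sketched for tiny graphs as follows. This is a simplified illustration in the spirit of their approach, not their actual procedure or the paper's algorithm: it uses brute-force minimum cuts (exponential time, small graphs only), and all names are ours:

```python
from itertools import combinations

def min_cut(nodes, edges):
    """Brute-force global minimum cut of a small weighted graph.
    edges: list of (u, v, weight). Returns (cut weight, one side)."""
    nodes = list(nodes)
    best = None
    for r in range(1, len(nodes) // 2 + 1):
        for side in combinations(nodes, r):
            A = set(side)
            w = sum(wt for u, v, wt in edges if (u in A) != (v in A))
            if best is None or w < best[0]:
                best = (w, A)
    return best

def approx_k_cut(nodes, edges, k):
    """Split-based heuristic for minimum k-cut: repeatedly apply the
    cheapest minimum cut inside some component until k components
    remain. Returns (components, total cut weight)."""
    comps = [set(nodes)]
    total = 0
    while len(comps) < k:
        best = None
        for i, C in enumerate(comps):
            if len(C) < 2:
                continue
            inside = [e for e in edges if e[0] in C and e[1] in C]
            w, A = min_cut(C, inside)
            if best is None or w < best[0]:
                best = (w, i, A)
        w, i, A = best
        C = comps.pop(i)
        comps += [A, C - A]
        total += w
    return comps, total
```

The algorithmic content of the paper lies in replacing the minimum-cut subroutine and the iteration scheme with an LP-based deterministic computation; the sketch only shows the splitting structure.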

    Efficient Progressive Minimum k-Core Search

    Get PDF
    As one of the most representative cohesive subgraph models, the k-core model has recently received significant attention in the literature. In this paper, we investigate the problem of minimum k-core search: given a graph G, an integer k and a set of query vertices Q, we aim to find the smallest k-core subgraph containing every query vertex q ∈ Q. It has been shown that this problem is NP-hard with a huge search space, and it is very challenging to find the optimal solution. There are several heuristic algorithms for this problem, but they rely on simple scoring functions and provide no guarantee on the size of the resulting subgraph compared with the optimal solution. Our empirical study also indicates that the size of their resulting subgraphs may be large in practice. In this paper, we develop an effective and efficient progressive algorithm, namely PSA, to provide a good trade-off between the quality of the result and the search time. Novel lower and upper bound techniques for minimum k-core search are designed. Our extensive experiments on 12 real-life graphs demonstrate the effectiveness and efficiency of the new techniques.
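For reference, the unique maximal k-core of a graph, the natural starting point and upper bound for any minimum k-core search, is computed by the standard peeling procedure. A minimal sketch (representation and names are ours, not from the paper):

```python
from collections import deque

def k_core(adj, k):
    """Maximal k-core of an undirected graph, by repeatedly peeling
    vertices of degree < k. adj: dict vertex -> set of neighbours.
    Returns the set of vertices that survive."""
    deg = {v: len(ns) for v, ns in adj.items()}
    queue = deque(v for v, d in deg.items() if d < k)
    removed = set()
    while queue:
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                if deg[u] < k:
                    queue.append(u)
    return {v for v in adj if v not in removed}
```

The hard part addressed by the paper is the opposite direction: among the (possibly exponentially many) k-core subgraphs inside this maximal one, finding a smallest subgraph that still contains all query vertices.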

    Minimum Scan Cover and Variants - Theory and Experiments

    Get PDF
    We consider a spectrum of geometric optimization problems motivated by contexts such as satellite communication and astrophysics. In the problem Minimum Scan Cover with Angular Costs, we are given a graph G that is embedded in Euclidean space. The edges of G need to be scanned, i.e., probed from both of their vertices. In order to scan an edge, its two vertices need to face each other; changing the heading of a vertex incurs some cost in terms of energy or rotation time that is proportional to the corresponding rotation angle. Our goal is to compute schedules that minimize the following objective functions: (i) in Minimum Makespan Scan Cover (MSC-MS), this is the time until all edges are scanned; (ii) in Minimum Total Energy Scan Cover (MSC-TE), the sum of all rotation angles; (iii) in Minimum Bottleneck Energy Scan Cover (MSC-BE), the maximum total rotation angle at one vertex. Previous theoretical work on MSC-MS revealed a close connection to graph coloring and the cut cover problem, leading to hardness and approximability results. In this paper, we present polynomial-time algorithms for 1D instances of MSC-TE and MSC-BE, but NP-hardness proofs for bipartite 2D instances. For bipartite graphs in 2D, we also give 2-approximation algorithms for both MSC-TE and MSC-BE. Most importantly, we provide a comprehensive study of practical methods for all three problems. We compare three different mixed-integer programming and two constraint programming approaches, and show how to compute provably optimal solutions for geometric instances with up to 300 edges. Additionally, we compare the performance of different meta-heuristics for even larger instances.
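To make the MSC-TE objective concrete, here is a small evaluator, our own sketch rather than anything from the paper, that sums rotation angles for a given scan schedule; as a modelling simplification it charges no cost for a vertex's first required heading:

```python
import math

def rotation(a, b):
    """Smallest angle (radians) needed to turn from heading a to heading b."""
    d = (b - a) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def total_energy(points, schedule):
    """MSC-TE objective for a scan schedule: when edge (u, v) is scanned,
    both endpoints must face each other, and each heading change costs
    its rotation angle. points: dict vertex -> (x, y);
    schedule: ordered list of edges (u, v)."""
    heading = {}   # current heading of each vertex (radians)
    total = 0.0
    for u, v in schedule:
        for a, b in ((u, v), (v, u)):  # a must face b
            ax, ay = points[a]
            bx, by = points[b]
            h = math.atan2(by - ay, bx - ax)
            if a in heading:
                total += rotation(heading[a], h)
            heading[a] = h
    return total
```

For instance, with three collinear points and a middle vertex that must scan first left, then right, the middle vertex turns by pi and everything else is free under this convention.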

    Learning-Based Approaches for Graph Problems: A Survey

    Full text link
    Over the years, many graph problems, in particular NP-complete ones, have been studied by a wide range of researchers. Some famous examples include graph colouring, the travelling salesman problem and subgraph isomorphism. Most of these problems are typically addressed by exact algorithms, approximation algorithms and heuristics; each of these methods, however, has drawbacks. Recent studies have employed learning-based frameworks such as machine learning techniques to solve these problems, given that such techniques are useful for discovering new patterns in structured data that can be represented using graphs. This research direction has attracted a considerable amount of attention. In this survey, we provide a systematic review, focusing on classic graph problems for which learning-based approaches have been proposed. We give an overview of each framework, and provide analyses based on its design and performance. Some potential research questions are also suggested. Ultimately, this survey gives a clearer insight into the field and can serve as a stepping stone for the research community studying these problems.

    Hitting Subgraphs in Sparse Graphs and Geometric Intersection Graphs

    Full text link
    We investigate a fundamental vertex-deletion problem called (Induced) Subgraph Hitting: given a graph G and a set F of forbidden graphs, the goal is to compute a minimum-sized set S of vertices of G such that G - S does not contain any graph in F as an (induced) subgraph. This is a generic problem that encompasses many well-known problems that were extensively studied on their own, particularly (but not only) from the perspectives of both approximation and parameterization. We focus on the design of efficient approximation schemes, i.e., with running time f(ε, F) · n^O(1), which are also of significant interest to both communities. Technically, our main contribution is a linear-time approximation-preserving reduction from (Induced) Subgraph Hitting on any graph class G of bounded expansion to the same problem on bounded degree graphs within G. This yields a novel algorithmic technique to design (efficient) approximation schemes for the problem on very broad graph classes, well beyond the state of the art. Specifically, applying this reduction, we derive approximation schemes with (almost) linear running time for the problem on any graph class that has strongly sublinear separators and on many important classes of geometric intersection graphs (such as fat-object graphs, pseudo-disk graphs, etc.). Our proofs introduce novel concepts and combinatorial observations that may be of independent interest (and, we believe, will find other uses) in studies of approximation algorithms, parameterized complexity, sparse graph classes, and geometric intersection graphs. As a byproduct, we also obtain the first robust algorithm for k-Subgraph Isomorphism on intersection graphs of fat objects and pseudo-disks, with running time f(k) · n log n + O(m).
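As a toy instance of Subgraph Hitting, take F = {K3}, i.e. hitting all triangles. An exact brute-force solver for small graphs, purely illustrative and unrelated to the paper's approximation schemes, looks like this:

```python
from itertools import combinations

def hit_triangles(n, edges):
    """Minimum vertex set meeting every triangle of a graph on vertices
    0..n-1 (Subgraph Hitting with F = {K3}), by brute force over
    subset sizes. edges: list of (u, v) pairs."""
    E = set(map(frozenset, edges))
    triangles = [t for t in combinations(range(n), 3)
                 if all(frozenset(p) in E for p in combinations(t, 2))]
    for r in range(n + 1):           # smallest hitting set first
        for S in combinations(range(n), r):
            Sset = set(S)
            if all(Sset & set(t) for t in triangles):
                return Sset
```

The brute force is exponential in n; the whole point of the paper is achieving (1 + ε)-approximations in f(ε, F) · n^O(1) time on structured graph classes.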

    Heuristics for Sparsest Cut Approximations in Network Flow Applications

    Get PDF
    The Maximum Concurrent Flow Problem (MCFP) is a polynomially bounded problem that has been used over the years in a variety of applications. It is sometimes used to attempt to find the Sparsest Cut, an NP-hard problem, and at other times, in its hierarchical formulation (the HMCFP), to find communities in Social Network Analysis (SNA). Though it is polynomially bounded, the MCFP quickly grows in space utilization, rendering it useful on only small problems. When it was defined, only a few hundred nodes could be solved, and a few decades later, graphs of one to two thousand nodes can still be too much for modern commodity hardware to handle. This dissertation covers three heuristic approaches to the MCFP that run significantly faster in practice than the LP formulation with far less memory utilization. The first two approaches are based on the Maximum Adjacency Search (MAS) and apply to both the MCFP and the HMCFP used for community detection. We compare the three approaches to the LP formulation in terms of accuracy, runtime, and memory utilization on several classes of synthetic graphs representing potential real-world applications. We find that the heuristics are often correct, and run using orders of magnitude less memory and time.
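The Maximum Adjacency Search underlying the first two heuristics can be sketched generically as follows; this is the textbook MAS ordering with names of our choosing, and the dissertation's heuristics build further machinery on top of it:

```python
def max_adjacency_search(adj, start):
    """Maximum Adjacency Search (MAS): starting from `start`, repeatedly
    visit the unvisited vertex with the largest total edge weight into
    the set already visited. adj: dict v -> {neighbour: weight}.
    Returns the visit order."""
    order = [start]
    # attachment weight of every unvisited vertex to the visited set
    attach = {v: adj[start].get(v, 0) for v in adj if v != start}
    while attach:
        v = max(attach, key=attach.get)
        order.append(v)
        del attach[v]
        for u, w in adj[v].items():
            if u in attach:
                attach[u] += w
    return order
```

MAS orderings are best known from the Stoer–Wagner minimum-cut algorithm, where the last two vertices of the ordering define a minimum s-t cut; the same ordering information is what the heuristics exploit.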

    Interference Modeling And Control In Wireless Networks

    Get PDF
    With the successful commercialization of the IEEE 802.11 standard, wireless networks have become a tightly knit part of our daily life. As wireless networks are increasingly applied to real-time and mission-critical tasks, how to ensure real-time, reliable data delivery emerges as an important problem. However, wireless communication is subject to various dynamics and uncertainties due to the broadcast nature of wireless signals. In particular, co-channel interference not only reduces the reliability and throughput of wireless networks, it also increases the variability and uncertainty in data communication [64, 80, 77]. A basis of interference control is the interference model, which predicts whether a set of concurrent transmissions may interfere with one another. Two commonly used models, the SINR model and the radio-K model, are thoroughly studied in our work. To address the limitations of those models, we propose the physical-ratio-K (PRK) interference model as a reliability-oriented instantiation of the ratio-K model, where the link-specific choice of K adapts to network and environmental conditions as well as application QoS requirements to ensure a certain minimum reliability for every link. On the other hand, the interference among transmissions limits the number of concurrent transmissions. We formulate the concept of interference budget that, given a set of scheduled transmissions in a time slot, characterizes the additional interference power that can be tolerated by all the receivers without violating the application requirement on link reliability. We propose the scheduling algorithm iOrder, which optimizes link ordering by considering both interference budget and queue length in scheduling. Through both simulation and real-world experiments, we observe that optimizing link ordering can improve the performance of existing algorithms by a significant margin.
    Based on the strong preliminary research results on interference modeling and control, we will extend our method to distributed protocol designs. One line of future work will focus on implementing the PRK model in distributed protocols. We will also explore the benefits of using multiple channels in interference control.
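The basic ratio-K interference predicate discussed above can be sketched as follows. This is our own minimal formulation; the PRK model additionally adapts K per link to conditions and QoS requirements, which is not modelled here:

```python
import math

def euclidean(p, q):
    """Euclidean distance between two points in the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def interfered_ratio_k(sender, receiver, other_senders, K, dist=euclidean):
    """Ratio-K (radio-K) interference model: a reception is predicted to
    fail iff some other concurrent sender lies within K times the
    sender-receiver distance of the receiver."""
    d = dist(sender, receiver)
    return any(dist(o, receiver) <= K * d for o in other_senders)
```

A scheduler using this predicate would admit a new transmission into a time slot only if it neither interferes with, nor is interfered by, the transmissions already scheduled; the interference-budget formulation refines this binary check into a power margin at each receiver.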

    Partition and Cover Problems on Graphs: Complexity Analysis and Algorithm Design

    Get PDF
    This dissertation studies four combinatorial optimization problems on graphs: (1) the Minimum Block Transfer problem (MBT for short), (2) the Maximum k-Path Vertex Cover problem (MaxPkVC for short), (3) the k-Path Vertex Cover Reconfiguration problem (k-PVCR for short), and (4) the Minimum (Maximum) Weighted Path Cover problem (MinPC (MaxPC) for short). The dissertation provides hardness results, such as NP-hardness and inapproximability, and polynomial-time algorithms for each problem. In Chapter 2, we study MBT. Let G = (V, A) be a simple directed acyclic graph, i.e., G does not include any cycles, multiple arcs, or self-loops, with a node set V and an arc set A. Given a DAG G and a block size B, the objective of MBT is to find a partition of its node set that satisfies the following two conditions: (i) each element (called a block) of the partition has size at most B, and (ii) the maximum number of external arcs among directed paths from the roots to the leaves is minimized. An external arc is an arc connecting two distinct blocks, so this number equals the number of block transfers. The height of a DAG is defined as the length of the longest directed path from its roots to its leaves. Consider the two-level I/O model for data transfers between an external memory with a large space and an internal memory with a limited space. Assume that the external memory is divided into fixed contiguous blocks of size B, and that one query or modification transfers one block of B objects from the external memory to the internal one. With our MBT problem, we can then find an efficient way to store data in the external memory such that the maximum number of data transfers between the external memory and the internal one is minimized. We first revisit the previous, naive bottom-up packing algorithm for MBT and show that its approximation ratio is 2 if B = 2.
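The MBT objective just described can be evaluated directly for any candidate partition; a minimal sketch (our own, with hypothetical names, intended only to make the definition concrete):

```python
def max_block_transfers(children, roots, block_of):
    """Maximum number of external arcs (block transfers) over all
    root-to-leaf paths of a DAG, for a given partition into blocks.
    children: dict node -> list of children (empty list at leaves);
    block_of: dict node -> block id."""
    def cost(v):
        kids = children.get(v, [])
        if not kids:
            return 0
        # an arc (v, c) is external iff v and c lie in distinct blocks
        return max(cost(c) + (block_of[v] != block_of[c]) for c in kids)
    return max(cost(r) for r in roots)
```

MBT asks for the partition into blocks of size at most B minimizing this quantity; the evaluator above is the easy direction, while finding the optimal partition is NP-hard as the abstract goes on to show.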
Additionally, we show that the approximation ratio of that algorithm is at least B as B grows. Next, we explicitly show that MBT is NP-hard even if each block size B is at most two, the height of DAGs is three, and the maximum indegree and outdegree of a node are two and three, respectively. Our proof of the NP-hardness also shows that, if B = 2 and P ≠ NP, MBT does not admit any polynomial-time (3/2 - ε)-approximation ((4/3 - ε)-approximation, resp.) algorithm for any ε > 0 even if the input is restricted to DAGs of height at most five (at least six, resp.). Fortunately, however, we can obtain a linear-time exact algorithm if the height of DAGs is bounded above by two. Also, for MBT with B = 2, we provide the following linear-time algorithms: a simple 2-approximation algorithm and improved (2 - ε)-approximation algorithms, where ε = 2/h and ε = 2/(h + 1) for the cases where the height h of the input DAG is even and odd, respectively. If h = 3, the last algorithm achieves a 3/2-approximation ratio, matching the inapproximability bound. In Chapter 3, we study MaxPkVC. Let G = (V, E) be a simple undirected graph, where V and E denote the set of vertices and the set of edges, respectively. A path of length k - 1 is called a k-path. If a k-path Pk contains a vertex v in a vertex set S, then we say that the vertex v or the set S covers Pk. Given a graph G and an integer s, the goal of MaxPkVC is to find a vertex subset S of size at most s such that the number of k-paths covered by S is maximized. Given a graph G, the MinPkVC problem, the minimization version of MaxPkVC, is to find a minimum vertex subset of G that covers all the k-paths of G. MinPkVC has received great attention since it was introduced in 2011, and it is known to have an application in maintaining the security of a network. MinVC, a classical and very famous problem in this field, seeks a minimum vertex subset covering all the 2-paths, i.e., the edges of the graph.
Also, its maximization version, MaxVC, is well studied. One can see that MaxPkVC is a generalization of MaxVC, since MaxVC is the special case of MaxPkVC with k = 2. MaxPkVC, for example, has an application when we would like to cover as many areas as possible with a restricted budget. First, we show that MaxP3VC (MaxP4VC, resp.) is NP-hard on split graphs (chordal graphs, resp.). Then, we show that MaxP3VC is in FPT with respect to the combined parameter s + tw, where s and tw are the prescribed size of the 3-path vertex cover and the treewidth parameter, respectively. Treewidth is a well-known graph parameter that measures the tree-likeness of a graph; see Chapter 3. Our algorithm runs in O((s + 1)^(2tw+4) · 4^tw · n) time, where |V| = n. In Chapter 4, we discuss k-PVCR. Let G = (V, E) be a simple graph. In a reconfiguration setting, two feasible solutions of a computational problem are given, along with a reconfiguration rule that describes an adjacency relation between solutions. A reconfiguration problem asks if one feasible solution can be transformed into the other via a sequence of adjacent feasible solutions, where each intermediate member is obtained from its predecessor by applying the given reconfiguration rule exactly once. Such a sequence is called a reconfiguration sequence, if it exists. For any fixed integer k ≥ 2, given two distinct k-path vertex covers I and J of a graph G and a single reconfiguration rule, the goal of k-PVCR is to determine if there is a reconfiguration sequence between I and J. For the reconfiguration rule, we consider the following three well-known rules: Token Sliding (TS), Token Jumping (TJ), and Token Addition or Removal (TAR). For the precise descriptions of each rule, refer to Chapter 4.
The reconfiguration variant of MinVC (called VCR) has been well studied; the goal of our study is to find the differences between VCR and k-PVCR, such as differences in computational complexity on graph subclasses, and to design polynomial-time algorithms. We can again see that k-PVCR is a generalization of VCR, since VCR is the special case of k-PVCR with k = 2. First, we confirm that several hardness results for VCR remain true for k-PVCR; we show the PSPACE-completeness of k-PVCR on general graphs under each rule TS, TJ, and TAR using a reduction from a variant of VCR. As our reduction preserves some nice graph properties, we claim that the hardness results for VCR on several graph classes (planar graphs, bounded bandwidth graphs, chordal graphs, bipartite graphs) can be converted into those for k-PVCR. Using another reduction, we moreover show that k-PVCR remains PSPACE-complete even on planar graphs of bounded bandwidth and maximum degree 3. On the other hand, we design polynomial-time algorithms for k-PVCR on trees (under each of TJ and TAR) and on paths and cycles (under each reconfiguration rule). Furthermore, on paths, our algorithm constructs a shortest reconfiguration sequence. In Chapter 5, we investigate MinPC (MaxPC), especially the (in)tractability of MinPC. Given a graph G = (V, E), a collection P of vertex-disjoint paths is called a path cover of G if every vertex v ∈ V is in exactly one path of P. The goal of the path cover problem (PC for short) is to find a path cover of G with the minimum number of paths. As a generalized variant of PC, we introduce MinPC (MaxPC) as follows: let U = {0, 1, ..., n-1} denote a set of path lengths. Given a graph G = (V, E) and a cost (profit) function f : U → R ∪ {+∞, -∞}, which assigns a cost (profit) to each path according to its length, find a path cover P of G such that the total cost (profit) of the paths in P is minimized (maximized). Let L be a subset of U. We denote the set of paths of length l ∈ L as PL.
In particular, we consider MinPC whose cost function is f(l) = 1 if l ∈ L and f(l) = 0 otherwise. This problem is denoted by MinPLPC and asks for a path cover minimizing the number of paths of length l ∈ L. We can also define the problem MaxPLPC with f(l) = l + 1 if l ∈ L, and f(l) = 0 otherwise. Note that several classical problems can be seen as special cases of MinPC or MaxPC. For example, the Hamiltonian Path Problem (seeking a single path visiting every vertex exactly once) and the Maximum Matching Problem are equivalent to MinP{n-1}PC and MaxP{1}PC, respectively. It is known that MinP{0}PC and MinP{0, 1}PC with the same cost function as ours can be solved in polynomial time. First, we show that MinP{0, 1, 2}PC is NP-hard on planar bipartite graphs with maximum degree three, by a reduction from Planar 3-SAT. Our reduction also shows that there exists no approximation algorithm for MinP{0, 1, 2}PC unless P = NP. As a positive result, we show that MinP{0,...,k}PC for any fixed integer k can be solved in polynomial time on graphs of bounded treewidth. Specifically, our algorithm runs in O(4^(2W) · W^(2W+2) · (k + 2)^(2W+2) · n) time, assuming we are given an n-vertex graph of width at most W together with its tree decomposition. Finally, a conclusion of this dissertation and open problems are given in Chapter 6. (Kyushu Institute of Technology doctoral dissertation; degree number: 情工博甲第355号; degree conferred March 25, 2021. Contents: 1 Introduction | 2 Minimum Block Transfer problem | 3 Maximum k-Path Vertex Cover problem | 4 k-Path Vertex Cover Reconfiguration problem | 5 Minimum (Maximum) Weighted Path Cover problem | 6 Conclusion and Open Problems.)
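For illustration, the MaxP3VC objective from Chapter 3 above, the number of 3-paths covered by a vertex set S, can be computed directly on small graphs. This is a sketch with names of our own choosing, not the dissertation's FPT algorithm:

```python
from itertools import combinations

def count_covered_3paths(adj, S):
    """Count the 3-paths (paths on 3 vertices) covered by vertex set S,
    i.e. the MaxP3VC objective. adj: dict v -> set of neighbours.
    Every 3-path has the form (a, mid, b) with a, b neighbours of mid."""
    covered = 0
    for mid in adj:
        for a, b in combinations(sorted(adj[mid]), 2):
            if S & {a, mid, b}:
                covered += 1
    return covered
```

MaxP3VC then asks for a set S of size at most s maximizing this count, which is already NP-hard on split graphs as shown in Chapter 3.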

    Proceedings of the 8th Cologne-Twente Workshop on Graphs and Combinatorial Optimization

    No full text
    The Cologne-Twente Workshop (CTW) on Graphs and Combinatorial Optimization started off as a series of workshops organized bi-annually by either Köln University or Twente University. As its importance grew over time, it re-centered its geographical focus by including northern Italy (CTW04 in Menaggio, on Lake Como, and CTW08 in Gargnano, on Lake Garda). This year, the eighth edition of CTW will be staged in France for the first time: more precisely in the heart of Paris, at the Conservatoire National des Arts et Métiers (CNAM), between 2nd and 4th June 2009, by a mixed organizing committee with members from LIX, Ecole Polytechnique and CEDRIC, CNAM.