
    Data Reduction for Graph Coloring Problems

    This paper studies the kernelization complexity of graph coloring problems with respect to certain structural parameterizations of the input instances. We are interested in how well polynomial-time data reduction can provably shrink instances of coloring problems, in terms of the chosen parameter. It is well known that deciding 3-colorability is already NP-complete, hence parameterizing by the requested number of colors is not fruitful. Instead, we pick up on a research thread initiated by Cai (DAM, 2003), who studied coloring problems parameterized by the modification distance of the input graph to a graph class on which coloring is polynomial-time solvable; for example, parameterizing by the number k of vertex deletions needed to make the graph chordal. We obtain various upper and lower bounds for kernels of such parameterizations of q-Coloring, complementing Cai's study of the time complexity with respect to these parameters. Our results show that the existence of polynomial kernels for q-Coloring parameterized by the vertex-deletion distance to a graph class F is strongly related to the existence of a function f(q) which bounds the number of vertices needed to preserve the NO-answer to an instance of q-List-Coloring on F.
    Comment: Author-accepted manuscript of the article that will appear in the FCT 2011 special issue of Information & Computation.
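
    As a concrete illustration of the q-List-Coloring instances whose NO-answers the f(q) bound talks about, the following brute-force sketch (written for this summary, not taken from the paper) checks list-colorability and exhibits a three-vertex NO-witness: a triangle whose lists all equal {1, 2}.

```python
# Minimal brute-force q-List-Coloring check (illustrative only): each
# vertex v may only receive a color from lists[v], and adjacent vertices
# must receive different colors.
from itertools import product

def list_colorable(vertices, edges, lists):
    """Try every assignment drawn from the lists. Exponential, but fine
    for the small NO-witnesses that a bound f(q) would guarantee."""
    for colors in product(*(lists[v] for v in vertices)):
        assign = dict(zip(vertices, colors))
        if all(assign[u] != assign[w] for u, w in edges):
            return True
    return False

# A triangle where every list is {1, 2} is a 3-vertex NO-witness:
print(list_colorable([0, 1, 2], [(0, 1), (1, 2), (0, 2)],
                     {0: [1, 2], 1: [1, 2], 2: [1, 2]}))  # False
```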

    Enhancing Program Soft Error Resilience through Algorithmic Approaches

    The rising count and shrinking feature size of transistors within modern computers are making them increasingly vulnerable to various types of soft faults. This problem is especially acute in high-performance computing (HPC) systems used for scientific computing, because these systems include many thousands of compute cores and nodes, all of which may be utilized in a single large-scale run. The increasing vulnerability of HPC applications to errors induced by soft faults is motivating extensive work on techniques to make these applications more resilient to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and tolerance techniques. Effective use of such techniques requires a detailed understanding of how a given application is affected by soft faults, to ensure that (i) efforts to improve application resilience are spent on the code regions most vulnerable to faults, (ii) the appropriate resilience technique is applied to each code region, and (iii) this understanding is obtained in an efficient manner. This thesis presents two tools: FaultTelescope, which helps application developers view routine and application vulnerability to soft errors, and ErrorSight, which helps perform modular fault-characteristics analysis for more complex applications. This thesis also illustrates how these tools can be used in the context of representative applications and kernels. In addition to providing actionable insights into application behavior, the tools automatically select the number of fault-injection experiments required to efficiently generate error profiles of an application, ensuring that the information is statistically well-grounded without performing unnecessary experiments.
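
    The idea of automatically selecting a statistically well-grounded number of fault-injection experiments can be made concrete with a standard concentration bound. The sketch below is a hypothetical illustration using Hoeffding's inequality; it is not FaultTelescope's or ErrorSight's actual procedure, and run_with_fault is a placeholder for a single injection run.

```python
# Hoeffding's inequality: to estimate an error probability within +/- eps
# at confidence 1 - delta, n >= ln(2/delta) / (2 * eps^2) injections suffice.
import math

def required_injections(eps=0.05, delta=0.05):
    """Smallest n with P(|estimate - truth| > eps) <= delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def estimate_vulnerability(run_with_fault, n):
    """run_with_fault() injects one fault and returns True on wrong output."""
    failures = sum(run_with_fault() for _ in range(n))
    return failures / n

print(required_injections())  # 738 runs for +/-5% error at 95% confidence
```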

    Computing Petaflops over Terabytes of Data: The Case of Genome-Wide Association Studies

    In many scientific and engineering applications, one has to solve not one but a sequence of instances of the same problem. Oftentimes, the problems in the sequence are linked in a way that allows intermediate results to be reused. A characteristic example of this class of applications is given by Genome-Wide Association Studies (GWAS), a widely used tool in computational biology. GWAS entails the solution of up to trillions ($10^{12}$) of correlated generalized least-squares problems, posing a daunting challenge: the performance of petaflops ($10^{15}$ floating-point operations) over terabytes of data. In this paper, we design an algorithm for performing GWAS on multi-core architectures. This is accomplished in three steps. First, we show how to exploit the relation among successive problems, thus reducing the overall computational complexity. Then, through an analysis of the required data transfers, we identify how to eliminate any overhead due to input/output operations. Finally, we study how to decompose the computation into tasks to be distributed among the available cores, to attain high performance and scalability. With our algorithm, a GWAS that currently requires the use of a supercomputer may now be performed in a matter of hours on a single multi-core node. The discussion centers around the methodology to develop the algorithm rather than the specific application. We believe the paper contributes valuable guidelines of general applicability for computational scientists on how to develop and optimize numerical algorithms.
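
    The first step, exploiting the relation among successive problems, can be illustrated with a minimal sketch: when the GLS problems in the sequence share one covariance matrix, its Cholesky factorization can be computed once and reused across the whole sequence. This is a simplified sketch of that general idea, not the paper's algorithm; the names gls_sequence, X_list, and omega are illustrative.

```python
# Solve beta_i = (X_i^T Omega^{-1} X_i)^{-1} X_i^T Omega^{-1} y for a
# sequence of design matrices X_i sharing the covariance matrix Omega,
# paying the O(n^3) factorization cost only once.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gls_sequence(X_list, y, omega):
    c = cho_factor(omega)            # Cholesky of Omega, computed once
    omega_inv_y = cho_solve(c, y)    # Omega^{-1} y, also reused
    betas = []
    for X in X_list:                 # e.g., one GLS problem per SNP in GWAS
        W = cho_solve(c, X)          # Omega^{-1} X_i via triangular solves
        betas.append(np.linalg.solve(X.T @ W, X.T @ omega_inv_y))
    return betas
```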

    Computational study on planar dominating set problem

    Recently, there has been significant theoretical progress towards fixed-parameter algorithms for the DOMINATING SET problem on planar graphs. It is known that the problem on a planar graph with n vertices and domination number k can be solved in $O(2^{O(\sqrt{k})} n)$ time using tree/branch-decomposition based algorithms. In this paper, we report computational results for the algorithm of Fomin and Thilikos, which uses the branch-decomposition based approach. The computational results show that the algorithm can solve the DOMINATING SET problem on large planar graphs in practical time and memory space for the class of graphs with small branchwidth. For the class of graphs with large branchwidth, the size of instances that can be solved by the algorithm in practice is limited to about one thousand edges, due to a memory space bottleneck. The practical performance of the algorithm coincides with the theoretical analysis. The results of this paper suggest that branch-decomposition based algorithms can be practical for some applications on planar graphs.
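
    The memory bottleneck can be seen with simple arithmetic. In the textbook encoding of dominating-set dynamic programming over a branch decomposition, each vertex of a middle set takes one of three states (in the dominating set, dominated, or not yet dominated), so a table for a middle set of width w holds up to $3^w$ entries. The sketch below is an illustration of that growth under this assumed encoding, not the paper's exact data layout.

```python
# Rough size of one DP table as a function of middle-set width:
# 3 states per vertex, so 3^width entries.
def table_gib(width, bytes_per_entry=8):
    return 3 ** width * bytes_per_entry / 2 ** 30

for w in (10, 15, 20, 25):
    print(f"width {w:2d}: {table_gib(w):12.4f} GiB")
# width 20 already needs ~26 GiB; width 25 needs ~6300 GiB.
```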

    Hitting forbidden minors: Approximation and Kernelization

    We study a general class of problems called F-deletion problems. In an F-deletion problem, we are asked whether a subset of at most $k$ vertices can be deleted from a graph $G$ such that the resulting graph does not contain as a minor any graph from the family F of forbidden minors. We obtain a number of algorithmic results on the F-deletion problem when F contains a planar graph. We give (1) a linear vertex kernel on graphs excluding the $t$-claw $K_{1,t}$, the star with $t$ leaves, as an induced subgraph, where $t$ is a fixed integer, and (2) an approximation algorithm achieving an approximation ratio of $O(\log^{3/2} \mathrm{OPT})$, where $\mathrm{OPT}$ is the size of an optimal solution, on general undirected graphs. Finally, we obtain polynomial kernels for the case when F contains the graph $\theta_c$ as a minor, for a fixed integer $c$. The graph $\theta_c$ consists of two vertices connected by $c$ parallel edges. Even though this may appear to be a very restricted class of problems, it already encompasses well-studied problems such as Vertex Cover, Feedback Vertex Set, and Diamond Hitting Set. The generic kernelization algorithm is based on a non-trivial application of protrusion techniques, previously used only for problems on topological graph classes.
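
    To make the two classic special cases concrete: with $\theta_1$ (a single edge), $\theta_c$-deletion asks for a vertex set meeting every edge (Vertex Cover), and with $\theta_2$ (two parallel edges, i.e. any cycle as a minor), it asks for a set meeting every cycle (Feedback Vertex Set). The brute-force sketch below illustrates only these two cases; it is not the paper's kernelization or approximation algorithm.

```python
# Brute-force theta_c-deletion for c in {1, 2} (illustrative only).
from itertools import combinations

def theta_c_free(vertices, edges, c):
    if c == 1:                         # theta_1 minor <=> some edge remains
        return not edges
    parent = {v: v for v in vertices}  # c = 2: theta_2 minor <=> some cycle;
    def find(v):                       # detect cycles with union-find
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru == rw:                   # edge closes a cycle
            return False
        parent[ru] = rw
    return True

def min_deletion(vertices, edges, c):
    """Smallest S such that G - S has no theta_c minor (c in {1, 2})."""
    for k in range(len(vertices) + 1):
        for S in combinations(vertices, k):
            kept = set(vertices) - set(S)
            sub = [(u, w) for u, w in edges if u in kept and w in kept]
            if theta_c_free(kept, sub, c):
                return set(S)
```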