7 research outputs found

    Signed double Roman domination on cubic graphs

    The signed double Roman domination problem is a combinatorial optimization problem on a graph asking to feasibly assign a label from {±1, 2, 3} to each vertex such that the total sum of assigned labels is minimized. Here a labeling is feasible whenever (i) vertices labeled ±1 have at least one neighbor with label in {2, 3}; (ii) each vertex labeled −1 has one 3-labeled neighbor or at least two 2-labeled neighbors; and (iii) the sum of labels over the closed neighborhood of any vertex is positive. The cumulative weight of an optimal labeling is called the signed double Roman domination number (SDRDN). In this work, we first consider the problem on general cubic graphs of order n, for which we present a sharp n/2 + Θ(1) lower bound for the SDRDN by means of the discharging method. Moreover, we derive a new best upper bound. Observing that we are often able to minimize the SDRDN over the class of cubic graphs of a fixed order, we then study in this context generalized Petersen graphs of independent interest, for which we propose a constraint-programming-guided proof. We then use these insights to determine the SDRDNs of subcubic 2×m grid graphs, among other results.
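    The three feasibility conditions above can be made concrete with a small checker. The following sketch is illustrative only (the `is_sdrd` helper, the adjacency-list encoding, and the K4 example labeling are assumptions, not taken from the paper):

```python
def is_sdrd(adj, f):
    """Check the three SDRD feasibility conditions.

    adj: dict mapping each vertex to its list of neighbors.
    f:   dict mapping each vertex to a label in {-1, 1, 2, 3}.
    """
    for v, nbrs in adj.items():
        labels = [f[u] for u in nbrs]
        # (i) a vertex labeled +-1 needs a neighbor labeled 2 or 3
        if f[v] in (-1, 1) and not any(l in (2, 3) for l in labels):
            return False
        # (ii) a vertex labeled -1 needs one 3-labeled neighbor
        #      or at least two 2-labeled neighbors
        if f[v] == -1 and not (3 in labels or labels.count(2) >= 2):
            return False
        # (iii) the closed neighborhood must have a positive label sum
        if f[v] + sum(labels) <= 0:
            return False
    return True

# K4 is cubic; this labeling has weight 3 + 1 - 1 - 1 = 2.
k4 = {v: [u for u in range(4) if u != v] for v in range(4)}
f = {0: 3, 1: 1, 2: -1, 3: -1}
print(is_sdrd(k4, f))  # → True
```

Note that relabeling vertex 1 to −1 would violate condition (iii) at vertex 0 (closed-neighborhood sum 0), so the checker rejects it.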

    Exact and Heuristic Approaches for Solving String Problems from Bioinformatics

    This thesis provides several new algorithms for solving prominent string problems from the literature, most of them variants of the well-known longest common subsequence (LCS) problem. Given a set of input strings, a longest common subsequence is a string of maximum length that can be obtained from each input string by removing letters, i.e., which is a common subsequence of all input strings. The problem is known to be NP–hard and challenging to solve in practice for the general case of an arbitrary set of input strings. Besides the basic LCS problem, we consider here the following important variants: the longest common palindromic subsequence problem, the arc-preserving LCS problem, the longest common square subsequence problem, the repetition-free LCS problem, and the constrained LCS problem. These problems provide a range of important measures that serve for detecting similarities between molecules of various structures. Concerning heuristic approaches, we propose a general beam search framework in which many previously described methods from the literature can be expressed. In particular, new state-of-the-art results were obtained on various benchmark sets by utilizing a novel heuristic guidance that approximates the expected solution length of three different string problems. For solving the longest common square subsequence problem, a hybrid of a reduced variable neighborhood search method and a beam search technique is proposed. Concerning exact techniques, two kinds of methods are proposed: (i) pure exact methods based on A∗ search and (ii) anytime algorithms that build upon the A∗ search framework.
Experimental results indicate that this A∗ search is also able to outperform all previously published, more specific exact algorithms for the longest common palindromic subsequence and the constrained longest common subsequence problems with two input strings. Concerning anytime algorithms, we first make use of the derived A∗ search framework such that classical A∗ iterations are interleaved with beam search runs. Later, another anytime algorithm variant is proposed in which the beam search part is replaced by a major iteration of Anytime Column Search. New state-of-the-art results are produced, and better final optimality gaps are obtained by the latter hybrid in comparison to several state-of-the-art anytime algorithms from the literature. As an alternative exact approach, we further consider the transformation of LCS problem instances into Maximum Clique (MC) problem instances on the basis of so-called conflict graphs. In this way, state-of-the-art MC solvers can be utilized for solving the LCS problem instances. Further, an effective conflict graph reduction based on suboptimality checks is proposed. In conjunction with the general-purpose mixed integer linear programming solver CPLEX, new state-of-the-art results are obtained on a wide range of benchmark instances.
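As a point of reference for the problem definition above, the two-string special case of LCS is solvable in O(mn) time by the classical textbook dynamic program; it is only the general multi-string case that is NP–hard and motivates the search techniques developed in the thesis. A minimal baseline sketch:

```python
def lcs(a, b):
    """Length of a longest common subsequence of two strings (textbook DP)."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of the prefixes a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one letter
    return dp[m][n]

print(lcs("AGCAT", "GAC"))  # → 2, e.g. "GA" or "AC"
```

The same recurrence generalizes to k strings with a k-dimensional table, but its size grows exponentially in k, which is why heuristic and best-first search methods are used instead.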

    Application of A∗ to the Generalized Constrained Longest Common Subsequence Problem with Many Pattern Strings

    This paper considers the constrained longest common subsequence problem with an arbitrary set of input strings and an arbitrary set of pattern strings as input. The problem has applications, for example, in computational biology, serving as a measure of similarity among different molecules that are characterized by common putative structures. We develop an exact A∗ search to solve it. Our A∗ search is compared to the only existing competitor from the literature, an automaton-based approach. The results show that A∗ is very efficient for real-world benchmarks, finding provably optimal solutions in run times that are an order of magnitude lower than those of the competitor. Even some of the large-scale real-world instances were solved to optimality by A∗ search.

    Solving the Longest Common Subsequence Problem Concerning Non-Uniform Distributions of Letters in Input Strings

    The longest common subsequence (LCS) problem is a prominent NP–hard optimization problem where, given an arbitrary set of input strings, the aim is to find a longest subsequence common to all input strings. This problem has a variety of applications in bioinformatics, molecular biology and file plagiarism checking, among others. All previous approaches from the literature are dedicated to solving LCS instances sampled from uniform or near-to-uniform probability distributions of letters in the input strings. In this paper, we introduce an approach that is able to effectively deal with more general cases, where the occurrence of letters in the input strings follows a non-uniform distribution such as a multinomial distribution. The proposed approach makes use of a time-restricted beam search, guided by a novel heuristic named Gmpsum. This heuristic combines two complementary scoring functions in the form of a convex combination. Furthermore, apart from the close-to-uniform benchmark sets from the related literature, we introduce three new benchmark sets that differ in terms of their statistical properties. One of these sets concerns a case study in the context of text analysis. We provide a comprehensive empirical evaluation in two distinctive settings: (1) short-time execution with fixed beam size, in order to evaluate the guidance abilities of the compared search heuristics; and (2) long-time execution with fixed target duration times, in order to obtain high-quality solutions. In both settings, the newly proposed approach performs comparably to state-of-the-art techniques on close-to-uniform instances and outperforms state-of-the-art approaches on non-uniform instances.
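The beam search idea underlying this line of work can be sketched compactly. In the sketch below, a state is a vector of front positions (one per input string), a successor extends the common subsequence by one letter, and only the `beam_width` best states survive each level. Note that the Gmpsum guidance from the paper is replaced here by a naive tie-breaking score (longer subsequence first, then smaller consumed prefixes); function and parameter names are illustrative assumptions:

```python
def beam_search_lcs(strings, beam_width=10):
    """Heuristic lower bound on the LCS length of several strings."""
    beam = [(tuple(0 for _ in strings), 0)]  # (front positions, length so far)
    best = 0
    alphabet = set(strings[0])  # LCS letters must occur in every string
    while beam:
        nxt = []
        for pos, length in beam:
            for ch in alphabet:
                # advance every string's front past its next occurrence of ch
                new = []
                for s, p in zip(strings, pos):
                    k = s.find(ch, p)
                    if k == -1:
                        break  # ch no longer appears in this string
                    new.append(k + 1)
                else:
                    nxt.append((tuple(new), length + 1))
        if not nxt:
            break
        # naive guidance: prefer longer subsequences, then smaller fronts
        nxt.sort(key=lambda st: (-st[1], sum(st[0])))
        beam = nxt[:beam_width]
        best = max(best, beam[0][1])
    return best

print(beam_search_lcs(["AGCAT", "GAC", "AGC"]))  # → 2, e.g. "GC"
```

Being a heuristic, the routine returns a feasible (not necessarily optimal) length; the quality of the guidance function is precisely what distinguishes the approaches compared in the paper.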
    Solving longest common subsequence problems via a transformation to the maximum clique problem

    Longest common subsequence problems find various applications in bioinformatics, data compression and text editing, just to name a few. Even though numerous heuristic approaches were published in the related literature for many of the considered problem variants during the last decades, solving these problems to optimality remains an important challenge. This is particularly the case when the number and the length of the input strings grow. In this work we define a new way to transform instances of the classical longest common subsequence problem, and of some of its variants, into instances of the maximum clique problem. Moreover, we propose a technique to reduce the size of the resulting graphs. Finally, a comprehensive experimental evaluation using recent exact and heuristic maximum clique solvers is presented. Numerous so-far unsolved problem instances from benchmark sets taken from the literature were solved to optimality in this way.
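The core of such a transformation can be illustrated for two strings: each conflict-graph vertex is a pair of positions carrying the same letter, two vertices are adjacent when their positions are strictly ordered the same way in both strings, and a maximum clique then corresponds to a longest common subsequence. The brute-force clique enumeration below is an illustrative stand-in for the dedicated MC solvers used in the paper, and the encoding details are assumptions, not the paper's exact construction:

```python
from itertools import combinations

def lcs_via_clique(a, b):
    """LCS length of two strings via a max clique in a compatibility graph."""
    # vertices: position pairs (i, j) with a[i] == b[j]
    verts = [(i, j) for i, ca in enumerate(a) for j, cb in enumerate(b) if ca == cb]

    def compatible(u, v):
        # two matches can coexist in one common subsequence iff their
        # positions are strictly increasing (or decreasing) in both strings
        return (u[0] < v[0] and u[1] < v[1]) or (v[0] < u[0] and v[1] < u[1])

    # brute-force maximum clique, trying the largest size first
    for k in range(len(verts), 0, -1):
        for sub in combinations(verts, k):
            if all(compatible(u, v) for u, v in combinations(sub, 2)):
                return k
    return 0

print(lcs_via_clique("AGCAT", "GAC"))  # → 2
```

The enumeration is exponential, which is exactly why the paper's graph-reduction technique and state-of-the-art MC solvers matter in practice.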