    A novel evolutionary formulation of the maximum independent set problem

    We introduce a novel evolutionary formulation of the problem of finding a maximum independent set of a graph. The new formulation is based on the relationship between a graph's independence number and its acyclic orientations. It views such orientations as individuals and evolves them with evolutionary operators that are heavily based on the structure of the graph and its acyclic orientations. The resulting heuristic has been tested on some of the Second DIMACS Implementation Challenge benchmark graphs and found to be competitive with several of the other heuristics that have been tested on those graphs.
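
    The link between acyclic orientations and independent sets can be made concrete: if an acyclic orientation is encoded as a vertex permutation (each edge oriented from its earlier to its later endpoint), scanning the permutation greedily yields a maximal independent set. The Python sketch below shows only this decoding step; it is not the paper's evolutionary algorithm, and the function and variable names are ours.

        # Minimal decoding sketch (illustrative, not the paper's operators):
        # scan the permutation and keep every vertex with no selected neighbour.
        def decode_orientation(perm, adjacency):
            """perm: list of vertices; adjacency: dict vertex -> set of neighbours."""
            independent = set()
            for v in perm:
                if adjacency[v].isdisjoint(independent):
                    independent.add(v)
            return independent

        # 4-cycle 0-1-2-3-0; a maximum independent set has two vertices.
        adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
        print(decode_orientation([0, 2, 1, 3], adj))  # {0, 2}

    An evolutionary search over such encodings can then evaluate individuals by the size of the decoded set, which is broadly the flavour of formulation the abstract describes.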

    On Maximum Weight Clique Algorithms, and How They Are Evaluated

    Maximum weight clique and maximum weight independent set solvers are often benchmarked using maximum clique problem instances, with weights allocated to vertices by taking the vertex number mod 200, plus 1. For constraint programming approaches, this rule has clear implications, favouring weight-based rather than degree-based heuristics. We show that similar implications hold for dedicated algorithms, and that, additionally, weight distributions affect whether certain inference rules are cost-effective. We look at other families of benchmark instances for the maximum weight clique problem, coming from winner determination problems, graph colouring, and error-correcting codes, and introduce two new families of instances, based upon kidney exchange and the Research Excellence Framework. In each case the weights carry much more interesting structure and do not in any way resemble the mod-200 rule. We make these instances available in the hope of improving the quality of future experiments.
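
    The criticised weighting rule is easy to reproduce; the snippet below is our own illustration, assuming vertices are numbered from 1 as in the DIMACS format.

        # Illustrative only: the common benchmark rule gives vertex i the weight
        # (i mod 200) + 1, so weights cycle through 2, 3, ..., 200, 1.
        def mod_200_weight(vertex_number: int) -> int:
            return (vertex_number % 200) + 1

        print([mod_200_weight(v) for v in (1, 2, 199, 200, 201)])  # [2, 3, 200, 1, 2]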

    The maximum k-colorable subgraph problem and related problems

    The maximum k-colorable subgraph (MkCS) problem is to find an induced k-colorable subgraph of maximum cardinality in a given graph. This paper is an in-depth analysis of the MkCS problem that considers various semidefinite programming relaxations, including their theoretical and numerical comparison. To simplify these relaxations we exploit the symmetry arising from permuting the colors, as well as the symmetry of the given graphs when applicable. We also show how to exploit invariance under permutations of the subsets for other partition problems, and how to use the MkCS problem to derive bounds on the chromatic number of a graph. Our numerical results verify that the proposed relaxations provide strong bounds for the MkCS problem and that they outperform existing bounds for most of the test instances.
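
    As background, a standard integer linear programming formulation of the MkCS problem (a generic model, not one of the semidefinite relaxations studied in the paper) uses a binary variable x_{v,c} meaning that vertex v receives colour c, where V and E denote the vertex and edge sets of the given graph:

        \begin{align*}
          \max\;        & \sum_{v \in V} \sum_{c=1}^{k} x_{v,c} \\
          \text{s.t.}\; & \sum_{c=1}^{k} x_{v,c} \le 1       & & v \in V, \\
                        & x_{u,c} + x_{v,c} \le 1            & & \{u,v\} \in E,\ c = 1,\dots,k, \\
                        & x_{v,c} \in \{0,1\}                & & v \in V,\ c = 1,\dots,k.
        \end{align*}

    The colour-permutation symmetry exploited in the paper is already visible here: relabelling the k colour classes maps feasible solutions to feasible solutions with the same objective value.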

    Evolutionary computation for software testing

    A variety of products have undergone a transformation from purely mechanical designs to ones dominated by software and electronic components. Watches are an extreme example: several decades ago they were purely mechanical, whereas modern smart watches are almost entirely electronic devices that rely heavily on software and offer far more features than just the current time. This change has had a crucial impact on how software is developed. A first attempt to control the rising complexity was the move to agile development practices such as extreme programming or Scrum. The rise in complexity affects not only the development process but also quality assurance and software testing: if a product contains more and more features, more tests are needed to ensure quality standards. Furthermore, agile development practices work iteratively, which leads to repeated testing and puts additional load on the testing team.
    Within this thesis we aimed to ease the pain of testing and examined a series of subproblems that arise. A key source of complexity is the number of test cases. We set out to reduce the number of test cases before they are executed manually or implemented as automated tests: by examining the test specification and the requirements coverage of the individual tests, we were able to identify redundant tests. For this we relied on a novel metaheuristic called GCAIS, which we improved iteratively. Another task is to control the remaining complexity. Testing is often time-critical, and an appropriate subset of the available tests must be chosen to gain quick insight into the status of the device under test. We examined this challenge in two testing scenarios. The first is semi-automated testing, where engineers execute a set of automated tests locally and closely observe the behaviour of the system under test; here we extended GCAIS to compute test suites that satisfy different criteria when given sufficient search time. The second is fully automated testing in a continuous integration (CI) setting. CI focuses on frequent software build cycles that include a testing stage in which speed is essential, so crucial tests must again be selected. However, because the software and possibly the test cases change with every build, the test suite has to be recomputed continuously and cannot be determined ahead of time; it must be chosen as part of the CI execution. We therefore switched to a computationally lightweight learning classifier system (LCS) to prioritize and select test cases. We integrated a series of our own innovations, such as continuous priorities, experience replay, and transfer learning, into an LCS known as XCSF. This enabled us to outperform a state-of-the-art artificial neural network which is used by companies such as Netflix. We further investigated how an LCS can be made faster using parallelism and developed generic approaches that can run on any multicore computing device. This is of interest for our CI use case, as the build server's architecture is unknown; the methods are, however, independent of the concrete LCS and not tied to our testing problem.
    We identified that many of the challenges arising in the CI use case have already been tackled by Organic Computing (OC), for example the need to adapt to an ever-changing environment. Hence we relied on OC design principles to create a system architecture that wraps the developed LCS and integrates it into existing CI processes. The final system is robust and highly autonomous. A side effect of the high degree of autonomy is a high level of automation, which fits CI well. We also gave insight into the usability and delivery of the full system to our industrial partner: test engineers can integrate it with a few lines of code and need no knowledge of LCS or OC to use it. Another implication of the developed system is that OC's ideas and design principles can also be employed outside the field of embedded systems, which shows that OC has a greater level of generality. The process of testing and correcting the errors found is still only partially automated. We take a first step towards automating the entire process, drawing an analogy to OC's concept of self-healing. As a first proof of concept of this idea we look at touch interfaces, where the software can be manipulated automatically to fulfill the specified behaviour, so that only a minimal amount of manual work is required.
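
    As a simple point of reference for the redundancy-removal step described above, the Python sketch below (our own greedy baseline with illustrative names and data, not GCAIS) repeatedly keeps the test covering the most still-uncovered requirements.

        # Greedy requirements-coverage reduction (illustrative baseline, not GCAIS).
        def reduce_suite(coverage):
            """coverage: dict mapping test name -> set of requirement ids it covers."""
            uncovered = set().union(*coverage.values())
            selected = []
            while uncovered:
                # pick the test that covers the most requirements not yet covered
                best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
                selected.append(best)
                uncovered -= coverage[best]
            return selected

        suite = {"t1": {"r1", "r2"}, "t2": {"r2"}, "t3": {"r3"}, "t4": {"r1", "r3"}}
        print(reduce_suite(suite))  # ['t1', 't3']: 't2' and 't4' are redundant here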

    A new polyhedral approach to combinatorial designs

    We consider combinatorial t-design problems as discrete optimization problems. Our motivation is that only a few studies have been done on the use of exact optimization techniques for designs, and that classical methods in design theory still leave many existence questions open. Roughly defined, t-designs are pairs of discrete sets that are related by strict properties of size, balance, and replication. These highly structured relationships provide optimal solutions to a variety of problems in computer science, such as error-correcting codes, secure communications, network interconnection, and hardware design, and are applicable to other areas such as statistics, scheduling, and games. We give a new approach to combinatorial t-designs that is useful for constructing t-designs by polyhedral methods. The first contribution of our work is a new equivalence between t-design problems and a graph theory problem. This equivalence leads to a novel integer programming formulation for t-designs, which we call GDP. We analyze the polyhedral properties of GDP and determine, among other results, the dimension of the associated polyhedron. We generate new classes of valid inequalities aimed at approximating this integer program by a linear program that has the same optimal solution. Some of the new classes of valid inequalities are generated as Chvátal-Gomory cuts, others by graph complements and combinatorial arguments, and others by the use of incidence substructures in a t-design. In particular, we find a class of valid inequalities, which we call the stable-set class, that represents an alternative graph equivalence for the problem of finding a t-design. We analyze and give results on the strength of these new classes of valid inequalities. We propose a separation problem and give its integer programming formulation as a maximum (or minimum) edge-weight biclique subgraph problem. We implement a pure cutting-plane algorithm using one of the stronger classes of valid inequalities derived; several instances of t-designs were solved efficiently by this algorithm at the root node of the search tree. We also implement a branch-and-cut algorithm and solve several instances of 2-designs using different base formulations. Computational results are included.
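
    For orientation, a t-(v,k,λ) design with no repeated blocks can be modelled as a generic block-selection feasibility program; this is the textbook model, not necessarily the paper's GDP formulation. With one binary variable x_B per candidate k-subset B of the v points, every t-subset of points must lie in exactly λ chosen blocks:

        \[
          \sum_{B \supseteq T,\ |B| = k} x_B = \lambda
          \qquad \text{for every $t$-subset $T$ of the points}, \qquad x_B \in \{0,1\}.
        \]

    Cutting planes such as the Chvátal-Gomory cuts mentioned above are used to tighten the linear relaxation of integer programs of this kind.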

    Solving hard subgraph problems in parallel

    This thesis improves the state of the art in exact, practical algorithms for finding subgraphs. We study maximum clique, subgraph isomorphism, and maximum common subgraph problems. These are widely applicable: within computing science, subgraph problems arise in document clustering, computer vision, the design of communication protocols, model checking, compiler code generation, malware detection, cryptography, and robotics; beyond, applications occur in biochemistry, electrical engineering, mathematics, law enforcement, fraud detection, fault diagnosis, manufacturing, and sociology. We therefore consider both the "pure" forms of these problems and variants with labels and other domain-specific constraints. Although subgraph-finding should theoretically be hard, the constraint-based search algorithms we discuss can easily solve real-world instances involving graphs with thousands of vertices and millions of edges. We therefore ask: is it possible to generate "really hard" instances for these problems, and if so, what can we learn? By extending research into combinatorial phase transition phenomena, we develop a better understanding of branching heuristics, as well as highlighting a serious flaw in the design of graph database systems. This thesis also demonstrates how to exploit two of the kinds of parallelism offered by current computer hardware. Bit parallelism allows us to carry out operations on whole sets of vertices in a single instruction; this is largely routine. Thread parallelism, to make use of the multiple cores offered by all modern processors, is more complex. We suggest three desirable performance characteristics when introducing thread parallelism: lack of risk (parallel cannot be exponentially slower than sequential), scalability (adding more processing cores cannot make runtimes worse), and reproducibility (the same instance on the same hardware will take roughly the same time every time it is run). We then detail the difficulties in guaranteeing these characteristics when using modern algorithmic techniques. Besides ensuring that parallelism cannot make things worse, we also increase the likelihood of it making things better. We compare randomised work stealing to new tailored strategies, and perform experiments to identify the factors contributing to good speedups. We show that whilst load balancing is difficult, the primary factor influencing the results is the interaction between branching heuristics and parallelism. By using parallelism to explicitly offset the commitment made to weak early branching choices, we obtain parallel subgraph solvers which are substantially and consistently better than the best sequential algorithms.
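
    The bit parallelism mentioned above is easy to illustrate: if adjacency rows and candidate sets are stored as machine words (here Python integers), restricting the candidate set to the neighbours of a vertex is a single bitwise AND. The sketch below is our own toy branch-and-bound maximum clique search, not the thesis' solver.

        # Bitset-based branch and bound (illustrative toy, not the thesis' algorithm).
        def max_clique(adj_bits):
            """adj_bits[v] is an int with bit u set iff vertices u and v are adjacent."""
            best = []

            def expand(clique, cand):
                nonlocal best
                if len(clique) > len(best):
                    best = clique[:]
                while cand:
                    if len(clique) + bin(cand).count("1") <= len(best):
                        return                    # bound: cannot beat the incumbent
                    v = cand.bit_length() - 1     # highest-numbered remaining candidate
                    cand &= ~(1 << v)
                    expand(clique + [v], cand & adj_bits[v])  # one AND filters candidates

            expand([], (1 << len(adj_bits)) - 1)
            return best

        # Triangle 0-1-2 plus a pendant vertex 3 attached to 0.
        print(max_clique([0b1110, 0b0101, 0b0011, 0b0001]))  # [2, 1, 0]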

    The Partial Set Covering Problem and Extensions: Modelling and Solution Methods (Das Partial Set Covering Problem und Erweiterungen: Modellierung und Lösungsverfahren)

    In this thesis, we study the Partial Set Covering Problem (PSCP) as well as some new extensions of it. We present a new extension of the PSCP, called the Multiple Coverage Partial Set Covering Problem (MCPSCP), which combines the aspect of multiple coverage with the PSCP. Heuristic and approximation algorithms are proposed; the focus lies on the PSCP and the MCPSCP, for which several local search and Lagrangean-based algorithms are presented. The heuristics are tested on a wide variety of benchmark problems. Furthermore, we report on an application of the PSCP and the MCPSCP in railway networks, where the models are used to find optimal positions for vehicle testing stations.
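
    For readers unfamiliar with the model, the PSCP asks for a minimum-cost selection of sets covering at least a prescribed number of elements. A textbook formulation (not necessarily the notation or exact variant used in the thesis), with sets S_1, ..., S_n of costs c_j, elements 1, ..., m, coverage indicators y_i, and coverage target p, reads:

        \begin{align*}
          \min\;        & \sum_{j=1}^{n} c_j x_j \\
          \text{s.t.}\; & \sum_{j:\, i \in S_j} x_j \ge y_i   & & i = 1,\dots,m, \\
                        & \sum_{i=1}^{m} y_i \ge p, \\
                        & x_j,\, y_i \in \{0,1\}.
        \end{align*}

    Setting p = m recovers the classical set covering problem.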

    Solving Hard Set Covering Problems
