
    Advances and Novel Approaches in Discrete Optimization

    Discrete optimization is an important area of Applied Mathematics with a broad spectrum of applications. This book results from a Special Issue of the journal Mathematics entitled ‘Advances and Novel Approaches in Discrete Optimization’. It contains 17 articles on a wide range of subjects, selected from 43 submitted papers after a thorough refereeing process. Among other topics, it includes seven articles dealing with scheduling problems, e.g., online scheduling, batching, dual and inverse scheduling problems, and uncertain scheduling problems. Other subjects are graphs and applications, evacuation planning, the max-cut problem, capacitated lot-sizing, and packing algorithms.

    Towards Visualization of Discrete Optimization Problems and Search Algorithms

    Discrete optimization deals with the identification of combinations or permutations of elements that are optimal with regard to a given quantitative criterion. Applications arise from problems in economics, manufacturing, engineering, mathematics, and computer science, among them machine learning, the scheduling of production processes, and the layout of integrated circuits. Discrete optimization problems are typically NP-hard, so the study of efficient heuristic search algorithms is of high relevance in order to find good solutions for medium- and large-sized problem instances at all. The development of such algorithms is complicated by the fact that properties of problem instances are often hard to identify due to the size and complexity of the instances. The analysis and evaluation of given algorithms is equally challenging, because the search behavior of an algorithm is hard to characterize, especially in the case of emergent behavior as investigated in swarm intelligence research.

    Visualization aims to harness human vision for data processing. The visual brain possesses tremendous capabilities to analyse optical stimuli from the visual nerves, to perceive shapes and patterns, to assign meaning to them, and thus to enable an intuitive understanding of what is seen. In particular, this ability can be used to generate hypotheses about complex data by representing the data in a well-designed depiction that makes them accessible to the viewer's visual system. So far, visualization has seen little use in supporting discrete optimization research specifically.

    This thesis is meant as a starting point for an increased application of visualization throughout the development of discrete search heuristics. We first discuss the central questions that arise in algorithm development and derive from them requirements on visualization systems. Possible directions of visualization research that yield concrete benefits for optimization research are presented. Building on this, three visualization systems and one analysis method for the study of discrete search are introduced, addressing three important tasks of algorithm designers. First, a system for the fine-grained comparison of algorithms is introduced: based on the intermediate results of algorithm runs on a given problem instance, the search process is visualized, with a focus on the progress of solution quality over time, while the algorithm expert can augment the depiction with additional domain knowledge and classifications of individual solutions. Second, a system for the analysis of search landscapes is presented: based on paths and distances in the landscape, a map of the problem instance is drawn that makes structural properties intuitively graspable.

    The second part of the thesis focuses on the topological analysis of search landscapes, based on barriers. A visualization system is presented that shows a topologically equivalent height profile of the search landscape, making its topological structure comprehensible. The system also makes it possible to observe the search process of an algorithm directly within the search landscape, which is of particular interest when studying swarm intelligence algorithms. Computing the topological structure requires a complete enumeration of all solutions, which is generally infeasible due to the size of the search landscapes. To enable the analysis of larger problem instances, we introduce a method for approximating the topological structure. The method allows for an incremental refinement of the approximation and can be steered heuristically, so the domain expert can bring her knowledge and hypotheses about the problem instance into the analysis and achieve an approximation of good quality at reasonable computational cost.
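
    To make the barrier-based analysis of the second part concrete, the following Python sketch (a toy illustration under assumed definitions, not code from the thesis) enumerates a small binary landscape, sweeps a fitness threshold upward by visiting solutions in sorted order, and uses union-find over Hamming-1 neighbors to detect the heights at which two basins first connect:

        # Toy barrier analysis of a search landscape (illustrative sketch;
        # the fitness function and landscape are assumptions, not from the thesis).
        from itertools import product

        N = 10

        def fitness(s):
            # deceptive toy objective with two competing basins
            ones = sum(s)
            return min(abs(ones - 2), abs(ones - 8)) + 0.1 * s[0]

        # sweep a fitness threshold upward by visiting solutions in sorted order
        solutions = sorted(product((0, 1), repeat=N), key=fitness)

        parent, size = {}, {}

        def find(x):                      # union-find with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for s in solutions:
            parent[s], size[s] = s, 1
            for i in range(N):            # Hamming-1 neighbors
                t = list(s); t[i] ^= 1; t = tuple(t)
                if t in parent:           # neighbor already below the threshold
                    a, b = find(s), find(t)
                    if a != b:
                        if size[a] > 1 and size[b] > 1:
                            print(f"basins merge at barrier height {fitness(s):.1f}")
                        parent[a] = b
                        size[b] += size[a]

    The printed merge events correspond to saddles of the landscape: the fitness at which two previously separate basins become mutually reachable, which is exactly the structure a barrier-based height profile displays.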

    Automating Black-Box Property Based Testing

    Black-box property based testing tools like QuickCheck allow developers to write elegant logical specifications of their programs, while still permitting unrestricted use of the same language features and libraries that simplify writing the programs themselves. This is an improvement over unit testing because a single property can replace a large collection of test cases, and over more heavy-weight white-box testing frameworks that impose restrictions on how properties and tested code are written. In most cases the developer only needs to write a function returning a boolean, something any developer is capable of without additional training.

    This thesis aims to further lower the threshold for using property based testing by automating some problematic areas, most notably generating test data for user defined data types. Writing procedures for random test data generation by hand is time consuming and error prone, and most fully automatic algorithms give very poor random distributions for practical cases.

    Several fully automatic algorithms for generating test data are presented in this thesis, along with implementations as Haskell libraries. These algorithms all fit nicely within a framework called sized functors, allowing re-usable generator definitions to be constructed automatically or by hand using a few simple combinators.

    Test quality is another difficulty with property based testing. When a property fails to find a counterexample there is always some uncertainty in the strength of the property as a specification. To address this problem we introduce a black-box variant of mutation testing. Usually mutation testing involves automatically introducing errors (mutations) in the source code of a tested program to see if a test suite can detect them. Using higher order functions, we mutate functions without accessing their source code. The result is a very light-weight mutation testing procedure that automatically estimates property strength for QuickCheck properties.
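
    As a rough illustration of the two ideas above, here is a sketch using Python's hypothesis library (the thesis itself targets Haskell's QuickCheck; all function names here are invented for the example): a single round-trip property stands in for many unit tests, and a higher-order wrapper mutates a function without access to its source to check whether the property would catch the fault.

        # Sketch in Python's hypothesis library; the thesis targets Haskell's
        # QuickCheck, and all function names here are invented for illustration.
        from hypothesis import given, strategies as st

        def encode(xs):                 # run-length encoding
            out = []
            for x in xs:
                if out and out[-1][0] == x:
                    out[-1][1] += 1
                else:
                    out.append([x, 1])
            return out

        def decode(pairs):
            return [x for x, n in pairs for _ in range(n)]

        # One logical property replaces a large collection of unit tests.
        @given(st.lists(st.integers()))
        def test_roundtrip(xs):
            assert decode(encode(xs)) == xs

        # Black-box mutation: inject a fault by wrapping the function,
        # without access to its source code.
        def mutate(f):
            def g(xs):
                r = f(xs)
                return r[:-1] if r else r   # drop the final run
            return g

        def detected(mutant):
            # probe the property on a tiny fixed input set
            return any(decode(mutant(xs)) != xs for xs in ([1], [1, 2], [3, 3, 3]))

        test_roundtrip()                          # runs the random property
        print("mutant detected:", detected(mutate(encode)))

    If a mutant survives all probes, the property is too weak as a specification of encode; counting surviving mutants gives the kind of property-strength estimate the abstract describes.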

    ICAPS 2012. Proceedings of the third Workshop on the International Planning Competition

    22nd International Conference on Automated Planning and Scheduling, June 25-29, 2012, Atibaia, SĂŁo Paulo (Brazil). Proceedings of the 3rd Workshop on the International Planning Competition. Contents:
    -- The Academic Advising Planning Domain / Joshua T. Guerin, Josiah P. Hanna, Libby Ferland, Nicholas Mattei, and Judy Goldsmith
    -- Leveraging Classical Planners through Translations / Ronen I. Brafman, Guy Shani, and Ran Taig
    -- Advances in BDD Search: Filtering, Partitioning, and Bidirectionally Blind / Stefan Edelkamp, Peter Kissmann, and Álvaro Torralba
    -- A Multi-Agent Extension of PDDL3.1 / Daniel L. Kovacs
    -- Mining IPC-2011 Results / Isabel Cenamor, TomĂĄs de la Rosa, and Fernando FernĂĄndez
    -- How Good is the Performance of the Best Portfolio in IPC-2011? / Sergio Nuñez, Daniel Borrajo, and Carlos Linares LĂłpez
    -- “Type Problem in Domain Description!” or, Outsiders’ Suggestions for PDDL Improvement / Robert P. Goldman and Peter Keller
    In press.

    New techniques for graph algorithms

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 181-192).

    The growing need to deal efficiently with massive computing tasks prompts us to consider the following question: How well can we solve fundamental optimization problems if our algorithms have to run really quickly? The motivation for the research presented in this thesis stems from addressing the above question in the context of algorithmic graph theory. To pursue this direction, we develop a toolkit that combines a diverse set of modern algorithmic techniques, including sparsification, low-stretch spanning trees, the multiplicative-weights-update method, dynamic graph algorithms, fast Laplacian system solvers, and tools of spectral graph theory. Using this toolkit, we obtain improved algorithms for several basic graph problems, including:
    -- The Maximum s-t Flow and Minimum s-t Cut Problems. We develop a new approach to computing (1 - Δ)-approximately maximum s-t flows and (1 + Δ)-approximately minimum s-t cuts in undirected graphs that gives the fastest known algorithms for these tasks. These algorithms are the first to improve the long-standing bound of O(n^(3/2)) running time on sparse graphs;
    -- Multicommodity Flow Problems. We set forth a new method of speeding up the existing approximation algorithms for multicommodity flow problems, and use it to obtain the fastest-known (1 - Δ)-approximation algorithms for these problems. These results improve upon the best previously known bounds by a factor of roughly Ω(m/n), and make the resulting running times essentially match the Ω(mn) "flow-decomposition barrier" that is a natural obstacle to all the existing approaches;
    -- Undirected (Multi-)Cut-Based Minimization Problems. We develop a general framework for designing fast approximation algorithms for (multi-)cut-based minimization problems in undirected graphs. Applying this framework leads to the first algorithms for several fundamental graph partitioning primitives, such as the (generalized) sparsest cut problem and the balanced separator problem, that run in close to linear time while still providing polylogarithmic approximation guarantees;
    -- The Asymmetric Traveling Salesman Problem. We design an O(log n / log log n)-approximation algorithm for this classical problem of combinatorial optimization. This is the first asymptotic improvement over the long-standing approximation barrier of Θ(log n) for this problem;
    -- Random Spanning Tree Generation. We improve the bound on the time needed to generate a uniform random spanning tree of an undirected graph.

    by Aleksander Mądry. Ph.D.
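
    The last item concerns sampling spanning trees uniformly at random; the classical random-walk baseline that such results improve upon can be sketched in a few lines. Below is the Aldous-Broder algorithm in Python (a standard textbook method shown for orientation, not the thesis's faster algorithm): walk randomly until every vertex has been visited and keep the edge by which each vertex was first entered.

        import random

        def aldous_broder(adj, start=0):
            """Uniformly random spanning tree of a connected undirected
            graph, given as an adjacency list {vertex: [neighbors]}."""
            visited = {start}
            tree = []
            v = start
            while len(visited) < len(adj):
                w = random.choice(adj[v])   # one step of a simple random walk
                if w not in visited:        # first entry into w: keep the edge
                    visited.add(w)
                    tree.append((v, w))
                v = w
            return tree

        # On a 4-cycle, each of the four spanning trees appears with probability 1/4.
        cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
        print(aldous_broder(cycle))

    Its expected running time is the cover time of the walk, up to O(mn) in the worst case, which is why faster generation methods such as those in the thesis are of interest.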

    LIPIcs, Volume 248, ISAAC 2022, Complete Volume

    LIPIcs, Volume 248, ISAAC 2022, Complete Volume.

    Coding for storage and testing

    The problem of reconstructing strings from substring information has found many applications due to its importance in genomic data sequencing and DNA- and polymer-based data storage. Motivated by platforms that use chains of binary synthetic polymers as the recording media and read the content via tandem mass spectrometers, we propose a new family of codes that allows for both unique string reconstruction and correction of multiple mass errors. We first consider the paradigm where the masses of substrings of the input string form the evidence set, and take two approaches. The first approach pertains to asymmetric errors, where error correction is achieved by introducing redundancy that scales linearly with the number of errors and logarithmically with the length of the string. The proposed construction allows the string to be uniquely reconstructed based only on its erroneous substring composition multiset. The asymptotic code rate of the scheme is one, and decoding is accomplished via a simplified version of the Backtracking algorithm used for the Turnpike problem. For symmetric errors, we use a polynomial characterization of the mass information and adapt polynomial evaluation code constructions to this setting. In the process, we develop new efficient decoding algorithms for a constant number of composition errors.

    The second part of this dissertation addresses a practical paradigm that requires reconstructing mixtures of strings based on the union of compositions of their prefixes and suffixes, as generated by mass spectrometry devices. We describe new coding methods that allow for unique joint reconstruction of subsets of strings selected from a code and provide upper and lower bounds on the asymptotic rate of the underlying codebooks. Our code constructions combine properties of binary B_h and Dyck strings and can be extended to accommodate missing substrings in the pool.

    In the final chapter of this dissertation, we focus on group testing. We begin with a review of the gold-standard testing protocol for Covid-19, real-time reverse transcription PCR, and of its properties and associated measurement data, such as amplification curves, that can guide the development of appropriate and accurate adaptive group testing protocols. We then examine various off-the-shelf group testing methods for Covid-19 and identify their strengths and weaknesses for the application at hand. Finally, we present a collection of new analytical results for adaptive semiquantitative group testing with combinatorial priors, including performance bounds, algorithmic solutions, and noisy testing protocols. The worst-case paradigm extends and improves upon prior work on semiquantitative group testing with and without specialized PCR noise models.
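
    The substring composition multiset that the first paradigm relies on is easy to state concretely. The Python sketch below (an illustration of the underlying combinatorial object, not of the proposed codes) computes, for a binary string, the multiset of (number of 0s, number of 1s) pairs, i.e. the "masses", over all of its substrings, and checks the well-known fact that a string and its reversal share the same multiset:

        from collections import Counter

        def composition_multiset(s):
            """Multiset of (zeros, ones) compositions over all substrings of s."""
            comps = Counter()
            for i in range(len(s)):
                zeros = ones = 0
                for j in range(i, len(s)):
                    if s[j] == '0':
                        zeros += 1
                    else:
                        ones += 1
                    comps[(zeros, ones)] += 1   # the "mass" of substring s[i..j]
            return comps

        # A string and its reversal always share the same composition multiset,
        # so reconstruction from this evidence is only ever possible up to reversal.
        a = "0110100"
        print(composition_multiset(a) == composition_multiset(a[::-1]))  # True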