3 research outputs found

    What Do We Learn from Experimental Algorithmics?

    Experimental Algorithmics is concerned with the design, implementation, tuning, debugging and performance analysis of computer programs for solving algorithmic problems. It provides methodologies and tools for designing, developing and experimentally analyzing efficient algorithmic codes, and it aims to integrate and reinforce traditional theoretical approaches to the design and analysis of algorithms and data structures. In this paper we survey some relevant contributions to the field of Experimental Algorithmics and discuss significant examples where the experimental approach helped develop new ideas, assess heuristics and techniques, and gain deeper insight into existing algorithms.
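    As a hedged illustration of the experimental approach described above, the following sketch times a hand-written quadratic sort against Python's built-in sort on growing random inputs. The routines, input sizes, and repetition count are arbitrary assumptions chosen for demonstration, not taken from the survey.

# Illustrative experiment: measure two sorting routines on random inputs.
import random
import time


def insertion_sort(a):
    """Simple quadratic sort used as the 'candidate' algorithm."""
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a


def measure(sort_fn, data, repeats=3):
    """Return the best-of-`repeats` wall-clock time for sorting `data`."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        sort_fn(data)
        best = min(best, time.perf_counter() - start)
    return best


if __name__ == "__main__":
    random.seed(0)
    for n in (1_000, 2_000, 4_000, 8_000):
        data = [random.randint(0, 10**6) for _ in range(n)]
        t_ins = measure(insertion_sort, data)
        t_lib = measure(sorted, data)
        print(f"n={n:>5}  insertion={t_ins:.4f}s  built-in={t_lib:.4f}s")

    Plotting such measurements against input size is the kind of empirical evidence the experimental approach uses to complement asymptotic analysis.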

    Scalable String and Suffix Sorting: Algorithms, Techniques, and Tools

    This dissertation focuses on two fundamental sorting problems: string sorting and suffix sorting. The first part considers parallel string sorting on shared-memory multi-core machines, the second part external memory suffix sorting using the induced sorting principle, and the third part distributed external memory suffix sorting with a new distributed algorithmic big data framework named Thrill.
    Comment: 396 pages, dissertation, Karlsruher Institut für Technologie (2018). arXiv admin note: text overlap with arXiv:1101.3448 by other author.
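    As a hedged, minimal sketch of the suffix sorting problem treated in this dissertation, the following builds a suffix array by naively sorting suffix slices. It is only an illustrative baseline with superlinear cost, not the induced-sorting or distributed Thrill algorithms developed in the thesis.

# Naive suffix-array construction: sort suffix start positions by their suffixes.
def naive_suffix_array(text: str) -> list[int]:
    """Return the starting positions of the suffixes of `text` in sorted order."""
    return sorted(range(len(text)), key=lambda i: text[i:])


if __name__ == "__main__":
    s = "banana"
    sa = naive_suffix_array(s)
    print(sa)  # [5, 3, 1, 0, 4, 2]
    for i in sa:
        print(f"{i:>2}: {s[i:]}")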

    New experimentalism applied to evolutionary computation

    This thesis develops a solid statistical methodology to analyze search heuristics such as evolutionary algorithms, based on the concept of the new experimentalism. The new experimentalism is an influential discipline in the modern philosophy of science. The new experimentalists seek a relatively secure basis for science, not in theory or observation, but in experiment. Deborah Mayo - one of its prominent proponents - developed a detailed account of how scientific claims can be validated by experiment. First, the concept of the new experimentalism for computer experiments is introduced, and the difference between significant and meaningful results is detailed. Following Mayo, a re-interpretation of the Neyman-Pearson theory of testing for computer experiments is given. Since statistical tests can be used as learning tools, they provide means to extend widely accepted Popperian paradigms. Models are characterized as central elements of science. We claim that experiment dominates theory: many, even conflicting, theories can co-exist independently for one unique experimental result. Perhaps there is no single theory that applies to every phenomenon, but rather many simple theories describing what happens from case to case.
    Basic definitions from computational statistics, classical design of experiments (DOE), and modern design and analysis of computer experiments (DACE) are explained to provide the reader with the required statistical background. An elevator group control model, developed in cooperation with one of the world's leading elevator manufacturers, is introduced as an example of a complex real-world optimization problem. It is used to illustrate the difference between artificial functions from test suites and real-world problems, and problems related to these commonly used test suites are discussed. Experimenters have to decide where to place sample points. Classical and modern experimental designs are compared to describe the difference between space-filling designs and designs that place experimental points at the boundaries of the experimental region. In many situations it may be beneficial to generate the design points not all at once, but sequentially; a sequential design, which provides the basis for a parameter tuning method, is developed. Exogenous strategy parameters, which have to be specified before an optimization algorithm can be started, are presented for deterministic and stochastic search algorithms. The discussion of the concept of optimization provides the foundation for defining performance measures for search heuristics. Optimization relies on a number of very restrictive assumptions that are not met in many real-world settings; efficiency and effectiveness are introduced with respect to these problems as two important categories for classifying performance measures.
    With these prerequisites in place, experiments can be performed and analyzed in the framework of the new experimentalism. A classical approach based on DOE is presented first. Then, sequential parameter optimization (SPO) is developed as a modern methodology to improve ('tune') and compare the performance of algorithms. It is demonstrated how the tuning process, which requires only a relatively small number of experiments, can improve an algorithm's performance significantly. Beyond that, the new experimentalism, as introduced and applied in this thesis, provides means to understand an algorithm's performance. Various schemes for selection under noise are introduced to demonstrate this feature. As an example, it is shown how threshold selection can improve the local and global performance of search heuristics under noise. Threshold selection can be characterized as a smart and simple heuristic that performs relatively well in certain environments. These heuristics are interpreted in Herbert Simon's framework of bounded rationality. Finally, a commonly accepted model that describes the relation between experiment and theory is revised and enhanced.
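    The following is a minimal sketch of the threshold selection idea mentioned above, applied to a (1+1) evolution strategy on a noisy sphere function. The objective, noise level, step size, and threshold value are illustrative assumptions, not the thesis's elevator group control model or SPO-tuned settings.

# (1+1)-ES with threshold acceptance under noisy evaluations.
import random


def noisy_sphere(x, sigma_noise=0.5):
    """Sphere objective disturbed by additive Gaussian noise."""
    return sum(v * v for v in x) + random.gauss(0.0, sigma_noise)


def one_plus_one_es(dim=5, steps=2000, step_size=0.1, tau=0.3):
    """Accept an offspring only if the observed improvement exceeds the
    threshold `tau`, which filters out noise-induced 'improvements'."""
    parent = [random.uniform(-2, 2) for _ in range(dim)]
    parent_f = noisy_sphere(parent)
    for _ in range(steps):
        child = [v + random.gauss(0.0, step_size) for v in parent]
        child_f = noisy_sphere(child)
        if parent_f - child_f > tau:  # threshold acceptance
            parent, parent_f = child, child_f
    return parent


if __name__ == "__main__":
    random.seed(1)
    best = one_plus_one_es()
    print("true (noise-free) value:", sum(v * v for v in best))

    Setting tau to zero recovers plain greedy acceptance; a positive threshold trades off some local progress for robustness against accepting children that only appear better because of noise.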