
    Memoizing a monadic mixin DSL

    Modular extensibility is a highly desirable property of a domain-specific language (DSL): the ability to add new features without affecting the implementation of existing features. Functional mixins (also known as open recursion) are very suitable for this purpose. We study the use of mixins in Haskell for a modular DSL for search heuristics used in systematic solvers for combinatorial problems, which generates optimized C++ code from a high-level specification. We show how to apply memoization techniques to tackle performance issues and code explosion due to the high recursion inherent in the semantics of combinatorial search. As such heuristics are conventionally implemented as highly entangled imperative algorithms, our Haskell mixins are monadic. Memoization of monadic components raises further complications, which we deal with.
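
    The abstract leans on two Haskell idioms: components written as open-recursive functions (mixins) and a memoization layer threaded through a monad. The sketch below is a minimal illustration of that combination under my own assumptions -- the names Open, (<@>), close and memo, and the Fibonacci example are illustrative, not the paper's DSL -- but it shows how a monadic memo table can be slotted in as just another mixin before the recursive knot is tied.

        import qualified Data.Map as M
        import Control.Monad.State

        -- An open (mixin) component abstracts over the "self" that closing supplies.
        type Open s = s -> s

        -- Compose two mixins; close a mixin stack by tying the recursive knot.
        (<@>) :: Open s -> Open s -> Open s
        f <@> g = \self -> f (g self)

        close :: Open s -> s
        close f = let self = f self in self

        -- Base component: naive Fibonacci written against "self" in a state monad.
        fibOpen :: Open (Integer -> State (M.Map Integer Integer) Integer)
        fibOpen self n
          | n < 2     = return n
          | otherwise = (+) <$> self (n - 1) <*> self (n - 2)

        -- Memoization mixin: consult and extend a table of previously computed results.
        memo :: Open (Integer -> State (M.Map Integer Integer) Integer)
        memo self n = do
          table <- get
          case M.lookup n table of
            Just r  -> return r
            Nothing -> do r <- self n
                          modify (M.insert n r)
                          return r

        -- Because memo wraps every recursive call, results are shared throughout.
        fib :: Integer -> Integer
        fib n = evalState (close (memo <@> fibOpen) n) M.empty

    Without the memo mixin, close fibOpen recomputes subproblems exponentially often; with it, each argument is computed once and then read from the table.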

    Survey on Combinatorial Register Allocation and Instruction Scheduling

    Register allocation (mapping variables to processor registers or memory) and instruction scheduling (reordering instructions to increase instruction-level parallelism) are essential tasks for generating efficient assembly code in a compiler. In the last three decades, combinatorial optimization has emerged as an alternative to traditional, heuristic algorithms for these two tasks. Combinatorial optimization approaches can deliver optimal solutions according to a model, can precisely capture trade-offs between conflicting decisions, and are more flexible, at the expense of increased compilation time. This paper provides an exhaustive literature review and a classification of combinatorial optimization approaches to register allocation and instruction scheduling, with a focus on the techniques most commonly applied in this context: integer programming, constraint programming, partitioned Boolean quadratic programming, and enumeration. Researchers in compilers and combinatorial optimization can benefit from identifying developments, trends, and challenges in the area; compiler practitioners may discern opportunities and grasp the potential benefit of applying combinatorial optimization.
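
    To make the integer programming flavour concrete, a generic textbook-style register-assignment model (a sketch of my own, not taken from any surveyed approach; the sets and symbols below are illustrative assumptions) uses binary variables x_{v,r} meaning "variable v lives in register r" and y_v meaning "v is spilled to memory", with V the program variables, R the registers, I the pairs of interfering (simultaneously live) variables, and c_v per-variable spill costs:

        \begin{align*}
        \min\;        & \sum_{v \in V} c_v \, y_v          &&                                   && \text{(total spill cost)}\\
        \text{s.t.}\; & \sum_{r \in R} x_{v,r} + y_v = 1   && \forall v \in V                   && \text{(a register or a spill)}\\
                      & x_{u,r} + x_{v,r} \le 1            && \forall \{u,v\} \in I,\; r \in R  && \text{(interfering variables get distinct registers)}\\
                      & x_{v,r},\, y_v \in \{0,1\}
        \end{align*}

    Published models in this area are considerably richer, but this skeleton captures the basic trade-off between register assignment and spilling that the surveyed approaches optimize exactly.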

    Combinatorial batch codes

    In this paper, we study batch codes, which were introduced by Ishai, Kushilevitz, Ostrovsky and Sahai in [4]. A batch code specifies a method to distribute a database of n items among m devices (servers) in such a way that any k items can be retrieved by reading at most t items from each of the servers. It is of interest to devise batch codes that minimize the total storage, denoted by N, over all m servers. We restrict our attention to batch codes in which every server stores a subset of the items. This is purely a combinatorial problem, so we call this kind of batch code a "combinatorial batch code". We only study the special case t = 1, where, for various parameter situations, we are able to present batch codes that are optimal with respect to the storage requirement, N. We also study uniform codes, where every item is stored in precisely c of the m servers (such a code is said to have rate 1/c). Interesting new results are presented in the cases c = 2, c = k - 2, and c = k - 1. In addition, we obtain improved existence results for arbitrary fixed c using the probabilistic method.
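
    Since the paper restricts attention to t = 1, the defining property can be stated operationally: any k requested items must be assignable to pairwise distinct servers that store them. The Haskell sketch below is a brute-force check of exactly that property on toy instances; the item/server encoding and the names match and isBatchCode are my own illustrative choices, not the paper's notation.

        import Data.List (subsequences)

        type Item   = Int
        type Server = [Item]   -- the subset of the n items stored on one server

        -- Try to assign each requested item to a distinct server that stores it.
        match :: [Item] -> [Server] -> Bool
        match []     _       = True
        match (i:is) servers =
          or [ match is (before ++ after)          -- that server is now used up
             | (before, s:after) <- splits servers
             , i `elem` s ]
          where
            splits xs = [ splitAt j xs | j <- [0 .. length xs - 1] ]

        -- A set system is a batch code (with t = 1) if every k-subset is retrievable.
        isBatchCode :: Int -> Int -> [Server] -> Bool
        isBatchCode n k servers =
          all (\q -> match q servers) [ q | q <- subsequences [1..n], length q == k ]

        -- Toy instance: n = 4 items on m = 3 servers, total storage N = 7, k = 3.
        main :: IO ()
        main = print (isBatchCode 4 3 [[1,2], [2,3], [1,3,4]])   -- True

    By Hall's theorem this is a bipartite matching question between the requested items and the servers that store them, so the brute force above could be replaced by any matching algorithm for larger instances.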

    Hyperplane Neural Codes and the Polar Complex

    Hyperplane codes are a class of convex codes that arise as the output of a one-layer feed-forward neural network. Here we establish several natural properties of stable hyperplane codes in terms of the polar complex of the code, a simplicial complex associated to any combinatorial code. We prove that the polar complex of a stable hyperplane code is shellable and show that most currently known properties of hyperplane codes follow from the shellability of the appropriate polar complex.
    Comment: 23 pages, 5 figures. To appear in Proceedings of the Abel Symposium.
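
    For readers unfamiliar with hyperplane codes, the abstract's first sentence already gives an operational description: each neuron of a one-layer feed-forward network fires on the stimuli lying on the positive side of its hyperplane, and the code is the set of firing patterns that actually occur. The sketch below (the weights, biases, and sampling grid are illustrative assumptions, and sampling only approximates the code of a region) computes such a code directly.

        import Data.List (nub)

        type Stimulus = [Double]
        data Neuron   = Neuron { weights :: [Double], bias :: Double }

        -- Neuron i fires on stimulus x exactly when  w_i . x + b_i > 0.
        codeword :: [Neuron] -> Stimulus -> [Int]
        codeword neurons x =
          [ i | (i, Neuron w b) <- zip [1 ..] neurons
              , sum (zipWith (*) w x) + b > 0 ]

        -- The combinatorial code: all firing patterns seen over the sampled stimuli.
        hyperplaneCode :: [Neuron] -> [Stimulus] -> [[Int]]
        hyperplaneCode neurons = nub . map (codeword neurons)

        main :: IO ()
        main = mapM_ print (hyperplaneCode ns grid)
          where
            ns   = [Neuron [1, 0] 0, Neuron [0, 1] 0, Neuron [1, 1] (-1)]
            grid = [ [x, y] | x <- [-1, -0.5 .. 1], y <- [-1, -0.5 .. 1] ]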