
    A Survey of Binary Covering Arrays

    Binary covering arrays of strength t are 0–1 matrices having the property that for each t columns and each of the possible 2^t sequences of t 0's and 1's, there exists a row having that sequence in that set of t columns. Covering arrays are an important tool in certain applications, for example, in software testing. In these applications, the number of columns of the matrix is dictated by the application, and it is desirable to have a covering array with a small number of rows. Here we survey some of what is known about the existence of binary covering arrays and methods of producing them, including both explicit constructions and search techniques.
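
    The defining property lends itself to a direct brute-force check. A minimal Python sketch, illustrative only and not taken from the survey (the function name and exhaustive enumeration are my own choices):

        from itertools import combinations, product

        def is_covering_array(matrix, t):
            """Check the strength-t covering property of a 0-1 matrix.

            For every choice of t columns, every one of the 2^t binary
            t-tuples must appear in some row restricted to those columns.
            """
            if not matrix:
                return False
            num_cols = len(matrix[0])
            all_patterns = set(product((0, 1), repeat=t))
            for cols in combinations(range(num_cols), t):
                seen = {tuple(row[c] for c in cols) for row in matrix}
                if seen != all_patterns:
                    return False
            return True

        # Example: a strength-2 covering array on 3 columns with only 4 rows.
        ca = [
            [0, 0, 0],
            [0, 1, 1],
            [1, 0, 1],
            [1, 1, 0],
        ]
        print(is_covering_array(ca, 2))  # True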

    OLLIE: Derivation-based Tensor Program Optimizer

    Boosting the runtime performance of deep neural networks (DNNs) is critical due to their wide adoption in real-world tasks. Existing approaches to optimizing the tensor algebra expression of a DNN only consider expressions representable by a fixed set of predefined operators, missing possible optimization opportunities between general expressions. We propose OLLIE, the first derivation-based tensor program optimizer. OLLIE optimizes tensor programs by leveraging transformations between general tensor algebra expressions, enabling a significantly larger expression search space that includes those supported by prior work as special cases. OLLIE uses a hybrid derivation-based optimizer that effectively combines explorative and guided derivations to quickly discover highly optimized expressions. Evaluation on seven DNNs shows that OLLIE can outperform existing optimizers by up to 2.73× (1.46× on average) on an A100 GPU and by up to 2.68× (1.51× on average) on a V100 GPU.

    The Traveling Salesman Problem

    This paper presents a self-contained introduction to algorithmic and computational aspects of the traveling salesman problem and of related problems, along with their theoretical prerequisites, as seen from the point of view of an operations researcher who wants to solve practical problem instances. Extensive computational results are reported for most of the algorithms described. Optimal solutions are reported for instances with sizes up to several thousand nodes, as well as heuristic solutions with provably very high quality for larger instances.
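
    As a concrete example of the kind of local-improvement heuristic such surveys cover (not a method attributed to this paper), a minimal 2-opt sketch in Python; the tour representation and distance function are my own assumptions:

        import math

        def tour_length(tour, dist):
            """Total length of a closed tour under a distance function."""
            return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

        def two_opt(tour, dist):
            """Repeatedly reverse tour segments while doing so shortens the tour."""
            improved = True
            while improved:
                improved = False
                for i in range(1, len(tour) - 1):
                    for j in range(i + 1, len(tour)):
                        candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                        if tour_length(candidate, dist) < tour_length(tour, dist):
                            tour, improved = candidate, True
            return tour

        # Example with points in the plane and Euclidean distances.
        points = [(0, 0), (3, 0), (0, 4), (3, 4), (1, 2)]
        dist = lambda a, b: math.dist(points[a], points[b])
        print(two_opt(list(range(len(points))), dist))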

    Algorithmic Solutions for Combinatorial Problems in Resource Management of Manufacturing Environments

    This thesis studies the use of heuristic algorithms in a number of combinatorial problems that occur in various resource-constrained environments. Such problems occur, for example, in manufacturing, where a restricted number of resources (tools, machines, feeder slots) are needed to perform some operations. Many of these problems turn out to be computationally intractable, and heuristic algorithms are used to provide efficient, yet sub-optimal solutions. The main goal of the present study is to build upon existing methods to create new heuristics that provide improved solutions for some of these problems. All of these problems occur in practice, and one of the motivations of our study was the request for improvements from industrial sources.

    We approach three different resource-constrained problems. The first is the tool switching and loading problem, which occurs especially in the assembly of printed circuit boards. This problem has to be solved when an efficient, yet small primary storage is used to access resources (tools) from a less efficient (but unlimited) secondary storage area. We study various forms of the problem and provide improved heuristics for its solution.

    Second, the nozzle assignment problem is concerned with selecting a suitable set of vacuum nozzles for the arms of a robotic assembly machine. It turns out that this is a specialized formulation of the MINMAX resource allocation formulation of the apportionment problem, and it can be solved efficiently and optimally. We construct an exact algorithm specialized for the nozzle selection and provide a proof of its optimality.

    Third, the problem of feeder assignment and component tape construction occurs when electronic components are inserted and certain component types cause tape movement delays that can significantly impact the efficiency of printed circuit board assembly. Here, careful selection of component slots in the feeder improves the tape movement speed. We provide a formal proof that this problem is of the same complexity as the turnpike problem (a well-studied geometric optimization problem), and provide a heuristic algorithm for this problem.
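
    The abstract does not spell out its heuristics, but for orientation, the classical "keep tool needed soonest" (KTNS) rule for the tool switching problem with a fixed job sequence can be sketched as follows. This is a generic textbook rule, not the thesis's method; the data layout and function names are my own assumptions:

        def ktns_switches(jobs, capacity):
            """Count tool switches under the keep-tool-needed-soonest rule.

            jobs: list of sets; jobs[i] is the set of tools required by job i
                  (jobs are processed in the given order).
            capacity: number of tool slots in the primary magazine.
            Assumes each job needs at most `capacity` tools.
            """
            loaded = set()
            switches = 0
            for i, need in enumerate(jobs):
                for tool in need:
                    if tool in loaded:
                        continue
                    if len(loaded) >= capacity:
                        # Evict the loaded tool (not needed by the current job)
                        # whose next use lies furthest in the future.
                        def next_use(t):
                            for j in range(i + 1, len(jobs)):
                                if t in jobs[j]:
                                    return j
                            return len(jobs)  # never needed again
                        victim = max(loaded - need, key=next_use)
                        loaded.remove(victim)
                        switches += 1
                    loaded.add(tool)
            return switches

        # Three jobs, magazine of size 2: two switches are needed.
        print(ktns_switches([{1, 2}, {2, 3}, {1, 3}], 2))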

    Algorithms for Circuit Sizing in VLSI Design

    One of the key problems in the physical design of computer chips, also known as integrated circuits, consists of choosing a physical layout for the logic gates and memory circuits (registers) on the chip. The layouts have a high influence on the power consumption and area of the chip and the delay of signal paths. A discrete set of predefined layouts for each logic function and register type with different physical properties is given by a library. One of the most influential characteristics of a circuit defined by the layout is its size.

    In this thesis we present new algorithms for the problem of choosing sizes for the circuits and its continuous relaxation, and evaluate these in theory and practice. A popular approach is based on Lagrangian relaxation and projected subgradient methods. We show that seemingly heuristic modifications that have been proposed for this approach can be theoretically justified by applying the well-known multiplicative weights algorithm. Subsequently, we propose a new model for the sizing problem as a min-max resource sharing problem. In our context, power consumption and signal delays are represented by resources that are distributed to customers. Under certain assumptions we obtain a polynomial time approximation for the continuous relaxation of the sizing problem that improves over the Lagrangian relaxation based approach.

    The new resource sharing algorithm has been implemented as part of the BonnTools software package, which is developed at the Research Institute for Discrete Mathematics at the University of Bonn in cooperation with IBM. Our experiments on the ISPD 2013 benchmarks and state-of-the-art microprocessor designs provided by IBM illustrate that the new algorithm exhibits more stable convergence behavior compared to a Lagrangian relaxation based algorithm. Additionally, better timing and reduced power consumption were achieved on almost all instances.

    A subproblem of the new algorithm consists of finding sizes minimizing a weighted sum of power consumption and signal delays. We describe a method that approximates the continuous relaxation of this problem in polynomial time under certain assumptions. For the discrete problem we provide a fully polynomial approximation scheme under certain assumptions on the topology of the chip.

    Finally, we present a new algorithm for timing-driven optimization of registers. Their sizes and locations on a chip are usually determined during the clock network design phase, and remain mostly unchanged afterwards, although the timing criticalities on which they were based can change. Our algorithm permutes register positions and sizes within so-called clusters without impairing the clock network, such that it can be applied late in a design flow. Under mild assumptions, our algorithm finds an optimal solution which maximizes the worst cluster slack. It is implemented as part of the BonnTools and improves timing of registers on state-of-the-art microprocessor designs by up to 7.8% of design cycle time.
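
    The multiplicative weights algorithm mentioned above can be illustrated with a generic sketch. This is the textbook update rule, not the thesis's implementation; the loss values and step size are placeholders:

        def multiplicative_weights(losses, eta=0.1):
            """Generic multiplicative weights update over a sequence of loss vectors.

            losses: list of rounds; each round is a list of losses in [0, 1],
                    one per expert/resource.
            eta: step size controlling how aggressively weights are reduced.
            Returns the final normalized weights.
            """
            n = len(losses[0])
            weights = [1.0] * n
            for round_losses in losses:
                weights = [w * (1.0 - eta * l) for w, l in zip(weights, round_losses)]
            total = sum(weights)
            return [w / total for w in weights]

        # Two experts over three rounds; the second expert incurs less loss
        # and therefore ends up with the larger weight.
        print(multiplicative_weights([[0.9, 0.1], [0.8, 0.2], [0.7, 0.0]]))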

    Decomposition techniques for large scale stochastic linear programs

    Stochastic linear programming is an effective and often used technique for incorporating uncertainties about future events into decision making processes. Stochastic linear programs tend to be significantly larger than other types of linear programs and generally require sophisticated decomposition solution procedures. Detailed algorithms based upon Dantzig-Wolfe and L-Shaped decomposition are developed and implemented. These algorithms allow for solutions to within an arbitrary tolerance on the gap between the lower and upper bounds on a problem's objective function value. Special procedures and implementation strategies are presented that enable many multi-period stochastic linear programs to be solved with two-stage, instead of nested, decomposition techniques. Consequently, a broad class of large scale problems, with tens of millions of constraints and variables, can be solved on a personal computer.

    Myopic decomposition algorithms based upon a shortsighted view of the future are also developed. Although unable to guarantee an arbitrary solution tolerance, myopic decomposition algorithms may yield very good solutions in a fraction of the time required by Dantzig-Wolfe/L-Shaped decomposition based algorithms.

    In addition, derivations are given for statistics, based upon Mahalanobis squared distances, that can be used to provide measures for a random sample's effectiveness in approximating a parent distribution. Results and analyses are provided for the applications of the decomposition procedures and sample effectiveness measures to a multi-period market investment model.
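
    For context, and not taken from the thesis itself, the standard two-stage stochastic linear program with recourse that Dantzig-Wolfe and L-Shaped decomposition target can be written as

        \min_{x} \; c^{\top} x + \mathbb{E}_{\xi}\!\left[ Q(x, \xi) \right]
        \quad \text{s.t.} \quad A x = b, \; x \ge 0,

    where the second-stage (recourse) value of a first-stage decision x under scenario \xi is

        Q(x, \xi) = \min_{y} \; q(\xi)^{\top} y
        \quad \text{s.t.} \quad W y = h(\xi) - T(\xi) x, \; y \ge 0,

    and the Mahalanobis squared distance of a point x from a distribution with mean \mu and covariance \Sigma, the standard quantity underlying the sample-effectiveness statistics mentioned above, is

        d^{2}(x) = (x - \mu)^{\top} \Sigma^{-1} (x - \mu).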

    TOPEX/POSEIDON Science Investigations Plan

    TOPEX/POSEIDON is a satellite mission that will use the technique of radar altimetry to make precise measurements of sea level with a primary goal of studying the global ocean circulation. The mission represents the culmination of the development of satellite altimetry over the past two decades. The major thrust of the mission is a commitment to measuring sea level with an unprecedented accuracy such that the small-amplitude, basinwide sea level changes that bear significant effects on global change can be detected. The mission will be conducted jointly by the United States National Aeronautics and Space Administration and the French space agency, Centre National d'Etudes Spatiales. The 3- to 5-year mission will study the long-term mean and variability of ocean circulation. This document provides brief descriptions of the planned investigations as well as a summary of the major elements of the mission.

    1974/1975 UCI General Catalogue

    General catalogue for the academic year 1974-1975