
    LPSPEC: A Language for Representing Linear Programs

    Information Systems Working Papers Series

    COMPOSITION RULES FOR BUILDING LINEAR PROGRAMMING MODELS FROM COMPONENT MODELS

    This paper describes some rules for combining component models into complete linear programs. The objective is to lay the foundations for systems that give users flexibility in designing new models and reusing old ones, while at the same time providing better documentation and better diagnostics than currently available. The results presented here rely on two different sets of properties of LP models: first, the syntactic relationships among the indices that define the rows and columns of the LP, and second, the meanings attached to these indices. These two kinds of information allow us to build a complete algebraic statement of a model from a collection of components provided by the model builder.
    Information Systems Working Papers Series
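    A minimal sketch of the idea, under assumed data structures (the class names, index sets, and constraint strings below are hypothetical, not the paper's notation): two component models are merged by checking that shared index names agree and concatenating their variables and constraints into one model statement.

```python
# Hypothetical sketch: composing LP component models by matching shared index sets.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    index_sets: dict        # index name -> members, e.g. {"plant": ["P1", "P2"]}
    variables: list         # (variable name, tuple of index names it ranges over)
    constraints: list = field(default_factory=list)  # algebraic strings

def compose(a: Component, b: Component) -> Component:
    """Merge two components; index sets with the same name must agree element-wise."""
    merged_indices = dict(a.index_sets)
    for idx, members in b.index_sets.items():
        if idx in merged_indices and merged_indices[idx] != members:
            raise ValueError(f"index '{idx}' has conflicting definitions")
        merged_indices.setdefault(idx, members)
    return Component(
        name=f"{a.name}+{b.name}",
        index_sets=merged_indices,
        variables=a.variables + b.variables,
        constraints=a.constraints + b.constraints,
    )

if __name__ == "__main__":
    production = Component(
        "production",
        index_sets={"plant": ["P1", "P2"], "product": ["A", "B"]},
        variables=[("make", ("plant", "product"))],
        constraints=["sum(product) make[plant, product] <= capacity[plant]"],
    )
    shipping = Component(
        "shipping",
        index_sets={"plant": ["P1", "P2"], "market": ["M1", "M2", "M3"]},
        variables=[("ship", ("plant", "market"))],
        constraints=["sum(market) ship[plant, market] <= sum(product) make[plant, product]"],
    )
    full = compose(production, shipping)
    print(full.name, sorted(full.index_sets), [v for v, _ in full.variables])
```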

    THE SCIENCE AND ART OF FORMULATING LINEAR PROGRAMS

    This paper describes the philosophy underlying the development of an intelligent system to assist in the formulation of large linear programs. The LPFORM system allows users to state their problem using a graphical rather than an algebraic representation. A major objective of the system is to automate the bookkeeping involved in the development of large systems. It has expertise related to the structure of many of the common forms of linear programs (e.g., transportation, product-mix, and blending problems) and to how these prototypes may be combined into more complex systems. Our approach involves characterizing the common forms of LP problems according to whether they are transformations in place, time, or form. We show how LPFORM uses knowledge about the structure and meaning of linear programs to construct a correct tableau. Using the symbolic capabilities of artificial intelligence languages, we can manipulate and analyze some properties of the LP prior to actually generating a matrix.
    Information Systems Working Papers Series
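    As a rough illustration of one such prototype (not the LPFORM system itself), the sketch below expands a small transportation problem, a "transformation in place", from named supply and demand nodes into an LP tableau; all data values are invented for the example.

```python
# Illustrative sketch only: a transportation prototype expanded into an LP tableau.
# Rows are supply and demand constraints; columns are the shipment variables x[i, j].
supplies = {"P1": 40, "P2": 60}            # hypothetical supply nodes and capacities
demands = {"M1": 30, "M2": 50, "M3": 20}   # hypothetical demand nodes and requirements

columns = [(i, j) for i in supplies for j in demands]  # one variable per supply-demand arc

rows = []
# Supply rows: sum_j x[i, j] <= supply[i]
for i, s in supplies.items():
    rows.append(([1 if col[0] == i else 0 for col in columns], "<=", s))
# Demand rows: sum_i x[i, j] >= demand[j]
for j, d in demands.items():
    rows.append(([1 if col[1] == j else 0 for col in columns], ">=", d))

for coeffs, sense, rhs in rows:
    print(coeffs, sense, rhs)
```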

    Design of a Graphics Interface for Linear Programming

    Information Systems Working Papers Series

    Superfluid, Mott-Insulator, and Mass-Density-Wave Phases in the One-Dimensional Extended Bose-Hubbard Model

    We use the finite-size density-matrix renormalization group (FSDMRG) method to obtain the phase diagram of the one-dimensional ($d = 1$) extended Bose-Hubbard model at density $\rho = 1$ in the $U$-$V$ plane, where $U$ and $V$ are, respectively, the onsite and nearest-neighbor interactions. The phase diagram comprises three phases: superfluid (SF), Mott insulator (MI), and mass density wave (MDW). For small values of $U$ and $V$, we find a reentrant SF-MI-SF phase transition. For intermediate values of the interactions, the SF phase is sandwiched between the MI and MDW phases, with continuous SF-MI and SF-MDW transitions. We show, by a detailed finite-size scaling analysis, that the MI-SF transition is of the Kosterlitz-Thouless (KT) type, whereas the MDW-SF transition has both KT and two-dimensional Ising characters. For large values of $U$ and $V$ we find a direct, first-order MI-MDW transition. The MI-SF, MDW-SF, and MI-MDW phase boundaries join at a bicritical point at $(U, V) = (8.5 \pm 0.05, 4.75 \pm 0.05)$.
    Comment: 10 pages, 15 figures
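    The abstract does not spell out the Hamiltonian; for orientation, the standard form of the one-dimensional extended Bose-Hubbard model with onsite interaction $U$ and nearest-neighbor interaction $V$ is:

```latex
% Standard 1D extended Bose-Hubbard Hamiltonian. b_i, b_i^\dagger are bosonic
% operators and n_i = b_i^\dagger b_i; the hopping amplitude t is not named in
% the abstract above.
H = -t \sum_{i} \left( b_i^{\dagger} b_{i+1} + \text{h.c.} \right)
    + \frac{U}{2} \sum_{i} n_i \left( n_i - 1 \right)
    + V \sum_{i} n_i \, n_{i+1}
```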

    White-Box Transformers via Sparse Rate Reduction

    In this paper, we contend that the objective of representation learning is to compress and transform the distribution of the data, say sets of tokens, towards a mixture of low-dimensional Gaussian distributions supported on incoherent subspaces. The quality of the final representation can be measured by a unified objective function called sparse rate reduction. From this perspective, popular deep networks such as transformers can be naturally viewed as realizing iterative schemes to optimize this objective incrementally. Particularly, we show that the standard transformer block can be derived from alternating optimization on complementary parts of this objective: the multi-head self-attention operator can be viewed as a gradient descent step to compress the token sets by minimizing their lossy coding rate, and the subsequent multi-layer perceptron can be viewed as attempting to sparsify the representation of the tokens. This leads to a family of white-box transformer-like deep network architectures which are mathematically fully interpretable. Despite their simplicity, experiments show that these networks indeed learn to optimize the designed objective: they compress and sparsify representations of large-scale real-world vision datasets such as ImageNet, and achieve performance very close to thoroughly engineered transformers such as ViT. Code is at \url{https://github.com/Ma-Lab-Berkeley/CRATE}.
    Comment: 33 pages, 11 figures
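    As a rough illustration of the alternating-optimization view described above, the sketch below implements one such layer with generic numpy operations: an attention-like step that aggregates tokens within learned head subspaces (standing in for the compression step), followed by a single ISTA-style proximal gradient update for a LASSO objective (standing in for the sparsification step). The shapes, the dictionary D, and the subspace bases U are invented for the example; this is not the CRATE implementation.

```python
# Schematic sketch of a "compress then sparsify" layer; hypothetical shapes and weights.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, dim, n_heads = 8, 16, 4

Z = rng.normal(size=(n_tokens, dim))                  # token representations
U = rng.normal(size=(n_heads, dim, dim // n_heads))   # per-head subspace bases
D = rng.normal(size=(dim, dim))                       # dictionary for sparse coding

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compress(Z, U, step=0.1):
    """Attention-like step: for each head, project tokens into its subspace,
    aggregate them with similarity weights, and map back to the full dimension."""
    out = np.zeros_like(Z)
    for k in range(U.shape[0]):
        P = Z @ U[k]                                   # project into head subspace
        A = softmax(P @ P.T / np.sqrt(P.shape[1]))     # token-token similarity weights
        out += (A @ P) @ U[k].T                        # aggregate and map back
    return Z + step * out                              # residual update

def sparsify(Z, D, step=0.1, lam=0.5):
    """One ISTA step toward a sparse code X with X @ D approximating Z
    (generic LASSO objective, standing in for the paper's sparsification step)."""
    X = Z @ D.T                                        # warm start: correlate with dictionary
    grad = (X @ D - Z) @ D.T                           # gradient of 0.5 * ||X D - Z||^2
    X = X - step * grad
    return np.sign(X) * np.maximum(np.abs(X) - step * lam, 0.0)  # soft-threshold (prox of L1)

Z1 = sparsify(compress(Z, U), D)
print("output shape:", Z1.shape, "zero fraction:", float(np.mean(Z1 == 0.0)))
```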