LPSPEC: A Language for Representing Linear Programs
Information Systems Working Papers Series
COMPOSITION RULES FOR BUILDING LINEAR PROGRAMMING MODELS FROM COMPONENT MODELS
This paper describes some rules for combining component models into complete linear programs. The objective is to lay the foundations for systems that give users flexibility in designing new models and reusing old ones while, at the same time, providing better documentation and better diagnostics than are currently available. The results presented here rely on two different sets of properties of LP models: first, the syntactic relationships among the indices that define the rows and columns of the LP, and second, the meanings attached to these indices. These two kinds of information allow us to build a complete algebraic statement of a model from a collection of components provided by the model builder.
Information Systems Working Papers Series
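As a rough illustration of the composition idea, here is a minimal sketch, not the paper's system: the Component class, the compose function, and the index-set names are all invented for this example. Two model fragments are joined on the index sets they share, with a syntactic check that shared sets agree, which is the kind of rule the paper formalizes.

```python
# Hypothetical sketch of index-based model composition (names are assumptions).
from dataclasses import dataclass, field

@dataclass
class Component:
    """A model fragment: constraints defined over named index sets."""
    name: str
    index_sets: dict                                  # e.g. {"plants": ["p1", "p2"]}
    constraints: list = field(default_factory=list)   # rows, keyed by index tuples

def compose(a: Component, b: Component) -> Component:
    """Join two components on shared index sets; disjoint sets pass through.
    Shared sets must agree element-for-element (a purely syntactic check)."""
    for s in set(a.index_sets) & set(b.index_sets):
        if a.index_sets[s] != b.index_sets[s]:
            raise ValueError(f"index set {s!r} differs between components")
    merged = {**a.index_sets, **b.index_sets}
    return Component(f"{a.name}+{b.name}", merged, a.constraints + b.constraints)

production = Component("production", {"plants": ["p1", "p2"]})
shipping = Component("shipping", {"plants": ["p1", "p2"], "markets": ["m1"]})
print(compose(production, shipping).index_sets)
```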
THE SCIENCE AND ART OF FORMULATING LINEAR PROGRAMS
This paper describes the philosophy underlying the development of an intelligent
system to assist in the formulation of large linear programs. The LPFORM system allows
users to state their problem using a graphical rather than an algebraic representation.
A major objective of the system is to automate the bookkeeping involved in the
development of large systems. It has expertise related to the structure of many of the
common forms of linear programs (e.g. transportation, product-mix and blending
problems) and of how these prototypes may be combined into more complex systems.
Our approach involves characterizing the common forms of LP problems according to
whether they are transformations in place, time or form. We show how LPFORM uses
knowledge about the structure and meaning of linear programs to construct a correct
tableau. Using the symbolic capabilities of artificial intelligence languages, we can
manipulate and analyze some properties of the LP prior to actually generating a matrix.
Information Systems Working Papers Series
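To make the prototype idea concrete, here is a minimal sketch, assuming a transportation prototype with invented source and sink names; this is not LPFORM's interface. The tableau gets one column per (source, sink) route and one row per supply or demand constraint.

```python
# Illustrative sketch only: building a transportation-prototype tableau
# from symbolic index sets (names and structure are assumptions).
import numpy as np

def transportation_tableau(sources, sinks):
    """One column per (source, sink) route; one supply row per source and
    one demand row per sink, with +1 entries marking route membership."""
    routes = [(s, d) for s in sources for d in sinks]
    A = np.zeros((len(sources) + len(sinks), len(routes)))
    for j, (s, d) in enumerate(routes):
        A[sources.index(s), j] = 1                # supply row of source s
        A[len(sources) + sinks.index(d), j] = 1   # demand row of sink d
    return A, routes

A, routes = transportation_tableau(["plant1", "plant2"], ["cityA", "cityB"])
print(routes)
print(A)
```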
Design of a Graphics Interface for Linear Programming
Information Systems Working Papers Series
Representation Learning via Manifold Flattening and Reconstruction
This work proposes an algorithm for explicitly constructing a pair of neural
networks that linearize and reconstruct an embedded submanifold, from finite
samples of this manifold. The resulting neural networks, called Flattening
Networks (FlatNet), are theoretically interpretable, computationally feasible
at scale, and generalize well to test data, a balance not typically found in
manifold-based learning methods. We present empirical results and comparisons
to other models on synthetic high-dimensional manifold data and 2D image data.
Our code is publicly available.
Comment: 44 pages, 19 figures
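As a rough intuition for the flatten/reconstruct pair, here is a stand-in example, not the paper's FlatNet construction: for data on a linear submanifold, PCA already yields an explicit flattening map and its reconstruction inverse; FlatNet generalizes this to curved embedded submanifolds learned from finite samples.

```python
# Stand-in example (linear special case, not FlatNet): PCA gives an explicit
# flattening map f and reconstruction map g for data on a plane in R^10.
import numpy as np

rng = np.random.default_rng(0)
basis = np.linalg.qr(rng.normal(size=(10, 2)))[0]      # orthonormal 10x2 basis
X = rng.normal(size=(500, 2)) @ basis.T + 0.01 * rng.normal(size=(500, 10))

mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
V = Vt[:2].T                                           # top-2 principal directions

flatten = lambda x: (x - mean) @ V                     # f: R^10 -> R^2
reconstruct = lambda z: z @ V.T + mean                 # g: R^2 -> R^10

err = np.linalg.norm(X - reconstruct(flatten(X))) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.4f}")     # small: the manifold is flat
```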
Superfluid, Mott-Insulator, and Mass-Density-Wave Phases in the One-Dimensional Extended Bose-Hubbard Model
We use the finite-size density-matrix-renormalization-group (FSDMRG) method
to obtain the phase diagram of the one-dimensional ($d = 1$) extended
Bose-Hubbard model for density $\rho = 1$ in the $U$-$V$ plane, where $U$ and $V$
are, respectively, the onsite and nearest-neighbor interactions. The phase diagram
comprises three phases: Superfluid (SF), Mott Insulator (MI), and Mass Density
Wave (MDW). For small values of $U$ and $V$, we get a reentrant SF-MI-SF phase
transition. For intermediate values of the interactions, the SF phase is sandwiched
between the MI and MDW phases, with continuous SF-MI and SF-MDW transitions. We
show, by a detailed finite-size scaling analysis, that the MI-SF transition is
of Kosterlitz-Thouless (KT) type, whereas the MDW-SF transition has both KT and
two-dimensional-Ising characters. For large values of $U$ and $V$, we get a
direct, first-order MI-MDW transition. The MI-SF, MDW-SF, and MI-MDW phase
boundaries join at a bicritical point at (…).
Comment: 10 pages, 15 figures
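For reference, the one-dimensional extended Bose-Hubbard model referred to above is conventionally written with hopping amplitude $t$, onsite interaction $U$, and nearest-neighbor interaction $V$; the paper's exact normalization may differ.

```latex
% Standard 1D extended Bose-Hubbard Hamiltonian (conventional form):
H = -t \sum_{i} \left( b_i^{\dagger} b_{i+1} + \mathrm{h.c.} \right)
    + \frac{U}{2} \sum_{i} n_i \left( n_i - 1 \right)
    + V \sum_{i} n_i \, n_{i+1}
% where b_i^\dagger creates a boson on site i and n_i = b_i^\dagger b_i.
```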
The BCG World Atlas: A Database of Global BCG Vaccination Policies and Practices
Madhu Pai and colleagues introduce the BCG World Atlas, an open-access,
user-friendly Web site for TB clinicians to discern global BCG vaccination policies
and practices and improve the care of their patients
White-Box Transformers via Sparse Rate Reduction
In this paper, we contend that the objective of representation learning is to
compress and transform the distribution of the data, say sets of tokens,
towards a mixture of low-dimensional Gaussian distributions supported on
incoherent subspaces. The quality of the final representation can be measured
by a unified objective function called sparse rate reduction. From this
perspective, popular deep networks such as transformers can be naturally viewed
as realizing iterative schemes to optimize this objective incrementally.
Particularly, we show that the standard transformer block can be derived from
alternating optimization on complementary parts of this objective: the
multi-head self-attention operator can be viewed as a gradient descent step to
compress the token sets by minimizing their lossy coding rate, and the
subsequent multi-layer perceptron can be viewed as attempting to sparsify the
representation of the tokens. This leads to a family of white-box
transformer-like deep network architectures which are mathematically fully
interpretable. Despite their simplicity, experiments show that these networks
indeed learn to optimize the designed objective: they compress and sparsify
representations of large-scale real-world vision datasets such as ImageNet, and
achieve performance very close to thoroughly engineered transformers such as
ViT. Code is at \url{https://github.com/Ma-Lab-Berkeley/CRATE}.
Comment: 33 pages, 11 figures
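The sketch below is a highly simplified, hypothetical rendering of the alternating scheme described above, not the CRATE code at the linked repository: the projection W, the dictionary D, the step size, and the threshold are all invented for illustration. One "layer" applies a similarity-weighted compression update followed by an ISTA-style soft-thresholding step that sparsifies the token representations.

```python
# Simplified sketch of compress-then-sparsify layers (assumptions throughout).
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm; the sparsification (ISTA) step."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def layer(Z, W, D, step=0.5, lam=0.1):
    """Z: (n_tokens, dim). One alternating update on the two objective parts."""
    Q = Z @ W                                    # project tokens to a subspace
    attn = np.exp(Q @ Q.T / np.sqrt(Q.shape[1]))
    attn /= attn.sum(axis=1, keepdims=True)      # similarity-weighted mixing,
    Z = Z + step * (attn @ Z - Z)                # a gradient-like compression step
    code = soft_threshold(Z @ D, lam)            # sparse codes w.r.t. dictionary D
    return code @ D.T                            # map back to token space

rng = np.random.default_rng(0)
Z = rng.normal(size=(16, 32))                    # 16 tokens, 32-dim
W = rng.normal(size=(32, 8)) / np.sqrt(32)
D = np.linalg.qr(rng.normal(size=(32, 32)))[0]   # orthogonal dictionary
for _ in range(4):
    Z = layer(Z, W, D)
print(f"fraction of zero code entries: {(np.abs(Z @ D) < 1e-8).mean():.2f}")
```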