Constraint Generation for the Jeeves Privacy Language
Our goal is to present a completed semantic formalization of the Jeeves privacy language evaluation engine, based on the original Jeeves constraint semantics defined by Yang et al. at POPL '12, but sufficiently strong to support a first complete implementation thereof. Specifically, we present and implement a syntactically and semantically completed concrete syntax for Jeeves that meets the example criteria given in the paper. We also present and implement the associated translation to J, here formulated as a completed, decompositional operational semantics. Finally, we present an enhanced, decompositional, non-substitutional operational semantic formulation and implementation of the J evaluation engine (the dynamic semantics) with privacy constraints. In particular, we show how the constraints can be implemented as a monad, and evaluation can be defined as a monadic operation over the constraint environment. The implementations are all completed in Haskell, exploiting its almost one-to-one ability to transparently reflect the underlying semantic reasoning when formalized this way. In practice, we have applied the "literate" programming facility of Haskell to this report, a feature that enables the LaTeX source to also serve as the source code for the implementation (treating the report parts as comment regions). The implementation is published as a GitHub project.
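The abstract's central idea is that constraint collection can itself be structured as a monad, with evaluation a monadic operation over the constraint environment. The report's implementation is in Haskell; purely as a language-shifted sketch of that idea (all names here are illustrative, not taken from the report), a state-monad-like computation threading a constraint environment might look like:

```python
# Sketch only: a computation that, given a constraint environment (a list
# of constraints), returns a value plus a possibly extended environment.
# The names ConstraintM, unit, emit are hypothetical, not from the report.

class ConstraintM:
    def __init__(self, run):
        self.run = run  # run : env -> (value, env)

    def bind(self, f):
        """Monadic bind: run self, feed its value to f, thread the
        (possibly extended) constraint environment through."""
        def run(env):
            value, env2 = self.run(env)
            return f(value).run(env2)
        return ConstraintM(run)

def unit(value):
    """Monadic return: yield a value, leave the environment unchanged."""
    return ConstraintM(lambda env: (value, env))

def emit(constraint):
    """Record a privacy constraint in the environment."""
    return ConstraintM(lambda env: (None, env + [constraint]))

# Example: evaluate a guarded value, emitting its guard as a constraint.
prog = emit("level >= secret_threshold").bind(lambda _: unit(42))
value, constraints = prog.run([])
```

The point of the structure is the one the abstract makes: the evaluator never manipulates the constraint set directly; it only composes `bind`, `unit`, and `emit`, so constraint bookkeeping is confined to the monad.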
Machine-learning-aided warm-start of constraint generation methods for online mixed-integer optimization
Mixed Integer Linear Programs (MILPs) are well known to be NP-hard in
general. Even though pure optimization-based methods, such as constraint
generation, are guaranteed to provide an optimal solution given enough time,
their use in online applications remains a great challenge due to their
typically excessive time requirements. To alleviate this computational
burden, several machine learning techniques have been proposed in the
literature, using the information provided by previously solved MILP
instances. Unfortunately, these techniques produce infeasible or suboptimal
solutions for a non-negligible percentage of instances.
By linking mathematical optimization and machine learning, this paper
proposes a novel approach that speeds up the traditional constraint generation
method, preserving feasibility and optimality guarantees. In particular, we
first identify offline the so-called invariant constraint set of past MILP
instances. We then train (also offline) a machine learning method to learn an
invariant constraint set as a function of the problem parameters of each
instance. Next, we predict online an invariant constraint set of the new unseen
MILP application and use it to initialize the constraint generation method.
This warm-started strategy significantly reduces the number of iterations
needed to reach optimality and, therefore, the computational burden of
solving each MILP problem online. Importantly, the proposed methodology
inherits the feasibility and optimality guarantees of the traditional
constraint generation method. The computational performance of the proposed
approach is quantified through synthetic and real-life MILP applications.
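The loop the abstract describes can be sketched in miniature. The code below is illustrative only (the brute-force sub-solver and the tiny instance are mine, not the paper's): a restricted master problem is solved over the active constraints, violated constraints from the full set are added, and the loop repeats until none are violated. Warm-starting with a predicted invariant constraint set shortens the loop without losing the guarantees, because termination still requires feasibility against the full set.

```python
from itertools import product

# Toy constraint-generation loop: maximize c.x over x in {0,1}^n subject
# to a.x <= b for every (a, b) in the full constraint set.

def solve_restricted(c, constraints, n):
    """Brute-force the restricted master problem (fine for tiny n)."""
    best = None
    for x in product((0, 1), repeat=n):
        if all(sum(ai * xi for ai, xi in zip(a, x)) <= b
               for a, b in constraints):
            val = sum(ci * xi for ci, xi in zip(c, x))
            if best is None or val > best[0]:
                best = (val, x)
    return best

def constraint_generation(c, all_constraints, n, warm_start=()):
    # Initialize with a (possibly ML-predicted) subset of constraints.
    # Feasibility and optimality are preserved because the candidate
    # solution is always checked against the full set before returning.
    active = list(warm_start)
    iterations = 0
    while True:
        iterations += 1
        val, x = solve_restricted(c, active, n)
        violated = [(a, b) for a, b in all_constraints
                    if sum(ai * xi for ai, xi in zip(a, x)) > b]
        if not violated:
            return val, x, iterations
        active.append(violated[0])

full = [((1, 1), 1)]  # the single constraint x0 + x1 <= 1
cold = constraint_generation((1, 1), full, n=2)
warm = constraint_generation((1, 1), full, n=2, warm_start=full)
```

Both runs reach the same optimum, but the warm start skips the iterations spent rediscovering the active constraint, which is the effect the paper scales up to realistic MILPs.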
A Conditional Random Field for Multiple-Instance Learning
We present MI-CRF, a conditional random field (CRF) model for multiple instance learning (MIL). MI-CRF models bags as nodes in a CRF with instances as their states. It combines discriminative unary instance classifiers with pairwise dissimilarity measures, and we show that both components improve classification performance. Unlike other approaches, MI-CRF considers all bags jointly during training as well as during testing. This makes it possible to classify test bags in an imputation setup. The parameters of MI-CRF are learned using constraint generation. Furthermore, we show that MI-CRF can incorporate previous MIL algorithms to improve on their results. MI-CRF obtains competitive results on five standard MIL datasets.
Constraint Generation Algorithm for the Minimum Connectivity Inference Problem
Given a hypergraph H, the Minimum Connectivity Inference problem asks for a
graph on the same vertex set as H with the minimum number of edges such that
the subgraph induced by every hyperedge of H is connected. This problem has
received a lot of attention in recent years, from both a theoretical and a
practical perspective, leading to several implemented approximation, greedy
and heuristic algorithms. Concerning exact algorithms, only Mixed Integer
Linear Programming (MILP) formulations have been experimented with, all
representing connectivity constraints by means of graph flows. In this work,
we investigate the efficiency of a constraint generation algorithm, where we
iteratively add cut constraints to a simple ILP until a feasible (and
optimal) solution is found. It turns out that our method is faster than the
previous best flow-based MILP algorithm on randomly generated instances,
which suggests that a constraint generation approach might also be useful
for other optimization problems dealing with connectivity constraints.
Finally, we present the results of an enumeration algorithm for the problem.
Comment: 16 pages, 4 tables, 1 figure
Type-Directed Program Synthesis and Constraint Generation for Library Portability
Fast numerical libraries have been a cornerstone of scientific computing for
decades, but this comes at a price. Programs may be tied to vendor specific
software ecosystems resulting in polluted, non-portable code. As we enter an
era of heterogeneous computing, there is an explosion in the number of
accelerator libraries required to harness specialized hardware. We need a
system that allows developers to exploit ever-changing accelerator libraries,
without over-specializing their code.
As we cannot know the behavior of future libraries ahead of time, this paper
develops a scheme that assists developers in matching their code to new
libraries, without requiring the source code for these libraries.
Furthermore, it can recover equivalent code from programs that use existing
libraries and automatically port them to new interfaces. It first uses program
synthesis to determine the meaning of a library, then maps the synthesized
description into generalized constraints which are used to search the program
for replacement opportunities to present to the developer.
We applied this approach to existing large applications from the scientific
computing and deep learning domains. Using our approach, we show speedups
ranging from 1.1x to over 10x in end-to-end performance when using
accelerator libraries.
Comment: Accepted to PACT 201
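The second half of the pipeline described above, mapping a synthesized library description into type constraints and searching the program for replacement sites, can be caricatured in a few lines. This sketch covers only the type-matching step (the synthesis of library meaning is elided), and every name in it is hypothetical:

```python
from typing import NamedTuple

class Signature(NamedTuple):
    name: str
    args: tuple   # argument types, e.g. ("matrix", "matrix")
    result: str

def matches(site: Signature, lib: Signature) -> bool:
    """A call site is a candidate for replacement when the library
    function accepts the same argument types and produces the same
    result type; the type variable "T" generalizes over any type."""
    if len(site.args) != len(lib.args):
        return False
    if lib.result not in ("T", site.result):
        return False
    return all(l in ("T", s) for s, l in zip(site.args, lib.args))

def replacement_opportunities(program_sites, library):
    """Search the program for call sites a library function could
    replace; candidates are presented to the developer, not applied."""
    return [(site.name, lib.name)
            for site in program_sites
            for lib in library if matches(site, lib)]

# Hypothetical program call sites and accelerator library signatures.
sites = [Signature("naive_matmul", ("matrix", "matrix"), "matrix"),
         Signature("log_msg", ("str",), "none")]
library = [Signature("accel_gemm", ("matrix", "matrix"), "matrix"),
           Signature("accel_copy", ("T",), "T")]
opportunities = replacement_opportunities(sites, library)
```

Type matching alone over-approximates (the generic `accel_copy` matches `log_msg` here), which is why the full system first pins down library behavior via program synthesis and then presents candidates to the developer rather than rewriting blindly.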