An Enhanced Features Extractor for a Portfolio of Constraint Solvers
Recent research has shown that a single arbitrarily efficient solver can be
significantly outperformed by a portfolio of possibly slower on-average
solvers. The solver selection is usually done by means of (un)supervised
learning techniques which exploit features extracted from the problem
specification. In this paper we present a useful and flexible framework that
is able to extract an extensive set of features from a Constraint
(Satisfaction/Optimization) Problem defined in possibly different modeling
languages: MiniZinc, FlatZinc or XCSP. We also report empirical results
showing that the performance obtained using these features is effective and
competitive with state-of-the-art CSP portfolio techniques.
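As a rough illustration of the kind of static features such a framework might compute (the function, feature names, and toy model below are hypothetical, not the paper's actual extractor), one could count variables and constraints in a FlatZinc-style model:

```python
def extract_static_features(flatzinc_model: str) -> dict:
    """Count a few simple static features of a FlatZinc-like model.

    Illustrative only: real portfolio feature extractors compute dozens
    of syntactic and constraint-graph features, not just these counts.
    """
    lines = [ln.strip() for ln in flatzinc_model.splitlines() if ln.strip()]
    n_vars = sum(1 for ln in lines if ln.startswith("var "))
    n_constraints = sum(1 for ln in lines if ln.startswith("constraint "))
    return {
        "n_vars": n_vars,
        "n_constraints": n_constraints,
        # Guard against division by zero for constraint-free models.
        "constraints_per_var": n_constraints / n_vars if n_vars else 0.0,
    }

model = "var 1..9: x;\nvar 1..9: y;\nconstraint int_lt(x, y);\n"
print(extract_static_features(model))
```

Feature vectors like this one are what the (un)supervised selector consumes when choosing a solver from the portfolio.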
Geometry of Policy Improvement
We investigate the geometry of optimal memoryless time-independent decision
making in relation to the amount of information that the acting agent has about
the state of the system. We show that the expected long-term reward, discounted
or per time step, is maximized by policies that randomize among at most k
actions whenever at most k world states are consistent with the agent's
observation. Moreover, we show that the expected reward per time step can be
studied in terms of the expected discounted reward. Our main tool is a
geometric version of the policy improvement lemma, which identifies a
polyhedral cone of policy changes in which the state value function increases
for all states.
Comment: 8 pages
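The flavor of the policy improvement lemma can be shown on a toy MDP (the two-state transition table and rewards below are invented for the example): the greedy update with respect to a policy's value function does not decrease the value of any state.

```python
GAMMA = 0.9
STATES, ACTIONS = [0, 1], [0, 1]
# P[s][a] = (next_state, reward); deterministic for simplicity.
P = {0: {0: (0, 0.0), 1: (1, 1.0)},
     1: {0: (0, 2.0), 1: (1, 0.0)}}

def evaluate(policy, iters=500):
    """Iterative policy evaluation for a deterministic policy."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        V = {s: P[s][policy[s]][1] + GAMMA * V[P[s][policy[s]][0]]
             for s in STATES}
    return V

def improve(policy):
    """One greedy policy-improvement step against the current values."""
    V = evaluate(policy)
    return {s: max(ACTIONS, key=lambda a: P[s][a][1] + GAMMA * V[P[s][a][0]])
            for s in STATES}

pi = {0: 0, 1: 1}          # a deliberately poor initial policy
pi_new = improve(pi)
V_old, V_new = evaluate(pi), evaluate(pi_new)
# The lemma's guarantee: improvement is monotone in every state.
assert all(V_new[s] >= V_old[s] for s in STATES)
print(V_old, V_new)
```

The paper's geometric version strengthens this classical statement by characterizing a polyhedral cone of policy *changes*, not just the greedy one, that increase the value in every state.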
Numerical modelling of disintegration of basin-scale internal waves in a tank filled with stratified water
We present the results of numerical experiments performed with the use of a fully non-linear non-hydrostatic numerical model to study the baroclinic response of a long narrow tank filled with stratified water to an initially tilted interface. Upon release, the system starts to oscillate with an eigenfrequency corresponding to basin-scale baroclinic gravitational seiches. Field observations suggest that the disintegration of basin-scale internal waves into packets of solitary waves, shear instabilities, billows and spots of mixed water is an important mechanism for the transfer of energy within stratified lakes. Laboratory experiments performed by D. A. Horn, J. Imberger and G. N. Ivey (JFM, 2001) reproduced several regimes, which include damped linear waves and solitary waves. The generation of billows and shear instabilities induced by the basin-scale wave was, however, not sufficiently studied. The numerical model developed here computes a variety of flows that were not observed with the experimental set-up. In particular, the model results showed that under conditions of low dissipation, the regimes of billows and supercritical flows may transform into a solitary wave regime. The obtained results can help in the interpretation of numerous observations of mixing processes in real lakes.
Self-Modification of Policy and Utility Function in Rational Agents
Any agent that is part of the environment it interacts with and has versatile
actuators (such as arms and fingers), will in principle have the ability to
self-modify -- for example by changing its own source code. As we continue to
create more and more intelligent agents, chances increase that they will learn
about this ability. The question is: will they want to use it? For example,
highly intelligent systems may find ways to change their goals to something
more easily achievable, thereby `escaping' the control of their designers. In
an important paper, Omohundro (2008) argued that goal preservation is a
fundamental drive of any intelligent system, since a goal is more likely to be
achieved if future versions of the agent strive towards the same goal. In this
paper, we formalise this argument in general reinforcement learning, and
explore situations where it fails. Our conclusion is that the self-modification
possibility is harmless if and only if the value function of the agent
anticipates the consequences of self-modifications and uses the current utility
function when evaluating the future.
Comment: Artificial General Intelligence (AGI) 201
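A minimal sketch of this distinction, with made-up outcomes and utilities: if the value function scores post-modification behavior with the *current* utility, replacing the utility with a trivially satisfiable one looks worthless; if it scores it with the *future* utility, "wireheading" looks optimal.

```python
# Toy numbers, invented for illustration; not the paper's formalism.
u_current = {"goal_achieved": 1.0, "idle": 0.0}
u_easy = {"goal_achieved": 0.0, "idle": 10.0}   # trivially satisfiable utility

# Outcome each action leads to, and the utility function active afterwards.
outcomes = {
    "work":        ("goal_achieved", u_current),
    "self_modify": ("idle",          u_easy),
}

def value(action, evaluate_with_current):
    """Score an action either with the current utility or the future one."""
    outcome, future_u = outcomes[action]
    u = u_current if evaluate_with_current else future_u
    return u[outcome]

# Current-utility evaluation: self-modification is never preferred.
assert value("work", True) > value("self_modify", True)
# Future-utility evaluation: the agent prefers to change its own goal.
assert value("self_modify", False) > value("work", False)
```

This mirrors the abstract's conclusion: the goal-preservation drive only holds when future outcomes are judged by the utility function the agent has now.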
Revisiting the Core Ontology and Problem in Requirements Engineering
In their seminal paper in the ACM Transactions on Software Engineering and
Methodology, Zave and Jackson established a core ontology for Requirements
Engineering (RE) and used it to formulate the "requirements problem", thereby
defining what it means to successfully complete RE. Given that stakeholders of
the system-to-be communicate the information needed to perform RE, we show that
Zave and Jackson's ontology is incomplete. It does not cover all types of basic
concerns that the stakeholders communicate. These include beliefs, desires,
intentions, and attitudes. In response, we propose a core ontology that covers
these concerns and is grounded in sound conceptual foundations resting on a
foundational ontology. The new core ontology for RE leads to a new formulation
of the requirements problem that extends Zave and Jackson's formulation. We
thereby establish new standards for what minimum information should be
represented in RE languages and new criteria for determining whether RE has
been successfully completed.
Comment: Appears in the proceedings of the 16th IEEE International
Requirements Engineering Conference, 2008 (RE'08). Best paper award.
OpenML Benchmarking Suites
Machine learning research depends on objectively interpretable, comparable,
and reproducible algorithm benchmarks. Therefore, we advocate the use of
curated, comprehensive suites of machine learning tasks to standardize the
setup, execution, and reporting of benchmarks. We enable this through software
tools that help to create and leverage these benchmarking suites. These are
seamlessly integrated into the OpenML platform, and accessible through
interfaces in Python, Java, and R. OpenML benchmarking suites are (a) easy to
use through standardized data formats, APIs, and client libraries; (b)
machine-readable, with extensive meta-information on the included datasets; and
(c) shareable, so that benchmarks can be reused in future studies. We also present
a first, carefully curated and practical benchmarking suite for classification:
the OpenML Curated Classification benchmarking suite 2018 (OpenML-CC18).
Multi-Instanton Effects in QCD Sum Rules for the Pion
Multi-instanton contributions to QCD sum rules for the pion are investigated
within a framework which models the QCD vacuum as an instanton liquid. It is
shown that in singular gauge the sum of planar diagrams in leading order of the
expansion provides results similar to those of the effective single-instanton
contribution. These effects are also analysed in regular gauge. Our findings
confirm that at large distances the correlator functions are more adequately
described in the singular gauge than in the regular one.
Comment: 11 pages, RevTeX is used
Meta-surrogate benchmarking for hyperparameter optimization
Despite the recent progress in hyperparameter optimization (HPO), available
benchmarks that resemble real-world scenarios consist of only a few, very large
problem instances that are expensive to solve. This blocks researchers and
practitioners not only from systematically running the large-scale comparisons
needed to draw statistically significant conclusions, but also from reproducing
experiments that were conducted before. This work proposes a method to
alleviate these issues by means of a meta-surrogate model for HPO tasks trained
on off-line generated data. The model combines a probabilistic encoder with a
multi-task model such that it can generate inexpensive and realistic tasks of
the class of problems of interest. We demonstrate that benchmarking HPO methods
on samples of the generative model allows us to draw more coherent and
statistically significant conclusions that can be reached orders of magnitude
faster than using the original tasks. We provide evidence of our findings for
various HPO methods on a wide class of problems.
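The benchmarking idea can be caricatured as follows (everything here is an invented toy stand-in, not the paper's meta-surrogate): each latent sample defines a cheap objective, and optimizers are compared across many sampled tasks instead of a few expensive real ones.

```python
import random

random.seed(0)

def sample_task():
    """Draw a cheap task from a toy generative model of HPO problems."""
    z = random.gauss(0.3, 0.1)          # latent task parameter
    return lambda x: (x - z) ** 2       # stand-in for an expensive objective

def random_search(f, budget=20):
    return min(f(random.random()) for _ in range(budget))

def coarse_grid(f, budget=20):
    return min(f(i / (budget - 1)) for i in range(budget))

# Draw many inexpensive tasks and compare optimizers on all of them,
# which is cheap enough to repeat for statistical significance.
tasks = [sample_task() for _ in range(200)]
rs = sum(random_search(f) for f in tasks) / len(tasks)
gs = sum(coarse_grid(f) for f in tasks) / len(tasks)
print(f"random search: {rs:.4f}  grid: {gs:.4f}")
```

The real method replaces the hand-picked Gaussian above with a probabilistic encoder fitted to offline HPO data, so that sampled tasks are realistic rather than synthetic quadratics.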