On the Minimal Revision Problem of Specification Automata
As robots are being integrated into our daily lives, it becomes necessary to
provide guarantees of safe and provably correct operation. Such guarantees
can be provided using automata theoretic task and mission planning where the
requirements are expressed as temporal logic specifications. However, in
real-life scenarios, it is to be expected that not all user task requirements
can be realized by the robot. In such cases, the robot must provide feedback to
the user on why it cannot accomplish a given task. Moreover, the robot should
indicate which tasks it can accomplish that are as "close" as possible to the
initial user intent. This paper establishes that the latter problem, which is
referred to as the minimal specification revision problem, is NP-complete. A
heuristic algorithm is presented that can compute good approximations to the
Minimal Revision Problem (MRP) in polynomial time. The experimental study of
the algorithm demonstrates that in most problem instances the heuristic
algorithm actually returns the optimal solution. Finally, some cases where the
algorithm does not return the optimal solution are presented.
Comment: 23 pages, 16 figures, 2 tables; International Journal of Robotics Research, 2014 major revision (submitted)
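To make the problem concrete, here is a minimal brute-force sketch of minimal revision: enumerate candidate relaxations in order of increasing size and return the first one that restores feasibility. The names `minimal_revision` and `is_feasible` are hypothetical stand-ins, not the paper's automaton model, and this exponential enumeration is exactly what the paper's polynomial-time heuristic avoids.

```python
# Brute-force view of the Minimal Revision Problem: try ever-larger sets of
# relaxable constraints until dropping one such set makes the task feasible.
# `is_feasible` is a hypothetical oracle standing in for the automaton-based
# feasibility check; the paper's heuristic avoids this exponential search.
from itertools import combinations

def minimal_revision(relaxable_constraints, is_feasible):
    """Return a smallest set of constraints whose removal makes the
    specification realizable, or None if no revision helps."""
    for k in range(len(relaxable_constraints) + 1):
        for removed in combinations(relaxable_constraints, k):
            if is_feasible(set(removed)):
                return set(removed)  # smallest by construction
    return None

# Toy usage: the task becomes feasible once "b" and "c" are both dropped.
feasible = lambda removed: {"b", "c"} <= removed
print(minimal_revision(["a", "b", "c"], feasible))  # {'b', 'c'}
```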
Budget Feasible Mechanisms for Experimental Design
In the classical experimental design setting, an experimenter E has access to
a population of $n$ potential experiment subjects $i \in \{1, \dots, n\}$, each
associated with a vector of features $x_i \in \mathbb{R}^d$. Conducting an
experiment with subject $i$ reveals an unknown value $y_i \in \mathbb{R}$ to E.
E typically assumes some hypothetical relationship between the $x_i$'s and
$y_i$'s, e.g., $y_i \approx \beta^\top x_i$, and estimates $\beta$ from
experiments, e.g., through linear regression. As a proxy for various practical
constraints, E may select only a subset of subjects on which to conduct the
experiment.
We initiate the study of budgeted mechanisms for experimental design. In this
setting, E has a budget $B$. Each subject $i$ declares an associated cost
$c_i$ to be part of the experiment, and must be paid at least her cost. In
particular, the Experimental Design Problem (EDP) is to find a set $S$ of
subjects for the experiment that maximizes $V(S) = \log\det(I_d + \sum_{i\in S} x_i x_i^\top)$
under the constraint $\sum_{i\in S} c_i \le B$; our objective
function corresponds to the information gain in the parameter $\beta$ that is
learned through linear regression methods, and is related to the so-called
$D$-optimality criterion. Further, the subjects are strategic and may lie about
their costs.
We present a deterministic, polynomial-time, budget feasible mechanism
scheme that is approximately truthful and yields a constant-factor
approximation to EDP. In particular, for any small $\epsilon > 0$ and
$\delta > 0$, we can construct a $(12.98, \epsilon)$-approximate mechanism
that is $\delta$-truthful and runs in polynomial time in both $n$ and
$\log\log\frac{B}{\epsilon\delta}$. We also establish that no truthful,
budget-feasible algorithm is possible within a factor-2 approximation, and
show how to generalize our approach to a wide class of learning problems,
beyond linear regression.
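As an illustration of the objective, the sketch below computes $V(S)$ with NumPy and runs a naive cost-benefit greedy selection under the budget. This is only a toy baseline under the stated model, not the paper's budget-feasible mechanism, which additionally handles payments and strategic cost reports.

```python
# Toy sketch of the EDP objective V(S) = log det(I_d + sum_{i in S} x_i x_i^T)
# and a naive greedy that picks the best marginal gain per unit cost while
# the budget allows. Not the paper's mechanism; no payments or truthfulness.
import numpy as np

def information_gain(X, S):
    """V(S): log-determinant of I_d plus the outer products of chosen rows."""
    d = X.shape[1]
    M = np.eye(d) + sum(np.outer(X[i], X[i]) for i in S)
    return np.linalg.slogdet(M)[1]

def greedy_edp(X, costs, budget):
    """Greedily add the affordable subject with the best gain/cost ratio."""
    S, spent = [], 0.0
    remaining = set(range(len(X)))
    while True:
        affordable = [i for i in remaining if spent + costs[i] <= budget]
        if not affordable:
            return S
        best = max(affordable, key=lambda i:
                   (information_gain(X, S + [i]) - information_gain(X, S))
                   / costs[i])
        S.append(best); spent += costs[best]; remaining.remove(best)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))           # 20 subjects, d = 3 features (invented)
costs = rng.uniform(1, 5, size=20)
S = greedy_edp(X, costs, budget=10.0)
print(S, information_gain(X, S))
```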
Cleaning Denial Constraint Violations through Relaxation
Data cleaning is a time-consuming process that depends on the data analysis
that users perform. Existing solutions treat data cleaning as a separate
offline process that takes place before analysis begins. Applying data cleaning
before analysis assumes a priori knowledge of the inconsistencies and the query
workload, thereby requiring effort on understanding and cleaning the data that
is unnecessary for the analysis. We propose an approach that performs
probabilistic repair of denial constraint violations on-demand, driven by the
exploratory analysis that users perform. We introduce Daisy, a system that
seamlessly integrates data cleaning into the analysis by relaxing query
results. Daisy executes analytical query-workloads over dirty data by weaving
cleaning operators into the query plan. Our evaluation shows that Daisy adapts
to the workload and outperforms traditional offline cleaning on both synthetic
and real-world workloads.
Comment: To appear in SIGMOD 2020 proceedings
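For readers unfamiliar with denial constraints, the sketch below detects violating tuple pairs for one hypothetical constraint (two tuples may not agree on `zip` but disagree on `state`); it illustrates the kind of inconsistency Daisy repairs on demand, not Daisy's query-plan integration.

```python
# Detect violations of a hypothetical denial constraint (not from the paper):
#   not( t1.zip == t2.zip  and  t1.state != t2.state )
from itertools import combinations

rows = [
    {"id": 1, "zip": "10001", "state": "NY"},
    {"id": 2, "zip": "10001", "state": "NJ"},   # violates together with row 1
    {"id": 3, "zip": "94105", "state": "CA"},
]

def dc_violations(rows):
    """Return all pairs of tuples that jointly violate the constraint."""
    return [(t1["id"], t2["id"])
            for t1, t2 in combinations(rows, 2)
            if t1["zip"] == t2["zip"] and t1["state"] != t2["state"]]

print(dc_violations(rows))  # [(1, 2)]
```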
Sound ranking algorithms for XML search
Ranking algorithms for XML should reflect the actual combined content and structure constraints of queries, while at the same time producing equal rankings for queries that are semantically equal. Ranking algorithms that produce different rankings for semantically equal queries are easily detected by tests on large databases: we call such algorithms not sound. We report the behavior of different approaches to ranking content-and-structure queries on pairs of queries for which the query semantics lead us to expect equal ranking results. We show that most of these approaches are not sound. Of the remaining approaches, only 3 adhere to the W3C XQuery Full-Text standard.
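A minimal sketch of the soundness test just described: run two semantically equal queries and compare the rankings. `rank`, the dummy ranker, and the query pair are all hypothetical stand-ins for illustration.

```python
# Soundness test harness: semantically equal queries must rank equally.
# `rank` is a hypothetical ranking function; the dummy below is only for
# demonstration and ignores the query entirely.

def agrees_on_pair(rank, query_a, query_b, collection):
    """Return True if the two (semantically equal) queries produce the
    same ranking; False exposes the algorithm as not sound."""
    return rank(query_a, collection) == rank(query_b, collection)

# Toy usage with a length-based dummy ranker and an equivalent query pair.
dummy_rank = lambda q, docs: sorted(docs, key=len)
docs = ["<a><b/></a>", "<a/>"]
print(agrees_on_pair(dummy_rank, "//a[b]", "//a[./b]", docs))  # True
```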
Towards Intelligent Databases
This article is a presentation of the objectives and techniques
of deductive databases. The deductive approach to databases aims at extending
with intensional definitions other database paradigms that describe
applications extensionally. We first show how constructive specifications can
be expressed with deduction rules, and how normative conditions can be defined
using integrity constraints. We outline the principles of bottom-up and
top-down query answering procedures and present the techniques used for
integrity checking. We then argue that it is often desirable for a database
system to manage not only database applications, but also specifications of
system components. We present such meta-level specifications and discuss
their advantages over conventional approaches.
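As a concrete instance of the bottom-up procedure outlined above, the sketch below evaluates the classic `ancestor` deduction rules over an extensional `parent` relation by iterating to a fixpoint (naive evaluation; the example data are invented).

```python
# Bottom-up (naive) evaluation of a deduction rule to a fixpoint:
#   ancestor(X,Y) :- parent(X,Y).
#   ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z).
parent = {("ann", "bob"), ("bob", "cal")}

def ancestors(parent):
    anc = set(parent)                        # apply the base rule once
    while True:
        # apply the recursive rule to everything derived so far
        new = {(x, z) for (x, y) in parent for (y2, z) in anc if y == y2}
        if new <= anc:
            return anc                       # fixpoint: nothing new derived
        anc |= new

print(sorted(ancestors(parent)))
# [('ann', 'bob'), ('ann', 'cal'), ('bob', 'cal')]
```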
Non-monotone Submodular Maximization with Nearly Optimal Adaptivity and Query Complexity
Submodular maximization is a general optimization problem with a wide range
of applications in machine learning (e.g., active learning, clustering, and
feature selection). In large-scale optimization, the parallel running time of
an algorithm is governed by its adaptivity, which measures the number of
sequential rounds needed if the algorithm can execute polynomially-many
independent oracle queries in parallel. While low adaptivity is ideal, it is
not sufficient for an algorithm to be efficient in practice---there are many
applications of distributed submodular optimization where the number of
function evaluations becomes prohibitively expensive. Motivated by these
applications, we study the adaptivity and query complexity of submodular
maximization. In this paper, we give the first constant-factor approximation
algorithm for maximizing a non-monotone submodular function subject to a
cardinality constraint $k$ that runs in $O(\log n)$ adaptive rounds and makes
$O(n \log k)$ oracle queries in expectation. In our empirical study, we use
three real-world applications to compare our algorithm with several benchmarks
for non-monotone submodular maximization. The results demonstrate that our
algorithm finds competitive solutions using significantly fewer rounds and
queries.
Comment: 12 pages, 8 figures
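For contrast with the paper's low-adaptivity algorithm, here is a sketch of the classic sequential random greedy of Buchbinder et al. for non-monotone submodular maximization under a cardinality constraint; the coverage-minus-cost objective below is invented for illustration.

```python
# Random greedy (Buchbinder et al.) for non-monotone submodular maximization
# under a cardinality constraint k: each round, choose uniformly among the k
# elements with the largest positive marginal gain. Sequential, so it shows
# the problem setting rather than the paper's O(log n)-adaptive algorithm.
import random

def random_greedy(f, ground, k, seed=0):
    rng = random.Random(seed)
    S = set()
    for _ in range(k):
        top = sorted(ground - S, key=lambda e: f(S | {e}) - f(S),
                     reverse=True)[:k]
        top = [e for e in top if f(S | {e}) - f(S) > 0]
        if not top:
            break                      # no element improves the objective
        S.add(rng.choice(top))
    return S

# Toy non-monotone objective: coverage minus a per-element cost (invented).
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}, 4: {"d"}}
f = lambda S: len(set().union(*(cover[e] for e in S))) - 0.5 * len(S)
print(random_greedy(f, set(cover), k=2))
```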