Efficient Value of Information Computation
One of the most useful sensitivity analysis techniques of decision analysis
is the computation of value of information (or clairvoyance), the difference in
value obtained by changing the decisions by which some of the uncertainties are
observed. In this paper, some simple but powerful extensions to previous
algorithms are introduced which allow an efficient value of information
calculation on the rooted cluster tree (or strong junction tree) used to solve
the original decision problem.
Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI 1999).
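The quantity the abstract works with can be illustrated on a toy problem. The sketch below computes the value of (perfect) information directly on a two-action decision with one binary uncertainty; it does not implement the paper's cluster-tree algorithm, and the prior, actions, and payoffs are invented for illustration:

```python
# Toy value-of-information (clairvoyance) computation. Priors, action
# names, and payoffs are made up; this is NOT the cluster-tree method.

p_good = 0.3  # prior P(U = good)
payoff = {            # payoff[action][state of U]
    "invest": {"good": 100.0, "bad": -40.0},
    "pass":   {"good": 0.0,   "bad": 0.0},
}

def expected_value(action, p):
    return p * payoff[action]["good"] + (1 - p) * payoff[action]["bad"]

# Value without observing U: commit to the single best action under the prior.
ev_prior = max(expected_value(a, p_good) for a in payoff)

# Value with U observed before acting: pick the best action in each state.
ev_clairvoyant = (p_good * max(payoff[a]["good"] for a in payoff)
                  + (1 - p_good) * max(payoff[a]["bad"] for a in payoff))

# Value of information = difference in value from moving the observation
# of U before the decision.
voi = ev_clairvoyant - ev_prior
print(ev_prior, ev_clairvoyant, voi)
```

The difference `voi` is exactly the "difference in value obtained by changing the decisions by which some of the uncertainties are observed" that the abstract describes; the paper's contribution is computing it efficiently on the junction tree rather than by this brute-force enumeration.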
Learning to select computations
The efficient use of limited computational resources is an essential
ingredient of intelligence. Selecting computations optimally according to
rational metareasoning would achieve this, but this is computationally
intractable. Inspired by psychology and neuroscience, we propose the first
concrete and domain-general learning algorithm for approximating the optimal
selection of computations: Bayesian metalevel policy search (BMPS). We derive
this general, sample-efficient search algorithm for a computation-selecting
metalevel policy based on the insight that the value of information lies
between the myopic value of information and the value of perfect information.
We evaluate BMPS on three increasingly difficult metareasoning problems: when
to terminate computation, how to allocate computation between competing
options, and planning. Across all three domains, BMPS achieved near-optimal
performance and compared favorably to previously proposed metareasoning
heuristics. Finally, we demonstrate the practical utility of BMPS in an
emergency management scenario, even accounting for the overhead of
metareasoning.
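The bracketing insight above can be made concrete on a tiny metareasoning problem: deciding between a safe option and an uncertain one under a Beta belief. The belief parameters, the safe value, the computation cost, and the mixture weight `w` below are all invented; the sketch only shows that the myopic VOI of one more sample lower-bounds the value of perfect information, and that a weighted mixture of the two (the form BMPS searches over) sits in between:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 2.0          # Beta(a, b) belief about the uncertain option's mean
known = 0.5              # value of the safe alternative (made up)
cost = 0.001             # cost of one more computation (made up)

mean = a / (a + b)

# Myopic VOI: expected gain from exactly one more Bernoulli observation,
# enumerated over the two possible outcomes and their posterior means.
post = {1: (a + 1) / (a + b + 1), 0: a / (a + b + 1)}
voi_myopic = (mean * max(post[1], known)
              + (1 - mean) * max(post[0], known)
              - max(mean, known))

# Value of perfect information: learn the true mean exactly (Monte Carlo).
theta = rng.beta(a, b, 200_000)
vpi = np.maximum(theta, known).mean() - max(mean, known)

# The true value of computation lies between the two bounds, so it can be
# approximated by a mixture with a learned weight w (w here is made up).
w = 0.4
voc_hat = w * vpi + (1 - w) * voi_myopic - cost
print(voi_myopic, vpi, voc_hat)
```

Choosing whichever computation maximizes `voc_hat` (and stopping when it is negative) is the shape of the policy the paper learns; BMPS's contribution is fitting the weights sample-efficiently rather than fixing them by hand.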
Sequential Information Elicitation in Multi-Agent Systems
We introduce the study of sequential information elicitation in strategic
multi-agent systems. In an information elicitation setup a center attempts to
compute the value of a function based on private information (a-k-a secrets)
accessible to a set of agents. We consider the classical multi-party
computation setup where each agent is interested in knowing the result of the
function. However, in our setting each agent is strategic, and since acquiring
information is costly, an agent may be tempted not to spend the effort of
obtaining the information, free-riding on other agents' computations. A
mechanism which elicits agents' secrets and performs the desired computation
defines a game. A mechanism is 'appropriate' if there exists an equilibrium in
which it is able to elicit (sufficiently many) agents' secrets and perform the
computation, for all possible secret vectors. We characterize a general
efficient procedure for determining an appropriate mechanism, if such a
mechanism exists. Moreover, we also address the existence problem, providing a
polynomial-time algorithm for verifying the existence of an appropriate mechanism.
Comment: Appears in Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence (UAI 2004).
On the Compositionality of Dynamic Leakage and Its Application to the Quantification Problem
Quantitative information flow (QIF) is traditionally defined as the expected
value of information leakage over all feasible program runs, and it fails to
identify vulnerable programs in which only a limited number of runs leak a
large amount of information. As discussed in Bielova (2016), a good notion for
dynamic leakage and an efficient way of computing the leakage are needed. To
address this problem, the authors have already proposed two notions for dynamic
leakage and a method of quantifying dynamic leakage based on model counting.
Inspired by the work of Kawamoto et al. (2017), this paper proposes two
efficient methods for computing dynamic leakage, a compositional method along
with the sequential structure of a program and a parallel computation based on
the value domain decomposition. For the former, we also investigate both exact
and approximated calculations. From the perspective of implementation, we
utilize binary decision diagrams (BDDs) and deterministic decomposable negation
normal forms (d-DNNFs) to represent Boolean formulas in model counting.
Finally, we show experimental results on several examples.
Comment: preprint.
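The model-counting view of dynamic leakage can be shown in miniature. The sketch below brute-forces the count that BDDs/d-DNNFs compute symbolically in the paper: the leakage of one observed run is how much the output narrows down the secret. The 4-bit secret space and the leaking program `f` are invented for this sketch:

```python
from math import log2

# Dynamic leakage of one concrete run = log2|S| - log2|{s : f(s) = o}|,
# i.e. how much the observed output o narrows down the secret s.
# Brute-force counting stands in for the paper's BDD/d-DNNF model counting.

def f(s):                  # toy program: reveals whether the secret is < 4
    return s < 4

secrets = list(range(16))  # uniform 4-bit secret (made up)
observed = f(3)            # one concrete run, with actual secret 3

consistent = [s for s in secrets if f(s) == observed]
leakage_bits = log2(len(secrets)) - log2(len(consistent))
print(leakage_bits)
```

Here the single run leaks 2 bits (16 candidate secrets shrink to 4), even though averaging over all runs would report less; that gap between per-run and expected leakage is exactly the motivation stated above.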
Max-value Entropy Search for Efficient Bayesian Optimization
Entropy Search (ES) and Predictive Entropy Search (PES) are popular and
empirically successful Bayesian Optimization techniques. Both rely on a
compelling information-theoretic motivation, and maximize the information
gained about the arg max of the unknown function; yet, both are plagued by
the expensive computation for estimating entropies. We propose a new criterion,
Max-value Entropy Search (MES), that instead uses the information about the
maximum function value. We show relations of MES to other Bayesian optimization
methods, and establish a regret bound. We observe that MES maintains or
improves the good empirical performance of ES/PES, while tremendously
lightening the computational burden. In particular, MES is much more robust to
the number of samples used for computing the entropy, and hence more efficient
for higher-dimensional problems.
Comment: Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017.
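A sketch of the MES criterion at a single candidate point: assuming a Gaussian posterior N(mu, sigma^2) over f(x) and a set of sampled maximum values y*, the acquisition averages an entropy-reduction term over the samples. The posterior parameters and y* values below are made up (in the paper they come from a Gaussian process):

```python
import numpy as np
from scipy.stats import norm

def mes_acquisition(mu, sigma, y_star_samples):
    # gamma measures how far each sampled max y* sits above the posterior
    # mean at this point, in posterior standard deviations.
    gamma = (np.asarray(y_star_samples, dtype=float) - mu) / sigma
    # Per-sample entropy reduction of f(x) from conditioning on f(x) <= y*.
    vals = gamma * norm.pdf(gamma) / (2 * norm.cdf(gamma)) - norm.logcdf(gamma)
    return vals.mean()

val = mes_acquisition(mu=0.0, sigma=1.0, y_star_samples=[1.0, 1.5, 2.0])
print(val)
```

Only one-dimensional max-value samples enter the formula, which is why (as the abstract notes) MES is far less sensitive to the number of samples than ES/PES, which must estimate entropies over the multi-dimensional arg max.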
Block-based quantum-logic synthesis
In this paper, the problem of constructing an efficient quantum circuit for
the implementation of an arbitrary quantum computation is addressed. To this
end, a basic block based on the cosine-sine decomposition method is suggested
which contains a fixed number of qubits. In addition, a previously proposed
quantum-logic synthesis method based on quantum Shannon decomposition is
recursively applied to reach unitary gates over that number of qubits. Then,
the basic block is used and some optimizations are applied to remove redundant
gates. It is shown that the exact size of the basic block affects the number
of one-qubit and CNOT gates in the proposed method. In comparison to the
previous synthesis methods, the block size is examined in order to improve
either the number of CNOT gates or the total number of gates. The proposed
approach is further analyzed by considering the nearest-neighbor limitation.
According to our evaluation, the number of CNOT gates is increased by at most
a constant factor if the nearest-neighbor interaction is applied.
Comment: 15 pages, 8 figures, 5 tables, Quantum Information and Computation (QIC) Journal.
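The cosine-sine decomposition the block construction builds on is available in SciPy, which makes the basic step easy to demonstrate: a 2^n x 2^n unitary splits as U = L @ CS @ R with block-diagonal L, R and a cosine-sine middle factor. The random two-qubit unitary below is made up for the sketch:

```python
import numpy as np
from scipy.linalg import cossin

# Build a random 4x4 (two-qubit) unitary via QR of a random complex matrix.
rng = np.random.default_rng(0)
m = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
u, _ = np.linalg.qr(m)

# Cosine-sine decomposition with a 2x2 block partition (one qubit split off),
# the recursion step used by quantum Shannon decomposition.
left, cs, right = cossin(u, p=2, q=2)

ok = np.allclose(left @ cs @ right, u)   # CSD reconstructs U exactly
print(ok)
```

In synthesis, `left` and `right` become multiplexed single-qubit gates and `cs` a multiplexed rotation, and the recursion bottoms out at the paper's basic block instead of single qubits.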
Myopic Policy Bounds for Information Acquisition POMDPs
This paper addresses the problem of optimal control of robotic sensing
systems aimed at autonomous information gathering in scenarios such as
environmental monitoring, search and rescue, and surveillance and
reconnaissance. The information gathering problem is formulated as a partially
observable Markov decision process (POMDP) with a reward function that captures
uncertainty reduction. Unlike the classical POMDP formulation, the resulting
reward structure is nonlinear in the belief state and the traditional
approaches do not apply directly. Instead of developing a new approximation
algorithm, we show that if attention is restricted to a class of problems with
certain structural properties, one can derive (often tight) upper and lower
bounds on the optimal policy via an efficient myopic computation. These policy
bounds can be applied in conjunction with an online branch-and-bound algorithm
to accelerate the computation of the optimal policy. We obtain informative
lower and upper policy bounds with low computational effort in a target
tracking domain. The performance of branch-and-bounding is demonstrated and
compared with exact value iteration.
Comment: 8 pages, 3 figures.
Efficient Estimation of the Value of Information in Monte Carlo Models
The expected value of information (EVI) is the most powerful measure of
sensitivity to uncertainty in a decision model: it measures the potential of
information to improve the decision, and hence the expected gain in the value
of the outcome. Standard methods for computing EVI use discrete variables and are
computationally intractable for models that contain more than a few variables.
Monte Carlo simulation provides the basis for more tractable evaluation of
large predictive models with continuous and discrete variables, but so far
computation of EVI in a Monte Carlo setting also has appeared impractical. We
introduce an approximate approach based on pre-posterior analysis for
estimating EVI in Monte Carlo models. Our method uses a linear approximation to
the value function and multiple linear regression to estimate the linear model
from the samples. The approach is efficient and practical for extremely large
models. It allows easy estimation of EVI for perfect or partial information on
individual variables or on combinations of variables. We illustrate its
implementation within Demos (a decision modeling system), and its application
to a large model for crisis transportation planning.
Comment: Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI 1994).
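The Monte Carlo view of EVI can be shown in a few lines. The sketch below estimates EVPI (EVI for perfect information on one variable) by brute force, skipping the paper's linear-regression approximation; the two-decision model, its payoffs, and the input distribution are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)            # samples of the uncertain input

def value(decision, x):
    # decision "A" pays off with x; "B" is a safe constant payoff (made up)
    return 10.0 * x if decision == "A" else np.full_like(x, 2.0)

values = {d: value(d, x) for d in ("A", "B")}

# Decide now: commit to the single best decision under the prior.
ev_no_info = max(v.mean() for v in values.values())

# Decide after observing x (perfect information): best decision per sample.
ev_perfect = np.maximum(values["A"], values["B"]).mean()

evpi = ev_perfect - ev_no_info
print(ev_no_info, ev_perfect, evpi)
```

The same samples serve both expectations, which is what makes the approach practical for large models; the paper's regression step replaces the inner `max` over decisions with a linear approximation of the value function so that partial information on subsets of variables can be priced too.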
A graph-theoretical approach for the computation of connected iso-surfaces based on volumetric data
The existing combinatorial methods for iso-surface computation are efficient
for pure visualization purposes, but it is known that the resulting
iso-surfaces can have holes, and topological problems like missing or wrong
connectivity can appear. To avoid such problems, we introduce a
graph-theoretical method for the computation of iso-surfaces on cuboid meshes
in $\mathbb{R}^3$. The method for the generation of iso-surfaces employs
labeled cuboid graphs $G = (V, E, \ell)$ such that $V$ is the set of vertices
of a cuboid $Q$, $E$ is the set of edges of $Q$, and
$\ell : V \to \mathbb{R}$ is a labeling. The nodes of $G$ are weighted by the
values of $\ell$, which represents the volumetric information, e.g.\ from
a Volume of Fluid method. Using a given iso-level $\lambda$, we first obtain
all iso-points, i.e.\ points where the value $\lambda$ is attained by the
edge-interpolated $\ell$-field.
iso-elements which are composed of triangles and are such that their polygonal
boundary has only iso-points as vertices. All vertices lie on the faces of a
single mesh cell.
We give a proof that the generated iso-surface is connected up to the
boundary of the domain and it can be decomposed into different oriented
components. Two different components may have discrete points or line segments
in common. The graph-theoretical method for the computation of iso-surfaces
developed in this paper makes it possible to recover local information of the
iso-surface that can be used, e.g.\ to compute discrete mean curvature and to
solve surface PDEs. Concerning the computational effort, the resulting
algorithm is as efficient as existing combinatorial methods.
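The iso-point step described above is simple to sketch: on each cuboid edge where the node-weighted field crosses the iso-level, linear interpolation along the edge yields the iso-point. The cell geometry, field values, and iso-level below are made up for this sketch:

```python
iso = 0.5
# one cell: vertex id -> ((x, y, z) coordinates, field value at the vertex)
verts = {
    0: ((0, 0, 0), 0.2), 1: ((1, 0, 0), 0.8),
    2: ((0, 1, 0), 0.4), 3: ((0, 0, 1), 0.9),
}
edges = [(0, 1), (0, 2), (0, 3)]   # a few edges incident to vertex 0

iso_points = []
for a, b in edges:
    (pa, va), (pb, vb) = verts[a], verts[b]
    if (va - iso) * (vb - iso) < 0:          # field crosses the iso-level
        t = (iso - va) / (vb - va)           # linear interpolation weight
        iso_points.append(tuple(pa[i] + t * (pb[i] - pa[i]) for i in range(3)))

print(iso_points)
```

The paper's contribution begins where this sketch ends: assembling the iso-points into triangulated iso-elements per cell face and proving the resulting surface is connected up to the domain boundary.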
On Computation of Error Locations and Values in Hermitian Codes
We obtain a technique to reduce the computational complexity associated with
decoding of Hermitian codes. In particular, we propose a method to compute the
error locations and values using a univariate error locator and a
univariate error evaluator polynomial. To achieve this, we introduce the
notion of Semi-Erasure Decoding of Hermitian codes and prove that decoding of
Hermitian codes can always be performed using semi-erasure decoding. The
central results are:
* Searching for error locations requires evaluating a univariate error
locator polynomial over a set of points, as in the Chien search for Reed-Solomon codes.
* Forney's formula for error value computation in Reed-Solomon codes can
directly be applied to compute the error values in Hermitian codes.
The approach develops from the idea that transmitting a modified form of the
information may be more efficient than transmitting the information itself.
Comment: 10 pages, Submitted to ITW 2008 (with some minor modifications).
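The Chien-search idea referred to in the first bullet is just exhaustive root finding for the error locator over the field. To keep the sketch short it works over the prime field GF(7) with an invented two-error locator, rather than the extension fields used by actual Reed-Solomon or Hermitian codes:

```python
# Toy Chien search over GF(7). Error locations 2 and 3 are made up;
# the locator L(x) = (1 - 2x)(1 - 3x) = 1 - 5x + 6x^2 (mod 7) has roots
# at the inverses of the error locations.

P = 7
coeffs = [1, -5 % P, 6]                 # coefficients of 1, x, x^2

def eval_poly(x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

# Chien search: evaluate the locator at every nonzero field element.
roots = [x for x in range(1, P) if eval_poly(x) == 0]

# Each root is the inverse of an error location.
locations = sorted(pow(r, -1, P) for r in roots)
print(roots, locations)
```

With the locations in hand, Forney's formula evaluates the error evaluator and the locator's derivative at the same roots to recover the error values; the paper's point is that this whole Reed-Solomon machinery carries over to Hermitian codes via semi-erasure decoding.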
- …