Permutation and Grouping Methods for Sharpening Gaussian Process Approximations
Vecchia's approximate likelihood for Gaussian process parameters depends on
how the observations are ordered, which can be viewed as a deficiency because
the exact likelihood is permutation-invariant. This article takes the
alternative standpoint that the ordering of the observations can be tuned to
sharpen the approximations. Advantageously chosen orderings can drastically
improve the approximations, and in fact, completely random orderings often
produce far more accurate approximations than default coordinate-based
orderings do. In addition to the permutation results, automatic methods for
grouping calculations of components of the approximation are introduced; these
simultaneously improve the quality of the approximation and reduce its
computational burden. In common settings, reordering combined with
grouping reduces Kullback-Leibler divergence from the target model by a factor
of 80 and computation time by a factor of 2 compared to ungrouped
approximations with default ordering. The claims are supported by theory and
numerical results with comparisons to other approximations, including tapered
covariances and stochastic partial differential equation approximations.
Computational details are provided, including efficiently finding the orderings
and ordered nearest neighbors, and profiling out linear mean parameters and
using the approximations for prediction and conditional simulation. An
application to space-time satellite data is presented.
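To make the idea concrete, the sketch below is a minimal, zero-mean teaching version of a Vecchia-style approximate log-likelihood (not the paper's implementation): each observation is conditioned only on its m nearest previously ordered neighbors, and an `order` permutation can be passed in to experiment with coordinate-based versus random orderings. The function and parameter names are hypothetical, and NumPy is assumed.

```python
import numpy as np

def vecchia_loglik(y, locs, cov_fn, m=3, order=None):
    """Vecchia-style approximation: replace the exact Gaussian log-likelihood
    by a product of univariate conditionals, each conditioning on at most the
    m nearest *previously ordered* observations (zero-mean sketch)."""
    n = len(y)
    if order is None:
        order = np.arange(n)               # default: keep the given ordering
    y, locs = y[order], locs[order]
    ll = 0.0
    for i in range(n):
        if i == 0:
            nbrs = np.array([], dtype=int)
        else:
            d = np.linalg.norm(locs[:i] - locs[i], axis=1)
            nbrs = np.argsort(d)[:m]       # ordered nearest neighbours
        idx = np.append(nbrs, i).astype(int)
        C = cov_fn(locs[idx], locs[idx])   # covariance of neighbours + point i
        if len(nbrs) == 0:
            mu, var = 0.0, C[0, 0]
        else:
            k, Kxx = C[-1, :-1], C[:-1, :-1]
            w = np.linalg.solve(Kxx, k)
            mu = w @ y[nbrs]               # conditional mean given neighbours
            var = C[-1, -1] - w @ k        # conditional variance
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return ll
```

Passing `order=rng.permutation(n)` tries a random ordering; with m = n - 1 every observation conditions on all of its predecessors, so the approximation collapses to the exact likelihood, which gives a useful sanity check.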
Density-operator evolution: Complete positivity and the Keldysh real-time expansion
We study the reduced time-evolution of open quantum systems by combining
quantum-information and statistical field theory. Inspired by prior work [EPL
102, 60001 (2013) and Phys. Rev. Lett. 111, 050402 (2013)] we establish the
explicit structure guaranteeing the complete positivity (CP) and
trace-preservation (TP) of the real-time evolution expansion in terms of the
microscopic system-environment coupling.
This reveals a fundamental two-stage structure of the coupling expansion:
Whereas the first stage defines the dissipative timescales of the system
--before having integrated out the environment completely-- the second stage
sums up elementary physical processes described by CP superoperators. This
allows us to establish the nontrivial relation between the (Nakajima-Zwanzig)
memory-kernel superoperator for the density operator and novel memory-kernel
operators that generate the Kraus operators of an operator-sum. Importantly,
this operational approach can be implemented in the existing Keldysh real-time
technique and allows approximations for general time-nonlocal quantum master
equations to be systematically compared and developed while keeping the CP and
TP structure explicit.
Our considerations build on the result that a Kraus operator for a physical
measurement process on the environment can be obtained by 'cutting' a group of
Keldysh real-time diagrams 'in half'. This naturally leads to Kraus operators
lifted to the system plus environment which have a diagrammatic expansion in
terms of time-nonlocal memory-kernel operators. These lifted Kraus operators
obey coupled time-evolution equations which constitute an unraveling of the
original Schr\"odinger equation for system plus environment. Whereas both
equations lead to the same reduced dynamics, only the former explicitly encodes
the operator-sum structure of the coupling expansion.
Comment: Submission to SciPost Physics, 49 pages including 6 appendices, 13 figures. Significant improvement of introduction and conclusion, added discussions, fixed typos, no change to results.
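The complete-positivity and trace-preservation structure that the abstract emphasizes can be checked numerically for any operator sum. The sketch below is a standard textbook example (the amplitude-damping channel), not the paper's Keldysh construction; it only illustrates the two defining conditions: the Kraus operators satisfy the TP condition sum_k K_k^† K_k = I, and the Choi matrix sum_k vec(K_k) vec(K_k)^† is positive semidefinite (CP). NumPy is assumed.

```python
import numpy as np

# Amplitude-damping channel as an operator sum rho -> sum_k K_k rho K_k^†:
# a standard example of a completely positive, trace-preserving (CPTP) map.
gamma = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
kraus = [K0, K1]

def apply_channel(rho, kraus):
    """Apply the operator-sum (Kraus) representation to a density matrix."""
    return sum(K @ rho @ K.conj().T for K in kraus)

# Trace preservation: sum_k K_k^† K_k must equal the identity.
tp_check = sum(K.conj().T @ K for K in kraus)

# Complete positivity: the Choi matrix sum_k |K_k>><<K_k| is PSD by construction.
choi = sum(np.outer(K.reshape(-1), K.conj().reshape(-1)) for K in kraus)
```

Any approximation scheme expressed directly in Kraus operators, as the paper advocates, passes both checks automatically, whereas a truncated memory-kernel expansion for the density operator need not.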
Exact Inference Techniques for the Analysis of Bayesian Attack Graphs
Attack graphs are a powerful tool for security risk assessment by analysing
network vulnerabilities and the paths attackers can use to compromise network
resources. The uncertainty about the attacker's behaviour makes Bayesian
networks suitable to model attack graphs to perform static and dynamic
analysis. Previous approaches have focused on the formalization of attack
graphs into a Bayesian model rather than proposing mechanisms for their
analysis. In this paper we propose efficient algorithms to perform exact
inference in Bayesian attack graphs, enabling both static and dynamic network
risk assessment. To support the validity of our approach, we have performed an
extensive experimental evaluation on synthetic Bayesian attack graphs with
different topologies, showing the computational advantages in terms of time and
memory use of the proposed techniques when compared to existing approaches.
Comment: 14 pages, 15 figures
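As a toy illustration of exact inference on a Bayesian attack graph, the sketch below models a three-node chain (gateway exploited -> internal host compromised -> database reached) and computes an attacker posterior by summing out the hidden state. The probabilities and names are invented for illustration, and brute-force enumeration stands in for the paper's efficient algorithms, which scale to much larger graphs.

```python
from itertools import product

# Toy 3-node Bayesian attack graph (illustrative probabilities, not from the paper):
# A = attacker exploits the gateway, B = internal host compromised, C = database reached.
p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True: 0.8, False: 0.05}   # P(B = True | A)
p_c_given_b = {True: 0.6, False: 0.01}   # P(C = True | B)

def joint(a, b, c):
    """Joint probability from the chain factorization P(A) P(B|A) P(C|B)."""
    pa = p_a[a]
    pb = p_b_given_a[a] if b else 1.0 - p_b_given_a[a]
    pc = p_c_given_b[b] if c else 1.0 - p_c_given_b[b]
    return pa * pb * pc

def posterior_a_given_c(c_obs=True):
    """Exact P(A = True | C = c_obs) by summing the joint over the hidden B."""
    num = sum(joint(True, b, c_obs) for b in (True, False))
    den = sum(joint(a, b, c_obs) for a, b in product((True, False), repeat=2))
    return num / den
```

Observing that the database was reached (C = True) sharply raises the posterior belief that the gateway was exploited, which is exactly the kind of dynamic (evidence-driven) risk assessment the abstract describes.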
Synchronising C/C++ and POWER
Shared memory concurrency relies on synchronisation primitives: compare-and-swap, load-reserve/store-conditional (aka LL/SC), language-level mutexes, and so on. In a sequentially consistent setting, or even in the TSO setting of x86 and Sparc, these have well-understood semantics. But in the very relaxed settings of IBM®, POWER®, ARM, or C/C++, it remains surprisingly unclear exactly what the programmer can depend on.
This paper studies relaxed-memory synchronisation. On the hardware side, we give a clear semantic characterisation of the load-reserve/store-conditional primitives as provided by POWER multiprocessors, for the first time since they were introduced 20 years ago; we cover their interaction with relaxed loads, stores, barriers, and dependencies. Our model, while not officially sanctioned by the vendor, is validated by extensive testing, comparing actual implementation behaviour against an oracle generated from the model, and by detailed discussion with IBM staff. We believe the ARM semantics to be similar.
On the software side, we prove sound a proposed compilation scheme of the C/C++ synchronisation constructs to POWER, including C/C++ spinlock mutexes, fences, and read-modify-write operations, together with the simpler atomic operations for which soundness is already known from our previous work; this is a first step in verifying concurrent algorithms that use load-reserve/store-conditional with respect to a realistic semantics. We also build confidence in the C/C++ model in its own terms, fixing some omissions and contributing to the C standards committee's adoption of the C++11 concurrency model.
Exploiting Causal Independence in Bayesian Network Inference
A new method is proposed for exploiting causal independencies in exact
Bayesian network inference. A Bayesian network can be viewed as representing a
factorization of a joint probability into the multiplication of a set of
conditional probabilities. We present a notion of causal independence that
enables one to further factorize the conditional probabilities into a
combination of even smaller factors and consequently obtain a finer-grain
factorization of the joint probability. The new formulation of causal
independence lets us specify the conditional probability of a variable given
its parents in terms of an associative and commutative operator, such as
``or'', ``sum'' or ``max'', on the contribution of each parent. We start with a
simple algorithm VE for Bayesian network inference that, given evidence and a
query variable, uses the factorization to find the posterior distribution of
the query. We show how this algorithm can be extended to exploit causal
independence. Empirical studies, based on the CPCS networks for medical
diagnosis, show that this method is more efficient than previous methods and
allows for inference in larger networks than previous algorithms.
Comment: See http://www.jair.org/ for any accompanying file
Universal lossless source coding with the Burrows Wheeler transform
The Burrows Wheeler transform (1994) is a reversible sequence transformation used in a variety of practical lossless source-coding algorithms. In each, the BWT is followed by a lossless source code that attempts to exploit the natural ordering of the BWT coefficients. BWT-based compression schemes are widely touted as low-complexity algorithms giving lossless coding rates better than those of the Ziv-Lempel codes (commonly known as LZ'77 and LZ'78) and almost as good as those achieved by prediction by partial matching (PPM) algorithms. To date, the coding performance claims have been made primarily on the basis of experimental results. This work gives a theoretical evaluation of BWT-based coding. The main results of this theoretical evaluation include: (1) statistical characterizations of the BWT output on both finite strings and sequences of length n → ∞, (2) a variety of very simple new techniques for BWT-based lossless source coding, and (3) proofs of the universality and bounds on the rates of convergence of both new and existing BWT-based codes for finite-memory and stationary ergodic sources. The end result is a theoretical justification and validation of the experimentally derived conclusions: BWT-based lossless source codes achieve universal lossless coding performance that converges to the optimal coding performance more quickly than the rate of convergence observed in Ziv-Lempel style codes and, for some BWT-based codes, within a constant factor of the optimal rate of convergence for finite-memory sources.
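The reversibility that the abstract relies on is easy to demonstrate. The sketch below is the O(n^2 log n) sorted-rotations teaching version of the BWT and its inverse (production coders use suffix arrays); the end-of-string sentinel choice is an assumption of this sketch.

```python
def bwt(s, eos="\0"):
    """Burrows-Wheeler transform via sorted rotations: append a sentinel,
    sort all cyclic rotations, and read off the last column."""
    s += eos
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(t, eos="\0"):
    """Invert the BWT by repeatedly prepending the transform column and
    re-sorting, rebuilding the rotation table column by column."""
    table = [""] * len(t)
    for _ in range(len(t)):
        table = sorted(c + row for c, row in zip(t, table))
    row = next(r for r in table if r.endswith(eos))
    return row[:-1]
```

The transform is a permutation of the input plus the sentinel; the downstream lossless source code then exploits the runs of identical symbols that the sorted-context last column tends to contain.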
Gate-Level Simulation of Quantum Circuits
While thousands of experimental physicists and chemists are currently trying
to build scalable quantum computers, it appears that simulation of quantum
computation will be at least as critical as circuit simulation in classical
VLSI design. However, since the work of Richard Feynman in the early 1980s,
little progress has been made in practical quantum simulation. Most researchers
focused on polynomial-time simulation of restricted types of quantum circuits
that fall short of the full power of quantum computation. Simulating quantum
computing devices and useful quantum algorithms on classical hardware now
requires excessive computational resources, making many important simulation
tasks infeasible. In this work we propose a new technique for gate-level
simulation of quantum circuits which greatly reduces the difficulty and cost of
such simulations. The proposed technique is implemented in a simulation tool
called the Quantum Information Decision Diagram (QuIDD) and evaluated by
simulating Grover's quantum search algorithm. The back-end of our package,
QuIDD Pro, is based on Binary Decision Diagrams, well-known for their ability
to efficiently represent many seemingly intractable combinatorial structures.
This reliance on a well-established area of research allows us to take
advantage of existing software for BDD manipulation and achieve unparalleled
empirical results for quantum simulation.
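For context on what such a simulator computes, the sketch below runs Grover's search on a dense state vector, the brute-force representation whose exponential size motivates the QuIDD compression (the decision-diagram back-end itself is not reproduced here; names and structure are illustrative, NumPy assumed).

```python
import numpy as np

def grover(n_qubits, marked):
    """Dense state-vector simulation of Grover search: the exponential-size
    computation a gate-level simulator performs, which QuIDDs compress by
    sharing repeated amplitude patterns in a decision diagram."""
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))       # uniform superposition
    for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):
        state[marked] *= -1.0                  # oracle: phase-flip the marked item
        state = 2.0 * state.mean() - state     # diffusion: inversion about the mean
    return state
```

After roughly (pi/4) sqrt(N) iterations the amplitude concentrates on the marked basis state, so measuring yields it with high probability; a structured state like this is exactly the kind of vector with few distinct amplitudes that a decision diagram represents compactly.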