Bayesian Updating, Model Class Selection and Robust Stochastic Predictions of Structural Response
A fundamental issue when predicting structural response by using mathematical models is how to treat both modeling and excitation uncertainty. A general framework for this is presented which uses probability as a multi-valued
conditional logic for quantitative plausible reasoning in the presence of uncertainty due to incomplete information. The
fundamental probability models that represent the structure’s uncertain behavior are specified by the choice of a stochastic
system model class: a set of input-output probability models for the structure and a prior probability distribution over this set
that quantifies the relative plausibility of each model. A model class can be constructed from a parameterized deterministic
structural model by stochastic embedding utilizing Jaynes’ Principle of Maximum Information Entropy. Robust predictive
analyses use the entire model class with the probabilistic predictions of each model being weighted by its prior probability, or if
structural response data is available, by its posterior probability from Bayes’ Theorem for the model class. Additional robustness
to modeling uncertainty comes from combining the robust predictions of each model class in a set of competing candidates
weighted by the prior or posterior probability of the model class, the latter being computed from Bayes' Theorem. This higher-level application of Bayes' Theorem automatically applies a quantitative Ockham's razor that penalizes the data-fit of more complex model classes that extract more information from the data. Robust predictive analyses involve integrals over high-dimensional spaces that usually must be evaluated numerically. Published applications have used Laplace's method of asymptotic approximation or Markov Chain Monte Carlo algorithms.
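The model-class comparison described above can be sketched numerically. The toy example below is our own illustration, not the paper's code: two hypothetical model classes (a one-parameter and a two-parameter Gaussian) are compared on synthetic response data; the evidence of each class is computed by brute-force grid integration, and Bayes' Theorem at the model-class level yields the posterior probabilities, with the prior widths supplying the Ockham penalty on the more complex class.

```python
import numpy as np

# Toy illustration (ours, not the paper's code): posterior probabilities
# for two competing model classes via Bayes' Theorem at the model-class
# level.  The evidence p(D|M) = integral of p(D|theta,M) p(theta|M) dtheta
# is computed by grid integration.

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=20)   # synthetic "response" data

def log_lik(mu, sigma, d):
    # Gaussian log-likelihood of the data under one parameter value
    return float(np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                        - (d - mu)**2 / (2 * sigma**2)))

def trapz(y, x):
    # Simple trapezoid-rule quadrature
    y = np.asarray(y, dtype=float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Model class M1: unknown mean, sigma fixed at 1 (simpler class)
mus = np.linspace(-5.0, 5.0, 401)
prior_mu = 1.0 / 10.0                            # uniform prior on [-5, 5]
ev1 = trapz(np.exp([log_lik(m, 1.0, data) for m in mus]) * prior_mu, mus)

# Model class M2: unknown mean and sigma (more complex class)
sigmas = np.linspace(0.2, 5.0, 201)
prior_joint = 1.0 / (10.0 * 4.8)                 # uniform on the rectangle
grid = np.exp([[log_lik(m, s, data) for s in sigmas] for m in mus])
inner = np.array([trapz(row * prior_joint, sigmas) for row in grid])
ev2 = trapz(inner, mus)

# Equal prior plausibility for the two classes; posterior via Bayes
post1, post2 = ev1 / (ev1 + ev2), ev2 / (ev1 + ev2)
```

The grid quadrature stands in for the Laplace or MCMC machinery the abstract mentions; it is only feasible because this toy parameter space is one- or two-dimensional.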
Maximum entropy, fluctuations and priors
The method of maximum entropy (ME) is extended to address the following
problem: Once one accepts that the ME distribution is to be preferred over all
others, the question is to what extent are distributions with lower entropy
supposed to be ruled out. Two applications are given. The first is to the
theory of thermodynamic fluctuations. The formulation is exact, covariant under
changes of coordinates, and allows fluctuations of both the extensive and the
conjugate intensive variables. The second application is to the construction of
an objective prior for Bayesian inference. The prior obtained by following the
ME method to its inevitable conclusion turns out to be a special case of what
are currently known under the name of entropic priors.
Comment: presented at MaxEnt 2000, the 20th International Workshop on Bayesian Inference and Maximum Entropy Methods (July 8-13, Gif-sur-Yvette, France)
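The core ME step, preferring the maximum-entropy distribution among all distributions satisfying the constraints, can be illustrated in a few lines. The example is our own hypothetical setup, not the paper's: on outcomes {1,...,6} with a prescribed mean, the ME distribution takes the Gibbs form p_k ∝ exp(-λk), and any other constraint-satisfying distribution has lower entropy.

```python
import numpy as np

# Hypothetical illustration (ours, not the paper's): on outcomes
# {1,...,6} with prescribed mean 4.5, the maximum-entropy distribution
# has Gibbs form p_k ∝ exp(-lam*k).  We solve for lam by bisection and
# verify that another distribution with the same mean has lower entropy.

vals = np.arange(1, 7, dtype=float)
target_mean = 4.5

def mean_of(lam):
    w = np.exp(-lam * vals)
    return float(np.sum(vals * w) / np.sum(w))

lo, hi = -5.0, 5.0            # mean_of is decreasing on this bracket
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_of(mid) > target_mean:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
p_me = np.exp(-lam * vals)
p_me /= p_me.sum()

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# A competitor with the same mean, concentrated on outcomes 3 and 6:
# it satisfies the constraint but carries strictly lower entropy.
q = np.zeros(6)
q[2], q[5] = 0.5, 0.5
```

The entropy gap between `p_me` and `q` is exactly the quantity the abstract asks about: how strongly lower-entropy distributions that satisfy the same constraints should be discounted.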
Quantum Probabilities as Behavioral Probabilities
We demonstrate that behavioral probabilities of human decision makers share
many common features with quantum probabilities. This does not imply that
humans are some quantum objects, but just shows that the mathematics of quantum
theory is applicable to the description of human decision making. The
applicability of quantum rules for describing decision making is connected with
the nontrivial process of making decisions in the case of composite prospects
under uncertainty. Such a process involves deliberations of a decision maker
when making a choice. In addition to the evaluation of the utilities of
considered prospects, real decision makers also appreciate their respective
attractiveness. Therefore, human choice is not based solely on the utility of
prospects, but includes the necessity of resolving the utility-attraction
duality. In order to justify that human consciousness really functions
similarly to the rules of quantum theory, we develop an approach defining human
behavioral probabilities as the probabilities determined by quantum rules. We
show that quantum behavioral probabilities not only explain
qualitatively how human decisions are made, but also predict the quantitative
values of those probabilities. Analyzing a large set of empirical
data, we find good quantitative agreement between theoretical predictions and
observed experimental data.
Comment: LaTeX file, 32 pages
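The utility-attraction duality can be caricatured in a few lines. Everything in this sketch is our own hypothetical choice rather than the authors' calibration: the Luce-style utility factors, the prospect names, and the ±0.25 attraction values echoing quantum decision theory's "quarter law".

```python
import math

# Caricature of the utility-attraction duality: behavioral probability
# p(A) = f(A) + q(A), a utility factor plus an attraction factor that
# sums to zero over the prospect set.  The Luce-style utility factors,
# the prospect names, and the ±0.25 attraction values (echoing QDT's
# "quarter law") are all hypothetical choices, not the authors'.

utilities = {"A": 2.0, "B": 1.0}     # hypothetical prospect utilities

total = sum(math.exp(u) for u in utilities.values())
f = {k: math.exp(u) / total for k, u in utilities.items()}   # utility factors

q = {"A": -0.25, "B": 0.25}          # attraction factors ("quarter law")
assert abs(sum(q.values())) < 1e-12  # alternation: attractions sum to zero

p = {k: f[k] + q[k] for k in utilities}   # behavioral probabilities
```

Here B's attractiveness outweighs A's higher utility, so p(B) > p(A) even though f(A) > f(B), the kind of utility-attraction resolution the abstract describes.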
Lightweight Probabilistic Deep Networks
Even though probabilistic treatments of neural networks have a long history,
they have not found widespread use in practice. Sampling approaches are often
too slow even for simple networks. The size of the inputs and the depth of
typical CNN architectures in computer vision only compound this problem.
Uncertainty in neural networks has thus been largely ignored in practice,
despite the fact that it may provide important information about the
reliability of predictions and the inner workings of the network. In this
paper, we introduce two lightweight approaches to making supervised learning
with probabilistic deep networks practical: First, we suggest probabilistic
output layers for classification and regression that require only minimal
changes to existing networks. Second, we employ assumed density filtering and
show that activation uncertainties can be propagated in a practical fashion
through the entire network, again with minor changes. Both probabilistic
networks retain the predictive power of the deterministic counterpart, but
yield uncertainties that correlate well with the empirical error induced by
their predictions. Moreover, the robustness to adversarial examples is
significantly increased.
Comment: To appear at CVPR 201
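The second ingredient, propagating activation uncertainties by assumed density filtering, can be sketched for a single linear+ReLU layer. The closed-form ReLU moments below are standard Gaussian results; the two-unit toy network, its weights, and the independence assumption across activations are our own illustrative choices, not the paper's architecture.

```python
import numpy as np
from math import erf, exp, pi, sqrt

# Sketch of assumed density filtering through one linear+ReLU layer:
# each activation is kept as a Gaussian N(mean, var) and the layer's
# output distribution is moment-matched back to a Gaussian.

def phi(x):   # standard normal pdf
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def Phi(x):   # standard normal cdf
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def relu_moments(mean, var):
    # Closed-form mean/variance of max(x, 0) for x ~ N(mean, var)
    s = sqrt(var)
    a = mean / s
    m_out = mean * Phi(a) + s * phi(a)
    second = (mean * mean + var) * Phi(a) + mean * s * phi(a)
    return m_out, max(second - m_out * m_out, 0.0)

def linear_moments(m, v, W, b):
    # Treating activations as independent Gaussians:
    # output mean W m + b, output variance (W**2) v
    return W @ m + b, (W ** 2) @ v

m, v = np.array([0.0, 1.0]), np.array([1.0, 0.5])      # input distribution
W, b = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
m, v = linear_moments(m, v, W, b)                      # pre-activations
m, v = map(np.array, zip(*(relu_moments(mi, vi) for mi, vi in zip(m, v))))
```

Chaining `linear_moments` and `relu_moments` layer by layer is the "propagated in a practical fashion through the entire network" step: one forward pass carries a mean and a variance per activation instead of a point value.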
Statistical mechanics of error exponents for error-correcting codes
Error exponents characterize the exponential decay, when increasing message
length, of the probability of error of many error-correcting codes. To tackle
the long-standing problem of computing them exactly, we introduce a general,
thermodynamic, formalism that we illustrate with maximum-likelihood decoding of
low-density parity-check (LDPC) codes on the binary erasure channel (BEC) and
the binary symmetric channel (BSC). In this formalism, we apply the cavity
method for large deviations to derive expressions for both the average and
typical error exponents, which differ by the procedure used to select the codes
from specified ensembles. When decreasing the noise intensity, we find that two
phase transitions take place, at two different levels: a glass to ferromagnetic
transition in the space of codewords, and a paramagnetic to glass transition in
the space of codes.
Comment: 32 pages, 13 figures
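The exponential decay that error exponents quantify can be seen in a toy setting far simpler than the LDPC ensembles analyzed in the paper. The stand-in below is our own illustration, not the cavity-method computation: an n-fold repetition code on the BSC(p) under majority decoding fails with probability roughly exp(-n·E(p)), where the exponent E(p) = D(1/2‖p) is a binary Kullback-Leibler divergence.

```python
import math

# Toy stand-in (ours, far simpler than the paper's LDPC ensembles) for
# how an error exponent governs decay: an n-fold repetition code on the
# BSC(p) with majority decoding has P_err ~ exp(-n * E(p)),
# with E(p) = D(1/2 || p), a binary KL divergence.

def kl(a, b):
    # Binary Kullback-Leibler divergence D(a || b)
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

def block_error(n, p):
    # Probability that more than half of n transmitted copies flip
    # (n odd, so there are no ties in the majority vote)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

p = 0.1
E = kl(0.5, p)
# Finite-n estimates -ln(P_err)/n approach the exponent E as n grows
est = [-math.log(block_error(n, p)) / n for n in (11, 51, 201)]
```

The finite-n estimates converge to E from above because of subexponential prefactors, which is exactly why computing exponents exactly, rather than reading them off finite-size numerics, is the hard problem the abstract addresses.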
Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches
In the past two decades, functional Magnetic Resonance Imaging has been used
to relate neuronal network activity to cognitive processing and behaviour.
Recently this approach has been augmented by algorithms that allow us to infer
causal links between component populations of neuronal networks. Multiple
inference procedures have been proposed to approach this research question but
so far, each method has limitations when it comes to establishing whole-brain
connectivity patterns. In this work, we discuss eight ways to infer causality
in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality,
Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and
Transfer Entropy. We conclude with recommendations for future
directions in this area.
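One of the eight approaches listed above, Granger causality, can be sketched in a minimal bivariate form. This is our own toy simulation, not the review's code: a series x "Granger-causes" y if adding lagged x to an autoregressive model of y reduces the residual variance.

```python
import numpy as np

# Minimal Granger-causality sketch on simulated data: y is driven by
# past x, so lagged x should improve the prediction of y but not vice
# versa.  The AR coefficients and noise level are illustrative choices.

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)                       # driver series
y = np.zeros(n)
for t in range(1, n):                        # y depends on past x
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def resid_ms(target, regressors):
    # Mean squared residual of an ordinary least-squares fit
    beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    r = target - regressors @ beta
    return float(r @ r) / len(r)

# Granger index: log ratio of restricted to full residual variance
gc_x_to_y = np.log(resid_ms(y[1:], np.column_stack([y[:-1]])) /
                   resid_ms(y[1:], np.column_stack([y[:-1], x[:-1]])))
gc_y_to_x = np.log(resid_ms(x[1:], np.column_stack([x[:-1]])) /
                   resid_ms(x[1:], np.column_stack([x[:-1], y[:-1]])))
```

With this construction the x→y index should come out large and the y→x index near zero, recovering the simulated direction of influence; the whole-brain limitation the review discusses arises when such pairwise tests are scaled to many interacting regions.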
Automating Contract Negotiation
The automation of contract negotiation requires intelligent agents that can assimilate and use real-time information flows wisely. Electronic markets are information-rich with access to the Internet and the World Wide Web. A new breed o