Updating beliefs with incomplete observations
Currently, there is renewed interest in the problem, raised by Shafer in
1985, of updating probabilities when observations are incomplete. This is a
fundamental problem in general, and of particular interest for Bayesian
networks. Recently, Grünwald and Halpern have shown that commonly used updating
strategies fail in this case, except under very special assumptions. In this
paper we propose a new method for updating probabilities with incomplete
observations. Our approach is deliberately conservative: we make no assumptions
about the so-called incompleteness mechanism that associates complete with
incomplete observations. We model our ignorance about this mechanism by a
vacuous lower prevision, a tool from the theory of imprecise probabilities, and
we use only coherence arguments to turn prior into posterior probabilities. In
general, this new approach to updating produces lower and upper posterior
probabilities and expectations, as well as partially determinate decisions.
This is a logical consequence of the existing ignorance about the
incompleteness mechanism. We apply the new approach to the problem of
classification of new evidence in probabilistic expert systems, where it leads
to a new, so-called conservative updating rule. In the special case of Bayesian
networks constructed using expert knowledge, we provide an exact algorithm for
classification based on our updating rule, which has linear-time complexity for
a class of networks wider than polytrees. This result is then extended to the
more general framework of credal networks, where computations are often much
harder than with Bayesian nets. Using an example, we show that our rule appears
to provide a solid basis for reliable updating with incomplete observations,
when no strong assumptions about the incompleteness mechanism are justified.
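To convey the flavour of the bounds in symbols (a schematic sketch consistent with the abstract, not the paper's formal statement): model the incomplete observation as the set O of complete observations that could have produced it. Making no assumption about which o in O the mechanism would report, coherence only licenses the bounds obtained by ranging over all of them,

    \underline{P}(A \mid O) \;=\; \min_{o \in O} P(A \mid o),
    \qquad
    \overline{P}(A \mid O) \;=\; \max_{o \in O} P(A \mid o),

and whenever the two bounds disagree about the ranking of candidate classes, the resulting decision is only partially determinate, as described above.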
Efficient, Near Complete and Often Sound Hybrid Dynamic Data Race Prediction (extended version)
Dynamic data race prediction aims to identify races based on a single program
run represented by a trace. The challenge is to remain efficient while being as
sound and as complete as possible. Efficient means linear run-time, since
otherwise the method is unlikely to scale to real-world programs. We introduce an
efficient, near complete and often sound dynamic data race prediction method
that combines the lockset method with several improvements made in the area of
happens-before methods. By near complete we mean that the method is complete in
theory but for efficiency reasons the implementation applies some optimizations
that may result in incompleteness. The method can be shown to be sound for two
threads but is unsound in general. We provide extensive experimental data that
shows that our method works well in practice.
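As a rough illustration of the lockset ingredient that such hybrid predictors build on (the classic Eraser-style discipline; this is not the authors' algorithm, and find_race_candidates is a hypothetical helper), a single pass over a trace might look like this:

    # Eraser-style lockset check: report two accesses to the same variable
    # from different threads, at least one a write, with disjoint locksets.
    from collections import defaultdict

    def find_race_candidates(trace):
        """trace: iterable of (thread, op, arg) events, where op is
        'acquire'/'release' with a lock id, or 'read'/'write' with a
        variable id."""
        held = defaultdict(set)        # thread -> locks currently held
        accesses = defaultdict(list)   # variable -> [(thread, op, lockset)]
        candidates = []
        for tid, op, arg in trace:
            if op == 'acquire':
                held[tid].add(arg)
            elif op == 'release':
                held[tid].discard(arg)
            else:  # 'read' or 'write'
                for prev_tid, prev_op, prev_ls in accesses[arg]:
                    if (prev_tid != tid and 'write' in (op, prev_op)
                            and not (prev_ls & held[tid])):
                        candidates.append(((prev_tid, prev_op), (tid, op), arg))
                accesses[arg].append((tid, op, frozenset(held[tid])))
        return candidates

Written this naively, the inner loop is quadratic; a real tool keeps a bounded per-variable summary (and layers happens-before information on top) to stay linear in the trace length, which is exactly the efficiency concern the abstract raises.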
Utility indifference pricing with market incompleteness
Utility indifference pricing and hedging theory is presented, showing
how it leads to linear or to non-linear pricing rules for contingent
claims. Convex duality is first used to derive probabilistic
representations for exponential utility-based prices, in a general
setting with locally bounded semi-martingale price processes. The
indifference price for a finite number of claims gives a non-linear
pricing rule, which reduces to a linear pricing rule as the number of
claims tends to zero, resulting in the so-called marginal
utility-based price of the claim. Applications to basis risk models
with lognormal price processes, under full and partial information
scenarios are then worked out in detail. In the full information case,
a claim on a non-traded asset is priced and hedged using a correlated
traded asset. The resulting hedge requires knowledge of the drift
parameters of the asset price processes, which are very difficult to
estimate with any precision. This leads naturally to a further
application, a partial information problem, with the drift parameters
assumed to be random variables whose values are revealed to the hedger
in a Bayesian fashion via a filtering algorithm. The indifference
price is given by the solution to a non-linear PDE, reducing to a
linear PDE for the marginal price when the number of claims becomes
infinitesimally small.
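In symbols, using the standard exponential-utility setup rather than the paper's exact notation: with U(x) = -e^{-\gamma x} and value function

    V(x, k) \;=\; \sup_{\theta} \mathbb{E}\!\left[ -\exp\!\big( -\gamma \, ( X_T^{x,\theta} + k\, C ) \big) \right]

for terminal wealth X_T^{x,\theta} and k units of a claim C, the buyer's indifference price p(k) is defined by

    V(x - p(k), k) \;=\; V(x, 0),

which is non-linear in k in general; the marginal utility-based price \hat{p} = \lim_{k \to 0} p(k)/k recovers the linear rule mentioned above (for exponential utility it is the expectation of C under the minimal-entropy martingale measure, a standard result in this literature).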
The VIMOS-VLT Deep Survey. The dependence of clustering on galaxy stellar mass at z~1
Aims: We use the VVDS-Deep first-epoch data to measure the dependence of
galaxy clustering on galaxy stellar mass, at z~0.85.
Methods: We measure the projected correlation function wp(rp) for sub-samples
with 0.5<z<1.2 covering different mass ranges between 10^9 and 10^11 Msun. We
quantify in detail the observational selection biases using 40 mock catalogues
built from the Millennium run and semi-analytic models.
Results: Our simulations indicate that serious incompleteness in mass is
present only for log(M/Msun)<9.5. In the mass range log(M/Msun)=[9.0-9.5], the
photometric selection function of the VVDS misses two-thirds of the galaxies. The
sample is virtually 100% complete above 10^10 Msun. We present the first direct
evidence for a clear dependence of clustering on the galaxy stellar mass at
z~0.85. The clustering length increases from r0 ~ 2.76 h^-1 Mpc for galaxies
with mass M>10^9 Msun to r0 ~ 4.28 h^-1 Mpc for galaxies more massive than
10^10.5 Msun. At the same time, the slope increases from ~ 1.67 to ~ 2.28.
A comparison of the observed wp(rp) to local measurements by the SDSS shows
that the evolution is faster for objects less massive than ~10^10.5 Msun. This
is interpreted as a stronger redshift dependence of the linear bias b_L for
the more massive objects. While for the most massive galaxies b_L decreases
from 1.5+/-0.2 at z~0.85 to 1.33+/-0.03 at z~0.15, the less massive population
maintains a virtually constant value b_L~1.3. This result is in agreement with
a scenario in which more massive galaxies formed at high redshift in the
highest peaks of the density field, while less massive objects form at later
epochs from the more general population of dark-matter halos.
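For reference, the quoted values of r0 and the slope follow from the standard power-law treatment of the projected correlation function: with \xi(r) = (r/r_0)^{-\gamma},

    w_p(r_p) \;=\; 2 \int_0^{\infty} \xi\!\left( \sqrt{r_p^2 + \pi^2} \right) \mathrm{d}\pi
    \;=\; r_p \left( \frac{r_0}{r_p} \right)^{\gamma}
    \frac{ \Gamma(1/2)\, \Gamma\!\left( (\gamma - 1)/2 \right) }{ \Gamma(\gamma/2) },

so a power-law fit to the measured w_p(r_p) yields the correlation length r_0 and slope \gamma directly.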
Graph-Based Decoding Model for Functional Alignment of Unaligned fMRI Data
Aggregating multi-subject functional magnetic resonance imaging (fMRI) data
is indispensable for generating valid and general inferences from patterns
distributed across human brains. The disparities in anatomical structures and
functional topographies of human brains warrant aligning fMRI data across
subjects. However, existing functional alignment methods handle many of today's
fMRI datasets poorly, especially when the data are not temporally aligned, i.e.,
when some subjects lack responses to certain stimuli or different subjects
follow different stimulus sequences. In this paper, a cross-subject graph that
depicts the (dis)similarities between samples across subjects is used as prior
information for developing a more flexible framework that suits an assortment
of fMRI datasets.
However, the high dimension of fMRI data and the use of multiple subjects make
the crude framework time-consuming or impractical. To address this issue, we
further regularize the framework so that a feasible kernel-based optimization,
which permits nonlinear feature extraction, can be derived. Specifically, a
low-dimension assumption is imposed on each new feature space to avoid
overfitting caused by the high-spatial, low-temporal resolution of fMRI data.
Experimental results on five
datasets suggest that the proposed method is not only superior to several
state-of-the-art methods on temporally-aligned fMRI data, but also suitable for
dealing with temporally-unaligned fMRI data.
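To make the cross-subject graph concrete, a minimal sketch (a hypothetical helper, not the paper's implementation) connects samples from different subjects that share a stimulus label, which is the kind of sample-level (dis)similarity the framework can consume even when subjects saw different stimulus sequences:

    import numpy as np

    def cross_subject_graph(labels):
        """labels: list of 1-D integer arrays, one per subject, giving the
        stimulus label of each time sample. Returns the adjacency matrix W
        over all pooled samples, with W[a, b] = 1 exactly when samples a
        and b come from different subjects but share a stimulus label."""
        subj = np.concatenate([np.full(len(l), i) for i, l in enumerate(labels)])
        lab = np.concatenate(labels)
        same_label = lab[:, None] == lab[None, :]
        diff_subject = subj[:, None] != subj[None, :]
        return (same_label & diff_subject).astype(float)

    # Two subjects who saw overlapping but differently ordered stimuli:
    W = cross_subject_graph([np.array([0, 1, 2]), np.array([2, 0])])

A graph of this kind replaces strict sample-to-sample temporal correspondence, and kernelizing the features on top of it gives the nonlinear extraction step described above.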