Expert Elicitation for Reliable System Design
This paper reviews the role of expert judgement to support reliability
assessments within the systems engineering design process. Generic design
processes are described to give the context and a discussion is given about the
nature of the reliability assessments required in the different systems
engineering phases. It is argued that, as far as meeting reliability
requirements is concerned, the whole design process is more akin to a
statistical control process than to a straightforward statistical problem of
assessing an unknown distribution. This leads to features of the expert
judgement problem in the design context which are substantially different from
those seen, for example, in risk assessment. In particular, the role of experts
in problem structuring and in developing failure mitigation options is much
more prominent, and there is a need to take into account the reliability
potential for future mitigation measures downstream in the system life cycle.
An overview is given of the stakeholders typically involved in large scale
systems engineering design projects, and this is used to argue the need for
methods that expose potential judgemental biases in order to generate analyses
that can be said to provide rational consensus about uncertainties. Finally, a
number of key points are developed with the aim of moving toward a framework
that provides a holistic method for tracking reliability assessment through the
design process.

Comment: This paper was commented on in [arXiv:0708.0285], [arXiv:0708.0287], and [arXiv:0708.0288]; rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
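One common way to move from individual expert judgements toward the kind of rational consensus the abstract calls for is opinion pooling. The sketch below is purely illustrative (the function, the probabilities, and the weights are our assumptions, not taken from the paper) and shows a linear opinion pool over expert failure-probability assessments, where weights could in practice come from calibration questions designed to expose judgemental biases.

```python
# Illustrative sketch only: a linear opinion pool for expert reliability
# judgements. Values and weights are hypothetical, not from the paper.

def linear_opinion_pool(expert_probs, weights):
    """Aggregate expert failure-probability assessments by weighted average."""
    if len(expert_probs) != len(weights):
        raise ValueError("one weight per expert required")
    total = sum(weights)
    return sum(p * w for p, w in zip(expert_probs, weights)) / total

# Three experts assess the probability that a subsystem fails within warranty;
# the weights stand in for calibration scores that expose judgemental bias.
pooled = linear_opinion_pool([0.02, 0.05, 0.03], [1.0, 0.5, 1.5])
```
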
Active Collaborative Ensemble Tracking
A discriminative ensemble tracker employs multiple classifiers, each of which
casts a vote on all of the obtained samples. The votes are then aggregated in
an attempt to localize the target object. Such a method relies on the collective
competence and the diversity of the ensemble to approach the target/non-target
classification task from different views. However, by updating the whole
ensemble using a shared set of samples and their final labels, such diversity
is lost or reduced to the diversity provided by the underlying features or
internal classifiers' dynamics. Additionally, the classifiers do not exchange
information with each other while striving to serve the collective goal, i.e.,
better classification. In this study, we propose an active collaborative
information exchange scheme for ensemble tracking. This not only orchestrates
the different classifiers toward a common goal but also provides an intelligent
update mechanism to keep the diversity of classifiers and to mitigate the
shortcomings of one with the others. The data exchange is optimized with regard
to an ensemble uncertainty utility function, and the ensemble is updated via
co-training. The evaluations demonstrate promising results for the proposed
algorithm on real-world online tracking.

Comment: AVSS 2017 Submission
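The voting-and-aggregation step described above can be sketched in a few lines. This is a minimal stand-in, not the paper's tracker: the classifiers are random linear scorers, and the aggregation is a plain sum rather than an exchange optimized against an ensemble uncertainty utility.

```python
import numpy as np

# Minimal sketch of discriminative ensemble voting (illustrative only):
# each classifier scores every candidate sample, and the votes are
# aggregated to localize the target among the candidates.

rng = np.random.default_rng(0)
features = rng.normal(size=(5, 4))                 # 5 candidate samples, 4-d features
ensemble = [rng.normal(size=4) for _ in range(3)]  # 3 linear voters (stand-ins)

votes = np.stack([features @ w for w in ensemble])  # shape (3, 5): member x sample
target_index = int(votes.sum(axis=0).argmax())      # aggregated localization
```

In the paper, the update of each member after this vote is where the collaborative exchange and co-training enter; here the loop stops at localization.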
Generalized belief change with imprecise probabilities and graphical models
We provide a theoretical investigation of probabilistic belief revision in complex frameworks, under extended conditions of uncertainty, inconsistency and imprecision. We motivate our kinematical approach by specializing our discussion to probabilistic reasoning with graphical models, whose modular representation allows for efficient inference. Most results in this direction derive from the relevant work of Chan and Darwiche (2005), which first proved the inter-reducibility of virtual and probabilistic evidence. These forms of information, deeply distinct in their meaning, are extended to the conditional and imprecise frameworks, allowing further generalizations, e.g. to experts' qualitative assessments. Belief aggregation and iterated revision of a rational agent's beliefs are also explored.
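The inter-reducibility of virtual and probabilistic evidence can be illustrated on a single binary variable. The sketch below is our own toy example, not code from the paper: virtual evidence reweights the prior by likelihoods (Pearl's method), and choosing the likelihood ratio appropriately reproduces a Jeffrey-style update that fixes the posterior at a target value.

```python
# Toy illustration of virtual vs probabilistic evidence on one binary
# variable, in the spirit of Chan and Darwiche (2005). Names are ours.

def virtual_evidence(prior, like_true, like_false):
    """Pearl-style virtual evidence: reweight the prior by likelihoods."""
    num = prior * like_true
    return num / (num + (1.0 - prior) * like_false)

def jeffrey_to_virtual(prior, target):
    """Likelihood ratio for which virtual evidence reproduces Jeffrey's
    rule, i.e. fixes the posterior at `target` (inter-reducibility)."""
    return (target / (1.0 - target)) * ((1.0 - prior) / prior)

p = 0.3
ratio = jeffrey_to_virtual(p, 0.8)
posterior = virtual_evidence(p, ratio, 1.0)   # recovers the target 0.8
```
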
A general framework for quantifying the effects of land-use history on ecosystem dynamics
Land-use legacies are important for explaining present-day ecological patterns and processes. However, an overarching approach to quantify land-use history effects on ecosystem properties is lacking, mainly due to the scarcity of high-quality, complete and detailed data on past land use. We propose a general framework for quantifying the effects of land-use history on ecosystem properties, which is applicable (i) to different ecological processes in various ecosystem types and across trophic levels; and (ii) when historical data are incomplete or of variable quality.
The conceptual foundation of our framework is that past land use affects current (and future) ecosystem properties through altering the past values of resources and conditions that are the driving variables of ecosystem responses. We describe and illustrate how Markov chains can be applied to derive past time series of driving variables, and how these time series can be used to improve our understanding of present-day ecosystem properties.
We present our framework in a stepwise manner, elucidating its general nature. We illustrate its application through a case study on the importance of past light levels for the contemporary understorey composition of temperate deciduous forest. We found that the understorey shows legacies of past forest management: high past light availability led to a low proportion of typical forest species in the understorey. Our framework can be a useful tool for quantifying the effect of past land use on ecological patterns and processes and for enhancing our understanding of ecosystem dynamics by including legacy effects, which have often been ignored.
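The Markov-chain step of the framework can be sketched as follows. The states, transition probabilities, and light values below are all assumed for illustration, not taken from the case study: a chain over past management states is simulated, and each state is mapped to a driving variable such as understorey light availability.

```python
import numpy as np

# Hypothetical sketch: deriving a past time series of a driving variable
# (light availability) from a Markov chain over management states.
# All states and probabilities are illustrative assumptions.

states = ["coppice", "high_forest"]
P = np.array([[0.7, 0.3],     # transition probabilities per decade
              [0.1, 0.9]])    # (illustrative values)

rng = np.random.default_rng(1)
state = 0                     # start as coppice
series = [states[state]]
for _ in range(9):            # simulate ten decades in total
    state = rng.choice(2, p=P[state])
    series.append(states[state])

# Map each management state to a driving variable of the ecosystem response.
light = {"coppice": 0.6, "high_forest": 0.2}
light_series = [light[s] for s in series]
```

Such reconstructed series of driving variables can then be fed into a response model for the present-day ecosystem property of interest.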
Iterative Amortized Inference
Inference models are a key component in scaling variational inference to deep
latent variable models, most notably as encoder networks in variational
auto-encoders (VAEs). By replacing conventional optimization-based inference
with a learned model, inference is amortized over data examples and therefore
more computationally efficient. However, standard inference models are
restricted to direct mappings from data to approximate posterior estimates. The
failure of these models to reach fully optimized approximate posterior
estimates results in an amortization gap. We aim toward closing this gap by
proposing iterative inference models, which learn to perform inference
optimization through repeatedly encoding gradients. Our approach generalizes
standard inference models in VAEs and provides insight into several empirical
findings, including top-down inference techniques. We demonstrate the inference
optimization capabilities of iterative inference models and show that they
outperform standard inference models on several benchmark data sets of images
and text.

Comment: International Conference on Machine Learning (ICML) 201
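The core loop of iterative inference, refining a posterior estimate by repeatedly encoding gradients of the variational objective, can be illustrated on a toy model. In this sketch the learned update network is replaced by a fixed scalar gain, so it shows only the iterative loop, not the trained model from the paper; the model and numbers are our assumptions.

```python
# Toy sketch of iterative inference: refine an approximate posterior mean
# by repeatedly encoding the gradient of the objective. The "learned"
# update is replaced by a fixed gain for illustration.

x = 2.0      # observation; model: x ~ N(z, 1), prior z ~ N(0, 1)
mu = 0.0     # current approximate posterior mean (direct encoder's guess)
gain = 0.3   # stand-in for the learned encoding of gradients

for _ in range(20):
    grad = (x - mu) - mu      # gradient of the log joint w.r.t. mu
    mu = mu + gain * grad     # one iterative refinement step

# The exact posterior mean here is x / 2 = 1.0; the loop converges to it,
# closing the gap a single direct mapping might leave.
```

A trained iterative inference model would replace the fixed gain with a network that maps gradients (and the current estimate) to updates, amortizing this optimization across data examples.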
A probabilistic reasoning and learning system based on Bayesian belief networks
SIGLE. Available from British Library Document Supply Centre, DSC:DX173015 / BLDSC - British Library Document Supply Centre, GB, United Kingdom.