A Possibilistic and Probabilistic Approach to Precautionary Saving
This paper proposes two mixed models to study a consumer's optimal saving in
the presence of two types of risk. Comment: Panoeconomicus, 201
Induction of Interpretable Possibilistic Logic Theories from Relational Data
The field of Statistical Relational Learning (SRL) is concerned with learning
probabilistic models from relational data. Learned SRL models are typically
represented using some kind of weighted logical formulas, which make them
considerably more interpretable than those obtained by e.g. neural networks. In
practice, however, these models are often still difficult to interpret
correctly, as they can contain many formulas that interact in non-trivial ways
and weights do not always have an intuitive meaning. To address this, we
propose a new SRL method which uses possibilistic logic to encode relational
models. Learned models are then essentially stratified classical theories,
which explicitly encode what can be derived with a given level of certainty.
Compared to Markov Logic Networks (MLNs), our method is faster and produces
considerably more interpretable models. Comment: Longer version of a paper appearing in IJCAI 201
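The stratified reading described above can be illustrated with a small sketch. Assuming, hypothetically, Horn-style rules grouped by necessity weight, an atom is derivable with certainty at least α whenever forward chaining over all formulas of weight ≥ α reaches it (names and weights here are illustrative, not from the paper):

```python
# Sketch: inference in a stratified possibilistic Horn theory.
# Each formula is (weight, body, head): the body atoms imply the
# head with necessity degree `weight`. Illustrative only.

def derivable(theory, facts, levels):
    """For each certainty level, forward-chain over the formulas
    whose weight is at least that level; atoms reached at level
    alpha are derivable with certainty at least alpha."""
    result = {}
    for alpha in levels:
        known = set(facts)
        rules = [(body, head) for w, body, head in theory if w >= alpha]
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if head not in known and all(a in known for a in body):
                    known.add(head)
                    changed = True
        result[alpha] = known
    return result

# Toy theory: the lower stratum adds a defeasible default rule.
theory = [
    (0.9, ("bird",), "has_wings"),   # near-certain
    (0.6, ("bird",), "flies"),       # default, lower certainty
]
print(derivable(theory, {"bird"}, [0.9, 0.6]))
```

At level 0.9 only `has_wings` is derivable; dropping to 0.6 also yields `flies`, which is the sense in which the strata "explicitly encode what can be derived with a given level of certainty."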
The Inflation Technique for Causal Inference with Latent Variables
The problem of causal inference is to determine if a given probability
distribution on observed variables is compatible with some causal structure.
The difficult case is when the causal structure includes latent variables. We
here introduce the inflation technique for tackling this problem. An
inflation of a causal structure is a new causal structure that can contain
multiple copies of each of the original variables, but where the ancestry of
each copy mirrors that of the original. To every distribution of the observed
variables that is compatible with the original causal structure, we assign a
family of marginal distributions on certain subsets of the copies that are
compatible with the inflated causal structure. It follows that compatibility
constraints for the inflation can be translated into compatibility constraints
for the original causal structure. Even if the constraints at the level of
inflation are weak, such as observable statistical independences implied by
disjoint causal ancestry, the translated constraints can be strong. We apply
this method to derive new inequalities whose violation by a distribution
witnesses that distribution's incompatibility with the causal structure (of
which Bell inequalities and Pearl's instrumental inequality are prominent
examples). We describe an algorithm for deriving all such inequalities for the
original causal structure that follow from ancestral independences in the
inflation. For three observed binary variables with pairwise common causes, it
yields inequalities that are stronger in at least some aspects than those
obtainable by existing methods. We also describe an algorithm that derives a
weaker set of inequalities but is more efficient. Finally, we discuss which
inflations are such that the inequalities one obtains from them remain valid
even for quantum (and post-quantum) generalizations of the notion of a causal
model. Comment: Minor final corrections, updated to match the published version as
closely as possible
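The copy-with-mirrored-ancestry construction can be sketched directly. Assuming a DAG given as a parent map and a chosen number of copies per variable, the sketch below wires copy i of each variable to copy i of its original parents; this is only the simplest valid inflation (useful inflations typically rewire which copies feed which), but the ancestry of every copy mirrors the original, as required:

```python
# Sketch: build a simple inflation of a causal DAG (illustrative).
# `dag` maps each variable to a tuple of parents; latent and
# observed variables are treated alike.

def inflate(dag, n_copies):
    """Return an inflated DAG with n_copies of each variable.
    Here copy i of a variable gets copy i of each original parent,
    so each copy's ancestry mirrors the original's. More useful
    inflations mix copy indices across parents."""
    inflated = {}
    for i in range(1, n_copies + 1):
        for var, parents in dag.items():
            inflated[f"{var}_{i}"] = tuple(f"{p}_{i}" for p in parents)
    return inflated

# Triangle scenario: three observed binary variables with pairwise
# latent common causes, as discussed in the abstract.
dag = {
    "LAB": (), "LBC": (), "LCA": (),
    "A": ("LAB", "LCA"),
    "B": ("LAB", "LBC"),
    "C": ("LBC", "LCA"),
}
print(inflate(dag, 2)["A_2"])  # parents of the second copy of A
```

Compatibility constraints are then read off from the inflated graph, e.g. from which copies have disjoint causal ancestry.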
Valid and efficient imprecise-probabilistic inference with partial priors, III. Marginalization
As Basu (1977) writes, "Eliminating nuisance parameters from a model is
universally recognized as a major problem of statistics," but after more than
50 years since Basu wrote these words, the two mainstream schools of thought in
statistics have yet to solve the problem. Fortunately, the two mainstream
frameworks aren't the only options. This series of papers rigorously develops a
new and very general inferential model (IM) framework for
imprecise-probabilistic statistical inference that is provably valid and
efficient, while simultaneously accommodating incomplete or partial prior
information about the relevant unknowns when it's available. The present paper,
Part III in the series, tackles the marginal inference problem. Part II showed
that, for parametric models, the likelihood function naturally plays a central
role and, here, when nuisance parameters are present, the same principles
suggest that the profile likelihood is the key player. When the likelihood
factors nicely, so that the interest and nuisance parameters are perfectly
separated, the valid and efficient profile-based marginal IM solution is
immediate. But even when the likelihood doesn't factor nicely, the same
profile-based solution remains valid and leads to efficiency gains. This is
demonstrated in several examples, including the famous Behrens--Fisher and
gamma mean problems, where I claim the proposed IM solution is the best
solution available. Remarkably, the same profiling-based construction offers
validity guarantees in the prediction and non-parametric inference problems.
Finally, I show how a broader view of this new IM construction can handle
non-parametric inference on risk minimizers and makes a connection between
non-parametric IMs and conformal prediction. Comment: Follow-up to arXiv:2211.14567. Feedback welcome at
https://researchers.one/articles/23.09.0000
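The central role of the profile likelihood can be seen in the simplest nuisance-parameter setting. Assuming a normal model with interest parameter μ and nuisance σ (an illustration, not the paper's construction), maximizing over σ in closed form gives the profile log-likelihood l_p(μ) = -(n/2) log(Σ(x_i - μ)²/n) up to an additive constant, which peaks at the sample mean:

```python
import math

# Sketch: profile log-likelihood for a normal mean, with the
# nuisance variance profiled out in closed form (illustrative).

def profile_loglik(mu, data):
    """max over sigma of the normal log-likelihood, up to an
    additive constant; the maximizing sigma^2 is the mean
    squared deviation of the data around mu."""
    n = len(data)
    s2 = sum((x - mu) ** 2 for x in data) / n
    return -0.5 * n * math.log(s2)

data = [4.1, 5.3, 4.8, 5.0, 4.6]
grid = [3 + 0.01 * k for k in range(400)]
best = max(grid, key=lambda m: profile_loglik(m, data))
print(best)  # maximized at (a grid point essentially equal to) the sample mean
```

An IM construction then treats l_p, in place of the full likelihood, as the basis for marginal inference about μ.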
A possibilistic framework for constraint-based metabolic flux analysis
Background: Constraint-based models allow the calculation of the metabolic flux states that can be exhibited by cells, standing out as a powerful analytical tool, but they do not determine which of these states are likely to exist under given circumstances. Typical methods to perform these predictions are (a) flux balance analysis, which is based on the assumption that cell behaviour is optimal, and (b) metabolic flux analysis, which combines the model with experimental measurements. Results: Herein we discuss a possibilistic framework to perform metabolic flux estimations using a constraint-based model and a set of measurements. The methodology is able to handle inconsistencies, by considering sensor errors and model imprecision, to provide rich and reliable flux estimations. The estimations can be cast as linear programming problems that handle thousands of variables efficiently, so the approach is suitable for large-scale networks. Moreover, the possibilistic estimation does not necessarily attempt to predict the actual fluxes with precision, but rather exploits the available data – even if scarce – to distinguish possible from impossible flux states in a gradual way. Conclusion: We introduce a possibilistic framework for the estimation of metabolic fluxes, which is shown to be flexible, reliable, usable in scenarios lacking data and computationally efficient.
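A minimal version of the gradual possible/impossible idea can be sketched with one balance constraint and one imprecise measurement (hypothetical names and numbers, not the paper's formulation): a hard stoichiometric constraint rules a flux state in or out outright, while measurement mismatch degrades its possibility gradually.

```python
# Sketch: graded possibility of a candidate flux state (illustrative).
# Steady-state constraint v_in = v_a + v_b; one measured flux v_in
# with a tolerance reflecting sensor error.

def possibility(v_in, v_a, v_b, measured, tol):
    """1.0 for a fully consistent state, decreasing linearly with
    measurement slack, 0.0 once the slack exceeds the tolerance
    or the mass balance is violated."""
    if abs(v_in - (v_a + v_b)) > 1e-9:   # hard stoichiometric constraint
        return 0.0
    slack = abs(v_in - measured)
    return max(0.0, 1.0 - slack / tol)

print(possibility(10.0, 6.0, 4.0, measured=10.5, tol=2.0))  # 0.75
print(possibility(10.0, 6.0, 3.0, measured=10.0, tol=2.0))  # 0.0, imbalanced
```

A full implementation would replace this pointwise score with linear programs over the whole stoichiometric network, as the abstract notes.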