Entropic criterion for model selection
Model or variable selection is usually achieved by ranking models in increasing order of preference. One such method applies the Kullback-Leibler distance, or relative entropy, as a selection criterion. Yet this raises two questions: why use this criterion, and are there any other criteria? Moreover, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha, we show relative entropy to be the unique criterion; it requires no prior information and can be applied across different fields. We examine this criterion on a physical problem, simple fluids, and the results are promising.
Comment: 10 pages. Accepted for publication in Physica A, 200
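As an illustration (not from the paper), here is a minimal numerical sketch of ranking candidate models by relative entropy; the distributions and model names are hypothetical:
```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical empirical distribution and two candidate models.
data = np.array([0.1, 0.4, 0.3, 0.2])
models = {
    "model_a": np.array([0.25, 0.25, 0.25, 0.25]),
    "model_b": np.array([0.10, 0.35, 0.35, 0.20]),
}

# Rank models by increasing divergence from the data distribution:
# smaller D(data || model) means the model is preferred.
ranking = sorted(models, key=lambda m: kl_divergence(data, models[m]))
print(ranking)  # here model_b comes first
```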
Extreme events in the Mediterranean area: A mixed deterministic-statistical approach
Statistical inference suffers from severe limitations when applied to extreme meteo-climatic events. A fundamental theorem provides a constructive theory for a "universal" distribution law of extremes, the Generalized Extreme Value (GEV) distribution. Use of this theorem and of its derivations is nowadays quite common. However, when applying it, the selected events should be real extremes. In practical applications a major source of error is that there is no strict criterion for selecting extremes, and, in order to "fatten" the statistical sample, very "mild" selection criteria are often used. The theorem in question applies to stationary processes; when a trend is introduced, inference becomes even more problematic. Experience shows that any available a priori knowledge concerning the system can play a fundamental role in the analysis, in particular if it lowers the dimensionality of the parameter space to be explored. The inference procedures then serve the purpose of testing the reliability of inductive hypotheses, rather than proving them. Within this general context, an analysis of the hypothesis that the frequency and/or intensity of extreme weather events in the Mediterranean area may be changing is proposed. The analysis is based on a combined deterministic-statistical approach: dynamical analysis of intense perturbations is combined with statistical techniques so as to formulate the problem in a way that allows meaningful conclusions to be reached.
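For concreteness, a minimal sketch of the standard block-maxima GEV fit the abstract alludes to, using synthetic data and scipy.stats.genextreme (all numbers hypothetical, not the paper's data):
```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Synthetic stand-in for a meteorological series: 50 years of daily values.
daily = rng.gumbel(loc=20.0, scale=5.0, size=(50, 365))

# Block maxima: one extreme per year, the kind of strict selection the
# theorem presumes (taking "mild" exceedances instead would bias the fit).
annual_maxima = daily.max(axis=1)

# Fit the Generalized Extreme Value distribution to the annual maxima.
shape, loc, scale = genextreme.fit(annual_maxima)

# 100-year return level: the value exceeded with probability 1/100 per year.
return_level_100 = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
print(shape, loc, scale, return_level_100)
```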
Entropic Inference
In this tutorial we review the essential arguments behind entropic inference.
We focus on the epistemological notion of information and its relation to the
Bayesian beliefs of rational agents. The problem of updating from a prior to a
posterior probability distribution is tackled through an eliminative induction
process that singles out the logarithmic relative entropy as the unique tool
for inference. The resulting method of Maximum relative Entropy (ME) includes
as special cases both MaxEnt and Bayes' rule, and therefore unifies the two
themes of these workshops -- the Maximum Entropy and the Bayesian methods --
into a single general inference scheme.
Comment: Presented at MaxEnt 2010, the 30th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (July 4-9, 2010, Chamonix, France)
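As a sketch of the ME updating rule in its standard form (not quoted verbatim from the tutorial): given a prior q(x) and a constraint fixing the expectation of some function f(x), ME selects the distribution that maximizes the relative entropy subject to that constraint.
```latex
\[
  S[p, q] = -\int dx\, p(x) \ln \frac{p(x)}{q(x)}
\]
% Maximizing subject to \int dx\, p(x) = 1 and \int dx\, p(x) f(x) = F gives
\[
  p(x) = \frac{q(x)\, e^{\lambda f(x)}}{Z(\lambda)}, \qquad
  Z(\lambda) = \int dx\, q(x)\, e^{\lambda f(x)},
\]
% with \lambda fixed by the constraint. A uniform prior q recovers MaxEnt;
% processing observed data x' through a joint prior q(x, \theta) recovers
% Bayes' rule: p(\theta) \propto q(\theta)\, q(x' \mid \theta).
```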
Frequentist statistics as a theory of inductive inference
After some general remarks about the interrelation between philosophical and
statistical thinking, the discussion centres largely on significance tests.
These are defined as the calculation of p-values rather than as formal
procedures for ``acceptance'' and ``rejection.'' A number of types of null
hypothesis are described and a principle for evidential interpretation set out
governing the implications of p-values in the specific circumstances of each
application, as contrasted with a long-run interpretation. A variety of more
complicated situations are discussed in which modification of the simple
p-value may be essential.
Comment: Published at http://dx.doi.org/10.1214/074921706000000400 in the IMS
Lecture Notes--Monograph Series
(http://www.imstat.org/publications/lecnotes.htm) by the Institute of
Mathematical Statistics (http://www.imstat.org)
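As a concrete illustration of the p-value calculation that the authors take as the definition of a significance test (sample data hypothetical):
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical sample; null hypothesis: the population mean is 0.
x = rng.normal(loc=0.3, scale=1.0, size=40)

# The p-value is the probability, under the null, of a test statistic at
# least as extreme as the one observed -- evidence to be interpreted in
# context, not a formal "accept"/"reject" verdict.
t_stat, p_value = stats.ttest_1samp(x, popmean=0.0)
print(t_stat, p_value)
```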
On the role of explanatory and systematic power in scientific reasoning
The paper investigates measures of explanatory power and how to define the inference schema "Inference to the Best Explanation" (IBE). It argues that these measures can also be used to quantify the systematic power of a hypothesis, and the inference schema "Inference to the Best Systematization" (IBS) is defined. It demonstrates that systematic power is a fruitful criterion for theory choice and that IBS is truth-conducive. It also shows that even radical Bayesians must admit that systematic power is an integral component of Bayesian reasoning. Finally, the paper puts the achieved results in perspective with van Fraassen's famous criticism of IBE.
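For illustration only, a toy sketch using one explanatory-power measure from the literature (the Schupbach-Sprenger measure; the paper may consider different measures, and the probabilities below are hypothetical):
```python
def explanatory_power(p_h_given_e, p_h_given_not_e):
    """Schupbach-Sprenger measure of how well h explains e; range [-1, 1]."""
    return (p_h_given_e - p_h_given_not_e) / (p_h_given_e + p_h_given_not_e)

# Hypothetical posteriors for two competing hypotheses given evidence e.
scores = {
    "h1": explanatory_power(0.8, 0.2),
    "h2": explanatory_power(0.6, 0.4),
}

# "Inference to the Best Explanation" then selects the top-scoring hypothesis.
best = max(scores, key=scores.get)
print(best, scores[best])
```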
Subjectivity in inductive inference
This paper examines circumstances under which subjectivity enhances the effectiveness of inductive reasoning. We consider agents facing a data-generating process who are characterized by inference rules that may be purely objective (or data-based) or may incorporate subjective considerations. The basic intuition is that agents who invoke no subjective considerations are doomed to "overfit" the data and therefore engage in ineffective learning. The analysis places no computational or memory limitations on the agents; the role for subjectivity emerges in the presence of unlimited reasoning powers.
Keywords: inductive inference, simplicity, prediction, learning
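A minimal numerical sketch of the overfitting intuition, assuming a toy linear data-generating process (this is not the paper's formal model):
```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data-generating process: a simple linear law plus noise.
x_train = rng.uniform(-1, 1, size=15)
y_train = 2.0 * x_train + rng.normal(scale=0.3, size=15)
x_test = rng.uniform(-1, 1, size=200)
y_test = 2.0 * x_test

def test_error(degree):
    # A purely "objective" learner picks whatever degree best fits the data;
    # a subjective simplicity bias would cap the degree in advance.
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(test_error(1))   # simple model: small out-of-sample error
print(test_error(12))  # overfit model: typically much larger error
```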
Where do statistical models come from? Revisiting the problem of specification
R. A. Fisher founded modern statistical inference in 1922 and identified its
fundamental problems to be: specification, estimation and distribution. Since
then the problem of statistical model specification has received scant
attention in the statistics literature. The paper traces the history of
statistical model specification, focusing primarily on pioneers like Fisher,
Neyman, and more recently Lehmann and Cox, and attempts a synthesis of their
views in the context of the Probabilistic Reduction (PR) approach. As argued by
Lehmann [11], a major stumbling block for a general approach to statistical
model specification has been the delineation of the appropriate role for
substantive subject matter information. The PR approach demarcates the
interrelated but complementary roles of substantive and statistical information
summarized ab initio in the form of a structural and a statistical model,
respectively. In an attempt to preserve the integrity of both sources of
information, as well as to ensure the reliability of their fusing, a purely
probabilistic construal of statistical models is advocated. This probabilistic
construal is then used to shed light on a number of issues relating to
specification, including the role of preliminary data analysis, structural vs.
statistical models, model specification vs. model selection, statistical vs.
substantive adequacy and model validation.
Comment: Published at http://dx.doi.org/10.1214/074921706000000419 in the IMS
Lecture Notes--Monograph Series
(http://www.imstat.org/publications/lecnotes.htm) by the Institute of
Mathematical Statistics (http://www.imstat.org)
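As an informal illustration of what checking "statistical adequacy" can look like in practice (a generic sketch with hypothetical data, not the PR approach itself):
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical data and a fitted linear model whose probabilistic
# assumptions (normal, independent, homoskedastic errors) we then probe.
x = rng.uniform(0, 10, size=100)
y = 1.5 * x + rng.normal(scale=2.0, size=100)
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# Adequacy checks: does the data validate the model's own assumptions?
print(stats.shapiro(residuals))                       # normality of errors
print(stats.pearsonr(residuals[:-1], residuals[1:]))  # serial independence
```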
A knowledge-based system with learning for computer communication network design
Computer communication network design is well known to be complex and hard. For that reason, the most effective methods used to solve it are heuristic. The weaknesses of these techniques are listed, and a new approach based on artificial intelligence for solving this problem is presented. This approach is particularly recommended for large packet-switched communication networks, in the sense that it permits a high degree of reliability and offers a very flexible environment dealing with many relevant design parameters such as link cost, link capacity, and message delay.
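A minimal sketch of the kind of heuristic baseline such design methods build on (a Kruskal minimum-spanning-tree pass over link costs; the topology and numbers are hypothetical, and capacity/delay checks would follow):
```python
def mst(n_nodes, links):
    """links: list of (cost, u, v). Returns a minimum-cost spanning tree."""
    parent = list(range(n_nodes))

    def find(a):
        # Union-find with path halving.
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    chosen = []
    for cost, u, v in sorted(links):  # greedily take cheapest usable links
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            chosen.append((cost, u, v))
    return chosen

links = [(4, 0, 1), (1, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(mst(4, links))  # cheapest connected backbone; delay checks come next
```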
Resolving the Raven Paradox: Simple Random Sampling, Stratified Random Sampling, and Inference to the Best Explanation
Simple random sampling resolutions of the raven paradox relevantly diverge from scientific practice. We develop a stratified random sampling model, yielding a better fit and apparently rehabilitating simple random sampling as a legitimate idealization. However, neither accommodates a second concern, the objection from potential bias. We develop a third model that crucially invokes causal considerations, yielding a novel resolution that handles both concerns. This approach resembles Inference to the Best Explanation (IBE) and relates the generalization's confirmation to confirmation of an associated law. We give it an objective Bayesian formalization and discuss the compatibility of Bayesianism and IBE.
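A toy simulation contrasting the two sampling models (a sketch under assumed population proportions, not the paper's formalization):
```python
import numpy as np

rng = np.random.default_rng(4)

# Toy world: a few ravens (all black) among many non-raven objects.
n_objects = 100_000
is_raven = rng.random(n_objects) < 0.001
is_black = is_raven | (rng.random(n_objects) < 0.05)  # some black non-ravens

def simple_random_sample(k):
    # Sample objects at large, as the classic resolution idealizes.
    idx = rng.choice(n_objects, size=k, replace=False)
    return is_raven[idx], is_black[idx]

def stratified_sample(k):
    # Sample from the raven stratum directly, as field practice would.
    ravens = np.flatnonzero(is_raven)
    idx = rng.choice(ravens, size=min(k, ravens.size), replace=False)
    return is_black[idx]

raven_flags, _ = simple_random_sample(200)
print(raven_flags.sum())            # SRS rarely hits any ravens at all
print(stratified_sample(50).all())  # stratified sampling tests the law
```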