A Tutorial on Fisher Information
In many statistical applications that concern mathematical psychologists, the
concept of Fisher information plays an important role. In this tutorial we
clarify the concept of Fisher information as it manifests itself across three
different statistical paradigms. First, in the frequentist paradigm, Fisher
information is used to construct hypothesis tests and confidence intervals
using maximum likelihood estimators; second, in the Bayesian paradigm, Fisher
information is used to define a default prior; lastly, in the minimum
description length paradigm, Fisher information is used to measure model
complexity.
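As a concrete illustration (not taken from the tutorial itself), the Fisher information of a single Bernoulli trial has the closed form 1/(theta*(1-theta)), and this can be checked against the definition as the negative expected second derivative of the log-likelihood; the function names below are ours:

```python
import math

def fisher_info_bernoulli(theta):
    # Analytic Fisher information for one Bernoulli trial:
    # I(theta) = E[(d/dtheta log f(X; theta))^2] = 1 / (theta * (1 - theta))
    return 1.0 / (theta * (1.0 - theta))

def fisher_info_numeric(theta, eps=1e-4):
    # Numerical check via I(theta) = -E[d^2/dtheta^2 log f(X; theta)],
    # using a central second difference of the log-likelihood.
    def loglik(x, t):
        return x * math.log(t) + (1 - x) * math.log(1 - t)
    def second_deriv(x):
        return (loglik(x, theta + eps) - 2 * loglik(x, theta)
                + loglik(x, theta - eps)) / eps**2
    # Expectation over X in {0, 1} with P(X = 1) = theta
    return -(theta * second_deriv(1) + (1 - theta) * second_deriv(0))
```

The two computations should agree up to discretization error, which is a useful sanity check when working with less tractable likelihoods.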
Fitting the Cusp Catastrophe in R: A cusp Package Primer
Of the seven elementary catastrophes in catastrophe theory, the "cusp" model is the most widely applied. Most applications, however, are qualitative. Quantitative techniques for catastrophe modeling have been developed, but so far the limited availability of flexible software has hindered quantitative assessment. We present a package that implements and extends the method of Cobb (Cobb and Watson 1980; Cobb, Koppstein, and Chen 1983) and makes it easy to quantitatively fit and compare different cusp catastrophe models in a statistically principled way. After a short introduction to the cusp catastrophe, we demonstrate the package with two instructive examples.
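As a small illustration of the deterministic core of the model (not taken from the cusp package, whose interface is in R), the equilibria of the standard cusp surface are the real roots of alpha + beta*y - y^3 = 0; in the bifurcation region this equation has three real roots, two stable states and one unstable state between them. A minimal sketch, with function names of our own:

```python
import numpy as np

def cusp_equilibria(alpha, beta):
    # Real roots of the cusp equilibrium equation: alpha + beta*y - y^3 = 0.
    # numpy.roots takes coefficients in decreasing powers of y.
    roots = np.roots([-1.0, 0.0, beta, alpha])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-8)
```

For example, `cusp_equilibria(0.0, 3.0)` returns three equilibria (the bistable regime), while `cusp_equilibria(0.0, -1.0)` returns a single one.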
Evidential Calibration of Confidence Intervals
We present a novel and easy-to-use method for calibrating error-rate based
confidence intervals to evidence-based support intervals. Support intervals are
obtained from inverting Bayes factors based on a parameter estimate and its
standard error. A support interval can be interpreted as "the observed data
are at least k times more likely under the included parameter values than
under a specified alternative". Support intervals depend on the specification
of prior distributions for the parameter under the alternative, and we present
several types that allow different forms of external knowledge to be encoded.
We also show how prior specification can to some extent be avoided by
considering a class of prior distributions and then computing so-called minimum
support intervals which, for a given class of priors, have a one-to-one mapping
with confidence intervals. We also illustrate how the sample size of a future
study can be determined based on the concept of support. Finally, we show how
the bound for the type I error rate of Bayes factors leads to a bound for the
coverage of support intervals. An application to data from a clinical trial
illustrates how support intervals can lead to inferences that are both
intuitive and informative.
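To make the inversion idea concrete, here is a rough sketch of a support interval in the normal-approximation setting, assuming a normal prior for the parameter under the alternative. The function names and this particular closed-form derivation are ours, a simplified illustration rather than the paper's exact procedure: the Bayes factor compares the likelihood of the estimate at a point value theta0 against its marginal likelihood under the alternative, and the support interval collects all theta0 with Bayes factor at least k.

```python
import math

def normal_pdf(x, mean, var):
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def support_interval(est, se, k, prior_mean, prior_sd):
    # Marginal likelihood of the estimate under the alternative:
    # theta ~ N(prior_mean, prior_sd^2)  =>  est ~ N(prior_mean, se^2 + prior_sd^2)
    m = normal_pdf(est, prior_mean, se ** 2 + prior_sd ** 2)
    # Solve N(est | theta0, se^2) >= k * m for theta0:
    # (est - theta0)^2 <= -2 * se^2 * (log(k * m) + 0.5 * log(2 * pi * se^2))
    rhs = -2 * se ** 2 * (math.log(k * m) + 0.5 * math.log(2 * math.pi * se ** 2))
    if rhs < 0:
        return None  # no parameter value reaches support level k
    half = math.sqrt(rhs)
    return (est - half, est + half)
```

At the interval's endpoints the Bayes factor equals k exactly, which is a convenient check on any implementation.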
A Comparison of Reinforcement Learning Models for the Iowa Gambling Task Using Parameter Space Partitioning
The Iowa gambling task (IGT) is one of the most popular tasks used to study decision-making deficits in clinical populations. In order to decompose performance on the IGT into its constituent psychological processes, several cognitive models have been proposed (e.g., the Expectancy Valence (EV) and Prospect Valence Learning (PVL) models). Here we present a comparison of three models, the EV and PVL models and a combination of the two (EV-PU), based on the method of parameter space partitioning. This method allows us to assess the choice patterns predicted by the models across their entire parameter space. Our results show that the EV model is unable to account for a frequency-of-losses effect, whereas the PVL and EV-PU models are unable to account for a pronounced preference for the bad decks with many switches. All three models underrepresent pronounced choice patterns that are frequently seen in experiments. Overall, our results suggest that the search for an appropriate IGT model has not yet come to an end.
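To make the structure of such models concrete, here is a schematic sketch of a delta-rule learner with a softmax choice rule, in the spirit of the EV model. The fixed sensitivity parameter and the function names are simplifications of ours, not the authors' exact specification (the published EV model, for instance, lets choice sensitivity change over trials):

```python
import math

def ev_update(ev, deck, win, loss, w, a):
    # Expectancy Valence-style update: the experienced utility blends wins
    # and losses via attention weight w, and the chosen deck's expectancy
    # moves toward it with learning rate a (delta rule).
    v = (1 - w) * win - w * loss
    ev = list(ev)
    ev[deck] += a * (v - ev[deck])
    return ev

def choice_probs(ev, theta):
    # Softmax (ratio-of-strengths) choice rule with sensitivity theta.
    exps = [math.exp(theta * e) for e in ev]
    s = sum(exps)
    return [e / s for e in exps]
```

Parameter space partitioning then asks, for each region of (w, a, theta)-space, which qualitative choice pattern a model like this produces, rather than fitting a single best parameter set.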