
    An Introduction to Model Selection: Tools and Algorithms

    Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper's falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike's information criterion, the Bayesian information criterion, bootstrapping and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.
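As a minimal sketch of two of the criteria this paper discusses (not the paper's own code), AIC and BIC can be computed from a least-squares fit, assuming Gaussian residuals; the toy data and the two candidate models below are hypothetical:

```python
import math

def gaussian_loglik(residuals):
    """Maximized Gaussian log-likelihood of a least-squares fit."""
    n = len(residuals)
    rss = sum(r * r for r in residuals)
    return -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)

def aic(loglik, k):
    return 2 * k - 2 * loglik            # Akaike's information criterion

def bic(loglik, k, n):
    return k * math.log(n) - 2 * loglik  # Bayesian information criterion

# Hypothetical data: compare a constant model vs. a straight-line model.
xs = [0, 1, 2, 3, 4, 5]
ys = [1.1, 1.9, 3.2, 3.8, 5.1, 6.0]
n = len(xs)

# Constant model (k = 2: mean + error variance)
mean_y = sum(ys) / n
res_const = [y - mean_y for y in ys]

# Linear model (k = 3: slope, intercept, error variance), closed-form OLS
mean_x = sum(xs) / n
sxx = sum((x - mean_x) ** 2 for x in xs)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
slope = sxy / sxx
intercept = mean_y - slope * mean_x
res_lin = [y - (slope * x + intercept) for x, y in zip(xs, ys)]

ll_const, ll_lin = gaussian_loglik(res_const), gaussian_loglik(res_lin)
print("AIC constant:", aic(ll_const, 2), " AIC linear:", aic(ll_lin, 3))
print("BIC constant:", bic(ll_const, 2, n), " BIC linear:", bic(ll_lin, 3, n))
# The lower criterion value indicates the better fit/complexity trade-off.
```

Both criteria penalize the extra parameter of the linear model, but here the large reduction in residual error dominates, so the linear model is preferred by both.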

    Understanding statistical power using noncentral probability distributions: Chi-squared, G-squared, and ANOVA

    This paper presents a graphical way of interpreting effect sizes when more than two groups are involved in a statistical analysis. This method uses noncentral distributions to specify the alternative hypothesis, and the statistical power can thus be computed directly. This principle is illustrated using the chi-squared distribution and the F distribution. Examples of chi-squared and ANOVA statistical tests are provided to further illustrate the point. It is concluded that power analyses are an essential part of statistical analysis, and that using noncentral distributions provides an argument in favour of using a factorial ANOVA over multiple t tests.
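To make the noncentral-distribution idea concrete, here is a self-contained sketch of power for a chi-squared test: the alternative is a noncentral chi-squared with noncentrality λ = n·w² (Cohen's w), expressed as a Poisson-weighted mixture of central chi-squared CDFs. The effect size, α, and sample sizes below are hypothetical, and the incomplete-gamma series is a standard textbook construction, not the paper's code:

```python
import math

def reg_lower_gamma(s, x, tol=1e-12):
    """Regularized lower incomplete gamma P(s, x) via its power series."""
    if x <= 0:
        return 0.0
    term = math.exp(s * math.log(x) - x - math.lgamma(s + 1.0))
    total, k = term, 0
    while term > tol * max(total, 1e-300):
        k += 1
        term *= x / (s + k)
        total += term
    return min(total, 1.0)

def chi2_cdf(x, df):
    """Central chi-squared CDF: P(df/2, x/2)."""
    return reg_lower_gamma(df / 2.0, x / 2.0)

def chi2_critical(alpha, df):
    """Upper-tail critical value by bisection (adequate for small df)."""
    lo, hi = 0.0, df + 100.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if chi2_cdf(mid, df) < 1.0 - alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def ncx2_sf(x, df, nc, terms=200):
    """Noncentral chi-squared survival function (nc > 0), via the
    Poisson mixture of central chi-squared CDFs."""
    cdf = 0.0
    for j in range(terms):
        log_w = -nc / 2.0 + j * math.log(nc / 2.0) - math.lgamma(j + 1.0)
        cdf += math.exp(log_w) * chi2_cdf(x, df + 2 * j)
    return 1.0 - cdf

def power_chi2(alpha, df, n, w):
    """Power of a chi-squared test for Cohen's effect size w and sample n."""
    return ncx2_sf(chi2_critical(alpha, df), df, n * w * w)

# Hypothetical scenario: medium effect (w = 0.3), df = 2, alpha = .05
power_100 = power_chi2(0.05, 2, 100, 0.3)
power_200 = power_chi2(0.05, 2, 200, 0.3)
print(power_100, power_200)  # power grows with sample size
```

Shifting the whole distribution of the test statistic via the noncentrality parameter, rather than shifting a single mean, is what lets this generalize beyond two groups.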

    The Role of Problem Representation in Producing Near-Optimal TSP Tours

    Gestalt psychologists pointed out about 100 years ago that a key to solving difficult insight problems is to change the mental representation of the problem, as is the case, for example, with solving the six matches problem in 2D vs. 3D space. In this study we ask a different question, namely what representation is used when subjects solve search, rather than insight, problems. Some search problems, such as the traveling salesman problem (TSP), are defined in the Euclidean plane on the computer monitor or on a piece of paper, and it seems natural to assume that subjects who solve a Euclidean TSP do so using a Euclidean representation, because the TSP task is defined in that space. We provide evidence that, on the contrary, subjects may produce TSP tours in the complex-log representation of the TSP city map. The complex-log map is a reasonable assumption here, because there is evidence suggesting that the retinal image is represented in the primary visual cortex as a complex-log transformation of the retina. It follows that the subject’s brain may be “solving” the TSP using complex-log maps. We conclude by pointing out that solving a Euclidean problem in a complex-log representation may be acceptable, even desirable, if the subject is looking for near-optimal, rather than optimal, solutions.
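The complex-log transformation itself is compact to state: treating each city as a complex number z, the mapped point is w = log(z − fixation), so the real part encodes log-eccentricity and the imaginary part encodes polar angle. A hypothetical illustration (the city coordinates and fixation point are invented, not from the study):

```python
import cmath

def complex_log_map(cities, fixation=0j, eps=1e-9):
    """Map Euclidean city positions (complex numbers) to complex-log space
    relative to a fixation point: w = log(z - fixation).
    Re(w) = log-eccentricity, Im(w) = polar angle in radians."""
    return [cmath.log(z - fixation + eps) for z in cities]

# Hypothetical 5-city TSP instance on the plane
cities = [3 + 4j, 1 + 1j, 5 + 0.5j, 2 + 6j, 0.5 + 2j]
mapped = complex_log_map(cities)
for z, w in zip(cities, mapped):
    print(f"{z}  ->  log-ecc {w.real:.2f}, angle {w.imag:.2f} rad")
```

Note how distances far from fixation are compressed in the mapped space, which is one intuition for why tours planned there can be near-optimal rather than optimal in the Euclidean plane.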

    Whole-brain R1 predicts manganese exposure and biological effects in welders

    Manganese (Mn) is a neurotoxicant that, due to its paramagnetic property, also functions as a magnetic resonance imaging (MRI) T1 contrast agent. Previous studies in Mn toxicity have shown that Mn accumulates in the brain, which may lead to parkinsonian symptoms. In this article, we trained support vector machines (SVM) using whole-brain R1 (R1 = 1/T1) maps from 57 welders and 32 controls to classify subjects based on their air Mn concentration ([Mn]Air), Mn brain accumulation (ExMnBrain), gross motor dysfunction (UPDRS), thalamic GABA concentration (GABAThal), and total years welding. R1 was highly predictive of [Mn]Air above a threshold of 0.20 mg/m3 with an accuracy of 88.8% and recall of 88.9%. R1 was also predictive of subjects with GABAThal having less than or equal to 2.6 mM with an accuracy of 82% and recall of 78.9%. Finally, we used an SVM to predict age as a method of verifying that the results could be attributed to Mn exposure. We found that R1 was predictive of age below 48 years of age with accuracies ranging between 75 and 82% with recall between 94.7% and 76.9% but was not predictive above 48 years of age. Together, these results suggest that lower levels of exposure (< 0.20 mg/m3 and < 18 years of welding on the job) do not produce discernible signatures, whereas higher air exposures and subjects with more total years welding produce signatures in the brain that are readily identifiable using SVM.
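The accuracy and recall figures quoted above follow the standard classification definitions. A minimal sketch with hypothetical labels (not the study's data), where 1 marks a subject above the 0.20 mg/m3 exposure threshold:

```python
def accuracy_recall(y_true, y_pred, positive=1):
    """Accuracy = correct / total; recall = true positives / actual positives."""
    assert len(y_true) == len(y_pred)
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    actual_pos = sum(t == positive for t in y_true)
    return correct / len(y_true), tp / actual_pos

# Hypothetical labels: 1 = exposed above threshold, 0 = below
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
acc, rec = accuracy_recall(y_true, y_pred)
print(acc, rec)  # 0.8 0.8
```

Reporting recall alongside accuracy matters here because the welder/control groups are unbalanced (57 vs. 32), so accuracy alone could mask poor detection of the smaller class.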

    The Compositional Nature of Verb and Argument Representations in the Human Brain

    How does the human brain represent simple compositions of objects, actors, and actions? We had subjects view action sequence videos during neuroimaging (fMRI) sessions and identified lexical descriptions of those videos by decoding (SVM) the brain representations based only on their fMRI activation patterns. As a precursor to this result, we had demonstrated that we could reliably and with high probability decode action labels corresponding to one of six action videos (dig, walk, etc.), again while subjects viewed the action sequence during scanning (fMRI). This result was replicated at two different brain imaging sites with common protocols but different subjects, showing common brain areas, including areas known for episodic memory (PHG, MTL, high-level visual pathways, etc., i.e. the 'what' and 'where' systems, and TPJ, i.e. 'theory of mind'). Given these results, we were also able to successfully show a key aspect of language compositionality based on simultaneous decoding of object class and actor identity. Finally, combining these novel steps in 'brain reading' allowed us to accurately estimate brain representations supporting compositional decoding of a complex event composed of an actor, a verb, a direction, and an object.

    An Introduction to Model Selection: Tools and Algorithms

    Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper's falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike's information criterion, the Bayesian information criterion, bootstrapping and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed. The main goal of scientific investigation is to explain and predict empirical phenomena. The former is usually handled by formulating theories which explain the observations using one or several abstract concepts that are causally related to the experience (Hume, 1888). However, explaining observed phenomena is not sufficient: the tentative explanation, expressed as a particular theory, must also account for new observations (prediction). This is often achieved by operationalizing the theory into a model. A model is a specification of a theory which makes the prediction of new phenomena possible. As a result, the predictions of a given model can be empirically tested in order to assess its desirability. In particular, Karl Popper argued in favour of a simple way to determine the scientifi

    Practice and Preparation Time Facilitate System-Switching in Perceptual Categorization

    Mounting evidence suggests that category learning is achieved using different psychological and biological systems. While existing multiple-system theories and models of categorization may disagree about the number or nature of the different systems, all assume that people can switch between systems seamlessly. However, little empirical data has been collected to test this assumption, and recent available data suggest that system-switching is difficult. The main goal of this article is to identify factors influencing the proportion of participants who successfully learn to switch between procedural and declarative systems on a trial-by-trial basis. Specifically, we tested the effects of preparation time and practice, two factors that have been useful in task-switching, in a system-switching experiment. The results suggest that practice and preparation time can be beneficial to system-switching (as indicated by a higher proportion of switchers and lower switch costs), especially when they are jointly present. However, this improved system-switching comes at the cost of a larger button-switch interference when changing the location of the response buttons. The article concludes with a discussion of the implications of these findings for empirical research on system-switching and theoretical work on multiple-systems of category learning.
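The "switch cost" mentioned above is conventionally the mean response-time difference between trials that switch systems and trials that repeat the previous system. A hypothetical sketch (the trial sequence and RTs are invented, not the study's data):

```python
from statistics import fmean

def switch_cost(trials):
    """Mean RT on switch trials minus mean RT on repeat trials.
    `trials` is a list of (system, rt_ms) tuples in presentation order;
    the first trial is neither a switch nor a repeat."""
    switch_rts, repeat_rts = [], []
    for (prev_sys, _), (cur_sys, rt) in zip(trials, trials[1:]):
        (switch_rts if cur_sys != prev_sys else repeat_rts).append(rt)
    return fmean(switch_rts) - fmean(repeat_rts)

# Hypothetical sequence: 'P' = procedural system, 'D' = declarative system
trials = [('P', 620), ('P', 600), ('D', 780), ('D', 640), ('P', 760), ('P', 610)]
print(switch_cost(trials))  # positive value = cost of switching systems
```

A lower switch cost after practice or longer preparation time is what the abstract counts as improved system-switching.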

    Emergence of Bayesian Structures from Recurrent Networks

    The problem of representational form has always limited the applicability of cognitive models: where symbolic representations have succeeded, distributed representations have failed, and vice-versa. Hybrid modeling is thus a promising avenue, which, however, brings its share of new problems. For instance, it doubles the number of necessary assumptions. To counter this problem, we believe that one network should generate the other. This would require specific assumptions for only one network. In the present project, we plan to use a recurrent network to generate a Bayesian network. The former will be used to model low-level cognition while the latter will represent higher-level cognition. Moreover, both models will be active in every task and will need to communicate in order to generate a unique answer. General Problem In cognitive science, the problem of representational form is crucial. During the cognitive revolution, the computer metaphor was used to model human intelligence, which was thus seen as a set of symbol-manipulating syntactic processes (Turing, 1936). These processes were modeled as a series of conjunctive conditions and consequential actions (known as “IF-THEN” rules). This modeling approach is referred to as the classical view (Russell & Norvig, 1995). In the late seventies, another metaphor became increasingly popular for modeling cognitive processes, namely: the brain. The connectionist (or “neural”) networks proposed during this period were mostly unsupervised networks, either competitive (Grossberg, 1976; Kohonen