17 research outputs found

    Answering research questions without calculating the mean

    In an important theoretical article, Speelman and McGann (2013) indicated that psychological researchers tend to use statistical procedures that involve calculating the mean of a variable in an uncritical manner. A typical procedure in psychological research consists of calculating the mean of some dependent variable in two or more samples and presenting those means as summaries of the samples. The next step is to use a statistical technique (e.g., t-test, ANOVA) to determine the probability of finding the observed differences between the sample means given that the difference between the means of the populations from which the samples were drawn is zero. If this probability is very low (i.e., < 0.05), the psychological researcher decides that the difference between the population means of interest is not zero
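
    A minimal Python sketch of the conventional procedure the abstract describes: summarize two samples by their means, run an independent-samples t-test, and apply the p < 0.05 decision rule. The data and group labels are simulated for illustration and are not from the article.

```python
# Minimal sketch of the conventional mean-comparison procedure.
# The data are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=100.0, scale=15.0, size=30)  # hypothetical scores, condition A
group_b = rng.normal(loc=108.0, scale=15.0, size=30)  # hypothetical scores, condition B

# Step 1: present the sample means as summaries of the samples
print("mean A:", group_a.mean(), "mean B:", group_b.mean())

# Step 2: probability of the observed difference assuming equal population means
result = stats.ttest_ind(group_a, group_b)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

# Step 3: the (uncritically applied) decision rule the article questions
if result.pvalue < 0.05:
    print("Decide that the population means differ")
```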

    Memory behaviour requires knowledge structures, not memory stores

    Since the inception of cognitive psychology, dominant theories of memory behavior have used the storage metaphor. In the multi-store models (e.g., Broadbent, 1958; Atkinson and Shiffrin, 1968; Baddeley and Hitch, 1974) the memory system comprises one or more short-term memory (STM) stores and a long-term memory (LTM) store. These stores are places where information is located for varying periods of time (i.e., seconds in the STM stores, and minutes to a lifetime in the LTM store) and they have varying capacity limits: large for the LTM store, very limited for the STM stores (4 to 7 items; see Miller, 1956; Broadbent, 1958; Cowan, 2001). Expertise research has shown that experts are able to remember a large amount of information presented immediately before testing their memory (e.g., more than 80 items in Chase and Ericsson, 1982, and in Gobet and Simon, 1996), suggesting that they exceed the normal capacity limits of the STM store. However, given that this effect only occurs with domain-specific material, expertise theoreticians (e.g., Ericsson and Kintsch, 1995; Gobet and Simon, 1996) explained these results in terms of the use of retrieval structures (see explanation below), but they retained the partition between STM and LTM stores. In this article I adumbrate an alternative explanation that builds upon three sources: (i) the behaviorist conception of memory as behavior (Delaney and Austin, 1998); (ii) models of memory that exclude the STM store (e.g., Nairne, 1992; Fuster, 1997; Neath, 1998; Cowan, 1999; Oberauer, 2002; Conway et al., 2005; McClelland et al., 2010); (iii) Gobet and Simon's (1996) and Ericsson and Kintsch's (1995) emphasis on the role of expertise in memory, and their pioneering theoretical conceptualization of retrieval structures. In the remainder of the article I briefly discuss these three sources, then present the alternative explanation and draw some conclusions

    The relationship between personal financial wellness and financial wellbeing: A structural equation modelling approach

    We examined the construct of financial wellness and its relationship to personal wellbeing, with a focus on the role of financial literacy. Gender comparisons were made using a structural equation modeling analysis that included personal wellbeing, financial satisfaction, financial status, financial behavior, financial attitude, and financial knowledge. Males ranked higher in financial satisfaction and financial knowledge, whereas females ranked higher in personal wellbeing. Joo’s (2008) concept of financial wellness as multidimensional is supported, though the result improves when a causal model of sub-components is estimated. The relationship of all variables to personal wellbeing is mediated by financial satisfaction, with gender differences: in females the main source of financial satisfaction is financial status, whereas in males it is financial knowledge
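
    A hedged sketch of the mediation structure described above, using simple OLS path regressions rather than the paper's full structural equation model. All variable and file names (wellbeing, fin_satisfaction, financial_wellness.csv, and so on) are hypothetical placeholders, not the study's actual measures.

```python
# Hedged sketch: mediation-style path regressions by gender.
# Column and file names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("financial_wellness.csv")  # hypothetical dataset

for gender, sub in df.groupby("gender"):
    # Path a: predictors -> mediator (financial satisfaction)
    a_path = smf.ols(
        "fin_satisfaction ~ fin_status + fin_knowledge + fin_behavior + fin_attitude",
        data=sub,
    ).fit()
    # Paths b and c': mediator and predictors -> personal wellbeing
    b_path = smf.ols(
        "wellbeing ~ fin_satisfaction + fin_status + fin_knowledge",
        data=sub,
    ).fit()
    print(gender)
    print(a_path.params)  # which predictor drives satisfaction in this group?
    print(b_path.params)  # is the effect on wellbeing carried by satisfaction?
```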

    Editorial: Neural implementation of expertise

    How the brain enables humans to reach an outstanding level of performance typical of expertise is of great interest to cognitive neuroscience, as demonstrated by the number and diversity of the articles in this Research Topic (RT). The RT presents a collection of 23 articles written by 80 authors on traditional expertise topics such as sport, board games, and music, but also on the expertise aspects of everyday skills, such as language and the perception of faces and objects. Just as the topics in the RT are diverse, so are the neuroimaging techniques employed and the article formats. Here we will briefly summarize the articles published in the RT

    Accounting for expert performance: the devil is in the details

    The deliberate practice view has generated a great deal of scientific and popular interest in expert performance. At the same time, empirical evidence now indicates that deliberate practice, while certainly important, is not as important as Ericsson and colleagues have argued it is. In particular, we (Hambrick, Oswald, Altmann, Meinz, Gobet, & Campitelli, 2014) found that individual differences in accumulated amount of deliberate practice accounted for about one-third of the reliable variance in performance in chess and music, leaving the majority of the reliable variance unexplained and potentially explainable by other factors. Ericsson's (2014) defense of the deliberate practice view, though vigorous, is undercut by contradictions, oversights, and errors in his arguments and criticisms, several of which we describe here. We reiterate that the task now is to develop and rigorously test falsifiable theories of expert performance that take into account as many potentially relevant constructs as possible
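
    As a purely illustrative piece of arithmetic, one way to express a share of reliable variance is to divide the variance accounted for by a predictor by the reliability of the performance measure. The numbers below are invented for illustration and are not the estimates reported by Hambrick et al. (2014).

```python
# Illustrative arithmetic only: share of *reliable* variance explained.
# All numbers are made up, not the meta-analytic estimates.
r_practice_performance = 0.50   # hypothetical correlation: practice vs. performance
reliability_performance = 0.80  # hypothetical reliability of the performance measure

total_variance_explained = r_practice_performance ** 2             # 0.25 of total variance
reliable_variance_explained = total_variance_explained / reliability_performance

print(f"Share of total variance:    {total_variance_explained:.2f}")
print(f"Share of reliable variance: {reliable_variance_explained:.2f}")  # ~0.31, i.e., about one-third
```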

    Herbert Simon's decision-making approach: Investigation of cognitive processes in experts

    Herbert Simon's research endeavor aimed to understand the processes involved in human decision making. However, despite his efforts to investigate this question, his work did not have the impact in the “decision making” community that it had in other fields. His rejection of the assumption of perfect rationality, made in mainstream economics, led him to develop the concept of bounded rationality. Simon's approach also emphasized the limitations of the cognitive system, the change of processes due to expertise, and the direct empirical study of the cognitive processes involved in decision making. In this article, we argue that his subsequent research program in problem solving and expertise offered critical tools for studying decision-making processes that took into account his original notion of bounded rationality. Unfortunately, these tools were ignored by the main research paradigms in decision making, such as Tversky and Kahneman's biased rationality approach (also known as the heuristics and biases approach) and the ecological approach advanced by Gigerenzer and others. We propose how to integrate Simon's approach with the main current approaches to decision making. We argue that this would lead to better models of decision making that are more generalizable, have higher ecological validity, include specification of cognitive processes, and provide a better understanding of the interaction between the characteristics of the cognitive system and the contingencies of the environment

    Does the cognitive reflection test measure cognitive reflection? A mathematical modeling approach

    We used a mathematical modeling approach, based on a sample of 2,019 participants, to better understand what the cognitive reflection test (CRT; Frederick In Journal of Economic Perspectives, 19, 25–42, 2005) measures. This test, which is typically completed in less than 10 min, contains three problems and aims to measure the ability or disposition to resist reporting the response that first comes to mind. However, since the test contains three mathematically based problems, it is possible that the test only measures mathematical abilities, and not cognitive reflection. We found that the models that included an inhibition parameter (i.e., the probability of inhibiting an intuitive response), as well as a mathematical parameter (i.e., the probability of using an adequate mathematical procedure), fitted the data better than a model that only included a mathematical parameter. We also found that the inhibition parameter in males is best explained by both rational thinking ability and the disposition toward actively open-minded thinking, whereas in females this parameter was better explained by rational thinking only. With these findings, this study contributes to the understanding of the processes involved in solving the CRT, and will be particularly useful for researchers who are considering using this test in their research
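
    A hedged sketch of the general model-comparison logic described above: a model with both a mathematical and an inhibition parameter versus a model with a mathematical parameter only, fitted by maximum likelihood and compared by AIC. The response counts and the exact parameterization below are assumptions for illustration, not the multinomial models actually fitted in the paper.

```python
# Hedged sketch of the model-comparison logic only; the paper's models
# are more elaborate. Counts are hypothetical responses classified as
# correct, intuitive error, or other error.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multinomial

counts = np.array([350, 420, 130])  # [correct, intuitive error, other error], hypothetical

def nll_math_inhibition(params):
    m, i = params  # m: P(adequate math procedure), i: P(inhibiting the intuitive response)
    probs = np.array([i * m, 1 - i, i * (1 - m)])
    return -multinomial.logpmf(counts, n=counts.sum(), p=probs)

def nll_math_only(params):
    (m,) = params  # errors split evenly between categories (a simplifying assumption)
    probs = np.array([m, (1 - m) / 2, (1 - m) / 2])
    return -multinomial.logpmf(counts, n=counts.sum(), p=probs)

fit2 = minimize(nll_math_inhibition, x0=[0.5, 0.5], bounds=[(0.01, 0.99)] * 2)
fit1 = minimize(nll_math_only, x0=[0.5], bounds=[(0.01, 0.99)])

aic = lambda fit, k: 2 * k + 2 * fit.fun  # fit.fun is the minimized negative log-likelihood
print("AIC, math + inhibition:", aic(fit2, 2))
print("AIC, math only:        ", aic(fit1, 1))
```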

    Three strategies for the critical use of statistical methods in psychological research

    One of the earliest criticisms of the null hypothesis statistical significance testing (NHST) approach in psychology was put forward by J. Cohen (1994). Cohen criticized not only NHST but also the tendency of psychological researchers to utilize this method in an uncritical manner. In this article, we do not provide a critique of NHST (for a review of criticisms of NHST and a proposed solution for psychology, see Wagenmakers, 2007); rather, inspired by Cohen’s call, we propose three strategies (none of which involves NHST) to encourage a critical use of statistical methods. The strategies are (1) visual representation of cognitive processes and predictions, (2) visual representation of data distributions and choice of the appropriate distribution for analysis, and (3) model comparison. The first strategy aims to make explicit the research design and the psychological theory from which the main cognitive predictions were derived. The second strategy aims to apply the most appropriate known formal distribution to the observed data. The third strategy generates a plurality of models and selects the most suitable one. The three strategies have been used in the past, so we are not claiming originality. Rather, the goal of this article is to propose that researchers use these three strategies together and to give an example of how this would work. We first present the three strategies, then provide a working example, and finally discuss the implications of their use
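
    A hedged Python sketch of strategies (2) and (3): fit several candidate distributions to an observed variable (hypothetical response times here) and compare them by AIC, rather than defaulting to the mean of an assumed normal distribution. The data and the candidate set are invented for illustration and are not from the article.

```python
# Hedged sketch: choose among candidate distributions by maximum
# likelihood and AIC. Response times are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rts = rng.lognormal(mean=-0.5, sigma=0.4, size=500)  # hypothetical RTs in seconds

candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "ex-Gaussian": stats.exponnorm,
}

for name, dist in candidates.items():
    params = dist.fit(rts)                    # maximum-likelihood parameter estimates
    loglik = dist.logpdf(rts, *params).sum()  # log-likelihood under the fitted model
    aic = 2 * len(params) - 2 * loglik        # lower AIC = preferred model
    print(f"{name:12s} AIC = {aic:.1f}")
```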