10 research outputs found

    Psychometric analysis of the empathy quotient (EQ) scale

    No full text
    The psychometric properties of the Empathy Quotient (EQ) scale devised by Baron-Cohen (2003) are examined. In particular, confirmatory factor analyses comparing a unifactorial structure with a structure of three correlated factors suggest that the three-factor structure proposed by Lawrence et al. (2004) is the better fit. Exploratory analysis using modification indices suggests that the three factors of empathy (cognitive empathy, emotional reactivity, and social skills) might be measured with three five-item scales. The problems of self-report measures are discussed, as are the problems posed by the pattern of sex differences on these three factors. Finally, some links are suggested between the work on the EQ and previous work on emotional intelligence.
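
    As a rough illustration of the model comparison described above, the Python sketch below runs the chi-square difference test commonly used to compare nested CFA models such as a unifactorial and a three-correlated-factor structure. The fit statistics are hypothetical placeholders, not values from the paper.

        from scipy.stats import chi2

        def chi_square_difference(chisq_restricted, df_restricted, chisq_full, df_full):
            """Likelihood-ratio (chi-square difference) test for nested CFA models."""
            diff = chisq_restricted - chisq_full
            ddf = df_restricted - df_full
            return diff, ddf, chi2.sf(diff, ddf)

        # Hypothetical fit values: one-factor model (restricted) vs three correlated factors (full).
        diff, ddf, p = chi_square_difference(612.4, 170, 455.9, 167)
        print(f"chi-square difference = {diff:.1f}, df difference = {ddf}, p = {p:.4g}")

    A significant difference favours the less restricted model, here the three-factor structure.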

    The Syllable Effect in Anagram Solution: Unrecognised Evidence from Past Studies

    Get PDF
    Six previous studies of the variables affecting anagram solution are re-examined for evidence that the number of syllables contributes to solution difficulty. The number of syllables in a solution word was confounded with imagery in one study and with bigram frequency in another. More importantly, re-analysis of the results from the other four studies shows that the number of syllables has a large effect on anagram solution difficulty. In these studies, the number of syllables was either more important than the principal variable examined in the experiment or the second most important variable. Overall, the effect size for the number of syllables was large, d = 1.14. The results are discussed in the light of other research, and it is suggested that anagram solution may have more in common with other word identification and reading processes than has previously been thought.
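
    A minimal sketch of the pooled-standard-deviation Cohen's d used to summarise the syllable effect; the solution times below are invented for illustration and are not data from the re-analysed studies.

        import numpy as np

        def cohens_d(x, y):
            """Standardised mean difference with a pooled standard deviation."""
            nx, ny = len(x), len(y)
            pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
            return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

        one_syllable = np.array([12.1, 9.8, 14.2, 11.5])   # invented solution times (s)
        two_syllable = np.array([16.3, 13.7, 18.0, 14.9])
        print(f"d = {cohens_d(two_syllable, one_syllable):.2f}")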

    Type and token bigram frequencies for two- through nine-letter words and the prediction of anagram difficulty

    Get PDF
    Recent research on anagram solution has produced two original findings. First, it has shown that a new bigram frequency measure called top rank, which is based on a comparison of summed bigram frequencies, is an important predictor of anagram difficulty. Second, it has suggested that measures from a type count are better than token measures at predicting anagram difficulty. Testing these hypotheses has been held back because the bigram statistics are laborious to compute. We present a program that calculates bigram measures for two- to nine-letter words. We then show how the program can be used to compare the contribution of top rank and other bigram frequency measures derived from both a token and a type count. Contrary to previous research, we report that type measures are not better at predicting anagram solution times and that top rank is not the best predictor of anagram difficulty. Lastly, we use the program to show that type bigram frequencies are not as good as token bigram frequencies at predicting word identification reaction time.
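
    The following sketch shows, with invented data, the distinction the abstract draws between type and token bigram counts: a type count tallies each bigram once per distinct word, while a token count weights each word by its corpus frequency. The tiny lexicon and the summed-bigram helper are illustrative, not the published program.

        from collections import Counter

        def bigram_counts(lexicon, use_tokens=False):
            """lexicon maps word -> corpus frequency."""
            counts = Counter()
            for word, freq in lexicon.items():
                weight = freq if use_tokens else 1
                for a, b in zip(word, word[1:]):
                    counts[a + b] += weight
            return counts

        lexicon = {"there": 2724, "three": 610, "threw": 42}   # invented frequencies
        type_counts = bigram_counts(lexicon)
        token_counts = bigram_counts(lexicon, use_tokens=True)
        print(type_counts["th"], token_counts["th"])           # 3 vs 3376

        def summed_bigram_frequency(word, counts):
            """Sum of a word's adjacent-bigram counts, the statistic that is
            compared across candidate solutions by measures such as top rank."""
            return sum(counts[a + b] for a, b in zip(word, word[1:]))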

    Sense making in the wake of September 11th: A network analysis of lay understandings

    No full text
    The objective of this research was to document and explore British university students' immediate understanding of the events of September 11th. A network analysis of lay causal perceptions was employed to capture respondents' social perceptions and sense-making at a time when they, and the world, struggled to impose meaning and coherence on the events. The study also examined the possible effects of 'belief in a just world' and 'right-wing authoritarianism' on the pattern of perceived causes. The results suggest that most participants perceived cultural and religious differences, the history of conflict in the Middle East, and unfairness and prejudice as the distal causes of the individual agents' emotions and actions. There is also some evidence that right-wing authoritarianism and belief in a just world have an interactive effect on the strength of the perceived link between some of these causes.
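
    As a sketch of the kind of causal network such an analysis aggregates, the fragment below (using networkx) builds a directed graph whose edges carry perceived causal strength and identifies distal causes as nodes with no incoming links. The nodes and weights are illustrative, not the study's data.

        import networkx as nx

        G = nx.DiGraph()
        G.add_edge("cultural/religious differences", "agents' emotions and actions", weight=0.7)
        G.add_edge("history of Middle East conflict", "agents' emotions and actions", weight=0.8)
        G.add_edge("unfairness and prejudice", "agents' emotions and actions", weight=0.6)

        # Distal causes: nodes with no incoming causal links.
        distal = [n for n in G.nodes if G.in_degree(n) == 0]
        print(distal)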

    The causes of low back pain: a network analysis

    Get PDF
    Beliefs regarding the cause of low back pain differ between individual sufferers and health care professionals. One consequence of this is the potential acquisition of maladaptive attitudes and behaviour in relation to pain, and increased use of primary care services (Health Expect. 3(3) (2000) 161). The methods that have been used to elicit causal interpretations of social phenomena are varied, yet none can demonstrate the different weightings, or levels of importance, that individuals may assign. The diagram method of network analysis allows individuals to spontaneously set out the pathways they believe to be critical to a target event and to indicate the strength of those pathways. Seventy-one completed diagrams indicating the causes that sufferers perceived to be related to low back pain were analysed. The mean number of direct causal paths was 5.61 (SD = 3.25) and the mean number of indirect causal links was 1.16 (SD = 2.34). A significant correlation between path frequency and path strength was also found (r = 0.76, p = 0.001). Sufferers did not have an overly complex view of the causative factors of low back pain, but they were able to define four core contributory causes (disc, sciatica, lifting, and injury) and one indirect pathway, between lifting and injury. There was a clear delineation between the external (biomedical) and internal (person-related) factors to which low back pain was attributed. It is proposed that, by determining these causal attributions, treatment packages could be tailored to address biases in thinking. This may be particularly useful for individuals who attribute their pain to external (or biomedical) causes.
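
    The reported association between how often a causal path was drawn and how strongly it was rated (r = 0.76) is an ordinary Pearson correlation; a minimal sketch with invented per-path values follows.

        from scipy.stats import pearsonr

        path_frequency = [41, 35, 28, 22, 9, 6]          # invented: diagrams containing each path
        path_strength = [4.2, 3.9, 3.5, 3.1, 2.0, 1.7]   # invented: mean rated strength
        r, p = pearsonr(path_frequency, path_strength)
        print(f"r = {r:.2f}, p = {p:.3f}")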

    Power dressing and meta-analysis: Incorporating power analysis into meta-analysis

    No full text
    Aims. This paper highlights the lack of consideration given to statistical power in the health and social sciences, a continuing problem for single-study research and, more importantly, for meta-analysis. Background. The power of a study is the probability that it will lead to a statistically significant result. By ignoring power, single-study researchers make it difficult for negative results to be published, and thereby affect meta-analysis through publication bias. Researchers using meta-analysis who also ignore power then compound the problem by including studies with low power that are more likely to show significant effects. Method. A simple means of calculating an easily understood measure of effect size from a contingency table is demonstrated. A computer programme for determining the power of a study is recommended, and a method of reflecting the adequacy of the power of the studies in a meta-analysis is suggested. An example of the calculation, from a meta-analytic study on intravenous magnesium that produced inaccurate results, is provided. Conclusion. It is demonstrated that incorporating power analysis into this meta-analysis would have prevented misleading conclusions from being reached. Some suggestions are made for changes in the protocol of meta-analytic studies that highlight the importance of power analysis.
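
    A sketch of the kind of power calculation the paper recommends before a trial enters a meta-analysis, using statsmodels: an effect size (Cohen's h) is derived from the two event rates of a 2x2 contingency table and converted into power for a given sample size. The event rates and sample size are hypothetical, not the magnesium-trial data.

        from statsmodels.stats.proportion import proportion_effectsize
        from statsmodels.stats.power import NormalIndPower

        es = proportion_effectsize(0.10, 0.05)   # hypothetical control vs treatment event rates
        power = NormalIndPower().power(effect_size=es, nobs1=120, alpha=0.05, ratio=1.0)
        print(f"power = {power:.2f}")            # studies far below 0.80 deserve scrutiny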

    Predicting length of stay in hospital after brain injury

    No full text
    Good predictive measures of rehabilitation outcome are very useful because they allow limited resources to be allocated efficiently. This paper reports a cross-validation of a regression equation that predicts length of hospital stay from a patient's admission score on the modified Barthel index. The equation successfully predicted length of hospital stay in the present study.
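
    The prediction itself is a simple regression; the sketch below fits an ordinary least-squares line relating admission Barthel score to length of stay. The scores, stays, and resulting coefficients are invented, not the equation from the paper.

        import numpy as np

        barthel = np.array([20, 35, 40, 55, 60, 75, 90])         # invented admission scores
        stay_days = np.array([180, 150, 140, 100, 95, 60, 30])   # invented lengths of stay

        slope, intercept = np.polyfit(barthel, stay_days, 1)
        predict = lambda score: intercept + slope * score
        print(f"predicted stay for a Barthel score of 50: {predict(50):.0f} days")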

    Meta-analysis and power: Some suggestions for the use of power in research synthesis

    No full text
    The importance of statistical power is under-recognized in both single-study research and meta-analysis. The power of a study is the probability that it will lead to a statistically significant result. A simple method of establishing the adequacy of the power of a meta-analysis is suggested, and examples from two meta-analytic studies that may have produced inaccurate results are provided. Suggestions are made for changes in the protocol of meta-analytic studies that highlight the importance of power analysis.
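
    One way such an adequacy check could be implemented is to compute each included study's power to detect the meta-analytic effect and flag under-powered studies, as in this sketch; the study names, sample sizes, and target effect (d = 0.3) are hypothetical.

        from statsmodels.stats.power import TTestIndPower

        target_d = 0.3
        solver = TTestIndPower()
        for name, n_per_group in [("Study A", 30), ("Study B", 80), ("Study C", 250)]:
            power = solver.power(effect_size=target_d, nobs1=n_per_group, alpha=0.05, ratio=1.0)
            flag = "" if power >= 0.8 else "  <- under-powered"
            print(f"{name}: power = {power:.2f}{flag}")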

    Reliability of health information on the Internet: An examination of experts’ ratings

    No full text
    Background: The use of medical experts to rate the content of health-related sites on the Internet has flourished in recent years. In this research, it has been common practice to use a single medical expert to rate the content of the Web sites. In many cases, the expert has rated the Internet health information as poor, and even potentially dangerous. One problem with this approach, however, is that there is no guarantee that other medical experts would rate the sites in a similar manner. Objectives: The aim was to assess the reliability of medical experts' judgments of threads in an Internet newsgroup related to a common disease. A secondary aim was to show the limitations of commonly used statistics for measuring reliability (e.g., kappa). Method: The participants were 5 medical doctors working in a specialist unit dedicated to the treatment of the disease. They each rated the information contained in newsgroup threads using a 6-point scale designed by the experts themselves. Their ratings were analyzed for reliability using a number of statistics: Cohen's kappa, gamma, Kendall's W, and Cronbach's alpha. Results: Reliability was absent for ratings of questions and low for ratings of responses. The various measures of reliability gave conflicting results, and no measure produced high reliability. Conclusions: The medical experts showed low agreement when rating the postings from the newsgroup. It is therefore important to test inter-rater reliability in research assessing the accuracy and quality of health-related information on the Internet. A discussion of the different measures of agreement reveals that the choice of statistic can be problematic, so it is important to consider the assumptions underlying a measure of reliability before using it. Often, more than one measure will be needed for "triangulation" purposes.
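
    Two of the agreement statistics compared in the paper can be illustrated in a few lines of Python: mean pairwise Cohen's kappa and Kendall's W computed from a raters-by-threads matrix. The 5 x 8 ratings matrix is invented, and the W formula below omits the tie correction for brevity.

        import numpy as np
        from itertools import combinations
        from scipy.stats import rankdata
        from sklearn.metrics import cohen_kappa_score

        ratings = np.array([   # rows: 5 raters, columns: 8 newsgroup threads (invented)
            [3, 4, 2, 5, 1, 3, 4, 2],
            [3, 5, 2, 4, 2, 3, 5, 1],
            [4, 4, 1, 5, 1, 2, 4, 2],
            [2, 5, 3, 4, 2, 3, 5, 2],
            [3, 4, 2, 5, 1, 4, 4, 1],
        ])

        kappas = [cohen_kappa_score(ratings[i], ratings[j])
                  for i, j in combinations(range(len(ratings)), 2)]
        print(f"mean pairwise kappa = {np.mean(kappas):.2f}")

        ranks = np.apply_along_axis(rankdata, 1, ratings)   # rank threads within each rater
        m, n = ranks.shape
        R = ranks.sum(axis=0)
        W = 12 * ((R - R.mean()) ** 2).sum() / (m ** 2 * (n ** 3 - n))
        print(f"Kendall's W = {W:.2f}")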

    The Role of Syllables in Anagram Solution: A Rasch Analysis

    Get PDF
    Anagrams are frequently used by experimental psychologists interested in how the mental lexicon is organized. Until very recently, research has overlooked the importance of syllable structure in solving anagrams and assumed that solution difficulty was mainly due to frequency factors (e.g., bigram statistics). The present study uses Rasch analysis to demonstrate that the number of syllables is a very important factor influencing anagram solution difficulty for both good and poor problem solvers, with polysyllabic words being harder to solve. Furthermore, it suggests that syllable frequency may affect solution times for polysyllabic words, with words containing more frequent syllables being more difficult to solve. The study illustrates the advantages of Rasch analysis for reliable and unidimensional measurement of item difficulty.
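
    A minimal sketch of the Rasch (one-parameter logistic) model underlying such an analysis: the probability of solving an anagram depends only on the difference between a solver's ability and the item's difficulty, and both sets of parameters can be estimated by maximising the joint likelihood. The response matrix is invented, and dedicated Rasch software adds many refinements (e.g., conditional estimation, fit statistics).

        import numpy as np
        from scipy.optimize import minimize

        responses = np.array([   # rows: solvers, columns: anagrams (1 = solved; invented)
            [1, 1, 1, 0, 0],
            [1, 1, 0, 1, 0],
            [1, 0, 1, 0, 1],
            [0, 1, 1, 1, 0],
        ])
        n_persons, n_items = responses.shape

        def neg_log_likelihood(params):
            theta = params[:n_persons]             # person abilities
            beta = params[n_persons:]              # item difficulties
            p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
            nll = -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
            return nll + 1e-4 * np.sum(params ** 2)   # tiny ridge term fixes the scale

        result = minimize(neg_log_likelihood, np.zeros(n_persons + n_items))
        difficulties = result.x[n_persons:]
        print(np.round(difficulties - difficulties.mean(), 2))   # centred item difficulties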