
    Estimating Support for Extremism and Its Correlates: The Case of Pakistan

    The extent of support for extremist ideology is a major area of concern for both policy makers and academic researchers. Identifying the extent and correlates of a difficult-to-measure concept such as extremist ideology is often limited by the use of a single imperfect indicator. This paper outlines one approach, latent class analysis (LCA), to overcome this issue and uses the example of estimating support for such ideology in Pakistan. Using survey data from Pakistani men, the level of support is estimated with an LCA that employs several indicators related to extremism. The results suggest that although most Pakistanis are not supportive of extremist ideology, a substantively important portion of men are supportive. LCA also allows for class assignment, which is useful for understanding covariate relationships with the latent variable. Based on the results of the LCA, respondents are assigned to different classifications of extremist support, and a continuation-ratio logistic regression model is employed, allowing more covariates to be examined. The results suggest that a number of characteristics are important in influencing support within this subset of the population. In particular, younger and less educated men are more likely to support extremist ideology. The results point to a potentially useful methodology for understanding extremism, as well as a greater understanding of the problem of extremist support.
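
    The continuation-ratio model mentioned above can be fitted as a sequence of conditional binary logits: one logit per non-terminal category, restricted to respondents who have "reached" that category. A minimal sketch with simulated data; the class labels (0 = no support, 1 = some support, 2 = strong support) and the covariates (age, education) are illustrative assumptions, not the paper's variables:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated data standing in for LCA class assignments and two covariates.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "educ": rng.integers(0, 16, n),
})
latent = -0.03 * df["age"] - 0.05 * df["educ"] + rng.normal(size=n)
df["support"] = pd.cut(latent, [-np.inf, -1.0, 0.5, np.inf], labels=[0, 1, 2]).astype(int)

X = sm.add_constant(df[["age", "educ"]])

# Continuation-ratio logits: for each non-terminal category k, model
# P(Y = k | Y >= k) with a binary logit on the "at risk" subset.
for k in (0, 1):
    at_risk = df["support"] >= k
    y = (df.loc[at_risk, "support"] == k).astype(int)   # stop at k vs. continue beyond k
    fit = sm.Logit(y, X.loc[at_risk]).fit(disp=False)
    print(f"Continuation ratio for category {k}:")
    print(fit.params)
```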

    Numeric Estimation and Response Options: An Examination of the Accuracy of Numeric and Vague Quantifier Responses

    Many survey questions ask respondents to provide responses that contain quantitative information, often using either numeric open-ended responses or vague quantifier scales. Generally, survey researchers have argued against the use of vague quantifier scales. However, no study has compared accuracy between vague quantifiers and numeric open-ended responses. This study is the first to do so, using a unique data set created through an experiment. 124 participants studied lists of paired words in a 2 (context) x 2 (response form) x 6 (actual frequency) factorial design, with the context and form factors manipulated between subjects and the frequency factor manipulated within subjects. In the two context conditions, the context word was either the same or different at each presentation of the target word. The other between-subjects factor was response form: participants responded to a recall test using either vague quantifiers or numeric open-ended responses. Translations of the vague quantifiers were obtained and used in accuracy tests. Finally, a numeracy test was administered to collect information about respondent numeracy. Different accuracy measures are estimated and analyzed. Results show that context memory did not have a significant effect. Numeracy had an effect, but its direction depended on form and context. Actual frequency had a significant effect on accuracy but did not interact with other variables. Importantly, the results suggest that vague quantifiers more often improve accuracy relative to numeric open-ended responses.
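
    A worked illustration of the accuracy comparison described above, assuming hypothetical translation values for the vague quantifier labels and toy responses (the study's elicited translations and data are not reproduced here):

```python
import pandas as pd

# Hypothetical translations of vague quantifier labels into expected frequencies.
translations = {"never": 0, "rarely": 1, "sometimes": 3, "often": 6, "very often": 9}

# Toy responses: actual presentation frequency, a numeric open-ended report,
# and a vague quantifier report for the same items.
data = pd.DataFrame({
    "actual":  [0, 1, 3, 6, 9, 12],
    "numeric": [0, 2, 2, 7, 6, 15],
    "vague":   ["never", "rarely", "sometimes", "often", "often", "very often"],
})
data["vague_value"] = data["vague"].map(translations)

# One possible accuracy measure: mean absolute deviation from the true frequency.
mad_numeric = (data["numeric"] - data["actual"]).abs().mean()
mad_vague = (data["vague_value"] - data["actual"]).abs().mean()
print(f"MAD numeric: {mad_numeric:.2f}, MAD vague: {mad_vague:.2f}")
```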

    Usage and impact metrics for Parliamentary libraries

    Parliamentary libraries are important in supporting informed decision-making in democracies. Understanding Members’ information needs is important, but the usage and impact of these libraries have been less explored. The United Kingdom’s House of Lords Library is studied as a particular example, with data collected and analysed using techniques from the field of data science. These techniques are useful for extracting information from existing sources that may not have been designed for the purpose of data collection. A number of data sources available at the Lords Library are outlined, and an example of how these data can be used to understand Library usage and impact is presented. Results suggest that Member usage varies significantly and that there is a small but significant relationship between usage and making speeches in the chamber. Further work should explore other indicators of impact, but these methods show promise for creating library metrics, particularly in Parliamentary settings.
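
    The usage-impact relationship mentioned above could be examined with a simple count regression. A minimal sketch with simulated per-Member counts; the variables (Library enquiries, chamber speeches) are illustrative assumptions, not the Library's actual data sources:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated per-Member counts standing in for Library usage and chamber speeches.
rng = np.random.default_rng(1)
n_members = 200
usage = rng.poisson(5, n_members)                # e.g. enquiries to the Library
speeches = rng.poisson(3 + 0.3 * usage)          # chamber speeches, weakly tied to usage

# A simple Poisson regression of speeches on Library usage.
X = sm.add_constant(pd.Series(usage, name="usage"))
model = sm.GLM(speeches, X, family=sm.families.Poisson()).fit()
print(model.summary().tables[1])
```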

    The Longitudinal Item Count Technique: A New Technique for Asking Sensitive Questions in Surveys

    Asking respondents sensitive questions directly may lead to socially desirable responding. As an alternative, some have proposed using the Item Count Technique (ICT). The problem with ICT methods is that they can have low statistical efficiency and do not provide an indicator of the behavior at the respondent level. We propose a new variant of the ICT to overcome these issues: the Longitudinal Item Count Technique (LICT). Instead of administering different lists (one including the sensitive item and one without) to two random groups in a single survey, the LICT administers both lists to each respondent, but at different survey waves. The sensitive attribute can then be estimated as the difference within individuals across waves. Like the ICT, the LICT can be extended to a two-list version. In this paper we discuss the assumptions, implementation, limitations, and ethical implications of this novel technique, and present an application of the method in the Understanding Society Innovation Panel, estimating the prevalence of the gay, lesbian, and bisexual population in the United Kingdom. In this first application, the LICT in some ways appeared to provide better estimates than the traditional ICT, but also produced some inconsistencies in estimates. We discuss the implications of these results and point to routes for further research.
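
    The within-person difference estimator at the heart of the LICT can be sketched in a few lines. A minimal illustration with simulated data, assuming (as the technique requires) that answers to the non-sensitive items are stable across waves; the variable names and prevalence value are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
true_prevalence = 0.05

# Count of "yes" answers to the non-sensitive list items, and whether the
# respondent holds the sensitive attribute.
non_sensitive = rng.binomial(4, 0.5, n)
holds_attribute = rng.binomial(1, true_prevalence, n)

# Each respondent answers the short list in one wave and the long list
# (non-sensitive items plus the sensitive item) in the other wave.
count_short_wave = non_sensitive                      # assumes stable non-sensitive answers
count_long_wave = non_sensitive + holds_attribute

# LICT estimator: mean of the within-person difference across waves.
lict_estimate = np.mean(count_long_wave - count_short_wave)
print(f"Estimated prevalence: {lict_estimate:.3f} (true value {true_prevalence})")
```

    The classic ICT instead takes the difference in mean list counts between two independent random groups, which is why it cannot recover a respondent-level indicator; the within-person difference is what the longitudinal design adds.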

    Linking Twitter and Survey Data: The Impact of Survey Mode and Demographics on Consent Rates Across Three UK Studies.

    In light of issues such as increasing unit nonresponse in surveys, several studies argue that social media sources such as Twitter can be used as a viable alternative. However, there are also a number of shortcomings with Twitter data, such as questions about its representativeness of the wider population and the inability to validate whose data is being collected. A useful way forward could be to combine survey and Twitter data to supplement and improve both. To do so, consent within a survey is first needed. This study explores the consent decisions in three large representative surveys of the adult British population to link Twitter data to survey responses, and the impact that demographics and survey mode have on these outcomes. Findings suggest that consent rates for data linkage are relatively low and that this is in part mediated by mode, with face-to-face surveys having higher consent rates than web versions. These findings are important for understanding the potential for linking Twitter and survey data, but also for the consent literature more generally.
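
    The kind of analysis described above can be framed as a logistic regression of the consent decision on survey mode and demographics. A minimal sketch with simulated data; the modes, demographic variables, and coefficient values are illustrative assumptions, not the studies' results:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated consent data: whether a respondent agreed to Twitter linkage,
# survey mode, and two illustrative demographics.
rng = np.random.default_rng(3)
n = 1500
df = pd.DataFrame({
    "mode": rng.choice(["face_to_face", "web"], n),
    "age": rng.integers(18, 80, n),
    "female": rng.integers(0, 2, n),
})
p = 1 / (1 + np.exp(-(-1.0 + 0.6 * (df["mode"] == "face_to_face") - 0.01 * df["age"])))
df["consent"] = rng.binomial(1, p)

# Logistic regression of consent on mode and demographics.
fit = smf.logit("consent ~ C(mode) + age + female", data=df).fit(disp=False)
print(fit.summary().tables[1])
```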

    Understanding Society Innovation Panel Wave 6: results from methodological experiments

    This paper presents some preliminary findings from Wave 6 of the Innovation Panel (IP6) of Understanding Society: The UK Household Longitudinal Study. Understanding Society is a major panel survey in the UK. In March 2013, the sixth wave of the Innovation Panel went into the field. IP6 used a mixed-mode design, combining online interviews and face-to-face interviews. This paper describes the design of IP6, the experiments carried out, and the preliminary findings from early analysis of the data.

    The Effect of Online and Mixed-Mode Measurement of Cognitive Ability

    A number of studies, particularly longitudinal surveys, are collecting direct measures of cognitive ability, given its importance as a measure in social science research. As longitudinal studies increasingly switch to mixed-mode data collection, frequently including a web component, differences in survey outcomes, including cognitive ability, may result from mode effects. Differences may arise because respondents self-select into modes or because the mode causes differential measurement. Using a longitudinal survey that measured cognitive ability after introducing a mixed-mode design with a web component, this research explores whether and how mode affects cognitive ability outcomes. The survey allows for control of several possible selection mechanisms, including a limited set of direct cognitive ability measures collected in a single mode in an earlier wave. Findings presented here show clearly that web respondents do better on a number of cognitive ability indicators. However, it does not appear that this is wholly explainable by respondents of different ability self-selecting into particular modes. Rather, it appears that measurement of cognitive ability may differ across modes. This result is potentially problematic, as comparability is a key component of using cognitive ability in further research.
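
    The selection-versus-measurement logic above can be illustrated by comparing a mode coefficient with and without a prior-wave ability control. A minimal sketch with simulated panel data; the effect sizes and variable names are assumptions for illustration only:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel: a prior-wave cognitive score measured in a single mode,
# the current mode, and a current-wave score that includes a mode-related shift.
rng = np.random.default_rng(4)
n = 1200
prior = rng.normal(0, 1, n)
web = rng.binomial(1, 1 / (1 + np.exp(-0.5 * prior)))   # higher-ability cases more likely to go web
current = prior + 0.3 * web + rng.normal(0, 0.5, n)      # 0.3 = simulated measurement shift under web
df = pd.DataFrame({"prior": prior, "web": web, "current": current})

# Without the prior-wave control, the mode coefficient mixes selection and measurement.
print(smf.ols("current ~ web", data=df).fit().params)
# With the control, the remaining mode coefficient is closer to the measurement effect.
print(smf.ols("current ~ web + prior", data=df).fit().params)
```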

    Numeric Estimation and Response Options: An Examination of the Measurement Properties of Numeric and Vague Quantifier Responses

    Many survey questions ask respondents to provide responses that contain quantitative information. These questions are often asked using open-ended numeric responses, while others use vague quantifier scales. How these questions are asked, particularly in terms of the response format, can have an important impact on the data. The response format is therefore of particular importance for ensuring that any use of the data contains the best possible information. Generally, survey researchers have argued against the use of vague quantifier scales. This dissertation compares various measurement properties of numeric open-ended and vague quantifier responses, using three studies containing questions with both formats. The first study uses new experimental data to compare accuracy between the formats; the second and third use existing data to compare the predictive validity of the two formats, with one examining behavioral reports and the other examining subjective probabilities. All three studies examine the logical consistency between measures and the potential correlates of improved measurement properties. Importantly, these studies examine the influence of numeracy, a potentially important but rarely examined variable. The results of the three studies indicate that vague quantifiers may have better measurement properties than numeric open-ended responses, contrary to many researchers’ arguments. Studies 2 and 3 are clearest about this increased strength; in both studies, across a number of tests, the predictive validity of vague quantifiers was consistently greater than that of numeric open-ended responses, regardless of numeracy level. Study 1 shows that, in general, vague quantifiers result in more accurate data than numeric responses, although this finding depends on other factors, such as numeracy. Overall, numeracy was infrequently found to be important, but at times it did have an impact on accuracy. Further, across the three studies, the two formats were logically consistent when translations between the questions were directly asked for, but inconsistency occurred when there was no direct translation. Advisor: Robert Bell
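
    The predictive validity comparisons described above amount to correlating each response format with a later criterion and comparing the strengths of association. A minimal sketch with simulated data; the noise levels and the five-point vague quantifier scale are illustrative assumptions:

```python
import numpy as np
import pandas as pd

# Simulated illustration of a predictive validity comparison: correlate each
# response format with a later criterion measure and compare the strengths.
rng = np.random.default_rng(5)
n = 500
criterion = rng.normal(0, 1, n)                        # e.g. a later behavioural report
numeric = criterion + rng.normal(0, 1.2, n)            # numeric open-ended report (noisier here)
vague = pd.cut(criterion + rng.normal(0, 0.8, n), 5, labels=False)  # 5-point vague quantifier scale

df = pd.DataFrame({"criterion": criterion, "numeric": numeric, "vague": vague})
print(df.corr(method="spearman")["criterion"])
```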

    Last Year Your Answer Was ...: The Impact of Dependent Interviewing Wording and Survey Factors on Reporting of Change

    Prior studies suggest memories are potentially error-prone. Proactive dependent interviewing (PDI) is a possible method for reducing errors in reports of change in longitudinal studies: respondents are reminded of their previous answers while being asked whether anything has changed since the last survey. However, little research has been conducted on the impact of PDI question wording. This study examines the impact of PDI wording on change reports and how these wordings interact with other survey features such as mode, question content, and prior change. Experimental results indicate that asking about change in an unbalanced fashion initially leads to more reports of change than other wordings, but only in a face-to-face survey. Follow-up questions led to final change reports that were similar across all wordings, but this necessitates asking additional questions. Findings suggest that asking PDI questions with change as the initial option should be avoided.
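
    A minimal sketch of how a PDI item of this kind might be assembled and routed, assuming a hypothetical prior-wave answer and two illustrative wordings (neither is the exact wording tested in the study):

```python
def pdi_question(prior_answer: str, wording: str) -> str:
    """Build a proactive dependent interviewing question from a prior-wave answer."""
    reminder = f"Last year your answer was '{prior_answer}'. "
    if wording == "balanced":
        # The balanced form offers "still the same" and "changed" explicitly.
        return reminder + "Is this still the case, or has it changed?"
    # The unbalanced form asks only about change.
    return reminder + "Has this changed since then?"

def ask_follow_up(initial_report_of_change: bool) -> bool:
    # Follow-up questions confirm or detail an initial report of change.
    return initial_report_of_change

print(pdi_question("employed full time", "balanced"))
print(pdi_question("employed full time", "unbalanced"))
```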