205 research outputs found

    Evaluating Mode Effects in Mixed-Mode Survey Data Using Covariate Adjustment Models

    Get PDF
    Abstract The confounding of selection and measurement effects between different modes is a disadvantage of mixed-mode surveys. Solutions to this problem have been suggested in several studies. Most use adjusting covariates to control selection effects. Unfortunately, these covariates must meet strong assumptions, which are generally ignored. This article discusses these assumptions in greater detail and also provides an alternative model for solving the problem. This alternative uses adjusting covariates that explain measurement effects instead of selection effects. The application of both models is illustrated using data from a survey on opinions about surveys, which yields mode effects in line with expectations for the latter model, and mode effects contrary to expectations for the former model. However, the validity of these results depends entirely on the (ad hoc) covariates chosen. Research into better covariates might thus be a topic for future studies.

    The effect of interviewer and respondent characteristics on refusals in a panel survey

    Full text link
    "In this paper, data from an election panel survey are used. The results make clear that respondents who are more interested in politics are more likely to take part in the second interview of an election panel survey, and that the initial contact for the second interview is extremely important for the group of poorly educated women. To evaluate the effect of the interviewer, a multilevel analysis was done. The results of this analysis show that the effect of the interviewers used in 1991 on the refusals in 1995 is stronger than the effect of the interviewers used in 1995. This remarkable result stresses the importance of the experience of the first interview. Several interviewer characteristics were used to model the differences between the interviewers. Only the number of interviews done by an interviewer has a significant effect: more interviews result in more refusals." (author's abstract)

    Comparison of Different Approaches to Evaluate and Explain Interviewer Effects

    Get PDF
    Within survey methodology it is common knowledge that interviewers in face-to-face or telephone interviews can have undesirable effects on the obtained answers. These effects can be created in an active way, for example by asking suggestive questions, or they can arise in a passive way as a consequence of certain interviewer characteristics eliciting socially desirable answers. These active and passive effects may differ from interviewer to interviewer. These differences between interviewers in systematic effects create additional variance in the data. The proportion of variance in a (substantive) variable that can be explained by the interviewers is the so-called between-interviewer variance. It is clear that high between-interviewer variance results in a negative assessment of data quality. Notice that not all types of interviewer effects (e.g. 'pure' interviewer bias) can be evaluated by means of the analysis of interviewer variance. A frequently used measure for the evaluation of interviewer variance is the intraclass correlation coefficient (ICC). This coefficient expresses the homogeneity of the obtained answers within interviewers compared with the heterogeneity of the answers between interviewers. To calculate the within and between variance components it is important to take into account the two-level hierarchical data structure in which respondents are nested within interviewers. A two-level random intercept model with no independent variables is generally the starting point for such an analysis. The model provides estimates of the within- and between-interviewer variance used to calculate the basic value of the ICC. The basic model can be elaborated with interviewer characteristics (e.g. experience, workload, gender, ...) at the interviewer level and respondent characteristics at the respondent level. With the interviewer characteristics one can try to explain the between-interviewer variance.
    If these characteristics partly explain this variance, they give some insight into the mechanism behind the interviewer's effects. In contrast, the evaluation of the impact of respondent characteristics (and characteristics of the interview situation) specified at the respondent level is less obvious. With respondent characteristics one tries to explain the variance of the substantive dependent variable of the model. But in the context of the evaluation of interviewer variance, respondent characteristics are also specified in the model to control for differences between interviewers in the composition of the respondent groups. The impact of the interviewers is evaluated after respondent characteristics have explained part of the variance in the dependent variable. This means that respondent characteristics are used to explain the variance in the substantive dependent variable and that interviewer effects express the variability between interviewers after controlling for these respondent characteristics. Such models do not assess the effect of respondent characteristics on interviewer effects. In fact, the relationship between the respondent characteristics and the interviewer effects is not specified in the model. However, it is reasonable to assume that some respondents are more sensitive to interviewer effects and that in some respondent groups the ICCs are higher. So the specification of the basic multilevel model does not really allow one to investigate the relationship between certain respondent characteristics and the extent to which they influence interviewer effects. In this paper, various alternative specifications of the basic model in which this relationship is explicitly specified are explored and compared with each other.
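    To make the ICC concrete, here is a minimal sketch in Python. It uses the classical one-way ANOVA (method-of-moments) variance-components estimator rather than the random intercept model the abstract describes, and it assumes a balanced design (the same number of respondents per interviewer) purely for brevity; the data are invented.

```python
# Method-of-moments (one-way ANOVA) estimate of the intra-interviewer
# correlation: the share of answer variance attributable to interviewers.
# Assumes a balanced design (same number of respondents per interviewer).

def interviewer_icc(groups):
    """groups: list of lists; groups[i] = answers obtained by interviewer i."""
    n = len(groups)                          # number of interviewers
    k = len(groups[0])                       # respondents per interviewer
    grand = sum(sum(g) for g in groups) / (n * k)
    means = [sum(g) / k for g in groups]

    # Mean squares between and within interviewers
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (n * (k - 1))

    # Variance components: between = (MSB - MSW) / k, within = MSW
    between = max((msb - msw) / k, 0.0)      # truncate negative estimates at 0
    return between / (between + msw)

# Answers homogeneous within each interviewer, heterogeneous between
# interviewers -> ICC close to 1 (a sign of strong interviewer effects)
print(interviewer_icc([[5, 5, 4], [1, 1, 2], [3, 3, 3]]))
```

    In a real analysis one would instead fit the two-level random intercept model the abstract names (e.g. with a mixed-model library) and take ICC = between-variance / (between-variance + within-variance) from its estimates.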


    Chapter 22: A Comparison of Different Approaches to Examining Whether Interviewer Effects Tend to Vary Across Different Subgroups of Respondents. Appendix 22A

    Get PDF
    Table A22A.1 Substantive Questions about Climate Change and Energy (Module D) and Welfare Attitudes (Module E) Included in the Analysis Syntax

    'Don’t Know' Responses to Survey Items on Trust in Police and Criminal Courts: A Word of Caution

    Get PDF
    In 2010 the European Social Survey included a module on public trust in national police and criminal courts. The included questions were especially susceptible to item nonresponse. This study examines the interviewer and country variability in responding "I don't know" to these questions using a beta-binomial logistic mixed model, controlling for demographic background variables. The results show that there are large differences between interviewers and countries which are not due to underlying demographic differences between the respondents. These differences in data quality between interviewers and countries make (inter)national comparisons more difficult. More importantly, these missing values could presumably be avoided with sound data collection methods and interviewer training.
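    As a descriptive precursor to the analysis described above (not the beta-binomial mixed model the study actually fits), one can tabulate raw "don't know" rates per interviewer and per country to see the variability before any model-based adjustment. The sketch below is illustrative; the record layout and values are invented.

```python
# Tabulate raw "don't know" (DK) rates by country and by interviewer.
# A descriptive first look only; it does not adjust for demographics the way
# the beta-binomial logistic mixed model in the study does.

from collections import defaultdict

def dk_rates(records):
    """records: iterable of (country, interviewer_id, n_items, n_dont_know)."""
    by_country = defaultdict(lambda: [0, 0])      # [dk_count, item_count]
    by_interviewer = defaultdict(lambda: [0, 0])
    for country, interviewer, n_items, n_dk in records:
        by_country[country][0] += n_dk
        by_country[country][1] += n_items
        by_interviewer[interviewer][0] += n_dk
        by_interviewer[interviewer][1] += n_items
    rate = lambda d: {key: dk / n for key, (dk, n) in d.items()}
    return rate(by_country), rate(by_interviewer)

countries, interviewers = dk_rates([
    ("BE", "int1", 10, 1),
    ("BE", "int2", 10, 4),   # int2 elicits far more "don't know" answers
    ("EE", "int3", 10, 0),
])
print(countries["BE"], interviewers["int2"])
```

    Large spreads in these raw rates across interviewers within the same country are the kind of pattern that motivates the interviewer-level random effects in the study's model.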

    A Procedure to Assess Interviewer Effects on Nonresponse Bias

    Full text link
    It is generally accepted that interviewers have a considerable effect on survey response. The difference between response success and failure affects not only the response rate, but can also influence the composition of the realized sample or respondent set, and consequently introduce nonresponse bias. To measure these two different aspects of the obtained sample, response propensities will be used. They have an aggregate mean and variance that can both be used to construct quality indicators for the obtained sample of respondents. As these propensities can also be measured at the interviewer level, this allows evaluation of the interviewer group and of the extent to which individual interviewers contribute to a biased respondent set. In this article, a procedure based on a multilevel model with random intercepts and random slopes is elaborated and illustrated. The results show that the procedure is informative for detecting influential interviewers with an impact on nonresponse bias.
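    The idea of turning propensity means and variances into quality indicators can be sketched as follows. This is a minimal illustration, not the article's exact procedure: it assumes the response propensities have already been estimated elsewhere (e.g. by a multilevel logistic model), uses an R-indicator-style balance measure (1 minus twice the propensity standard deviation) as the sample-level summary, and flags interviewers whose mean propensity deviates strongly from the overall mean; the threshold and data are invented.

```python
# Summarise estimated response propensities overall and per interviewer.
# Sketch only: assumes propensities were estimated by a separate model.

from statistics import mean, pstdev

def propensity_summary(propensities_by_interviewer):
    """propensities_by_interviewer: dict interviewer_id -> list of propensities."""
    all_p = [p for ps in propensities_by_interviewer.values() for p in ps]
    overall_mean = mean(all_p)
    # R-indicator-style balance measure: equals 1 when all propensities are
    # identical (a perfectly representative respondent set), lower otherwise.
    r_indicator = 1.0 - 2.0 * pstdev(all_p)
    per_interviewer = {i: mean(ps) for i, ps in propensities_by_interviewer.items()}
    # Flag interviewers whose mean propensity deviates notably from the
    # overall mean (0.2 is an arbitrary illustrative threshold).
    flagged = [i for i, m in per_interviewer.items() if abs(m - overall_mean) > 0.2]
    return overall_mean, r_indicator, flagged

overall, r, flagged = propensity_summary({
    "A": [0.6, 0.7, 0.65],
    "B": [0.6, 0.55, 0.6],
    "C": [0.2, 0.25, 0.15],  # much lower propensities -> flagged as influential
})
print(flagged)
```

    The article's multilevel model with random intercepts and slopes would additionally separate genuine interviewer effects from differences in the cases each interviewer was assigned, which this raw comparison cannot do.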

    Measuring the survey climate: the Flemish case

    Get PDF
    Researchers in several countries have regularly reported decreasing response rates for surveys and the need for increased efforts to attain an acceptable response rate: two things that can be seen as signs of a worsening survey climate. At the same time, differences between countries and surveys with regard to the actual level and evolution of response rates have also been noted. Some of these differences are probably linked to differences in survey content or design. This may hinder the study of the evolving survey climate over time based on different surveys in different countries, because more readily comparable conditions are desirable. An optimal opportunity for describing the changing survey climate is offered by the Survey of Social-Cultural Changes in Flanders. We analyse yearly data from 1996 to 2013 to examine the evolution of several survey climate indicators. Some indicators reveal a declining survey climate, such as an increased refusal rate and a greater number of contact attempts per respondent. Other indicators reveal a stable survey climate, such as a stable response rate and respondents' positive, stable attitude towards surveys. Results show that, within the same survey, one can compensate for a negative evolution by increasing the efforts made to ensure completed interviews.

    Assessing the use of mode preference as a covariate for the estimation of measurement effects between modes: a sequential mixed mode experiment

    Get PDF
    "Mixed mode surveys are presented as a solution to increasing survey costs and decreasing response rates. The disadvantage of such designs is the lack of control over mode effects and the interaction between selection and measurement effects. In a mixed mode survey, measurement effects can put into doubt data comparability between subgroups, or similarly between waves or rounds of a survey conducted using different modes. To understand the extent of measurement effects, selection and measurement effects between modes have to be disentangled. Almost all techniques to separate these effects depend on covariates that are assumed to be mode-insensitive and to fully explain selection effects. Most of the time, these covariates are sociodemographic variables that might be mode-insensitive, but fail to sufficiently explain selection effects. The aim of this research is to assess the performance of mode preference variables as covariates to evaluate selection and measurement effects between modes. In 2012, a mixed mode survey - a web questionnaire followed by face-to-face interviews - was conducted alongside the face-to-face European Social Survey in Estonia (Ainsaar et al., 2013). The questionnaire included mode preference items. In this paper, the effects of the trade-offs between the two assumptions on the precision of estimated selection and measurement effects are compared. The results show that while adding the mode preference to the propensity score model seems to increase the explanatory power of web participation, it decreases the correlation between propensity scores and target variables. In addition, the estimated selection and measurement effects do not always fit the expectation that more selection effects are explained and more measurement effects are detected." (author's abstract

    Understanding and Improving the External Survey Environment of Official Statistics

    Get PDF
    We argue for renewed efforts to improve the external survey environment for official statistics. We introduce the concept of social marketing as one novel way of achieving this. We also propose measuring the survey-taking climate and the related changes on the societal level using a 'survey climate barometer'. Finally, by presenting current and potential initiatives planned by Statistics Canada, we illustrate activities that national statistical institutes could implement with the goal of positively influencing their external survey environment