Techniques for asking sensitive questions in labor market surveys
This dissertation explores methods to improve the quality of data about sensitive labor market topics, such as undeclared work and receipt of basic income support in Germany, using surveys of the general population. Due to the sensitive nature of both topics, respondents may choose to misreport and adjust their answers in accordance with social norms.
Over the past decades, special strategies, particularly targeted at reducing misreporting on sensitive topics, have been developed. One such class of data collection strategies is the so-called "dejeopardizing" techniques, of which the randomized response technique (RRT) and the item count technique (ICT) are the most popular and best investigated. The goal is to elicit more honest answers from respondents by increasing the anonymity of the question-and-answer process. These techniques provide prevalence estimates as well as estimates of regression coefficients, obtained by regressing dependent variables generated by means of the RRT or ICT on a set of covariates of interest.
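As an illustration only (this sketch is not from the dissertation), the prevalence estimators behind the two techniques can be shown on simulated data. All design parameters and the assumed true prevalence below are assumptions chosen for the example: Warner's RRT design estimates prevalence from the overall share of "yes" answers, and the ICT from the difference in mean item counts between a treatment and a control group.

```python
import random

random.seed(1)

TRUE_PREVALENCE = 0.15  # assumed true share of the sensitive behavior
n = 50000               # respondents per group (illustrative)

# --- Randomized response technique (Warner's design, illustrative) ---
# With probability P_SENSITIVE the respondent answers the sensitive
# statement, otherwise its negation; the interviewer never knows which.
P_SENSITIVE = 0.7

def rrt_answer(has_trait):
    if random.random() < P_SENSITIVE:
        return has_trait       # answers "I have the trait"
    return not has_trait       # answers "I do not have the trait"

traits = [random.random() < TRUE_PREVALENCE for _ in range(n)]
yes_share = sum(rrt_answer(t) for t in traits) / n

# Unbiased prevalence estimator for Warner's design:
# pi_hat = (yes_share - (1 - p)) / (2p - 1)
rrt_prevalence = (yes_share - (1 - P_SENSITIVE)) / (2 * P_SENSITIVE - 1)

# --- Item count technique (illustrative) ---
# The control group counts 4 innocuous items; the treatment group counts
# the same items plus the sensitive one; only totals are reported.
def item_count(with_sensitive):
    count = sum(random.random() < 0.5 for _ in range(4))
    if with_sensitive and random.random() < TRUE_PREVALENCE:
        count += 1
    return count

control = [item_count(False) for _ in range(n)]
treatment = [item_count(True) for _ in range(n)]

# The ICT prevalence estimate is the difference in mean counts.
ict_prevalence = sum(treatment) / n - sum(control) / n

print(round(rrt_prevalence, 2), round(ict_prevalence, 2))
```

Both estimators recover a value close to the assumed prevalence of 0.15; neither requires any individual respondent's answer to reveal the sensitive trait directly.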
However, these dejeopardizing techniques have not been applied to collect data on undeclared work or receipt of welfare benefits in German surveys. This dissertation aims to close this gap using an experimental design that allows us to compare the performance of these dejeopardizing techniques to direct questioning. In 2010 we conducted two telephone surveys on undeclared work and welfare benefit receipt. We experimentally tested whether the RRT, the ICT, or the newly developed item sum technique (IST) reduces bias due to social desirability compared to direct questioning (under the "more-is-better" assumption and using validation data in one study).
Our results suggest that neither the RRT nor the ICT yields unambiguous improvements in the accuracy of reports of socially undesirable behavior, while the IST results were more promising. This dissertation provides insights into a variety of practical and theoretical factors contributing to a successful implementation of the RRT, the ICT, and the IST in labor market surveys.
Techniques for Asking Sensitive Questions in Labour Market Surveys
This dissertation focuses on techniques that are expected to reduce measurement error in labor market surveys due to social desirability concerns. The first part assesses the effectiveness of de-jeopardizing techniques, such as the Randomized Response Technique (RRT) and the Item Count Technique (ICT), when collecting data on undeclared work and receipt of basic income support in Germany. In addition, we developed and applied a new technique, the Item Sum Technique (IST), for eliciting responses to sensitive questions whose answers are continuous variables. The results suggest that neither the RRT nor the ICT increases reports of socially undesirable behavior, whereas the IST results are more promising.
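The IST estimator can be sketched on simulated data (an illustration, not the dissertation's implementation; the quantities, group sizes, and the assumed true mean of 3 hours are invented for the example). A "long list" group reports the sum of an innocuous quantity plus the sensitive quantity, a "short list" group reports only the innocuous quantity, and the difference in group means estimates the mean of the sensitive quantity:

```python
import random

random.seed(7)

# Item sum technique (IST) sketch: respondents report only a total, never
# the sensitive quantity itself. Here the innocuous quantity might be
# weekly TV hours and the sensitive one weekly hours of undeclared work;
# a true mean of about 3 sensitive hours is assumed in the simulation.
def reported_sum(include_sensitive):
    innocuous = random.gauss(10, 3)          # non-sensitive hours
    sensitive = max(0, random.gauss(3, 2))   # sensitive hours, non-negative
    return innocuous + (sensitive if include_sensitive else 0)

n = 20000
long_list = [reported_sum(True) for _ in range(n)]    # innocuous + sensitive
short_list = [reported_sum(False) for _ in range(n)]  # innocuous only

# The IST estimate of the mean sensitive amount is the difference in means.
ist_mean = sum(long_list) / n - sum(short_list) / n
print(round(ist_mean, 1))
```

Because each respondent reports only an aggregate, the design protects anonymity in the same spirit as the ICT, but for continuous rather than binary sensitive variables.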
Examining Changes of Interview Length Over the Course of the Field Period
It is well established that interviewers learn behaviors both during training and on the job. How this learning occurs has received surprisingly little empirical attention: Is it driven by the interviewer herself or by the respondents she interviews? There are two competing hypotheses about what happens during field data collection: (1) interviewers learn behaviors from their previous interviews, and thus change their behavior in reaction to the behaviors previously encountered; and (2) interviewers encounter different types of and, especially, less cooperative respondents (i.e., nonresponse propensity affecting the measurement error situation), leading to changes in interview behaviors over the course of the field period. We refer to these hypotheses as the experience and response propensity hypotheses, respectively. This paper examines the relationship of proxy indicators for the experience and response propensity hypotheses with interview length, using data and paradata from two telephone surveys. Our results indicate that both interviewer-driven experience and respondent-driven response propensity are associated with the length of interview. While general interviewing experience is nonsignificant, within-study experience decreases interview length significantly, even when accounting for changes in sample composition. Interviewers with higher cooperation rates have significantly shorter interviews in study one; however, this effect is mediated by the number of words spoken by the interviewer. We find that older respondents and male respondents have longer interviews despite controlling for the number of words spoken, as do respondents who complete the survey at first contact. Not surprisingly, interviews are significantly longer the more words interviewers and respondents speak.
Panel "Arbeitsmarkt und soziale Sicherung" - Die PASS Campus Files: Datensätze für den Einsatz in der Lehre
"Exclusively for use in academic teaching at universities or research institutes, the IAB generated absolutely anonymized data, so-called campus files (CF), based on the data of the 'Panel Study Labour Market and Social Security' (PASS). This documentation briefly provides a description of the data and its limitations." (Author's abstract, IAB-Doku)
Do Interviewer Postsurvey Evaluations of Respondents' Engagement Measure Who Respondents Are or What They Do? A Behavior Coding Study
Survey interviewers are often tasked with assessing the quality of respondents' answers after completing a survey interview. These interviewer observations have been used to proxy for measurement error in interviewer-administered surveys. How interviewers formulate these evaluations and how well they proxy for measurement error has received little empirical attention. According to dual-process theories of impression formation, individuals form impressions about others based on the social categories of the observed person (e.g., sex, race) and individual behaviors observed during an interaction. Although initial impressions start with heuristic, rule-of-thumb evaluations, systematic processing is characterized by extensive incorporation of available evidence. In a survey context, if interviewers default to heuristic information processing when evaluating respondent engagement, then we expect their evaluations to be primarily based on respondent characteristics and stereotypes associated with those characteristics. Under systematic processing, on the other hand, interviewers process and evaluate respondents based on observable respondent behaviors occurring during the question-answering process. We use the Work and Leisure Today Survey, including survey data and behavior codes, to examine proxy measures of heuristic and systematic processing by interviewers as predictors of interviewer postsurvey evaluations of respondents' cooperativeness, interest, friendliness, and talkativeness. Our results indicate that CATI interviewers base their evaluations on actual behaviors during an interview (i.e., systematic processing) rather than perceived characteristics of the respondent or the interviewer (i.e., heuristic processing). These results are reassuring for the many surveys that collect interviewer observations as proxies for data quality.
The Effect of Question Characteristics on Question Reading Behaviors in Telephone Surveys
Asking questions fluently, exactly as worded, and at a reasonable pace is a fundamental part of a survey interviewer's role. Doing so allows the question to be asked as intended by the researcher and may decrease the risk of measurement error and contribute to rapport. Despite the central importance placed on reading questions exactly as worded, interviewers commonly misread questions, and it is not always clear why. Thus, understanding the risk of measurement error requires understanding how different interviewers, respondents, and question features may trigger question reading problems. In this article, we evaluate the effects of question features on question asking behaviors, controlling for interviewer and respondent characteristics. We also examine how question asking behaviors are related to question-asking time. Using two nationally representative telephone surveys in the United States, we find that longer questions and questions with transition statements are less likely to be read exactly and fluently, that questions with higher reading levels and parentheticals are less likely to be read exactly across both surveys, and that disfluent readings decrease as interviewers gain experience across the field period. Other question characteristics vary in their associations with the outcomes across the two surveys. We also find that inexact and disfluent question readings are longer, but read at a faster pace, than exact and fluent question readings. We conclude with implications for interviewer training and questionnaire design.
Memory Gaps in the American Time Use Survey. Investigating the Role of Retrieval Cues and Respondents' Level of Effort
Unaccounted respondent memory gaps, i.e., those activity gaps that are attributed by interviewers to respondents' memory failure, have serious implications for data quality. We contribute to the existing literature by investigating interviewing dynamics using paradata, distinguishing temporary memory gaps, which can be resolved during the interview, from enduring memory gaps, which cannot be resolved. We investigate factors that are associated with both kinds of memory gaps and how different response strategies are associated with data quality. We investigate two hypotheses that are associated with temporary and enduring memory gaps. The motivated cuing hypothesis posits that respondents who display more behaviors related to the presence and use of retrieval cues throughout the survey will resolve temporary memory gaps more successfully compared to respondents displaying fewer such behaviors. This should result in overall lower levels of enduring memory gaps. The lack of effort hypothesis suggests that respondents who are less eager to participate in the survey will expend less cognitive effort to resolve temporary memory gaps compared to more motivated respondents. This should then result in a positive association with enduring memory gaps and no association with temporary memory gaps. Using survey and paradata from the 2010 ATUS, our analyses indicate that, as hypothesized, behaviors indicating the use of retrieval cues are positively associated with temporary memory gaps and negatively associated with enduring memory gaps. Motivated respondents experiencing memory difficulties overcome what otherwise would result in enduring memory gaps more successfully compared to other respondents. Indicators of lack of effort, such as whether or not the respondent initially refused to participate in the survey, are positively associated with enduring memory gaps, suggesting that reluctant respondents do not resolve memory gaps.
The paper concludes with a discussion of implications for survey research.