28 research outputs found
Interacting with Interviewers in Voice and Text Interviews on Smartphones
As people increasingly adopt SMS text messaging for communicating in their daily lives, texting becomes a potentially important way to interact with survey respondents, who may expect that they can communicate with survey researchers as they communicate with others. Thus far our evidence from analyses of 642 iPhone interviews suggests that text interviewing can lead to higher quality data (less satisficing, more disclosure) than voice interviews on the same device, whether the questions are asked by an interviewer or an automated system. Respondents also report high satisfaction with text interviews, with many reporting that text is more convenient because they can continue with other activities while responding. But the interaction with an interviewer in a text interview is substantially different from that in a voice interview, with much less of a sense of the interviewer's social presence as well as quite different time pressure. In principle, this suggests there should be different potential for interviewer effects in text than in voice. In this paper we report analyses of how text interviews differed from voice interviews in our corpus, as well as how interviews with human interviewers differed from interviews with automated interviewing systems in both modes, based on transcripts and coding of multiple features of the interaction. Text interviews took more than twice as long as voice interviews, but much of that time fell between turns (text messages); the total number of turns was about two-thirds as many as in voice interviews. As in the voice interviews, text interviews with human interviewers involved a small but significantly greater number of turns than text interviews with automated systems, not only because respondents engaged in "small talk" with human interviewers but also because they requested clarification and help with the survey task more often than with the automated text interviewer. Respondents were more likely to type out full response options (as opposed to equally acceptable single-character responses) with a human text interviewer. Analyses of the content and format of text interchanges compared to voice interchanges demonstrate both potential improvements in data quality and ease for respondents, but also pitfalls and challenges that a more asynchronous mode brings. The "anytime anywhere" qualities of text interviewing may reduce pressure to answer quickly, allowing respondents to answer more thoughtfully and to consult records even if they are mobile or multitasking. From a Total Survey Error perspective, the more streamlined nature of text interaction, which largely reduces the interview to its essential question-asking and -answering elements, may help reduce the potential for unintended interviewer influence.
Chapter 13: Interacting with interviewers in text and voice interviews on smartphones. Appendix 13
Appendix A: Example human text and voice interchange that includes clarification.
Appendix B: Coding Manual
Appendix A13C.1 (Data) attached below
Time to full enteral feeding for very low-birth-weight infants varies markedly among hospitals worldwide but may not be associated with incidence of necrotizing enterocolitis: The NEOMUNE-NeoNutriNet Cohort Study
Background: Transition to enteral feeding is difficult for very low-birth-weight (VLBW; ≤1500 g) infants, and optimal nutrition is important for clinical outcomes. Method: Data on feeding practices and short-term clinical outcomes (growth, necrotizing enterocolitis [NEC], mortality) in VLBW infants were collected from 13 neonatal intensive care units (NICUs) on 5 continents (n = 2947). Specifically, 5 NICUs in Guangdong province in China (GD), mainly using formula feeding and slow feeding advancement (n = 1366), were compared with the remaining NICUs (non-GD, n = 1581; Oceania, Europe, United States, Taiwan, Africa) using mainly human milk with faster advancement rates. Results: Across NICUs, large differences were observed for time to reach full enteral feeding (TFF; 8–33 days), weight gain (5.0–14.6 g/kg/day), Δz-scores (−0.54 to −1.64), incidence of NEC (1%–13%), and mortality (1%–18%). Adjusted for gestational age, GD units had longer TFF (26 vs 11 days), lower weight gain (8.7 vs 10.9 g/kg/day), and more days on antibiotics (17 vs 11 days; all P < .001) than non-GD units, but NEC incidence and mortality were similar. Conclusion: Feeding practices for VLBW infants vary markedly around the world. Use of formula and long TFF in South China was associated with more use of antibiotics and slower weight gain, but apparently not with more NEC or higher mortality. Both infant- and hospital-related factors influence feeding practices for preterm infants. Multicenter, randomized controlled trials are required to identify the optimal feeding strategy during the first weeks of life.
Precision and Disclosure in Text and Voice Interviews on Smartphones
As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. In the study reported here, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. Ten interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher quality data (fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information) than voice interviews, both with human and automated interviewers. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey.
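As a rough illustration of the first two data-quality indicators named above, the sketch below computes the share of round-looking numerical answers and a simple differentiation score from coded responses. The function names, the multiples-of-5 rounding heuristic, and the sample data are illustrative assumptions, not the authors' actual operationalization.

```python
from typing import Sequence

def share_rounded(answers: Sequence[int], base: int = 5) -> float:
    """Fraction of numerical answers that are multiples of `base`.
    Treating multiples of 5 as 'rounded' is an illustrative heuristic,
    not necessarily the paper's definition of (im)precision."""
    return sum(a % base == 0 for a in answers) / len(answers)

def differentiation(scale_answers: Sequence[int]) -> float:
    """Share of distinct values used across a battery rated on one common
    scale; higher values mean more differentiated (less straight-lining)."""
    return len(set(scale_answers)) / len(scale_answers)

# Hypothetical respondent data, for illustration only.
numeric_answers = [20, 35, 7, 40, 12]    # e.g. answers to "how many ..." items
battery_answers = [3, 3, 4, 2, 3, 5]     # six items sharing a 1-5 scale

print(share_rounded(numeric_answers))    # 0.6 -> three of five look rounded
print(differentiation(battery_answers))  # 0.666... -> four distinct values of six
```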
Participation, response rates and break-off rates.
Notes: (a) The response rate (known as AAPOR RR1 [29]) is calculated as the number of complete interviews divided by the number of invitations. (b) The break-off rate is calculated as the number of people who dropped off during the survey divided by the number of people who started.
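Translated directly into code, the two definitions above are simple ratios. A minimal sketch follows; the counts are made up for illustration and are not figures from this study.

```python
def response_rate_rr1(complete_interviews: int, invitations: int) -> float:
    """AAPOR RR1: complete interviews divided by invitations (note a)."""
    return complete_interviews / invitations

def break_off_rate(dropped_off: int, started: int) -> float:
    """People who dropped off mid-survey divided by people who started (note b)."""
    return dropped_off / started

# Hypothetical counts, not figures from this study.
print(f"RR1: {response_rate_rr1(500, 1200):.1%}")        # RR1: 41.7%
print(f"Break-off rate: {break_off_rate(60, 560):.1%}")  # Break-off rate: 10.7%
```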
Interview duration and median number of turns per survey question.
These timelines display the median duration of question-answer sequences with the median number of turns after each question.