Conducting Web-Based Surveys
Web-based surveying is becoming widely used in social science and educational research. The Web offers significant advantages over more traditional survey techniques; however, there are still serious methodological challenges with this approach. Currently, coverage bias (the fact that significant numbers of people do not have access to, or choose not to use, the Internet) is of most concern to researchers. Survey researchers also have much to learn concerning the most effective ways to conduct surveys over the Internet. While in its early stages, research on Internet-based survey methodology has identified a number of factors that influence data quality. Of note, several studies have found that Internet surveys have significantly lower response rates than comparable mailed surveys. Several factors have been found to increase response rates, including personalized email cover letters, follow-up reminders, pre-notification of the intent to survey, and simpler formats. A variety of software tools are now available for conducting Internet surveys, and they are becoming increasingly sophisticated and easy to use. While there is a need for caution, the use of Web-based surveying is clearly going to grow.
Web-Based Surveys
Dillman states that one of the three most significant advances in survey technology in the twentieth century is the electronic survey; the other two are the telephone and random sampling. With such impact potential, Extension professionals should learn more about Web-based surveys. This article shares major advantages and disadvantages of Web-based surveys. It lists design guidelines, as well as tips for conducting Web-based surveys. A comparison of the expenses of a traditional mail-based survey versus a Web-based survey is made in today's dollars. Finally, this article shares examples of the administration of three Web-based surveys.
Methods for Evaluating Respondent Attrition in Web-Based Surveys
Background: Electronic surveys are convenient, cost-effective, and increasingly popular tools for collecting information. While the online platform allows researchers to recruit and enroll more participants, there is an increased risk of participant dropout in Web-based research. Often, these dropout trends are simply reported, adjusted for, or ignored altogether.
Objective: To propose a conceptual framework that analyzes respondent attrition and demonstrates the utility of these methods with existing survey data.
Methods: First, we suggest visualization of attrition trends using bar charts and survival curves. Next, we propose a generalized linear mixed model (GLMM) to detect or confirm significant attrition points. Finally, we suggest applications of existing statistical methods to investigate the effect of internal survey characteristics and patient characteristics on dropout. To apply this framework, we conducted a case study using a seventeen-item Informed Decision-Making (IDM) module addressing how and why patients make decisions about cancer screening.
Results: Using the framework, we were able to find significant attrition points at Questions 4, 6, 7, and 9, and were also able to identify participant responses and characteristics associated with dropout at these points and overall.
Conclusions: When these methods were applied to survey data, significant attrition trends were revealed, both visually and empirically, that can inspire researchers to investigate the factors associated with survey dropout, address whether survey completion is associated with health outcomes, and compare attrition patterns between groups. The framework can be used to extract information beyond simple responses, can be useful during survey development, and can help determine the external validity of survey results.
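The first step of the proposed framework (tabulating and visualizing how many respondents remain active at each item) can be sketched in a few lines. This is a minimal illustration on simulated data, not the authors' code: the response matrix, item count, and dropout pattern are all hypothetical, with NaN marking items a respondent never reached.

```python
import numpy as np
import pandas as pd

# Hypothetical response matrix: rows = respondents, columns = survey items.
# NaN marks an unanswered item; once a respondent drops out, all later
# items are NaN (monotone attrition), mimicking mid-survey abandonment.
rng = np.random.default_rng(0)
n_items, n_resp = 10, 200
dropout_at = rng.integers(3, n_items + 1, size=n_resp)  # item index where each respondent stops
data = np.where(np.arange(n_items) < dropout_at[:, None], 1.0, np.nan)
df = pd.DataFrame(data, columns=[f"Q{i+1}" for i in range(n_items)])

# Tabulate how many respondents are still active at each item, and the
# conditional dropout rate between consecutive items; plotting `active`
# as a bar chart gives the attrition picture the framework starts from.
still_active = df.notna().sum()
cond_dropout = 1 - still_active / still_active.shift(1)
summary = pd.DataFrame({"active": still_active,
                        "dropout_rate": cond_dropout.fillna(0)})
print(summary)
```

Spikes in `dropout_rate` flag candidate attrition points, which the paper then tests formally with a GLMM.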
Subject Acquisition for Web-Based Surveys
This article provides a basic report about subject recruitment processes for Web-based surveys. Using data from our ongoing Internet Survey of American Opinion project, two different recruitment techniques (banner advertisement and subscription campaigns) are compared. This comparison, together with a typology of Web-based surveys, provides insight into the validity and generalizability of Internet survey data. The results from this analysis show that, although Internet survey respondents differ demographically from the American population, the relationships among variables are similar across recruitment methods and match those implied by substantive theory. Thus, our research documents the basic methodology of subject acquisition for Web-based surveys, which, as we argue in our conclusion, may soon become the survey interview mode of choice for social scientists
Factors Contributing to Participation in Web-based Surveys among Italian University Graduates
An established yearly survey aimed at monitoring the employment opportunities of Italian graduates, traditionally carried out with CATI methods, has been integrated during the last few years with CAWI. CAWI has become increasingly crucial due to the high number of graduates involved in the survey, which has mandated a reduction in fieldwork duration and unit costs. Although the seven CAWI surveys used here have different substantive and methodological characteristics, preliminary analysis reveals a common trend: the highest participation is observed during the first few days immediately following the initiation of fieldwork and, to a lesser degree, the delivery of follow-up reminders. Web respondents comprise a self-selected subgroup of the target population, having better academic performance and greater computer skills. A Cox regression model estimating response probability (or response time) shows, besides the obvious effects of certain personal and survey design characteristics, that faster response times are observed for graduates in science or engineering who report good computer skills, whereas the fields of medicine/health and defence/security and a lack of computer skills give rise to lower response probability. Ways to use these findings for fine-tuning data collection are discussed.
Keywords: CAWI surveys, response rate, university graduates, Cox regression
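The participation pattern described above (an initial burst after fieldwork begins, a smaller bump after a reminder, then a long tail) is easy to visualize from daily response counts. A minimal sketch on illustrative numbers (these are not the paper's data; the day-20 window, decay rates, and day-10 reminder are assumptions):

```python
import numpy as np
import pandas as pd

# Illustrative daily response counts over a 20-day CAWI fieldwork period:
# a burst in the first days, a smaller bump after a reminder on day 10,
# then a long tail. All numbers are hypothetical.
rng = np.random.default_rng(42)
days = np.arange(1, 21)
base = 400 * np.exp(-0.5 * (days - 1))                            # initial burst, decaying
reminder = 120 * np.exp(-0.5 * np.clip(days - 10, 0, None)) * (days >= 10)
daily = np.rint(base + reminder).astype(int)

participation = pd.DataFrame({"day": days, "responses": daily})
participation["cumulative"] = participation["responses"].cumsum()
participation["cum_rate"] = (participation["cumulative"]
                             / participation["cumulative"].iloc[-1])
print(participation.head(12))
```

In practice the response-time column of such a table would be the duration variable fed to a Cox model, with graduate characteristics as covariates.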
Testing for the survey mode effect on contingent valuation data quality: A case study of web based versus in-person interviews
This paper addresses the lack of empirical evaluation of the use of web-based surveys in the context of contingent valuation surveys. We compare, using a case study, in-person interviews with web-based surveys regarding response rate, information additivity effects, and respondents' attitudes towards paying. The web-based survey had a much lower response rate (5.1%) than the in-person interviewing (84%). We find the web-based contingent valuation surveys to be neither more susceptible to information additivity effects nor more prone to zero protest responses. We conclude in favor of the use of web-based surveys, namely in Portugal, where the number of Internet users is rapidly increasing, although further research efforts are required on their use.
http://www.sciencedirect.com/science/article/B6VDY-4NBRFTR-1/1/8fa454b56154abf6569359519de2610
Design Effects in the Transition to Web-Based Surveys
Innovation within survey modes should always be tempered by concerns about survey quality, in particular sampling, coverage, nonresponse, and measurement error. This is as true today with the development of web surveying as it was in the 1970s when telephone surveying was being developed. This paper focuses on measurement error in web surveys. Although Internet technology provides significant opportunities for innovation in survey design, systematic research has yet to be conducted on how most of the possible innovations might affect measurement error, leaving many survey designers “out in the cold.” This paper summarizes recent research to provide an overview of how choosing the web mode affects the asking and answering of questions. It starts with examples of how question formats used in other survey modes perform differently in the web mode. It then provides examples of how the visual design of web surveys can influence answers in unexpected ways and how researchers can strategically use visual design to get respondents to provide their answers in a desired format. Finally, the paper concludes with suggested guidelines for web survey design.
Exploring Slider vs. Categorical Response Formats in Web-Based Surveys
Web-based surveys have become a common mode of data collection for researchers in many fields, but there are many methodological questions that need to be answered. This article examines one such question: do the use of sliders to express numerical amounts and the use of the more traditional radio-button scales give the same, or different, measurements? First, we review the central debates surrounding the use of slider scales, including advantages and disadvantages. Second, we report findings from a controlled simple randomized design field experiment using a sample of business managers in Italy to compare the two response formats. Measures of topic sensitivity, topic interest, and likelihood of participation were obtained. No statistically significant differences were found between the response formats. The article concludes with suggestions for researchers who wish to use slider scales as a measurement device.
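The comparison at the heart of this abstract (randomly assign respondents to two response formats, then test whether the measurements differ) can be sketched with a permutation test on simulated data. This is an illustrative replication under the paper's no-difference finding, not the authors' analysis: group sizes, the 0-100 rating scale, and both distributions are assumptions.

```python
import numpy as np

# Hypothetical randomized comparison: respondents rate topic interest on a
# 0-100 scale using either a slider or a radio-button format. Since the
# paper found no format effect, both groups are drawn from the same
# (illustrative) distribution.
rng = np.random.default_rng(7)
slider = np.clip(rng.normal(62, 18, 150), 0, 100)   # slider-format ratings
radio = np.clip(rng.normal(62, 18, 150), 0, 100)    # radio-button ratings

observed = slider.mean() - radio.mean()
pooled = np.concatenate([slider, radio])

# Permutation test: reshuffle format labels and recompute the mean
# difference to build the null distribution of no format effect.
n_perm = 5000
diffs = np.empty(n_perm)
for i in range(n_perm):
    shuffled = rng.permutation(pooled)
    diffs[i] = shuffled[:150].mean() - shuffled[150:].mean()

p_value = np.mean(np.abs(diffs) >= abs(observed))
print(f"observed diff = {observed:.2f}, permutation p = {p_value:.3f}")
```

A permutation test is a natural fit for a randomized design because it relies on the random assignment itself rather than distributional assumptions.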