Web Versus Other Survey Modes: An Updated and Extended Meta-Analysis Comparing Response Rates
Do web surveys still yield lower response rates than other survey modes? To answer this question, we replicated and extended a meta-analysis done in 2008, which found that, based on 45 experimental comparisons, web surveys had a response rate 11 percentage points lower than other survey modes. Fundamental changes in internet accessibility and use since the publication of the original meta-analysis would suggest that people's propensity to participate in web surveys has changed considerably in the meantime. However, in our replication and extension study, which comprised 114 experimental comparisons between web and other survey modes, we found almost no change: web surveys still yielded lower response rates than other modes (a difference of 12 percentage points). Furthermore, we found that prenotifications, the sample recruitment strategy, the survey's solicitation mode, the type of target population, the number of contact attempts, and the country in which the survey was conducted moderated the magnitude of the response rate differences. These findings have substantial implications for web survey methodology and operations.
Mobile phones in an environment of competing survey modes
In recent years, mobile phones have become an increasingly important component of survey data collection. This holds true for self-administered questionnaires and particularly for interview surveys, where mobile phones force a combination with other survey modes. However, whether to include mobile phones in a particular survey design depends on complex cost-error relationships. To address this issue, the authors elaborate a metric, based on the product of costs and estimates of survey errors, that is then used for post-survey comparison of design alternatives. The authors illustrate this approach with a simulation study using parameters from empirical research. The results show that such an evaluation can potentially change the selection of the preferred design option compared to situations where only some of the components (e.g., response rate, nonresponse bias) are used for evaluation. More specifically, the decision about the inclusion of mobile phones predominantly depends on their bias-removing potential, while it is much less sensitive to changes in costs and other error parameters.
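The cost-error logic described above can be sketched as follows. This is an illustrative reconstruction, not the authors' exact metric: it ranks hypothetical design alternatives by the product of total cost and estimated mean squared error, and all parameter values are invented for the example.

```python
# Hedged sketch of a cost-error product metric for comparing survey designs.
# The specific formula (cost x MSE, with MSE = bias^2 + variance/n) and all
# parameter values are assumptions for illustration, not taken from the paper.

def cost_error_product(cost_per_unit: float, n: int,
                       bias: float, variance: float) -> float:
    """Total cost multiplied by the estimated mean squared error."""
    mse = bias ** 2 + variance / n
    return cost_per_unit * n * mse

# Two hypothetical designs: a fixed-line-only frame with larger nonresponse
# bias, versus a dual-frame design including mobile phones at a higher
# per-unit cost but with most of that bias removed.
designs = {
    "fixed_line_only": cost_error_product(cost_per_unit=5.0, n=1000,
                                          bias=0.04, variance=0.25),
    "with_mobile":     cost_error_product(cost_per_unit=7.0, n=1000,
                                          bias=0.01, variance=0.25),
}

# The design with the lower cost-error product is preferred.
best = min(designs, key=designs.get)
```

With these made-up numbers the mobile-inclusive design wins despite its higher cost, mirroring the abstract's point that the decision is driven mainly by the bias-removing potential of mobile phones rather than by cost differences.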
Usage of online panels in survey-methodology field, 2016
This study is a systematic review that aims to assess (i) the characteristics of online panels that are used in survey methodology, (ii) the quality of these online panels, (iii) the characteristics of individual panel studies, and (iv) the usage of online panels as a sample source for research on survey-data quality. The bibliographic database WebSM was used to select empirical studies from survey methodology that relate to online panels. To address the research questions, specific coding categories were defined to describe the characteristics of online panels as well as the purpose of their usage. Depending on the purpose of each study, various panel-data quality aspects were coded for studies on the quality of panels themselves, as well as methodological questions for studies that use panels as sample sources for survey-methodology research.
Survey Design Features Influencing Response Rates in Web Surveys
In this paper we present an overview of several Web surveys. The aim of this research is to study Web survey design characteristics that may influence participation in Web surveys (Lozar Manfreda, 2001; Vehovar et al., 2002). Except for a few studies (whose results we present below), previous research has mainly studied the effect of a single factor, or of a group of related factors, on the response rate, while attempting to hold all other potential factors constant. However, there may be interactions among the factors. In addition, authors rarely define which stage of the Web survey process (Lozar Manfreda, 2001; Vehovar et al., 2002) they refer to, yet different factors may have an impact at different stages. Here, we propose a review of several Web surveys in which distinct stages of the Web survey process are identified. In addition, characteristics of the design of these surveys are studied simultaneously. First, we present previous studies which draw conclusions from reviews of several Web surveys. Then, we present the methodology used for this study, called the WebSM study. We continue by presenting results of the WebSM study, where several outcome rates are modelled separately.
Veljavnost interneta kot anketnega orodja (The Validity of the Internet as a Survey Tool)
Despite the 'non-coverage' problem, the Internet is becoming a complementary and/or alternative survey method. However, its validity, especially its convergent validity, needs to be established. Internet surveying will become equivalent to other survey methods once it is proved that its survey errors are not larger than the survey errors occurring with other survey methods. The paper presents some comparisons of Internet surveys with other survey methods and discusses their survey errors. Particular emphasis is placed on a comparison of a Web and a mail survey, conducted as part of the RIS project at the Faculty of Social Sciences.
Programska orodja za družboslovne ankete na spletu (Software Tools for Social Science Surveys on the Web)
Web surveys represent an important part of the survey industry. In many cases (e.g., surveying special populations with high Internet penetration, such as students, members of an organization, or employees), their use is the most appropriate because of relatively low costs, speed of data collection, and especially ease of implementation. The latter is possible due to the many software tools that support a survey from sample management and sending invitations, through questionnaire design and the actual data collection over the web, to data analysis. No special programming knowledge is needed for their usage. This paper describes the most important functions of such tools and presents an overview of the tools on the market. Our purpose is to help researchers choose the right tool for their goal.
Item nonresponse in web versus other survey modes
Nonresponse is a fundamental issue in survey research, due to the trend of declining response rates across survey modes. This issue is particularly serious for web surveys: a recent meta-analysis found that unit nonresponse in web surveys is, on average, 12 percentage points higher than in other modes (Daikeler et al., 2020). Although unit nonresponse in web and other survey modes has been investigated in several meta-analyses, item nonresponse has not been examined meta-analytically, a gap addressed by this paper. Are web surveys at a disadvantage compared to other survey modes in terms of item nonresponse as well? To address this question, 13 eligible experimental manuscripts reporting 23 effect sizes were identified in a comprehensive literature search. Meta-analytic findings showed no statistically significant difference in the average item nonresponse rate between web and other survey modes. However, six moderator variables were found to statistically significantly affect the relation between the survey mode and the item nonresponse rate: the target population, the number of contacts, the mode to which the web survey mode was compared, the survey sponsor, the age of the survey, and the baseline item nonresponse rate of the compared mode. The main practical implication is that, while web and other survey modes differ in unit nonresponse (on average), item nonresponse rates in web surveys are similar to those in other modes.