
    Approaches to dealing with survey errors in online panel research

    Survey research is a relatively young field, and online surveys, including online panel surveys, are now routinely used for collecting survey data. We distinguish between different types of online panels; this thesis focuses on both probability-based and nonprobability-based general population panels. Increasing the quality of online panels in an era of nonresponse requires more methodological research, and that is the focus of this thesis. To investigate approaches to dealing with survey errors, the Total Survey Error paradigm is applied as a conceptual framework, and both errors of representation and errors of measurement are the subject of this research. One contribution of this thesis is a review and discussion of the range of data sources and methodologies that can be used in the study of survey errors. The other theoretical and practical contributions, presented in three groups, relate to individual types of survey errors in online panel research.

    First, worldwide probability-based online panels are identified, and their methodological approaches to recruitment and data collection are reviewed and compared as part of a meta-analysis. The study shows high levels of heterogeneity in both recruitment rates and recruitment solutions, and explains the variability of recruitment rates. The other studies on errors of representation present evidence on how online panel paradata can be effectively transformed and used to identify about three in four nonrespondents in a subsequent panel wave, and answer the question of why people participate in online panel surveys, presenting evidence on how social-psychological theories can explain survey participation in a longitudinal design.

    Second, two studies focus on measurement error in probability-based online panel research due to mixing modes. The study on measurement mode effects shows how measurement error arises when measurement equivalence between modes is lacking, and presents evidence that matching methods (such as coarsened exact matching, sketched below) quite effectively control for self-selection bias due to the non-random assignment of online panellists to modes. The study on individual-level measurement mode effects presents a newly identified source of measurement error in online panel surveys: panel measurement mode effects. It also conceptualizes and demonstrates how panel conditioning can be a factor in two measurement aspects. These results are then related to a trade-off between representation (undercoverage) bias and measurement bias.

    Third, the thesis studies two cost- and time-efficient approaches to online data collection: nonprobability online panels and a fairly new combination of random digit dialing, text message invitations, and web-push methodology. The study on nonprobability panels, which are generally considered less accurate but cheaper than probability-based panels, investigates post-survey adjustment methodology to improve inference in nonprobability samples, and presents evidence on how accuracy can be improved under different external data access scenarios. The study on the new approach to online survey data collection shows very low response rates and outlines effective solutions for increasing response (such as advance SMS notifications and reminders). It also presents evidence of the fairly high accuracy of the proposed approach, which appears feasible for continuing recruitment to a probability-based online panel.

    In the final section of the thesis, the cost dimension of online survey research is discussed, the requirement of collecting data from the offline population in probability-based online panel research is challenged from different perspectives, and the theoretical contributions of this research are explained in more detail.
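
A minimal sketch of the coarsened exact matching step mentioned above may clarify how such matching controls for non-random mode assignment: covariates are coarsened into bins, respondents are exact-matched on the binned signature, unmatched strata are dropped, and one group is reweighted to the other. The covariates, cut points, and toy data below are hypothetical assumptions, not the thesis's actual specification; only numpy and pandas are used.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)
    n = 1000
    df = pd.DataFrame({
        "web": rng.integers(0, 2, size=n),           # 1 = web mode, 0 = face-to-face
        "age": rng.integers(18, 90, size=n),
        "education": rng.choice(["low", "mid", "high"], size=n),
        "female": rng.integers(0, 2, size=n),
    })

    # 1. Coarsen: collapse continuous covariates into a few coarse bins.
    df["age_bin"] = pd.cut(df["age"], bins=[17, 29, 44, 64, 90])

    # 2. Exact-match on the coarsened signature: keep only strata that
    #    contain panellists from both modes.
    df["stratum"] = df[["age_bin", "education", "female"]].astype(str).agg("|".join, axis=1)
    per_stratum = df.groupby("stratum")["web"].agg(["sum", "count"])
    common = per_stratum[(per_stratum["sum"] > 0) & (per_stratum["sum"] < per_stratum["count"])].index
    matched = df[df["stratum"].isin(common)].copy()

    # 3. Standard CEM weights: web cases keep weight 1; face-to-face cases
    #    are reweighted so each stratum mirrors the web group's distribution.
    n_web = matched["web"].sum()
    n_f2f = len(matched) - n_web
    s_web = matched.groupby("stratum")["web"].transform("sum")
    s_f2f = matched.groupby("stratum")["web"].transform("count") - s_web
    matched["cem_weight"] = np.where(matched["web"] == 1, 1.0, (s_web / s_f2f) * (n_f2f / n_web))

    # Mode comparisons would then be run on `matched`, weighting by `cem_weight`.
    print(matched[["stratum", "web", "cem_weight"]].head())

Dropping strata without respondents from both modes is what makes the comparison well-balanced on the coarsened covariates; the weights then correct for unequal stratum sizes between the two mode groups.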

    Massive open online courses and completion rates: does academic readiness and its factors influence completion rates in MOOCs?

    Get PDF
    With the increase in the cost of education and a flat employment rate, many institutions and students are looking to online learning to solve this academic dilemma. Online education is thought to be a low-cost alternative to brick-and-mortar courses. The goals of Massive Open Online Courses (MOOCs) include addressing issues of equity in higher education, the rising costs of a college education, and funding concerns. MOOCs can be taken from anywhere, as long as the participant has a computer and access to the Internet. Also, traditional MOOCs do not require any financial commitment and have no academic prerequisites or admissions process. Completion rates among learners taking MOOCs are low, raising the question of whether they actually address matters of escalating college costs and higher education equity. The purpose of this study is to explore whether academic readiness influences the likelihood of a learner completing the course. This study focuses on one component of the many factors in MOOCs: the relationship between academic readiness and the likelihood of course completion. Academic readiness in MOOCs is not a requirement, but a component that may determine whether a learner has the tools needed to complete a MOOC. Academic readiness suggests a level of knowledge and cognitive ability necessary to understand the course content and to navigate the course technologically. Theories addressing structural elements within MOOCs include Clow's funnel of participation, behaviorism, and constructivism. Of these theories, constructivism provides the theoretical framework for understanding learners' abilities and willingness to learn in this study. This quantitative study attempts to evaluate the likelihood of course completion and the factors that may influence this outcome using secondary data from Duke's MOOC pre- and post-course surveys. Logistic regression analysis with the dependent variable (whether a learner completes a Duke MOOC) and the independent variables (academic readiness and its factors: college degree, age, race, gender, previous experience with the course subject, course level (beginner, intermediate, or advanced), and STEM or non-STEM) will be used to estimate the likelihood that these variables encourage learners to complete MOOCs, or to understand why learners do not.
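
The regression described above lends itself to a compact specification. Below is a minimal sketch of such a logistic regression in Python with statsmodels; the column names and toy data are hypothetical stand-ins, since the Duke survey data are not reproduced here, and the category labels are placeholders.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 2000
    df = pd.DataFrame({
        "completed": rng.integers(0, 2, size=n),        # 1 = learner finished the MOOC
        "college_degree": rng.integers(0, 2, size=n),   # 1 = holds a college degree
        "age": rng.integers(16, 80, size=n),
        "race": rng.choice(["group_a", "group_b", "group_c"], size=n),
        "gender": rng.choice(["female", "male", "other"], size=n),
        "prior_experience": rng.integers(0, 2, size=n), # 1 = prior subject experience
        "course_level": rng.choice(["beginner", "intermediate", "advanced"], size=n),
        "stem": rng.integers(0, 2, size=n),             # 1 = STEM course
    })

    # Log-odds of completion as a linear function of the readiness factors;
    # C(...) marks categorical predictors that are dummy-coded automatically.
    model = smf.logit(
        "completed ~ college_degree + age + C(race) + C(gender)"
        " + prior_experience + C(course_level) + stem",
        data=df,
    ).fit()
    print(model.summary())

    # Odds ratios: exp(coefficient) above 1 means higher odds of completion.
    print(np.exp(model.params))

Reporting odds ratios rather than raw coefficients makes the results easier to interpret for the study's question: a ratio above 1 for a readiness factor would indicate an association with higher odds of completing the MOOC.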