    Onlinebefragungen auf mobilen Endgeräten: Potentiale und Herausforderungen

    The central advantage of online surveys on mobile devices (tablets, smartphones) is their ubiquity and their technological potential. From a survey methodology perspective, however, such surveys pose a challenge, and the consequences for data quality cannot be ignored.

    Port: A software tool for digital data donation

    Recently, a new workflow has been introduced that allows academic researchers to partner with individuals interested in donating their digital trace data for academic research purposes (Boeschoten, Ausloos, et al., 2022). In this workflow, the digital traces of participants are processed locally on their own devices in such a way that only the subset of participants' digital trace data that is of legitimate interest to a research project is shared with the researcher, and only after the participant has provided informed consent.

    The data donation workflow consists of the following steps. First, the participant requests a digital copy of their personal data, i.e., their Data Download Package (DDP), from the platform of interest, such as Google, Meta, Twitter, or another digital platform. Platforms, as data controllers, are required under the European Union's General Data Protection Regulation (GDPR) to share such a digital copy with each person requesting one. Second, the participant downloads the DDP onto their personal device. Third, by means of local processing, only the data points of interest to the researcher are extracted from the DDP. Fourth, the participant inspects the extracted data points and can then consent to donate. Only after this consent has been given is the donated data sent to a storage location, where the researcher can access it for further analysis.

    In this paper, we introduce Port, a software tool that allows researchers to configure the local processing step of the data donation workflow, so that they collect exactly the digital traces needed to answer their research question. When using Port, a researcher can decide:
    • which digital platforms are investigated;
    • which digital traces are collected;
    • how the extracted digital traces are visually presented to the participant;
    • what is communicated to the participant.
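
    To make the local processing step concrete, the following is a minimal, self-contained sketch of the kind of extraction script the workflow describes. It is not Port's actual API; the DDP file name and JSON fields are invented for illustration.

```python
# Illustrative sketch only: NOT Port's actual API, just a stand-alone example
# of the "local processing" step the abstract describes. The file name
# "watch-history.json" and the field names are hypothetical.
import json
import zipfile

def extract_from_ddp(ddp_path: str) -> list[dict]:
    """Extract only the data points of interest from a Data Download Package."""
    with zipfile.ZipFile(ddp_path) as ddp:
        with ddp.open("watch-history.json") as f:  # hypothetical file in the DDP
            records = json.load(f)
    # Keep only the minimal fields needed for the research question,
    # dropping everything else before anything leaves the device.
    return [{"timestamp": r["time"]} for r in records if "time" in r]

if __name__ == "__main__":
    extracted = extract_from_ddp("takeout.zip")  # hypothetical DDP file
    # The participant inspects the extracted data points before consenting.
    print(f"{len(extracted)} data points would be shown for consent:")
    for row in extracted[:5]:
        print(row)
```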

    Comparative study of thermostability and structure of close homologues - barnase and binase

    Parameters of heat denaturation and intrinsic fluorescence of barnase and its close homologue binase have been determined in the pH region 2–6. Barnase heat denaturation (pH 2.8–5.5) proceeds according to the “all-or-none” principle. The barnase denaturation temperature is lower than that of binase, and this difference increases from 2.5 °C at pH 5 to 7 °C at pH 3. Enthalpy values of barnase and binase denaturation coincide only at pH 4.5–5.5; as the pH decreases, the barnase denaturation enthalpy decreases significantly, and in this respect it differs from binase. The fluorescence and CD techniques do not reveal any distinctions in the local environment of aromatic residues in the two proteins, and the observed difference in the parameters of intrinsic fluorescence is due to quenching of the barnase Trp94 fluorescence by the His18 residue, which is absent in binase. Secondary structures of the native and denatured proteins also do not differ. Some differences in the barnase and binase electrostatic characteristics, revealed in the character of the dipole moment distributions, have been found. © 1993 Taylor & Francis Ltd.

    A review of conceptual approaches and empirical evidence on probability and nonprobability sample survey research

    There is an ongoing debate in the survey research literature about whether and when probability and nonprobability sample surveys produce accurate estimates of a larger population. Statistical theory provides a justification for confidence in probability sampling as a function of the survey design, whereas inferences based on nonprobability sampling are entirely dependent on models for validity. This article reviews the current debate about probability and nonprobability sample surveys. We describe the conditions under which nonprobability sample surveys may provide accurate results in theory and discuss empirical evidence on which types of samples produce the highest accuracy in practice. From these theoretical and empirical considerations, we derive best-practice recommendations and outline paths for future research.
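
    To illustrate the design-based justification the review refers to, the sketch below applies Horvitz-Thompson logic to a synthetic population: known inclusion probabilities from the sampling design yield an approximately unbiased estimate, while the naive sample mean is biased under unequal selection, which is the risk an unmodeled nonprobability sample runs. All numbers are synthetic assumptions, not results from the article.

```python
# Minimal sketch of the design-based reasoning the review refers to: with a
# probability sample, each unit's known inclusion probability pi_i yields the
# Horvitz-Thompson estimator. Data are synthetic; nothing here comes from the
# article itself.
import random

random.seed(42)
population = [random.gauss(50, 10) for _ in range(10_000)]  # true values
true_mean = sum(population) / len(population)

# Unequal-probability design: every unit has a known inclusion probability.
incl_prob = [0.01 if y < 50 else 0.03 for y in population]
sample = [(y, p) for y, p in zip(population, incl_prob) if random.random() < p]

# Horvitz-Thompson (Hajek) estimate of the mean: weight each unit by 1/pi_i.
ht_total = sum(y / p for y, p in sample)
ht_n = sum(1 / p for _, p in sample)
print(f"true mean {true_mean:.2f}, HT estimate {ht_total / ht_n:.2f}")

# A naive sample mean ignores the design and is biased here, which is the
# risk nonprobability samples face when the selection model is unknown.
naive = sum(y for y, _ in sample) / len(sample)
print(f"naive sample mean {naive:.2f}")
```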

    Development of an international survey attitude scale: measurement equivalence, reliability, and predictive validity

    Declining response rates worldwide have stimulated interest in understanding what may be influencing this decline and how it varies across countries and survey populations. In this paper, we describe the development and validation of a short 9-item survey attitude scale that measures three constructs thought by many scholars to be related to decisions to participate in surveys: survey enjoyment, survey value, and survey burden. The survey attitude scale is based on a literature review of earlier work by multiple authors. Our overarching goal is to develop and validate a concise and effective measure of how individuals feel about responding to surveys, one that can be implemented in surveys and panels to understand willingness to participate and to improve survey effectiveness. The research questions relate to the factor structure, measurement equivalence, reliability, and predictive validity of the survey attitude scale. The data came from three probability-based panels: the German GESIS and PPSM panels and the Dutch LISS panel. The survey attitude scale proved to have a replicable three-dimensional factor structure (survey enjoyment, survey value, and survey burden). Partial scalar measurement equivalence was established across the three panels, which employed two languages (German and Dutch) and three measurement modes (web, telephone, and paper mail). For all three dimensions of the survey attitude scale, the reliability of the corresponding subscales (enjoyment, value, and burden) was satisfactory. Furthermore, the scales correlated with survey response in the expected directions, indicating predictive validity.
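
    As a concrete illustration of the reliability checks mentioned above, here is a minimal sketch that computes Cronbach's alpha for one subscale. The three-items-per-subscale split and all responses are invented assumptions, not the scale's actual items or data.

```python
# Hedged sketch: a subscale reliability check of the kind reported in the
# abstract, via Cronbach's alpha. The item split and responses are invented.
from statistics import variance

def cronbach_alpha(items: list[list[float]]) -> float:
    """items: one list of respondent scores per item of the subscale."""
    k = len(items)
    item_vars = sum(variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sums
    return k / (k - 1) * (1 - item_vars / variance(totals))

# Hypothetical 5-point responses from six respondents to three enjoyment items.
enjoyment_items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 4, 2, 4, 3, 5],
]
print(f"alpha (enjoyment) = {cronbach_alpha(enjoyment_items):.2f}")
```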

    Data quality in probability-based online panels: Nonresponse, attrition, and panel conditioning

    Online panels – surveys administered over the Internet in which persons are asked to complete surveys regularly – offer cost reductions compared to surveys that use more traditional modes of data collection (face-to-face, telephone, and mail). However, some characteristics of online panels may cause errors that threaten data quality. For example, excluding non-Internet users may result in coverage error; if persons selected for the study cannot be reached or do not want to participate, nonresponse error may result; and study participants may stop participating in later waves (attrition). Furthermore, respondents may learn to answer dishonestly or to answer filter questions negatively in order to reduce the burden of participation.

    The main question of this dissertation is how good the data collected in probability-based online panels are (i.e., panels in which respondents are selected by researchers through statistical procedures of random sampling). The five studies in this dissertation address questions of data quality using data from a probability-based, telephone-recruited online panel of Internet users in Germany. To assess the quality of the final estimates, we compared data from the online panel to data from two high-quality face-to-face reference surveys. We found several differences among the surveys; however, most of these differences amounted to a few percentage points. We took the analysis further by studying mode system effects (i.e., differences in the estimates resulting from the whole process by which they were collected). We found that the online panel and the two reference surveys differed in attitudinal measures. For factual questions, however, the reference surveys differed from the online panel but not from each other. Our overall conclusion is that the data from the online panel are fairly comparable to data from high-quality face-to-face surveys.

    This dissertation also examined the processes that can cause errors in data collected in the online panel. We found that participation in the panel is selective: previous experience with the Internet and online surveys predicted willingness to participate and actual participation in the panel. Incentives and the fieldwork agencies that performed the recruitment also influenced the decision to participate. To study why panel members chose to discontinue participation, we contrasted the role of incentives and non-reward motivation. We found that respondents who viewed surveys as long, difficult, or too personal were more likely to attrite, and that incentives (although negatively related to attrition) did not compensate for this burdensome experience. To find out whether respondents who had been in the panel longer would answer differently than respondents with a shorter panel tenure, we conducted two experiments. We found limited evidence of advantageous learning and no evidence of disadvantageous learning.

    The results of this dissertation provide additional insight into the processes that contribute to the quality of data produced by probability-based online panels. These results can guide researchers who plan to build online panels and may prove useful for existing panels that consider switching to the online mode.
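
    The following is a hedged, synthetic sketch of the kind of attrition analysis described above (attrition regressed on perceived burden and incentive receipt). The variables, coefficients, and data-generating process are invented and are not the dissertation's actual model.

```python
# Synthetic illustration of an attrition analysis: logistic regression of
# attrition on perceived survey burden and incentive receipt. All data and
# effect sizes are assumptions, not the dissertation's results.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2_000
burden = rng.normal(0, 1, n)       # perceived burden (standardized)
incentive = rng.integers(0, 2, n)  # 1 = received an incentive
# Assumed data-generating process: burden raises, incentives lower, attrition.
logit = -1.0 + 0.8 * burden - 0.4 * incentive
attrited = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(np.column_stack([burden, incentive]), attrited)
print("coefficients (burden, incentive):", model.coef_[0].round(2))
```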

    Early and Late Participation during the Field Period: Response Timing in a Mixed-Mode Probability-Based Panel Survey

    Reluctance of respondents to participate in surveys has long drawn the attention of survey researchers. Yet little is known about what drives a respondent's decision to answer the survey invitation early or late in the field period, and we still lack evidence on response timing in longitudinal surveys. That is, the questions of whether response timing is a rather stable respondent characteristic and of what, if anything, affects change in response timing across interviews remain open. We relied on data from a mixed-mode general population panel survey collected between 2014 and 2016 to study the stability of response timing across 18 panel waves and the factors that influence the decision to participate early or late in the field period. Our results suggest that the factors affecting response timing differ between the mail and web modes. Moreover, we found that experience with prior panel waves affected the respondent's decision to participate early or late. Overall, the present study advocates understanding response timing as a metric variable and, consequently, reflecting this in modeling strategies.
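
    As a sketch of what treating response timing as a metric variable could look like in a modeling strategy, the example below fits a linear mixed model to synthetic days-until-response data with a random intercept per respondent. The specification and data are illustrative assumptions, not the study's model.

```python
# Hedged sketch: response timing as a metric outcome across waves, with a
# random intercept capturing a stable respondent tendency. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_resp, n_waves = 300, 18
person = np.repeat(np.arange(n_resp), n_waves)
wave = np.tile(np.arange(1, n_waves + 1), n_resp)
person_effect = rng.normal(0, 2, n_resp)[person]  # stable respondent tendency
days = 5 + 0.1 * wave + person_effect + rng.normal(0, 1, person.size)

df = pd.DataFrame({"person": person, "wave": wave, "days": days})
fit = smf.mixedlm("days ~ wave", df, groups=df["person"]).fit()
print(fit.summary())
```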

    Survey data documentation

    Documentation of research results is an essential process within the research lifecycle, which includes the steps of study planning and developing the survey instruments, data collection and preparation, data analysis, and data archiving. Primary researchers have to ensure that the collected data and all accompanying materials are properly documented and archived, enabling the scientific community to understand and reproduce the results of a scientific project. The purpose of this survey guideline is to provide a brief introduction to and overview of data preparation and data documentation, in order to help primary researchers make their data and other study-related materials accessible in the long term. The overview will therefore help researchers comply with the principles of reproducibility as a crucial aspect of good scientific practice. This guideline will be useful for researchers who are planning a study as well as for those who have already collected data and would like to prepare it for archiving.
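
    One concrete documentation step in the spirit of this guideline is to archive a machine-readable codebook alongside the data. The sketch below writes such a file; all variable names, labels, and file names are invented for illustration.

```python
# Hedged sketch of a documentation step: a machine-readable codebook saved
# next to the data file. Study name, variables, and labels are hypothetical.
import json

codebook = {
    "study": "Example survey, wave 1",
    "variables": [
        {"name": "age", "label": "Age in years", "type": "integer"},
        {
            "name": "enjoy1",
            "label": "I enjoy answering surveys",
            "type": "ordinal",
            "values": {1: "strongly disagree", 5: "strongly agree"},
        },
    ],
}

with open("codebook.json", "w", encoding="utf-8") as f:
    json.dump(codebook, f, indent=2, ensure_ascii=False)
```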