
    A threshold for a q-sorting methodology for computer-adaptive surveys

    © 2017 Proceedings of the 25th European Conference on Information Systems, ECIS 2017. All rights reserved. Computer-Adaptive Surveys (CAS) are multi-dimensional instruments where the questions asked of respondents depend on the previous questions asked. Due to the complexity of CAS, little work has been done on developing methods for validating their content and construct validity. We have created a new q-sorting technique in which the hierarchies that independent raters develop are transformed into a quantitative form, and that quantitative form is tested to determine the inter-rater reliability of the individual branches in the hierarchy. The hierarchies are then successively transformed to test whether they branch in the same way. The objective of this paper is to identify suitable measures and a “good enough” threshold for demonstrating the similarity of two CAS trees. To find suitable measures, we perform a set of bootstrap simulations to measure how various statistics change as a hypothetical CAS deviates from a “true” version. We find that three measures of association (Goodman and Kruskal's Lambda, Cohen's Kappa, and Goodman and Kruskal's Gamma) together provide information useful for assessing construct validity in CAS. In future work we are interested both in finding one or more “good enough” thresholds for assessing the overall similarity between tree hierarchies and in diagnosing causes of disagreements between them.
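
    The abstract above names three measures of association but not how they are computed. As an illustration only, the Python sketch below computes Goodman and Kruskal's Lambda, Cohen's Kappa, and Goodman and Kruskal's Gamma for two hypothetical raters who assign the same ten items to branches of a CAS hierarchy; the rater data and branch codes are invented, and this is not the authors' validation code.

        # Illustrative only: two hypothetical raters assigning ten items to CAS branches,
        # coded as ordinal branch labels (1 = first branch ... 3 = third branch).
        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        rater_a = np.array([1, 1, 2, 2, 3, 3, 1, 2, 3, 2])
        rater_b = np.array([1, 2, 2, 2, 3, 3, 1, 2, 3, 1])

        def goodman_kruskal_lambda(x, y):
            """Proportional reduction in error when predicting y from x."""
            table = np.zeros((x.max() + 1, y.max() + 1))
            for i, j in zip(x, y):
                table[i, j] += 1
            e1 = table.sum() - table.sum(axis=0).max()  # prediction errors ignoring x
            e2 = table.sum() - table.max(axis=1).sum()  # prediction errors using x
            return (e1 - e2) / e1 if e1 else 0.0

        def goodman_kruskal_gamma(x, y):
            """(concordant - discordant) / (concordant + discordant) over item pairs."""
            conc = disc = 0
            for i in range(len(x)):
                for j in range(i + 1, len(x)):
                    s = (x[i] - x[j]) * (y[i] - y[j])
                    conc += s > 0
                    disc += s < 0
            return (conc - disc) / (conc + disc) if (conc + disc) else 0.0

        print("lambda:", goodman_kruskal_lambda(rater_a, rater_b))
        print("kappa :", cohen_kappa_score(rater_a, rater_b))
        print("gamma :", goodman_kruskal_gamma(rater_a, rater_b))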

    A Q-sorting methodology for Computer-Adaptive Surveys - Style "Research"

    Computer-Adaptive Surveys (CAS) are multi-dimensional instruments where the questions asked of respondents depend on the previous questions asked. Due to the complexity of CAS, little work has been done on developing methods for validating their construct validity. This paper describes the process of using a variant of Q-sorting to validate a CAS item bank. The method and preliminary results are presented, and lessons learned from this study are discussed.

    A test of a computer-adaptive survey using online reviews

    © 26th European Conference on Information Systems: Beyond Digitization - Facets of Socio-Technical Change, ECIS 2018. All Rights Reserved. Traditional surveys are excellent instruments for establishing the correlational relationship between two constructs. However, they are unable to identify the reasons why such correlations exist. Computer-Adaptive Surveys (CAS) are multi-dimensional instruments where the questions asked of respondents depend on the previous questions asked. Assessing the validity of CAS is an underexplored research area; because CAS differ from traditional surveys, validating a CAS requires different techniques. This study attempts to validate the conclusion validity of a CAS about café customer satisfaction using online customer reviews. For our CAS to have conclusion validity, there should be a high correspondence in which most CAS respondents and online reviewers agree that the same constructs are the cause of their dissatisfaction. We created a CAS of café satisfaction and used online customer reviews to assess its conclusion validity. Our research thus contributes to the measurement literature in two ways: first, we demonstrate that the CAS captures the same criticisms of cafés as those found in online reviews, and second, the CAS captures problems with customer satisfaction at a deeper level than that found in online reviews.
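
    The conclusion-validity check described above rests on comparing which constructs CAS respondents and online reviewers blame for dissatisfaction. The Python sketch below shows one hedged way such a correspondence could be quantified, using invented construct names and counts and a simple rank correlation; the paper's actual analysis may differ.

        # Illustrative only: hypothetical counts of constructs blamed for dissatisfaction
        # by CAS respondents versus criticised in online reviews.
        from scipy.stats import spearmanr

        constructs = ["service speed", "staff friendliness", "coffee quality", "price", "ambience"]
        cas_counts = [41, 12, 33, 18, 7]      # CAS respondents who drilled down to this construct
        review_counts = [55, 15, 40, 30, 9]   # online reviews that criticise this construct

        rho, p = spearmanr(cas_counts, review_counts)
        print(f"rank correspondence between CAS and reviews: rho={rho:.2f}, p={p:.3f}")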

    Computer-Adaptive Surveys (CAS) as a Means of Answering Questions of Why

    Traditional surveys are excellent instruments for establishing the correlational relationship between two constructs. However, they are unable to identify the reasons why such correlations exist. Computer-Adaptive Surveys (CAS) are multi-dimensional instruments where the questions asked of respondents depend on the previous questions asked. Their principal advantage is that they allow the survey developer to input a large number of potential causes. Respondents then drill down through the causes to identify the one or few significant causes impacting a correlation. This study compared a café satisfaction CAS to a traditional survey built from the same item bank to test whether CAS performs its intended task better than a traditional survey. Our study demonstrates that when one is trying to find a root cause, CAS achieves a higher response rate, requires respondents to answer fewer items, has better item discrimination, and has higher agreement among respondents for each item.
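
    The abstract describes respondents drilling down through a bank of potential causes. The Python sketch below illustrates that branching idea with a toy, invented cause hierarchy and simple yes/no prompts; a real CAS selects branches from scored item responses rather than console input.

        # Illustrative only: a minimal drill-down over a hypothetical cause hierarchy.
        tree = {
            "Were you dissatisfied with your visit?": {
                "Was the problem the food or drink?": {
                    "Was your coffee poorly made?": {},
                    "Was the food stale?": {},
                },
                "Was the problem the service?": {
                    "Did you wait too long to be served?": {},
                    "Was the staff unfriendly?": {},
                },
            }
        }

        def drill_down(node):
            """Descend only into branches the respondent endorses; return the leaf causes."""
            causes = []
            for question, children in node.items():
                if input(f"{question} (y/n) ").strip().lower() == "y":
                    causes += drill_down(children) if children else [question]
            return causes

        if __name__ == "__main__":
            print("Root causes identified:", drill_down(tree))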

    DeepMarks: A Digital Fingerprinting Framework for Deep Neural Networks

    This paper proposes DeepMarks, a novel end-to-end framework for systematic fingerprinting in the context of Deep Learning (DL). Remarkable progress has been made in the area of deep learning, and sharing trained DL models has become ubiquitous in fields ranging from biomedical diagnosis to stock prediction. As the availability and popularity of pre-trained models increase, it is critical to protect the Intellectual Property (IP) of the model owner. DeepMarks introduces the first fingerprinting methodology that enables the model owner to embed unique fingerprints within the parameters (weights) of her model and later identify undesired usages of her distributed models. The proposed framework embeds the fingerprints in the Probability Density Function (pdf) of the trainable weights by leveraging the extra capacity available in contemporary DL models. DeepMarks is robust against fingerprint collusion as well as network transformation attacks, including model compression and model fine-tuning. Extensive proof-of-concept evaluations on the MNIST and CIFAR10 datasets, as well as a wide variety of deep neural network architectures such as Wide Residual Networks (WRNs) and Convolutional Neural Networks (CNNs), corroborate the effectiveness and robustness of the DeepMarks framework.
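
    DeepMarks embeds fingerprints in the distribution of trainable weights; the exact construction, including its collusion-resistant code vectors, is in the paper. The Python sketch below shows only the general regularizer-style idea of weight-based fingerprinting, with an invented projection matrix, fingerprint, and weight vector, and gradient steps taken on the embedding loss alone.

        # Illustrative only: a regularizer-style fingerprint embedding, in the spirit of
        # weight-based fingerprinting. X, b, and w below are all hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        w = rng.normal(size=256)           # flattened weights of one hypothetical layer
        b = rng.integers(0, 2, size=32)    # the owner's 32-bit fingerprint
        X = rng.normal(size=(32, 256))     # secret projection matrix kept by the owner

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # Cross-entropy pushes sigmoid(X @ w) toward the fingerprint bits; during real
        # training this term would be added to the task loss so accuracy is preserved.
        for _ in range(200):
            p = sigmoid(X @ w)
            w -= 0.5 * (X.T @ (p - b)) / len(b)   # gradient step on the embedding loss

        extracted = (sigmoid(X @ w) > 0.5).astype(int)
        print("fingerprint bit error rate:", np.mean(extracted != b))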