
    Continuous monitoring of the isotopic composition of surface water vapor at Lhasa, southern Tibetan Plateau

    The stable isotopes (δ18O and δD) of water vapor are used to characterize continuous variations in large-scale and boundary-layer atmospheric processes. We present continuous measurements of δ18O in surface water vapor at Lhasa, on the southern Tibetan Plateau, from October 2018 to September 2019 to investigate how large-scale and local atmospheric processes influence variations in water vapor δ18O at different time scales. The water vapor δ18O measurements reveal distinct seasonal characteristics and diurnal patterns. At the seasonal scale, δ18O exhibits a W shape, with two maxima in May–June and October and two minima in July–August and February. The diurnal variations in water vapor δ18O and the meteorological data throughout the year show maxima and minima occurring at distinct times in different periods. We found that the significant seasonal variability is mainly associated with the transition between the Indian summer monsoon and the westerlies, which transport distinct moisture to the southern Tibetan Plateau. Local temperature, specific humidity, and boundary layer height influence the diurnal variations in water vapor δ18O to some extent, with marked seasonal differences.
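
    For readers unfamiliar with the notation, the δ values above follow the standard delta convention: an isotope ratio expressed in per mil relative to the VSMOW reference. This definitional note is added for context and is not part of the original abstract:

```latex
\delta^{18}\mathrm{O} = \left(\frac{R_{\text{sample}}}{R_{\text{VSMOW}}} - 1\right) \times 1000\ \text{\textperthousand},
\qquad R = {}^{18}\mathrm{O}/{}^{16}\mathrm{O}
```

    δD is defined analogously from the D/H ratio.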

    Handling Missing Responses in Psychometrics: Methods and Software

    The presence of missing responses in assessment settings is inevitable and may yield biased parameter estimates in psychometric modeling if ignored or handled improperly. Many methods have been proposed to handle missing responses in assessment data, which are often dichotomous or polytomous. Their application remains limited, however, partly because (1) the literature offers insufficient support for any single optimal method; (2) many practitioners and researchers are not familiar with these methods; and (3) psychometric software usually does not implement them, so missing responses need to be handled separately. This article introduces and reviews the missing response handling methods commonly used in psychometrics, along with the literature that examines and compares their performance. Further, the use of the TestDataImputation package in R is introduced and illustrated with an example data set and a simulation study. Corresponding R code is provided. A minimal illustration of one reviewed method appears below.
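
    To make one of the reviewed methods concrete, here is a minimal base-R sketch of two-way (TW) imputation for dichotomous responses. It illustrates the idea only; it does not reproduce the TestDataImputation package interface, and the function name and example matrix are hypothetical.

```r
# Two-way (TW) imputation for a dichotomous response matrix (rows = persons,
# columns = items). A missing cell is replaced by person mean + item mean -
# grand mean, computed over observed responses, then mapped back to 0/1.
# Minimal sketch; not the TestDataImputation package interface.
tw_impute <- function(resp) {
  pm <- rowMeans(resp, na.rm = TRUE)   # person means
  im <- colMeans(resp, na.rm = TRUE)   # item means
  gm <- mean(resp, na.rm = TRUE)       # grand mean
  out <- resp
  for (i in seq_len(nrow(resp))) {
    for (j in seq_len(ncol(resp))) {
      if (is.na(out[i, j])) {
        val <- pm[i] + im[j] - gm
        out[i, j] <- ifelse(val >= 0.5, 1, 0)  # clip to the 0/1 scale
      }
    }
  }
  out
}

# Example: a small 4-person x 3-item matrix with two missing responses
resp <- matrix(c(1, 0, NA,
                 1, 1, 1,
                 0, NA, 0,
                 1, 0, 1), nrow = 4, byrow = TRUE)
tw_impute(resp)
```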

    Dealing with Missing Responses in Cognitive Diagnostic Modeling

    Missing data are a common problem in educational assessment settings. In the implementation of cognitive diagnostic models (CDMs), the presence and/or inappropriate treatment of missingness may yield biased parameter estimates and diagnostic information. Using simulated data, this study evaluates ten approaches for handling missing data in a commonly applied CDM, the deterministic inputs, noisy “and” gate (DINA) model: treating missing data as incorrect (IN), person mean (PM) imputation, item mean (IM) imputation, two-way (TW) imputation, response function (RF) imputation, logistic regression (LR), expectation-maximization (EM) imputation, full information maximum likelihood (FIML) estimation, predictive mean matching (PMM), and random imputation (RI). Specifically, the current study investigates how the estimation accuracy of item parameters and examinees’ attribute profiles under DINA is affected by the presence of missing data and the choice of missing data method across conditions. While no single method was superior across all conditions, the results support the use of FIML, PMM, LR, and EM for recovering item parameters. The selected methods, except for PM, performed similarly across conditions with respect to attribute classification accuracy. Recommendations for the treatment of missing responses in CDMs are provided. Limitations and future directions are discussed.
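
    For context, a minimal R sketch of the DINA item response function referenced above is given below. The function name and the example parameters are hypothetical; this illustrates the model itself, not the code used in the study.

```r
# DINA item response function: an examinee answers item j correctly with
# probability 1 - slip_j if they possess every attribute the Q-matrix
# requires for that item (eta = 1), and with guessing probability guess_j
# otherwise (eta = 0).
dina_prob <- function(alpha, q, slip, guess) {
  # alpha: N x K attribute-profile matrix (0/1); q: J x K Q-matrix (0/1)
  eta <- t(apply(alpha, 1, function(a)
    apply(q, 1, function(qj) as.integer(all(a[qj == 1] == 1)))))
  S <- matrix(slip,  nrow(alpha), nrow(q), byrow = TRUE)
  G <- matrix(guess, nrow(alpha), nrow(q), byrow = TRUE)
  (1 - S)^eta * G^(1 - eta)  # N x J matrix of correct-response probabilities
}

# Example with hypothetical parameters: 2 examinees, 2 items, 2 attributes
alpha <- rbind(c(1, 1), c(1, 0))  # attribute profiles
q     <- rbind(c(1, 0), c(1, 1))  # Q-matrix: item 2 requires both attributes
dina_prob(alpha, q, slip = c(0.10, 0.20), guess = c(0.20, 0.25))
```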

    Investigation of Missing Responses in Q-Matrix Validation

    Missing data can be a serious issue for practitioners and researchers tasked with Q-matrix validation in the implementation of cognitive diagnostic models. This article investigates the impact of missing responses, and of four common approaches for dealing with them (treating them as incorrect, logistic regression, listwise deletion, and expectation–maximization [EM] imputation), on the performance of two major Q-matrix validation methods (the EM-based δ-method and the nonparametric Q-matrix refinement method) across multiple factors. Results of the simulation study show that both validation methods perform better when missing responses are imputed using EM imputation or logistic regression than when they are treated as incorrect or handled with listwise deletion. The nonparametric Q-matrix refinement method outperforms the EM-based δ-method in most conditions. Higher missing rates yield poorer performance for both methods, and the numbers of attributes and items also affect their performance. Results from a real data example are also discussed.
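
    As a sketch of the idea behind the nonparametric refinement method, the snippet below scores candidate q-vectors for a single item by the residual sum of squares (RSS) between observed and ideal (conjunctive) responses, assuming attribute profiles have already been estimated. Names and data are hypothetical, and this simplifies the published procedure.

```r
# Core of the nonparametric Q-matrix refinement idea: for one item, score
# each candidate q-vector by the RSS between observed responses and the
# ideal responses it implies, and keep the candidate with the smallest RSS.
rss_for_item <- function(x_j, alpha, q_candidates) {
  # x_j: observed 0/1 responses to item j; alpha: N x K attribute profiles;
  # q_candidates: matrix of candidate q-vectors (rows), all-zero row excluded
  apply(q_candidates, 1, function(qj) {
    eta <- apply(alpha, 1, function(a) as.integer(all(a[qj == 1] == 1)))
    sum((x_j - eta)^2)  # RSS between observed and ideal responses
  })
}

# Example with K = 2 attributes: enumerate the three nonzero q-vectors
q_candidates <- rbind(c(1, 0), c(0, 1), c(1, 1))
alpha <- rbind(c(1, 1), c(1, 0), c(0, 1), c(0, 0))
x_j   <- c(1, 1, 0, 0)                  # hypothetical observed responses
rss_for_item(x_j, alpha, q_candidates)  # smallest RSS suggests q = (1, 0)
```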

    Results Files for Svetina and Dai, Journal of Experimental Education

    Results related to “Number of Response Categories and Sample Size Requirements in Polytomous IRT Models” by Dubravka Svetina Valdivia and Shenghai Dai, published in the Journal of Experimental Education, https://doi.org/10.1080/00220973.2022.2153783.

    Read me file - please read first

    This file provides a roadmap to the other files included in the project for researchers’ use.

    Simulation Study Code and Results Files for Svetina and Dai Paper, Journal of Experimental Education

    Read me file, simulation study code, and results files related to “Number of Response Categories and Sample Size Requirements in Polytomous IRT Models” by Dubravka Svetina Valdivia and Shenghai Dai, published in the Journal of Experimental Education, https://doi.org/10.1080/00220973.2022.2153783.

    Examining DIF in the Context of CDMs When the Q-Matrix Is Misspecified

    The rise in popularity and use of cognitive diagnostic models (CDMs) in educational research is partly motivated by the models’ ability to provide diagnostic information about students’ strengths and weaknesses in a variety of content areas. An important step toward ensuring appropriate interpretations from CDMs is to investigate differential item functioning (DIF). To this end, the current simulation study examined the performance of three methods for detecting DIF in CDMs, with particular emphasis on the impact of Q-matrix misspecification on the methods’ performance. Results showed that logistic regression and Mantel–Haenszel controlled Type I error better than the Wald test, whereas only logistic regression and the Wald test achieved high power. In addition to this tradeoff between Type I error control and acceptable power, our results suggest that Q-matrix complexity and item structure yield different results for different methods, presenting a more complex picture of the methods’ performance. Finally, implications and future directions are discussed.
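
    As an illustration of the logistic-regression DIF approach named above, here is a minimal base-R sketch using a total-score matching variable. The CDM-specific implementation in the study may condition on attribute profiles instead, and all names and simulated data here are hypothetical.

```r
# Logistic-regression DIF screen for one dichotomous item: nested models
# with a matching score, a group main effect (uniform DIF), and a
# group-by-score interaction (nonuniform DIF), compared by likelihood-ratio
# tests.
dif_lr_test <- function(item, score, group) {
  m0 <- glm(item ~ score,         family = binomial)
  m1 <- glm(item ~ score + group, family = binomial)
  m2 <- glm(item ~ score * group, family = binomial)
  list(uniform    = anova(m0, m1, test = "LRT"),  # group main effect
       nonuniform = anova(m1, m2, test = "LRT"))  # interaction effect
}

# Example on simulated data with built-in uniform DIF favoring group 1
set.seed(1)
n     <- 400
group <- rep(0:1, each = n / 2)
score <- rnorm(n)  # stand-in ability/matching variable
item  <- rbinom(n, 1, plogis(score + 0.6 * group))
dif_lr_test(item, score, group)
```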