553 research outputs found

    Quality Scalability Compression on Single-Loop Solution in HEVC

    This paper proposes a quality-scalable extension design for the upcoming High Efficiency Video Coding (HEVC) standard. In the proposed design, the single-loop decoder solution is extended to the scalable scenario. A novel interlayer intra/inter prediction is added to reduce the number of bits required by exploiting the correlation between coding layers. Experimental results indicate that an average Bjøntegaard delta-rate reduction of 20.50% is achieved compared with simulcast encoding, and a 47.98% Bjøntegaard delta-rate reduction compared with the scalable video coding extension of H.264/AVC. These significant rate savings confirm that the proposed method achieves better performance.
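The Bjøntegaard delta-rate metric cited above has a standard computation: fit a cubic polynomial to each codec's rate-distortion curve (log bit-rate as a function of PSNR) and compare the integrals over the overlapping PSNR range. A minimal sketch follows; the rate/PSNR points are hypothetical examples, not the paper's data.

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Average bit-rate difference (%) of the test codec vs. the reference,
    integrated over the overlapping PSNR range (Bjontegaard's method)."""
    lr_ref, lr_test = np.log10(rates_ref), np.log10(rates_test)
    # Fit cubic polynomials: log-rate as a function of PSNR.
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)
    # Overlapping PSNR interval of the two curves.
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    # Integrate each fitted curve over the common interval.
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (10 ** avg_diff - 1) * 100  # negative = bit-rate savings

# Hypothetical RD points (kbps, dB) for two codecs.
rates_a = [1000, 2000, 4000, 8000]   # reference codec
psnr_a  = [34.0, 36.5, 38.8, 40.9]
rates_b = [ 800, 1600, 3200, 6400]   # test codec: cheaper at similar quality
psnr_b  = [34.1, 36.6, 38.9, 41.0]
print(round(bd_rate(rates_a, psnr_a, rates_b, psnr_b), 2))
```

A negative BD-rate means the test codec needs fewer bits for the same quality, which is the sense in which the abstract reports 20.50% and 47.98% reductions.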

    Integration of the Forced-Choice Questionnaire and the Likert Scale: A Simulation Study

    The Thurstonian item response theory (IRT) model allows estimating respondents' latent trait scores directly from their responses to forced-choice questionnaires. It resolves some of the problems introduced by traditional scoring methods for such questionnaires. However, forced-choice designs still have their own limitations: the model may encounter underidentification and non-convergence, and the test may show low reliability in simple test designs (e.g., designs with only a small number of measured traits or a short test length). To overcome these weaknesses, the present study applied the Thurstonian IRT model and the Graded Response Model to a different test format that comprises both forced-choice blocks and Likert-type items, with the Likert items chosen to have low social desirability. A Monte Carlo simulation study is used to investigate how the mixed response format performs under various conditions. Four factors are considered: the number of traits, test length, the percentage of Likert items, and the proportion of pairs composed of items keyed in opposite directions. Results reveal that the mixed response format can be superior to the forced-choice format, especially in simple designs where the latter performs poorly. Moreover, only a small number of Likert items is needed. One point to note is that researchers need to choose Likert items cautiously, as Likert items may introduce other response biases into the test. Discussion and suggestions are given for constructing personality tests that resist faking as much as possible while retaining acceptable reliability.
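The Graded Response Model half of the mixed format above has a simple generative form: for a K-category Likert item, each boundary probability P(X ≥ k) is a logistic curve, and category probabilities are differences of adjacent boundaries. A toy simulation sketch, with made-up item parameters (not the study's design):

```python
import numpy as np

rng = np.random.default_rng(0)

def grm_probs(theta, a, bs):
    """Category probabilities for one graded item under the GRM.
    a: discrimination; bs: ordered thresholds b_1 < ... < b_{K-1}."""
    # Boundary probabilities P(X >= k) for k = 1..K-1.
    p_ge = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(bs))))
    # Category probabilities are differences of adjacent boundaries.
    upper = np.concatenate(([1.0], p_ge))
    lower = np.concatenate((p_ge, [0.0]))
    return upper - lower  # length K, sums to 1

def simulate_item(thetas, a, bs):
    """Draw one graded response (0..K-1) per respondent."""
    return np.array([rng.choice(len(bs) + 1, p=grm_probs(t, a, bs))
                     for t in thetas])

thetas = rng.normal(size=500)                              # latent trait scores
resp = simulate_item(thetas, a=1.5, bs=[-1.0, 0.0, 1.0])   # 4-point Likert item
print(resp[:10], resp.mean())
```

In a full replication of the study's Monte Carlo design, responses like these for the Likert items would be combined with Thurstonian-IRT-generated forced-choice blocks before joint estimation.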

    Assessment of Collaborative Problem Solving Based on Process Stream Data: A New Paradigm for Extracting Indicators and Modeling Dyad Data

    As one of the important 21st-century skills, collaborative problem solving (CPS) has aroused widespread concern in assessment. To measure this skill, two main approaches have been developed: the human-to-human and human-to-agent modes. Of the two, human-to-human interaction is much closer to real-world situations, and its process stream data can reveal more detail about the underlying cognitive processes. The challenge in fully tapping the information obtained from this mode is how to extract and model indicators from the data, and existing approaches have their limitations. In the present study, we proposed a new paradigm for extracting indicators and modeling the dyad data in the human-to-human mode. Specifically, both individual and group indicators were extracted from the data stream as evidence of CPS skills. Afterward, a within-item multidimensional Rasch model was used to fit the dyad data. To validate the paradigm, we developed five online tasks following an asymmetric mechanism, one for practice and four for formal testing. Four hundred thirty-four Chinese students participated in the assessment, and the online platform recorded their crucial actions with time stamps. The generated process stream data were handled with the proposed paradigm. Results showed that the model fitted well: the indicator parameter estimates and fit indexes were acceptable, and students were well differentiated. In general, the new paradigm of extracting indicators and modeling the dyad data is feasible and valid for human-to-human assessment of CPS. Finally, the limitations of the current study and further research directions are discussed.
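In a within-item multidimensional Rasch model like the one used above, a single indicator may load on several latent dimensions at once (e.g., an individual skill and a group skill), encoded by a Q-matrix row. A minimal sketch of the response probability; the dimension names and values are illustrative assumptions, not the study's estimates.

```python
import numpy as np

def rasch_multidim_prob(theta, q_row, delta):
    """P(indicator observed) under a within-item multidimensional Rasch model:
    logistic(q . theta - delta).
    theta: latent skill vector; q_row: 0/1 Q-matrix row marking which
    dimensions the indicator taps; delta: indicator difficulty."""
    logit = np.dot(q_row, theta) - delta
    return 1.0 / (1.0 + np.exp(-logit))

# Two hypothetical dimensions: [individual CPS skill, group CPS skill].
theta = np.array([0.5, -0.2])
q_individual = np.array([1, 0])   # indicator tapping the individual skill only
q_within     = np.array([1, 1])   # within-item indicator tapping both skills

print(rasch_multidim_prob(theta, q_individual, delta=0.0))  # logistic(0.5)
print(rasch_multidim_prob(theta, q_within, delta=0.0))      # logistic(0.3)
```

The "within-item" property is simply that `q_within` has more than one nonzero entry, so one logged action contributes evidence to several dimensions simultaneously.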

    Research and practice on training mode of applied talents for electrical engineering majors in universities

    The training of applied talents focuses on students’ innovation ability and engineering practice ability, which requires electrical engineering teachers to fully integrate theoretical knowledge and engineering practice in their teaching. In teaching reform, teachers should pay attention to optimizing course settings, starting from both theory and practice, so as to help students acquire more knowledge of electricity, effectively develop their applied command of electrical and electronic technology, and support their comprehensive development. Colleges and universities should pay more attention to the training of application-oriented talents and train more qualified talents for the development of social production. On this basis, this paper analyzes practical strategies for the training mode of applied talents for electrical engineering majors in colleges and universities, in order to provide references for educators.

    Comparison of Different LGM-Based Methods with MAR and MNAR Dropout Data

    The missing not at random (MNAR) mechanism may bias parameter estimates and even distort study results. This study compared the maximum likelihood (ML) approach, which assumes a missing at random (MAR) mechanism, with the Diggle–Kenward selection model, which accommodates an MNAR mechanism, for handling missing data through a Monte Carlo simulation study. Four factors were considered: the missingness mechanism, the dropout rate, the distribution shape (i.e., skewness and kurtosis), and the sample size. The results indicated that: (1) Under the MAR mechanism, the Diggle–Kenward selection model yielded estimation results similar to the ML approach; under the MNAR mechanism, the ML estimates were biased downward, especially for the intercept mean and slope mean (μi and μs). (2) Under the MAR mechanism, the 95% coverage probability (CP) of the Diggle–Kenward selection model was lower than that of the ML method; under the MNAR mechanism, the 95% CP for both methods fell below the nominal level of 95%, but the Diggle–Kenward selection model yielded much higher coverage probabilities than the ML method. (3) The Diggle–Kenward selection model was more sensitive than the ML approach to the degree of non-normality of the target variable's distribution. The dropout rate was the major factor affecting estimation precision, and the dropout-induced difference between the two methods can be ignored only when the dropout rate falls below 10%.
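The Diggle–Kenward selection mechanism compared above makes the probability of dropout at occasion t depend on the current, possibly unobserved value y_t, which is what makes the missingness MNAR. A toy data-generation sketch for a linear growth model; all coefficients and sample sizes are illustrative assumptions, not the study's simulation design.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_dk_dropout(Y, psi0=-2.0, psi1=0.0, psi2=1.0):
    """Monotone Diggle-Kenward dropout:
    logit P(drop at t) = psi0 + psi1*y_{t-1} + psi2*y_t.
    psi2 != 0 ties missingness to the unobserved current value (MNAR)."""
    Y = Y.copy()
    n, T = Y.shape
    for i in range(n):
        for t in range(1, T):
            logit = psi0 + psi1 * Y[i, t - 1] + psi2 * Y[i, t]
            if rng.random() < 1.0 / (1.0 + np.exp(-logit)):
                Y[i, t:] = np.nan   # monotone dropout: all later waves missing
                break
    return Y

# Complete linear-growth data: y_it = intercept_i + slope_i * t + noise.
n, T = 1000, 4
intercepts = rng.normal(0.0, 1.0, n)
slopes = rng.normal(0.5, 0.3, n)
t_grid = np.arange(T)
Y_full = intercepts[:, None] + slopes[:, None] * t_grid + rng.normal(0, 0.5, (n, T))
Y_obs = add_dk_dropout(Y_full)
print("dropout rate at last wave:", np.isnan(Y_obs[:, -1]).mean())
```

With `psi2 > 0`, participants with higher current values are more likely to drop out, so a MAR-based ML analysis of `Y_obs` underestimates the growth parameters, matching the direction of bias the abstract reports.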

    Modeling Test-Taking Non-effort in MIRT Models

    The validity of inferences based on test scores is threatened when examinees' test-taking non-effort is ignored. A possible solution is to add test-taking effort indicators to the measurement model after the non-effortful responses are flagged. As a new application of the multidimensional item response theory (MIRT) model for non-ignorable missing responses, this article proposes a MIRT method to account for non-effortful responses. Two simulation studies were conducted to examine the impact of non-effortful responses on item and latent ability parameter estimates, and to evaluate the performance of the MIRT method against the three-parameter logistic (3PL) model and the effort-moderated model. Results showed that: (a) as the percentage of non-effortful responses increased, the unidimensional 3PL model yielded poorer parameter estimates; (b) the MIRT model obtained item parameter estimates as accurate as the effort-moderated model; (c) the MIRT model provided the most accurate ability parameter estimates when the correlation between test-taking effort and ability was high. A real data analysis is also presented for illustration. Limitations and directions for future research are discussed.
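The effort-moderated baseline the study compares against treats a flagged non-effortful (rapid-guessing) response as random at chance level rather than as evidence about ability, while effortful responses follow the ordinary 3PL curve. A minimal sketch with illustrative parameters (not the study's values):

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """Three-parameter logistic model: guessing floor c plus logistic curve."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def p_effort_moderated(theta, a, b, c, effortful, chance=0.25):
    """Effort-moderated probability: 3PL when effortful, chance otherwise."""
    return np.where(effortful, p_3pl(theta, a, b, c), chance)

theta = np.array([-1.0, 0.0, 1.0])
effortful = np.array([True, False, True])   # middle response flagged as a rapid guess
print(p_3pl(theta, a=1.2, b=0.0, c=0.2))
print(p_effort_moderated(theta, a=1.2, b=0.0, c=0.2, effortful=effortful))
```

The MIRT approach proposed in the article goes further by modeling effort as an additional latent dimension rather than a fixed flag, which is why it can exploit an effort-ability correlation when one exists.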