5 research outputs found

    A Phenomenological Study on the Effectiveness of Curriculum and Course Information Packages in the Bologna Process

    The aim of the present study is to analyze the curricula (program and course information packages) of the School of Education and the Department of Educational Sciences with respect to the Bologna process. Designed in line with phenomenology, the study focuses on the phenomenon of "the effectiveness of curricula with respect to the Bologna process". The data were collected through interviews with two separate focus groups, one of students and one of lecturers, and analyzed using Miles and Huberman's stages. The results of the three research questions are explained in terms of preparation, implementation, follow-up and revision, and quality assurance. The results are as follows: On the positive side, both lecturers and students agreed that the process eased access to information and, through elective courses, course selection. However, all participants complained about the lack of information flow, unclear tasks and procedures, disbelief in the importance of the process, resistance to the preparation process, unfair work distribution, and the mismatch between competencies and courses. Lecturers also mentioned problems related to the revision and feedback processes.

    Investigation of Item Selection Methods According to Test Termination Rules in CAT Applications

    In this research, computerized adaptive testing item selection methods were investigated with respect to ability estimation methods and test termination rules. For this purpose, an item pool of 250 items and 2,000 examinees (M = 0, SD = 1) were simulated. A total of thirty computerized adaptive testing (CAT) conditions were created by crossing item selection methods (Maximum Fisher Information, a-stratification, Likelihood Weight Information Criterion, Gradual Information Ratio, and Kullback-Leibler), ability estimation methods (Maximum Likelihood Estimation, Expected a Posteriori Distribution), and test termination rules (40 items, SE < .20, and SE < .40). Under the fixed test-length stopping rule, the SE values obtained with Maximum Likelihood Estimation were higher than those obtained with the Expected a Posteriori Distribution ability estimation method. With Maximum Likelihood Estimation, the a-stratification item selection method produced the highest SE values when the test length was smaller than 30, whereas the Kullback-Leibler method yielded the highest SE values when the test length was larger than 30. With Expected a Posteriori estimation, the a-stratification method produced the highest SE values at all test lengths. Under the SE < .20 termination rule with Maximum Likelihood Estimation, the lowest and highest average numbers of items were obtained with the Gradual Information Ratio and Maximum Fisher Information item selection methods, respectively. Under the same rule with Expected a Posteriori estimation, the lowest average number of items was obtained with Kullback-Leibler and the highest with the Likelihood Weight Information Criterion. Under the SE < .40 termination rule with Maximum Likelihood Estimation, the maximum and minimum numbers of items were obtained with the Maximum Fisher Information and Kullback-Leibler item selection methods, respectively; with Expected a Posteriori estimation, the maximum and minimum numbers of items were obtained with Maximum Fisher Information and a-stratification, respectively. For both the SE < .20 and SE < .40 stopping rules, the average number of items was highest across all item selection methods when Maximum Likelihood Estimation was used.
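
    For readers unfamiliar with the mechanics described above, the sketch below illustrates the general idea of CAT item selection with Maximum Fisher Information and an SE-based stopping rule under a 2PL model. It is a minimal illustration with an arbitrary simulated item pool, not the authors' simulation design or code; the pool size, parameter ranges, and the grid-search maximum-likelihood estimator are assumptions made for the example.

```python
# Minimal CAT sketch (not the study's simulation): 2PL model, Maximum Fisher
# Information item selection, grid-search maximum-likelihood ability
# estimation, and a stopping rule of SE < .40 or 40 administered items.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item pool: discrimination a and difficulty b for 250 items.
a = rng.uniform(0.8, 2.0, size=250)
b = rng.normal(0.0, 1.0, size=250)

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def ml_estimate(responses, a_used, b_used, grid=np.linspace(-4, 4, 401)):
    """Grid-search maximum-likelihood ability estimate and its standard error."""
    loglik = np.zeros_like(grid)
    for u, ai, bi in zip(responses, a_used, b_used):
        p = p_correct(grid, ai, bi)
        loglik += u * np.log(p) + (1 - u) * np.log(1 - p)
    theta_hat = grid[np.argmax(loglik)]
    test_info = fisher_info(theta_hat, np.array(a_used), np.array(b_used)).sum()
    return theta_hat, 1.0 / np.sqrt(test_info)

def run_cat(true_theta, max_items=40, se_target=0.40):
    administered, responses = [], []
    theta_hat = 0.0  # provisional starting ability
    while len(administered) < max_items:
        # Maximum Fisher Information: pick the unused item that is most
        # informative at the current ability estimate.
        info = fisher_info(theta_hat, a, b)
        info[administered] = -np.inf
        item = int(np.argmax(info))
        administered.append(item)
        responses.append(int(rng.random() < p_correct(true_theta, a[item], b[item])))
        theta_hat, se = ml_estimate(responses, a[administered], b[administered])
        if se < se_target:
            break
    return theta_hat, se, len(administered)

print(run_cat(true_theta=0.5))
```

    Swapping the selection criterion (e.g., for Kullback-Leibler information), the estimator, or the stopping condition reproduces the kinds of conditions the study crosses.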

    Investigation of Item Selection Methods According to Test Termination Rules in CAT Applications

    No full text

    The Impact of Test Dimensionality, Common-Item Set Format, and Scale Linking Methods on Mixed-Format Test Equating

    No full text
    The purpose of this study was to examine the impact of dimensionality, common-item set format, and different scale linking methods on preserving equity properties in mixed-format test equating. Item response theory (IRT) true-score equating (TSE) and IRT observed-score equating (OSE) methods were used under the common-item nonequivalent groups design. Equity was evaluated in terms of first-order equity (FOE) and second-order equity (SOE). A simulation study was conducted based on actual item parameter estimates obtained from the TIMSS 2011 8th grade mathematics assessment. The results showed that: (i) FOE and SOE were best preserved under the unidimensional condition and poorly preserved when the degree of multidimensionality was severe. (ii) TSE and OSE results obtained with a mixed-format common-item set preserved FOE better than equating results obtained with a multiple-choice-only common-item set. (iii) Under both unidimensional and multidimensional test structures, characteristic curve scale linking methods preserved FOE and SOE significantly better than moment scale linking methods.
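
    As a rough illustration of the IRT true-score equating (TSE) step mentioned above, the sketch below maps a Form X number-correct true score to its Form Y equivalent by inverting the Form X test characteristic curve under a 2PL model. The item parameters and five-item forms are invented for the example and are not from the study or from TIMSS; in practice the Form Y parameters would first be placed on the Form X scale with a linking method (moment or characteristic curve) estimated from the common items.

```python
# Hedged sketch of IRT true-score equating under a 2PL model, with
# illustrative item parameters (not taken from the study).
import numpy as np
from scipy.optimize import brentq

def tcc(theta, a, b):
    """Test characteristic curve: expected number-correct score under the 2PL."""
    return np.sum(1.0 / (1.0 + np.exp(-a * (theta - b))))

# Illustrative parameters for two 5-item forms already on a common scale.
a_x, b_x = np.array([1.0, 1.2, 0.9, 1.5, 1.1]), np.array([-1.0, -0.3, 0.0, 0.5, 1.2])
a_y, b_y = np.array([1.1, 0.8, 1.3, 1.0, 1.4]), np.array([-0.8, -0.2, 0.1, 0.7, 1.0])

def true_score_equate(score_x):
    """Map a Form X true score to its Form Y equivalent via TCC inversion."""
    theta = brentq(lambda t: tcc(t, a_x, b_x) - score_x, -6.0, 6.0)
    return tcc(theta, a_y, b_y)

for s in [1.0, 2.5, 4.0]:
    print(f"Form X true score {s:.1f} -> Form Y equivalent {true_score_equate(s):.2f}")
```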

    Comparing Differential Item Functioning Based on Manifest Groups and Latent Classes

    No full text
    In this study, the performance of differential item functioning (DIF) detection methods based on manifest groups and latent classes was compared under 36 conditions. The simulation conditions included the proportion of DIF items, the reference-to-focal group ratio, the DIF effect size, and the overlap ratio between manifest groups and latent classes. To examine DIF based on manifest groups, the Mantel-Haenszel (MH) method was used within the framework of classical test theory, and Lord's χ² method within item response theory. Latent classes were identified using a multilevel mixture item response theory (MMIRT) model. Results show that the data fit the MMIRT model better when the DIF effect size was larger and a higher number of items contained DIF. When the DIF effect size was 1.0, MMIRT showed higher power and a lower Type I error rate across all overlap ratios, DIF item proportions, and reference-focal group conditions. When the overlap ratio was 90%, the power and Type I error rates of the MH and Lord's χ² methods were at acceptable levels under all conditions. The power of the MH and Lord's χ² methods decreased as the overlap ratio between manifest groups and latent classes decreased.
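
    To make the manifest-group side of the comparison concrete, the sketch below computes the Mantel-Haenszel common odds ratio and the ETS delta-MH statistic from score-stratified 2x2 tables. The counts are invented for illustration and do not correspond to the study's simulated conditions.

```python
# Minimal Mantel-Haenszel DIF sketch with illustrative data. Examinees are
# stratified by total score; within each stratum a 2x2 table counts correct
# and incorrect responses for the reference and focal groups. The common odds
# ratio is converted to the ETS delta scale, where absolute values of roughly
# 1.5 or more (together with statistical significance) are conventionally
# associated with large DIF.
import numpy as np

def mantel_haenszel_delta(tables):
    """tables: iterable of (A, B, C, D) per score stratum, where
    A = reference correct, B = reference incorrect,
    C = focal correct,     D = focal incorrect."""
    num, den = 0.0, 0.0
    for A, B, C, D in tables:
        N = A + B + C + D
        if N == 0:
            continue
        num += A * D / N
        den += B * C / N
    alpha_mh = num / den                 # MH common odds ratio
    return -2.35 * np.log(alpha_mh)      # ETS delta-MH

# Hypothetical counts for one item across four score strata.
tables = [
    (30, 20, 20, 30),
    (45, 15, 35, 25),
    (60, 10, 50, 20),
    (70,  5, 60, 15),
]
print(f"delta-MH = {mantel_haenszel_delta(tables):.2f}")
```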