8 research outputs found
Differences in Research Literacy in Educational Science Depending on Study Program and University
[EN] The ability to purposefully access, reflect on, and use evidence from educational research (Educational Research Literacy, ERL) is a key competency of future professionals in educational practice. Based on the conceptual framework presented in this paper, a test instrument was developed to assess ERL, consisting of the competence facets Information Literacy, Statistical Literacy, and Evidence-based Reasoning. This contribution delves deeper into the question of whether Educational Science students differ in their overall ERL proficiency depending on their study program and university. The comparison is based on a large-scale study of 1,213 Educational Science students (Teacher Training and Educational Studies) at six German universities in the winter semester of 2012/13 and the summer semester of 2013. The results indicate that students appear to benefit from their studies at the different universities. Moreover, the ERL competence facets differentiate to some extent between universities and degree programs, which can serve as a starting point for curricular quality development measures. Finally, the results are critically discussed and desiderata for future research are stated, e.g., the identification of predictors that explain the reported differences.
Groß Ophoff, J.; Schladitz, S.; Wirtz, M. (2017). Differences in Research Literacy in Educational Science Depending on Study Program and University. In Proceedings of the 3rd International Conference on Higher Education Advances, Editorial Universitat Politècnica de València, 1193-1202. https://doi.org/10.4995/HEAD17.2017.5556
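The comparison reported above rests on contrasting ERL test performance across study programs and universities. As a minimal illustration only (not the authors' analysis), the following Python sketch tabulates cell-wise score summaries from hypothetical data; the column names university, program, and erl_score are assumptions.

    import pandas as pd

    # Hypothetical test takers; values and names are illustrative assumptions.
    df = pd.DataFrame({
        "university": ["U1"] * 4 + ["U2"] * 4 + ["U3"] * 4,
        "program":    ["Teacher Training", "Teacher Training",
                       "Educational Studies", "Educational Studies"] * 3,
        "erl_score":  [0.42, 0.46, 0.35, 0.39,
                       0.51, 0.55, 0.47, 0.43,
                       0.38, 0.34, 0.44, 0.48],
    })

    # Mean ERL proficiency per university x study program cell, with cell sizes,
    # as a starting point for the kind of group comparison described above.
    summary = (
        df.groupby(["university", "program"])["erl_score"]
          .agg(["mean", "std", "count"])
          .round(2)
    )
    print(summary)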
Psychosocial Burden and Strains of Pedagogues - Using the Job Demands-Resources Theory to Predict Burnout, Job Satisfaction, General State of Health, and Life Satisfaction
The current study examines the Job Demands-Resources theory among pedagogical professionals. A total of 466 pedagogues (n = 227 teachers; n = 239 social workers) completed the Copenhagen Psychosocial Questionnaire online. After the questionnaire structure had been tested using confirmatory factor analysis, a JD-R-based model predicting the effects of strains on the outcome constructs burnout, job satisfaction, general state of health, and life satisfaction was estimated. The results confirm the questionnaire structure (RMSEA = 0.038; CFI = 0.94) as well as the fit of the prediction model (RMSEA = 0.039; CFI = 0.93). The outcome constructs could be predicted by emotional demands, work–privacy conflict, role conflicts, influence at work, scope for decision making, and opportunities for development (0.41 ≤ R² ≤ 0.57). A moderator analysis revealed differences between teachers and social workers in the structure of the prediction model, especially for life satisfaction: for teachers, quantitative demands and work–privacy conflict are predictive, whereas for social workers, role conflicts and burnout are predictive. The study offers starting points for job-related measures of prevention and intervention.
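For readers unfamiliar with the reported fit indices, RMSEA and CFI are conventionally defined from the chi-square statistics of the tested model (M) and a baseline model (B). This is the standard textbook formulation, not a detail taken from the study itself; software packages differ in whether N or N-1 appears in the RMSEA denominator.

    \[
    \mathrm{RMSEA} = \sqrt{\frac{\max\!\left(\chi^2_M - df_M,\; 0\right)}{df_M \,(N-1)}},
    \qquad
    \mathrm{CFI} = 1 - \frac{\max\!\left(\chi^2_M - df_M,\; 0\right)}{\max\!\left(\chi^2_M - df_M,\; \chi^2_B - df_B,\; 0\right)}
    \]

Values of RMSEA near or below 0.05 and CFI above roughly 0.90 to 0.95, as reported above, are commonly read as acceptable to good fit.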
Effekte verschiedener Antwortformate in der Erfassung bildungswissenschaftlicher Forschungskompetenz
The use of appropriate response formats in competency testing has been a topic of interest for the last few decades. In particular, the comparison of multiple-choice items with free-response items has been widely examined. The present study examines the objective and subjective difficulty of these two response formats and further addresses the question of whether construct dimensionality depends on response format. Test items measuring Educational Research Literacy were presented to 600 university students in Educational Sciences. To eliminate possible distortions from memory effects, stem-equivalent items of both formats were distributed across two test booklets and linked by anchor items. Comparing the response formats did not yield a clear result concerning objective difficulty. Free-response items were in most cases subjectively rated as more difficult than multiple-choice items. Objective item difficulty was in most cases not related to subjectively rated difficulty, independent of response format. Model comparisons did not suggest a different dimensionality when an additional method factor based on response format was specified. The results show that in the domain of Educational Research Literacy there is no distinct advantage of one format over the other in terms of difficulty. Together with the established unidimensionality, this suggests that both formats may be used in competency tests in this content domain. (DIPF/Orig.)
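The booklet linking described above can be illustrated with a short Python sketch. This is a hypothetical construction under assumed item names, not the study's actual test assembly: each stem appears in only one format per booklet, the alternative format goes into the other booklet, and a shared set of anchor items links the two.

    # Hypothetical item stems and anchor items; names are assumptions.
    stems = [f"stem_{i:02d}" for i in range(1, 21)]
    anchors = [f"anchor_{i}" for i in range(1, 6)]

    # Anchor items appear in both booklets so the booklets can be linked.
    booklet_a, booklet_b = list(anchors), list(anchors)
    for idx, stem in enumerate(stems):
        mc_item, fr_item = f"{stem}_mc", f"{stem}_fr"  # stem-equivalent versions
        # Alternate the formats so each booklet mixes both formats
        # and no test taker ever sees the same stem twice.
        if idx % 2 == 0:
            booklet_a.append(mc_item)
            booklet_b.append(fr_item)
        else:
            booklet_a.append(fr_item)
            booklet_b.append(mc_item)

    print(len(booklet_a), len(booklet_b))  # each booklet: 5 anchors + 20 stem items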
Erfassung bildungswissenschaftlicher Forschungskompetenz in der Hochschulbildung: Konstruktvalidierung der Faktorstruktur eines Testverfahrens unter Berücksichtigung des unterschiedlichen Umgangs mit ausgelassenen Antworten
The ability to purposefully access, reflect on, and use evidence from educational research (Educational Research Literacy) is a key competency of future professionals in educational practice. A test instrument was developed to assess Educational Research Literacy with the competence facets Information Literacy, Statistical Literacy, and Evidence-based Reasoning. Even though there are certain overlaps with generic concepts such as critical thinking or problem solving, Educational Research Literacy is acquired within its reference disciplines. This contribution delves deeper into the question of which factorial model is most appropriate. Four competing models were compared: a unidimensional model, a three-dimensional model, and two bifactor models. The comparison was based on a study of 1,360 students at six German universities and was validated with another study of 753 students at three universities. The results were also examined with respect to the scoring of omitted responses and the booklet design used in the first study. The results indicate that the four-dimensional bifactor model was the most appropriate: Educational Research Literacy seems to consist of one dominant factor and three secondary factors. The results also support handling both omitted and not-reached responses as missing information. Finally, the results are critically discussed with respect to the requirements for assessing and imparting competencies in higher education, and recommendations for future research are stated. (DIPF/Orig.)
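The recommendation above, to treat both omitted and not-reached responses as missing information rather than as incorrect, can be sketched in Python as follows. The data matrix and the helper function are illustrative assumptions, not the study's scoring code.

    import numpy as np
    import pandas as pd

    # Hypothetical response matrix: 1 = correct, 0 = incorrect, NaN = blank.
    responses = pd.DataFrame(
        [[1, 0, np.nan, 1, np.nan, np.nan],  # one embedded omission, two not reached
         [0, 1, 1, np.nan, 1, 0]],           # one embedded omission
        columns=[f"item_{i}" for i in range(1, 7)],
    )

    def classify_blanks(row):
        """Label each blank as 'omitted' (embedded) or 'not_reached' (trailing)."""
        blank = row.isna()
        answered = np.where(~blank.to_numpy())[0]
        last_answered = answered.max() if answered.size else -1
        return {
            item: ("not_reached" if pos > last_answered else "omitted")
            for pos, item in enumerate(row.index)
            if blank.iloc[pos]
        }

    blank_types = responses.apply(classify_blanks, axis=1)

    # Under the recommended coding, both categories simply stay missing (NaN);
    # scoring them as wrong would instead replace NaN with 0.
    scored_as_missing = responses.copy()
    scored_as_wrong = responses.fillna(0)
    print(blank_types)
    print(scored_as_missing)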
Konstruktvalidierung eines Tests zur Messung bildungswissenschaftlicher Forschungskompetenz
Being able to make evidence-based decisions is a central aim of higher education. The project LeScEd (Learning the Science of Education) incorporates approaches from fields such as Information Sciences, Mathematics Education, and Psychology into a comprehensive structural model of Educational Research Literacy (ERL). The study described in this article analyzes the relations of ERL to fluid intelligence and to self-reported ERL in order to examine the discriminant and convergent validity of the developed instrument, respectively. Structural equation modeling revealed small positive associations with fluid intelligence and no association with self-reported ERL. These results indicate that ERL is related to, but distinguishable from, general intelligence, and that self-reported competence is not a reliable indicator of actual competence. (DIPF/Orig.)
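At the observed-score level, the convergent/discriminant pattern described above amounts to inspecting the correlations of the ERL test score with fluid intelligence and with self-reported ERL. The following Python sketch with hypothetical, simulated variables illustrates that pattern only; the study itself modeled these relations at the latent level with structural equation models.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 200

    # Simulated observed scores; names and effect sizes are assumptions.
    gf = rng.normal(size=n)                            # fluid intelligence
    erl = 0.25 * gf + rng.normal(scale=0.97, size=n)   # small positive relation
    self_report = rng.normal(size=n)                   # unrelated self-rating

    scores = pd.DataFrame({"erl": erl, "gf": gf, "self_report": self_report})

    # A small positive erl-gf correlation alongside a near-zero erl-self_report
    # correlation would mirror the pattern reported above.
    print(scores.corr().round(2))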