8 research outputs found

    DEVELOPMENT OF A WEB-BASED COLLABORATIVE PROBLEM SOLVING INVENTORY (CPSI) TO MEASURE STUDENTS' COLLABORATION SKILLS IN PROBLEM SOLVING

    Get PDF
    Abstract. Collaborative Problem Solving (CPS) is one of the essential skills of the 21st century and one that must be instilled in students during the learning process. CPS skills can be measured using an instrument in the form of an inventory. In this study, a web-based Collaborative Problem Solving Inventory (CPSI) was developed using the Learning Development Cycle (LDC) development method. The CPSI developed by the authors is an innovative measurement of Collaborative Problem Solving, emphasizing measurement before, during, and after Collaborative Problem Solving learning activities are carried out. The stages of this research were: (1) Observation, Survey, and Needs Analysis; (2) Focus Group Discussion; (3) Preparation of the CPSI Design; (4) Development of the Web-Based CPSI through the LDC Development Model; (5) Validation of the CPSI by experts in teaching-material development (feasibility) and by teachers (practicality); and (6) Data Analysis. The study produced a web-based CPSI instrument with indicators for assessing collaboration skills, validated by experts and practitioners. The validation results showed that the instrument is feasible to use with minor improvements (validity 3.75 from media experts; 3.78 from evaluation experts; and a mean of 3.65 from practitioners). Follow-up research can test the instrument in a classroom learning situation, after first improving the attractiveness of the website's appearance and design.
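    The abstract reports mean validity ratings from media experts (3.75), evaluation experts (3.78), and practitioners (3.65). Below is a minimal sketch of how such ratings might be aggregated and mapped to a feasibility verdict; the 4-point scale, the per-item ratings, and the category cut-offs are illustrative assumptions, not values stated in the abstract.

```python
# Hedged sketch: aggregate validator ratings for an instrument such as the
# web-based CPSI. The 4-point scale and category cut-offs are assumptions.
from statistics import mean

def validity_category(score, scale_max=4.0):
    """Map a mean rating to a qualitative verdict (hypothetical cut-offs)."""
    if score >= 0.95 * scale_max:
        return "feasible without revision"
    if score >= 0.70 * scale_max:
        return "feasible with minor improvements"
    return "needs major revision"

# Hypothetical per-item ratings from one validator group (example values only).
media_expert_items = [4, 4, 3, 4, 4, 4, 3, 4]
score = mean(media_expert_items)
print(f"mean = {score:.2f} -> {validity_category(score)}")
```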

    Students' perception of auto-scored online exams in blended assessment: feedback for improvement

    Full text link
    Development of the information and communication technologies has led to an increase in the use of Computer Based Assessment (CBA) in higher education. In the last decade, there has been a discussion of online versus traditional pen-and-paper exams. The aim of this study was to verify whether students have reservations about auto-scored online exams and, if that is the case, to determine the reasons. The study was performed in the context of a blended assessment in which 1200 students were enrolled on a first-year university physics course. Among them, 463 answered an anonymous survey, supplemented by information obtained from an open-ended question and from interviews with students. Three factors (labelled 'F1-Learning', 'F2-Use of Tool', and 'F3-Assessment') emerged from the quantitative analysis of the survey, and an additive scale was established. We found significant differences in the 'F3-Assessment' factor compared to the other two factors, indicating a lower acceptance of the tool for student assessment. It seems that even though students are used to computers, they lack confidence in online exams. To add strength and nuance to the quantitative results, we carried out an in-depth survey on this topic in the form of an open-ended question and by interviewing a small group of 11 students. Although their comments were in general positive, especially on ease of use and on the tool's usefulness in indicating the level achieved during the learning process, there was also some criticism of the clarity of the questions and the strictness of the marking system. These two factors, among others, could have been the cause of the worse perception of F3-Assessment and the origin of the students' reluctance towards online exams and automatic scoring.
    This work was supported by the Universitat Politècnica de València through the A15/16 Project (Convocatoria de Proyectos de Innovación y Convergencia de la UPV). We would like to thank the ICE at the Universitat Politècnica de València for their help through the Innovation and Educational Quality Program and for supporting the team Innovación en Metodologías Activas para el Aprendizaje de la Física (e-MACAFI).
    Riera Guasp, J.; Ardid Ramírez, M.; Gómez-Tejedor, J.; Vidaurre, A.; Meseguer Dueñas, J. M. (2018). Students' perception of auto-scored online exams in blended assessment: feedback for improvement. Educación XX1, 21(2), 79-103. https://doi.org/10.5944/educXX1.19559
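    The analysis above groups survey items into three factors, builds an additive scale, and compares the 'F3-Assessment' factor against the other two. A minimal sketch of that kind of comparison is shown below; the item-to-factor mapping, the simulated responses, and the use of paired t-tests are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: additive (mean) factor scores from Likert items and a paired
# comparison of 'F3-Assessment' against the other two factors.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 463  # number of survey respondents reported in the abstract
df = pd.DataFrame(rng.integers(1, 6, size=(n, 9)),
                  columns=[f"q{i}" for i in range(1, 10)])

factors = {                       # hypothetical item-to-factor mapping
    "F1_Learning":    ["q1", "q2", "q3"],
    "F2_Use_of_Tool": ["q4", "q5", "q6"],
    "F3_Assessment":  ["q7", "q8", "q9"],
}
scores = pd.DataFrame({name: df[items].mean(axis=1)
                       for name, items in factors.items()})

for other in ("F1_Learning", "F2_Use_of_Tool"):
    t, p = stats.ttest_rel(scores["F3_Assessment"], scores[other])
    print(f"F3 vs {other}: t = {t:.2f}, p = {p:.3f}")
```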

    Scales of Online Learning Readiness: Empirical Validation of Factors Affecting EFL Learners in Online Learning during Covid-19 Pandemic

    Get PDF
    Although extensive research has been carried out on university students' online learning readiness, very little attention has been paid to the online learning readiness of foreign language learners. Examining learners' readiness to get involved in online learning has become even more fundamental during the current Covid-19 pandemic, since online learning is the only alternative for running educational programs at every level. This study set out to investigate the construct validity of a scale measuring EFL learners' readiness for online learning during the Covid-19 pandemic. The scale was constructed based on the theories underlying students' readiness for online learning. Both exploratory and confirmatory factor analyses were used to empirically validate the scale. A total of 682 undergraduate students from seven universities in Indonesia participated in the study by completing the Google Form-based scale. The results showed that the scale comprised 24 items that converged into five latent factors with an acceptable fit. These results are expected to serve as a basis for planning, implementing, and evaluating EFL online learning programs in the Indonesian context.
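    The scale above was validated with exploratory and confirmatory factor analysis on 682 responses, converging on 24 items loading on five latent factors. Below is a minimal sketch of the exploratory step using the factor_analyzer package; the simulated data and the package choice are assumptions, and the confirmatory step with fit indices (typically run in an SEM tool) is not shown.

```python
# Hedged sketch: exploratory factor analysis for an online-learning-readiness
# scale. Data here are simulated; in the study, responses from 682 students
# on 24 Likert items would be used instead.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

rng = np.random.default_rng(1)
items = pd.DataFrame(rng.integers(1, 6, size=(682, 24)),
                     columns=[f"item{i}" for i in range(1, 25)])

# Sampling adequacy / sphericity checks commonly reported before an EFA.
chi2, p = calculate_bartlett_sphericity(items)
_, kmo_overall = calculate_kmo(items)
print(f"Bartlett chi2 = {chi2:.1f} (p = {p:.3f}), KMO = {kmo_overall:.2f}")

# Extract five factors, matching the five-factor structure reported.
efa = FactorAnalyzer(n_factors=5, rotation="varimax")
efa.fit(items)
loadings = pd.DataFrame(efa.loadings_, index=items.columns)
print(loadings.round(2))
```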

    Same same but different: Learning with technology – are first-year college students prepared for this?

    Get PDF
    March 2020 changed the world of learning. Ever since, students have been relying on remote lecturers, virtual fellow students, and electronic learning material. For many, this greatly differs from how they used to learn before, and even though technology is integral to students' everyday life, many are not familiar with using technology for their learning. The purpose of this study was to investigate whether first-year college students are prepared for learning with technology and to empirically document possible gaps. To assess this, two successive first-year cohorts completed a 32-item questionnaire based on standardized scales assessing time management, collaboration, and self-directedness as the three core competencies higher education learners need to successfully engage in learning with technology. The answers were related to students' prior experiences and their motivation to learn online. First results indicated that time management is a major challenge for first-year students with and without work experience. Results also suggest that the motivation to learn has a positive relationship with the concept variables chosen to assess first-year students' expectations and readiness for online learning. The findings may support the need for higher education institutions to understand students' expectations and self-assessed readiness and to identify areas for improvement.
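    The questionnaire above scores three competency scales (time management, collaboration, self-directedness) and relates them to motivation to learn online. Below is a minimal sketch of that scoring-and-correlation step, with hypothetical column names, item groupings, and simulated data rather than the study's actual instrument.

```python
# Hedged sketch: score competency subscales as item means and correlate them
# with motivation to learn online. Column names and item counts are
# illustrative assumptions, not the study's actual questionnaire layout.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 200                                    # hypothetical cohort size
raw = pd.DataFrame(rng.integers(1, 6, size=(n, 32)),
                   columns=[f"q{i}" for i in range(1, 33)])

subscales = {                              # hypothetical item groupings
    "time_management":   [f"q{i}" for i in range(1, 11)],
    "collaboration":     [f"q{i}" for i in range(11, 21)],
    "self_directedness": [f"q{i}" for i in range(21, 29)],
    "motivation":        [f"q{i}" for i in range(29, 33)],
}
scores = pd.DataFrame({k: raw[v].mean(axis=1) for k, v in subscales.items()})

# Relationship between motivation and the three competency scales.
print(scores.corr(method="spearman")["motivation"].round(2))
```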

    Social science education students’ preparedness for problem-based hybrid learning

    Get PDF
    This research aims to investigate social science education students' preparedness before they attend problem-based hybrid learning (PBHL). It is quantitative research using an explorative survey method conducted on college students in the Social Science Education Program at Universitas Islam Negeri Maulana Malik Ibrahim Malang, Indonesia. The participants were 118 students, consisting of 32 male and 86 female students. The research used a questionnaire with a 1-4 Likert scale as the instrument to measure students' readiness in terms of their motivation, prospective behavior, and information and communication technology (ICT) skills. Data collection was carried out through Google Forms in April 2020. Descriptive quantitative analysis was used to describe students' preparedness, and a one-way ANOVA was used to identify the effect of gender on students' preparedness for PBHL. The results show that social science education students' preparedness (motivation, prospective behavior, and ICT skills) for PBHL is classified as high, namely in the B+ category. Furthermore, gender has no significant effect on students' preparedness for PBHL (p>0.05). Based on these results, it is recommended that the university facilitate easy internet access, for example by increasing bandwidth and internet connectivity, and promote other policies that support PBHL.
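    Preparedness above is scored on a 1-4 Likert scale and the effect of gender is tested with a one-way ANOVA. Below is a minimal sketch of that comparison using scipy.stats.f_oneway with simulated scores; with only two groups the ANOVA is equivalent to an independent-samples t-test.

```python
# Hedged sketch: one-way ANOVA of PBHL preparedness scores by gender.
# Scores are simulated on the 1-4 scale; the real study used questionnaire
# data from 32 male and 86 female students.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
male_scores = rng.uniform(1, 4, size=32)
female_scores = rng.uniform(1, 4, size=86)

f_stat, p_value = stats.f_oneway(male_scores, female_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No significant effect of gender on preparedness (as in the study).")
```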

    An empirical analysis of the determinants of mobile instant messaging appropriation in university learning

    Get PDF
    Research on technology adoption often profiles device usability (such as perceived usefulness) and user dispositions (such as perceived ease of use) as the prime determinants of effective technology adoption. Since any process of technology adoption cannot be conceived outside its situated contexts, this paper argues that a preoccupation with technology acceptance from the perspective of device usability and user dispositions potentially negates the enabling contexts that make successful adoption a reality. Contributing to contemporary debates on technology adoption, this study presents flexible mobile learning contexts comprising cost (device cost and communication cost), device capabilities (portability, collaborative capabilities), and learner traits (learner control) as antecedents that enable the sustainable uptake of emerging technologies. To explore the acceptance and capacity of mobile instant messaging systems to improve student performance, the study draws on these antecedents, develops a factor model, and empirically tests it on tertiary students at a South African University of Technology. The study involved 223 national diploma and bachelor's degree students and employed partial least squares for statistical analysis. Overall, the proposed model displayed a good fit with the data and rendered satisfactory explanatory power for students' acceptance of mobile learning. Findings suggest that device portability, communication cost, the collaborative capabilities of the device, and learner control are the main drivers of flexible learning in mobile environments. A flexible learning context facilitated by learner control was found to have a positive influence on attitude towards mobile learning and exhibited the highest path coefficient in the overall model. The implication is that educators need to create varied learning opportunities that leverage learner control of learning in mobile learning systems to enhance flexible mobile learning. The study also confirmed the statistical significance of the original Technology Acceptance Model constructs.
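    The model above is estimated with partial least squares over latent constructs such as device portability, collaborative capabilities, learner control, communication cost, and attitude towards mobile learning. The sketch below is a rough stand-in rather than the study's PLS procedure: it scores each construct as a single simulated variable and estimates standardized path weights with ordinary least squares; the construct scores and layout are assumptions for illustration.

```python
# Hedged sketch: approximate structural paths to "attitude towards mobile
# learning" by treating construct scores as observed variables and fitting a
# standardized OLS regression. This simplifies the PLS analysis in the study.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 223  # sample size reported in the abstract

constructs = ["portability", "collab_capabilities", "learner_control",
              "communication_cost", "attitude"]
# Hypothetical construct scores (in practice: means of each construct's items).
data = pd.DataFrame(rng.normal(size=(n, len(constructs))), columns=constructs)

z = (data - data.mean()) / data.std(ddof=0)         # standardize
X = z[["portability", "collab_capabilities", "learner_control",
       "communication_cost"]].to_numpy()
y = z["attitude"].to_numpy()

beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # standardized path weights
for name, b in zip(constructs[:-1], beta):
    print(f"{name} -> attitude: beta = {b:+.2f}")
```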