4 research outputs found

    Assessment of work performed by paired students: a pedagogical experience

    “Teamwork” is one of the abilities most valued by employers. In [16] we described the process of adapting a computer programming course for students in a technical degree not specifically dedicated to computing (Marine Engineering, UPM) to the ECTS methodologies for ongoing assessment. As a further step in this process we have emphasized cooperative learning: the students were paired, and the work of each pair was evaluated via surprise tests taken and graded jointly, which constituted a substantial part of the final grade. Here we document this experience, discussing methodological aspects, describing indicators for measuring the impact of these methodologies on the educational experience, and reporting the students’ opinion of it.

    Constructed response or multiple-choice questions for assessing declarative programming knowledge? That is the question!

    Aim/Purpose: This paper presents a data mining approach for analyzing responses to advanced declarative programming questions. The goal of this research is to find a model that can explain the results obtained by students when they take exams with constructed-response (CR) questions and with equivalent multiple-choice questions (MCQs).
    Background: Assessing acquired knowledge plays a fundamental role in the teaching-learning process. It helps to identify factors that can guide the teacher in developing pedagogical methods and evaluation tools, and it also contributes to students' self-regulation of learning. However, the better format of questions for assessing declarative programming knowledge is still a subject of ongoing debate: while some research advocates constructed responses, other work emphasizes the potential of multiple-choice questions.
    Methodology: A sensitivity analysis was applied to extract useful knowledge from the relevance of the characteristics (i.e., the input variables) used in the data mining process to compute the score.
    Contribution: Such knowledge helps teachers decide which format to use with respect to their objectives and the expected student results.
    Findings: The results show a set of factors that influence the discrepancy between answers in the two formats.
    Recommendations for Practitioners: Teachers can make an informed decision about whether to choose multiple-choice or constructed-response questions, taking the results of this study into account.
    Recommendations for Researchers: In this study, a block of exams with CR questions is verified to complement the area of learning, returning greater performance in the evaluation of students and improving the teaching-learning process.
    Impact on Society: The results of this research confirm the findings of several other researchers that the use of ICT and the application of MCQs is an added value in the evaluation process. In most cases the student is more likely to succeed with MCQs; however, if the teacher prefers to evaluate with CR questions, other research approaches are needed.
    Future Research: Future research should include other question formats.
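    The sensitivity analysis described above can be sketched as a one-at-a-time perturbation: vary each input characteristic of a scoring model and rank the characteristics by how much the predicted score changes. A minimal sketch follows; the `score_model` weights and feature names (`prior_grade`, `practice_hours`, `question_difficulty`) are hypothetical illustrations, not the variables or model from the paper.

    ```python
    def sensitivity(model, baseline, delta=1.0):
        """One-at-a-time sensitivity: perturb each input feature of `baseline`
        by `delta` and record the change in the model's predicted score."""
        base_score = model(baseline)
        impacts = {}
        for name in baseline:
            perturbed = dict(baseline)
            perturbed[name] += delta
            impacts[name] = model(perturbed) - base_score
        return impacts

    # Hypothetical linear scoring model; the weights are illustrative only.
    def score_model(f):
        return (2.0 * f["prior_grade"]
                + 0.5 * f["practice_hours"]
                - 1.5 * f["question_difficulty"])

    student = {"prior_grade": 14.0, "practice_hours": 3.0, "question_difficulty": 2.0}
    impacts = sensitivity(score_model, student)
    # Rank features by the absolute impact of a unit perturbation on the score.
    ranked = sorted(impacts, key=lambda k: abs(impacts[k]), reverse=True)
    ```

    For a linear model the impact of each unit perturbation equals the feature's weight, so the ranking recovers the relevance ordering directly; for the trained data mining models in the paper the same procedure would be applied around each student's actual feature values.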

    Providing Insight into the Relationship Between Constructed Response Questions and Multiple Choice Questions in Introduction to Computer Programming Courses

    This Research-to-Practice Work in Progress (WIP) investigates the format of student assessment questions. In particular, the focus is on the relationship between student performance on open-ended, constructed-response questions (CRQs) and closed-ended, multiple-choice questions (MCQs) in first-year introductory programming courses. We introduce a study to evaluate whether these different response formats return distinct or comparable results. To assess this, we compare and correlate student scores on each question type. Our focus is on assessments (exams and tests) in first-year classes. The paper investigates two first-year programming courses with a total of seven sections and approximately 180 combined students. The subject of the sequential set of courses is the procedural C programming language. Based on extant studies comparing student performance on MCQs to their performance on open-ended questions, we investigate whether MCQ scores predict CRQ scores. Preliminary results on the comparison between student performance on these two question formats are presented to assess whether MCQs produce similar results to CRQs, or whether MCQs yield unique contributions. Possible avenues for future work are also discussed.
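    The core measurement in the study above, correlating per-student MCQ scores with CRQ scores, can be sketched with a plain Pearson correlation. The score lists below are invented placeholder data, not results from the paper.

    ```python
    from math import sqrt

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length score lists."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical per-student exam scores (0-100); not data from the study.
    mcq_scores = [78, 85, 62, 90, 70, 55, 88, 73]
    crq_scores = [72, 80, 58, 86, 66, 50, 84, 70]

    r = pearson_r(mcq_scores, crq_scores)  # r near 1 means MCQs track CRQs closely
    ```

    A high r would suggest the two formats rank students similarly (MCQ scores predict CRQ scores); a weak correlation would suggest each format captures something the other does not, which is the distinction the WIP sets out to test.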

    Testing Programming Skills with Multiple Choice Questions

    Abstract. Multiple choice questions are a convenient and popular means of testing beginning students in programming courses. However, they are qualitatively different from exam questions. This paper reports on a study into which types of multiple choice programming questions discriminate well on a final exam, and how well they predict exam scores.