
    Development and Evaluation of the Nebraska Assessment of Computing Knowledge

    One way to increase the quality of computing education research is to improve the measurement tools available to researchers, especially measures of students’ knowledge and skills. This paper is a step toward increasing the number of thoroughly evaluated tests available for computing education research: it evaluates the psychometric properties of a multiple-choice test designed to differentiate undergraduate students by their mastery of foundational computing concepts. Classical test theory and item response theory analyses are reported and indicate that the test is a reliable, psychometrically sound instrument suitable for research with undergraduate students. Limitations and the importance of using standardized measures of learning in education research are discussed.
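
    The classical test theory analyses mentioned above typically include item difficulty, item discrimination, and a reliability coefficient such as KR-20. The following is a minimal sketch of how those statistics can be computed from a 0/1 response matrix; the data and variable names are invented for illustration and are not taken from the Nebraska assessment itself.

        import numpy as np

        # rows = students, columns = items; 1 = correct, 0 = incorrect (toy data)
        responses = np.array([
            [1, 1, 1, 1],
            [1, 1, 1, 0],
            [1, 1, 0, 0],
            [1, 0, 0, 0],
            [0, 0, 0, 0],
            [1, 1, 1, 1],
        ])

        n_items = responses.shape[1]
        total = responses.sum(axis=1)

        # Item difficulty: proportion of students answering each item correctly.
        difficulty = responses.mean(axis=0)

        # Item discrimination: correlation of each item with the rest-of-test score.
        discrimination = np.array([
            np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
            for j in range(n_items)
        ])

        # KR-20 reliability (Cronbach's alpha for dichotomous items).
        p, q = difficulty, 1 - difficulty
        kr20 = (n_items / (n_items - 1)) * (1 - (p * q).sum() / total.var(ddof=1))

        print(difficulty, discrimination, kr20)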

    Predicting and Improving Performance on Introductory Programming Courses (CS1)

    This thesis describes a longitudinal study of factors that predict academic success in introductory programming at the undergraduate level, including the development of these factors into a fully automated, web-based system that predicts which students are at risk of not succeeding early in the introductory programming module, and interventions to address attrition rates on introductory programming courses (CS1). Numerous studies have developed models for predicting success in CS1; however, there is little evidence of their ability to generalise or of their use beyond early investigations. In addition, they are seldom followed up with interventions after struggling students have been identified. The approach described here overcomes this by providing a web-based, real-time system with a longitudinally developed and revalidated prediction model at its core, together with recommendations for interventions that educators could implement to support the struggling students it identifies. This thesis makes five fundamental contributions. The first is a revalidation of a prediction model named PreSS. The second is the development of a web-based, real-time implementation of the PreSS model, named PreSS#. The third is a large longitudinal, multivariate, multi-institutional study identifying predictors of performance and analysing machine learning techniques (including deep learning and convolutional neural networks) to further develop the PreSS model. This resulted in a prediction model with approximately 71% accuracy and over 80% sensitivity, using data from 11 institutions with a sample size of 692 students. The fourth is a study of gender differences in CS1, identifying psychological, background, and performance differences between male and female students to better inform the prediction model and the interventions. The fifth and final contribution is the development of two interventions that can be implemented early in CS1, once at-risk students are identified by PreSS#, to potentially improve student outcomes. The work described in this thesis builds substantially on earlier work, providing valid and reliable insights into gender differences, potential interventions to improve performance, and an unsurpassed, generalisable prediction model developed into a real-time, web-based system.
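
    As a rough illustration of the kind of early-warning classifier the thesis describes, the sketch below trains a logistic regression on synthetic early-course features and reports accuracy and sensitivity. The features, data, and choice of model here are placeholder assumptions, not the actual PreSS predictors or modelling, which are described in the thesis itself.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score, recall_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Placeholder features, e.g. an early programming test score,
        # a self-efficacy rating, and prior experience (all hypothetical).
        X = rng.normal(size=(400, 3))
        # Placeholder label: 1 = at risk of not succeeding in CS1.
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=400) < 0).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = LogisticRegression().fit(X_train, y_train)
        pred = model.predict(X_test)

        # Sensitivity (recall on the at-risk class) matters most here:
        # a missed struggling student receives no intervention.
        print("accuracy:", accuracy_score(y_test, pred))
        print("sensitivity:", recall_score(y_test, pred))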

    Development and Validation of the Computational Thinking Concepts and Skills Test

    Calls for standardized and validated measures of computational thinking have been made repeatedly in recent years. Still, few such tests have been created and even fewer have undergone rigorous psychometric evaluation and been made available to researchers. The purpose of this study is to report our work in developing and validating a test of computational thinking concepts and skills and to compare different scoring methods for the test. This computational thinking exam is intended to be used in computing education research as a common measure of computational thinking so that the research community will be able to make more meaningful comparisons across samples and studies. The Computational Thinking Concepts and Skills Test (CTCAST) was administered to students in several courses, evaluated and revised, and then administered to another group of students. Part of the revision included changing half of the items to a multiple-select format. The test scores using the three scoring methods were compared to each other and to scores on a different test of core computer science knowledge. Results indicate the CTCAST and the test of core computer science knowledge measure similar, but not identical, aspects of students’ knowledge and skills, and that item-level statistics vary according to the scoring method that is used. Recommendations for using and scoring the test are presented.
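
    The scoring question matters because a multiple-select item can be scored in several defensible ways. The sketch below contrasts three common options from the measurement literature (all-or-nothing, per-option, and partial credit with a penalty); these are illustrative stand-ins, and the paper’s own three methods may differ.

        def score_all_or_nothing(selected, key):
            """1 point only if the selected options exactly match the key."""
            return 1.0 if selected == key else 0.0

        def score_per_option(selected, key, options):
            """Each option scored as a true/false judgement, then averaged."""
            matches = sum((opt in selected) == (opt in key) for opt in options)
            return matches / len(options)

        def score_partial_credit(selected, key):
            """Credit for correct selections minus a penalty for wrong ones."""
            hits, false_alarms = len(selected & key), len(selected - key)
            return max(0.0, (hits - false_alarms) / len(key))

        options, key = {"A", "B", "C", "D"}, {"A", "C"}
        selected = {"A", "B"}  # one correct selection, one incorrect

        print(score_all_or_nothing(selected, key))       # 0.0
        print(score_per_option(selected, key, options))  # 0.5
        print(score_partial_credit(selected, key))       # 0.0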

    Review of Measurements Used in Computing Education Research and Suggestions for Increasing Standardization

    The variables that researchers measure and how they measure them are central to any area of research. Which research questions can be asked and how they are answered depend on measurement. This paper describes a systematic review of the literature in computing education research to summarize the commonly used variables and measurements in 197 papers and to compare them to best practices in measurement for human-subjects research. Characteristics of the literature examined in the review include the variables measured (including learner characteristics), the measurements used, and the type of data analysis. The review illuminates common practices related to each of these characteristics and their interactions with the others. The paper lists standardized measurements that were used in the literature and highlights commonly used variables for which no standardized measures exist. To conclude, this review compares common practice in computing education to best practices in human-subjects research to make recommendations for increasing rigor.

    CS1: how will they do? How can we help? A decade of research and practice

    Background and Context: Computer science attrition rates (in the Western world) are very concerning, with a large number of students failing to progress each year. It is well acknowledged that a significant factor in this attrition is students’ difficulty in mastering the introductory programming module, often referred to as CS1. Objective: The objective of this article is to describe the evolution of a prediction model named PreSS (Predict Student Success) over a 13-year period (2005–2018). Method: This article ties together the PreSS prediction model; pilot studies; a longitudinal, multi-institutional re-validation and replication study; improvements to the model since its inception; and interventions to reduce attrition rates. Findings: The outcome of this body of work is an end-to-end, real-time, web-based tool (PreSS#) that can predict student success early in an introductory programming module (CS1) with an accuracy of 71%. This tool is enhanced with interventions, developed in conjunction with PreSS#, that improved student performance in CS1. Implications: This work contributes significantly to the computer science education (CSEd) community and responds to the ITiCSE 2015 working group’s call (in particular its second grand challenge) by re-validating and further developing the original PreSS model, 13 years after its inception, on a modern, disparate, multi-institutional data set.
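
    Accuracy and sensitivity answer different questions for a tool like PreSS#, and the distinction is worth making concrete. In the short sketch below the confusion-matrix counts are invented, chosen only so that the headline metrics match the figures reported for PreSS; they are not the study’s actual results.

        # Hypothetical counts for 200 students (not the study's data).
        tp, fn = 40, 10    # at-risk students flagged / missed
        tn, fp = 102, 48   # on-track students cleared / wrongly flagged

        accuracy = (tp + tn) / (tp + tn + fp + fn)  # overall agreement: 0.71
        sensitivity = tp / (tp + fn)                # share of at-risk students caught: 0.80

        print(f"accuracy={accuracy:.2f}, sensitivity={sensitivity:.2f}")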

    Introductory programming: a systematic literature review

    As computing becomes a mainstream discipline embedded in the school curriculum and acts as an enabler for an increasing range of academic disciplines in higher education, the literature on introductory programming is growing. Although there have been several reviews that focus on specific aspects of introductory programming, there has been no broad overview exploring recent trends across the breadth of the field. This paper is the report of an ITiCSE working group that conducted a systematic review in order to gain an overview of the introductory programming literature. Partitioning the literature into papers addressing the student, teaching, the curriculum, and assessment, we explore trends, highlight advances in knowledge over the past 15 years, and indicate possible directions for future research.

    Toward Predicting Success and Failure in CS2: A Mixed-Method Analysis

    Factors driving success and failure in CS1 are the subject of much study, but far less so for CS2. This paper investigates the transition from CS1 to CS2 in search of leading indicators of success in CS2. Both CS1 and CS2 at the University of North Carolina Wilmington (UNCW) are taught in Python, with annual enrollments of 300 and 150 respectively. In this paper, we report on the following research questions: 1) Are CS1 grades indicators of CS2 grades? 2) Does a quantitative relationship exist between CS2 course grade and a modified version of the SCS1 concept inventory? 3) What are the most challenging aspects of CS2, and how well does CS1 prepare students for CS2 from the students’ perspective? We provide a quantitative analysis of 2300 CS1 and CS2 course grades from 2013–2019. In Spring 2019, we administered a modified version of the SCS1 concept inventory to 44 students in the first week of CS2. Further, 69 students completed an exit questionnaire at the conclusion of CS2 to gain qualitative feedback on their challenges in CS2 and on how well CS1 prepared them for it. We find that 56% of students earned a lower grade in CS2 than in CS1, 18% improved their grade, and 26% earned the same grade. Of the changes, 62% were within one grade point. We find a statistically significant correlation between the modified SCS1 score and CS2 grade points. Students identify linked lists and class/object concepts among the most challenging. Student feedback on CS2 challenges and on the adequacy of their CS1 preparation identifies possible avenues for improving the CS1-CS2 transition. (The definitive Version of Record was published in the 2020 ACM Southeast Conference (ACMSE 2020), April 2–4, 2020, Tampa, FL, USA. 8 pages.)
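
    The correlation reported for research question 2 can be checked with a few lines of standard statistics code. The sketch below uses invented scores and Pearson’s r; the abstract does not state which correlation coefficient was used, so treat both the data and the choice of coefficient as assumptions.

        from scipy import stats

        scs1_scores = [12, 18, 9, 21, 15, 7, 19, 14]            # toy inventory scores
        cs2_grades = [2.0, 3.3, 1.7, 4.0, 3.0, 1.0, 3.7, 2.3]   # toy grade points

        r, p_value = stats.pearsonr(scs1_scores, cs2_grades)
        print(f"r={r:.2f}, p={p_value:.3f}")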