
    Automated feedback to support learning in the programming problem-solving process (Automatização de feedback para apoiar o aprendizado no processo de resolução de problemas de programação)

    In programming education, the development of students' programming skills through practical programming assignments is a fundamental activity. To succeed in those assignments, instructors need to provide guidance, especially to novice learners, about the programming process. We consider that this process, in the context of programming education, encompasses the steps needed to solve a computer-programming problem. We adopted the programming process adapted from Polya (1957) to computer-programming problem solving, which includes the following stages [Pól57]: (1) Understand the problem; (2) Plan the solution; (3) Implement the program; and (4) Look Back. Focusing on the fourth stage, we want students to become proficient in correcting their strategies and, through critical reflection, able to refactor their code with good programming quality in mind. During this doctoral research, we developed an approach to generate formative feedback that supports programming problem solving in the last stage of the programming process: the evaluation of the solution. The challenge was to provide timely and elaborated feedback on programming assignments that stimulates students to reason about the problem and their solution, aiming to improve their programming skills.
As a requirement for generating feedback, we committed not to impose the creation of new artifacts or instructional materials on instructors, but to take advantage of a resource that is usually already created when a new programming assignment is proposed: the reference solution. We implemented and evaluated our proposal in an introductory programming course in a longitudinal study. The results go beyond what we initially expected, namely improved code quality in the assignments. We observed that students felt stimulated and, in fact, improved their programming abilities, driven by the exercise of reasoning about their already functioning solution
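
The dissertation's own tool is not reproduced in this listing; purely as a hedged illustration of the idea of deriving feedback from the reference solution, the Python sketch below compares a student's working submission against the instructor's reference on a crude branching-count metric and phrases the difference as reflective feedback. The metric, function names, and example programs are hypothetical, not the authors' implementation.

```python
# Illustration only (not the dissertation's tool): compare a student's working
# solution against the instructor's reference solution on a crude branching
# metric and turn the difference into reflective, formative feedback.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def branch_count(source: str) -> int:
    """Count branching constructs as a rough stand-in for cyclomatic complexity."""
    return sum(isinstance(node, BRANCH_NODES) for node in ast.walk(ast.parse(source)))

def quality_feedback(student_src: str, reference_src: str) -> str:
    student, reference = branch_count(student_src), branch_count(reference_src)
    if student > reference:
        return (f"Your solution passes, but it uses {student} branching constructs "
                f"versus {reference} in the reference solution. Could any of the "
                "conditions be merged or simplified?")
    return "Your solution's structure is comparable to the reference. Well done."

if __name__ == "__main__":
    student_code = """
def sign(x):
    result = 0
    if x > 0:
        result = 1
    if x < 0:
        result = -1
    if x == 0:
        result = 0
    return result
"""
    reference_code = """
def sign(x):
    if x > 0:
        return 1
    if x < 0:
        return -1
    return 0
"""
    print(quality_feedback(student_code, reference_code))
```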

    Understanding and Addressing Misconceptions in Introductory Programming: A Data-Driven Approach

    With the expansion of computer science (CS) education, CS teachers in K-12 schools should be cognizant of student misconceptions and be prepared to help students establish an accurate understanding of computer science and programming. This exploratory design-based research (DBR) study implemented a data-driven approach to identify secondary school students' misconceptions using both their compilation and test errors and to provide targeted feedback that promotes students' conceptual change in introductory programming. The research subjects were two groups of high school students enrolled in two sections of a Java-based programming course in a 2017 summer residential program for gifted and talented students. This study consisted of two stages. In the first stage, students of group 1 took the introductory programming class and used an automated learning system, Mulberry, which collected data on student problem-solving attempts. Data analysis was conducted to identify common programming errors that students demonstrated in their programs and the relevant misconceptions. In the second stage, targeted feedback addressing these misconceptions was designed using principles from conceptual change and feedback theories and was added to Mulberry. When students of group 2 took the same introductory programming class and solved programming problems in Mulberry, they received the targeted feedback addressing their misconceptions. Data analysis was conducted to assess how the feedback affected the evolution of students' (mis)conceptions. Using students' erroneous solutions, 55 distinct compilation errors were identified, and 15 of them were categorized as common. The 15 common compilation errors accounted for 92% of all compilation errors. Based on these 15 common compilation errors, three underlying student misconceptions were identified: deficient knowledge of fundamental Java program structure, misunderstandings of Java expressions, and confusion about Java variables. In addition, 10 common test errors were identified based on nine difficult problems. The results showed that 54% of all test errors were related to the difficult problems, and the 10 common test errors accounted for 39% of all test errors on the difficult problems. Four common student misconceptions were identified based on the 10 common test errors: misunderstandings of Java input, misunderstandings of Java output, confusion about Java operators, and forgetting to consider special cases. Both quantitative and qualitative data analyses were conducted to see whether and how the targeted feedback affected students' solutions. Quantitative analysis indicated that targeted feedback messages enhanced students' rates of improving erroneous solutions. Group 2 students showed significantly higher improvement rates for all erroneous solutions and for solutions with common errors compared to group 1 students. Within group 2, solutions with targeted feedback messages resulted in significantly higher improvement rates compared to solutions without targeted feedback messages. Results suggest that with targeted feedback messages students were more likely to correct errors in their code. Qualitative analysis of students' solutions in four selected cases determined that students of group 2, when improving their code, made fewer intermediate incorrect solutions than students in group 1. The targeted feedback messages appear to have helped promote conceptual change.
The results of this study suggest that a data-driven approach to understanding and addressing student misconceptions, that is, one that uses student data from automated assessment systems, has the potential to improve students' learning of programming and may help teachers build a better understanding of their students' common misconceptions and develop their pedagogical content knowledge (PCK). The use of automated assessment systems with misconception-identification components may be helpful in pre-college introductory programming courses and is therefore encouraged as K-12 CS education expands. Researchers and developers of automated assessment systems should develop components that support identifying common student misconceptions using both compilation and non-compilation errors. Future research should continue to investigate the use of targeted feedback in automated assessment systems to address students' misconceptions and promote conceptual change in computer science education
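
As a hedged sketch of the data-driven step described above (not Mulberry's actual code), the following Python snippet tallies error categories across logged student attempts, surfaces the most frequent ones, and maps them to targeted feedback messages; the attempt log, category names, and messages are hypothetical.

```python
# Illustrative sketch of the data-driven step (not Mulberry's code): tally
# compiler/test error categories across logged student attempts, surface the
# most common ones, and map them to misconception-targeted feedback messages.
from collections import Counter

# Hypothetical attempt log: (student_id, error_category) pairs.
attempt_log = [
    ("s01", "missing_semicolon"),
    ("s02", "cannot_find_symbol"),
    ("s01", "cannot_find_symbol"),
    ("s03", "incompatible_types"),
    ("s02", "cannot_find_symbol"),
    ("s03", "missing_semicolon"),
]

# Hypothetical mapping from common errors to targeted feedback.
TARGETED_FEEDBACK = {
    "cannot_find_symbol": "Check that every variable is declared with a type "
                          "before it is used; Java variables must be declared.",
    "missing_semicolon": "Each Java statement ends with ';' - review the basic "
                         "structure of a Java program.",
    "incompatible_types": "Look at the types on both sides of the assignment; "
                          "Java expressions must match the declared type.",
}

def common_errors(log, top_n=3):
    """Return the top_n most frequent error categories with their counts."""
    return Counter(category for _, category in log).most_common(top_n)

if __name__ == "__main__":
    for category, count in common_errors(attempt_log):
        hint = TARGETED_FEEDBACK.get(category, "No targeted feedback yet.")
        print(f"{category} ({count} occurrences): {hint}")
```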

    Development of Lecturer Performance Evaluation Tools in the Implementation of Merdeka Belajar Kampus Merdeka

    This research aims to develop a model for assessing lecturer performance that includes tools, evaluation standards, standard-setting, computer programs, assessment standards, and instructions for using evaluation results. This is a development study. Nine experts approved the instrument design; Aiken's V formula was then used to determine content validity, exploratory factor analysis to determine construct validity, and Cronbach's alpha to determine reliability. The study's findings revealed that: 1) the entire instrument meets the criteria for validity; 2) analysis of the instrument for lecturers' instructional activities showed a reliability of 0.843 for the preparation, implementation, and evaluation components; and 3) the effectiveness of teaching, the effectiveness of research, the effectiveness of community service, and the lecturers' qualifications were all taken into account when evaluating lecturers' job performance
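
Aiken's V and Cronbach's alpha, named above, have standard formulas; the sketch below shows both on hypothetical ratings. It is an illustration of the indices, not the study's actual analysis.

```python
# Minimal sketch of the two indices named above, using hypothetical ratings.
import numpy as np

def aikens_v(ratings, lo=1, hi=5):
    """Aiken's V for one item: V = sum(r - lo) / (n * (hi - lo))."""
    ratings = np.asarray(ratings, dtype=float)
    return (ratings - lo).sum() / (len(ratings) * (hi - lo))

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

if __name__ == "__main__":
    # Nine hypothetical expert ratings of one instrument item on a 1-5 scale.
    print(round(aikens_v([5, 4, 5, 4, 5, 5, 4, 5, 4], lo=1, hi=5), 3))
    # Hypothetical responses of six raters to four evaluation items.
    responses = [[4, 5, 4, 4], [3, 4, 3, 4], [5, 5, 4, 5],
                 [2, 3, 2, 3], [4, 4, 4, 5], [3, 3, 3, 3]]
    print(round(cronbach_alpha(responses), 3))
```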

    ChatGPT and the EFL Classroom: Supplement or Substitute in Saudi Arabia’s Eastern Region

    This paper compares EFL learners' satisfaction with teacher-mediated versus ChatGPT-assisted writing opportunities. Results show that, except for ease of use, learners reported greater satisfaction with the teacher's role on all other factors, with the construct of interactive opportunities in the teacher-mediated period rated as the highest contributor to learning satisfaction, while the component of learning content was reported as only 'almost satisfied' in the teacher-mediated mode. The study was based on 64 EFL learners' perceptions of learning satisfaction in teacher-mediated versus bot-created writing opportunities, using factor analysis of data generated from responses to five open-ended questions at a language learning institution in the Eastern Region of Saudi Arabia. The data collection period was four weeks, and learners' responses were sought through an eight-item written interview which loaded onto four learning-satisfaction components: learning content, learning progress, ease of use, and interactive opportunities. We suggest that ChatGPT can supplement the learning process but cannot replace the teacher's role without proper training for cautious use, as ChatGPT does not help students' progress due to the lack of learning satisfaction attained in the use of this tool

    An Exploration of Student Reasoning about Undergraduate Computer Science Concepts: An Active Learning Technique to Address Misconceptions

    Computer science (CS) is a popular but often challenging major for undergraduates. As the importance of computing in the US and world economies continues to grow, the demand for successful CS majors grows accordingly. However, retention rates are low, particularly for under-represented groups such as women and racial minorities. Computing education researchers have begun to investigate causes and explore interventions to improve the success of CS students, from K-12 through higher education. In the undergraduate CS context, for example, student difficulties with pointers, functions, loops, and control flow have been observed. We and others have utilized student responses to multiple-choice questions aimed at determining misconceptions, engaged in retroactive examination of code samples and design artifacts, and conducted interviews in an attempt to understand the nature of these problems. Interventions to address these problems often apply evidence-based active learning techniques in CS classrooms as a way to engage students and improve learning. In this work, I employ a human-centered approach, one in which the focus of data collection is on students' thought processes as evidenced in their speech and writing. I seek to determine what students are thinking not only through what can be surmised in retrospect from the artifacts they create, but also to gain insight into their thoughts as they engage in the design, implementation, and analysis of those artifacts and as they reflect on those processes and artifacts shortly after. For my dissertation work, I have conducted four studies: (1) a conceptual assessment survey asking students to "Please explain your reasoning" after each answer to code tracing/execution questions, followed by task-based interviews with a smaller, different group of students; (2) a "coding in the wild" think-aloud study that recorded the screen and audio of students as they implemented a simple program and explained their thought process; (3) interview analyses of student design diagrams/documentation in a software engineering course, tasking students to explain their designs and comparing what they believed they had designed with what their submitted documentation actually shows. These first three studies were formative, leading to some key insights, including the benefits students can gain from feedback, students' tendencies to avoid complexity when programming or encountering concepts they do not fully grasp, the nature of student struggles with the planning stages of problem solving, and insight into the fragile understanding of some key CS concepts that students form. I leverage the benefits of feedback with guided prompts, using the misconceptions uncovered in my formative studies, to conduct a final, evaluative study. This study seeks to evaluate the benefits that can be gained from a guided feedback intervention for learning introductory programming concepts and to compare those benefits, along with the associated effort and resource costs, across two forms of feedback. The first is an active learning technique I developed and call misconception-based feedback (MBF), which has peers working in pairs use prompts based on misconceptions to guide their discussion of a recently completed coding assignment. The second is a human autograder (HAG) group acting as a control. HAG simulates typical autograders, supplying test cases and correct solutions, but utilizes a human stand-in for a computer.
In both conditions, one student uses provided prompts to guide the discussion, and the other student responds to the prompts and interacts with their code accordingly. I captured screen and audio recordings of these discussions. Participants completed conceptual pre-tests and post-tests that asked them to explain their reasoning. I hypothesized that the MBF intervention would offer a valuable way to increase learning, address misconceptions, and engage students more, would be feasible in CS courses of any size, and would have benefits over the HAG intervention. Results show that for questions involving parameter passing, with regard to pass-by-reference versus pass-by-value semantics, particularly with pointers, there were significant improvements in learning outcomes for the MBF group but not the HAG group
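
The abstract does not name the statistical test used for the pre/post comparison; as one hedged illustration of how such learning gains are commonly checked, the sketch below applies a paired t-test within each condition to hypothetical conceptual-test scores (not the study's data).

```python
# Illustration with hypothetical scores (not the study's data): a paired
# t-test on pre/post conceptual test scores within each feedback condition.
from scipy import stats

pre_post = {
    "MBF": ([4, 5, 3, 6, 5, 4, 5, 3], [7, 8, 6, 8, 7, 6, 8, 5]),
    "HAG": ([5, 4, 4, 6, 3, 5, 4, 5], [5, 5, 4, 7, 4, 5, 5, 5]),
}

for group, (pre, post) in pre_post.items():
    result = stats.ttest_rel(post, pre)
    gain = sum(post) / len(post) - sum(pre) / len(pre)
    print(f"{group}: mean gain = {gain:.2f}, "
          f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```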

    Senior Computer Science Students’ Task and Revised Task Interpretation While Engaged in Programming Endeavor

    Developing a computer program is not an easy task. Studies have reported that a large number of computer science students decide to change their major because of the extreme challenge of learning programming. Fortunately, studies have also reported that learning various self-regulation strategies may help students continue studying computer science. This study is interested in assessing students' self-regulation, specifically their task understanding and its revision during programming endeavors. Task understanding was selected because it affects the entire programming endeavor. In this qualitative case study, two female and two male senior computer science students were voluntarily recruited as research participants. They were asked to think aloud while solving five programming problems. Before solving each problem, they had to explain their understanding of the task, and afterwards they answered some questions related to their problem-solving process. The participants' problem-solving processes were video- and audio-recorded, transcribed, and analyzed. This study found that the participants were capable of tailoring their problem-solving approach to the task type, including when understanding the tasks. Given enough time, the participants could understand the problems correctly. When a task was complicated, the participants gradually updated their understanding during the problem-solving endeavor. Some situations may have prevented the participants from understanding a task correctly, including overconfidence, being overwhelmed, utilizing an inappropriate presentation technique, or drawing knowledge from irrelevant experience. Finally, the participants tended to be inexperienced in managing unfavorable outcomes

    The Impact of E-Readiness on ELearning Success in Saudi Arabian Higher Education Institutions

    This research investigates how e-readiness impacts the success of e-learning initiatives in Saudi Arabia's higher education institutions. The research model assesses this relationship taking into account the unique attributes of teachers, students, and administrators in higher education institutions. Seven dimensions constituting the component factors of e-readiness were identified: policy and institutional business strategy, pedagogy, technology, interface design, management, administrative and resource support, and evaluation and continual improvement. Six dimensions constituting the component factors of e-learning success were also identified: system quality, information quality, service quality, use, user satisfaction, and net benefits. The research hypothesizes, constructs, and tests structural equation models (SEM) of the current levels of e-readiness of Saudi Arabian higher education institutions to successfully implement e-learning initiatives. The research instrument was developed using a pool of items generated from the literature. The instruments used were verified and confirmed using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). The results of the EFA and CFA indicated that the measurement scale can serve as a reliable and valid tool to assess the relationship between e-readiness and e-learning success in Saudi Arabian higher education institutions. Structural equation modelling was used to test this relationship and to assess the applicability of the study's theoretical framework to different and multiple groups. The unique attributes of teachers, students, and administrators were considered to achieve meaningful comparisons across groups, and the results exhibit adequate cross-group equivalence, which was achieved at different levels. The findings confirmed the universality of the five dimensions of e-readiness having significant effects on the six dimensions of e-learning success. Additionally, the findings indicated stability of the relationships among the variables within the structural equation model, which are not influenced by differences among teachers, students, and administrators, either conceptually or psychometrically. The current work contributes to our knowledge of e-learning through the lens of theoretical insights and empirical findings. The implications of the research in the context of Saudi Arabia are discussed, and it is intended that the findings from this research can be used to inform strategic decision-making towards harnessing the power of e-learning in the country's higher institutions of learning
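
The EFA step mentioned above can be sketched with a standard factor-analysis routine; the snippet below uses scikit-learn's FactorAnalysis on randomly generated placeholder responses with six factors, mirroring the six e-learning success dimensions. It is an API illustration rather than the study's analysis.

```python
# API sketch with randomly generated placeholder data (not the study's survey):
# extract six latent factors from Likert-style item responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 18)).astype(float)  # 200 respondents x 18 items

fa = FactorAnalysis(n_components=6, random_state=0)  # six hypothesized dimensions
fa.fit(responses)

loadings = fa.components_.T  # item-by-factor loading matrix (18 x 6)
print(loadings.shape)
print(np.round(loadings[:3], 2))  # loadings of the first three items
```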

    An Evaluation of the Factors Used to Predict Writing Ability at the Air Force Institute of Technology

    A study of 574 students at the Air Force Institute of Technology compared performance, education, and experience factors (the latter two as stated by the students themselves) to a locally developed estimate of true writing ability (WGPA). This exploratory research was additionally intended to assess the effectiveness of AFIT's current student writing-skill diagnostic and instructional system. Direct (essay evaluation) and indirect (objective test) evaluations of AFIT students' writing ability were analyzed for their predictive impact. The statistical analysis procedures used in this study included factor analysis of a survey, ANOVA, the adjustment of multiple correlations for measurement error and range attenuation, and regression analysis using both the raw data and the adjusted correlation matrix. The results of this study indicate that AFIT's direct evaluation portion (essay examination) is useful for determining writing ability; the indirect portion (objective test) did not significantly contribute to the model. Due to the combination of independent variables chosen for the predictive model, the study was unable to identify the immediate benefits of the written communications review course on AFIT performance
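
The "adjustment of multiple correlations for measurement error and range attenuation" refers to standard psychometric corrections; the sketch below applies the classical disattenuation formula and Thorndike's Case II range-restriction correction to hypothetical values (not the study's figures).

```python
# Sketch of the two classical corrections mentioned above (hypothetical values):
# disattenuation for measurement error and Thorndike's Case II correction for
# range restriction, applied to an observed predictor-criterion correlation.
import math

def correct_for_attenuation(r_xy, rel_x, rel_y):
    """r_true = r_observed / sqrt(reliability_x * reliability_y)."""
    return r_xy / math.sqrt(rel_x * rel_y)

def correct_for_range_restriction(r_restricted, u):
    """u = SD_unrestricted / SD_restricted of the predictor (Thorndike Case II)."""
    r = r_restricted
    return (r * u) / math.sqrt(1 + r * r * (u * u - 1))

if __name__ == "__main__":
    r_obs = 0.35          # hypothetical observed test-WGPA correlation
    r_true = correct_for_attenuation(r_obs, rel_x=0.80, rel_y=0.70)
    r_full = correct_for_range_restriction(r_true, u=1.25)
    print(f"observed={r_obs:.2f}, disattenuated={r_true:.2f}, "
          f"range-corrected={r_full:.2f}")
```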

    Understanding Learner-Centeredness and Student Engagement in Undergraduate Biology Education

    The overarching goal of my dissertation research is to better understand how undergraduate students engage in biology. Considering the notable lack of interest in the sciences among undergraduates in recent years, actively engaging more students in biology throughout college could potentially increase their motivation to learn biology and retain more students in science fields. Using both quantitative and qualitative approaches, I sought to discover the dimensionality of learner-centeredness in the biology classroom using a variety of instruments. Outside of the classroom, I aimed to describe college-age adults’ learning experiences at informal learning settings such as zoos via development and administration of a novel survey, as well as to discover whether participation in structured or free-choice learning experiences at a zoo related to undergraduates’ motivation and interest to learn biology. I generally concluded that learner-centeredness in the college biology classroom is multidimensional, and often, that perceptions of those in the classroom environment as well as the metrics used to quantify learner-centeredness are misaligned. I found that informal learning experiences of biology undergraduates vary widely. Further, we discovered that all students report increases in motivation and interest to learn biology regardless of structure of learning group or academic level—though we cannot say with certainty that a zoo trip was the cause of these changes. I suggest that both reforming classrooms to be more learner-centered environments and including more learning experiences at informal settings have the potential to more fully engage undergraduate students in biology and improve retention rates of biology majors over time

    EDM 2011: 4th International Conference on Educational Data Mining, Eindhoven, July 6-8, 2011: Proceedings
