    Alternative model for the administration and analysis of research-based assessments

    Research-based assessments represent a valuable tool for both instructors and researchers interested in improving undergraduate physics education. However, the historical model for disseminating and propagating conceptual and attitudinal assessments developed by the physics education research (PER) community has not resulted in widespread adoption of these assessments within the broader community of physics instructors. Within this historical model, assessment developers create high-quality, validated assessments, make them available for a wide range of instructors to use, and provide minimal (if any) support to assist with administration or analysis of the results. Here, we present and discuss an alternative model for assessment dissemination, which is characterized by centralized data collection and analysis. This model provides a greater degree of support for both researchers and instructors in order to more explicitly support adoption of research-based assessments. Specifically, we describe our experiences developing a centralized, automated system for an attitudinal assessment we previously created to examine students' epistemologies and expectations about experimental physics. This system provides a proof of concept that we use to discuss the advantages associated with centralized administration and data collection for research-based assessments in PER. We also discuss the challenges that we encountered while developing, maintaining, and automating this system. Ultimately, we argue that centralized administration and data collection for standardized assessments is a viable and potentially advantageous alternative to the default model characterized by decentralized administration and analysis. Moreover, with the help of online administration and automation, this model can support the long-term sustainability of centralized assessment systems.

    Comment: 7 pages, 1 figure, accepted in Phys. Rev. PER
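    A loose sketch of the centralized-collection idea described above, not the authors' actual system: a single web endpoint that any participating course could post assessment responses to, storing them in one shared datastore for automated analysis. Flask, the route, and the field names are all assumptions for illustration.

        # Hypothetical centralized collection endpoint (Flask and the schema
        # are illustrative assumptions, not the system described in the paper).
        from flask import Flask, request, jsonify
        import sqlite3

        app = Flask(__name__)

        @app.post("/responses")
        def collect_response():
            data = request.get_json()
            # One shared store for all courses enables centralized analysis.
            with sqlite3.connect("responses.db") as db:
                db.execute("CREATE TABLE IF NOT EXISTS responses "
                           "(course TEXT, student TEXT, item TEXT, answer TEXT)")
                db.execute("INSERT INTO responses VALUES (?, ?, ?, ?)",
                           (data["course"], data["student"],
                            data["item"], data["answer"]))
            return jsonify(status="stored"), 201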

    Introductory programming: a systematic literature review

    As computing becomes a mainstream discipline embedded in the school curriculum and acts as an enabler for an increasing range of academic disciplines in higher education, the literature on introductory programming is growing. Although there have been several reviews that focus on specific aspects of introductory programming, there has been no broad overview of the literature exploring recent trends across the breadth of introductory programming. This paper is the report of an ITiCSE working group that conducted a systematic review in order to gain an overview of the introductory programming literature. Partitioning the literature into papers addressing the student, teaching, the curriculum, and assessment, we explore trends, highlight advances in knowledge over the past 15 years, and indicate possible directions for future research.

    Beyond Surveys: Analyzing Software Development Artifacts to Assess Teaching Efforts

    This Innovative Practice Full Paper presents an approach that uses software development artifacts to gauge student behavior and the effectiveness of changes to curriculum design. There is an ongoing need to adapt university courses to changing requirements and shifts in industry. For educators, it is therefore vital to have access to methods with which to ascertain the effects of curriculum design changes. In this paper, we present our approach of analyzing software repositories in order to gauge student behavior during project work. We evaluate this approach in a case study of a university undergraduate software development course teaching agile development methodologies. Surveys revealed positive attitudes towards the course and the change of the employed development methodology from Scrum to Kanban. However, the surveys could not ascertain the degree to which students had adapted their workflows and whether they had done so in accordance with course goals. Therefore, we analyzed students' software repository data, which educators can collect to reveal insights into learning successes and detailed student behavior. We analyze the software repositories created during the last five runs of the course and evaluate differences in workflows between Kanban and Scrum usage.
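    As a concrete illustration of this kind of repository analysis (a sketch under assumptions, not the authors' tooling), the following Python snippet summarizes commit activity per calendar week in a student repository. Steady weekly activity would be consistent with a Kanban-style flow, while bursts around deadlines would suggest sprint-like behavior; the repository path is hypothetical and git must be installed.

        # Count commits per ISO calendar week from a local git clone.
        import subprocess
        from collections import Counter
        from datetime import datetime

        def commits_per_week(repo_path):
            """Return a Counter mapping (year, week) to the number of commits."""
            dates = subprocess.run(
                ["git", "-C", repo_path, "log", "--pretty=format:%aI"],
                capture_output=True, text=True, check=True,
            ).stdout.splitlines()
            weeks = Counter()
            for iso_date in dates:
                year, week, _ = datetime.fromisoformat(iso_date).isocalendar()
                weeks[(year, week)] += 1
            return weeks

        # Hypothetical usage:
        for (year, week), n in sorted(commits_per_week("./student-project").items()):
            print(f"{year}-W{week:02d}: {n} commits")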

    Lecturers’ and Students’ Experiences with an Automated Programming Assessment System

    Assessment of source code in university education has become an integral part of grading students and providing them with valuable feedback on their developed software solutions. At the same time, lecturers have to deal with a rapidly growing number of students from heterogeneous fields of study, a shortage of lecturers, a highly dynamic set of learning objectives and technologies, and the need for more targeted student support. To meet these challenges, the use of an automated programming assessment system (APAS) to support traditional teaching is a promising solution. This paper examines this trend by analyzing the experiences of lecturers and students at various universities with an APAS and its impact over the course of a semester. To this end, we conducted a total of 30 expert interviews with end users, 15 lecturers and 15 students, from four different universities within the same country. The results capture the experiences of lecturers and students and highlight challenges that should be addressed in future research.
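    To make the core of an APAS concrete, here is a minimal sketch of the typical grading step (an illustrative assumption about how such systems generally work, not the specific system studied here): run an instructor-provided unittest suite against a student submission and turn the results into a score and textual feedback.

        # Run an instructor-provided unittest suite and derive a score.
        import unittest

        def grade(test_module):
            """Return (score in [0, 1], list of failure messages)."""
            suite = unittest.defaultTestLoader.loadTestsFromModule(test_module)
            result = unittest.TestResult()
            suite.run(result)
            failed = len(result.failures) + len(result.errors)
            feedback = [message for _, message in result.failures + result.errors]
            score = (result.testsRun - failed) / result.testsRun if result.testsRun else 0.0
            return score, feedback

        # Hypothetical usage, with the instructor's tests bound to a submission:
        # score, feedback = grade(tests_for_submission)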

    TLAD 2010 Proceedings: 8th international workshop on teaching, learning and assessment of databases (TLAD)

    This is the eighth in the series of highly successful international workshops on the Teaching, Learning and Assessment of Databases (TLAD 2010), which once again is held as a workshop of BNCOD 2010, the 27th International Information Systems Conference. TLAD 2010 is held on 28th June at the beautiful Dudhope Castle at Abertay University, just before BNCOD, and hopes to be just as successful as its predecessors.

    The teaching of databases is central to all Computing Science, Software Engineering, Information Systems and Information Technology courses, and this year the workshop aims to continue the tradition of bringing together both database teachers and researchers, in order to share good learning, teaching and assessment practice and experience, and to further the growing community amongst database academics. As well as attracting academics from the UK community, the workshop has also been successful in attracting academics from the wider international community, who serve on the programme committee and attend and present papers.

    This year, the workshop includes an invited talk given by Richard Cooper (of the University of Glasgow), who will present a discussion and some results from the Database Disciplinary Commons held in the UK over the academic year. Due to the healthy number of high-quality submissions this year, the workshop will also present seven peer-reviewed papers and six refereed poster papers. Of the seven presented papers, three will be presented as full papers and four as short papers. These papers and posters cover a number of themes, including: approaches to teaching databases, e.g. group-centred and problem-based learning; use of novel case studies, e.g. forensics and XML data; techniques and approaches for improving teaching and student learning processes; assessment techniques, e.g. peer review; methods for improving students' abilities to develop database queries and E-R diagrams; and e-learning platforms for supporting teaching and learning.

    Mutation testing and self/peer assessment: analyzing their effect on students in a software testing course

    Testing is a crucial activity in the development of software systems. With the increasing complexity of software projects, the industry requires graduates with adequate testing skills and preparation in this field. A challenge in software testing education is to make students perceive the benefits of writing tests and assess their quality with advanced testing techniques. In this paper, we present an experience integrating both mutation testing and self/peer assessment, two of the most widely used techniques for this purpose, into a software testing course over three years. This experience allowed us to analyze the effect of applying these strategies on the students' perception of their manually written test suites. Notably, the computation of the mutation score significantly undermined the students' initial expectations of the test suites they had developed. Also, the application of peer testing helped them estimate the relative quality of two comparable test suites, as we found a notable correspondence with their respective mutation coverage. In addition, a more in-depth analysis revealed that the test suites with more test cases did not always achieve the highest scores, that students found their own tests more readable, and that they tended to cover the basic operations while forgetting about more advanced features. An opinion survey confirmed the impact that the use of mutants had on their perception of testing, and the students mostly supported paying greater attention to testing concepts in software engineering degree plans.

    The work was partially funded by the European Commission (FEDER), the Spanish Ministry of Science, Innovation and Universities under the project FAME (RTI2018-093608-B-C33), the European project ASSETs (612678-EPP-1-2019-1-IT-EPPKA2-SSA-B), and the University of Cádiz.
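    For readers unfamiliar with the metric, the mutation score mentioned above is the fraction of seeded faults (mutants) that a test suite detects, i.e. "kills". A minimal sketch, with a hypothetical run_suite callback standing in for whatever mutation testing tool actually executes the tests:

        def mutation_score(mutants, run_suite):
            """mutants: iterable of mutated program variants.
            run_suite: callable returning True if the tests fail on the
            mutant (the mutant is killed). Returns the kill ratio."""
            mutants = list(mutants)
            killed = sum(1 for mutant in mutants if run_suite(mutant))
            return killed / len(mutants) if mutants else 0.0

        # Example: a suite that kills 42 of 60 generated mutants scores 0.7.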

    An Analysis of Programming Course Evaluations Before and After the Introduction of an Autograder

    Introductory programming courses in higher education institutions commonly have hundreds of participating students eager to learn to program. At this scale, the manual effort of reviewing the submitted source code and providing feedback can no longer be managed. Manually reviewing the submitted homework can also be subjective and unfair, particularly if many tutors are responsible for grading. Different autograders can help in this situation; however, there is a lack of knowledge about how autograders can impact students' overall perception of programming classes and teaching. This is relevant for course organizers and institutions seeking to keep their programming courses attractive while coping with increasing student numbers. This paper studies the answers to the standardized university evaluation questionnaires of multiple large-scale foundational computer science courses that recently introduced autograding, and analyzes the differences before and after this intervention. By incorporating additional observations, we hypothesize how the autograder might have contributed to the significant changes in the data, such as improved interactions between tutors and students, improved overall course quality, improved learning success, increased time spent, and reduced difficulty. This qualitative study aims to provide hypotheses for future research to define and conduct quantitative surveys and data analysis, so that autograding can be validated as a teaching method that improves student satisfaction with programming courses.

    Comment: Accepted full paper article on IEEE ITHET 202
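    As an illustration of the quantitative follow-up this study calls for (the ratings below are invented, and the choice of a Mann-Whitney U test is an assumption, not the paper's method), one could compare ordinal course ratings collected before and after the autograder was introduced:

        # Compare hypothetical 5-point course ratings before/after autograding.
        from scipy.stats import mannwhitneyu

        before = [3, 4, 2, 3, 3, 4, 2, 3]  # invented pre-intervention ratings
        after = [4, 5, 4, 3, 5, 4, 4, 5]   # invented post-intervention ratings

        stat, p = mannwhitneyu(before, after, alternative="two-sided")
        print(f"U = {stat}, p = {p:.3f}")  # small p would indicate a real shift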