Forty hours of declarative programming: Teaching Prolog at the Junior College Utrecht
This paper documents our experience using declarative languages to give
secondary school students a first taste of Computer Science. The course aims to
teach students a bit about programming in Prolog, but also exposes them to
important Computer Science concepts, such as unification and search
strategies. Using Haskell's Snap Framework in combination with our own
NanoProlog library, we have developed a web application to teach this course.
Comment: In Proceedings TFPIE 2012, arXiv:1301.465
A Novice's Process of Object-Oriented Programming
Exposing students to the process of programming is merely implied but not explicitly addressed in texts on programming, which appear to deal with 'program' as a noun rather than as a verb. We present a set of principles and techniques as well as an informal but systematic process of decomposing a programming problem. Two examples are used to demonstrate the application of process and techniques. The process is a carefully down-scaled version of a full and rich software engineering process particularly suited for novices learning object-oriented programming. In using it, we hope to achieve two things: to help novice programmers learn faster and better while at the same time laying the foundation for a more thorough treatment of the aspects of software engineering.
Automata Tutor v3
Computer science class enrollments have rapidly risen in the past decade.
With current class sizes, standard approaches to grading and providing
personalized feedback are no longer possible and new techniques become both
feasible and necessary. In this paper, we present the third version of Automata
Tutor, a tool for helping teachers and students in large courses on automata
and formal languages. The second version of Automata Tutor supported automatic
grading and feedback for finite-automata constructions and has already been
used by thousands of users in dozens of countries. This new version of Automata
Tutor supports automated grading and feedback generation for a greatly extended
variety of new problems, including problems that ask students to create regular
expressions, context-free grammars, pushdown automata and Turing machines
corresponding to a given description, and problems about converting between
equivalent models, e.g., from regular expressions to nondeterministic finite
automata. Moreover, for several problems, this new version also enables
teachers and students to automatically generate new problem instances. We also
present the results of a survey run on a class of 950 students, which shows
very positive results about the usability and usefulness of the tool.
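Automated grading of automata constructions of the kind described typically reduces to a language-equivalence check between the student's automaton and a reference solution. A minimal sketch for DFAs (the data representation is an assumption for illustration, not Automata Tutor's internals) searches the product automaton for a reachable state pair where exactly one side accepts:

```python
from collections import deque

# Check whether two DFAs accept the same language via BFS over the product
# automaton, looking for a reachable pair where exactly one state accepts.
# A DFA here is (start_state, accepting_set, transition_dict) with
# transition_dict[(state, symbol)] -> state. Hypothetical format, for illustration.

def equivalent(dfa_a, dfa_b, alphabet):
    start_a, acc_a, delta_a = dfa_a
    start_b, acc_b, delta_b = dfa_b
    seen = {(start_a, start_b)}
    queue = deque(seen)
    while queue:
        qa, qb = queue.popleft()
        if (qa in acc_a) != (qb in acc_b):
            return False  # a distinguishing string reaches this pair
        for sym in alphabet:
            nxt = (delta_a[(qa, sym)], delta_b[(qb, sym)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# Two DFAs over {0,1} accepting strings with an even number of 1s,
# differing only in state names.
even_ones = (0, {0}, {(0, "0"): 0, (0, "1"): 1, (1, "0"): 1, (1, "1"): 0})
renamed   = ("e", {"e"}, {("e", "0"): "e", ("e", "1"): "o",
                          ("o", "0"): "o", ("o", "1"): "e"})
print(equivalent(even_ones, renamed, ["0", "1"]))  # True
```

A real grader would additionally extract the distinguishing string found by the BFS to use as a counterexample in the feedback shown to the student.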
Effects of Automated Interventions in Programming Assignments: Evidence from a Field Experiment
A typical problem in MOOCs is that instructors have little opportunity to
individually support students in overcoming their problems and
misconceptions. This paper presents the results of automatically intervening
when students struggle during programming exercises, offering peer feedback and
tailored bonus exercises. To improve learning success, we do not want to
abolish instructionally desired trial and error but reduce extensive struggle
and demotivation. Therefore, we developed adaptive automatic just-in-time
interventions to encourage students to ask for help if they require
considerably more than average working time to solve an exercise. Additionally,
we offered students bonus exercises tailored for their individual weaknesses.
The approach was evaluated within a live course with over 5,000 active students
via a survey and metrics gathered alongside it. Results show that we can
increase calls for help by up to 66% and reduce the time students dwell on a
problem before acting. Lessons from the experiments can further be used to
pinpoint course material to be improved and to tailor content to the audience.
Comment: 10 pages
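The just-in-time trigger described above can be sketched as a simple threshold rule: nudge a student once their working time on an exercise considerably exceeds the cohort average. The function name and the 1.5x factor are illustrative assumptions, not the paper's actual parameters:

```python
from statistics import mean

# Sketch of a just-in-time intervention trigger: flag students whose working
# time on an exercise considerably exceeds the cohort average. The 1.5x
# threshold factor is an illustrative assumption, not the value from the paper.

def students_to_nudge(working_times, threshold_factor=1.5):
    # working_times maps student id -> minutes spent on the current exercise.
    avg = mean(working_times.values())
    return sorted(student for student, t in working_times.items()
                  if t > threshold_factor * avg)

times = {"ana": 12.0, "ben": 15.0, "cho": 45.0, "dee": 14.0}
print(students_to_nudge(times))  # ['cho']
```

In a live system such a rule would run per exercise and trigger the intervention dialog (offer of peer feedback or a tailored bonus exercise) rather than merely returning a list.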
How do particle physicists learn the programming concepts they need?
The ability to read, use and develop code efficiently and successfully is a
key ingredient in modern particle physics. We report the experience of a
training program, identified as "Advanced Programming Concepts", that
introduces software concepts, methods and techniques to work effectively on a
daily basis in a HEP experiment or other programming-intensive fields. This
paper illustrates the principles, motivations and methods that shape the
"Advanced Computing Concepts" training program, the knowledge base that it
conveys, an analysis of the feedback received so far, and the integration of
these concepts in the software development process of the experiments as well
as its applicability to a wider audience.
Comment: 8 pages, 2 figures, CHEP2015 proceedings
Automated Feedback for 'Fill in the Gap' Programming Exercises
Timely feedback is a vital component in the learning process. It is especially important for beginner students in Information Technology, since many have not yet formed an effective internal model of a computer that they can use to construct viable knowledge. Research has shown that learning efficiency increases if immediate feedback is provided to students. Automatic analysis of student programs has the potential to provide immediate feedback for students and to assist teaching staff in the marking process. This paper describes a “fill in the gap” programming analysis framework which tests students’ solutions, gives feedback on their correctness, detects logic errors, and provides hints on how to fix these errors. Currently, the framework is being used with the Environment for Learning to Program (ELP) system at Queensland University of Technology (QUT); however, the framework can be integrated into any existing online learning environment or programming Integrated Development Environment (IDE).
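The kind of checking described can be sketched as follows: the student's gap answer is spliced into a template, run against test cases, and failures are mapped to hints. The template, tests, and hint wording here are illustrative assumptions, not the ELP framework's actual API:

```python
# Sketch of a fill-in-the-gap checker: the student's expression is spliced
# into a template, run against test cases, and failures produce hints.
# Template, tests, and hint texts are illustrative, not the ELP system's.

TEMPLATE = "def absolute(x):\n    return {gap}\n"
TESTS = [(3, 3), (-4, 4), (0, 0)]

def check_gap(student_code):
    namespace = {}
    try:
        exec(TEMPLATE.format(gap=student_code), namespace)
    except SyntaxError:
        return ["Hint: your answer is not a valid expression."]
    feedback = []
    for arg, expected in TESTS:
        try:
            got = namespace["absolute"](arg)
        except Exception as e:
            feedback.append(f"absolute({arg}) raised {type(e).__name__}")
            continue
        if got != expected:
            feedback.append(
                f"absolute({arg}) returned {got}, expected {expected}"
                + (" -- check the negative case" if arg < 0 else ""))
    return feedback or ["All tests passed."]

print(check_gap("x if x >= 0 else -x"))  # ['All tests passed.']
print(check_gap("x"))                    # fails on the negative input
```

Mapping the failing test case back to a targeted hint (here, the negative-input case) is what turns plain pass/fail grading into the logic-error feedback the paper describes.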
Functional Baby Talk: Analysis of Code Fragments from Novice Haskell Programmers
What kinds of mistakes are made by novice Haskell developers as they learn about functional programming? Is it possible to analyze these errors in order to improve the pedagogy of Haskell? In 2016, we delivered a massive open online course which featured an interactive code evaluation environment. We captured and analyzed 161K interactions from learners. We report typical novice developer behavior; for instance, the mean time spent on an interactive tutorial is around eight minutes. Although our environment was restricted, we gain some understanding of Haskell novice errors. Parenthesis mismatches, lexical scoping errors, and do-block misunderstandings are common. Finally, we make recommendations about how such beginner code evaluation environments might be enhanced.
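A bracket-mismatch check of the kind such a beginner environment might run before full parsing can be sketched as follows (illustrative only; not the course's actual tooling, and it ignores string literals and comments):

```python
# Sketch of a bracket-mismatch check of the kind a beginner code environment
# might run before full parsing. Illustrative; not the course's actual tooling.
# String literals and comments are not handled in this toy version.

PAIRS = {")": "(", "]": "[", "}": "{"}

def first_mismatch(source):
    # Return (position, message) for the first bracket error, or None.
    stack = []  # entries are (opening_char, position)
    for pos, ch in enumerate(source):
        if ch in "([{":
            stack.append((ch, pos))
        elif ch in PAIRS:
            if not stack or stack[-1][0] != PAIRS[ch]:
                return (pos, f"unexpected '{ch}'")
            stack.pop()
    if stack:
        ch, pos = stack[-1]
        return (pos, f"'{ch}' is never closed")
    return None

print(first_mismatch("map (+1) [1, 2, 3]"))    # None
print(first_mismatch("foldr (+) 0 [1, 2, 3"))  # reports the unclosed '['
```

Reporting the position of the offending bracket, rather than a generic parse error at end of input, is the sort of targeted message the paper's recommendations point toward.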