
    A Sustainable Learning Environment based on an Open-Source Content Management System

    This paper presents our approach for supporting face-to-face courses with software components for e-learning based on a general-purpose content management system (CMS). These components, collectively named eduComponents, can be combined with other modules to create tailor-made, sustainable learning environments that help to make teaching and learning more efficient and effective. We give a short overview of these components and report on our practical experiences with the software in our courses.

    An environment for educational service communities

    In most global economies, there is a strong trend from agriculture and manufacturing towards service orientation and tertiarisation: services, products with value-added service solutions and, more recently, automated Internet service offerings seamlessly delivered through on-demand elastic cloud computing resources. In the affected societies, education is recognised as a key factor for maintaining competitiveness. Specialised education about services is widely available, but tool support for hands-on learning and testing of how services can be produced, offered, delivered and improved is missing. We aim to fill this gap between theory and application by proposing an integrated environment for educational service communities such as service engineering classes. Initial results of our work show that the environment, which supports both auto-didactic learning and team-based competitive and collaborative learning-by-doing throughout the service lifecycle, motivates students and increases their practical knowledge about services. We discuss our experience with the actual use of the environment in the context of a university course about web services and conclude with suggestions for future work.
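
    To make the notion of a hands-on service concrete, the following is a minimal sketch (not part of the proposed environment) of the kind of web service students in such a course might produce and deliver, using only Python's standard library; the EchoService name and port are illustrative assumptions.

```python
# Minimal illustrative web service (not the environment described above):
# returns a small JSON document for any GET request.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class EchoService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"service": "echo", "path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve on localhost:8080; in a course setting, students could then offer,
    # monitor and improve such a service throughout its lifecycle.
    HTTPServer(("localhost", 8080), EchoService).serve_forever()
```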

    Peer group testing in software engineering projects

    Testing is important if you want to produce a quality product, but generally speaking student programmers have little enthusiasm for testing. Students perform a certain level of testing on any assignment before submission, but this is mainly superficial. There is no denying that testing is a crucial part of the software engineering process, which is why testing experience is a skill valued by employers. Over the last six years, students in the Software Engineering Project course at the University of Tasmania have undertaken projects in teams of four or five members. Each team collaborates with a different member of the IT industry to produce a unique piece of software. Since 2000, the students have conducted peer group testing sessions. The critique that the testers provide helps the development team to identify problems before assessment, thereby increasing the quality of the work submitted. The testing sessions also provide many different but valuable benefits, such as serving as milestones, increasing learning, and increasing collaboration between teams.
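
    The paper does not prescribe a tool or test format, but the flavour of a peer testing session can be sketched as follows with pytest-style tests written by testers from another team against a hypothetical ShoppingCart module; the cart module and its API are assumptions for the example.

```python
# test_peer_session.py - illustrative tests a peer team might write against
# another team's (hypothetical) shopping-cart module; run with `pytest`.
import pytest
from cart import ShoppingCart  # hypothetical module under test

def test_new_cart_is_empty():
    assert ShoppingCart().total() == 0

def test_adding_items_updates_total():
    cart = ShoppingCart()
    cart.add("book", price=10.0, quantity=2)
    assert cart.total() == pytest.approx(20.0)

def test_negative_quantity_is_rejected():
    # Peer testers tend to probe edge cases the developers skipped.
    with pytest.raises(ValueError):
        ShoppingCart().add("book", price=10.0, quantity=-1)
```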

    A FOCUS ON CONTENT: THE USE OF RUBRICS IN PEER REVIEW TO GUIDE STUDENTS AND INSTRUCTORS

    Students who are solving open-ended problems would benefit from formative assessment, i.e., from receiving helpful feedback and from having an instructor who is informed about their level of performance. Open-ended problems challenge existing assessment techniques. For example, such problems may have reasonable alternative solutions or conflicting objectives. Analyses of open-ended problems are often presented as free-form text, since they require arguments and justifications for one solution over others, and students may differ in how they frame the problems according to their knowledge, beliefs and attitudes. This dissertation investigates how peer review may be used for formative assessment. Computer-Supported Peer Review in Education, a technology whose use is growing, has been shown to provide accurate summative assessment of student work, and peer feedback can indeed be helpful to students. A peer review process depends on the rubric that students use to assess and give feedback to each other. However, it is unclear how a rubric should be structured to produce feedback that is helpful to the student and at the same time to yield information that can be summarized for the instructor. The dissertation reports a study in which students wrote individual analyses of an open-ended legal problem and then exchanged feedback using Comrade, a web application for peer review. The study compared two conditions: some students used a rubric that was relevant to legal argument in general (the domain-relevant rubric), while others used a rubric that addressed the conceptual issues embedded in the open-ended problem (the problem-specific rubric). While both rubric types yield peer ratings of student work that approximate the instructor's scores, feedback elicited by the domain-relevant rubric was redundant across its dimensions. In contrast, peer ratings elicited by the problem-specific rubric distinguished among its dimensions. Hierarchical Bayesian models showed that ratings from both rubrics can be fit by pooling information across students, but only problem-specific ratings are fit better when information about distinct rubric dimensions is included.
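
    The dissertation's hierarchical Bayesian analysis is not reproduced here, but its two headline findings (peer ratings approximate instructor scores; domain-relevant dimensions are redundant) can be illustrated with a small sketch on synthetic data; the array shapes and the 1-5 rating scale are assumptions.

```python
# Sketch with synthetic data (not the study's dataset or model): check how
# well mean peer ratings track instructor scores, and whether rubric
# dimensions carry distinct information or are redundant.
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(30, 4, 3)).astype(float)  # (students, peers, dimensions)
instructor = rng.integers(1, 6, size=30).astype(float)

# Agreement: mean peer rating per student vs. instructor score.
peer_means = ratings.mean(axis=(1, 2))
agreement = np.corrcoef(peer_means, instructor)[0, 1]

# Redundancy: high correlations between dimension means suggest the rubric
# does not distinguish among its dimensions.
dim_means = ratings.mean(axis=1)                 # (students, dimensions)
dim_corr = np.corrcoef(dim_means, rowvar=False)

print(f"peer/instructor correlation: {agreement:.2f}")
print("inter-dimension correlations:\n", np.round(dim_corr, 2))
```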

    Marmoset: A Programming Project Assignment Framework to Improve the Feedback Cycle for Students, Faculty and Researchers

    We developed Marmoset, a system that improves the feedback cycle on programming assignments for students, faculty and researchers alike. Using automation, Marmoset substantially lowers the burden on faculty for grading programming assignments, allowing faculty to give students more rapid feedback on their assignments. To further improve the feedback cycle, Marmoset provides students with limited access to the results of the instructor's private test cases before the submission deadline using a novel token-based incentive system. This both encourages students to start their work early and to think critically about their work. Because students submit early, instructors can monitor all students' progress on test cases and identify where in projects students are having problems, in order to update the project requirements in a timely fashion and make the best use of time in lectures, discussion sections, and office hours.
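
    The abstract does not spell out how the token-based incentive system works; the following is a minimal hypothetical sketch of such a policy, in which a small token budget gates access to private test results before the deadline.

```python
# Hypothetical token-based release policy (Marmoset's actual rules may differ):
# before the deadline, viewing the instructor's private test results costs a
# scarce token; after the deadline, access is unrestricted.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ReleasePolicy:
    tokens: int = 3  # assumed starting budget per assignment

    def request_private_results(self, now: datetime, deadline: datetime) -> bool:
        if now >= deadline:
            return True          # no cost once the deadline has passed
        if self.tokens > 0:
            self.tokens -= 1     # scarcity rewards starting early and testing locally first
            return True
        return False
```

    A regeneration rule (for example, tokens returning after a fixed interval) could be layered on top to further reward early submissions, but is omitted here for brevity.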

    Automated generation and checking of verification conditions

    LAV is a system for statically verifying program assertions and locating bugs such as buffer overflows, pointer errors and division by zero. LAV is primarily aimed at analyzing programs written in the programming language C. Since LAV uses the popular LLVM intermediate code representation, it can also analyze programs written in other procedural languages, and the proposed approach can be used with any similar intermediate low-level code representation. The system combines symbolic execution, SAT encoding of the program's control flow, and elements of bounded model checking. LAV represents the program's meaning using first-order logic (FOL) formulas and generates final verification conditions as FOL formulas. Each block of code (blocks have no internal branching and no loops) is represented by a FOL formula obtained through symbolic execution. Symbolic execution, however, is not performed between different blocks. Instead, relationships between blocks are modeled by propositional variables encoding transitions between blocks. LAV constructs formulas that encode block semantics once for each block. It then combines these formulas with propositional formulas encoding the transitions between the blocks. The resulting compound FOL formulas describe correctness and incorrectness conditions of individual instructions. These formulas are checked by an SMT solver that covers a suitable combination of theories. The theories that can be used for modeling correctness conditions are the theory of linear arithmetic, the theory of bit-vectors, the theory of uninterpreted functions, and the theory of arrays. Based on the results obtained from the solver, the analyzed command may be given the status safe (the command does not lead to an error), flawed (the command always leads to an error), unsafe (the command may lead to an error) or unreachable (the command will never be executed). If a command cannot be proved to be safe, LAV translates a potential counterexample from the solver into a program trace that exhibits the error and extracts the values of relevant program variables along this trace. The proposed system is implemented in the programming language C++ as a publicly available, open-source tool named LAV. LAV has support for several SMT solvers (Boolector, MathSAT, Yices, and Z3). Experimental evaluation on a corpus of C programs designed to demonstrate the strengths and weaknesses of different verification techniques suggests that LAV is competitive with related tools. Experimental results also show a significant advantage of the proposed system over symbolic execution when applied to programs containing a large number of possible execution paths. The proposed approach allows determining the status of commands in programs that are beyond the scope of symbolic execution tools. LAV has been successfully applied in an educational context, where it was used for finding bugs in programs written by students in an introductory programming course. This application showed that these programs contain a large number of bugs that a verification tool can efficiently find. Experimental results on a corpus of students' programs showed that LAV can find bugs that cannot be found by commonly used automated testing techniques.
It is also shown that LAV can improve the evaluation of students' assignments: (i) by providing useful and helpful feedback to students, which is important in the learning process, and (ii) by improving the automated grading process, which is especially important to teachers.
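
    As a hedged illustration of the kind of correctness condition described above (this is not LAV's implementation), the following sketch encodes a division-by-zero check for a small C-like statement as a first-order formula and asks Z3, one of the SMT solvers LAV supports, for a counterexample; the statement and variable names are made up for the example.

```python
# Sketch: verification condition for the (hypothetical) C statement
#     return 100 / (x - 7);
# The statement is unsafe iff the divisor can be zero, so we ask the solver
# whether the negation of the safety condition is satisfiable.
from z3 import Int, Solver, sat

x = Int("x")
divisor = x - 7

s = Solver()
s.add(divisor == 0)                  # negated safety condition: divisor == 0
if s.check() == sat:
    print("unsafe: counterexample x =", s.model()[x])   # e.g. x = 7
else:
    print("safe: the divisor can never be zero")
```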

    A Communication and Tutoring System for Learning Groups on the Internet

    In traditional teaching, exercises play a decisive role in students' learning success. Group exercises in particular make it possible to examine a subject from different perspectives and to understand it better through externalisation. This, however, presupposes direct communication between the people involved. In distance education, these possibilities are severely limited due to spatial separation and technical constraints: existing systems cover only partial aspects, and adaptive, integrated group exercises have so far been missing. The Communication and Tutoring System (CATS) presented in this thesis integrates adaptive exercises with a standardised communication system and thereby enables the exchange of knowledge among students as well as between students and teachers. In addition, a generic approach to the technical handling of group work in distance education is presented. Both the adaptive individual exercises and the group-work concept can be used in different subjects, and supporting tools assist teachers in creating exercises without requiring special programming skills. This addresses demands from learning psychology that previously, especially in distance education, could not be met. The system architecture was designed from the outset to ensure a stable, sustainable implementation without sacrificing the necessary flexibility. A dedicated integration model allows rapid connection to existing learning platforms; as part of this work, integration with the learning platform ".LRN" was realised. CATS was used in regular exercise operation to support the computer networks (Rechnernetze) lecture at the University of Mannheim in the summer semesters of 2003 and 2004, as well as the multimedia technology (Multimediatechnik) lecture in the winter semester 2003/04. The results of an empirical study conducted in the winter semester 2003/04 in the context of the multimedia technology lecture are presented, covering both the students' acceptance of the system and the exam results of CATS users. In addition, the exam results of on-campus students of the computer networks lecture in the summer semester 2004 are compared with those of distance students. CATS was also successfully used in the ULI, VIROR, Winfoline and Politikon projects.
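
    The thesis abstract does not describe CATS's adaptation algorithm, but the idea of adaptive individual exercises can be sketched with a hypothetical selection rule that matches task difficulty to the learner's recent success rate; the exercise titles and scoring scheme below are illustrative only.

```python
# Hypothetical adaptive exercise selection (not CATS's actual algorithm):
# pick the task whose difficulty is closest to the learner's success rate.
from dataclasses import dataclass

@dataclass
class Exercise:
    title: str
    difficulty: float  # 0.0 (easy) .. 1.0 (hard)

def next_exercise(pool, correct, attempted):
    success = correct / attempted if attempted else 0.5
    return min(pool, key=lambda e: abs(e.difficulty - success))

pool = [Exercise("Subnetting basics", 0.2),
        Exercise("TCP congestion control", 0.6),
        Exercise("Routing protocol design", 0.9)]
print(next_exercise(pool, correct=4, attempted=5).title)  # success 0.8 -> hardest task
```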