
    Experiences on Using TRAKLA2 to Teach Spatial Data Algorithms

    This paper reports on the results of a two-year project in which visual algorithm simulation exercises were developed for a spatial data algorithms course. The success of the project is studied from several points of view, i.e., from the developer's, the teachers', and the students' perspectives. The amount of work, the learning outcomes, and the feasibility of the system have been estimated based on the data gathered during the project. The results are encouraging, which motivates extending the concept to other courses in the future.

    Promoting Programming Learning. Engagement, Automatic Assessment with Immediate Feedback in Visualizations

    The skill of programming is a key asset for every computer science student. Many studies have shown that it is a hard skill to learn, and the outcomes of programming courses have often been substandard. Thus, a range of methods and tools have been developed to assist students' learning processes. One of the biggest fields in computer science education is the use of visualizations as a learning aid, and many visualization-based tools have been developed to support learning during the last few decades. The studies conducted in this thesis focus on two visualization-based tools, TRAKLA2 and ViLLE. This thesis includes results from multiple empirical studies on the effects that the introduction and usage of these tools have on students' opinions and performance, and on the implications from a teacher's point of view.

    The results from the studies in this thesis show that students preferred to do web-based exercises and felt that those exercises contributed to their learning. The usage of the tools motivated students to work harder during their course, which was reflected in overall course performance and drop-out statistics. We have also shown that visualization-based tools can be used to enhance the learning process, and one of the key factors is a higher and more active level of engagement (see the Engagement Taxonomy by Naps et al., 2002). Automatic grading accompanied by immediate feedback helps students to overcome obstacles during the learning process and to grasp the key elements of the learning task. Such tools can help us cope with the fact that many programming courses are overcrowded and have limited teaching resources. These tools allow us to tackle this problem by utilizing automatic assessment in exercises that are best suited to the web (such as tracing and simulation), since it supports students' independent learning regardless of time and place. In summary, we can use a course's resources more efficiently to increase the quality of the learning experience of the students and the teaching experience of the teacher, and even increase the performance of the students.

    There are also methodological results from this thesis which contribute to developing insight into the conduct of empirical evaluations of new tools or techniques. When we evaluate a new tool, especially one accompanied by visualization, we need to give a proper introduction to it and to the graphical notation used by the tool. The standard procedure should also include capturing the screen together with audio to confirm that the participants of the experiment are doing what they are supposed to do. By taking such measures in studies of the learning impact of visualization support, we can avoid drawing false conclusions from our experiments.

    As computer science educators, we face two important challenges. Firstly, we need to start delivering the message, in our own institutions and all over the world, about new, scientifically proven innovations in teaching such as TRAKLA2 and ViLLE. Secondly, we have relevant experience in conducting teaching-related experiments, and thus we can support our colleagues in learning the essential know-how of research-based improvement of their teaching. This approach can turn academic teaching into publications, and by utilizing it we can significantly increase the adoption of new tools and techniques and the overall knowledge of best practices. In the future, we need to combine our forces and tackle these universal and common problems together by creating multi-national and multi-institutional research projects. We need to create a community and a platform in which we can share these best practices and at the same time conduct multi-national research projects easily.
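    The automatic assessment described above checks a student's simulation or tracing work against the algorithm's correct execution. The following minimal sketch, assuming a hypothetical step-list format rather than the actual TRAKLA2 or ViLLE APIs, illustrates how automatic grading with immediate, step-level feedback can work:

```python
# Illustrative sketch only: a minimal automatic grader for a tracing/simulation
# exercise, in the spirit of the tools described above. The data format and
# function names are hypothetical, not the TRAKLA2 or ViLLE API.

def grade_simulation(student_steps, model_steps):
    """Compare a student's sequence of simulation states to the model solution
    and return a score plus immediate, step-level feedback."""
    feedback = []
    correct = 0
    for i, expected in enumerate(model_steps):
        if i < len(student_steps) and student_steps[i] == expected:
            correct += 1
        else:
            got = student_steps[i] if i < len(student_steps) else None
            feedback.append(f"Step {i + 1}: expected {expected}, got {got}")
    score = correct / len(model_steps)
    return score, feedback

# Example: tracing one pass of a simple sort on [3, 1, 2].
model = [[3, 1, 2], [1, 3, 2], [1, 2, 3]]
student = [[3, 1, 2], [1, 3, 2], [1, 3, 2]]   # the student misses the last swap
score, feedback = grade_simulation(student, model)
print(f"score: {score:.2f}")
for line in feedback:
    print(line)
```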

    Utilizing educational technology in computer science and programming courses: theory and practice

    There is one thing computer science education researchers seem to agree on: programming is a difficult skill to learn. Educational technology can potentially solve a number of difficulties associated with programming and computer science education by automating assessment, providing immediate feedback, and gamifying the learning process. Still, there are two very important issues to solve regarding the use of technology: which tools to use, and how to apply them? In this thesis, I present a model for successfully adapting educational technology to computer science and programming courses. The model is based on several years of studies conducted while developing and utilizing an exercise-based educational tool in various courses. The focus of the model is on improving student performance, measured by two easily quantifiable factors: the pass rate of the course and the average grade obtained from the course. The final model consists of five features that need to be considered in order to adapt technology effectively into a computer science course: active learning and continuous assessment, heterogeneous exercise types, electronic examination, tutorial-based learning, and a continuous feedback cycle. Additionally, I recommend that student mentoring be provided and the cognitive load of adapting the tools be considered when applying the model. The features are classified as core components, supportive components, or evaluation components based on their role in the complete model. Based on the results, it seems that adapting the complete model can increase the pass rate statistically significantly and provide higher grades when compared with a "traditional" programming course. The results also indicate that although adapting the model partially can create some improvement in performance, all features are required for the full effect to take place. Naturally, there are some limitations to the model. First, I do not consider it the only possible model for adapting educational technology to programming or computer science courses. Second, in addition to student performance, there are various other factors involved in creating a satisfying learning experience that need to be considered when refactoring courses. Still, the model presented can provide significantly better results, and as such, it works as a base for future improvements in computer science education.

    The difficulty of learning programming is one of the few things on which nearly all computing education researchers are more or less unanimous. With educational technology it is possible to solve several problems related to learning programming, for example by utilizing automatic assessment, immediate feedback, and gamification. Two essential questions, however, are associated with the technology: which tools to use, and how to take them into use effectively on courses? This dissertation presents a model for the effective use of educational technology on computing and programming courses. The model is based on a development and research process of over a decade around an exercise-based learning system. The emphasis of the model is on improving student performance, which is evaluated with two quantitative measures: the pass rate of the course and the average of the grades. The model consists of five factors that must be taken into account when bringing educational technology to programming courses: active learning and continuous assessment, heterogeneous exercise types, electronic examination, tutorial-based learning, and a continuous feedback cycle. In addition, arranging student mentoring on the courses and assessing the cognitive load related to adopting the system support the use of the model. In this work, the factors of the model are divided into three categories: core components, supportive components, and evaluation-related components. Based on the results, it appears that adopting the model improves the pass rate of the courses statistically significantly and raises the grade average compared with a "traditional" course model. Although adopting even individual features of the model can in itself improve course results, the studies included in the dissertation suggest that the best results are achieved by adopting the model in its entirety. Clearly, the model does not resolve every question related to the adoption of educational technology. Firstly, the presented model is not intended to be the only possible way to utilize educational technology on programming and computing courses. Secondly, beyond student performance, a satisfying learning experience involves many other factors that must be taken into account when redesigning courses. Nevertheless, the presented model makes significantly better course results possible and, as such, provides a foundation for ever better teaching.
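    As a concrete illustration of the two performance measures named above, the following sketch, with made-up grade data rather than results from the thesis, computes a course's pass rate and average grade and tests a pass-rate difference for statistical significance with a chi-square test:

```python
# Illustrative sketch only: computing the two performance measures used above
# (course pass rate and average grade) and testing whether a difference in pass
# rates between a "traditional" course and a course using the model is
# statistically significant. The grade data below are invented for the example.
from scipy.stats import chi2_contingency

def pass_rate(grades, passing_grade=1):
    """Share of students whose grade reaches the passing threshold (0 = fail)."""
    return sum(g >= passing_grade for g in grades) / len(grades)

def average_grade(grades):
    """Mean grade over the students who passed the course."""
    passed = [g for g in grades if g > 0]
    return sum(passed) / len(passed)

traditional = [0, 0, 1, 2, 2, 3, 0, 1, 4, 0]   # hypothetical grades, 0-5 scale
with_model  = [2, 3, 1, 4, 5, 3, 0, 2, 4, 3]

# 2x2 contingency table: rows = course variant, columns = passed / failed.
table = [
    [sum(g > 0 for g in traditional), sum(g == 0 for g in traditional)],
    [sum(g > 0 for g in with_model),  sum(g == 0 for g in with_model)],
]
chi2, p_value, dof, _ = chi2_contingency(table)

print(f"pass rate (traditional):     {pass_rate(traditional):.0%}")
print(f"pass rate (with model):      {pass_rate(with_model):.0%}")
print(f"average grade (traditional): {average_grade(traditional):.2f}")
print(f"average grade (with model):  {average_grade(with_model):.2f}")
print(f"chi-square p-value:          {p_value:.3f}")
```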

    ASSESSMENT IN DEVELOPMENT COMPUTER-AIDED INSTRUCTION

    Assessment in the development of computer-aided instruction (CAI) is not done only on the final product; assessment also takes place during the development process. In addition to assessments conducted by experts (expert judgment), assessment is also carried out with users: individual persons (one-to-one), a small group, an expanded group, and real users (in the dissemination process). In developing CAI, several aspects need to be assessed, such as programming, learning design, content, and the visual aspect. The indicators that should be included in developing CAI are listed below.

    a. Software engineering / programming: (1) effectiveness and efficiency in the development and use of instructional media; (2) reliability; (3) maintainability (can easily be maintained and managed); (4) usability (easy to use and simple to operate); (5) accuracy in the selection of the type of application/software/tool for development; (6) compatibility (the learning media can be installed and run on existing hardware and software); (7) packaging of integrated instructional media that is easy to execute; (8) completeness of the instructional media program documentation, consisting of an installation manual (clear, brief, and complete), troubleshooting guidance (clear, structured, and anticipatory), and the program design (clear and complete in describing the program workflow); (9) reusability (part or all of the learning media program can be reused to develop other learning media).

    b. Learning design and content: (1) clarity of the learning objectives (formulation, realism); (2) relevance of the learning objectives to the SK/KD/curriculum; (3) scope and depth of the learning objectives; (4) accuracy of the learning strategies used; (5) interactivity; (6) provision of motivation to learn; (7) contextuality and actuality; (8) completeness and quality of the learning support materials; (9) compliance of the learning materials with the objectives; (10) depth of the material; (11) ease of understanding; (12) systematic, continuous, and clear logical flow; (13) clarity of descriptions, discussion, examples, simulations, and exercises; (14) consistency of the evaluation with the learning objectives; (15) accuracy and provision of evaluation tools; (16) provision of feedback on the evaluation results.

    c. Visual communication / display: (1) communicative: in line with the message and acceptable to the intended target; (2) creativity in conveying ideas; (3) simple and attractive; (4) audio (narration, sound effects, background sound, music); (5) visual (layout design, typography, colour); (6) media movement (animation, movie); (7) interactive layout (navigation icons).

    Keywords: Assessment, Development, CAI, Programming, Design, Visual
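    The indicators above amount to a structured rubric. As a hedged illustration, and not a scheme prescribed by the paper, the sketch below encodes the three aspects as a simple data structure and averages hypothetical 1-5 expert ratings per aspect:

```python
# Illustrative sketch only: representing the assessment indicators above as a
# simple rubric and computing a per-aspect score for a CAI product.
# The 1-5 rating scale and the example ratings are hypothetical.

RUBRIC = {
    "software engineering": ["effectiveness/efficiency", "reliability", "maintainability",
                             "usability", "tool selection", "compatibility",
                             "packaging", "documentation", "reusability"],
    "learning design and content": ["objective clarity", "curriculum relevance", "scope and depth",
                                    "strategy accuracy", "interactivity", "motivation",
                                    "contextuality", "support materials", "material-objective fit",
                                    "material depth", "ease of understanding", "logical flow",
                                    "clarity of examples", "evaluation consistency",
                                    "evaluation tools", "feedback provision"],
    "visual communication": ["communicative", "creativity", "simplicity/attractiveness",
                             "audio", "visual", "media movement", "interactive layout"],
}

def aspect_score(ratings):
    """Average the 1-5 ratings given for one aspect's indicators."""
    return sum(ratings.values()) / len(ratings)

# Example: an expert rates every indicator of the visual communication aspect.
visual_ratings = {indicator: 4 for indicator in RUBRIC["visual communication"]}
print(f"visual communication score: {aspect_score(visual_ratings):.1f} / 5")
```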

    Digitally supported assessment

    This chapter focuses on digital assessment and feedback practices in distance education. Providing evidence of learning through assessment is at the heart of students’ experience of higher education (HE), whatever their mode of study. Open and distance education-focused institutions have justifiably been proud of their technical innovation, tending to move rapidly to harness available technologies (from post to broadcast media and, most recently, online media) in their mission to enable education for remote, distributed groups of learners. In recent years, distance education courses have, in the main, moved from paper and digital media delivered physically to wholly online delivery, except where the circumstances of target learners preclude reliance on a reliable and fast internet connection. In terms of content, discussion and collaboration, where distance education has forged ahead, campus-based, blended programmes have generally followed. However, in terms of assessment and feedback, distance education has remained somewhat conservative. While most assessment in distance education has taken place online along with content and communication, there has been a tendency to replicate fairly traditional assessment formats using digital tools

    Introductory programming: a systematic literature review

    As computing becomes a mainstream discipline embedded in the school curriculum and acts as an enabler for an increasing range of academic disciplines in higher education, the literature on introductory programming is growing. Although there have been several reviews that focus on specific aspects of introductory programming, there has been no broad overview of the literature exploring recent trends across the breadth of introductory programming. This paper is the report of an ITiCSE working group that conducted a systematic review in order to gain an overview of the introductory programming literature. Partitioning the literature into papers addressing the student, teaching, the curriculum, and assessment, we explore trends, highlight advances in knowledge over the past 15 years, and indicate possible directions for future research

    Towards automatic hints in visual algorithm simulation (Kohti automaattisia vihjeitä visuaalisessa algoritmisimulaatiossa)

    A visual algorithm simulation (VAS) exercise is an interactive application that teaches an algorithm or a data structure. The exercise shows the student a visual representation of a data structure with initial data, and the student imitates the execution of the algorithm by interacting with that representation. The student's solution is graded automatically. A misconception about the algorithm being learned can manifest itself as systematic errors, which can be modelled as a new algorithm. It is assumed that a VAS exercise that could automatically detect a misconception and give corrective feedback would further support learning. This thesis includes a literature review on algorithm misconceptions and an empirical study. Four VAS exercises of the OpenDSA e-textbook are reviewed at the level of their program code: evaluating a postfix expression, Build-heap, Quicksort, and Dijkstra's algorithm. A dataset of 1430 Build-heap VAS submissions is analysed manually with ad-hoc software, and the submissions are then automatically classified based on the misconceptions found. The main result extends the set of known misconceptions in the Build-heap VAS exercise: 52 percent of the submissions were correct, 17 percent were misconceptions, and the remaining 31 percent had a logical explanation. With a heap size of 10, 95 percent of the submissions classified as misconceptions have multiple possible explanations. The thesis presents Python software that can automatically classify the known misconceptions. Theory on how to generate VAS inputs that support the detection of misconceptions is discussed, and the theory is applied by improving the input generation of the Dijkstra's algorithm exercise. The thesis concludes that studying misconceptions in the VAS exercises of OpenDSA currently requires exercise-dependent work. Not all OpenDSA VAS exercises record enough data for later analysis, and the player and analysis software must be written separately for each exercise. There is a need to develop the OpenDSA-related software libraries to produce detailed exercise recordings. It should also be studied how the heap size in the Build-heap exercise affects the detection of misconceptions.

    A visual algorithm simulation exercise (VAS exercise) is an interactive application that teaches an algorithm or a data structure. The exercise shows the student a picture of a data structure with initial data. The student imitates the execution of the algorithm by interacting with the visual representation, and the student's solution is graded automatically. A misconception in a VAS exercise is a systematic misunderstanding by the student that can be described as an algorithm. It is assumed that a VAS exercise that automatically recognized a misconception and gave corrective feedback would support learning further. This thesis includes a literature review on algorithm misconceptions and an empirical study. Four VAS exercises of the electronic OpenDSA book have been examined at the level of their program code: evaluating a postfix expression, building a binary heap, quicksort, and Dijkstra's algorithm. A dataset containing 1430 recordings of the build-heap exercise has been analysed by hand with software developed for this purpose, and the recordings have then been automatically classified based on the misconceptions found. The main result of the work extends the set of misconceptions in the build-heap exercise: 52 percent of the submissions were correct, 17 percent were misconceptions, and the remaining 31 percent can be explained logically. Of the submissions classified as misconceptions, 95 percent had more than one equally good explanation when the size of the heap was 10. The work presents a Python program that can automatically classify known misconceptions. The work also presents theory on how to generate input data for VAS exercises so that it supports the recognition of misconceptions; the theory has been applied by improving the input generation of the Dijkstra's algorithm exercise. In conclusion, studying misconceptions in OpenDSA's VAS exercises currently requires exercise-specific work. Not all of OpenDSA's VAS exercises record enough data for later analysis, and a player and an analysis program must be written separately for each exercise. There is a need to develop OpenDSA's software libraries to produce detailed exercise recordings. The effect of heap size on the recognition of misconceptions in the build-heap exercise should also be studied.
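    The core idea above, that a misconception can be modelled as a variant algorithm and detected by checking which candidate reproduces a submission, can be sketched as follows. The "single-swap" misconception used here is a hypothetical example, not one of the misconceptions catalogued in the thesis:

```python
# Illustrative sketch only: modelling a misconception as a variant algorithm and
# classifying a student's final answer by checking which candidate reproduces it.
# The "swap once, don't continue sifting" misconception is a hypothetical example.

def sift_down(heap, i, n, recurse=True):
    """Sift heap[i] down in a max-heap of size n; optionally stop after one swap."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and heap[left] > heap[largest]:
        largest = left
    if right < n and heap[right] > heap[largest]:
        largest = right
    if largest != i:
        heap[i], heap[largest] = heap[largest], heap[i]
        if recurse:
            sift_down(heap, largest, n)

def build_heap(data, recurse=True):
    """Floyd's build-heap; recurse=False models the hypothetical misconception."""
    heap = list(data)
    for i in range(len(heap) // 2 - 1, -1, -1):
        sift_down(heap, i, len(heap), recurse)
    return heap

CANDIDATES = {
    "correct": lambda d: build_heap(d, recurse=True),
    "single-swap misconception": lambda d: build_heap(d, recurse=False),
}

def classify(initial_data, submitted_array):
    """Return the candidate algorithms whose output matches the submission."""
    return [name for name, algo in CANDIDATES.items() if algo(initial_data) == submitted_array]

data = [1, 5, 6, 9, 3, 2, 4, 7, 8, 0]                     # heap size 10, as in the study
print(classify(data, build_heap(data)))                   # ['correct']
print(classify(data, build_heap(data, recurse=False)))    # ['single-swap misconception']
```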

    An approach to automatic learning assessment based on the computational theory of perceptions

    E-learning systems output a huge quantity of data on a learning process. However, it takes a lot of specialist human resources to manually process these data and generate an assessment report. Additionally, for formative assessment, the report should state the attainment level of the learning goals defined by the instructor. This paper describes the use of the granular linguistic model of a phenomenon (GLMP) to model the assessment of the learning process and implement the automated generation of an assessment report. GLMP is based on fuzzy logic and the computational theory of perceptions. This technique is useful for implementing complex assessment criteria using inference systems based on linguistic rules. Apart from the grade, the model also generates a detailed natural language progress report on the achieved proficiency level, based exclusively on the objective data gathered from correct and incorrect responses. This is illustrated by applying the model to the assessment of Dijkstra’s algorithm learning using a visual simulation-based graph algorithm learning environment, called GRAPH
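    As a toy illustration of rule-based linguistic assessment in the spirit of the approach above (not the paper's actual GLMP implementation; the membership functions and wording are invented), the sketch below maps the share of correct responses in a Dijkstra's algorithm exercise to a natural-language statement:

```python
# Illustrative sketch only: a toy fuzzy, rule-based mapping from objective data
# (share of correct responses in a Dijkstra's algorithm exercise) to a linguistic
# assessment sentence. This is not the GLMP described in the paper; the
# membership functions and report wording are hypothetical.

def triangular(x, a, b, c):
    """Triangular membership function with peak at b and support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets over the proportion of correct responses (0.0 - 1.0).
LABELS = {
    "low":    lambda x: triangular(x, -0.01, 0.0, 0.5),
    "medium": lambda x: triangular(x, 0.25, 0.5, 0.75),
    "high":   lambda x: triangular(x, 0.5, 1.0, 1.01),
}

REPORT = {
    "low":    "the student has not yet grasped how Dijkstra's algorithm selects vertices",
    "medium": "the student applies Dijkstra's algorithm correctly in most steps",
    "high":   "the student shows a high proficiency level in Dijkstra's algorithm",
}

def assess(correct, total):
    """Pick the label with the highest membership degree and phrase a report."""
    ratio = correct / total
    label = max(LABELS, key=lambda name: LABELS[name](ratio))
    return f"Based on {correct}/{total} correct responses, {REPORT[label]}."

print(assess(17, 20))   # -> the "high" proficiency sentence
```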