
    Analysis of Students' Programming Knowledge and Error Development

    Learning to program is a hard task, since it involves different types of specialized knowledge: one not only needs knowledge about the programming language and its concepts, but also knowledge from the problem domain and general problem-solving abilities. Knowing how students develop programming knowledge and where they struggle can help in the development of suitable teaching strategies. However, the ever-increasing number of students makes it more and more difficult for educators to identify students' needs, problems, and deficiencies. The goal of this thesis is to gain insight into students' programming knowledge development based on their solutions to programming exercises. Knowledge is composed of so-called knowledge components (KCs). In this thesis, we focus on KCs at the syntactic level, which can be derived from abstract syntax trees (e.g., loops, comparisons), and at the semantic level, represented by so-called roles of variables. Since knowledge is not directly measurable, skill models are often used to estimate it. However, the programming domain has its own characteristics that have to be considered when selecting an appropriate skill model. One of the main characteristics of the programming domain is the dependency between KCs. Hence, we propose and evaluate a Dynamic Bayesian Network (DBN) for skill modeling, which allows these dependencies to be modeled explicitly. Besides the choice of a concrete model, certain meta-parameters, e.g., the granularity level of the KCs, also have to be set when designing a skill model. Therefore, we evaluate how meta-parameterization affects the prediction performance of skill models and which meta-parameters to choose. We use the DBN to create learning curves for each KC and derive implications for teaching from them. Not only students' knowledge but also their "mal-knowledge" is of importance. Therefore, we manually inspect students' programming errors and determine each error's frequency, duration, and re-occurrence. We distinguish between the error categories syntactic, conceptual, strategic, sloppiness, misinterpretation, and domain, and analyze how the errors change over time. Moreover, we use k-means clustering to identify patterns in the development of programming errors. The results of our case studies are promising. We show that correct meta-parameterization has a large effect on the prediction performance of skill models. In addition, our DBN performs as well as the other skill models while providing better interpretability. The learning curves of the KCs and the analysis of programming errors provide valuable information that can be used for course improvement, e.g., that students require more practice opportunities or are struggling with certain concepts.
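    The skill-modeling idea described above can be illustrated with a minimal knowledge-tracing-style update in Python. This is only a hedged sketch, not the thesis's actual DBN: the two knowledge components, the probabilities, and the dependency weight are invented for the example.

```python
# Minimal sketch of a skill-model update with a dependency between two
# knowledge components (KCs). All numbers and KC names are hypothetical;
# the thesis uses a full Dynamic Bayesian Network, not this simplification.

def update_mastery(p_mastery, correct, p_learn=0.15, p_slip=0.1, p_guess=0.2):
    """One Bayesian update of P(mastered) after observing a student answer."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        total = evidence + (1 - p_mastery) * p_guess
    else:
        evidence = p_mastery * p_slip
        total = evidence + (1 - p_mastery) * (1 - p_guess)
    posterior = evidence / total
    # Transition step: the student may have learned the KC during this practice step.
    return posterior + (1 - posterior) * p_learn

# Dependency between KCs: mastery of a prerequisite boosts the learning rate of a
# dependent KC (a crude stand-in for the DBN's explicit dependency edges).
p_kc_loop = 0.3        # prior mastery of a hypothetical "loops" KC
p_kc_comparison = 0.2  # prior mastery of "comparison", assumed to depend on loops

observations = [True, False, True, True]      # hypothetical answers on loop exercises
for correct in observations:
    p_kc_loop = update_mastery(p_kc_loop, correct)

boosted_learn_rate = 0.15 + 0.2 * p_kc_loop   # dependent KC learns faster if prerequisite is known
p_kc_comparison = update_mastery(p_kc_comparison, True, p_learn=boosted_learn_rate)

print(f"P(loops mastered)      = {p_kc_loop:.2f}")
print(f"P(comparison mastered) = {p_kc_comparison:.2f}")
```

    Plotting such mastery estimates per KC over successive exercises yields the kind of learning curves the thesis derives from its DBN.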

    Recognition of pen-based music notation with finite-state machines

    This work presents a statistical model to recognize pen-based music compositions using stroke recognition algorithms and finite-state machines. The series of strokes received as input is mapped onto a stochastic representation, which is combined with a formal language that describes musical symbols in terms of stroke primitives. A Probabilistic Finite-State Automaton is then obtained, which defines probabilities over the set of musical sequences. This model is crossed with a semantic language to avoid sequences that do not make musical sense. Finally, a decoding strategy is applied in order to output a hypothesis about the musical sequence actually written. Several decoding algorithms, stroke similarity measures, and probability density estimators are tested and evaluated comprehensively according to different metrics of interest. The results show the strength of the proposed model, which obtains competitive performance in all metrics and scenarios considered. This work was supported by the Spanish Ministerio de Educación, Cultura y Deporte through an FPU Fellowship (Ref. AP2012–0939) and the Spanish Ministerio de Economía y Competitividad through the TIMuL Project (No. TIN2013-48152-C2-1-R, supported by EU FEDER funds).
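    A probabilistic finite-state automaton of the kind described can be sketched in a few lines of Python; the states, stroke symbols, and probabilities below are invented for illustration and do not correspond to the paper's learned model or its semantic language.

```python
# Minimal sketch of scoring a stroke sequence with a probabilistic
# finite-state automaton (PFA). States, symbols, and probabilities are
# hypothetical; the paper estimates these from data and additionally
# crosses the PFA with a semantic language model before decoding.

# transitions[state][stroke_symbol] = (next_state, probability)
transitions = {
    "start": {"vertical_stroke": ("stem", 0.6), "dot": ("head", 0.4)},
    "stem":  {"dot": ("head", 0.7), "flag": ("eighth_note", 0.3)},
    "head":  {"vertical_stroke": ("quarter_note", 0.8)},
}
accepting = {"quarter_note": 0.9, "eighth_note": 0.8}  # final-state probabilities

def sequence_probability(strokes, state="start"):
    """Probability that the PFA generates the given stroke sequence."""
    prob = 1.0
    for symbol in strokes:
        if symbol not in transitions.get(state, {}):
            return 0.0  # no transition for this stroke: reject the hypothesis
        state, step_prob = transitions[state][symbol]
        prob *= step_prob
    return prob * accepting.get(state, 0.0)

print(sequence_probability(["vertical_stroke", "dot", "vertical_stroke"]))  # quarter-note hypothesis
print(sequence_probability(["dot", "flag"]))                                # rejected: no such path
```

    Decoding then amounts to comparing such probabilities across competing symbol-sequence hypotheses and keeping the most likely one.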

    Suffix Structures and Circular Pattern Problems

    The suffix tree is a data structure that represents all the suffixes of a string. However, a major problem with the suffix tree is its practical space requirement. In this dissertation, we propose an efficient data structure, the virtual suffix tree (VST), which requires less space than other recently proposed data structures for suffix trees and suffix arrays. On average, the space requirement (including that for suffix arrays and suffix links) is 13.8n bytes for the regular VST and 12.05n bytes in its compact form, where n is the length of the sequence. Markov models are very popular for modeling complex sequences. In this dissertation, we present the probabilistic suffix array (PSA), a space-efficient alternative to the probabilistic suffix tree (PST) used to represent Markov models. The PSA provides all the capabilities of the PST, such as learning and prediction, and maintains the same linear-time construction (linear with respect to sequence length). The PSA, however, has a significantly smaller memory requirement than the PST, both during construction and at the time of usage. Using the proposed suffix data structures, we study the circular pattern matching (CPM) problem. We provide a linear-time, linear-space algorithm to solve the exact circular pattern matching problem. We then present four algorithms to address the approximate circular pattern matching (ACPM) problem. Our bidirectional ACPM algorithm provides the best time complexity when compared with other algorithms proposed in the literature. Further, we define the circular pattern discovery (CPD) problem and present algorithms to solve it. Using the proposed circular pattern matching algorithms, we perform experiments on computational analysis and function prediction for multidomain proteins.
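    The exact circular pattern matching problem can be illustrated with a short Python sketch: every rotation of the pattern is obtained from the pattern concatenated with itself, and the text is scanned for any of them. This naive quadratic version only illustrates the problem definition; it is not the dissertation's linear-time, suffix-structure-based algorithm.

```python
# Naive sketch of exact circular pattern matching (CPM): report every
# position in the text where some rotation of the pattern occurs.
# The dissertation solves this in linear time and space with suffix
# structures; this version exists only to make the problem concrete.

def circular_matches(text, pattern):
    m = len(pattern)
    doubled = pattern + pattern                      # contains every rotation as a substring
    rotations = {doubled[i:i + m] for i in range(m)} # all m rotations of the pattern
    return [i for i in range(len(text) - m + 1) if text[i:i + m] in rotations]

# Every rotation of "abc" ("abc", "bca", "cab") is reported.
print(circular_matches("xabcxbcaxcabx", "abc"))   # [1, 5, 9]
```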

    Automatic Video-based Analysis of Human Motion


    Biophysics of Human Neutrophil Haptokinesis

    Neutrophils are a type of white blood cell and first responders to tissue trauma and infection. This thesis explores the role of extracellular adhesivity in dictating neutrophil phenotype with respect to cell shape, motility, mechanical force generation, and the molecular constituents involved in these processes. The principal tool employed is microcontact printing, a powerful method to spatially organize a cell's adhesive environment. We demonstrate the capacity of neutrophils to sense adhesive density on stiff substrates and to respond differently to surfaces with low and high fibronectin content. On low and moderately adhesive surfaces, neutrophils assume a highly spread, uropod-absent phenotype reminiscent of keratocytes. On highly adhesive surfaces, neutrophils assume the classic amoeboid morphology with an elongated cell body, narrow lamellipodium, and knob-like trailing uropod. Our work reconciles conflicting observations of these two phenotypes previously attributed solely to the underlying stiffness of the substrate. The spreading and motility quantified are haptokinetic, induced through the quiescent cell's interaction with immobilized adhesive ligand alone. Function-blocking antibody studies implicated the promiscuous Mac-1 integrin receptor in supporting haptokinetic migration. We elucidate the density-sensing length scale by presenting high and low adhesive cues to the cells simultaneously. Through rational design of the adhesive domains, we conclude that neutrophils sense density at the whole-cell length scale, integrating adhesive stimuli over their entire contact interface. Adhesion density sensitivity in stiff microenvironments has applicability to the study of cancer metastasis, and particularly to the epithelial-to-mesenchymal transition model. We also employ the microfabricated-Post-Array-Detectors (mPADs) traction platform to measure the forces associated with neutrophil spreading. We resolve, with high spatial and temporal resolution, a highly coordinated protrusive wave front of pN magnitude that propagates radially outward from the cell center. Small-molecule inhibitor studies established that spreading was not analogous to lamellipodium formation but was sensitive to perturbations of actin cortical stiffness. Lastly, we apply the principles uncovered in neutrophils to the patterning of surface-active microfluidic vesicles by tuning vesicle-substrate adhesion and repulsion at the contact interface. The generation of ordered arrays of micron-scale vesicles was a first of its kind.

    Thinking FORTH: a language and philosophy for solving problems

    XIV, 313 p.; 24 cm. Electronic book. Thinking Forth is a book about the philosophy of problem solving and programming style, applied to the unique programming language Forth. First published in 1984, it could be among the timeless classics of computer books, such as Fred Brooks' The Mythical Man-Month and Donald Knuth's The Art of Computer Programming. Many software engineering principles discussed here have been rediscovered in eXtreme Programming, including (re)factoring, modularity, and bottom-up and incremental design. Here you'll find all of those and more - such as the value of analysis and design - described in Leo Brodie's down-to-earth, humorous style, with illustrations, code examples, practical real-life applications, illustrative cartoons, and interviews with Forth's inventor, Charles H. Moore, as well as other Forth thinkers. If you program in Forth, this is a must-read book. If you don't, the fundamental concepts are universal: Thinking Forth is meant for anyone interested in writing software to solve problems. The concepts go beyond Forth, but the simple beauty of Forth throws those concepts into stark relief. So flip open the book and read all about the philosophy of Forth, analysis, decomposition, problem solving, style and conventions, factoring, handling data, and minimizing control structures. But be prepared: you may not be able to put it down. This book has been scanned, OCR'd, typeset in LaTeX, and brought back to print (and your monitor) by a collaborative effort under a Creative Commons license. http://thinking-forth.sourceforge.net/
    Contents: The Philosophy of Forth (An Armchair History of Software Elegance; The Superficiality of Structure; Looking Back, and Forth; Component Programming; Hide From Whom?; Hiding the Construction of Data Structures; But Is It a High-Level Language?; The Language of Design; The Language of Performance; Summary; References); Analysis (The Nine Phases of the Programming Cycle; The Iterative Approach; The Value of Planning; The Limitations of Planning; The Analysis Phase; Defining the Interfaces; Defining the Rules; Defining the Data Structures; Achieving Simplicity; Budgeting and Scheduling; Reviewing the Conceptual Model; References); Preliminary Design/Decomposition (Decomposition by Component; Example: A Tiny Editor; Maintaining a Component-based Application; Designing and Maintaining a Traditional Application; The Interface Component; Decomposition by Sequential Complexity; The Limits of Level Thinking; Summary; For Further Thinking); Detailed Design/Problem Solving (Problem-Solving Techniques; Interview with a Software Inventor; Detailed Design; Forth Syntax; Algorithms and Data Structures; Calculations vs. Data Structures vs. Logic; Solving a Problem: Computing Roman Numerals; Summary; References; For Further Thinking); Implementation: Elements of Forth Style (Listing Organization; Screen Layout; Comment Conventions; Vertical Format vs. Horizontal Format; Choosing Names: The Art; Naming Standards: The Science; More Tips for Readability; Summary; References); Factoring (Factoring Techniques; Factoring Criteria; Compile-Time Factoring; The Iterative Approach in Implementation; References); Handling Data: Stacks and States (The Stylish Stack; The Stylish Return Stack; The Problem With Variables; Local and Global Variables/Initialization; Saving and Restoring a State; Application Stacks; Sharing Components; The State Table; Vectored Execution; Using DOER/MAKE; Summary; References); Minimizing Control Structures (What's So Bad about Control Structures?; How to Eliminate Control Structures; A Note on Tricks; Summary; References; For Further Thinking); Forth's Effect on Thinking; Appendix A: Overview of Forth (For Newcomers); Appendix B: Defining DOER/MAKE; Appendix C: Other Utilities Described in This Book; Appendix D: Answers to "Further Thinking" Problems; Appendix E: Summary of Style Conventions; Index

    EDM 2011: 4th international conference on educational data mining : Eindhoven, July 6-8, 2011 : proceedings
