
    Evaluating the development and effectiveness of grit and growth mindset among high school students in a computer programming project

    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master of Science, Johannesburg, 2016. This dissertation investigates grit (“passion and perseverance” for a long-term goal) and growth mindset in grade 11 high school students as they code a non-trivial programming project in Java over a six-week period. Students are often challenged by the complexities of programming and can be overwhelmed when they encounter errors, causing them to give up rather than persevere. The programming project includes scaffolding with frequent feedback to increase students’ motivation. The study used a mixed-methods design, drawing on both quantitative and qualitative data to answer the research questions. Whilst the correlations between grit, mindset and the project results were moderate, the fact that students submitted their project numerous times was an indication of perseverance. The data gathered from the interviews further indicated that the students’ perseverance led them to employ their own problem-solving strategies when they encountered problems.

    The Effects of the Unit Concept on Prospective Elementary Teachers' Understanding of Rational Number Concepts.

    The purpose of this study was to follow and describe the cognitive processes of five prospective elementary teachers as they engaged in the formation of units, and to examine the role of the unit concept as a possible link between the whole number and rational number domains. An attempt was made to gain an understanding of how the students constructed units and whether or not their attention to and understanding of the unit concept would increase their understanding of rational number concepts and operations. The rational number domain is one that causes great difficulties for students and their teachers. The complexity of this domain is revealed through the many roles in which a rational number can appear: measure, ratio, part-whole, quotient, and operator. In an effort to improve rational number understanding, focus has turned to the unit fraction and the basic concept of unit. It has been suggested that students possess intuitive or informal knowledge of unit formation and that this knowledge may be used as a foundation for building rational number understanding. This study examined the role of the unit concept in bridging the gap between whole numbers and rational numbers. The students were five preservice elementary teachers enrolled in a mathematics course designed for elementary education majors. The group of five students was selected based on an inventory and personal interviews. Once selected, the students participated in a teaching experiment that consisted of six lessons. Data were collected through video recordings, audiotapes, journals, essays, and students' written work. Results of the study indicated that: (a) students' awareness of their informal knowledge regarding the unit concept promotes understanding; (b) teachers who provide opportunities for students to build on their informal knowledge by working with various whole number units to develop unitizing and norming skills help students develop schemes for further work with rational numbers; and (c) students who become accustomed to focusing on the unit may more readily recognize intuitive and authentic connections between natural and rational numbers.
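
    As a small worked illustration of the unitizing and re-norming idea discussed above (an example constructed here, not one taken from the study's lessons): treating 1/4 as the unit, the fraction 3/4 is three of that unit, while re-norming with a smaller unit of 1/8 expresses the same quantity as six units, paralleling the way the whole number 6 is six units of 1:

        \[
        \tfrac{3}{4} \;=\; 3 \times \tfrac{1}{4} \;=\; 6 \times \tfrac{1}{8},
        \qquad
        6 \;=\; 6 \times 1 .
        \]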

    Using Class-Level Static Properties to Predict Object Lifetimes

    Today, most modern programming languages such as C# or Java use an automatic memory management system, also known as a Garbage Collector (GC). Over the course of program execution, new objects are allocated in memory, and some older objects become unreachable (die). In order for the program to keep running, it becomes necessary to free the memory of dead objects; this task is performed periodically by the GC. Research has shown that most objects die young, and as a result generational collectors have become very popular over the years. Yet these algorithms are not good at handling long-lived objects. Typically, long-lived objects are first allocated in the nursery space and promoted (copied) to an older generation after surviving a garbage collection, hence wasting precious time. By allocating long-lived and immortal objects directly into infrequently or never collected regions, pretenuring can reduce garbage collection costs significantly. The current state-of-the-art methodology for predicting object lifetimes involves off-line profiling combined with a simple, heuristic classification. Profiling is slow (it can take days), requires gathering gigabytes of data that need to be analysed (which can take hours), and needs to be repeated for every previously unseen program. This thesis explores the space of lifetime predictions and shows how object lifetimes can be predicted accurately and quickly using simple program characteristics gathered within minutes. Following an innovative methodology introduced in this thesis, object lifetime predictions are fed into a specifically modified Java virtual machine. Performance tests show gains in GC times of as much as 77% for the “SPEC jvm98” benchmarks, against a generational copying collector.
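
    The abstract does not spell out which class-level static properties feed the predictor, so the Java sketch below is only an illustration of the general idea rather than the thesis's actual method: a hypothetical classifier scores a class on a few static features (reference-field count, container-likeness, finality) and emits a pretenuring hint. All feature choices, thresholds, and names here are assumptions.

        import java.lang.reflect.Field;
        import java.lang.reflect.Modifier;
        import java.util.ArrayList;
        import java.util.HashMap;

        // Illustrative sketch only: score a class with simple static properties
        // and emit a pretenuring hint. Features and thresholds are made up.
        public class LifetimeHint {

            enum Hint { NURSERY, OLD_GENERATION }

            static Hint predict(Class<?> cls) {
                int score = 0;

                // Classes with several reference fields often anchor long-lived structures.
                int refFields = 0;
                for (Field f : cls.getDeclaredFields()) {
                    if (!f.getType().isPrimitive() && !Modifier.isStatic(f.getModifiers())) {
                        refFields++;
                    }
                }
                if (refFields >= 4) score += 2;

                // Container-like classes tend to outlive the objects they hold.
                if (java.util.Collection.class.isAssignableFrom(cls)
                        || java.util.Map.class.isAssignableFrom(cls)) {
                    score += 2;
                }

                // Final classes with no reference fields are often short-lived value carriers.
                if (Modifier.isFinal(cls.getModifiers()) && refFields == 0) score -= 1;

                return score >= 2 ? Hint.OLD_GENERATION : Hint.NURSERY;
            }

            public static void main(String[] args) {
                System.out.println(HashMap.class.getSimpleName() + " -> " + predict(HashMap.class));
                System.out.println(ArrayList.class.getSimpleName() + " -> " + predict(ArrayList.class));
                System.out.println(String.class.getSimpleName() + " -> " + predict(String.class));
            }
        }

    A predictor of this kind needs only a reflective pass over loaded classes, which is in the spirit of the abstract's claim that useful lifetime signals can be gathered in minutes rather than through days of profiling.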

    A Cognitive Model for Problem Solving in Computer Science

    According to industry representatives, computer science education needs to emphasize the processes involved in solving computing problems rather than their solutions. Most of the current assessment tools used by universities and computer science departments analyze student answers to problems rather than investigating the processes involved in solving them. Approaching assessment from this perspective would reveal potential errors leading to incorrect solutions. This dissertation proposes a model describing how people solve computational problems by storing, retrieving, and manipulating information and knowledge. It describes how metacognition interacts with schemata representing conceptual and procedural knowledge, as well as with the external sources of information that might be needed to arrive at a solution. Metacognition includes higher-order, executive processes responsible for controlling and monitoring schemata, which in turn represent the algorithmic knowledge needed for organizing and adapting concepts to a specific domain. The model illustrates how metacognitive processes interact with the knowledge represented by schemata as well as the information from external sources. This research investigates the differences in the way computer science novices use their metacognition and schemata to solve a computer programming problem. After J. Parham and L. Gugerty reached an 85% reliability for six metacognitive processes and six domain-specific schemata for writing a computer program, the resulting vocabulary provided the foundation for supporting the existence of, and the interaction between, metacognition, schemata, and external sources of information in computer programming. Overall, the participants in this research used their schemata 6% more than their metacognition, employing metacognitive processes to control and monitor the schemata used to write a computer program. This research has potential implications for computer science education and software development through its understanding of the cognitive behavior used to solve computational problems.

    The Value of Seizure Semiology in Epilepsy Surgery: Epileptogenic-Zone Localisation in Presurgical Patients using Machine Learning and Semiology Visualisation Tool

    Background: Eight million individuals have focal drug-resistant epilepsy worldwide. If their epileptogenic focus is identified and resected, they may become seizure-free and experience significant improvements in quality of life. However, seizure-freedom occurs in less than half of surgical resections. Seizure semiology - the signs and symptoms during a seizure - along with brain imaging and electroencephalography (EEG) is amongst the mainstays of seizure localisation. Although there have been advances in algorithmic identification of abnormalities on EEG and imaging, semiological analysis has remained more subjective. The primary objective of this research was to investigate the localising value of clinician-identified semiology, and secondarily to improve personalised prognostication for epilepsy surgery. Methods: I data-mined retrospective hospital records to link semiology to outcomes. I trained machine learning models to predict temporal lobe epilepsy (TLE) and determine the value of semiology compared to a benchmark of hippocampal sclerosis (HS). Because the hospital dataset was relatively small, we also collected data from a systematic review of the literature to curate an open-access Semio2Brain database. We built the Semiology-to-Brain Visualisation Tool (SVT) on this database and retrospectively validated SVT in two separate groups: randomly selected patients and individuals with frontal lobe epilepsy (FLE). Separately, a systematic review of multimodal prognostic features of epilepsy surgery was undertaken. The concept of a semiological connectome was devised and compared to structural connectivity to investigate probabilistic propagation and semiology generation. Results: Although a (non-chronological) list of patients’ semiologies did not improve localisation beyond the initial semiology, the list of semiologies added value when combined with an imaging feature. The absolute added value of semiology in a support vector classifier for diagnosing TLE, compared to HS, was 25%. Semiology was, however, unable to predict postsurgical outcomes. To help future prognostic models, a list of essential multimodal prognostic features for epilepsy surgery was extracted from meta-analyses and a structural causal model proposed. Semio2Brain consists of over 13,000 semiological datapoints from 4,643 patients across 309 studies and uniquely enabled a Bayesian approach to localisation to mitigate TLE publication bias. SVT performed well in a retrospective validation, matching the best expert clinician’s localisation scores and exceeding them for lateralisation, and showed modest value for localisation in individuals with FLE. There was a significant correlation between the number of connecting fibres between brain regions and the seizure semiologies that can arise from these regions. Conclusions: Semiology is valuable for localisation, but multimodal concordance is more valuable and highly prognostic. SVT could be suitable for use in multimodal models to predict the seizure focus.
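
    The abstract does not give the form of the Bayesian localisation model, so the toy Java sketch below only illustrates the kind of computation a database such as Semio2Brain enables: per-region occurrence counts for a reported semiology act as likelihoods and are combined with a prior over regions to give a posterior. The region names, counts, and prior values are made-up assumptions, not figures from Semio2Brain.

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Toy Bayesian update over candidate brain regions, for illustration only.
        public class SemiologyLocaliser {

            static Map<String, Double> posterior(Map<String, Integer> countsForSemiology,
                                                 Map<String, Double> prior) {
                Map<String, Double> post = new LinkedHashMap<>();
                double norm = 0.0;
                for (Map.Entry<String, Integer> e : countsForSemiology.entrySet()) {
                    // Unnormalised posterior: count-based likelihood times prior.
                    double p = e.getValue() * prior.getOrDefault(e.getKey(), 0.0);
                    post.put(e.getKey(), p);
                    norm += p;
                }
                for (Map.Entry<String, Double> e : post.entrySet()) {
                    e.setValue(e.getValue() / norm); // normalise so the posterior sums to 1
                }
                return post;
            }

            public static void main(String[] args) {
                // Hypothetical counts: how often one semiology was reported per region.
                Map<String, Integer> counts = new LinkedHashMap<>();
                counts.put("temporal", 120);
                counts.put("frontal", 30);
                counts.put("parietal", 10);

                // A prior flatter than the raw literature could mitigate TLE publication bias.
                Map<String, Double> prior = new LinkedHashMap<>();
                prior.put("temporal", 0.40);
                prior.put("frontal", 0.35);
                prior.put("parietal", 0.25);

                posterior(counts, prior).forEach((region, p) ->
                        System.out.printf("%s: %.2f%n", region, p));
            }
        }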

    Stochastic suprasegmentals: relationships between redundancy, prosodic structure and care of articulation in spontaneous speech

    Within spontaneous speech there are wide variations in the articulation of the same word by the same speaker. This paper explores two related factors which influence variation in articulation: prosodic structure and redundancy. We argue that the constraint of producing robust communication while efficiently expending articulatory effort leads to an inverse relationship between language redundancy and care of articulation. The inverse relationship improves robustness by spreading the information more evenly across the speech signal, leading to a smoother signal redundancy profile. We argue that prosodic prominence is a linguistic means of achieving smooth signal redundancy. Prosodic prominence increases care of articulation and coincides with unpredictable sections of speech. By doing so, prosodic prominence leads to a smoother signal redundancy profile. Results confirm the strong relationship between prosodic prominence and care of articulation, as well as an inverse relationship between language redundancy and care of articulation.
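
    As a minimal sketch of that argument (one simple additive rendering, not necessarily the authors' exact formulation), keeping the combined redundancy of the signal roughly constant across words forces acoustic care to rise exactly where language redundancy falls:

        \[
        R_{\text{signal}}(w) \;\approx\; R_{\text{language}}(w) + R_{\text{acoustic}}(w) \;\approx\; \text{const}
        \quad\Longrightarrow\quad
        R_{\text{acoustic}}(w) \;\propto\; -\,R_{\text{language}}(w).
        \]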

    Writing relationships: Reading students, reading ourselves

    If we want to understand how students learn to write in a college composition course, we need to pay more attention to the context in which that writing and the teacher's reading occurs. What we need is a definition of context broad enough to account for the interactive and dialectical nature of the composing and reading processes, but still narrow enough to tell us what not to take into account. My argument in this dissertation is that we can best accomplish this by viewing context in composition as primarily determined by the interpersonal, classroom relationships--between the student and teacher, between the student and other students, and, finally, between the teacher and other teachers--that shape the writing and reading processes. Traditionally we have considered the quality of the relationships in a writing classroom to be an effect of a student's success or failure as a writer; I think that it is often the other way around, that writing students succeed when teachers establish productive relationships with--and between--their students. I am not suggesting that establishing productive classroom relationships is another nice thing to do if we have time; I am arguing that it is the primary thing we need to do if we want to succeed as writing teachers. Throughout this dissertation, I have tried to identify moments of conflict, connection, and tension, moments when authority was being asserted, resisted, and negotiated. In the first section--the teacher-student relationship--I focus on how I read student texts, how we talk about composing in class, and how tension is negotiated in the one-to-one conference; in the second section--the student-student relationship--I examine competition, identification, and collaboration between peers; and in the final section, I examine some implications of teacher-researcher writing. In order to explore interpersonal relationships, I've tried to develop an approach which reflects the multifaceted, interdisciplinary nature of my topic, one which makes use of a wide range of methods and techniques: narrative, analysis, theory, case study, self-study, and argument.