
    Sympathetic Activation in Deadlines of Deskbound Research - A Study in the Wild

    Paper and proposal deadlines are important milestones that conjure up emotional memories for researchers. The question is whether, in the daily challenging world of scholarly research, deadlines truly incur higher sympathetic loading than the alternative. Here we report results from a longitudinal, in-the-wild study of n = 10 researchers working in the presence and absence of impending deadlines. Unlike past retrospective, questionnaire-based studies of research deadlines, our study is real-time and multimodal, including physiological, observational, and psychometric measurements. The results suggest that deadlines do not significantly add to the sympathetic loading of researchers. Irrespective of deadlines, the researchers' sympathetic activation is strongly associated with the amount of reading and writing they do, the extent of smartphone use, and the frequency of physical breaks they take. The latter likely indicates a natural mechanism for regulating sympathetic overactivity in deskbound research, which can inform the design of future break interfaces.

    Early Developmental Activities and Computing Proficiency

    As countries adopt computing education for all pupils from primary school upwards, there are challenging indicators: significant proportions of students who choose to study computing at universities fail the introductory courses, and the evidence for links between formal education outcomes and success in CS is limited. Yet, as we know, some students succeed without prior computing experience. Why is this?

    Some argue for an innate ability, some for motivation, some for the discrepancies between the expectations of instructors and students, and some, simply, for how programming is taught. All agree that becoming proficient in computing is not easy. Our research takes a novel view of the problem and argues that some of that success is influenced by early childhood experiences outside formal education.

    In this study, we analyzed over 1300 responses to a multi-institutional and multi-national survey that we developed. The survey captures enjoyment of early developmental activities such as childhood toys, games and pastimes between the ages of 0 and 8, as well as later life experiences with computing. We identify unifying features of the computing experiences in later life, and attempt to link these computing experiences to the childhood activities.

    The analysis indicates that computing proficiency should be seen from multiple viewpoints, including both skill level and confidence. It shows that particular early childhood experiences are linked to parts of computing proficiency, namely those related to confidence with problem solving using computing technology. These are essential building blocks for more complex use. We recognize issues in the experimental design that may prevent our data from showing a link between early activities and more complex computing skills, and suggest adjustments. Ultimately, it is hoped that this line of research will feed into early years and primary education, and thereby improve computing education for all.

    Using Lisp-based pseudocode to probe student understanding

    We describe our use of Lisp to generate teaching aids for an Algorithms and Data Structures course taught as part of the undergraduate Computer Science curriculum. Specifically, we have made use of the ease of construction of domain-specific languages in Lisp to build a restricted language whose programs are capable of being pretty-printed as pseudocode, interpreted as abstract instructions, and treated as data in order to produce modified distractor versions. We examine student performance, report on student and educator reflection, and discuss practical aspects of teaching with this tool.
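    The core idea of the abstract, representing a program as data so the same structure can be pretty-printed as pseudocode or mutated into a distractor, can be sketched outside Lisp as well. The following is an illustrative Python analogue, not the authors' implementation: expression "programs" are nested tuples, `pretty` renders them as pseudocode, and `distractor` derives a plausible-but-different variant by swapping operand order.

```python
# Tiny expression "programs" represented as nested tuples, in the
# spirit of the Lisp approach: the same structure can be
# pretty-printed as pseudocode or mutated into a distractor.

def pretty(expr):
    """Render an ('op', left, right) tree as infix pseudocode."""
    if not isinstance(expr, tuple):
        return str(expr)
    op, lhs, rhs = expr
    return f"({pretty(lhs)} {op} {pretty(rhs)})"

def distractor(expr):
    """Produce a modified version by recursively swapping operands,
    one simple way to generate a plausible-but-wrong variant."""
    if not isinstance(expr, tuple):
        return expr
    op, lhs, rhs = expr
    return (op, distractor(rhs), distractor(lhs))

prog = ("+", ("*", "a", "b"), "c")
rendered = pretty(prog)              # "((a * b) + c)"
variant = pretty(distractor(prog))   # "(c + (b * a))"
```

    Because the program is plain data, further transformations (interpretation as abstract instructions, other distractor rules) are additional functions over the same tree.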

    Models for Forecasting Value at Risk: A comparison of the predictive ability of different VaR models to capture market losses incurred during the 2020 pandemic recession

    Dissertation presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Risk Analysis and Management.

    The purpose of this study is two-fold. First, it aims at providing a theoretical overview of the most widely adopted methods for forecasting Value-at-Risk (VaR). Second, through a practical implementation, it proposes a methodology to compare and evaluate the predictive ability of different parametric, non-parametric and semi-parametric models to capture the market losses incurred during the COVID-19 pandemic recession of 2020. To evaluate these models, a two-stage backtesting procedure based on accuracy statistical tests and loss functions is applied. VaR forecasts are evaluated over a volatile and a stable forecasting period. The results of the study suggest that, for the volatile period, the Extreme Value Theory with a peaks-over-threshold (EVT-POT) approach produces the most accurate VaR forecasts across all methodologies. The Filtered Historical Simulation (FHS), Volatility Weighted Historical Simulation (VWHS) and the Glosten, Jagannathan and Runkle GARCH with skewed generalized error distribution (GJR GARCH-SGED) models also produce satisfactory forecasts. Moreover, other parametric approaches, namely GARCH and EWMA, though less accurate, also produce reliable results. Furthermore, the overall performance of all models improves significantly during the stable forecasting period. For instance, the Historical Simulation with exponentially decreasing weights (BRW HS), one of the worst performers during the volatile forecasting period, produces the most accurate VaR forecasts, with the lowest penalty scores, during the stable forecasting period. Lastly, it was also found that as the level of conservativeness of a model increases, overestimation of the actual incurred risk seems to be a recurrent event.
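    As a concrete illustration of the simplest family of methods compared above, plain historical simulation estimates one-day VaR at confidence level α as the empirical (1 − α) quantile of past returns. The sketch below uses synthetic Gaussian returns purely for illustration; it is not the dissertation's data or its full backtesting procedure.

```python
import random

def historical_var(returns, alpha=0.99):
    """One-day VaR via plain historical simulation: the empirical
    (1 - alpha) quantile of past returns, reported as a positive
    loss number."""
    ordered = sorted(returns)              # worst returns first
    idx = int((1 - alpha) * len(ordered))  # index of the quantile
    return -ordered[idx]

# Synthetic daily returns (mean 0.05%, sd 2%) for illustration only.
random.seed(42)
returns = [random.gauss(0.0005, 0.02) for _ in range(1000)]

var_99 = historical_var(returns, alpha=0.99)
```

    The weighted (BRW), filtered (FHS), and parametric (GARCH-family) variants discussed in the abstract differ in how they reweight or rescale this return history before taking the quantile.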

    The different types of contributions to knowledge (in CER): All needed, but not all recognised

    The overall aim of this paper is to stimulate discussion about the activities within CER, and to develop a more thoughtful and explicit perspective on the different types of research activity within CER and their relationships with each other. While theories may be the most valuable outputs of research to those wishing to apply them, for researchers themselves there are other kinds of contribution important to progress in the field; this is what relates the paper to the immediate subject of this special journal issue on theory in CER. We adopt as our criterion for value “contribution to knowledge”. This paper’s main contributions are:

    • A set of 12 categories of contribution which together indicate the extent of this terrain of contributions to research.
    • Leading into that, a collection of ideas and misconceptions which are drawn on in defining and motivating “ground rules”: hints and guidance on the need for various often-neglected categories. These are also helpful in justifying some additional categories which make the set as a whole more useful in combination.

    These are followed by some suggested uses for the categories, and a discussion assessing how the success of the paper might be judged.

    Using knowledge elicitation techniques to establish a baseline of quantitative measures of computational thinking skill acquisition among university computer science students.

    The purpose of this study was to establish a baseline of quantitative measures of computational thinking skill acquisition as an aid in evaluating student outcomes for programming competency. Proxy measures for the desired skill levels were identified that reliably differentiate the conceptual representations of computer science students most likely, from those least likely, to have attained the desired level of programming skill. Insights about the development of computational thinking skills across the degree program were gained by analyzing variances between these proxy measures and the conceptual representations of cross-sections of participating students partitioned by levels of coursework attainment, programming experience, and academic performance. Going forward, similar measures can provide a basis for quantitative assessment of individual attainment of the desired learning outcome. The voluntary participants for this study were students enrolled in selected undergraduate computer science courses at the University. Their conceptual representations regarding programming concepts were elicited with a repeated, open card sort task and stimuli set as used for prior studies of computer science education. A total of 135 students participated, with 124 of these providing 296 card sorts. Differences between card sorts were quantified with the edit distance metric which provided a basis for statistical analysis. Card sorts from cross-sections of participants were compared and contrasted using graph theory algorithms to calculate measures of average segment length of minimum spanning trees (orthogonality), to identify clusters of highly similar card sorts, and to reduce clusters down to individual exemplar card sorts. 
Variances in distance between the card sorts of cross-sections of participants and the identified exemplars were analyzed with one-way ANOVAs to evaluate differences in development of conceptual representations relative to coursework attainment and programming experience.

    Findings: Collections of structurally similar card sorts were found to align with categorizations identified in earlier studies of computer science education. A logistic regression identified two exemplar sorts representing deep factor categorizations that reliably predicted those participants most, and least, likely to have attained the desired level of programming skill. Measures of proximal distance between participants' card sorts and these two exemplars were found to decrease, indicating greater similarity, as students attained progressive coursework milestones. This finding suggests that proximal distances to exemplars of common categorizations for this stimuli set can effectively differentiate conceptual development levels of students between, as well as within, cross-sections selected by achievement of coursework milestones. Measures of proximal distance to one exemplar of deep factor categorization were found to increase, indicating less similarity, as participants’ levels of programming experience increased. This finding was contrary to the theoretical framework for skill acquisition. Further analysis found that variances in experience level as captured by the study instrument were not equally distributed among the cross-sections. The preponderance of participants reporting greater levels of experience were degree majors not required to enroll in the courses most likely to develop that specific conceptualization. Therefore, for this deep factor categorization, instruction was found to have a greater influence on conceptual development than programming experience.
However, it is possible that other categorizations, such as those related to software engineering technology, may be found to be more influenced by experience. The orthogonality of participant card sorts was found to increase with each category of increase in academic performance, in keeping with prior studies. Orthogonality also increased with greater levels of programming experience, as expected by the theoretical framework. However, since experience was not equally distributed across categories of coursework achievement, the relationship between the orthogonality of participant card sorts and milestones of coursework achievement was not found to be statistically significant overall. Based on the findings, the researcher concludes that a baseline of quantitative measures of computational thinking skills can be constructed based upon categorizations of elicited conceptual representations and associated exemplar card sorts. Eleven categorizations identified in a prior study of computer science seniors appear to represent reasonable expectations for deep factor categorizations. Follow-up research is recommended (a) to identify for each categorization the exemplar card sorts that may be specific to different degree majors, and (b) to identify which categorizations may be more influenced by programming experience than by instruction. Given an elicitation tool that prompts for the specific categorizations and a set of exemplar representations as proposed above, instructional programs can establish expected ranges of proximal distance measures to specific exemplars. These exemplars should be selected according to particular categorizations, degree majors, and coursework milestones. These differentiated measures will serve as evidence that students are meeting the instructional program learning objective for developing competency in the design and implementation of computer-based solutions.
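    The orthogonality measure described in the abstract, the average segment length of a minimum spanning tree over pairwise card-sort distances, can be sketched as follows. The tiny distance matrix here stands in for the study's edit-distance data and is purely illustrative; the MST is built with Prim's algorithm using only the standard library.

```python
def mst_average_edge(dist):
    """Average edge weight of a minimum spanning tree (Prim's
    algorithm) over a symmetric pairwise-distance matrix."""
    n = len(dist)
    in_tree = {0}
    total = 0.0
    while len(in_tree) < n:
        # Cheapest edge from the current tree to any outside node.
        w, j = min((dist[i][k], k)
                   for i in in_tree
                   for k in range(n) if k not in in_tree)
        in_tree.add(j)
        total += w
    return total / (n - 1)

# Toy symmetric matrix of pairwise card-sort distances (illustrative).
d = [
    [0, 2, 6, 5],
    [2, 0, 4, 7],
    [6, 4, 0, 3],
    [5, 7, 3, 0],
]
avg = mst_average_edge(d)  # MST edges 2, 4, 3 -> average 3.0
```

    A higher average edge length means the card sorts in a cross-section are more spread out (more orthogonal); clusters of highly similar sorts pull the average down.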

    Towards a Neuroscience of Computer Programming & Education: A thesis submitted in partial fulfilment of the requirements of the University of East Anglia for the degree of Doctor of Philosophy. Research undertaken in the School of Psychology, University of East Anglia.

    Computer programming is fast becoming a required part of school curricula, but students find the topic challenging and university dropout rates are high. Observations suggest that hands-on keyboard typing improves learning, but quantitative evidence for this is lacking and the mechanisms are still unclear. Here we study neural and behavioral processes of programming in general, and hands-on learning in particular. In project 1, we taught naïve teenagers programming in a classroom-like session, where one student in a pair typed code (Hands-on) while the other participated by discussion (Hands-off). They were scanned with fMRI 1-2 days later while evaluating written code, and their knowledge was tested again after a week. We find confidence and math grades to be important for learning, and easing of intrinsic inhibitions of parietal, temporal, and superior frontal activation to be a typical neural mechanism during programming, more so in stronger learners. Moreover, the left inferior frontal cortex plays a central role; the operculum integrates information from the dorsal and ventral streams and its intrinsic connectivity predicts confidence and long-term memory, while activity in Broca’s area also reflects deductive reasoning. Hands-on led to greater confidence and memory retention. In project 2, we investigated the impact of feedback on motivation and reaction time in a rule-switching task. We find that feedback targeting personal traits increasingly impairs performance and motivation over the course of the experiment, and that activity in the precentral gyrus and anterior insula decreases linearly over time during the personal feedback condition, implicating these areas in this effect. These findings promote hands-on learning and emphasize possibilities for feedback interventions on motivation.
Future studies should investigate interventions for increasing Need for Cognition, the relationship between computer programming and second language learning (L2), and the role of explicit verbalization of knowledge for successful coding, given the language-like processing of code.