
    Introducing a framework to assess newly created questions with Natural Language Processing

    Statistical models such as those derived from Item Response Theory (IRT) enable the assessment of students on a specific subject, which can be useful for several purposes (e.g., learning path customization, drop-out prediction). However, the questions have to be assessed as well and, although IRT can estimate the characteristics of questions that have already been answered by several students, this technique cannot be used on newly generated questions. In this paper, we propose a framework to train and evaluate models for estimating the difficulty and discrimination of newly created Multiple Choice Questions by extracting meaningful features from the text of the question and of the possible choices. We implement one model using this framework and test it on a real-world dataset provided by CloudAcademy, showing that it outperforms previously proposed models, reducing the RMSE by 6.7% for difficulty estimation and by 10.8% for discrimination estimation. We also present the results of an ablation study performed to support our choice of features and to show the effects of different characteristics of the questions' text on difficulty and discrimination.
    Comment: Accepted at the International Conference on Artificial Intelligence in Education
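    As a rough illustration of the general approach this abstract describes (not the authors' model), one can regress previously calibrated IRT difficulty on features extracted from question text and then score brand-new items that have no answer data. The TF-IDF features, ridge regressor, and toy data below are assumptions made for the sketch.

```python
# Minimal sketch: predicting IRT difficulty of new multiple-choice questions
# from text features alone. Feature set, regressor, and data are illustrative
# assumptions, not the framework proposed in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

# Hypothetical calibrated items: stem + choices, with IRT difficulty
# previously estimated from student answers (logit scale).
texts = [
    "What does HTTP stand for? A) ... B) ... C) ... D) ...",
    "Which layer of the OSI model handles routing? A) ... B) ... C) ... D) ...",
    "Which consistency guarantee does the CAP theorem trade away? A) ... B) ... C) ... D) ...",
    "What is 2 + 2? A) 3 B) 4 C) 5 D) 22",
]
difficulty = [-0.2, 0.6, 1.1, -1.5]

vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(texts)
model = Ridge(alpha=1.0).fit(X, difficulty)

# A newly written question has no response data, yet its difficulty can
# still be predicted from its text:
new_q = ["Which AWS service stores objects? A) S3 B) EC2 C) VPC D) IAM"]
print(model.predict(vec.transform(new_q)))

# RMSE is the metric the paper uses for its comparisons (here, in-sample):
rmse = mean_squared_error(difficulty, model.predict(X)) ** 0.5
print(f"RMSE: {rmse:.3f}")
```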

    Environmental analysis of the chemical release module

    The environmental analysis of the Chemical Release Module (a free-flying spacecraft deployed from the Space Shuttle to perform chemical release experiments) is reviewed. Considerations of possible effects of the injectants on human health, the ionosphere, weather, ground-based optical astronomical observations, and satellite operations are included. It is concluded that no deleterious environmental effects of a widespread or long-lasting nature are anticipated from chemical releases in the upper atmosphere of the type indicated for the program.

    Deep Knowledge Tracing is an implicit dynamic multidimensional item response theory model

    Knowledge tracing consists of predicting the performance of students on new questions given their performance on previous questions, and can be a prior step to optimizing assessment and learning. Deep knowledge tracing (DKT) is a competitive model for knowledge tracing relying on recurrent neural networks, even though some simpler models may match its performance. However, little is known about why DKT works so well. In this paper, we frame deep knowledge tracing as an encoder-decoder architecture. This viewpoint not only allows us to propose better models in terms of performance, simplicity, or expressivity, but also opens up promising avenues for future research. In particular, we show on several small and large datasets that a simpler decoder, with possibly fewer parameters than the one used by DKT, can predict student performance better.
    Comment: ICCE 2023 - The 31st International Conference on Computers in Education, Asia-Pacific Society for Computers in Education, Dec 2023, Matsue, Shimane, Japan
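    A minimal sketch of the encoder-decoder framing, assuming PyTorch: a recurrent encoder summarizes the answer history into a hidden state, and a deliberately simple linear decoder maps that state to per-item success probabilities. Layer sizes, names, and the interaction encoding are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class EncoderDecoderKT(nn.Module):
    def __init__(self, n_items: int, hidden: int = 64):
        super().__init__()
        # Each interaction is an (item, correctness) pair -> 2 * n_items ids.
        self.embed = nn.Embedding(2 * n_items, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)  # history -> state
        self.decoder = nn.Linear(hidden, n_items)                # state -> logits

    def forward(self, items, correct):
        x = self.embed(items * 2 + correct)  # encode each interaction
        h, _ = self.encoder(x)               # hidden state after every step
        return self.decoder(h)               # success logit for every item

# Toy batch of answer sequences (random data, for shape-checking only).
n_items, batch, seq = 50, 8, 20
model = EncoderDecoderKT(n_items)
items = torch.randint(0, n_items, (batch, seq))
correct = torch.randint(0, 2, (batch, seq))
logits = model(items, correct)

# Train to predict the next answer from the state after the previous step.
target = correct[:, 1:].float()
pred = logits[:, :-1].gather(2, items[:, 1:].unsqueeze(-1)).squeeze(-1)
loss = nn.functional.binary_cross_entropy_with_logits(pred, target)
loss.backward()
```

    Swapping the linear decoder for a richer one, or the GRU for another encoder, is the kind of controlled variation the abstract's encoder-decoder viewpoint makes easy to express.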

    Subjective Causality and Counterfactuals in the Social Sciences

    The article explores the role that subjective evidence of causality, and the associated counterfactuals and counterpotentials, might play in the social sciences where comparative cases are scarce. This scarcity rules out statistical inference based upon frequencies and usually invites in-depth ethnographic studies. Thus, if causality is to be preserved in such situations, a conception of ethnographic causal inference is required. Ethnographic causality inverts the standard statistical concept of causal explanation in observational studies, whereby comparison and generalization across a sample of cases are both necessary prerequisites for any causal inference. Ethnographic causality allows, in contrast, for causal explanation prior to any subsequent comparison or generalization.

    Cold Storage Data Archives: More Than Just a Bunch of Tapes

    The abundance of available sensor and derived data from large scientific experiments, such as Earth observation programs, radio astronomy sky surveys, and high-energy physics, already exceeds the storage hardware fabricated globally per year. To that end, cold storage data archives are the often overlooked spearheads of modern big data analytics in scientific, data-intensive application domains. While high-performance data analytics has received much attention from the research community, the growing number of problems in designing and deploying cold storage archives has received very little attention. In this paper, we take a first step towards bridging this gap by presenting an analysis of four real-world cold storage archives from three different application domains. In doing so, we highlight (i) workload characteristics that differentiate these archives from traditional, performance-sensitive data analytics, (ii) design trade-offs involved in building cold storage systems for these archives, and (iii) deployment trade-offs with respect to migration to the public cloud. Based on our analysis, we discuss several other important research challenges that need to be addressed by the data management community.

    Galactic and Magellanic Evolution with the SKA

    As we strive to understand how galaxies evolve, it is crucial that we resolve physical processes and test emerging theories in nearby systems that we can observe in great detail. Our own Galaxy, the Milky Way, and the nearby Magellanic Clouds provide unique windows into the evolution of galaxies, each with its own metallicity and star formation rate. These laboratories allow us to study, in more detail than anywhere else in the Universe, how galaxies acquire fresh gas to fuel their continuing star formation, how they exchange gas with the surrounding intergalactic medium, and how they turn warm, diffuse gas into molecular clouds and ultimately stars. The λ21-cm line of atomic hydrogen (HI) is an excellent tracer of these physical processes. With the SKA we will finally have the combination of surface brightness sensitivity, point source sensitivity, and angular resolution to transform our understanding of the evolution of gas in the Milky Way, all the way from the halo down to the formation of individual molecular clouds.
    Comment: 25 pages, from "Advancing Astrophysics with the Square Kilometre Array", to appear in Proceedings of Science
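    For context, the "21 cm" in the line's name follows directly from the frequency of the hyperfine transition of neutral hydrogen, ν ≈ 1420.4 MHz; converting frequency to wavelength:

```latex
\lambda = \frac{c}{\nu}
        = \frac{2.998 \times 10^{8}\,\mathrm{m\,s^{-1}}}{1.4204 \times 10^{9}\,\mathrm{Hz}}
        \approx 0.211\,\mathrm{m} \approx 21\,\mathrm{cm}
```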

    Knowledge Tracing: A Review of Available Technologies

    As a student modeling technique, knowledge tracing is widely used by various intelligent tutoring systems to infer and trace an individual's knowledge state during the learning process. In recent years, various models have been proposed to get accurate and easy-to-interpret results. To make sense of the wide knowledge tracing (KT) modeling landscape, this paper conducts a systematic review to provide a detailed and nuanced discussion of relevant KT techniques from the perspective of assumptions, data, and algorithms. The results show that most existing KT models consider only a fragment of the assumptions that relate to the knowledge components within items and students' cognitive processes. Almost all types of KT models take "quiz data" as input, although it is insufficient to reflect a clear picture of students' learning process. Dynamic Bayesian networks, logistic regression, and deep learning are the main algorithms used by various knowledge tracing models. Some open issues are identified based on an analysis of the reviewed works, and potential future research directions are discussed.
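    To make the dynamic Bayesian network family mentioned above concrete, below is a minimal sketch of the classic Bayesian Knowledge Tracing update, the simplest model of that family; the slip, guess, and learn parameter values are illustrative assumptions, not fits from any dataset.

```python
# Minimal sketch of Bayesian Knowledge Tracing (BKT), the classic dynamic
# Bayesian network KT model. Parameter values are illustrative assumptions.
def bkt_update(p_know: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2, learn: float = 0.15) -> float:
    """Return P(skill known) after observing one answer."""
    if correct:
        # Bayes rule: a correct answer comes from knowing (and not slipping)
        # or from guessing.
        posterior = p_know * (1 - slip) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        # An incorrect answer comes from slipping or from not knowing.
        posterior = p_know * slip / (
            p_know * slip + (1 - p_know) * (1 - guess))
    # Transition: the student may learn the skill between opportunities.
    return posterior + (1 - posterior) * learn

p = 0.3  # prior probability the skill is already known
for obs in [True, False, True, True]:  # one student's answer sequence
    p = bkt_update(p, obs)
    print(f"P(known) = {p:.3f}")
```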