Towards Personalized Learning using Counterfactual Inference for Randomized Controlled Trials
Personalized learning considers that the causal effects of a studied learning intervention may differ for the individual student (e.g., girls may do better with video hints while boys do better with text hints). To evaluate a learning intervention inside ASSISTments, we run a randomized controlled trial (RCT) by randomly assigning students to either a control condition or a treatment condition. Making inferences about the causal effects of the studied interventions is a central problem. Counterfactual inference answers "What if" questions, such as: would this particular student benefit more if given the video hint instead of the text hint when the student cannot solve a problem? Counterfactual prediction provides a way to estimate individual treatment effects and helps us assign students to the learning intervention that leads to better learning. A variant of Michael Jordan's Residual Transfer Networks was proposed for counterfactual inference. The model first uses feed-forward neural networks to learn a balancing representation of students by minimizing the distance between the distributions of the control and treated populations, and then adopts a residual block to estimate the individual treatment effect. Students in the RCT have usually done a number of problems prior to participating in it, so each student has a sequence of actions (a performance sequence). We proposed a pipeline that uses the performance sequence to improve counterfactual inference. Since deep learning has achieved great success in learning representations from raw logged data, student representations were learned by applying a sequence autoencoder to the performance sequences; these representations were then incorporated into the model for counterfactual inference. Empirical results showed that the representations learned from the sequence autoencoder improved the performance of counterfactual inference.
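The architecture the abstract describes — a feed-forward network that learns a balancing representation, followed by a residual block that carries the individual treatment effect — can be sketched roughly as follows. This is a minimal illustrative sketch: the layer sizes, the linear-MMD balance penalty, and all class and function names are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class CounterfactualNet:
    """Hypothetical sketch: a one-layer balancing representation
    plus a residual head for individual treatment effects (ITE)."""

    def __init__(self, d_in, d_rep=8):
        self.W1 = rng.normal(0.0, 0.1, (d_in, d_rep))
        self.b1 = np.zeros(d_rep)
        self.w_out = rng.normal(0.0, 0.1, d_rep)  # control-outcome head
        self.w_res = rng.normal(0.0, 0.1, d_rep)  # residual (treatment) block

    def represent(self, X):
        # Shared feed-forward representation of the students.
        return relu(X @ self.W1 + self.b1)

    def predict(self, X, treated):
        # Treated units get the control prediction plus a residual term.
        phi = self.represent(X)
        return phi @ self.w_out + treated * (phi @ self.w_res)

    def ite(self, X):
        # Estimated individual treatment effect is the residual term itself.
        return self.represent(X) @ self.w_res

def balance_penalty(phi_control, phi_treated):
    """Linear MMD: distance between the mean representations of the
    control and treated groups, the quantity minimized for balance."""
    return np.linalg.norm(phi_control.mean(axis=0) - phi_treated.mean(axis=0))
```

In training, the outcome loss and `balance_penalty` would be minimized jointly so that the representation distributions of the two arms are pushed together; by construction, the predicted treated and control outcomes for the same student differ exactly by the residual term, which is the estimated ITE.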
Building expert systems: cognitive emulation.
Chapter 1 briefly introduces the concept of cognitive emulation and outlines its current status. Chapter 2 reviews psychological research on human expert thinking. First, the study of expert thinking is placed in the context of modern cognitive psychology. Next, the principal methods and techniques employed by psychologists examining expert cognition are examined. The remainder of the chapter is given over to a review of the published literature on the nature and development of human expertise. Chapter 3 reviews the main arguments for and against cognitive emulation in expert system design. The tentative conclusion reached is that a significant degree of emulation is inevitable, but that a pure, unselective strategy of emulation is neither realistic nor desirable. Chapter 4 examines the prospects for cognitive emulation from a more pragmatic angle. Several factors are identified that represent constraints on the usefulness of a cognitive approach. However, a second set of factors is identified which should facilitate an emulation strategy, especially in the longer term. Some guidance is given on when to seriously consider adopting an emulation strategy. Chapter 5 presents a critical survey of expert system research that has already addressed the emulation issue. Six basic approaches to cognitive emulation are distinguished and evaluated. This helps draw out in more detail the implications of an emulation strategy for knowledge acquisition, knowledge representation and system architecture. The chapter concludes by discussing the issues that arise when different approaches to emulation are combined, and some guidance is offered on how this might be achieved. Chapter 6 summarizes the main themes and issues to have emerged, the design advice contained in the thesis, and the original contributions made by the thesis.
Data mining in soft computing framework: a survey
The present article provides a survey of the available literature on data mining using soft computing. A categorization has been provided based on the different soft computing tools and their hybridizations used, the data mining function implemented, and the preference criterion selected by the model. The utility of the different soft computing methodologies is highlighted. Generally, fuzzy sets are suitable for handling issues related to the understandability of patterns, incomplete/noisy data, mixed-media information and human interaction, and can provide approximate solutions faster. Neural networks are nonparametric and robust, and exhibit good learning and generalization capabilities in data-rich environments. Genetic algorithms provide efficient search mechanisms for selecting a model, from mixed-media data, based on some preference criterion/objective function. Rough sets are suitable for handling different types of uncertainty in data. Some challenges to data mining and to the application of soft computing methodologies are indicated. An extensive bibliography is also included.