Analyzing collaborative learning processes automatically
In this article we describe the emerging area of text classification research focused on the problem of collaborative learning process analysis, both from a broad perspective and more specifically in terms of a publicly available tool set called TagHelper tools. Analyzing the variety of pedagogically valuable facets of learners’ interactions is a time-consuming and effortful process. Improving automated analyses of such highly valued processes of collaborative learning by adapting and applying recent text classification technologies would make it a less arduous task to obtain insights from corpus data. This endeavor also holds the potential for enabling substantially improved online instruction, both by providing teachers and facilitators with reports about the groups they are moderating and by triggering context-sensitive collaborative learning support on an as-needed basis. In this article, we report on an interdisciplinary research project that has been investigating the effectiveness of applying text classification technology to a large CSCL corpus that has been analyzed by human coders using a theory-based multidimensional coding scheme. We report promising results and include an in-depth discussion of important issues such as reliability, validity, and efficiency that should be considered when deciding on the appropriateness of adopting a new technology such as TagHelper tools. One major technical contribution of this work is a demonstration that an important piece of the work towards making text classification technology effective for this purpose is designing and building linguistic pattern detectors, otherwise known as features, that can be extracted reliably from texts and that have high predictive power for the categories of discourse actions that the CSCL community is interested in.
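The "linguistic pattern detectors" this abstract highlights can be illustrated with a minimal sketch: hand-written feature functions mapping a dialogue segment to a binary feature vector for a downstream classifier. The cue-word lists and feature names below are invented for the sketch; they are not TagHelper's actual feature set.

```python
import re

# Illustrative feature detectors for discourse-action classification.
# The cue lists are assumptions made for this sketch, not TagHelper's.
FEATURES = {
    "is_question":  lambda s: s.rstrip().endswith("?"),
    "agreement":    lambda s: bool(re.search(r"\b(agree|yes|right|exactly)\b", s.lower())),
    "disagreement": lambda s: bool(re.search(r"\b(disagree|no|but|however)\b", s.lower())),
    "first_person": lambda s: bool(re.search(r"\b(i|we|my|our)\b", s.lower())),
}

def extract_features(segment):
    """Map a dialogue segment to a binary feature vector (dict of 0/1)."""
    return {name: int(detect(segment)) for name, detect in FEATURES.items()}
```

For example, `extract_features("Yes, I agree with your point.")` fires the agreement and first-person detectors but not the question detector; a learned model would then predict the coding-scheme category from such vectors.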
Hierarchical testing designs for pattern recognition
We explore the theoretical foundations of a ``twenty questions'' approach to
pattern recognition. The object of the analysis is the computational process
itself rather than probability distributions (Bayesian inference) or decision
boundaries (statistical learning). Our formulation is motivated by applications
to scene interpretation in which there are a great many possible explanations
for the data, one (``background'') is statistically dominant, and it is
imperative to restrict intensive computation to genuinely ambiguous regions.
The focus here is then on pattern filtering: given a large set Y of possible
patterns or explanations, narrow down the true one Y to a small (random) subset
\hat Y \subset Y of ``detected'' patterns to be subjected to further, more
intense processing. To this end, we consider a family of hypothesis tests for
Y \in A versus the nonspecific alternatives Y \in A^c. Each test has null type I
error and the candidate sets A \subset Y are arranged in a hierarchy of nested
partitions. These tests are then characterized by scope (|A|), power (or type
II error) and algorithmic cost. We consider sequential testing strategies in
which decisions are made iteratively, based on past outcomes, about which test
to perform next and when to stop testing. The set \hat Y is then taken to be
the set of patterns that have not been ruled out by the tests performed. The
total cost of a strategy is the sum of the ``testing cost'' and the
``postprocessing cost'' (proportional to |\hat Y|) and the corresponding
optimization problem is analyzed.

Comment: Published at http://dx.doi.org/10.1214/009053605000000174 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
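The coarse-to-fine strategy the abstract describes can be sketched as a search over a nested binary partition of the pattern set: a cell that a test rules out is discarded with its whole subtree, and the detected set hat-Y is the leaves that survive. The binary splitting, the oracle test, and all names below are illustrative assumptions, not the paper's formal construction.

```python
def coarse_to_fine_detect(leaves, test):
    """Sequential testing over a nested binary partition of `leaves`.

    `test(A)` returns True if the cell A may contain the true pattern;
    null type I error means it never returns False when it does.
    Returns the detected subset (hat-Y) of surviving single patterns.
    """
    detected = []
    stack = [list(leaves)]          # start with the full candidate set Y
    while stack:
        cell = stack.pop()
        if not test(cell):          # ruled out: prune the whole subtree
            continue
        if len(cell) == 1:          # a surviving leaf joins hat-Y
            detected.append(cell[0])
        else:                       # refine: split the cell in two
            mid = len(cell) // 2
            stack.append(cell[:mid])
            stack.append(cell[mid:])
    return detected
```

With 16 candidate patterns, a true pattern 5, and a toy oracle that also (falsely) passes cells containing the decoy 11, the search returns exactly {5, 11} while testing far fewer cells than exhaustive leaf-by-leaf checking, mirroring the testing-cost vs. postprocessing-cost trade-off analyzed in the paper.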
Linking recorded data with emotive and adaptive computing in an eHealth environment
Telecare, and particularly lifestyle monitoring, currently relies on the ability to detect and respond to changes in individual behaviour using data derived from sensors around the home. This means that a significant aspect of behaviour, that of an individual's emotional state, is not accounted for in reaching a conclusion as to the form of response required. The linked concepts of emotive and adaptive computing offer an opportunity to include information about emotional state, and the paper considers how current developments in this area have the potential to be integrated within telecare and other areas of eHealth. In doing so, it looks at the development and current state of the art of both emotive and adaptive computing, including their conceptual background, and places them in an overall eHealth context for application and development.
Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing
Deep neural networks (DNNs) have been shown to be useful in a wide range of
applications. However, they are also known to be vulnerable to adversarial
samples. By transforming a normal sample with some carefully crafted human
imperceptible perturbations, even highly accurate DNNs make wrong decisions.
Multiple defense mechanisms have been proposed which aim to hinder the
generation of such adversarial samples. However, recent work shows that most
of them are ineffective. In this work, we propose an alternative approach to
detect adversarial samples at runtime. Our main observation is that adversarial
samples are much more sensitive than normal samples if we impose random
mutations on the DNN. We thus first propose a measure of `sensitivity' and show
empirically that normal samples and adversarial samples have distinguishable
sensitivity. We then integrate statistical hypothesis testing and model
mutation testing to check whether an input sample is likely to be normal or
adversarial at runtime by measuring its sensitivity. We evaluated our approach
on the MNIST and CIFAR10 datasets. The results show that our approach detects
adversarial samples generated by state-of-the-art attack methods efficiently
and accurately.

Comment: Accepted by ICSE 201
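The mutation-sensitivity idea can be illustrated on a toy model rather than a DNN: randomly perturb a linear classifier's weights and measure how often a given input's predicted label flips. Points sitting near the decision boundary, as adversarial samples tend to, flip far more often than points deep inside a class region. The model, mutation operator (Gaussian weight noise), and all parameter values below are assumptions for this sketch, not the paper's DNN mutation operators.

```python
import random

def predict(w, x):
    # Toy linear "model": class 1 iff w . x >= 0.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0

def sensitivity(w, x, n_mutants=200, scale=0.3, seed=0):
    """Label-change rate of input x under random weight mutations.

    Gaussian noise of std `scale` stands in for the paper's model
    mutation operators; n_mutants/scale are illustrative choices.
    """
    rng = random.Random(seed)
    base = predict(w, x)
    changed = 0
    for _ in range(n_mutants):
        w_mut = [wi + rng.gauss(0, scale) for wi in w]
        if predict(w_mut, x) != base:
            changed += 1
    return changed / n_mutants

w = [1.0, -1.0]
normal = [3.0, -3.0]        # far from the decision boundary
suspicious = [0.05, 0.05]   # right on it, like an adversarial sample
```

Here `sensitivity(w, suspicious)` comes out near 0.5 while `sensitivity(w, normal)` is near 0, so a simple threshold (or, as in the paper, a sequential hypothesis test over mutant outcomes) separates the two at runtime.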