Catastrophic forgetting: still a problem for DNNs
We investigate the performance of DNNs when trained on class-incremental
visual problems consisting of initial training followed by retraining with
added visual classes. Catastrophic forgetting (CF) behavior is measured using
a new evaluation procedure that aims at an application-oriented view of
incremental learning. In particular, it imposes that model selection be
performed on the initial dataset alone, and that retraining control be
performed using only the retraining dataset, as the initial dataset is
usually too large to be kept. Experiments are conducted on class-incremental
problems derived from MNIST, using a variety of DNN models, some of them
recently proposed to avoid catastrophic forgetting. When comparing our new
evaluation procedure to previous approaches for assessing CF, we find that
their findings are completely negated and that none of the tested methods
can avoid CF in all experiments. This stresses the importance of a realistic
empirical measurement procedure for catastrophic forgetting and the need for
further research on incremental learning for DNNs.
Comment: 10 pages, 11 figures, Artificial Neural Networks and Machine Learning - ICANN 201
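The sequential train-then-retrain protocol the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: the `fit`/`score` callables, the task split, and the toy model are placeholders of my own, and real model selection on the initial dataset alone is omitted.

```python
import numpy as np

def incremental_accuracy(model, fit, score, tasks):
    """Train `model` sequentially on tasks and record, after each stage, its
    accuracy on every task seen so far; forgetting on a task is the drop from
    its best recorded accuracy to its final one."""
    history = []
    for t, (X, y) in enumerate(tasks):
        fit(model, X, y)  # retraining sees only the new task's data
        history.append([score(model, Xs, ys) for Xs, ys in tasks[: t + 1]])
    final = history[-1]
    best = [max(h[i] for h in history[i:]) for i in range(len(tasks))]
    return history, [b - f for b, f in zip(best, final)]

# Toy model that memorises only the latest task's label: total forgetting.
class LastLabelModel:
    label = None

tasks = [(np.zeros((4, 1)), np.zeros(4, int)),
         (np.zeros((4, 1)), np.ones(4, int))]
hist, forgetting = incremental_accuracy(
    LastLabelModel(),
    fit=lambda m, X, y: setattr(m, "label", y[0]),
    score=lambda m, X, y: float(np.mean(y == m.label)),
    tasks=tasks)
```

With the toy model, accuracy on the first task collapses to zero after retraining, the kind of catastrophic-forgetting signature the evaluation procedure is designed to expose.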
Keeping Context In Mind: Automating Mobile App Access Control with User Interface Inspection
Recent studies observe that the app foreground is the most striking component
influencing access control decisions on mobile platforms, as users tend to
deny permission requests that lack visible evidence. However, none of the
existing permission models provides a systematic approach that can
automatically answer the question: is the resource access indicated by the
app foreground? In this work, we present the design, implementation, and
evaluation of COSMOS, a context-aware mediation system that bridges the
semantic gap between foreground interaction and background access in order
to protect system integrity and user privacy. Specifically, COSMOS learns
from a large set of apps with similar functionalities and user interfaces to
construct generic models that detect outliers at runtime. It can be further
customized to satisfy specific user privacy preferences by continuously
evolving with user decisions. Experiments show that COSMOS achieves both
high precision and high recall in detecting malicious requests. We also
demonstrate the effectiveness of COSMOS in capturing specific user
preferences using decisions collected from 24 users, and illustrate that
COSMOS can be easily deployed on smartphones as a real-time guard with very
low performance overhead.
Comment: Accepted for publication in IEEE INFOCOM'201
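The idea of learning a generic model from many similar apps and flagging outlier requests can be illustrated with a deliberately simplified frequency model. This is my own toy stand-in, not COSMOS itself: the keyword features, threshold, and permission names are all assumptions for illustration.

```python
from collections import Counter, defaultdict

class ForegroundModel:
    """Toy stand-in for a 'generic model': learns, per visible UI keyword,
    how often each permission is requested while that keyword is on screen,
    and flags requests whose permission is rare for the current foreground."""
    def __init__(self, threshold=0.1):
        self.counts = defaultdict(Counter)
        self.threshold = threshold

    def fit(self, samples):
        # samples: iterable of (ui_keywords, requested_permission) pairs
        for keywords, perm in samples:
            for k in keywords:
                self.counts[k][perm] += 1

    def is_outlier(self, keywords, perm):
        # Pool the per-keyword permission counts for the visible foreground.
        seen = sum((self.counts[k] for k in keywords), Counter())
        total = sum(seen.values())
        return total == 0 or seen[perm] / total < self.threshold

model = ForegroundModel()
model.fit([(["camera", "photo"], "CAMERA")] * 9 + [(["camera"], "LOCATION")])
```

A real system would use richer UI features and continuous per-user refinement, as the abstract describes; the point here is only the outlier-detection framing.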
A hybrid method for the analysis of learner behaviour in active learning environments
Software-mediated learning requires adjustments to the teaching and learning process. In particular, active learning facilitated through interactive learning software differs from traditional instructor-oriented, classroom-based teaching. We present behaviour analysis techniques for Web-mediated learning. Motivation, acceptance of the learning approach and technology, learning organisation, and actual tool usage are aspects of behaviour that require different analysis techniques. A hybrid method combining survey methods and Web usage mining techniques can provide accurate and comprehensive analysis results. These techniques allow us to evaluate active learning approaches implemented in the form of Web tutorials.
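The usage-mining half of such a hybrid method boils down to aggregating tool-level activity from Web-server access logs (the survey half is joined separately). A minimal sketch, assuming common-log-format lines and taking the first path segment as the tool name; both assumptions are mine, not the paper's:

```python
import re
from collections import Counter

LOG = re.compile(r'(?P<user>\S+) .*"GET (?P<path>\S+)')

def tool_usage(log_lines):
    """Count requests per learning tool from access-log lines, using the
    first path segment (e.g. /tutorial/..., /quiz/...) as the tool name."""
    usage = Counter()
    for line in log_lines:
        m = LOG.search(line)
        if m:
            tool = m.group("path").strip("/").split("/")[0] or "home"
            usage[tool] += 1
    return usage

usage = tool_usage([
    'u1 - - "GET /tutorial/unit1 HTTP/1.0"',
    'u2 - - "GET /quiz/q3 HTTP/1.0"',
    'u1 - - "GET /tutorial/unit2 HTTP/1.0"',
])
```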
Learning Object Categories From Internet Image Searches
In this paper, we describe a simple approach to learning models of visual object categories from images gathered from Internet image search engines. The images for a given keyword are typically highly variable, with a large fraction being unrelated to the query term, and thus pose a challenging environment from which to learn. By training our models directly from Internet images, we remove the need to laboriously compile training data sets, required by most other recognition approaches; this opens up the possibility of learning object category models “on-the-fly.” We describe two simple approaches, derived from the probabilistic latent semantic analysis (pLSA) technique for text document analysis, that can be used to automatically learn object models from these data. We show two applications of the learned model: first, to rerank the images returned by the search engine, thus improving the quality of its results; and second, to recognize objects in other image data sets.
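The pLSA core the approaches derive from can be sketched as a plain EM factorisation of a word-by-document count matrix; for images, quantised visual words play the role of terms. This is the generic text-analysis form of pLSA, not the paper's adapted variants, and the feature-quantisation step is omitted:

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """EM for pLSA on a word-by-document count matrix:
    P(w, d) ≈ sum_z P(w|z) P(z|d)."""
    rng = np.random.default_rng(seed)
    W, D = counts.shape
    p_w_z = rng.random((W, n_topics)); p_w_z /= p_w_z.sum(0)
    p_z_d = rng.random((n_topics, D)); p_z_d /= p_z_d.sum(0)
    for _ in range(n_iter):
        joint = p_w_z[:, :, None] * p_z_d[None, :, :]         # W x Z x D
        post = joint / (joint.sum(1, keepdims=True) + 1e-12)  # E-step: P(z|w,d)
        weighted = counts[:, None, :] * post                  # expected counts
        p_w_z = weighted.sum(2); p_w_z /= p_w_z.sum(0)        # M-step
        p_z_d = weighted.sum(0); p_z_d /= p_z_d.sum(0)
    return p_w_z, p_z_d

# Two clearly separable "topics" across four documents (images).
counts = np.array([[5, 5, 0, 0], [5, 5, 0, 0],
                   [0, 0, 5, 5], [0, 0, 5, 5]], float)
p_w_z, p_z_d = plsa(counts, 2)
```

Reranking then amounts to sorting the returned images by `p_z_d` for the topic identified with the queried object.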
Increase Apparent Public Speaking Fluency By Speech Augmentation
Fluent and confident speech is desirable to every speaker, but professional
speech delivery requires a great deal of experience and practice. In this
paper, we propose a speech stream manipulation system that can help
non-professional speakers produce fluent, professional-sounding speech,
in turn contributing to better listener engagement and comprehension. We
propose to achieve this by manipulating the disfluencies in human speech,
such as the sounds 'uh' and 'um', filler words, and awkward long silences.
Given any unrehearsed speech, we segment and silence the filled pauses and
adjust the duration of the imposed silence, as well as of other long
('disfluent') pauses, using a predictive model learned from a professional
speech dataset. Finally, we output an audio stream in which the speaker
sounds more fluent, confident, and practiced than in the original recording.
According to our quantitative evaluation, we significantly increase the
fluency of speech by reducing the rate of pauses and fillers.
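The silence-and-redub step can be sketched on a raw sample array. This is a bare illustration under my own assumptions: disfluent spans are taken as given (the paper's detector is not reproduced), and the predictive pause-duration model is stood in for by a constant `target` length.

```python
import numpy as np

def smooth_speech(samples, rate, disfluencies, target=0.25):
    """Replace flagged disfluent regions (fillers like 'uh'/'um', long
    pauses) with a fixed short silence.  `disfluencies` is a list of
    (start_sec, end_sec) spans in the input audio."""
    out, cursor = [], 0
    gap = np.zeros(int(target * rate), dtype=samples.dtype)
    for start, end in sorted(disfluencies):
        out.append(samples[cursor:int(start * rate)])  # keep fluent speech
        out.append(gap)                                # doctored silence
        cursor = int(end * rate)
    out.append(samples[cursor:])
    return np.concatenate(out)

# One-second filler at t = 1 s in 4 s of audio, sampled at 10 Hz for brevity.
audio = np.arange(40.0)
cleaned = smooth_speech(audio, rate=10, disfluencies=[(1.0, 2.0)])
```

The flagged second is collapsed to a 0.25 s silence, shortening the stream while keeping the surrounding speech intact.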
iCaRL: Incremental Classifier and Representation Learning
A major open problem on the road to artificial intelligence is the
development of incrementally learning systems that learn about more and more
concepts over time from a stream of data. In this work, we introduce a new
training strategy, iCaRL, that allows learning in such a class-incremental way:
only the training data for a small number of classes has to be present at the
same time and new classes can be added progressively. iCaRL learns strong
classifiers and a data representation simultaneously. This distinguishes it
from earlier works that were fundamentally limited to fixed data
representations and therefore incompatible with deep learning architectures. We
show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can
learn many classes incrementally over a long period of time where other
strategies quickly fail.
Comment: Accepted paper at CVPR 201
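Two ingredients of iCaRL can be sketched compactly: herding-based exemplar selection and nearest-mean-of-exemplars classification. This is a simplified sketch, not the full method; the deep feature extractor, its L2 normalisation, and the distillation loss are omitted, and excluding already-chosen exemplars is my own simplification.

```python
import numpy as np

def herding_exemplars(features, m):
    """Greedy herding in the spirit of iCaRL: repeatedly pick the example
    whose addition keeps the running exemplar mean closest to the true
    class mean in feature space."""
    mu = features.mean(axis=0)
    acc = np.zeros_like(mu)
    taken = np.zeros(len(features), dtype=bool)
    chosen = []
    for k in range(1, m + 1):
        gains = np.linalg.norm(mu - (acc + features) / k, axis=1)
        gains[taken] = np.inf        # sketch choice: no repeated exemplars
        i = int(np.argmin(gains))
        taken[i] = True
        chosen.append(i)
        acc = acc + features[i]
    return chosen

def nearest_mean_class(x, class_means):
    """Nearest-mean-of-exemplars rule: predict the class whose exemplar
    mean is closest to the embedded input."""
    return int(np.argmin(np.linalg.norm(class_means - x, axis=1)))

features = np.array([[2., 0.], [0., 2.], [1., 1.]])
chosen = herding_exemplars(features, 2)   # first pick is the mean-like point
```

Because only a small exemplar set per class must be stored, new classes can be added without keeping the full training data around, which is the class-incremental property the abstract emphasises.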