Memory Aware Synapses: Learning what (not) to forget
Humans can learn in a continuous manner. Old rarely utilized knowledge can be
overwritten by new incoming information while important, frequently used
knowledge is prevented from being erased. In artificial learning systems,
lifelong learning so far has focused mainly on accumulating knowledge over
tasks and overcoming catastrophic forgetting. In this paper, we argue that,
given the limited model capacity and the unlimited new information to be
learned, knowledge has to be preserved or erased selectively. Inspired by
neuroplasticity, we propose a novel approach for lifelong learning, coined
Memory Aware Synapses (MAS). It computes the importance of the parameters of a
neural network in an unsupervised and online manner. Given a new sample which
is fed to the network, MAS accumulates an importance measure for each parameter
of the network, based on how sensitive the predicted output function is to a
change in this parameter. When learning a new task, changes to important
parameters can then be penalized, effectively preventing important knowledge
related to previous tasks from being overwritten. Further, we show an
interesting connection between a local version of our method and Hebb's
rule, which is a model for the learning process in the brain. We test our method
on a sequence of object recognition tasks and on the challenging problem of
learning an embedding for predicting triplets.
We show state-of-the-art performance and, for the first time, the ability to
adapt the importance of the parameters based on unlabeled data towards what the
network needs (not) to forget, which may vary depending on test conditions.
Comment: ECCV 201
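The importance estimate described above can be sketched for a toy linear model. This is a minimal illustrative sketch, not the paper's implementation: it accumulates, per parameter, the magnitude of the gradient of the squared L2 norm of the network output with respect to that parameter, over unlabeled samples; the penalty term and the variable names (`omega`, `lam`) are assumptions for illustration.

```python
import numpy as np

# Toy sketch of the MAS importance estimate for a single-layer
# linear model f(x) = W x (illustrative, not the paper's code).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))          # model parameters
X = rng.normal(size=(100, 5))        # unlabeled samples

omega = np.zeros_like(W)             # per-parameter importance accumulator
for x in X:
    y = W @ x                        # forward pass
    # d/dW of ||f(x)||^2 = 2 * y * x^T for a linear model
    grad = 2.0 * np.outer(y, x)
    omega += np.abs(grad)            # accumulate gradient magnitude
omega /= len(X)

# When learning a new task, changes to important parameters are penalized:
#   L = L_new + lam * sum(omega * (W - W_old)^2)
lam, W_old = 1.0, W.copy()

def mas_penalty(W_new):
    return lam * np.sum(omega * (W_new - W_old) ** 2)
```

The penalty is zero at the old parameter values and grows fastest along directions the output was most sensitive to, which is the mechanism that protects knowledge from previous tasks.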
A Transfer Learning Approach for Cache-Enabled Wireless Networks
Locally caching contents at the network edge constitutes one of the most
disruptive approaches in 5G wireless networks. Reaping the benefits of edge
caching hinges on solving a myriad of challenges such as how, what and when to
strategically cache contents subject to storage constraints, traffic load,
unknown spatio-temporal traffic demands and data sparsity. Motivated by this,
we propose a novel transfer learning-based caching procedure carried out at
each small cell base station. This is done by exploiting the rich contextual
information (i.e., users' content viewing history, social ties, etc.) extracted
from device-to-device (D2D) interactions, referred to as source domain. This
prior information is incorporated in the so-called target domain where the goal
is to optimally cache strategic contents at the small cells as a function of
storage, estimated content popularity, traffic load and backhaul capacity. It
is shown that the proposed approach overcomes the notorious data sparsity and
cold-start problems, yielding significant gains in users'
quality-of-experience (QoE) and backhaul offloading in a setting consisting of
four small cell base stations.
Comment: some small fixes in notation
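The transfer idea above can be sketched in a few lines. This is an illustrative simplification, not the paper's actual procedure: it blends a content-popularity prior learned in the source domain (D2D viewing history) with sparse request counts observed at a small cell, then fills the cache up to a storage budget; the blending weight `alpha` and the function name are assumptions.

```python
import numpy as np

def cache_decision(source_popularity, target_counts, alpha=0.7, capacity=2):
    """Rank contents by a convex combination of the source-domain prior
    and normalized target-domain observations; return cached indices."""
    target = target_counts / max(target_counts.sum(), 1)
    score = alpha * source_popularity + (1 - alpha) * target
    return np.argsort(score)[::-1][:capacity]   # top-`capacity` contents

source_prior = np.array([0.5, 0.1, 0.3, 0.1])   # from D2D interactions
observed = np.array([0, 4, 1, 0])               # sparse local requests
cached = cache_decision(source_prior, observed)  # -> indices [0, 1]
```

The source-domain prior dominates when local observations are sparse, which is how this kind of scheme sidesteps the cold-start problem.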
DNA Steganalysis Using Deep Recurrent Neural Networks
Recent advances in next-generation sequencing technologies have facilitated
the use of deoxyribonucleic acid (DNA) as a novel covert channel in
steganography. There are various methods that exist in other domains to detect
hidden messages in conventional covert channels. However, they have not been
applied to DNA steganography. The current most common detection approaches,
namely frequency analysis-based methods, often overlook important signals when
directly applied to DNA steganography, because those methods depend only on the
distribution of character frequencies in a sequence. To address this limitation,
we propose a general sequence learning-based DNA steganalysis framework. The
proposed approach learns the intrinsic distribution of coding and non-coding
sequences and detects hidden messages by exploiting distribution variations
after hiding these messages. Using deep recurrent neural networks (RNNs), our
framework identifies the distribution variations by using the classification
score to predict whether a sequence is a coding or non-coding sequence.
We compare our proposed method to various existing methods and biological
sequence analysis methods implemented on top of our framework. According to our
experimental results, our approach delivers a robust detection performance
compared to other tools.
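The classification signal described above can be sketched with a vanilla recurrent cell. This is a minimal illustration with random, untrained weights, not the paper's trained model: an RNN steps through a one-hot-encoded DNA sequence and emits a coding-vs-non-coding score; all weight shapes and names here are assumptions.

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    """Encode a DNA string as a (length, 4) one-hot matrix."""
    x = np.zeros((len(seq), 4))
    x[np.arange(len(seq)), [BASES[b] for b in seq]] = 1.0
    return x

rng = np.random.default_rng(1)
Wxh = rng.normal(scale=0.1, size=(4, 8))   # input-to-hidden weights
Whh = rng.normal(scale=0.1, size=(8, 8))   # hidden-to-hidden weights
Why = rng.normal(scale=0.1, size=(8, 1))   # hidden-to-output weights

def coding_score(seq):
    """Run the RNN over the sequence; return a score in (0, 1)."""
    h = np.zeros(8)
    for x in one_hot(seq):                 # one recurrent step per base
        h = np.tanh(x @ Wxh + h @ Whh)
    return 1.0 / (1.0 + np.exp(-(h @ Why)[0]))  # sigmoid readout

score = coding_score("ACGTACGGTTCA")
```

In the framework described above, a shift in these scores between clean and message-carrying sequences is what exposes the hidden payload.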
Attributes of Big Data Analytics for Data-Driven Decision Making in Cyber-Physical Power Systems
Big data analytics is a relatively new term in power system terminology. The concept delves into the way a massive volume of data is acquired, processed, and analyzed to extract insight. In particular, big data analytics alludes to applications of artificial intelligence, machine learning techniques, data mining techniques, and time-series forecasting methods. Decision-makers in power systems have long been plagued by the weakness of classical methods in dealing with large-scale practical cases, owing to the existence of thousands or millions of variables, long running times, high computational burden, divergence of results, unjustifiable errors, and poor model accuracy. Big data analytics is an ongoing topic, which pinpoints how to extract insights from these large data sets. This article enumerates the applications of big data analytics in future power systems through several layers, from grid scale to local scale. Big data analytics has many applications in the areas of smart grid implementation, electricity markets, execution of collaborative operation schemes, enhancement of microgrid operation autonomy, management of electric vehicle operations in smart grids, active distribution network control, district hub system management, multi-agent energy systems, electricity theft detection, stability and security assessment by PMUs, and better exploitation of renewable energy sources. The employment of big data analytics entails some prerequisites, such as the proliferation of IoT-enabled devices, easily accessible cloud space, and blockchain. This paper comprehensively reviews the applications of big data analytics along with the prevailing challenges and solutions.
Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes
I argue that data becomes temporarily interesting by itself to some
self-improving, but computationally limited, subjective observer once he learns
to predict or compress the data in a better way, thus making it subjectively
simpler and more beautiful. Curiosity is the desire to create or discover more
non-random, non-arbitrary, regular data that is novel and surprising not in the
traditional sense of Boltzmann and Shannon but in the sense that it allows for
compression progress because its regularity was not yet known. This drive
maximizes interestingness, the first derivative of subjective beauty or
compressibility, that is, the steepness of the learning curve. It motivates
exploring infants, pure mathematicians, composers, artists, dancers, comedians,
yourself, and (since 1990) artificial systems.
Comment: 35 pages, 3 figures, based on KES 2008 keynote and ALT 2007 / DS 2007
joint invited lecture
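The "first derivative of compressibility" idea above can be made concrete with a toy observer. This is an illustrative sketch, not from the paper: the observer's model of the data is a zlib compressor, its subjective complexity of the data is the compressed length, and "learning" is modeled as switching to a stronger compression level; the intrinsic reward at each step is the resulting drop in compressed length.

```python
import zlib

def compressed_len(data: bytes, level: int) -> int:
    """Subjective complexity of `data` under a compressor of given strength."""
    return len(zlib.compress(data, level))

data = b"ab" * 500                      # highly regular, hence compressible

# "Learning progress" modeled as improving the compressor (levels 0 -> 1 -> 9);
# the reward at each step is the drop in compressed length.
sizes = [compressed_len(data, lv) for lv in (0, 1, 9)]
progress = [sizes[i] - sizes[i + 1] for i in range(len(sizes) - 1)]
# Positive progress: the data just became subjectively simpler,
# i.e. interesting in the sense described above.
```

Random data would yield near-zero progress at every step, matching the claim that curiosity targets regular-but-not-yet-compressed data rather than noise.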