Extreme Entropy Machines: Robust information theoretic classification
Most existing classification methods aim to minimize empirical risk (a simple
point-based error measured with a loss function) together with an added
regularization term. We propose a more information-theoretic approach that
investigates the applicability of entropy measures as a classification model's
objective function. We focus on quadratic Renyi's entropy and the connected
Cauchy-Schwarz Divergence, which lead to the construction of Extreme Entropy
Machines (EEM).
The main contribution of this paper is a model based on information-theoretic
concepts which, on the one hand, offers a new, entropic perspective on known
linear classifiers and, on the other, leads to the construction of a very
robust method competitive with state-of-the-art non-information-theoretic ones
(including Support Vector Machines and Extreme Learning Machines).
Evaluation on numerous problems, spanning from small, simple ones from the UCI
repository to large (hundreds of thousands of samples), extremely unbalanced
(up to 100:1 class ratios) datasets, shows the wide applicability of the EEM
to real-life problems and that it scales well.
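The two quantities named in the abstract have simple Parzen-window estimators: for samples drawn from densities p and q, the integral of p·q can be estimated by averaging Gaussian kernels over all sample pairs, which gives both the quadratic Rényi entropy H2(p) = -log ∫p² and the Cauchy-Schwarz divergence D_CS(p,q) = -log((∫pq)² / (∫p² ∫q²)). A minimal sketch (the function names and the fixed kernel width `sigma` are illustrative, not the paper's EEM itself):

```python
import numpy as np

def gaussian_cross_sum(X, Y, sigma=1.0):
    # Parzen estimate of ∫ p(x) q(x) dx: mean of Gaussian kernels over all
    # sample pairs; convolving two width-sigma Gaussians gives width sqrt(2)*sigma.
    d = X.shape[1]
    sq = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    norm = (4.0 * np.pi * sigma ** 2) ** (-d / 2.0)
    return norm * np.exp(-sq / (4.0 * sigma ** 2)).mean()

def quadratic_renyi_entropy(X, sigma=1.0):
    # H2(p) = -log ∫ p(x)^2 dx
    return -np.log(gaussian_cross_sum(X, X, sigma))

def cauchy_schwarz_divergence(X, Y, sigma=1.0):
    # D_CS(p, q) = -log( (∫ p q)^2 / (∫ p^2 ∫ q^2) ); zero iff p = q
    pq = gaussian_cross_sum(X, Y, sigma)
    pp = gaussian_cross_sum(X, X, sigma)
    qq = gaussian_cross_sum(Y, Y, sigma)
    return -np.log(pq ** 2 / (pp * qq))
```

D_CS vanishes when the two sample sets coincide and grows as the class-conditional densities separate, which is what makes it usable as a classification objective.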
An Affect-Rich Neural Conversational Model with Biased Attention and Weighted Cross-Entropy Loss
Affect conveys important implicit information in human communication. Having
the capability to correctly express affect during human-machine conversations
is one of the major milestones in artificial intelligence. In recent years,
extensive research on open-domain neural conversational models has been
conducted. However, embedding affect into such models is still underexplored.
In this paper, we propose an end-to-end affect-rich open-domain neural
conversational model that produces responses not only appropriate in syntax and
semantics, but also with rich affect. Our model extends the Seq2Seq model and
adopts VAD (Valence, Arousal and Dominance) affective notations to embed each
word with affects. In addition, our model considers the effect of negators and
intensifiers via a novel affective attention mechanism, which biases attention
towards affect-rich words in input sentences. Lastly, we train our model with
an affect-incorporated objective function to encourage the generation of
affect-rich words in the output responses. Evaluations based on both perplexity
and human judgments show that our model outperforms the state-of-the-art
baseline model of comparable size in producing natural and affect-rich
responses.
Comment: AAAI-1
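The affect-incorporated objective described above can be illustrated with a toy weighted cross-entropy: each target word's loss term is scaled up according to how far its VAD (Valence, Arousal, Dominance) coordinates lie from the neutral point, nudging the decoder toward affect-rich vocabulary. A minimal sketch, with a hypothetical three-word VAD lexicon and an illustrative weighting `1 + beta * distance` (the paper's exact weighting scheme is not reproduced here):

```python
import numpy as np

# Hypothetical VAD lexicon: word -> (valence, arousal, dominance), each in [1, 9]
VAD = {"happy": (8.2, 6.5, 7.0), "sad": (2.1, 4.0, 3.3), "the": (5.0, 5.0, 5.0)}
NEUTRAL = np.array([5.0, 5.0, 5.0])

def affect_weight(word, beta=0.1):
    # Words far from the neutral VAD point receive a larger loss weight;
    # unknown words fall back to neutral (weight 1.0).
    vad = np.array(VAD.get(word, tuple(NEUTRAL)))
    return 1.0 + beta * float(np.linalg.norm(vad - NEUTRAL))

def weighted_cross_entropy(log_probs, targets):
    # log_probs: dict word -> model log-probability at one decoding step
    # (a toy stand-in for the decoder's softmax output)
    return sum(-affect_weight(w) * log_probs[w] for w in targets)
```

Under this weighting, assigning low probability to an emotional target word ("happy") is penalized more heavily than doing the same for a neutral one ("the").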
History of art paintings through the lens of entropy and complexity
Art is the ultimate expression of human creativity that is deeply influenced
by the philosophy and culture of the corresponding historical epoch. The
quantitative analysis of art is therefore essential for better understanding
human cultural evolution. Here we present a large-scale quantitative analysis
of almost 140 thousand paintings, spanning nearly a millennium of art history.
Based on the local spatial patterns in the images of these paintings, we
estimate the permutation entropy and the statistical complexity of each
painting. These measures map the degree of visual order of artworks into a
scale of order-disorder and simplicity-complexity that locally reflects
qualitative categories proposed by art historians. The dynamical behavior of
these measures reveals a clear temporal evolution of art, marked by transitions
that agree with the main historical periods of art. Our research shows that
different artistic styles have a distinct average degree of entropy and
complexity, thus allowing a hierarchical organization and clustering of styles
according to these metrics. We have further verified that the identified groups
correspond well with the textual content used to qualitatively describe the
styles, and that the employed complexity-entropy measures can be used for an
effective classification of artworks.
Comment: 10 two-column pages, 5 figures; accepted for publication in PNAS
[supplementary information available at
http://www.pnas.org/highwire/filestream/824089/field_highwire_adjunct_files/0/pnas.1800083115.sapp.pdf]
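The complexity-entropy measures used above follow the Bandt-Pompe construction: estimate the distribution of ordinal (rank-order) patterns, take its normalized Shannon entropy, and multiply by the Jensen-Shannon distance to the uniform distribution to get the statistical complexity. A minimal 1D sketch (the paper applies 2D ordinal patterns to pixel neighborhoods; the function names here are illustrative):

```python
import math
from collections import Counter
from itertools import permutations

def ordinal_distribution(series, d=3):
    # Frequency of each rank-order pattern among d consecutive values
    n = len(series) - d + 1
    counts = Counter(
        tuple(sorted(range(d), key=lambda k: series[i + k])) for i in range(n)
    )
    return [counts.get(p, 0) / n for p in permutations(range(d))]

def shannon(p):
    # Shannon entropy in nats
    return -sum(x * math.log(x) for x in p if x > 0)

def complexity_entropy(series, d=3):
    P = ordinal_distribution(series, d)
    N = math.factorial(d)
    H = shannon(P) / math.log(N)                      # normalized permutation entropy
    U = [1.0 / N] * N                                 # uniform reference distribution
    M = [(p + u) / 2 for p, u in zip(P, U)]
    js = shannon(M) - (shannon(P) + shannon(U)) / 2   # Jensen-Shannon divergence
    js_max = -0.5 * (((N + 1) / N) * math.log(N + 1)
                     + math.log(N) - 2 * math.log(2 * N))
    return H, H * js / js_max                         # (entropy H, complexity C)
```

A perfectly ordered signal (e.g. monotonically increasing) produces a single ordinal pattern, so both H and C are zero; a random signal has H near 1 but also low complexity, and structured-but-disordered signals fall in between, which is the order-disorder scale the paper maps paintings onto.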