Towards responsive Sensitive Artificial Listeners
This paper describes work in the recently started project SEMAINE, which aims to build a set of Sensitive Artificial Listeners – conversational agents designed to sustain an interaction with a human user despite limited verbal skills, through robust real-time recognition and generation of non-verbal behaviour, both while the agent is speaking and while it is listening. We report on data collection and on the design of a system architecture with a view to real-time responsiveness.
Revisiting the Hierarchical Multiscale LSTM
Hierarchical Multiscale LSTM (Chung et al., 2016a) is a state-of-the-art language model that learns interpretable structure from character-level input. Such models can provide fertile ground for (cognitive) computational linguistics studies. However, the high complexity of the architecture, training procedure, and implementations might hinder its applicability. We provide a detailed reproduction and ablation study of the architecture, shedding light on some of the potential caveats of re-purposing complex deep-learning architectures. We further show that simplifying certain aspects of the architecture can in fact improve its performance. We also investigate the linguistic units (segments) learned by various levels of the model, and argue that their quality does not correlate with the overall performance of the model on language modeling.
Comment: To appear in COLING 2018 (reproduction track)
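For readers unfamiliar with this architecture family, the following is a minimal, simplified sketch (in PyTorch, with hypothetical class and variable names) of the boundary-gating idea behind hierarchical multiscale RNNs: an upper layer updates its state only when the lower layer emits a boundary, which is what produces the discrete segments discussed in the abstract. It deliberately omits the LSTM cells, the FLUSH operation, and the straight-through estimator used in the actual model, so it illustrates the concept rather than reproduces the paper's method.

# Simplified sketch of boundary gating in a two-layer multiscale RNN.
# Not the HM-LSTM implementation: GRU cells and a hard threshold stand in
# for the paper's LSTM cells and straight-through boundary estimator.
import torch
import torch.nn as nn

class TwoLayerMultiscaleRNN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.lower = nn.GRUCell(input_size, hidden_size)
        self.upper = nn.GRUCell(hidden_size, hidden_size)
        self.boundary = nn.Linear(hidden_size, 1)  # boundary detector on the lower layer

    def forward(self, x):  # x: (seq_len, batch, input_size)
        batch = x.size(1)
        h1 = x.new_zeros(batch, self.lower.hidden_size)
        h2 = x.new_zeros(batch, self.upper.hidden_size)
        boundaries = []
        for x_t in x:
            h1 = self.lower(x_t, h1)
            # Hard boundary decision for each position in the batch.
            z = (torch.sigmoid(self.boundary(h1)) > 0.5).float()
            # UPDATE the upper layer where z == 1, COPY its previous state elsewhere.
            h2 = z * self.upper(h1, h2) + (1.0 - z) * h2
            boundaries.append(z)
        # Final upper-layer state plus the segment markers induced over the sequence.
        return h2, torch.stack(boundaries)

Inspecting the returned boundary markers is the simplest way to read off the induced segments, which is the kind of analysis the abstract refers to when it discusses the quality of the learned linguistic units.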
Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop
The EMNLP 2018 workshop BlackboxNLP was dedicated to resources and techniques specifically developed for analyzing and understanding the inner workings and representations acquired by neural models of language. Approaches included: systematic manipulation of input to neural networks and investigating the impact on their performance, testing whether interpretable knowledge can be decoded from intermediate representations acquired by neural networks, proposing modifications to neural network architectures to make their knowledge state or generated output more explainable, and examining the performance of networks on simplified or formal languages. Here we review a number of representative studies in each category.
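One of the approaches listed above, decoding interpretable knowledge from intermediate representations, is commonly realized as a probing (diagnostic classifier) experiment. The sketch below shows the general recipe under stated assumptions: the hidden states, labels, and function names are placeholders rather than the setup of any particular BlackboxNLP paper.

# Minimal probing sketch: freeze a trained network, collect its intermediate
# representations, and train a simple classifier to predict a linguistic
# property (here, stand-in part-of-speech labels) from them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe(hidden_states: np.ndarray, labels: np.ndarray) -> float:
    """hidden_states: (n_tokens, hidden_dim) activations from the frozen model.
    labels: (n_tokens,) linguistic annotations encoded as integers.
    Returns held-out accuracy of a linear probe."""
    X_train, X_test, y_train, y_test = train_test_split(
        hidden_states, labels, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf.score(X_test, y_test)

# Usage with random stand-in data; real experiments would pass activations
# extracted from the model under analysis. High probe accuracy suggests the
# property is linearly decodable from that layer's representations.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    states = rng.normal(size=(5000, 256))
    tags = rng.integers(0, 12, size=5000)
    print(f"probe accuracy: {probe(states, tags):.3f}")  # about chance on random data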
RRL: A Rich Representation Language for the Description of Agent Behaviour in NECA
In this paper, we describe the Rich Representation Language (RRL), which is used in the NECA system. The NECA system generates interactions between two or more animated characters. The RRL is a formal framework for representing the information that is exchanged at the interfaces between the various NECA system modules.