
    Musical instrument mapping design with Echo State Networks

    Echo State Networks (ESNs), a form of recurrent neural network developed in the field of Reservoir Computing, show significant potential as a tool for designing mappings for digital musical instruments. They have, however, seldom been used in this area, so this paper explores their possible applications. The project contributes a new open-source library that allows ESNs to run in the Pure Data dataflow environment. Several use cases were explored, focusing on current issues in mapping research. ESNs were found to work successfully in scenarios of pattern classification, multiparametric control, explorative mapping, and the design of nonlinearities and uncontrol. 'Un-trained' behaviours are proposed as augmentations to the conventional reservoir system that allow the player to introduce potentially interesting nonlinearities and uncontrol into the reservoir. Interactive-evolution-style controls are proposed as strategies to help design these behaviours, which are otherwise dependent on arbitrary values and coarse global controls. A study on sound classification showed that ESNs could reliably differentiate between two drum sounds and also generalise to other similar input. Following evaluation of the use cases, heuristics are proposed to aid the use of ESNs in computer music scenarios.
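    For readers unfamiliar with reservoir computing, the sketch below shows a minimal generic ESN classifier in NumPy: a fixed random reservoir driven by an input sequence, with only a ridge-regression readout trained on the final reservoir state. It is an illustrative assumption of the standard ESN formulation, not the paper's Pure Data library; all sizes and scaling values are arbitrary.

```python
# Minimal Echo State Network sketch (illustrative; not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out = 1, 100, 2                 # input, reservoir, and output sizes (assumed)

# Fixed random input and reservoir weights; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # rescale to spectral radius 0.9 (echo state property)

def run_reservoir(u):
    """Drive the reservoir with an input sequence u of shape (T, n_in); return the final state."""
    x = np.zeros(n_res)
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)        # leaky integration omitted for brevity
    return x

def train_readout(sequences, labels, ridge=1e-6):
    """Ridge-regression readout fitted on final reservoir states of labelled sequences."""
    X = np.stack([run_reservoir(u) for u in sequences])               # (N, n_res)
    Y = np.eye(n_out)[labels]                                         # one-hot targets (N, n_out)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)  # readout weights (n_res, n_out)

def classify(W_out, u):
    return int(np.argmax(run_reservoir(u) @ W_out))
```

    Because the reservoir weights stay fixed, only the linear readout is fitted, which is what makes ESNs cheap enough to run in real time inside an environment like Pure Data.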

    Word Sense Disambiguation using a Bidirectional LSTM

    In this paper we present a clean, yet effective, model for word sense disambiguation. Our approach leverages a bidirectional long short-term memory network which is shared between all words. This enables the model to share statistical strength and to scale well with vocabulary size. The model is trained end-to-end, directly from raw text to sense labels, and makes effective use of word order. We evaluate our approach on two standard datasets, using identical hyperparameter settings, which are in turn tuned on a third set of held-out data. We employ no external resources (e.g., knowledge graphs, part-of-speech tagging), language-specific features, or hand-crafted rules, yet still achieve results statistically equivalent to the best state-of-the-art systems, which operate under no such limitations.
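    The sketch below illustrates the shared-BiLSTM idea in PyTorch: one bidirectional LSTM runs over the whole sentence, and the contextual state at the target word's position is projected onto sense labels. The class name, layer sizes, and the single shared classifier are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal shared-BiLSTM word sense disambiguation sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

class BiLSTMWSD(nn.Module):
    def __init__(self, vocab_size, n_senses, emb_dim=100, hidden_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # A single bidirectional LSTM shared across all target words.
        self.lstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        # Project the contextual state at the target position onto sense logits.
        self.classifier = nn.Linear(2 * hidden_dim, n_senses)

    def forward(self, token_ids, target_pos):
        # token_ids: (batch, seq_len) word indices; target_pos: (batch,) index of the ambiguous word.
        h, _ = self.lstm(self.embed(token_ids))             # (batch, seq_len, 2 * hidden_dim)
        target_h = h[torch.arange(h.size(0)), target_pos]   # state at the target word (batch, 2 * hidden_dim)
        return self.classifier(target_h)                    # sense logits

# Trained end-to-end on sense-annotated sentences, e.g.:
#   logits = model(token_ids, target_pos)
#   loss = nn.functional.cross_entropy(logits, sense_labels)
```

    Because the recurrent weights are shared between all words, every training sentence contributes to the contextual encoder, which is the "shared statistical strength" the abstract refers to.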
