Deep Learning Techniques for Music Generation -- A Survey
This paper is a survey and an analysis of different ways of using deep
learning (deep artificial neural networks) to generate musical content. We
propose a methodology based on five dimensions for our analysis:
Objective - What musical content is to be generated? Examples are: melody,
polyphony, accompaniment or counterpoint. - For what destination and for what
use? To be performed by a human (in the case of a musical score) or by a
machine (in the case of an audio file).
Representation - What are the concepts to be manipulated? Examples are:
waveform, spectrogram, note, chord, meter and beat. - What format is to be
used? Examples are: MIDI, piano roll or text. - How will the representation be
encoded? Examples are: scalar, one-hot or many-hot.
Architecture - What type(s) of deep neural network is (are) to be used?
Examples are: feedforward network, recurrent network, autoencoder or generative
adversarial network.
Challenge - What are the limitations and open challenges? Examples are:
variability, interactivity and creativity.
Strategy - How do we model and control the process of generation? Examples
are: single-step feedforward, iterative feedforward, sampling or input
manipulation.
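As a toy illustration of the encoding choices listed under the Representation dimension, the sketch below encodes a single note as a one-hot vector and a chord as a many-hot vector over the 128-entry MIDI pitch range. The vector size and the helper names are illustrative assumptions, not taken from any specific system in the survey.

```python
import numpy as np

# Illustrative encodings over the 128 MIDI pitches (an assumption for
# this sketch; real systems may restrict the pitch range).
NUM_PITCHES = 128

def one_hot(pitch):
    """Encode a single MIDI pitch (0-127) as a one-hot vector."""
    v = np.zeros(NUM_PITCHES, dtype=np.float32)
    v[pitch] = 1.0
    return v

def many_hot(pitches):
    """Encode a set of simultaneous pitches (a chord) as a many-hot vector."""
    v = np.zeros(NUM_PITCHES, dtype=np.float32)
    v[list(pitches)] = 1.0
    return v

melody_step = one_hot(60)            # middle C: exactly one active entry
chord_step = many_hot([60, 64, 67])  # C major triad: three active entries
```

A piano-roll representation is then simply a sequence of such many-hot vectors, one per time step.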
For each dimension, we conduct a comparative analysis of various models and
techniques, and we propose a tentative multidimensional typology. This
typology is bottom-up, based on the analysis of many existing deep-learning-based
systems for music generation selected from the relevant literature. These
systems are described and are used to exemplify the various choices of
objective, representation, architecture, challenge and strategy. The last
section includes some discussion and some prospects.
Comment: 209 pages. This paper is a simplified version of the book: J.-P.
Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music
Generation, Computational Synthesis and Creative Systems, Springer, 201
Improving the translation environment for professional translators
When using computer-aided translation systems in a typical professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view and from a purely technological one.
This paper describes the SCATE research with respect to improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human-computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project.
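The fuzzy matching mentioned above can be sketched as scoring a new source sentence against stored translation-memory segments by edit-distance similarity. This is only the classical baseline that such work improves upon; the scoring formula and example segments are assumptions for illustration, not SCATE's actual matcher.

```python
def edit_distance(a, b):
    """Word-level Levenshtein distance between two sentences."""
    a, b = a.split(), b.split()
    d = list(range(len(b) + 1))  # DP row for the empty prefix of a
    for i, wa in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, wb in enumerate(b, 1):
            # min of deletion, insertion, and substitution/match
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (wa != wb))
    return d[-1]

def fuzzy_score(query, segment):
    """Similarity in [0, 1]; 1.0 means an exact match."""
    dist = edit_distance(query, segment)
    return 1.0 - dist / max(len(query.split()), len(segment.split()))

# Hypothetical translation memory and query sentence.
memory = ["the cat sat on the mat", "the dog sat on the rug"]
query = "a cat sat on the mat"
best = max(memory, key=lambda s: fuzzy_score(query, s))  # closest TM segment
```

A real translation memory would retrieve the best match above some threshold and present its stored translation to the translator for post-editing.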
Creativity and Machine Learning: a Survey
There is a growing interest in the area of machine learning and creativity.
This survey presents an overview of the history and the state of the art of
computational creativity theories, machine learning techniques, including
generative deep learning, and corresponding automatic evaluation methods. After
presenting a critical discussion of the key contributions in this area, we
outline the current research challenges and emerging opportunities in this
field.
Comment: 25 pages, 3 figures, 2 tables
Dysgraphia detection based on convolutional neural networks and child-robot interaction
Dysgraphia is a disorder of written expression affecting the writing of letters, words, and numbers. It is one of the learning disabilities encountered in the educational sector and has a strong impact on the academic, motor, and emotional development of the individual. The purpose of this study is to identify dysgraphia in children by creating an engaging robot-mediated activity and collecting a new dataset of Latin digits written exclusively by children aged 6 to 12 years. An interactive scenario that explains and demonstrates the steps involved in handwriting digits is created using the verbal and non-verbal behaviors of the social humanoid robot Nao. Using this setup, we collected a dataset of 11,347 characters written by 174 participants with and without dysgraphia. Building on the success of deep learning technologies in various fields, we developed an approach based on these methods and tested it on the collected database: a classification with a convolutional neural network (CNN) to identify dysgraphia in children. The results show that the performance of our model is promising, reaching an accuracy of 91%.
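The kind of CNN forward pass used for classifying handwritten-digit images can be sketched as below. The image size, filter counts, and random weights are illustrative assumptions for a minimal numpy demonstration, not the architecture trained in the study.

```python
import numpy as np

def conv2d(x, kernels):
    """Valid 2-D convolution of a single-channel image with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    h, w = x.shape
    out = np.zeros((kernels.shape[0], h - kh + 1, w - kw + 1))
    for f, k in enumerate(kernels):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max pooling applied to each feature map."""
    f, h, w = x.shape
    return (x[:, :h // size * size, :w // size * size]
            .reshape(f, h // size, size, w // size, size)
            .max(axis=(2, 4)))

rng = np.random.default_rng(0)
image = rng.random((28, 28))               # one grayscale digit image (assumed size)
kernels = rng.standard_normal((4, 3, 3))   # four illustrative 3x3 filters

features = max_pool(relu(conv2d(image, kernels)))          # shape (4, 13, 13)
logit = features.reshape(-1) @ rng.standard_normal(features.size)
prob = 1.0 / (1.0 + np.exp(-logit))        # sigmoid: probability of dysgraphia
```

In practice the convolution filters and the final dense weights are learned from the labeled dataset rather than drawn at random, and several convolution/pooling stages are typically stacked before the classifier.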
Supporting Human Cognitive Writing Processes: Towards a Taxonomy of Writing Support Systems
In the field of natural language processing (NLP), advances in transformer architectures and large-scale language models have led to a plethora of designs and research on a new class of information systems (IS) called writing support systems, which help users plan, write, and revise their texts. Despite the growing interest in writing support systems in research, there is still little common knowledge about their different design elements. Our goal is therefore to develop a taxonomy that classifies writing support systems into three main categories (technology, task/structure, and user). We evaluated and refined our taxonomy with seven interviewees with domain expertise, identified three clusters in the reviewed literature, and derived five archetypes of writing support system applications based on our categorization. Finally, we formulate a new research agenda to guide researchers in the development and evaluation of writing support systems.