Deep neural networks with voice entry estimation heuristics for voice separation in symbolic music representations
In this study we explore the use of deep feedforward neural networks for voice separation in symbolic music representations. We experiment with different network architectures, varying the number and size of the hidden layers, and with dropout. We integrate into the models two voice entry estimation heuristics, which estimate the entry points of the individual voices in the polyphonic fabric. These heuristics serve to reduce error propagation at the beginning of a piece, which, as we have shown in previous work, can seriously hamper model performance.
The models are evaluated on the 48 fugues from Johann Sebastian Bach’s The Well-Tempered Clavier and his 30 inventions—a dataset that we curated and make publicly available. We find that a model with two hidden layers yields the best results; using more layers does not lead to a significant performance improvement. Furthermore, we find that our voice entry estimation heuristics are highly effective in reducing error propagation, improving performance significantly. Our best-performing model significantly outperforms our previous models and, depending on the evaluation metric, performs close to or better than the reported state of the art.
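As a rough illustration of the kind of model the abstract reports as best performing, the sketch below is a generic two-hidden-layer feedforward network with (inverted) dropout on the hidden activations. The layer sizes, activation, and softmax voice-assignment output are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)


def relu(x):
    return np.maximum(0.0, x)


def mlp_forward(x, params, dropout_p=0.5, train=True):
    """Forward pass of a two-hidden-layer feedforward net with inverted
    dropout on both hidden layers; returns per-voice probabilities.
    All sizes and the output head are assumptions for illustration."""
    W1, b1, W2, b2, W3, b3 = params
    h1 = relu(x @ W1 + b1)
    if train:  # inverted dropout: scale kept units by 1/(1-p) at train time
        h1 *= rng.binomial(1, 1 - dropout_p, h1.shape) / (1 - dropout_p)
    h2 = relu(h1 @ W2 + b2)
    if train:
        h2 *= rng.binomial(1, 1 - dropout_p, h2.shape) / (1 - dropout_p)
    logits = h2 @ W3 + b3
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)


def init_params(n_in, n_hidden, n_voices):
    """He-initialised weights for two hidden layers of equal size."""
    def layer(fan_in, fan_out):
        return (rng.normal(0.0, np.sqrt(2.0 / fan_in), (fan_in, fan_out)),
                np.zeros(fan_out))
    W1, b1 = layer(n_in, n_hidden)
    W2, b2 = layer(n_hidden, n_hidden)
    W3, b3 = layer(n_hidden, n_voices)
    return W1, b1, W2, b2, W3, b3
```

For example, `mlp_forward(np.ones((4, 8)), init_params(8, 16, 4), train=False)` yields a `(4, 4)` array of voice probabilities, one row per input note, each row summing to one.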
Methodological Issues in Building, Training, and Testing Artificial Neural Networks
We review the use of artificial neural networks, particularly the feedforward multilayer perceptron trained with back-propagation (MLP), in ecological modelling. Overtraining on the data, or giving only vague reference to how it was avoided, is the major problem. Various methods can be used to determine when to stop training an artificial neural network: 1) early stopping based on cross-validation, 2) stopping after an analyst-defined error is reached or after the error levels off, 3) use of a test data set. We do not recommend the third method, as the test data set is then not independent of model development. Many studies used the testing data to optimize the model and its training. Although this may give the best model for that set of data, it does not yield generalizability or improve understanding of the study system. The importance of an independent data set cannot be overemphasized: we found dramatic differences between model accuracy assessed on the training data set, as estimated with bootstrapping, and accuracy assessed on an independent data set. Comparing the artificial neural network with a general linear model (GLM) as a standard procedure is recommended, because a GLM may perform as well as or better than the MLP. MLP models should not be treated as black-box models; instead, techniques such as sensitivity analyses, input variable relevances, neural interpretation diagrams, randomization tests, and partial derivatives should be used to make the model more transparent and to further our ecological understanding, which is an important goal of the modelling process. Based on our experience, we discuss how to build an MLP model and how to optimize its parameters and architecture.

Comment: 22 pages, 2 figures. Presented at ISEI3 (2002). Ecological Modelling, in press.
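The first stopping criterion the abstract recommends, early stopping based on cross-validation, can be sketched as follows. The `train_step` and `val_error` callables are hypothetical stand-ins for one training epoch of an MLP and an evaluation of its error on a held-out validation fold; the patience scheme is one common way to detect that the validation error has leveled off, not the paper's specific procedure.

```python
def train_with_early_stopping(train_step, val_error, max_epochs=200, patience=10):
    """Early stopping based on a held-out validation set: halt once the
    validation error has not improved for `patience` consecutive epochs,
    and report the best epoch seen.

    train_step : callable running one training epoch (hypothetical stand-in)
    val_error  : callable returning current validation error (stand-in)
    """
    best_err = float("inf")
    best_epoch = 0
    for epoch in range(1, max_epochs + 1):
        train_step()
        err = val_error()
        if err < best_err:
            best_err, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:
            break  # validation error leveled off or rose: stop training
    return best_epoch, best_err
```

With a simulated validation-error curve that bottoms out and then rises, e.g. `[0.9, 0.8, 0.7, 0.65, 0.64, 0.66, 0.67, 0.68, ...]` and `patience=3`, the loop stops three epochs after the minimum and reports epoch 5 with error 0.64. Because the stopping decision uses only the validation fold, the test data set stays untouched for the final, independent accuracy assessment the abstract calls for.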