    A Step Toward AI Tools for Quality Control and Musicological Analysis of Digitized Analogue Recordings: Recognition of Audio Tape Equalizations

    Historical analogue audio documents are inseparably linked to the physical carriers on which they are recorded. Because of the carriers' short life expectancy, these documents must be digitized. During this process the document may be altered, with the result that the digital copy is not reliable from the point of view of authenticity. This happens because the digitization process is not completely automated and is sometimes influenced by subjective human choices. Artificial intelligence can help operators avoid errors, enhancing reliability and accuracy, and can become the basis for quality control tools. Furthermore, such algorithms could be part of new instruments aimed at easing and enriching musicological studies. This work focuses on the equalization recognition problem in the field of audio tape recording. The results presented in this paper highlight that, using machine learning algorithms, it is possible to recognize the pre-emphasis equalization used to record an audio tape.
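The abstract's core claim — that machine learning can recognize which pre-emphasis equalization was applied to a recording — can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's method: the spectral-tilt feature, the nearest-centroid classifier, and the two single-coefficient EQ "curves" (real tape standards such as CCIR and NAB define more elaborate frequency responses).

```python
import random

# Hypothetical first-order pre-emphasis coefficients; real tape EQ
# standards (e.g. CCIR, NAB) are more complex than a single coefficient.
CURVES = {"flat": 0.0, "pre_emphasis_high": 0.95}

def pre_emphasis(x, a):
    """First-order pre-emphasis filter: y[n] = x[n] - a * x[n-1]."""
    return [x[0]] + [x[n] - a * x[n - 1] for n in range(1, len(x))]

def spectral_tilt(x):
    """Crude high-frequency-content feature: energy of the first
    difference divided by the total signal energy."""
    diff = sum((x[n] - x[n - 1]) ** 2 for n in range(1, len(x)))
    total = sum(v * v for v in x)
    return diff / total

def train_centroids(curves, n_examples=20, n_samples=2048, seed=0):
    """Average tilt feature per EQ curve, estimated from filtered white noise."""
    rng = random.Random(seed)
    centroids = {}
    for name, a in curves.items():
        feats = [
            spectral_tilt(pre_emphasis(
                [rng.gauss(0.0, 1.0) for _ in range(n_samples)], a))
            for _ in range(n_examples)
        ]
        centroids[name] = sum(feats) / len(feats)
    return centroids

def classify(x, centroids):
    """Nearest-centroid decision on the tilt feature."""
    f = spectral_tilt(x)
    return min(centroids, key=lambda name: abs(centroids[name] - f))

centroids = train_centroids(CURVES)
rng = random.Random(1)
recording = pre_emphasis([rng.gauss(0.0, 1.0) for _ in range(2048)], 0.95)
print(classify(recording, centroids))
```

A real system would replace the tilt feature with a proper spectral representation and the nearest-centroid rule with a trained classifier, but the pipeline shape — filter-aware feature extraction followed by supervised classification — is the same.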

    Embedding Feature Selection for Large-scale Hierarchical Classification

    Large-scale Hierarchical Classification (HC) involves datasets consisting of thousands of classes and millions of training instances with high-dimensional features, posing several big data challenges. Feature selection, which aims to select a subset of discriminant features, is an effective strategy for dealing with the large-scale HC problem. It speeds up the training process, reduces the prediction time, and minimizes the memory requirements by compressing the total size of the learned model's weight vectors. The majority of studies have also shown feature selection to be competent and successful in improving classification accuracy by removing irrelevant features. In this work, we investigate various filter-based feature selection methods for dimensionality reduction to solve the large-scale HC problem. Our experimental evaluation on text and image datasets with varying distributions of features, classes, and instances shows up to 3x speed-up on massive datasets and up to 45% lower memory requirements for storing the weight vectors of the learned model, without any significant loss (and an improvement for some datasets) in classification accuracy. Source code: https://cs.gmu.edu/~mlbio/featureselection. Comment: IEEE International Conference on Big Data (IEEE BigData 2016)
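The filter-based approach described above scores each feature independently of any classifier and keeps only the top-scoring subset. A minimal sketch, using the Fisher score as one common filter criterion (the paper evaluates several filter methods; this particular choice, and the toy data below, are assumptions for illustration):

```python
def fisher_scores(X, y):
    """Filter criterion: squared difference of class-conditional means
    over the sum of class-conditional variances, computed per feature
    (binary labels 0/1 assumed for brevity)."""
    n_feats = len(X[0])
    scores = []
    for j in range(n_feats):
        col0 = [row[j] for row, label in zip(X, y) if label == 0]
        col1 = [row[j] for row, label in zip(X, y) if label == 1]
        m0, m1 = sum(col0) / len(col0), sum(col1) / len(col1)
        v0 = sum((v - m0) ** 2 for v in col0) / len(col0)
        v1 = sum((v - m1) ** 2 for v in col1) / len(col1)
        scores.append((m0 - m1) ** 2 / (v0 + v1 + 1e-12))
    return scores

def select_top_k(X, y, k):
    """Indices of the k highest-scoring features."""
    scores = fisher_scores(X, y)
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[0.1, 5.0], [0.2, 4.9], [0.9, 5.1], [1.0, 5.0]]
y = [0, 0, 1, 1]
print(select_top_k(X, y, 1))  # -> [0]
```

Because each feature is scored in isolation, the method scales linearly in the number of features and needs no model training, which is what makes filter methods attractive at the dataset sizes the paper targets; the memory savings then come from training the final model only on the selected columns.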