280 research outputs found

    Cassette Liner Notes

    https://orc.library.atu.edu/atu_cass110/1016/thumbnail.jp

    Cassette Liner Notes

    https://orc.library.atu.edu/atu_cass116/1025/thumbnail.jp

    Sam's Laugh / words by Ed O'Connor

    Cover: drawing of a portly African American male engaged in a hearty laugh; description reads "characteristic march and two-step"; Publisher: Chas I. Davis Music Publisher (Detroit)
    https://egrove.olemiss.edu/sharris_b/1071/thumbnail.jp

    Cassette Liner Notes

    https://orc.library.atu.edu/atu_cass114/1012/thumbnail.jp

    LP Liner Notes

    https://orc.library.atu.edu/atu_cass113/1016/thumbnail.jp

    Cassette Liner Notes

    https://orc.library.atu.edu/atu_cass117/1012/thumbnail.jp

    Faddeev eigenfunctions for two-dimensional Schrödinger operators via the Moutard transformation

    We demonstrate how the Moutard transformation of two-dimensional Schrödinger operators acts on the Faddeev eigenfunctions at the zero energy level, and we present some explicitly computed examples of such eigenfunctions for smooth, fast-decaying potentials of operators with non-trivial kernel and for deformed potentials that correspond to blow-up solutions of the Novikov-Veselov equation.
    Comment: 11 pages, final remarks added.
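
    For reference, the classical Moutard transformation mentioned in the abstract has the following standard form (textbook background under the usual conventions, not an excerpt from the paper): given a two-dimensional Schrödinger operator L = -Δ + u and a solution ω of Lω = 0, it produces a new potential and maps zero-energy solutions to zero-energy solutions.

        \[
          L = -\Delta + u, \qquad L\omega = 0, \qquad
          \tilde{u} = u - 2\Delta \log \omega = -u + \frac{2\,|\nabla\omega|^2}{\omega^2},
        \]
        \[
          (\omega\theta)_x = -\,\omega^2 \left(\frac{\psi}{\omega}\right)_{y}, \qquad
          (\omega\theta)_y = \omega^2 \left(\frac{\psi}{\omega}\right)_{x},
        \]
        so that each solution \(\psi\) of \(L\psi = 0\) yields (up to a constant of
        integration) a solution \(\theta\) of \(\tilde{L}\theta = 0\),
        where \(\tilde{L} = -\Delta + \tilde{u}\).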

    Contrastive audio-language learning for music

    As one of the most intuitive interfaces known to humans, natural language has the potential to mediate many tasks that involve human-computer interaction, especially in application-focused fields like Music Information Retrieval. In this work, we explore cross-modal learning in an attempt to bridge audio and language in the music domain. To this end, we propose MusCALL, a framework for Music Contrastive Audio-Language Learning. Our approach consists of a dual-encoder architecture that learns the alignment between pairs of music audio and descriptive sentences, producing multimodal embeddings that can be used for text-to-audio and audio-to-text retrieval out-of-the-box. Thanks to this property, MusCALL can be transferred to virtually any task that can be cast as text-based retrieval. Our experiments show that our method performs significantly better than the baselines at retrieving audio that matches a textual description and, conversely, text that matches an audio query. We also demonstrate that the multimodal alignment capability of our model can be successfully extended to the zero-shot transfer scenario for genre classification and auto-tagging on two public datasets.
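
    The dual-encoder alignment the abstract describes is a contrastive, CLIP-style objective. Below is a minimal sketch of such a symmetric contrastive loss, assuming generic audio and text encoders that each output a batch of embeddings; the function name and the temperature value are illustrative assumptions, not MusCALL's actual code.

        import torch
        import torch.nn.functional as F

        def contrastive_audio_text_loss(audio_emb, text_emb, temperature=0.07):
            # audio_emb, text_emb: (batch, dim) outputs of the two encoders.
            # The encoder backbones themselves are assumed here.
            audio_emb = F.normalize(audio_emb, dim=-1)
            text_emb = F.normalize(text_emb, dim=-1)
            # Pairwise cosine similarities, scaled by a temperature.
            logits = audio_emb @ text_emb.t() / temperature
            # Matching audio/text pairs sit on the diagonal.
            targets = torch.arange(audio_emb.size(0), device=audio_emb.device)
            # Symmetric InfoNCE: audio-to-text and text-to-audio directions.
            loss_a2t = F.cross_entropy(logits, targets)
            loss_t2a = F.cross_entropy(logits.t(), targets)
            return 0.5 * (loss_a2t + loss_t2a)

    At inference, the same normalized embeddings support retrieval directly: rank candidate audio by cosine similarity to a text query, or vice versa.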

    ST-ITO: Controlling audio effects for style transfer with inference-time optimization

    Audio production style transfer is the task of processing an input to impart stylistic elements from a reference recording. Existing approaches often train a neural network to estimate control parameters for a set of audio effects. However, these approaches are limited in that they can only control a fixed set of effects, where the effects must be differentiable or otherwise employ specialized training techniques. In this work, we introduce ST-ITO, Style Transfer with Inference-Time Optimization, an approach that instead searches the parameter space of an audio effect chain at inference. This method enables control of arbitrary audio effect chains, including unseen and non-differentiable effects. Our approach employs a learned metric of audio production style, which we train through a simple and scalable self-supervised pretraining strategy, along with a gradient-free optimizer. Due to the limited existing evaluation methods for audio production style transfer, we introduce a multi-part benchmark to evaluate audio production style metrics and style transfer systems. This evaluation demonstrates that our audio representation better captures attributes related to audio production and enables expressive style transfer via control of arbitrary audio effects.
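
    The inference-time search the abstract describes can be illustrated with a minimal gradient-free loop. In this sketch, plain random search stands in for the paper's gradient-free optimizer, and apply_chain and embed_style are hypothetical stand-ins for the effect chain and the learned style metric; the normalized [0, 1] parameter range is likewise an assumption.

        import numpy as np

        def style_transfer_ito(input_audio, reference_audio, apply_chain,
                               embed_style, n_params, n_iters=500, seed=0):
            """Search effect-chain parameters at inference time (gradient-free).

            apply_chain(audio, params) -> processed audio; works for any
            effects, including non-differentiable ones, since no gradients
            are taken. embed_style(audio) -> style embedding vector.
            """
            rng = np.random.default_rng(seed)
            target = embed_style(reference_audio)
            best_params, best_dist = None, np.inf
            for _ in range(n_iters):
                # Sample candidate controls in a normalized range.
                params = rng.uniform(0.0, 1.0, size=n_params)
                candidate = embed_style(apply_chain(input_audio, params))
                # Distance in style-embedding space drives the search.
                dist = np.linalg.norm(candidate - target)
                if dist < best_dist:
                    best_params, best_dist = params, dist
            return apply_chain(input_audio, best_params), best_params

    Swapping the sampling loop for a stronger gradient-free method (e.g., an evolutionary strategy) changes only the candidate-proposal step; the style-distance objective stays the same.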