
    Sensoring a Generative System to Create User-Controlled Melodies

    The automatic generation of music is an emerging field of research that has attracted the attention of countless researchers, producing a broad spectrum of state-of-the-art work. Many systems have been designed to facilitate collaboration between humans and machines in the generation of valuable music. This research proposes an intelligent system that generates melodies under the supervision of a user, who guides the process through a mechanical device. The device captures the user's movements and translates them into a melody. The system is based on a Case-Based Reasoning (CBR) architecture, enabling it to learn from previous compositions and to improve its performance over time. Through the device, the user can adapt the composition to their preferences, adjusting the pace of a melody to a specific context or generating lower- or higher-pitched notes. Additionally, the device can automatically resist some of the user's movements, so that the user learns how to create a good melody. Several experiments were conducted to analyze the quality of the system and of the melodies it generates. According to the users' validation, the proposed system can generate music that follows a concrete style, and most users also believed that the partial control exerted by the device was essential to the quality of the generated music.
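    The CBR architecture the abstract describes follows the classic retrieve–reuse–retain cycle: look up the stored composition whose context best matches the user's current gestures, adapt it, and store the result so the system improves over time. A minimal sketch of that cycle, in which the case representation, the feature set (tempo and a normalized pitch level), and all function names are illustrative assumptions rather than the paper's actual design:

    ```python
    import math

    # A "case" pairs device-derived features with a melody fragment
    # given as MIDI note numbers. Two seed cases (hypothetical data).
    case_base = [
        ({"tempo": 90, "pitch": 0.3}, [60, 62, 64, 62]),   # calm, low-pitched
        ({"tempo": 140, "pitch": 0.8}, [72, 74, 76, 79]),  # fast, high-pitched
    ]

    def distance(a, b):
        """Euclidean distance between two feature dictionaries."""
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

    def retrieve(query):
        """Retrieve: return the stored case closest to the current gestures."""
        return min(case_base, key=lambda case: distance(case[0], query))

    def reuse(features, melody, query):
        """Reuse: transpose the melody toward the requested pitch level."""
        shift = round(12 * (query["pitch"] - features["pitch"]))
        return [note + shift for note in melody]

    def retain(query, melody):
        """Retain: store the adapted solution so later queries can reuse it."""
        case_base.append((query, melody))

    # One pass of the cycle for a new gesture reading.
    query = {"tempo": 100, "pitch": 0.4}
    features, melody = retrieve(query)
    adapted = reuse(features, melody, query)
    retain(query, adapted)
    ```

    A real system would add a revise step (e.g. the user accepting or correcting the adapted melody before it is retained), which is where the device's resistance to some movements would feed back into learning.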

    Proceedings of the Ubiquitous Music Symposium - ubimus2022

    Following the positive experiences of the hybrid and remote events held in Porto Seguro, Bahia, Brazil, in 2020 and in Porto, Portugal, in 2021, this year our community decided to fully embrace the remote modality. The event was hosted by our partners at the State University of Paraná (Unespar), located in Curitiba, Brazil, under the able coordination of Felipe de Almeida Ribeir