4,667 research outputs found

    Designing and evaluating the usability of a machine learning API for rapid prototyping music technology

    To better support the needs of creative software developers and music technologists, and to empower them as machine learning users and innovators, the usability of and developer experience with machine learning tools must be considered and better understood. We review background research on the design and evaluation of application programming interfaces (APIs), with a focus on the domain of machine learning for music technology software development. We present the design rationale for the RAPID-MIX API, an easy-to-use API for rapid prototyping with interactive machine learning, and a usability evaluation study with software developers of music technology. A cognitive dimensions questionnaire was designed and delivered to a group of 12 participants who used the RAPID-MIX API in their software projects, including people who developed systems for personal use and professionals developing software products for music and creative technology companies. The questionnaire results indicate that participants found the RAPID-MIX API easy to learn and use, fun, and well suited to rapid prototyping with interactive machine learning. Based on these findings, we present an analysis and characterization of the RAPID-MIX API grounded in the cognitive dimensions framework, and discuss its design trade-offs and usability issues. We use these insights and our design experience to provide design recommendations for ML APIs for rapid prototyping of music technology. We conclude with a summary of the main insights, a discussion of the merits and challenges of applying the CDs framework to the evaluation of machine learning APIs, and directions for future work that our research suggests would be valuable.
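
    The interactive machine learning workflow this abstract refers to (record input/output examples, train, then run the model live) can be sketched in a few lines. The class and method names below are invented for illustration and are not the RAPID-MIX API itself; a nearest-neighbour regressor stands in for whatever model the toolkit provides.

```python
import math

# Hypothetical sketch of a rapid-prototyping IML workflow: record
# paired sensor/sound examples, then query the nearest recorded example.
# Names (KNNRegressor, record, run) are illustrative, not RAPID-MIX.
class KNNRegressor:
    def __init__(self, k=1):
        self.k = k
        self.examples = []          # list of (input_vector, output_vector)

    def record(self, inp, out):
        self.examples.append((inp, out))

    def run(self, inp):
        # Average the outputs of the k nearest recorded examples.
        nearest = sorted(self.examples,
                         key=lambda ex: math.dist(inp, ex[0]))[:self.k]
        dim = len(nearest[0][1])
        return [sum(out[i] for _, out in nearest) / len(nearest)
                for i in range(dim)]

# Map a 2-D sensor reading to a synth parameter (frequency in Hz).
model = KNNRegressor(k=1)
model.record([0.0, 0.0], [220.0])   # rest position -> low pitch
model.record([1.0, 1.0], [880.0])   # raised hand   -> high pitch
print(model.run([0.9, 0.95]))       # close to the second example
```

    The appeal for rapid prototyping is that "training" is just recording demonstrations, so a musician can iterate on a mapping in seconds.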

    The sound motion controller: a distributed system for interactive music performance

    We developed an interactive system for music performance, able to control sound parameters responsively with respect to the user’s movements. The system is conceived as a mobile application providing beat tracking and expressive parameter modulation, interacting with motion sensors and effector units connected to a music output such as a synthesizer or sound effects. We describe the various ways our system can be used and our achievements, aimed at increasing the expressiveness of music performance and aiding music interaction. The results obtained outline a first level of integration and point to future cognitive and technological research.
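
    The core of such a controller is a mapping from raw motion-sensor readings to a sound parameter. The sketch below is not the authors' implementation; it shows one plausible mapping, with invented parameter names, from accelerometer magnitude to a normalised effect depth.

```python
import math

# Illustrative sketch (not the paper's code): normalise accelerometer
# magnitude (in g) and scale it into an effect-depth range [lo, hi].
def motion_to_depth(ax, ay, az, lo=0.0, hi=1.0, max_g=2.0):
    """Map acceleration magnitude to an effect depth in [lo, hi]."""
    mag = math.sqrt(ax * ax + ay * ay + az * az)
    norm = min(mag / max_g, 1.0)          # clamp to the expected range
    return lo + norm * (hi - lo)

print(motion_to_depth(0.0, 0.0, 1.0))     # resting device reads 1 g
```

    In a real system this value would be smoothed and sent to the effector unit on every sensor update, synchronised with the beat tracker.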

    The design of sonically-enhanced widgets

    This paper describes the design of user-interface widgets that include non-speech sound. Previous research has shown that the addition of sound can improve the usability of human–computer interfaces. However, there is little research showing where sound is best added to improve usability. The approach described here is to integrate sound into widgets, the basic components of the human–computer interface. An overall structure for the integration of sound is presented. There are many problems with current graphical widgets, and many of these are difficult to correct by using more graphics. This paper presents many of the standard graphical widgets and describes how sound can be added to them. It describes in detail the usability problems of these widgets and the non-speech sounds used to overcome them. The non-speech sounds used are earcons. These sonically-enhanced widgets allow designers who are not sound experts to create interfaces that effectively improve usability and have coherent and consistent sounds.
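
    The design idea can be pictured as a table from widget events to earcon parameters, so that a consistent sound family is reused across the interface. The events and parameter values below are invented for illustration, not taken from the paper.

```python
# Illustrative sketch of a sonically-enhanced-widget mapping: each
# widget event is bound to an earcon, a short abstract sound described
# by structured parameters (timbre, pitch, number of notes). All names
# and values here are invented for the example.
EARCONS = {
    "button.press":   {"timbre": "organ",   "pitch": "C4", "notes": 1},
    "button.release": {"timbre": "organ",   "pitch": "E4", "notes": 1},
    "scrollbar.drag": {"timbre": "marimba", "pitch": "G3", "notes": 2},
}

def earcon_for(event):
    """Look up the earcon for a widget event; None means silence."""
    return EARCONS.get(event)

print(earcon_for("button.press")["pitch"])
```

    Keeping related widgets in the same timbre family is what gives the interface the coherent, consistent sound the paper aims for.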

    DLI-2: Creating the Digital Music Library: Final Report to the National Science Foundation

    Indiana University’s Variations2 Digital Music Library project focused on three chief areas of research and development: system architecture, including content representation and metadata standards; component-based application architecture; and network services. We tested and evaluated commercial technologies, primarily for multimedia and storage management; developed custom software solutions for the needs of the music library community; integrated commercial and custom software products; and tested and evaluated prototype systems for music instruction and library services, locally at Indiana University and at a number of satellite sites in the U.S. and overseas. This document is the project's final report to the National Science Foundation. This work was sponsored by the National Science Foundation under award no. 9909068, as part of the DLI-2 initiative.

    Analysis and evaluation of mobile rhythm games : Game structure and playability

    The rhythm game is an action simulation game adapted to the music presented. While it is expected to have an educational effect as a functional game, the relationship between operability and rhythm education on the mobile platform remains an open question. In Korea, mobile rhythm games appear to be a niche genre played mostly by teenagers and people in their early twenties. In this paper, we select for analysis the three mobile rhythm games most played by Korean gamers. First, we analyze the user interface layout, note control, evaluation style, and level of difficulty of the three games – Deeno, Cytus, and Lanota. Then, we conduct a user survey to evaluate the playability of those games. All three games obtain high scores, but there exist several statistically significant differences among them.

    Integrating musicology's heterogeneous data sources for better exploration

    Musicologists have to consult an extraordinarily heterogeneous body of primary and secondary sources during all stages of their research. Many of these sources are now available online, but the historical dispersal of material across libraries and archives has been replaced by segregation of data and metadata into a plethora of online repositories. This segregation hinders the intelligent manipulation of metadata, and means that extracting large tranches of basic factual information or running multi-part search queries is still enormously and needlessly time-consuming. To counter this barrier to research, the “musicSpace” project is experimenting with integrating access to many of musicology’s leading data sources via a modern faceted browsing interface that utilises Semantic Web and Web 2.0 technologies such as RDF and AJAX. This will make previously intractable search queries tractable, enable musicologists to use their time more efficiently, and aid the discovery of potentially significant information that users did not think to look for. This paper outlines our work to date.
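
    Faceted browsing amounts to intersecting filters over merged metadata records. The toy sketch below illustrates the idea in plain Python; the field names and values are invented, and the musicSpace project itself operates over RDF data sources rather than dictionaries.

```python
# Toy sketch of faceted filtering over merged metadata records, in the
# spirit of a faceted browsing interface. Field names are invented.
records = [
    {"composer": "Purcell", "genre": "opera",  "archive": "British Library"},
    {"composer": "Purcell", "genre": "anthem", "archive": "Bodleian"},
    {"composer": "Handel",  "genre": "opera",  "archive": "British Library"},
]

def facet_filter(records, **facets):
    """Keep only the records matching every selected facet value."""
    return [r for r in records
            if all(r.get(k) == v for k, v in facets.items())]

print(facet_filter(records, composer="Purcell", genre="opera"))
```

    Each facet selection narrows the result set, which is what turns a multi-part query that once required visiting several repositories into a few clicks.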

    Virtual Reality Rhythm Game

    Virtual reality headsets such as the HTC Vive and Oculus Rift bring robust virtual reality technology into the hands of consumers. However, virtual reality is still a new and largely unexplored domain, with a dearth of compelling software that takes advantage of what it has to offer. Current rhythm games on the virtual reality platform lack a sense of immersion for the player and require players to remain stationary during gameplay. Our solution is a game in which players must hit musical notes that appear in a trail around them. The trail moves in different directions, and players have to move and turn accordingly in order to hit every note and pass a song.
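
    One simple way to realise a note trail that wraps around the player is to convert each note's beat time into an angle on a circle centred on the player. The sketch below is a hypothetical illustration with invented parameters, not the game's actual placement code.

```python
import math

# Hypothetical sketch: place each rhythm-game note on a circle around
# the player, with later beat times further around the circle, so the
# player must turn to face and hit them. Parameters are invented.
def note_position(beat_time, radius=2.0, seconds_per_revolution=8.0):
    """Return (x, z) world coordinates for a note at the given beat time."""
    angle = 2 * math.pi * (beat_time / seconds_per_revolution)
    return (radius * math.sin(angle), radius * math.cos(angle))

x, z = note_position(2.0)   # a quarter of the way around the circle
```

    Varying `radius` or the revolution period over the song would produce the changes of direction the abstract describes.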

    A Semantic Web Annotation Tool for a Web-Based Audio Sequencer

    Music and sound have a rich semantic structure that is clear to the composer and the listener but remains mostly hidden to computing machinery. Nevertheless, in recent years, the introduction of software tools for music production has enabled new opportunities for migrating this knowledge from humans to machines. A new generation of these tools may couple sound samples with semantic information to create not only a musical but also a "semantic" composition. In this paper we describe an ontology-driven content annotation framework for a web-based audio editing tool. In a supervised approach, the graphical web interface allows the user, during the editing process, to annotate any part of the composition with concepts from publicly available ontologies. As a test case, we developed a collaborative web-based audio sequencer that provides users with the functionality to remix audio samples from the Freesound website and subsequently annotate them. The annotation tool can load any ontology and thus gives users the opportunity to augment the work with annotations on the structure of the composition, the musical materials, and the creator's reasoning and intentions. We believe this approach will provide several novel ways to make not only the final audio product, but also the creative process, first-class citizens of the Semantic Web.
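
    At its simplest, this kind of annotation attaches an ontology concept URI to a time region of the composition. The sketch below is not the paper's implementation; the data layout and the concept URIs are invented for illustration.

```python
# Illustrative sketch of ontology-based annotation of a composition's
# timeline: each annotation ties a time region (in seconds) to a concept
# URI drawn from some loaded ontology. The URIs here are hypothetical.
annotations = []

def annotate(start_s, end_s, concept_uri):
    """Record that the region [start_s, end_s) denotes the given concept."""
    annotations.append({"start": start_s, "end": end_s,
                        "concept": concept_uri})

annotate(0.0, 12.5, "http://example.org/onto#Chorus")   # hypothetical URI
annotate(12.5, 30.0, "http://example.org/onto#Verse")

def concepts_at(t):
    """All concepts whose annotated region covers time t (seconds)."""
    return [a["concept"] for a in annotations
            if a["start"] <= t < a["end"]]
```

    Because the concepts are URIs rather than free-text tags, the resulting annotations can be published and queried alongside other Semantic Web data.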