Facilitating Team-Based Programming Learning with Web Audio
In this paper, we present a course on audio programming using web audio technologies, addressed to an interdisciplinary group of master's students who are mostly novices in programming. The course is held across two connected university campuses through a portal space, and the students are expected to work in cross-campus teams. The course promotes both individual and group work and is based on ideas from science, technology, engineering, arts and mathematics (STEAM) education, team-based learning (TBL) and project-based learning. We present the outcomes of the course, discuss the students' feedback and reflect on the results. We found that it is important to balance individual and group work, to use the same code editor for consistent follow-up, and to be able to share the screen to resolve individual questions. Other aspects inherent to the master's programme (e.g. the intensity of the courses, coding in a research-oriented programme) and to prior knowledge (e.g. of web technologies) should be reconsidered. We conclude with a wider reflection on the challenges and potential of using web audio as a programming environment for novices in TBL cross-campus courses and on how to foster effective novices.
A User-Adaptive Automated DJ Web App with Object-Based Audio and Crowd-Sourced Decision Trees
We describe the concepts behind a web-based minimal-UI DJ system that adapts to the user's preference via simple interactive decisions and feedback on taste. Starting from a preset decision tree modelled on common DJ practice, the system can gradually learn a more customised and user-specific tree. At the core of the system are structural representations of the musical content, based on semantic audio technologies and inferred from features extracted from the audio directly in the browser. These representations are gradually combined into a representation of the mix, which can then be saved and shared with other users. We show how different types of transitions can be modelled using simple musical constraints. Potential applications of the system include crowd-sourced data collection, both on temporally aligned playlisting and musical preference.
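The adaptive decision tree described above can be pictured with a minimal sketch. The node structure, feature names and transition labels below are illustrative assumptions, not the system's actual implementation:

```python
# Hypothetical sketch of a preset DJ decision tree refined by user feedback.

class Node:
    """Internal node: tests one feature of the upcoming track pair."""
    def __init__(self, feature, threshold, low, high):
        self.feature, self.threshold = feature, threshold
        self.low, self.high = low, high  # subtrees for value < / >= threshold

class Leaf:
    """Leaf: holds a transition type and a score adjusted by feedback."""
    def __init__(self, transition):
        self.transition, self.score = transition, 0.0

def choose_transition(node, features):
    """Walk the tree from the root to a leaf for the given feature values."""
    while isinstance(node, Node):
        node = node.low if features[node.feature] < node.threshold else node.high
    return node

# Preset tree modelled on common practice (assumed rule): close tempos lead
# to a beat-matched crossfade, otherwise a simple fade-out/fade-in.
tree = Node("tempo_diff", 5.0,
            Leaf("beatmatch_crossfade"),
            Leaf("fade_out_in"))

leaf = choose_transition(tree, {"tempo_diff": 3.2})
leaf.score += 1.0  # positive user feedback nudges this choice for next time
```

Gradual learning could then re-weight or re-split nodes based on accumulated leaf scores; the sketch only shows the selection and feedback step.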
Towards a Framework for the Discovery of Collections of Live Music Recordings and Artefacts on the Semantic Web
This paper introduces a platform for the representation and discovery of live music recordings and associated artefacts based on a dedicated data model. We demonstrate our technology by implementing a Web-based discovery tool for the Grateful Dead collection of the Internet Archive, a large collection of concert recordings annotated with editorial metadata. We represent this information using a Linked Data model complemented with data aggregated from several additional Web resources discussing and describing these events. These data include descriptions and images of physical artefacts such as tickets, posters and fan photos, as well as other information, e.g. about location and weather. The system uses signal processing techniques for the analysis and alignment of the digital recordings. During discovery, users can juxtapose and compare different recordings of a given concert, or different performances of a given song, by interactively blending between them.
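A Linked Data representation of an event and its artefacts might look like the JSON-LD-style structure below. The vocabulary, URIs and property names are illustrative assumptions, not the paper's actual data model:

```python
# Hypothetical JSON-LD-style description of one concert with its recordings
# and physical artefacts; all URIs and terms are placeholders.
event = {
    "@id": "http://example.org/event/gd-1977-05-08",
    "@type": "Performance",
    "label": "Grateful Dead live concert, 1977-05-08",
    "place": {"@id": "http://example.org/place/venue-1"},
    "hasArtefact": [
        {"@type": "Ticket", "image": "http://example.org/img/ticket.jpg"},
        {"@type": "Poster", "image": "http://example.org/img/poster.jpg"},
    ],
    "hasRecording": [
        {"@id": "http://example.org/rec/sbd", "source": "soundboard"},
        {"@id": "http://example.org/rec/aud", "source": "audience"},
    ],
}

# A discovery tool can then filter on the graph, e.g. find events that have
# more than one recording available for juxtaposition:
has_alternates = len(event["hasRecording"]) > 1
```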
Exploring Musical Expression on the Web: Deforming, Exaggerating, and Blending Decomposed Recordings
We introduce a prototype of an educational web application for comparative performance analysis based on source separation and object-based audio techniques. The underlying system decomposes recordings of classical music performances into note events using score-informed source separation and represents the decomposed material using semantic web technologies. In a visual and interactive way, users can explore individual performances by highlighting specific musical aspects directly within the audio and by altering the temporal characteristics to obtain versions in which the micro-timing is exaggerated or suppressed. Multiple performances of the same work can be compared by juxtaposing and blending between the corresponding recordings. Finally, by adjusting the timing of events, users can generate intermediates of multiple performances to investigate their commonalities and differences.
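The timing manipulations described above can be sketched as simple operations on note onset times. This is a minimal illustration under the assumption that note events of the two performances are already aligned score-wise; it is not the system's implementation:

```python
# Hypothetical sketch of timing exaggeration and performance interpolation
# on lists of aligned note onset times (in seconds).

def interpolate_timing(onsets_a, onsets_b, alpha):
    """alpha=0 reproduces performance A, alpha=1 performance B;
    values in between generate an intermediate performance."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(onsets_a, onsets_b)]

def exaggerate_timing(onsets, mean_onsets, factor):
    """factor > 1 exaggerates deviations from the average timing,
    0 < factor < 1 suppresses them."""
    return [m + factor * (o - m) for o, m in zip(onsets, mean_onsets)]

a = [0.00, 0.52, 1.08, 1.49]   # onsets in performance A (illustrative)
b = [0.00, 0.48, 0.95, 1.60]   # onsets in performance B (illustrative)
intermediate = interpolate_timing(a, b, 0.5)
```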
Exploring Real-time Visualisations to Support Chord Learning with a Large Music Collection
A common problem in music education is finding varied and engaging material that is suitable for practising a specific musical concept or technique. At the same time, a number of large music collections are available under a Creative Commons (CC) licence (e.g. Jamendo, ccMixter), but their potential is largely untapped because of the relative obscurity of their content. In this paper, we present *Jam with Jamendo*, a web application that allows novice and expert learners of musical instruments to query songs by chord content from a large music collection, and to practise the chords present in the retrieved songs by playing along. Its goal is twofold: the learners get a larger variety of practice material, while the artists receive increased exposure. We experimented with two visualisation modes: the first is a linear visualisation based on a moving time axis; the second is a circular visualisation inspired by the chromatic circle. We conducted a small-scale thinking-aloud user study with seven participants based on hands-on practice with the web app. Through this pilot study, we obtained a qualitative understanding of the potential and challenges of each visualisation, which will inform the next design iteration of the web app.
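Querying songs by chord content can be sketched as a set-containment filter over chord labels. The catalogue and matching rule below are illustrative assumptions, not the app's actual backend:

```python
# Hypothetical sketch: return songs whose chords all fall within the set the
# learner wants to practise, so they can play along with the whole song.

def songs_for_practice(songs, known_chords):
    """songs: mapping of title -> list of chord labels in that song."""
    known = set(known_chords)
    return [title for title, chords in songs.items()
            if set(chords) <= known]

catalogue = {
    "Song A": ["C", "G", "Am", "F"],
    "Song B": ["C", "G"],
    "Song C": ["E7", "A7", "B7"],
}

matches = songs_for_practice(catalogue, ["C", "G", "Am", "F"])
```

A real system would rank matches as well, e.g. preferring songs that exercise all of the requested chords rather than a subset.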
pywebaudioplayer: Bridging the gap between audio processing code and attractive visualisations based on web technology
Lately, a number of audio players based on web technology have made it possible for researchers to present their audio-related work in an attractive manner. Tools such as "wavesurfer.js", "waveform-playlist" and "trackswitch.js" provide highly configurable players, allowing a more interactive exploration of scientific results that goes beyond simple linear playback. However, the audio output to be presented is in many cases not generated by the same web technologies. The process of preparing audio data for display therefore requires manual intervention in order to bridge the resulting gap between programming languages. While this is acceptable for one-time events, such as the preparation of final results, it prevents the usage of such players during the iterative development cycle. Having access to rich audio players already during development would allow researchers to get more instantaneous feedback. The current workflow consists of repeatedly importing audio into a digital audio workstation in order to achieve similar capabilities, a repetitive and time-consuming process. In order to address these needs, we present "pywebaudioplayer", a Python package that automates the generation of code snippets for each of the three aforementioned web audio players. It is aimed at use cases where audio development in Python is combined with web visualisation. Notable examples are "Jupyter Notebook" and WSGI-compatible web frameworks such as "Flask" or "Django".
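The snippet-generation idea can be illustrated with a small sketch. The function below is a hypothetical stand-in and does not reproduce pywebaudioplayer's actual API; only the embedded `WaveSurfer.create` call reflects wavesurfer.js's documented usage:

```python
# Hypothetical sketch: generate an HTML/JS snippet that embeds a
# wavesurfer.js player for an audio file produced in Python.

def wavesurfer_snippet(audio_url, container_id="waveform"):
    """Return an HTML fragment that renders a waveform player."""
    return f"""<div id="{container_id}"></div>
<script src="https://unpkg.com/wavesurfer.js"></script>
<script>
  const ws = WaveSurfer.create({{ container: '#{container_id}' }});
  ws.load('{audio_url}');
</script>""".strip()

# In a notebook, the snippet could be displayed directly, e.g. with
# IPython.display.HTML(snippet), closing the loop between processing and
# visualisation during development.
snippet = wavesurfer_snippet("separated_vocals.wav")
```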
Web Audio Evaluation Tool: A framework for subjective assessment of audio
Perceptual listening tests are commonplace in audio research and a vital form of evaluation. While a large number of tools exist to run such tests, many feature just one test type, are platform dependent, run on proprietary software, or require considerable configuration and programming. Using Web Audio, the Web Audio Evaluation Tool (WAET) addresses these concerns by providing a single toolbox that can be configured to run many different tests through a web browser, without proprietary software or programming knowledge. In this paper, we show the role the Web Audio API plays in providing WAET's key functionality. The paper also highlights less common features available to web-based tools, such as an easy remote testing environment and in-browser analytics.
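The configure-once, run-many-tests idea can be sketched abstractly. The structure below is an illustrative assumption, not WAET's actual configuration format, and the aggregation step stands in for the kind of in-browser analytics mentioned above:

```python
# Hypothetical sketch: a declarative listening-test configuration plus a
# simple aggregation of the ratings it would collect.

test_config = {
    "test_type": "MUSHRA",           # one of several configurable test types
    "pages": [
        {"reference": "ref.wav",
         "stimuli": ["anchor.wav", "codec_a.wav", "codec_b.wav"],
         "scale": (0, 100)},
    ],
    "randomise_order": True,         # present stimuli in random order
}

def mean_ratings(responses):
    """Aggregate per-stimulus ratings across participants."""
    collected = {}
    for response in responses:
        for stimulus, score in response.items():
            collected.setdefault(stimulus, []).append(score)
    return {s: sum(v) / len(v) for s, v in collected.items()}

averages = mean_ratings([
    {"codec_a.wav": 80, "codec_b.wav": 40},
    {"codec_a.wav": 60, "codec_b.wav": 60},
])
```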
