Utilization of Recycled Filament for 3D Printing for Consumer Goods
3D printing has been adopted across a wide variety of manufacturing industries, including textile and apparel. Many consumers now own personal 3D printers for recreational printing, and there are even websites dedicated to 3D printing patterns made by consumers. However, the materials used in the 3D printing process pose a problem for the environment due to their plastic-based nature. 3D printing is a layered process, with each layer depending on the layer below it for strength and stability. During printing, considerable waste is produced by printing errors, and these failed prints cannot be reused. This waste is plastic based and therefore does not readily biodegrade. Using 3D printing filament created from recycled materials (e.g. plastic bottles) could transform such waste into new, reusable materials, which could ultimately reduce the harmful effect of plastic products on the environment over time. One such plastic product is the plastic instrument mouthpiece. The plastic mouthpieces currently on the market are not made from recycled plastics, so when they break they only add to the plastic waste in landfills. This study therefore focused on creating functional 3D printed mouthpieces from rPETG (recycled polyethylene terephthalate glycol-modified) filament for brass players in the University of Arkansas Hogwild Band to use during performances. A total of 29 mouthpieces were created for trumpet, trombone, and tuba players in the band and were used throughout the 2020 Hogwild season. Participants were then asked to share their feedback on the performance of the mouthpieces as the final part of the study
Pitch-Informed Solo and Accompaniment Separation
This thesis addresses the development of a system for pitch-informed solo
and accompaniment separation capable of separating main instruments from
music accompaniment regardless of the musical genre of the track, or type
of music accompaniment. For the solo instrument, only pitched monophonic
instruments were considered in a single-channel scenario where no panning
or spatial location information is available.
In the proposed method, pitch information is used as an initial stage of a
sinusoidal modeling approach that attempts to estimate the spectral
information of the solo instrument from a given audio mixture. Instead of
estimating the solo instrument on a frame-by-frame basis, the proposed
method gathers information from tone objects to perform separation.
Tone-based processing allowed the inclusion of novel processing stages for
attack refinement, transient interference reduction, common amplitude
modulation (CAM) of tone objects, and for better estimation of non-harmonic
elements that can occur in musical instrument tones. The proposed solo and
accompaniment algorithm is an efficient method suitable for real-world
applications.
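The core idea of pitch-informed masking can be illustrated with a minimal sketch, assuming a known fundamental frequency for a tone: STFT bins near the harmonics of f0 go to the solo, everything else to the accompaniment. The function name, tolerance, and binary comb mask below are illustrative, not the thesis implementation.

```python
import numpy as np
from scipy.signal import stft, istft

# Minimal pitch-informed masking sketch (illustrative, not the thesis method):
# given a fundamental frequency f0 for a tone, STFT bins within tol_hz of the
# harmonics h * f0 are assigned to the solo, the rest to the accompaniment.
def separate_solo(mix, sr, f0, n_harmonics=10, tol_hz=40.0, nfft=2048):
    f, t, Z = stft(mix, fs=sr, nperseg=nfft)
    mask = np.zeros(Z.shape, dtype=float)
    for h in range(1, n_harmonics + 1):
        rows = np.abs(f - h * f0) < tol_hz   # bins near the h-th harmonic
        mask[rows, :] = 1.0
    _, solo = istft(mask * Z, fs=sr, nperseg=nfft)
    _, accomp = istft((1.0 - mask) * Z, fs=sr, nperseg=nfft)
    return solo, accomp
```

A real system would refine this per tone object with sinusoidal modeling of magnitude and phase; the hard comb mask above only conveys the starting point.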
A study was conducted to better model magnitude, frequency, and phase of
isolated musical instrument tones. As a result of this study, temporal
envelope smoothness, inharmonicity of musical instruments, and phase
expectation were exploited in the proposed separation method. Additionally,
an algorithm for harmonic/percussive separation based on phase expectation
was proposed. The algorithm shows improved perceptual quality with respect
to state-of-the-art methods for harmonic/percussive separation.
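The phase-expectation principle can be sketched as follows: in a bin dominated by a steady sinusoid, the STFT phase advances by roughly 2π·f_bin·hop/sr per frame, whereas transient content violates this expectation. A toy version, with an illustrative threshold rather than the proposed algorithm:

```python
import numpy as np
from scipy.signal import stft, istft

# Toy harmonic/percussive split via phase expectation (illustrative only):
# bins whose frame-to-frame phase advance matches the advance expected of a
# steady sinusoid at the bin frequency are kept as harmonic; the rest are
# routed to the percussive output.
def phase_hpss(x, sr, nfft=1024, thresh=1.0):
    hop = nfft // 2                          # scipy's default hop
    f, t, Z = stft(x, fs=sr, nperseg=nfft)
    expected = 2 * np.pi * f * hop / sr      # expected phase advance per frame
    dphi = np.diff(np.angle(Z), axis=1)
    dev = np.angle(np.exp(1j * (dphi - expected[:, None])))  # wrap to [-pi, pi]
    harm = np.zeros(Z.shape)
    harm[:, 1:] = (np.abs(dev) < thresh).astype(float)
    _, xh = istft(harm * Z, fs=sr, nperseg=nfft)
    _, xp = istft((1.0 - harm) * Z, fs=sr, nperseg=nfft)
    return xh, xp
```

For a steady tone most of the energy survives in the harmonic output; the published method adds magnitude information and smoother masks to reach its reported perceptual quality.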
The proposed solo and accompaniment method obtained perceptual quality
scores comparable to other state-of-the-art algorithms under the SiSEC 2011
and SiSEC 2013 campaigns, and outperformed the comparison algorithm on the
instrumental dataset described in this thesis.
As a use-case of solo and
accompaniment separation, a listening test procedure was conducted to
assess separation quality requirements in the context of music education.
Results from the listening test showed that solo and accompaniment tracks
should be optimized differently to suit quality requirements of music
education. The Songs2See application was presented as commercial music
learning software which includes the proposed solo and accompaniment
separation method
A realtime feedback learning tool to visualize sound quality in violin performances
The assessment of the sound properties of a performed musical note has been widely studied in the past. Although a consensus exists on what makes a good or a bad musical performance, there is no formal definition of performance tone quality due to its subjectivity. In this study we present a computational approach for the automatic assessment of violin sound production. Using machine learning techniques, we investigate the correlations between features extracted from audio performances and the perceptual quality of violin sounds rated by listeners. The obtained models are used for implementing a real-time feedback learning system
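A pipeline of this kind can be sketched minimally: extract simple spectral features from audio frames, then fit a least-squares linear model mapping features to listener ratings. The feature set and model here are hypothetical stand-ins, not the ones used in the study.

```python
import numpy as np

# Illustrative features for tone-quality modeling (not the study's feature set).
def frame_features(frame, sr):
    spec = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    centroid = (freqs * spec).sum() / (spec.sum() + 1e-12)  # brightness proxy
    rms = np.sqrt(np.mean(frame ** 2))                      # loudness proxy
    flatness = np.exp(np.mean(np.log(spec + 1e-12))) / (spec.mean() + 1e-12)
    return np.array([centroid, rms, flatness])

# Fit a linear map from features to perceptual ratings by least squares.
def fit_quality_model(frames, ratings, sr):
    X = np.array([frame_features(fr, sr) for fr in frames])
    X = np.hstack([X, np.ones((len(X), 1))])                # bias column
    w, *_ = np.linalg.lstsq(X, np.asarray(ratings, float), rcond=None)
    return w

# Score a new frame; cheap enough to run per frame in a real-time loop.
def predict_quality(frame, sr, w):
    return float(np.append(frame_features(frame, sr), 1.0) @ w)
```

Because prediction is a single dot product per frame, a model like this can drive the kind of real-time visual feedback the paper describes.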
Who's playing? Towards machine-assisted identification of jazz trumpeters by timbre
The goal of our proposed study is to contribute to the growing research in machine-assisted identification of jazz performers. In particular, we seek to identify unknown jazz trumpeters. We plan to take an approach that has not received recent attention; namely, using human observation to compare spectrograms and other data representing musical timbre. We believe that human observation, when combined with machine learning, will improve accuracy of timbre recognition and performer identification. We will collect 100 music samples: five each from 20 trumpeters. We will manually sort spectrograms and other data in order to distinguish the most salient timbre characteristics. Once we choose those features, we will use a computer to filter for them. If our approach is successful, we will develop a larger database of trumpet solos
Music Information Retrieval Meets Music Education
This paper addresses the use of Music Information Retrieval (MIR) techniques in music education and their integration in learning software. A general overview of systems that are either commercially available or at the research stage is presented. Furthermore, three well-known MIR methods used in music learning systems are described together with their state of the art: music transcription, solo and accompaniment track creation, and generation of performance instructions. As a representative example of a music learning system developed within the MIR community, the Songs2See software is outlined. Finally, challenges and directions for future research are described
A microtonal wind controller building on Yamaha's technology to facilitate the performance of music based on the '19-EDO' scale
We describe a project in which several collaborators adapted an existing instrument to make
it capable of playing expressively in music based on the microtonal scale characterised by equal
division of the octave into 19 tones ('19-EDO'). Our objective was not just to build this instrument,
however, but also to produce a well-formed piece of music which would exploit it
idiomatically, in a performance which would provide listeners with a pleasurable and satisfying
musical experience. Hence, consideration of the extent and limits of the playing-techniques of
the resulting instrument (a 'Wind-Controller') and of appropriate approaches to the composition
of music for it was an integral part of the project from the start. Moreover, the intention
was also that the piece, though grounded in the musical characteristics of the 19-EDO scale,
would nevertheless have a recognisable relationship with what Dimitri Tymoczko (2010) has
called the 'Extended Common Practice' of the last millennium. So the article goes on to consider
these matters, and to present a score of the resulting new piece, annotated with comments
documenting some of the performance issues which it raises. Bringing the project to
fruition thus involved elements of composition, performance, engineering and computing, and the
article describes how such an inter-disciplinary, multi-disciplinary and cross-disciplinary collaboration
was co-ordinated in a unified manner to achieve the envisaged outcome. Finally, we
consider why the building of microtonal instruments is such a problematic issue in a contemporary
('high-tech') society like ours
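For reference, the 19-EDO tuning underlying the project is simple to compute: each of the 19 equal steps spans 1200/19 ≈ 63.16 cents, and step n above a reference pitch has frequency f_ref · 2^(n/19). A quick sketch (the function name is ours):

```python
# Frequency of the n-th 19-EDO step above a reference pitch: each step is a
# ratio of 2 ** (1 / 19), so 19 steps make exactly one octave.
def edo19_freq(f_ref, n):
    return f_ref * 2.0 ** (n / 19.0)

# Size of one 19-EDO step in cents (1200 cents per octave).
STEP_CENTS = 1200.0 / 19.0   # about 63.16 cents
```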
Evaluation of Music Performance: Computerized Assessment Versus Human Judges.
Ph.D. Thesis. University of Hawaiʻi at Mānoa, 2018
Extended playing techniques: The next milestone in musical instrument recognition
The expressive variability in producing a musical note conveys information
essential to the modeling of orchestration and style. As such, it plays a
crucial role in computer-assisted browsing of massive digital music corpora.
Yet, although the automatic recognition of a musical instrument from the
recording of a single "ordinary" note is considered a solved problem, automatic
identification of instrumental playing technique (IPT) remains largely
underdeveloped. We benchmark machine listening systems for query-by-example
browsing among 143 extended IPTs for 16 instruments, amounting to 469 triplets
of instrument, mute, and technique. We identify and discuss three necessary
conditions for significantly outperforming the traditional mel-frequency
cepstral coefficient (MFCC) baseline: the addition of second-order scattering
coefficients to account for amplitude modulation, the incorporation of
long-range temporal dependencies, and metric learning using large-margin
nearest neighbors (LMNN) to reduce intra-class variability. Evaluating on the
Studio On Line (SOL) dataset, we obtain a precision at rank 5 of 99.7% for
instrument recognition (baseline at 89.0%) and of 61.0% for IPT recognition
(baseline at 44.5%). We interpret this gain through a qualitative assessment of
practical usability and visualization using nonlinear dimensionality reduction.
Comment: 10 pages, 9 figures. The source code to reproduce the experiments of
this paper is made available at:
https://www.github.com/mathieulagrange/dlfm201
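The precision-at-rank-5 figures reported above come from query-by-example retrieval: each recording queries the rest of the corpus, and the score is the fraction of its k nearest neighbours that share the query's label. A generic sketch of the metric, with feature extraction and metric learning omitted:

```python
import numpy as np

# Precision at rank k for query-by-example retrieval: every item queries the
# remaining items; the score averages, over all queries, the fraction of the
# k nearest neighbours (Euclidean here) carrying the query's label.
def precision_at_k(features, labels, k=5):
    feats = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    scores = []
    for i in range(len(feats)):
        d = np.linalg.norm(feats - feats[i], axis=1)
        d[i] = np.inf                        # never retrieve the query itself
        nearest = np.argsort(d)[:k]
        scores.append(np.mean(labels[nearest] == labels[i]))
    return float(np.mean(scores))
```

The paper's gains come from better features (second-order scattering) and a learned metric (LMNN) replacing the plain Euclidean distance used here.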