    TONE DETECTION ON TERANIKA MUSICAL INSTRUMENT USING DISCRETE WAVELET TRANSFORM AND DECISION TREE CLASSIFICATION

    Musical instruments are part of a cultural heritage that must be preserved. Teranika is a traditional musical instrument from the Majalengka area, made of clay. These instruments are still made by hand, so there are differences in the tones they produce, and the quality of an instrument is determined by the accuracy of its tones. A system is therefore needed that can accurately detect those tones. The author designed a tone detection system for the Teranika to help artisans carry out quality control: the system detects whether or not an instrument matches the correct tone. The tones this instrument should produce are Do, Re, Mi, Fa, Sol, La, Si, and high Do. The tone detection system is built using the Discrete Wavelet Transform method and Decision Tree classification. Its working principle is that a recording of the instrument's sound is passed to the system, processed as input, and matched against the reference tones in the database. The system's output consists of samples at the sampling frequency used. The test results show the best performance at decomposition level 6, a thresholding value of 0.05, and a Fine Tree classification type, with an accuracy of 87.5%
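
    As a rough illustration of this pipeline, the sketch below pairs a level-6 discrete wavelet decomposition with soft thresholding at 0.05 and a decision tree, using PyWavelets and scikit-learn. The 'db4' mother wavelet, the sub-band-energy features, and the synthetic sine-burst tones are assumptions, and DecisionTreeClassifier is only a loose analogue of MATLAB's "Fine Tree".

        # Hedged sketch: level-6 DWT, 0.05 soft thresholding, decision tree.
        # The 'db4' wavelet and energy features are assumptions, not the paper's.
        import numpy as np
        import pywt
        from sklearn.tree import DecisionTreeClassifier

        def dwt_features(signal, level=6, threshold=0.05):
            coeffs = pywt.wavedec(signal, 'db4', level=level)   # assumed mother wavelet
            coeffs = [pywt.threshold(c, threshold, mode='soft') for c in coeffs]
            return np.array([np.sum(c ** 2) for c in coeffs])   # energy per sub-band

        # Synthetic stand-ins for recorded Teranika notes: one sine burst per tone
        tones = ['Do', 'Re', 'Mi', 'Fa', 'Sol', 'La', 'Si', 'Do_high']
        freqs = 261.63 * 2.0 ** (np.array([0, 2, 4, 5, 7, 9, 11, 12]) / 12)
        t = np.linspace(0.0, 1.0, 8000)
        rng = np.random.default_rng(0)
        X = [dwt_features(np.sin(2 * np.pi * f * t) + 0.05 * rng.standard_normal(t.size))
             for f in freqs]

        clf = DecisionTreeClassifier().fit(X, tones)   # loose "Fine Tree" analogue
        print(clf.predict([dwt_features(np.sin(2 * np.pi * freqs[2] * t))]))  # expect ['Mi']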

    XR, music and neurodiversity: design and application of new mixed reality technologies that facilitate musical intervention for children with autism spectrum conditions

    This thesis, accompanied by the practice outputs, investigates sensory integration, social interaction and creativity through a newly developed VR musical interface designed exclusively for children with a high-functioning autism spectrum condition (ASC). The results aim to contribute to the limited body of literature and research surrounding Virtual Reality (VR) musical interventions and Immersive Virtual Environments (IVEs) designed to support individuals with neurodevelopmental conditions. The author has developed bespoke hardware, software and a new methodology to conduct field investigations. These outputs include a Virtual Immersive Musical Reality Intervention (ViMRI) protocol, a Supplemental Personalised immersive Musical Experience (SPiME) programme, the ‘Assisted Real-time Three-dimensional Immersive Musical Intervention System’ (ARTIMIS) and a bespoke, fully configurable ‘Creative immersive interactive Musical Software’ application (CiiMS). The outputs are each implemented within a series of institutional investigations of 18 autistic child participants. Four groups are evaluated using newly developed virtual assessment and scoring mechanisms devised exclusively from long-established rating scales. Key quantitative indicators from the datasets demonstrate consistent findings and significant improvements for individual preferences (likes), fear-reduction efficacy, and social interaction. Six individual case studies present positive qualitative results demonstrating improved decision-making and sensorimotor processing. The preliminary research trials further indicate that using this virtual-reality music technology system and the newly developed protocols produces notable improvements for participants with an ASC. More significantly, there is evidence that the supplemental technology facilitates a reduction in psychological anxiety and improvements in dexterity. The virtual music composition and improvisation system presented here requires further extensive testing in different spheres for proof of concept

    Conveying Audience Emotions through Humanoid Robot Gestures to an Orchestra during a Live Musical Exhibition

    In the last twenty years, robotics has been applied in many heterogeneous contexts. Among them, the use of humanoid robots during musical concerts has been proposed and investigated by many authors. In this paper, we propose a contribution in the area of robotics applications in music: a system for conveying audience emotions to an orchestra during a live musical exhibition by means of a humanoid robot. In particular, we provide all spectators with a mobile app through which they can select a specific color while listening to a piece of music (an act). Each color is mapped to an emotion, and the audience's preferences are then processed in order to select the next act to be played. This decision, based on the overall emotion felt by the audience, is communicated by the robot to the orchestra through body gestures. Our first results show that spectators enjoy this kind of interactive musical performance, and they are encouraging for further investigation
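
    The decision step lends itself to a compact sketch. The color-to-emotion mapping and the act catalogue below are illustrative assumptions, not the mapping used in the paper:

        # Minimal sketch of the vote-aggregation step; mapping and acts are invented.
        from collections import Counter

        COLOR_TO_EMOTION = {'red': 'passion', 'blue': 'calm',
                            'yellow': 'joy', 'purple': 'melancholy'}   # assumed mapping
        NEXT_ACT_FOR = {'passion': 'Act II - Allegro', 'calm': 'Act III - Adagio',
                        'joy': 'Act IV - Scherzo', 'melancholy': 'Act V - Lament'}

        def choose_next_act(votes):
            """votes: colors selected in the mobile app during the current act."""
            emotions = Counter(COLOR_TO_EMOTION[c] for c in votes)
            dominant, _ = emotions.most_common(1)[0]   # overall audience emotion
            return dominant, NEXT_ACT_FOR[dominant]    # robot gestures this to the orchestra

        print(choose_next_act(['red', 'blue', 'blue', 'yellow', 'blue']))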

    Music-aided affective interaction between human and service robot

    This study proposes a music-aided framework for affective interaction between service robots and humans. The framework consists of three systems, for perception, memory, and expression respectively, modeled on the human brain mechanism. We propose a novel approach to identifying human emotions in the perception system. Conventional approaches use speech and facial expressions as the representative bimodal indicators for emotion recognition; our approach additionally uses the mood of the music as a supplementary indicator to determine emotions more reliably alongside speech and facial expressions. For multimodal emotion recognition, we propose an effective decision criterion that uses records of bimodal recognition results relevant to the musical mood. The memory and expression systems also utilize musical data to provide natural and affective reactions to human emotions. To evaluate our approach, we simulated the proposed human-robot interaction with a service robot, iRobiQ. Our perception system exhibited superior performance over the conventional approach, and most human participants responded favorably to the music-aided affective interaction
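
    The paper's exact criterion is not reproduced here, but the idea of letting the music's mood supplement a bimodal speech/face decision can be sketched as a weighted fusion; the emotion set, scores, and weight below are assumptions:

        # Hedged sketch: a music-mood prior breaks ties between speech and face.
        # Emotion labels, probabilities, and w_music are invented for illustration.
        import numpy as np

        EMOTIONS = ['happy', 'sad', 'angry', 'neutral']

        def fuse(speech_probs, face_probs, music_mood_prior, w_music=0.2):
            bimodal = (np.asarray(speech_probs) + np.asarray(face_probs)) / 2
            fused = (1 - w_music) * bimodal + w_music * np.asarray(music_mood_prior)
            return EMOTIONS[int(np.argmax(fused))]

        speech = [0.40, 0.35, 0.15, 0.10]   # speech classifier output
        face   = [0.30, 0.45, 0.15, 0.10]   # facial-expression classifier output
        music  = [0.10, 0.70, 0.05, 0.15]   # mood of the music currently playing
        print(fuse(speech, face, music))    # -> 'sad'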

    Interactive Musical Partner: A System for Human/Computer Duo Improvisations

    This research is centered on the creation of a computer program that makes music with a human improviser. The Interactive Musical Partner (IMP) is designed for duo improvisations, with one human improviser and one instance of IMP, focusing on a freely improvised duo aesthetic. IMP has Musical Personality Settings (MPS) that can be set prior to performance; these MPS guide the way IMP responds to musical input from the human and govern the probability of particular outcomes from IMP's creative algorithms. IMP uses audio feature extraction to listen to the human partner and to react to, or ignore, the human's musical input, based on the current MPS. This course of research presents a number of problems: parameters for the MPS must be defined and then mapped to extractable audio features; a system for musical decision-making and reaction/interaction (the action/interaction module) must be in place; and a synthesis module that allows for MPS control must be deployed. Designing a program intended to play with an improviser, and then improvising with that program, has caused me to assess every aspect of my practice as an improviser. Not only has this research expanded my understanding of the technologies involved and made me a better technologist, but striving to make the technology musical has made me look at all sides of the music I make, resulting in a better improvising artist
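
    A hypothetical sketch of how MPS values might gate such decisions is given below; the parameter names, feature set, and response rules are invented for illustration and are not taken from IMP itself:

        # Invented illustration of MPS-gated reaction, not IMP's actual logic.
        import random

        MPS = {'responsiveness': 0.7,   # probability of reacting at all
               'imitation': 0.4}        # imitate the input vs. answer with contrast

        def react(extracted):
            """extracted: audio features from the human, e.g. pitch and loudness."""
            if random.random() > MPS['responsiveness']:
                return None                                   # ignore this input
            if random.random() < MPS['imitation']:
                return {'pitch': extracted['pitch'],          # imitate
                        'loudness': extracted['loudness']}
            return {'pitch': extracted['pitch'] * 1.5,        # answer a fifth above
                    'loudness': 1.0 - extracted['loudness']}  # with contrasting dynamics

        print(react({'pitch': 440.0, 'loudness': 0.6}))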

    A global method for music symbol recognition in typeset music sheets

    This paper presents an optical music recognition (OMR) system that can automatically recognize the main musical symbols of a scanned paper-based music score. Two major stages are distinguished: the first, using low-level pre-processing, detects the isolated objects and outputs hypotheses about them; the second takes the final decision through high-level processing that includes contextual information and music-writing rules. This article describes both stages of the method: after explaining the first, the symbol-analysis process, in detail, it shows through initial experiments that its outputs can be used efficiently as inputs to a high-level decision process
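
    A toy sketch of the two-stage idea: the low-level stage emits candidate labels with confidences, and the high-level stage re-scores whole-bar combinations against a music-writing rule. The 4/4 duration rule and the numbers are illustrative assumptions:

        # Toy re-scoring of low-level hypotheses with one contextual rule:
        # in 4/4, note durations in a bar should sum to one whole note.
        from itertools import product

        DURATION = {'quarter': 0.25, 'half': 0.5, 'whole': 1.0}

        def rescore_bar(hypotheses):
            """hypotheses: per symbol, a list of (label, confidence) candidates."""
            best, best_score = None, -1.0
            for combo in product(*hypotheses):   # fine for a toy-sized bar
                labels = [l for l, _ in combo]
                score = sum(c for _, c in combo)
                if abs(sum(DURATION[l] for l in labels) - 1.0) < 1e-9:
                    score += 1.0                 # bonus when the 4/4 rule holds
                if score > best_score:
                    best, best_score = labels, score
            return best

        # The middle symbol is ambiguous in isolation; context resolves it to 'half'
        print(rescore_bar([[('quarter', 0.9)],
                           [('quarter', 0.55), ('half', 0.45)],
                           [('quarter', 0.9)]]))   # -> ['quarter', 'half', 'quarter']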

    IDENTIFICATION OF MUSICAL FEATURES USING MEL-FREQUENCY CEPSTRAL COEFFICIENTS (MFCCs)

    The Indonesian Art Robot Contest (KRSI) is a new division in the series of events of the Indonesian Robot Contest. In this competition, each robot is required to dance following the music. For the robot to perform well in the contest, a system is needed that can recognize the special characteristics of the musical accompaniment. To that end, this final project builds a music-feature identification system based on mel-frequency cepstral coefficients (MFCCs). A TMS320C6713 DSK is used as the voice-signal processing system. Voice signals are processed with a filter bank and then synthesized to obtain the combined, isolated frequency bands. The signals are processed further with framing and windowing, followed by the MFCC stages (FFT, log, IFFT, liftering, cepstrum FFT). The resulting MFCCs are normalized so they can be used as input to an artificial neural network (ANN), whose output is used to decide which stored dance motion to call from memory on the microcontroller. Based on the test results, a robot or machine can be driven by the sound of music, with the robot's motion based on MFCC coefficient patterns. 12% of the coefficient patterns could be used for ANN learning with convergent error; MFCC coefficient patterns not used for ANN learning could still be recognized, showing similarity to the patterns used for learning. The suitability of the dance motion to the music is 37%, indicating that the motion suitability is still low. Keywords: signal, filter bank, frequency, microcontroller, ANN, coefficient, MFCC
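
    The feature stage can be approximated off-device with librosa in place of the TMS320C6713 pipeline; n_mfcc=12 mirrors the twelve coefficient patterns mentioned above, while the synthetic input signal is a placeholder for recorded music:

        # Hedged sketch of MFCC extraction and normalization for an ANN input;
        # a pure sine stands in for the musical accompaniment.
        import numpy as np
        import librosa

        sr = 16000
        y = np.sin(2 * np.pi * 440 * np.linspace(0.0, 1.0, sr))   # placeholder audio

        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12)
        # Normalize per coefficient so the values suit an ANN input layer
        mfcc_norm = (mfcc - mfcc.mean(axis=1, keepdims=True)) / \
                    (mfcc.std(axis=1, keepdims=True) + 1e-8)
        print(mfcc_norm.shape)   # (12, frames): one 12-dim vector per frame for the ANN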

    Expression in process music: possibility or paradox?

    Algorithmic composition progressed throughout the 20th century, as modernism became the dominant aesthetic, until finally ‘process music’ arrived, where the single remaining compositional decision concerned which sonic resources to use. Composition pedagogy in the late 20th century did not explicitly include ‘expression’. This raises a second question, since musical discourse repeatedly refers to ‘expression’, which seems to be something that audiences desire and to which they respond. Addressing these questions has led me to reconsider the way music itself, and compositional processes, are characterised in music analysis. The outcome of my research is a new theory of music based on Fuzzy Logic principles. I am using this theory to build an algorithmic compositional decision-making system that can create specific aesthetic experiences
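
    As a minimal illustration of fuzzy-logic decision-making in a compositional setting (the membership functions and the ‘tension’ variable are invented here and are not the author's theory):

        # Invented example: fuzzy rules map perceived tension to note density.
        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def next_density(tension):
            """Map perceived tension (0..1) to notes per beat via fuzzy rules."""
            low = tri(tension, -0.5, 0.0, 0.5)
            mid = tri(tension, 0.0, 0.5, 1.0)
            high = tri(tension, 0.5, 1.0, 1.5)
            # Rule outputs: sparse=1, moderate=4, dense=8 (centroid defuzzification)
            total = low + mid + high
            return (low * 1 + mid * 4 + high * 8) / total if total else 4

        print(next_density(0.8))   # higher tension -> denser texture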