41 research outputs found

    ANALYSIS OF SOUNDPAINTING SIGN LANGUAGE VISUALS

    Soundpainting, created by composer Walter Thompson, is a universal live-composition sign language made for musicians, dancers, poets and visual artists working in improvisational settings. Today the soundpainting language contains more than 1500 signs. This research aims to analyze (by converting visual texts into written texts) the gestures (moving visuals) of the first level of soundpainting. Content analysis was used in this study. The analysis revealed that soundpainting gestures share common features with everyday body language, and that they form a universal communication language unconstrained by barriers such as spoken language or hearing impairment.

    Towards automatic transcription of soundpainting gestures for the analysis of interactive performances

    The objective analysis and documentation of interactive performances is often difficult because it is extremely complex. Soundpainting, a gestural language dedicated to the guided improvisation of musicians, actors, or dancers, offers a privileged ground for such analysis. Predefined gestures are produced to indicate to the improvisers the type of material desired. Transcribing the gestures for performance documentation appears entirely feasible but very tedious. In this article, we present an automatic gesture recognition tool dedicated to annotating a soundpainting performance. A first prototype was developed to recognize gestures filmed by a Kinect-type camera. Automatic gesture transcription could thus lead to various applications, notably the analysis of soundpainting practice in general, but also the understanding and modeling of interactive musical performances.

    A study on taking two modalities into account simultaneously for the recognition of SoundPainting gestures

    Nowadays, gestures are being adopted as a new modality in the field of Human-Computer Interaction (HCI), where physical movements of the whole body can perform almost unlimited actions. Soundpainting is a language of artistic composition that has been in use for more than forty years. However, work on the recognition of SoundPainting gestures is limited and does not take into account the movements of the fingers and the hand, which constitute an essential part of SoundPainting. In this context, we conducted a study to explore the combination of 3D postures and muscle activity for the recognition of SoundPainting gestures. To carry out this study, we created a SoundPainting database of 17 gestures with data from two sensors (Kinect and Myo). We formulated four hypotheses concerning recognition accuracy. The results allowed us to characterize the best sensor according to the typology of the gesture, to show that a "simple" combination of the two sensors does not necessarily improve recognition, that a combination of features is not necessarily more effective than a single well-chosen feature, and, finally, that changing the acquisition frequency of the data provided by these sensors does not have a significant impact on gesture recognition.
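    One simple way to combine two sensors, as the study above investigates, is score-level ("late") fusion of per-sensor classifiers. The sketch below is a hypothetical illustration of that idea; the gesture names, scores, and equal weighting are assumptions for the example, not values from the study.

    ```python
    # Hypothetical sketch of score-level ("late") fusion of two per-sensor
    # gesture classifiers, e.g. one fed by Kinect postures and one by Myo
    # muscle activity. All labels and confidences below are illustrative.

    def fuse_scores(kinect_scores, myo_scores, weight=0.5):
        """Weighted average of per-gesture confidences from two sensors.

        kinect_scores / myo_scores: dicts mapping gesture label -> confidence.
        weight: contribution of the Kinect classifier (1 - weight for Myo).
        """
        labels = set(kinect_scores) | set(myo_scores)
        return {
            g: weight * kinect_scores.get(g, 0.0)
               + (1 - weight) * myo_scores.get(g, 0.0)
            for g in labels
        }

    def predict(scores):
        """Return the gesture with the highest fused confidence."""
        return max(scores, key=scores.get)

    # A hand-shape gesture: the EMG classifier is confident, the Kinect is not.
    kinect = {"whole_group": 0.40, "point_finger": 0.35}
    myo = {"whole_group": 0.10, "point_finger": 0.85}
    print(predict(fuse_scores(kinect, myo)))  # point_finger
    ```

    Note that if one sensor is noisy for a given gesture type, the naive average can drag a correct prediction down, which is consistent with the study's finding that a "simple" combination does not necessarily improve recognition.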

    Automatic recognition of Soundpainting for the Generation of Electronic Music Sounds

    This work explores a new gesture-based interaction built on automatic recognition of the structured gestural language Soundpainting. In the proposed approach, a composer (called a Soundpainter) performs Soundpainting gestures facing a Microsoft Kinect sensor. A gesture recognition system then captures the gestures, which are sent to sound generator software. The proposed method was used to stage an artistic show in which a Soundpainter improvised with 6 different gestures to generate a musical composition from different sounds in real time. The accuracy of the gesture recognition system was evaluated, as was the Soundpainter's user experience. In addition, a user evaluation study of the proposed system in a learning context was conducted. Current results open up perspectives for the design of new artistic expressions based on automatic gesture recognition supported by the Soundpainting language.
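    The pipeline described above ends with recognized gestures being sent to sound generator software. A minimal sketch of that last hop is a dispatch table from gesture labels to generator messages; the gesture names and address scheme below are hypothetical (a real system might transmit such messages over OSC, e.g. with the python-osc package).

    ```python
    # Hypothetical mapping from recognized Soundpainting gesture labels to
    # (address, argument) messages for a sound generator. Labels and
    # addresses are illustrative, not taken from the paper's system.
    GESTURE_TO_MESSAGE = {
        "whole_group": ("/soundpainting/who", "all"),
        "play": ("/soundpainting/go", 1),
        "stop": ("/soundpainting/go", 0),
        "volume_up": ("/soundpainting/volume", +0.1),
        "volume_down": ("/soundpainting/volume", -0.1),
    }

    def gesture_to_message(label):
        """Translate a recognized gesture label into a generator message."""
        if label not in GESTURE_TO_MESSAGE:
            raise ValueError(f"unrecognized gesture: {label}")
        return GESTURE_TO_MESSAGE[label]

    print(gesture_to_message("play"))  # ('/soundpainting/go', 1)
    ```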

    Distributed Networks of Listening and Sounding: 20 Years of Telematic Musicking

    This paper traces a twenty-year arc of my performance and compositional practice in the medium of telematic music, focusing on a distinct approach to fostering interdependence and emergence through the integration of listening strategies, electroacoustic improvisation, pre-composed structures, blended real/virtual acoustics, networked mutual influence, shared signal transformations, gesture-concepts and machine agencies. Communities of collaboration and exchange over this period are discussed, spanning pre- and post-pandemic approaches to the medium and ranging from metaphors of immersion and dispersion to diffraction.

    How performance thinks

    This volume questions how performance thinks from a wide range of overlapping perspectives and contexts, including practice-as-research, professional practice and the emerging sub-field of 'performance & philosophy'. Can performance be understood as a kind of thinking in its own right? What value might such an understanding have for performance and philosophical research, for academia and for practices operating outside the academy?

    Conducted Improvisation

    The purpose of this thesis, Conducted Improvisation, is to study how a musical sign language can affect creativity, musical interaction and the sense of freedom in ensemble playing by analyzing Extemporize, a sign system I have developed, with the further goal of exploring the pedagogical potential of this practice. The research questions that drive this study are: In what ways does the sign system affect creativity and playfulness in improvised performance? In what ways does the sign system affect the sense of freedom in ensemble performance? In what ways does the sign system affect musical interaction? The material for the study was acquired through video documentation of rehearsals and concerts and interviews with the participants. 15 musicians took part in this study, 10 of whom were interviewed. The results indicate that the sign language is effective when working with improvisation in an ensemble and can have an impact on both freedom and constraint; it has the potential to strengthen the voices of the musicians and push them in musical directions other than their usual ones. But conducted improvisation can also create frustration among the musicians, especially if the sign system is not rehearsed enough.

    SketchSynth: a browser-based sketching interface for sound control

    SketchSynth is an interface that allows users to create mappings between synthesised sound and a graphical sketch input, based on human cross-modal perception. The project is rooted in the authors' research, which collected 2692 sound-sketches from 178 participants representing their associations with various sounds. The interface extracts, in real time, sketch features that were shown to correlate with sound characteristics and can be mapped to synthesis and audio effect parameters via Open Sound Control (OSC). This modular approach allows easy integration into an existing workflow and can be tailored to individual preferences. The interface can be accessed online through a web browser on a computer, laptop, smartphone or tablet and does not require specialised hardware or software. We demonstrate SketchSynth with an iPad for sketch input to control synthesis and audio effect parameters in the Ableton Live digital audio workstation (DAW). A MIDI controller is used to play notes and trigger pre-recorded accompaniment. This work serves as an example of how perceptual research can help create strong, meaningful gesture-to-sound mappings.
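    To make the feature-extraction step concrete, the sketch below computes two simple features from a stroke of (x, y) points. The specific feature definitions (mean segment length as "speed", fraction of direction changes as "jaggedness") are illustrative assumptions, not SketchSynth's actual feature set; in the interface, such values would then be sent to a synthesiser as parameters, e.g. over OSC.

    ```python
    # Hypothetical sketch-feature extraction from a 2D stroke, as an
    # illustration of the kind of features an interface like SketchSynth
    # might map to synthesis parameters. Feature definitions are assumed.
    import math

    def stroke_features(points):
        """Compute simple features from a stroke given as (x, y) points."""
        segs = [(x2 - x1, y2 - y1)
                for (x1, y1), (x2, y2) in zip(points, points[1:])]
        lengths = [math.hypot(dx, dy) for dx, dy in segs]
        speed = sum(lengths) / len(lengths)          # mean segment length
        turns = 0
        for (ax, ay), (bx, by) in zip(segs, segs[1:]):
            if ax * by - ay * bx != 0:               # nonzero cross product
                turns += 1                           # => direction change
        jaggedness = turns / max(len(segs) - 1, 1)   # fraction of turning joints
        return {"speed": speed, "jaggedness": jaggedness}

    # A straight line: constant direction, so jaggedness is 0.
    line = [(0, 0), (1, 0), (2, 0), (3, 0)]
    print(stroke_features(line))  # {'speed': 1.0, 'jaggedness': 0.0}
    ```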