
    Music conducting pedagogy and technology: a document analysis on best practices

    This document analysis was designed to investigate the pedagogical practices of music conducting teachers alongside technologists' research on the use of various technologies as teaching tools. I sought to discern how conducting teachers and pedagogues are integrating recent technological advancements into their teaching strategies, and to understand what directions research is taking on the use of software, hardware, and computer systems in the teaching of music conducting technique. This dissertation was guided by four main research questions: (1) How has technology been used to aid in the teaching of conducting? (2) What is the role of technology in the context of conducting pedagogy? (3) Given that conducting is a performative act, how can it be developed through technological means? (4) What technological possibilities exist in the teaching of music conducting technique? Data were collected from music conducting syllabi, conducting textbooks, and research articles, selected through purposive sampling procedures. Analysis of the documents through the constant comparative approach identified emerging themes and differences across the three types of documents. Based on a synthesis of this information, I discuss implications for conducting pedagogy and make suggestions for conducting educators.

    Sharing musical expression through embodied listening: a case study based on Chinese guqin music

    In this study we report the results of an experiment in which a guqin music performance was recorded and individual listeners were asked to move an arm along with the music they heard. Movement velocity patterns were extracted from both the musician and the listeners. The analysis reveals that the listeners' movement velocity patterns tend to correlate with each other, and with the movement velocity patterns of the player's shoulders. The findings support the hypothesis that listeners and player share, to a certain degree, a sensitivity for musical expression and its associated corporeal intentionality.
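    As a rough illustration of the kind of analysis described above, the sketch below derives velocity profiles from sampled movement traces and correlates them. It is a minimal sketch, not the study's pipeline: the 100 Hz sampling rate, the synthetic traces, and the function names are all assumptions made for illustration.

```python
import numpy as np

def velocity_profile(positions, fs):
    """Movement speed over time from sampled 3D positions (N x 3),
    captured at sampling rate fs (Hz)."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1) * fs

def velocity_correlation(pos_a, pos_b, fs):
    """Pearson correlation between the velocity profiles of two
    equal-length, time-aligned movement traces."""
    va = velocity_profile(pos_a, fs)
    vb = velocity_profile(pos_b, fs)
    return np.corrcoef(va, vb)[0, 1]

# Hypothetical usage: a listener's arm trace vs. the player's shoulder
# trace, both motion-captured at an assumed 100 Hz and already aligned.
rng = np.random.default_rng(0)
listener = np.cumsum(rng.normal(size=(1000, 3)), axis=0)
player = np.cumsum(rng.normal(size=(1000, 3)), axis=0)
print(velocity_correlation(listener, player, fs=100.0))
```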

    A statistical framework for embodied music cognition


    Robust correlated and individual component analysis

    Recovering the correlated and individual components of two, possibly temporally misaligned, sets of data is a fundamental task in disciplines such as image, vision, and behavior computing, with application to problems such as multi-modal fusion (via the correlated components) and predictive analysis and clustering (via the individual ones). Here, we study the extraction of correlated and individual components under real-world conditions, namely (i) the presence of gross non-Gaussian noise and (ii) temporally misaligned data. In this light, we propose a method for the Robust Correlated and Individual Component Analysis (RCICA) of two sets of data in the presence of gross, sparse errors. We furthermore extend RCICA to handle temporal incongruities arising in the data. To this end, two suitable optimization problems are solved. The generality of the proposed methods is demonstrated by applying them to four applications, namely (i) heterogeneous face recognition, (ii) multi-modal feature fusion for human behavior analysis (i.e., audio-visual prediction of interest and conflict), (iii) face clustering, and (iv) the temporal alignment of facial expressions. Experimental results on 2 synthetic and 7 real-world datasets indicate the robustness and effectiveness of the proposed methods on these application domains, outperforming other state-of-the-art methods in the field.
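    The abstract does not give the optimization details, but the decomposition it describes (shared structure, individual residuals, gross sparse errors) can be caricatured in a few lines. The sketch below alternates between fitting scikit-learn's CCA on cleaned views and soft-thresholding the residuals to absorb sparse errors; the parameter `tau`, the fixed iteration count, and the CCA-based reconstruction are my assumptions, not the paper's RCICA algorithm.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def soft_threshold(X, tau):
    """Elementwise shrinkage: the proximal operator of the l1 norm,
    used here to absorb gross sparse errors."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rcica_sketch(X1, X2, n_corr=2, tau=0.5, n_iter=10):
    """Crude stand-in for RCICA on two centered views: alternately fit
    CCA to the cleaned views and soft-threshold the residuals as sparse
    errors. Returns (correlated, individual, sparse) parts per view."""
    S1, S2 = np.zeros_like(X1), np.zeros_like(X2)
    cca = CCA(n_components=n_corr)
    for _ in range(n_iter):
        cca.fit(X1 - S1, X2 - S2)
        Z1, Z2 = cca.transform(X1 - S1, X2 - S2)
        C1 = Z1 @ cca.x_loadings_.T  # low-rank correlated part, view 1
        C2 = Z2 @ cca.y_loadings_.T  # low-rank correlated part, view 2
        S1 = soft_threshold(X1 - C1, tau)
        S2 = soft_threshold(X2 - C2, tau)
    return (C1, C2), (X1 - C1 - S1, X2 - C2 - S2), (S1, S2)

# Toy data: a shared 2-D signal embedded in both views, small Gaussian
# noise, plus a few large (gross) corruptions in view 1.
rng = np.random.default_rng(0)
shared = rng.normal(size=(200, 2))
X1 = shared @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))
X2 = shared @ rng.normal(size=(2, 12)) + 0.1 * rng.normal(size=(200, 12))
X1[rng.random(X1.shape) < 0.02] += 5.0
(C1, C2), (I1, I2), (S1, S2) = rcica_sketch(X1 - X1.mean(0), X2 - X2.mean(0))
print(S1[np.abs(S1) > 0].size, "entries flagged as gross errors")
```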

    Human-Computer Music Performance: From Synchronized Accompaniment to Musical Partner

    Live music performance with computers has motivated many research projects in science, engineering, and the arts. In spite of decades of work, it is surprising that there is not more technology for, and a better understanding of, the computer as music performer. We review the development of techniques for live music performance and outline our efforts to establish a new direction, Human-Computer Music Performance (HCMP), as a framework for a variety of coordinated studies. Our work in this area spans performance analysis, synchronization techniques, and interactive performance systems. Our goal is to enable musicians to incorporate computers into performances easily and effectively through a better understanding of requirements, new techniques, and practical, performance-worthy implementations. We conclude with directions for future work.
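    Synchronization in live accompaniment often reduces to predicting where the next beat will fall from recent beat observations. The sketch below shows one common baseline, a linear fit of beat time against beat index; it is an illustrative stand-in under that assumption, not the authors' HCMP synchronization method, and the tap times are invented.

```python
import numpy as np

def predict_next_beat(beat_times, window=8):
    """Predict the next beat time by fitting a line (beat index -> time)
    to the most recent detected beats; the slope estimates the current
    beat period. A common baseline for live tempo tracking."""
    recent = np.asarray(beat_times[-window:])
    idx = np.arange(len(recent))
    period, intercept = np.polyfit(idx, recent, 1)  # slope = period (s)
    return intercept + period * len(recent), 60.0 / period  # time, BPM

beats = [0.00, 0.52, 1.01, 1.55, 2.06, 2.55, 3.08, 3.57]  # made-up taps (s)
next_t, bpm = predict_next_beat(beats)
print(f"next beat ~{next_t:.2f}s, tempo ~{bpm:.0f} BPM")
```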

    An original framework for understanding human actions and body language by using deep neural networks

    The evolution of the fields of Computer Vision (CV) and Artificial Neural Networks (ANNs) has allowed the development of efficient automatic systems for the analysis of people's behaviour. By studying hand movements it is possible to recognize gestures, which people often use to communicate information in a non-verbal way; these gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively. The processing of body movements, in turn, plays a key role in the action recognition and affective computing fields: the former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements. Both are essential tasks in many computer vision applications, including event recognition and video surveillance. In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: in the first, a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) is proposed for the recognition of sign language and semaphoric hand gestures; the second module presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; finally, the last module provides a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs). The performance of LSTM-RNNs is explored in depth, due to their ability to model the long-term contextual information of temporal sequences, which makes them suitable for analysing body movements. All the modules were tested on challenging datasets well known in the state of the art, showing remarkable results compared to current literature methods.
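    To make the second module's design concrete, here is a minimal sketch of a two-branch stacked-LSTM classifier over 2D skeleton sequences in PyTorch. The branch layout (joint positions in one branch, frame-to-frame motion in the other), the layer sizes, and the joint count are assumptions for illustration, not the thesis architecture.

```python
import torch
import torch.nn as nn

class TwoBranchLSTM(nn.Module):
    """Sketch of a two-branch stacked-LSTM action classifier: one branch
    reads 2D joint positions, the other their temporal differences; the
    final hidden states are fused for classification. All sizes assumed."""
    def __init__(self, n_joints=18, hidden=128, n_classes=10):
        super().__init__()
        d = n_joints * 2  # (x, y) per joint
        self.pos_branch = nn.LSTM(d, hidden, num_layers=2, batch_first=True)
        self.mot_branch = nn.LSTM(d, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, skel):                 # skel: (batch, time, n_joints*2)
        motion = skel[:, 1:] - skel[:, :-1]  # first-order motion features
        _, (h_pos, _) = self.pos_branch(skel)
        _, (h_mot, _) = self.mot_branch(motion)
        fused = torch.cat([h_pos[-1], h_mot[-1]], dim=1)  # top layer states
        return self.head(fused)

model = TwoBranchLSTM()
logits = model(torch.randn(4, 32, 36))  # 4 clips, 32 frames, 18 joints
print(logits.shape)  # torch.Size([4, 10])
```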

    Designing a Sensor-Based Wearable Computing System for Custom Hand Gesture Recognition Using Machine Learning

    This thesis investigates how assistive technology can be made to facilitate communication for people who are unable to communicate, or have difficulty communicating, via vocal speech, and how this technology can be made more universal and compatible with the many different types of sign language that they use. Through this research, a fully customisable, stand-alone wearable device was developed that employs machine learning techniques to recognise individual hand gestures and translate them into text, images, and speech. The device can recognise and translate custom hand gestures by training a personal classifier for each user, relying on a small training sample size, that works online on an embedded system or mobile device with a classification accuracy of up to 99%. This was achieved through a series of iterative case studies, with user testing carried out by real users in their everyday environments and in public spaces.
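    The abstract does not name the classifier, but a per-user model trained from a small sample could look like the sketch below: compact statistics over a sensor window feed a k-nearest-neighbours classifier, which is cheap enough to run online on an embedded target. The feature set, the six-channel IMU assumption, and the synthetic calibration data are all illustrative, not the thesis design.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def imu_features(window):
    """Compact per-channel statistics of one sensor window (T x channels);
    cheap enough to compute online on an embedded device."""
    return np.concatenate([window.mean(0), window.std(0),
                           window.min(0), window.max(0)])

# Hypothetical per-user calibration: 5 labelled repetitions of each of
# 4 gestures, simulated here as 6-channel IMU windows of 50 samples.
rng = np.random.default_rng(1)
train_windows = [rng.normal(loc=g, size=(50, 6))
                 for g in range(4) for _ in range(5)]
train_labels = [g for g in range(4) for _ in range(5)]

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit([imu_features(w) for w in train_windows], train_labels)
print(clf.predict([imu_features(rng.normal(loc=2, size=(50, 6)))]))  # [2]
```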

    The analysis of bodily gestures in response to music: methods for embodied music cognition based on machine learning


    Enabling mobile microinteractions

    While much attention has been paid to the usability of desktop computers, mobile computers are quickly becoming the dominant platform. Because mobile computers may be used in nearly any situation, including while the user is actually in motion or performing other tasks, interfaces designed for stationary use may be inappropriate, and alternative interfaces should be considered. In this dissertation I consider the idea of microinteractions: interactions with a device that take less than four seconds to initiate and complete. Microinteractions are desirable because they may minimize interruption; that is, they allow for a tiny burst of interaction with a device so that the user can quickly return to the task at hand. My research concentrates on methods for enabling microinteractions through wrist-based interaction. I consider two modalities for this interaction: touchscreens and motion-based gestures. In the case of touchscreens, I consider the interface implications of making touchscreen watches usable with the finger, instead of the usual stylus, and investigate users' performance with a round touchscreen. For gesture-based interaction, I present a tool, MAGIC, for designing gesture-based interactive systems, and detail the evaluation of the tool.
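    Motion-gesture tools in this space commonly compare gesture traces with dynamic time warping (DTW), which tolerates differences in speed and length; whether MAGIC relies on DTW internally is not stated in the abstract, so the sketch below is a generic illustration of gesture-trace similarity rather than the dissertation's method.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two gesture traces, each an
    array of shape (time, channels); a standard way to compare motion
    gestures recorded at different speeds or lengths."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy check: a gesture scores far closer to a time-stretched copy of
# itself than to an unrelated random trace.
t = np.linspace(0, 1, 60)
g = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
g_slow = g[(np.arange(90) * 60) // 90]
noise = np.random.default_rng(0).normal(size=(60, 2))
print(dtw_distance(g, g_slow), dtw_distance(g, noise))
```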