
    Auditory masking and the Precedence Effect in Studies of Musical Timekeeping

    Musical timekeeping is an important and evolving area of research with applications in a variety of music education and performance situations. Studies in this field are often concerned with measuring the accuracy or consistency of human participants, whatever the purpose under investigation. Our initial explorations suggest that little has been done to consider the role that auditory masking, specifically the precedence effect, plays in the study of human timekeeping tasks. In this paper, we highlight the importance of integrating masking into studies of timekeeping and suggest areas for discussion and future research to address shortfalls in the literature.

    The design of future music technologies: ‘sounding out’ AI, immersive experiences & brain controlled interfaces

    This paper outlines some of the issues that we will be discussing in the workshop “The Design of Future Music Technologies: ‘Sounding Out’ AI, Immersive Experiences & Brain Controlled Interfaces.” Musical creation, performance, and consumption are at a crossroads: how will future technologies be shaped by exciting and innovative developments in artificial intelligence, immersive technologies, and emerging mechanisms for interfacing with music, such as brain-controlled systems? In many respects this document acts as a mini survey, made up of supporting material and a bibliography of works, and offers a series of quotes from work that has mainly emerged from the FAST Project – see: www.semanticaudio.co.uk.

    Real time Pattern Based Melodic Query for Music Continuation System

    This paper presents a music continuation system that uses pattern matching to find patterns within a library of MIDI files, employing a real-time algorithm so that the system can serve as an interactive DJ system. The paper also examines the influence of different kinds of pattern matching on MIDI file analysis. Many pattern-matching algorithms have been developed for text analysis, voice recognition, and bioinformatics, but because the domain knowledge and nature of the problems differ, these algorithms are not ideally suited to real-time MIDI processing for an interactive music continuation system. By capturing patterns in real time via a MIDI keyboard, the system searches for those patterns within a corpus of MIDI files and continues playing from the user's musical input. Four types of pattern matching are combined in a single system: exact pattern matching, reverse pattern matching, pattern matching with mismatches, and combinatorial pattern matching. After computing the results of all four types for each MIDI file, the system compares the results and selects the MIDI file in the library with the highest matching probability.
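As an illustration of two of the four strategies named above, here is a minimal sketch of exact matching and matching with mismatches over MIDI pitch sequences. The function names and the toy corpus are hypothetical, invented for this example, not taken from the paper:

```python
def exact_match_positions(query, corpus):
    """Return start indices where `query` occurs verbatim in `corpus`."""
    n, m = len(corpus), len(query)
    return [i for i in range(n - m + 1) if corpus[i:i + m] == query]

def match_with_mismatches(query, corpus, k=1):
    """Return start indices where `query` aligns with at most k mismatches."""
    n, m = len(corpus), len(query)
    hits = []
    for i in range(n - m + 1):
        mismatches = sum(a != b for a, b in zip(query, corpus[i:i + m]))
        if mismatches <= k:
            hits.append(i)
    return hits

corpus = [60, 62, 64, 65, 67, 60, 62, 64]  # a C-major fragment as MIDI note numbers
print(exact_match_positions([60, 62, 64], corpus))        # [0, 5]
print(match_with_mismatches([60, 62, 65], corpus, k=1))   # [0, 5]
```

A real system would score each matcher's hits per file and rank the corpus by the combined score, as the abstract describes.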

    Collaborative Artificial Intelligence in Music Production

    The use of technology has revolutionized the process of music composition, recording, and production over the last 30 years. One long-standing fusion of technology and music is the use of artificial intelligence in music composition. However, much less attention has been given to the application of AI to collaboratively composing and producing a piece of recorded music. The aim of this project is to explore such use of artificial intelligence in music production. The research presented here includes discussion of an autoethnographic study of the interactions between songwriters, with the intention that these can be used to model the collaborative process and that a computational system could be trained on this information. The research indicated that repeated patterns occurred in the interactions of the participating songwriters.

    High-Level Analysis of Audio Features for Identifying Emotional Valence in Human Singing

    Emotional analysis continues to be a topic that receives much attention in the audio and music community. The potential to link human affective state with the emotional content or intention of musical audio has a variety of applications in fields such as improving the user experience of digital music libraries and music therapy. Less work has been directed toward the emotional analysis of human a cappella singing. Recently, the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) was released, which includes emotionally validated human singing samples. In this work, we apply established audio analysis features to determine whether these can be used to detect underlying emotional valence in human singing. Results indicate that the short-term audio features of energy, spectral centroid (mean), spectral centroid (spread), spectral entropy, spectral flux, spectral rolloff, and fundamental frequency can be useful predictors of emotion, although their efficacy is not consistent across positive and negative emotions.
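For instance, the spectral centroid (mean) named above is the magnitude-weighted mean frequency of a frame's spectrum. A minimal NumPy sketch, where the test tone and sample rate are illustrative choices, not values from the RAVDESS study:

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Magnitude-weighted mean frequency (Hz) of one audio frame."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))

sr = 16000
t = np.arange(1024) / sr
# 437.5 Hz = exactly 28 cycles per 1024-sample frame, so the spectrum
# has a single bin and the centroid sits on the tone's frequency.
tone = np.sin(2 * np.pi * 437.5 * t)
print(spectral_centroid(tone, sr))  # ≈ 437.5
```

Short-term features like this are typically computed per frame and then summarized (mean, spread) over each sung phrase before classification.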

    Exploration of the characteristics and trends of electric vehicle crashes: a case study in Norway

    With the rapid growth of electric vehicles (EVs) over the past decade, many new traffic safety challenges are emerging. Using Norwegian crash data from 2011 to 2018, this study gives an overview of the status quo of EV crashes. Over the survey period, the proportion of EV crashes among total traffic crashes in Norway rose from zero to 3.11%. In terms of severity, however, EV crashes do not show statistically significant differences from internal combustion engine vehicle (ICEV) crashes. Compared with ICEV crashes, EV crashes occur disproportionately during weekday peak hours, in urban areas, at roadway junctions, on low-speed roadways, and in good-visibility conditions, which can be attributed to the fact that EVs in Norway are mainly used for local urban commuting. In addition, EVs are confirmed to be much more likely to collide with cyclists and pedestrians, probably due to their low-noise engines. Separate logistic regression models are then built to identify important factors influencing the severity of ICEV and EV crashes, respectively. Many factors show very different effects on ICEV and EV crashes, which implies the need to re-evaluate many current traffic safety strategies in the EV era. Although Norwegian data are analyzed here, the findings are expected to provide new insights for other countries undergoing automotive electrification.
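The severity models mentioned above fit a binary outcome (severe vs. non-severe) against crash predictors via logistic regression. A minimal sketch on synthetic stand-in data; the predictors, coefficients, and sample are invented for illustration and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for crash records: two standardized predictors
# (imagine speed limit and a junction indicator) and a binary severity label
# drawn from a logistic model with known coefficients.
X = rng.normal(size=(200, 2))
true_w = np.array([1.5, -0.8])
y = (rng.random(200) < 1 / (1 + np.exp(-(X @ true_w)))).astype(float)

# Fit by plain gradient descent on the average log-loss.
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))        # predicted severity probability
    w -= 0.1 * (X.T @ (p - y)) / len(y)   # gradient step

print(np.round(w, 2))  # coefficient signs should match true_w
```

In practice the study's per-factor effects would be read off the fitted coefficients (or odds ratios), estimated separately for the ICEV and EV subsets.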

    Non-Verbal Communication with Physiological Sensors: The Aesthetic Domain of Wearables and Neural Networks

    Historically, communication implies the transfer of information between bodies, yet this phenomenon is constantly adapting to new technological and cultural standards. In a digital context, it is commonplace to envision systems that revolve around verbal modalities. However, behavioural analysis grounded in psychology research calls attention to the emotional information disclosed by non-verbal social cues, in particular actions that are involuntary. This notion has circulated widely through various interdisciplinary computing research fields, from which multiple studies have arisen correlating non-verbal activity with socio-affective inferences. These are often derived from some form of motion capture and other wearable sensors, measuring the ‘invisible’ bioelectrical changes that occur inside the body. This thesis proposes a motivation and methodology for using physiological sensory data as an expressive resource for technology-mediated interactions. It begins with a thorough discussion of state-of-the-art technologies and established design principles on this topic, then applies these to a novel approach alongside a selection of practice works that complement it. We advocate for aesthetic experience, experimenting with abstract representations. Atypically for prevailing Affective Computing systems, the intention is not to infer or classify emotion but rather to create new opportunities for rich gestural exchange, unconfined to the verbal domain. Given the preliminary proposition of non-representation, we justify a correspondence with modern machine learning and multimedia interaction strategies, applying an iterative, human-centred approach to improve personalisation without compromising the emotional potential of bodily gesture.
Where related studies in the past have successfully provoked strong design concepts through innovative fabrications, these are typically limited to simple linear, one-to-one mappings and often neglect multi-user environments; we foresee a vast potential here. In our use cases, we adopt neural network architectures to generate highly granular biofeedback from low-dimensional input data. We present the following proofs of concept: Breathing Correspondence, a wearable biofeedback system inspired by somaesthetic design principles; Latent Steps, a real-time autoencoder that represents bodily experiences from sensor data, designed for dance performance; and Anti-Social Distancing Ensemble, an installation for public space interventions, analysing physical distance to generate a collective soundscape. Key findings are extracted from the individual reports to formulate an extensive technical and theoretical framework around this topic. The projects first aim to embrace alternative perspectives already established within Affective Computing research. From there, these concepts evolve further, bridging theories from contemporary creative and technical practice with the advancement of biomedical technologies.
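The Latent Steps prototype described above centres on a real-time autoencoder over sensor data. A minimal NumPy sketch of that idea, compressing multichannel input to a low-dimensional latent; the channel count, latent size, and random stand-in data are assumptions for illustration, not the thesis's actual signals or architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for low-dimensional physiological input (e.g. 8 sensor channels),
# compressed through a 2-D latent bottleneck.
X = rng.normal(size=(256, 8))

d_in, d_lat = 8, 2
We = rng.normal(scale=0.1, size=(d_in, d_lat))   # encoder weights
Wd = rng.normal(scale=0.1, size=(d_lat, d_in))   # decoder weights

for _ in range(500):
    Z = np.tanh(X @ We)                          # encode
    Xh = Z @ Wd                                  # decode (reconstruction)
    err = Xh - X
    # Gradient steps on the mean squared reconstruction error:
    Wd -= 0.01 * (Z.T @ err) / len(X)            # through the linear decoder
    dZ = (err @ Wd.T) * (1 - Z ** 2)             # through the tanh encoder
    We -= 0.01 * (X.T @ dZ) / len(X)

latent = np.tanh(X @ We)
print(latent.shape)  # (256, 2)
```

In a performance setting, the two latent coordinates would be streamed per frame and mapped to visual or sonic feedback rather than printed.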