40 research outputs found
Stay True to the Sound of History: Philology, Phylogenetics and Information Engineering in Musicology
This work investigates computational musicology for the study of tape music, tackling problems of stemmatics. These philological problems are analyzed with an innovative approach that considers the peculiarities of audio tape recordings. The paper presents a phylogenetic reconstruction strategy that relies on digitizing the analyzed tapes and converting each audio track into a two-dimensional spectrogram. This conversion allows a set of computer vision tools to align and equalize different tracks in order to infer the most likely transformation that converts one track into another. In the presented approach, the phylogeny analysis estimates the main editing techniques, intentional and unintentional alterations, and different configurations of the tape recorder. The proposed solution shows satisfactory robustness to the adoption of a wrong reading setup, together with good reconstruction accuracy of the phylogenetic tree: the reconstructed dependencies proved to be correct or plausible in 90% of the experimental cases.
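The alignment step described above can be sketched in a much simplified form. The snippet below is a toy stand-in, not the paper's actual pipeline: it converts two digitized tracks into magnitude spectrograms and cross-correlates their frequency-averaged energy envelopes to estimate the time shift between two copies of the same recording. The paper's computer-vision alignment and equalization inference are considerably more sophisticated.

```python
import numpy as np
from scipy import signal

def time_offset_seconds(track_a, track_b, fs, nperseg=1024):
    """Estimate the lag of track_a relative to track_b, in seconds
    (negative when track_b is a delayed copy of track_a), by
    cross-correlating the frequency-averaged envelopes of their
    magnitude spectrograms."""
    _, ta, Sa = signal.spectrogram(track_a, fs=fs, nperseg=nperseg,
                                   mode="magnitude")
    _, _, Sb = signal.spectrogram(track_b, fs=fs, nperseg=nperseg,
                                  mode="magnitude")
    ea = Sa.mean(axis=0)            # energy envelope of track_a over time
    eb = Sb.mean(axis=0)            # energy envelope of track_b over time
    corr = signal.correlate(ea - ea.mean(), eb - eb.mean(), mode="full")
    lag_frames = int(corr.argmax()) - (len(eb) - 1)
    return lag_frames * (ta[1] - ta[0])   # convert frame lag to seconds
```

The resolution of the estimate is one spectrogram hop, so this only finds coarse shifts; finer alignment would work on the spectrogram images themselves, as the paper does.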
Pattern recognition in facial expressions: algorithms and applications
Advisor: Hélio Pedrini. Doctoral dissertation, Universidade Estadual de Campinas, Instituto de Computação.
Abstract: Emotion recognition has become a relevant research topic in the scientific community, since it plays an essential role in the continuous improvement of human-computer interaction systems. It can be applied in various areas, for instance medicine, entertainment, surveillance, biometrics, education, social networks, and affective computing.
There are open challenges related to the development of emotion recognition systems based on facial expressions, such as obtaining data that reflect more spontaneous emotions and real scenarios. In this doctoral dissertation, we propose different methodologies for the development of emotion recognition systems based on facial expressions, as well as their applicability to other similar problems. The first is an emotion recognition methodology for occluded facial expressions based on the Census Transform Histogram (CENTRIST). Occluded facial expressions are reconstructed using an algorithm based on Robust Principal Component Analysis (RPCA). Facial expression features are then extracted with CENTRIST, as well as with Local Binary Patterns (LBP), Local Gradient Coding (LGC), and an LGC extension. The generated feature space is reduced by applying Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). K-Nearest Neighbors (KNN) and Support Vector Machine (SVM) algorithms are used for classification. This method reached competitive accuracy rates for both occluded and non-occluded facial expressions. The second is a dynamic facial expression recognition method based on Visual Rhythms (VR) and Motion History Images (MHI), such that a fusion of both descriptors encodes appearance, shape, and motion information from the video sequences. For feature extraction, the Weber Local Descriptor (WLD), CENTRIST, the Histogram of Oriented Gradients (HOG), and the Gray-Level Co-occurrence Matrix (GLCM) are employed. This approach offers a new direction for dynamic facial expression recognition, together with an analysis of the relevance of facial parts. The third is an effective method for audio-visual emotion recognition based on speech and facial expressions. The methodology involves a hybrid neural network to extract audio and visual features from videos.
For audio extraction, a Convolutional Neural Network (CNN) based on the log Mel-spectrogram is used, whereas a CNN built on the Census Transform is employed for visual extraction. The audio and visual features are reduced by PCA and LDA, then classified through KNN, SVM, Logistic Regression (LR), and Gaussian Naïve Bayes (GNB). This approach achieves competitive recognition rates, especially on a spontaneous data set. The fourth methodology investigates the problem of detecting Down syndrome from photographs. A geometric descriptor is proposed to extract facial features. Experiments performed on a public data set show the effectiveness of the developed methodology. The last methodology addresses the recognition of genetic disorders in photographs. It extracts facial attributes using deep features and anthropometric measurements. Experiments are conducted on a public data set, achieving competitive recognition rates.
Doctorate in Computer Science. 140532/2019-6, CNPQ, CAPE
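As an illustration of the kind of descriptor the first methodology builds on, here is a minimal NumPy sketch of the Census Transform and its CENTRIST histogram. The dissertation's actual implementation is not reproduced here, and this version omits the spatial pyramid commonly paired with CENTRIST.

```python
import numpy as np

def census_transform(img):
    """8-bit Census Transform: each interior pixel is encoded by comparing
    it with its 8 neighbours (bit set where the neighbour is darker)."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh < centre).astype(np.uint8) << bit
    return code

def centrist(img):
    """CENTRIST descriptor: normalised 256-bin histogram of Census codes,
    so that images of different sizes yield comparable feature vectors."""
    codes = census_transform(img)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

In the pipeline described above, vectors like this would then be reduced with PCA/LDA and fed to a KNN or SVM classifier.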
An Investigation into the Use of Artificial Intelligence Techniques for the Analysis and Control of Instrumental Timbre and Timbral Combinations
Researchers have investigated harnessing computers as a tool to aid music composition for over 70 years. For the most part, such research has focused on creating algorithms that work with pitch and rhythm, which has resulted in a selection of sophisticated systems. Although the musical possibilities of these systems are vast, they do not directly consider another important characteristic of sound. Timbre can be defined as the set of all sound attributes, except pitch, loudness, and duration, that allow us to distinguish and recognize two sounds as dissimilar. This feature plays an essential role in combining instruments, as it involves mixing instrumental properties to create unique textures that convey specific sonic qualities. Within this thesis, we explore techniques for the analysis and control of instrumental timbre and timbral combinations.
This thesis begins by investigating the link between musical timbre, auditory perception, and psychoacoustics for sounds emerging from instrument mixtures. This led to the choice of verbal descriptors of timbral qualities to represent the auditory perception of instrument combination sounds. The thesis then reports on the development of methods and tools designed to automatically retrieve and identify perceptual qualities of timbre within audio files, using specific musical acoustic features and artificial intelligence algorithms. Several perceptual experiments were conducted to evaluate the correlation between the selected acoustic cues and human perception. The results of these evaluations confirmed the potential and suitability of the presented approaches. Finally, these developments helped to design a perceptually oriented generative system harnessing aspects of artificial intelligence to combine sampled instrument notes.
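As one concrete example of an acoustic cue commonly correlated with timbral verbal descriptors such as "brightness", the sketch below computes the spectral centroid of a signal. Whether the thesis uses this exact feature is an assumption; it simply stands in for the "specific musical acoustic features" mentioned above.

```python
import numpy as np

def spectral_centroid(audio, fs, nfft=2048):
    """Spectral centroid of a mono signal: the magnitude-weighted mean
    frequency of the spectrum, a classic correlate of perceived
    brightness. Illustrative only; not the thesis's actual feature set."""
    mag = np.abs(np.fft.rfft(audio, n=nfft))      # magnitude spectrum
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)     # bin centre frequencies (Hz)
    return float((freqs * mag).sum() / mag.sum())
```

A note with more high-frequency energy yields a higher centroid, so ordering sounds by this value gives a crude bright-to-dark scale of the kind such descriptor studies rely on.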
The findings of this exploration demonstrate that an artificial intelligence approach can help to harness the perceptual aspects of instrumental timbre and timbral combinations. This investigation suggests that established methods of measuring timbral qualities, based on a diverse selection of sounds, also work for sounds created by combining instrument notes. The development of tools designed to automatically retrieve and identify perceptual qualities of timbre also helped in designing a comparative scale that goes towards standardising metrics for comparing timbral attributes. Finally, this research demonstrates that perceptual characteristics of timbral qualities, using verbal descriptors as a representation, can be implemented in an intelligent computing system designed to combine sampled instrument notes conveying specific perceptual qualities.
Arts and Humanities Research Council funded 3D3 Centre for Doctoral Training
Cultural Context-Aware Models and IT Applications for the Exploitation of Musical Heritage
Information engineering has always expanded its scope by inspiring innovation in different scientific disciplines. In particular, over the last sixty years, music and engineering have forged a strong connection in the discipline known as “Sound and Music Computing”. Musical heritage is a paradigmatic case that includes several multi-faceted cultural artefacts and traditions. Several issues arise from the analog-to-digital transfer of cultural objects, concerning their creation, preservation, access, analysis, and experience. The keystone is the relationship of these digitized cultural objects with their carrier and cultural context. The terms “cultural context” and “cultural context awareness” are delineated, alongside the concepts of contextual information and metadata; since these maintain the integrity of the object, its meaning, and its cultural context, their role is critical. This thesis explores three main case studies concerning historical audio recordings and ancient musical instruments, aiming to delineate models to preserve, analyze, access, and experience the digital versions of these three prominent examples of musical heritage.
The first case study concerns analog magnetic tapes and, in particular, tape music, an experimental music genre born in the second half of the twentieth century. This case study has relevant implications from the musicological, philological, and archival points of view, since the carrier plays a paramount role and its tight connection with the content can easily break during the digitization process or the access phase. To help musicologists and audio technicians in their work, several tools based on Artificial Intelligence are evaluated in tasks such as discontinuity detection and equalization recognition. By considering the peculiarities of tape music, the philological problem of stemmatics in digitized audio documents is tackled: an algorithm based on phylogenetic techniques is proposed and assessed, confirming the suitability of these techniques for this task. A methodology for historically faithful access to digitized tape music recordings is then introduced, which considers contextual information and its relationship with the carrier and the replay device. Based on this methodology, an Android app that virtualizes a tape recorder is presented, together with its assessment. Furthermore, two web applications are proposed to faithfully experience digitized 78 rpm discs and magnetic tape recordings, respectively. Finally, a prototype web application for musicological analysis is presented, which aims to concentrate a relevant part of the knowledge acquired in this work into a single interface.
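A minimal illustration of the discontinuity detection task mentioned above: the sketch flags abrupt jumps in short-time energy, a naive baseline for locating tape splices or dropouts. The thesis evaluates AI-based detectors, which this simple threshold rule only approximates.

```python
import numpy as np

def detect_discontinuities(audio, fs, frame_ms=10.0, jump_db=20.0):
    """Return the times (in seconds) where the RMS level jumps by more
    than jump_db between consecutive short frames -- a crude energy-based
    stand-in for learned discontinuity detectors."""
    frame = int(fs * frame_ms / 1000)             # samples per frame
    n = len(audio) // frame
    x = np.asarray(audio[:n * frame], dtype=float).reshape(n, frame)
    rms_db = 20 * np.log10(np.sqrt((x ** 2).mean(axis=1)) + 1e-12)
    jumps = np.abs(np.diff(rms_db)) > jump_db     # level change between frames
    return [(i + 1) * frame / fs for i in np.flatnonzero(jumps)]
```

Real splices also show timbral and phase discontinuities that a pure level detector misses, which is precisely why learned detectors are evaluated in the thesis.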
The second case study is a corpus of Arab-Andalusian music suitable for computational research, which opens new opportunities for musicological studies through data-driven analysis. The description of the corpus is based on the five criteria formalized in the CompMusic project of Universitat Pompeu Fabra, Barcelona: purpose, coverage, completeness, quality, and re-usability. Four Jupyter notebooks were developed to provide computational musicologists with a useful tool for analyzing and using the data and metadata of the corpus.
The third case study concerns an exceptional historical musical instrument: an ancient Pan flute exhibited at the Museum of Archaeological Sciences and Art of the University of Padova. The final objective was the creation of a multimedia installation to valorize this precious artifact and to allow visitors to interact with the archaeological find and learn its history. The case study provided the opportunity to develop a methodology suitable for the valorization of this ancient musical instrument, but also extensible to other artifacts or museum collections. Both the methodology and the resulting multimedia installation are presented, followed by an assessment carried out by a multidisciplinary group of experts.