
    Melody as Prosody: Toward a Usage-Based Theory of Music

    Thomas M. Pooley, Gary A. Tomlinson
    Rationalist modes of inquiry have dominated the cognitive science of music over the past several decades. This dissertation contests many rationalist assumptions, including its core tenets of nativism, modularity, and computationism, by drawing on a wide range of evidence from psychology, neuroscience, linguistics, and cognitive music theory, as well as original data from a case study of Zulu song prosody. An alternative biocultural approach to the study of music and mind is outlined that takes account of musical diversity by attending to shared cognitive mechanisms. Grammar emerges through use, and cognitive categories are learned and constructed in particular social contexts. This usage-based theory of music shows how domain-general cognitive mechanisms for pattern-finding and intention-reading are crucial to acquisition, and how Gestalt principles are invoked in perception. Unlike generative and other rationalist approaches, which focus on a series of idealizations and the cognitive 'competences' codified in texts and musical scores, the usage-based approach investigates actual performances in everyday contexts using instrumental measures of process. The study focuses on song melody because it is a property of all known musics. Melody is used for communicative purposes in both song and speech. Vocalized pitch patterning conveys a wide range of affective, propositional, and syntactic information through prosodic features shared by the two domains. The study of melody as prosody shows how gradient pitch features are crucial to the design and communicative functions of song melodies. The prosodic features shared by song and speech include speech tone, intonation, and pitch-accent. A case study of ten Zulu memulo songs shows that pitch is not used in the discrete or contrastive fashion proposed by many cognitive music theorists and most (generative) phonologists. Instead, there is a range of pitch categories that includes pitch targets, glides, and contours. These analyses also show that song melody has a multi-dimensional pitch structure, and that it is a dynamic adaptive system that is irreducible in its complexity.
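    The claim that song pitch is gradient rather than discrete rests on instrumental measurement of vocalized pitch in actual performances. The following is only a minimal sketch of what such a measurement might look like, assuming the librosa library; the file name, the 100 cents/s threshold, and the target/glide labels are illustrative assumptions, not values or methods taken from the dissertation.

    # Sketch: extract a gradient f0 contour from a recorded song phrase and
    # label frames as sustained pitch targets or glides by local slope.
    # Assumption: 'memulo_phrase.wav' is a stand-in file name.
    import numpy as np
    import librosa

    y, sr = librosa.load("memulo_phrase.wav", sr=None, mono=True)

    # Probabilistic YIN gives a continuous (gradient) f0 estimate per frame.
    f0, voiced, _ = librosa.pyin(y,
                                 fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"),
                                 sr=sr)

    times = librosa.times_like(f0, sr=sr)
    cents = 1200 * np.log2(f0 / 55.0)      # pitch in cents above A1
    slope = np.gradient(cents, times)       # rate of pitch change, cents per second

    # Illustrative threshold: frames changing slower than ~100 cents/s are
    # treated as pitch targets, faster frames as glides/contour movement.
    labels = np.where(np.abs(slope) < 100, "target", "glide")
    for t, c, lab in zip(times[voiced], cents[voiced], labels[voiced]):
        print(f"{t:6.2f}s  {c:7.1f} cents  {lab}")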

    Temporal integration of loudness as a function of level


    Faces and hands : modeling and animating anatomical and photorealistic models with regard to the communicative competence of virtual humans

    In order to be believable, virtual human characters must be able to communicate in a realistic, human-like fashion. This dissertation contributes to improving and automating several aspects of virtual conversations. We have proposed techniques to add non-verbal, speech-related facial expressions to audiovisual speech, such as head nods for emphasis. During conversation, humans experience shades of emotion much more frequently than the strong Ekmanian basic emotions. This prompted us to develop a method that interpolates between facial expressions of emotions to create new ones, based on an emotion model. In the area of facial modeling, we have presented a system to generate plausible 3D face models from vague mental images. It makes use of a morphable model of faces and exploits correlations among facial features. The hands also play a major role in human communication. Since the basis for every realistic animation of gestures must be a convincing model of the hand, we devised a physics-based anatomical hand model in which a hybrid muscle model drives the animations. The model was used to visualize complex hand movements captured using multi-exposure photography.
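    The idea of blending known expressions to obtain in-between emotions via an emotion model can be sketched roughly as follows. This is a minimal illustration assuming a 2D valence-arousal emotion space and blendshape weight vectors; the emotion coordinates, weights, and inverse-distance blending rule are placeholder assumptions, not the dissertation's actual model.

    # Sketch: derive a facial expression for an intermediate emotion by
    # distance-weighted interpolation of known expressions placed in a
    # 2D valence-arousal emotion space. All numbers are illustrative.
    import numpy as np

    # Known expressions: (valence, arousal) coordinates and blendshape
    # weight vectors (only 4 hypothetical blendshapes for brevity).
    known = {
        "joy":     (np.array([ 0.8,  0.5]), np.array([0.9, 0.1, 0.0, 0.3])),
        "sadness": (np.array([-0.7, -0.4]), np.array([0.0, 0.8, 0.2, 0.0])),
        "anger":   (np.array([-0.6,  0.7]), np.array([0.1, 0.2, 0.9, 0.4])),
        "neutral": (np.array([ 0.0,  0.0]), np.array([0.0, 0.0, 0.0, 0.0])),
    }

    def interpolate_expression(target, eps=1e-6):
        """Inverse-distance weighting of blendshape vectors in emotion space."""
        coords = np.stack([c for c, _ in known.values()])
        shapes = np.stack([w for _, w in known.values()])
        d = np.linalg.norm(coords - target, axis=1)
        if np.any(d < eps):                  # target coincides with a known emotion
            return shapes[np.argmin(d)]
        w = 1.0 / d
        w /= w.sum()
        return w @ shapes                    # blended blendshape weights

    # A mildly positive, slightly aroused state between neutral and joy.
    print(interpolate_expression(np.array([0.4, 0.2])))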

    Electroacoustical simulation of listening room acoustics for project ARCHIMEDES


    Proceedings of the Fifth Italian Conference on Computational Linguistics CLiC-it 2018 : 10-12 December 2018, Torino

    On behalf of the Program Committee, a very warm welcome to the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018). This edition of the conference is held in Torino. The conference is locally organised by the University of Torino and hosted in its prestigious main lecture hall, the “Cavallerizza Reale”. The CLiC-it conference series is an initiative of the Italian Association for Computational Linguistics (AILC) which, after five years of activity, has clearly established itself as the premier national forum for research and development in the fields of Computational Linguistics and Natural Language Processing, where leading researchers and practitioners from academia and industry meet to share their research results, experiences, and challenges.