
    SMA Technical Report

    Technical report from pilot studies in the Sensing Music-related Actions group. The report presents simple motion sensor technology and issues regarding pre-processing of music-related motion data. In cognitive music research, the main focus is the relationship between music and human beings. This involves emotions, moods, perception, expression, interaction with other people, and interaction with musical instruments and other interfaces, among many other things. Because music is a subjective experience, verbal utterances about these aspects tend to be coloured by the person who makes them: they are limited by that person's vocabulary and by the process of consciously transforming inner feelings and experiences into words (Leman 2007: 5f). Gesture research has therefore become popular among researchers seeking a deeper understanding of how people interact with music, using methods such as infrared-sensitive cameras (Wiesendanger et al. 2006) or video recordings in combination with MIDI (Jabusch 2006). This paper presents methods used in a pilot study for the Sensing Music-related Actions project at the Department of Musicology and the Department of Informatics at the University of Oslo. I discuss the methods for acquiring and analysing gestural data in this project, in particular the use of sensors for measuring movement and tracking absolute position. An overarching goal of the project is to develop methods for studying gestures in musical performance: gathering data, analysing it, and organizing it so that we and others can easily find and understand it.
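    The pre-processing mentioned above can be illustrated with a minimal sketch: smoothing a noisy one-dimensional accelerometer stream and integrating it to a velocity estimate. This is a generic illustration only, not the report's actual pipeline; the sample rate, window size and signal are assumed placeholder values.

        # Minimal pre-processing sketch for motion sensor data (illustration only).
        # Assumed: a one-dimensional acceleration signal sampled at 100 Hz.
        import numpy as np

        FS = 100.0      # assumed sample rate in Hz
        WINDOW = 5      # assumed moving-average window, in samples

        def smooth(signal, window=WINDOW):
            """Moving-average filter to suppress high-frequency sensor noise."""
            kernel = np.ones(window) / window
            return np.convolve(signal, kernel, mode="same")

        def velocity(acceleration, fs=FS):
            """Rough velocity estimate by cumulative (rectangle-rule) integration."""
            return np.cumsum(acceleration) / fs

        t = np.arange(0.0, 2.0, 1.0 / FS)
        accel = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)  # synthetic signal
        print(velocity(smooth(accel))[:5])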

    Experimental Object-Oriented Modelling

    This thesis examines object-oriented modelling in experimental system development. Object-oriented modelling aims at representing concepts and phenomena of a problem domain in terms of classes and objects. Experimental system development seeks active experimentation in a system development project through, e.g., technical prototyping and active user involvement. We introduce and examine "experimental object-oriented modelling" as the intersection of these practices.
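    As a purely hypothetical illustration of representing a problem-domain concept as a class (not an example from the thesis), a library-loan concept and one concrete phenomenon might be modelled as follows.

        # Hypothetical illustration: a problem-domain concept ("library loan") as a class,
        # and one concrete phenomenon as an object. Not an example from the thesis.
        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class Loan:                               # concept of the problem domain
            borrower: str
            title: str
            due: date

            def is_overdue(self, today: date) -> bool:
                return today > self.due

        loan = Loan("A. Reader", "Systems Analysis", date(2024, 1, 31))   # phenomenon
        print(loan.is_overdue(date(2024, 2, 15)))                         # True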

    On the analysis of musical performance by computer

    Existing automatic methods of analysing musical performance can generally be described as music-oriented DSP analysis. However, this merely identifies attributes or artefacts that can be found within the performance. This information, though invaluable, is not an analysis of the performance process. The process of performance first involves an analysis of the score (whether from a printed sheet or from memory), and through this analysis the performer decides how to perform the piece. Thus, an analysis of the performance process requires an analysis of the performance attributes and artefacts in the context of the musical score. With this type of analysis it is possible to ask profound questions such as “why or when does a performer use this technique?”. The work presented in this thesis provides the tools required to investigate these performance issues. A new computer representation, Performance Markup Language (PML), is presented which combines the domains of the musical score, performance information and analytical structures. This representation provides a framework within which information in these domains can be cross-referenced internally, and with which information in external files can be marked up. Most importantly, the representation defines the relationship between performance events and the corresponding objects within the score, thus facilitating analysis of performance information in the context of the score and of analyses of the score. To evaluate the correspondences between performance notes and notes within the score, the performance must be analysed using a score-performance matching algorithm. A new score-performance matching algorithm, based on Dynamic Programming (DP), is presented in this document. In score-performance matching there are situations where dynamic programming alone is not sufficient to accurately identify correspondences. The algorithm presented here makes use of analyses of both the score and the performance to overcome the inherent shortcomings of the DP method and to improve the accuracy and robustness of DP matching in the presence of performance errors and expressive timing. Together with the musical score and performance markup, the correspondences identified by the matching algorithm provide the minimum information required to investigate musical performance, and form the foundation of a PML representation. The Microtonalism project investigated the issues surrounding the performance of microtonal music on conventional (i.e. non-microtonal-specific) instruments, namely voice. This included the automatic analysis of vocal performances to extract information regarding pitch accuracy, which was possible using tools developed with the performance representation and the matching algorithm.
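    The matching step described above rests on dynamic programming. The sketch below shows a generic DP alignment of a performed pitch sequence against a score pitch sequence, in the style of edit distance; it is not the thesis's algorithm, and the unit costs and MIDI pitch values are arbitrary assumptions.

        # Minimal dynamic-programming alignment of performance notes to score notes.
        # Generic sketch only: pitches as MIDI numbers, unit costs chosen arbitrarily.

        def align(score, perf, sub_cost=1, gap_cost=1):
            """Return a DP cost matrix aligning two pitch sequences (edit-distance style)."""
            n, m = len(score), len(perf)
            cost = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                cost[i][0] = i * gap_cost            # score note with no performed note (omission)
            for j in range(1, m + 1):
                cost[0][j] = j * gap_cost            # performed note not in score (insertion)
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    match = 0 if score[i - 1] == perf[j - 1] else sub_cost
                    cost[i][j] = min(cost[i - 1][j - 1] + match,   # match / wrong note
                                     cost[i - 1][j] + gap_cost,    # omitted score note
                                     cost[i][j - 1] + gap_cost)    # extra performed note
            return cost

        score = [60, 62, 64, 65, 67]      # C D E F G
        perf  = [60, 62, 63, 65, 67, 69]  # one wrong note (63) and one extra note (69)
        print(align(score, perf)[-1][-1])  # total alignment cost: 2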

    A system for the analysis of musical data

    The role of music analysis is to enlighten our understanding of a piece of music. The role of musical performance analysis is to help us understand how a performer interprets a piece of music. The current work provides a tool which combines music analysis with performance analysis. By combining music and performance analysis in one system, new questions can be asked of a piece of music: how is the structure of a piece reflected in the performance, and how can the performance enlighten our understanding of the piece's structure? The current work describes a unified database which can store and present a musical score alongside associated performance data and musical analyses. Using a general-purpose representation language, Performance Markup Language (PML), aspects of performance are recorded and analysed, and data thus acquired from one project is made available to others. Presentation involves high-quality scores suitably annotated with the requested information; such output is easily and directly accessible to musicians, performance scientists and analysts. We define a set of data structures and operators which can operate on musical pitch and musical time, and use them to form the basis of a query language for a musical database. The database can store musical information (score, gestural data, etc.), and querying it results in annotations of the musical score. The database is capable of storing musical score information and performance data and cross-referencing them. It is equipped with the necessary primitives to execute music-analytical queries, highlight notes identified from the score, and display performance data alongside the score.
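    To give a flavour of the pitch/time primitives and queries described above, here is a hypothetical sketch; the Note structure and interval_query function are invented for illustration and are not the actual PML database API.

        # Hypothetical sketch of musical pitch/time primitives and a simple query.
        # Not the PML database API; names and structures are invented for illustration.
        from dataclasses import dataclass

        @dataclass
        class Note:
            onset: float        # score time in beats
            duration: float     # length in beats
            midi_pitch: int     # pitch as a MIDI note number

        def interval_query(notes, semitones):
            """Return pairs of consecutive notes separated by the given interval."""
            ordered = sorted(notes, key=lambda n: n.onset)
            return [(a, b) for a, b in zip(ordered, ordered[1:])
                    if b.midi_pitch - a.midi_pitch == semitones]

        score = [Note(0, 1, 60), Note(1, 1, 64), Note(2, 1, 67), Note(3, 1, 72)]
        for a, b in interval_query(score, 4):        # ascending major thirds
            print(f"major third at beats {a.onset}-{b.onset}")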

    An XML Coding Scheme for Multimodal Corpus Annotation

    Multimodality has become one of today's most crucial challenges both for linguistics and computer science, entailing theoretical issues as well as practical ones (verbal interaction description, human-machine dialogues, virtual reality, etc.). Understanding interaction processes is one of the main targets of these sciences, and requires taking into account the whole set of modalities and the way they interact. From a linguistic standpoint, language and speech analysis are based on studies of distinct research fields, such as phonetics, phonemics, syntax, semantics, pragmatics or gesture studies. Each of them has been investigated in the past either separately or in relation to another field that was considered closely connected (e.g. syntax and semantics, prosody and syntax, etc.). The perspective adopted by modern linguistics is considerably broader: even though each domain reveals a certain degree of autonomy, it cannot be accounted for independently of its interactions with the other domains. Accordingly, the study of the interaction between the fields appears to be as important as the study of each distinct field, and is a prerequisite for the elaboration of a valid theory of language. However, as important as the needs in this area may be, high-level multimodal resources and adequate methods for constructing them are scarce and unequally developed. Ongoing projects mainly focus on one modality as a main target, with an alternate modality as an optional complement. Moreover, coding standards in this field remain very partial and do not cover all the needs of multimodal annotation. One of the first issues we have to face is the definition of a coding scheme that responds adequately to the needs of the various levels encompassed, from phonetics to pragmatics or syntax. While working in the general context of international coding standards, we plan to create a specific coding standard designed to meet the specific needs of multimodal annotation, as available solutions in the area do not seem to be totally satisfactory.
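    As a purely illustrative sketch of what a multimodal annotation might look like in XML (the element and attribute names are invented, not the coding scheme proposed in the paper), the snippet below builds a small document in which a speech token and a gesture phase are aligned through a shared timeline.

        # Illustrative only: invented element/attribute names, not the paper's coding scheme.
        # Builds a tiny multimodal annotation where speech and gesture tiers share a timeline.
        import xml.etree.ElementTree as ET

        corpus = ET.Element("annotation")
        timeline = ET.SubElement(corpus, "timeline")
        for i, t in enumerate([0.00, 0.35, 0.80]):
            ET.SubElement(timeline, "point", id=f"t{i}", time=str(t))

        speech = ET.SubElement(corpus, "tier", type="speech")
        ET.SubElement(speech, "token", start="t0", end="t1").text = "hello"

        gesture = ET.SubElement(corpus, "tier", type="gesture")
        ET.SubElement(gesture, "phase", start="t0", end="t2", kind="deictic")

        print(ET.tostring(corpus, encoding="unicode"))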

    Stereoscopic Sketchpad: 3D Digital Ink

    --Context-- This project looked at the development of a stereoscopic 3D environment in which a user is able to draw freely in all three dimensions. The main focus was on the storage and manipulation of the ‘digital ink’ with which the user draws. For a drawing and sketching package to be effective it must not only have an easy-to-use interface, it must also handle all input data quickly and efficiently so that the user can focus fully on their drawing. --Background-- When it comes to sketching in three dimensions, the majority of applications currently available rely on vector-based drawing methods. This is primarily because the applications are designed to take a user's two-dimensional input and transform it into a three-dimensional model. Having the sketch represented as vectors makes it simpler for the program to act upon its geometry and thus convert it to a model. There are a number of methods to achieve this aim, including Gesture Based Modelling, Reconstruction and Blobby Inflation. Other vector-based applications focus on the creation of curves, allowing the user to draw within or on existing 3D models, and on the creation of wireframe-type models. These stroke-based applications bring the user closer to traditional sketching than the more structured modelling methods detailed above. While the field is currently inundated with vector-based applications focused mainly on sketch-based modelling, there are significantly fewer voxel-based applications. The majority of these focus on the deformation and sculpting of voxmaps, almost the opposite of drawing and sketching, and on the creation of three-dimensional voxmaps from standard two-dimensional pixmaps. How to sketch freely within a scene represented by a voxmap has rarely been explored, which is surprising when so many of the standard 2D drawing programs in use today are pixel based. --Method-- As part of this project a simple three-dimensional drawing program was designed and implemented using C and C++. This tool, known as Sketch3D, was created using a Model View Controller (MVC) architecture. Due to the modular nature of Sketch3D's system architecture it is possible to plug a range of different data structures into the program to represent the ink in a variety of ways. A series of data structures were implemented and tested for efficiency: a simple list, a 3D array, and an octree. They were tested for the time it takes to insert or remove points, how easily points can be manipulated once stored, and how the number of points stored affects the draw and rendering times. One of the key issues raised by this project was devising a means by which a user can draw in three dimensions using only two-dimensional input devices. The method settled upon and implemented uses the mouse or a digital pen to sketch as one would in a standard 2D drawing package, while linking the up and down keyboard keys to the current depth, allowing the user to move in and out of the scene as they draw. A couple of user interface tools were also developed to assist the user: a 3D cursor, and a toggle which, when on, highlights all of the points intersecting the depth plane on which the cursor currently resides. These tools allow the user to see exactly where they are drawing in relation to previously drawn lines.
    --Results-- The tests conducted on the data structures clearly revealed that the octree was the most effective data structure. While not the most efficient in every area, it avoids the major pitfalls of the other structures. The list was extremely quick to render and draw to the screen but suffered severely when it came to finding and manipulating points already stored. In contrast, the three-dimensional array could erase or manipulate points effectively, but its draw time rendered the structure effectively useless, taking huge amounts of time to draw each frame. The focus of this research was on how a 3D sketching package would go about storing and accessing the digital ink. This is just a basis for further research in the area, and many issues touched upon in this paper will require a more in-depth analysis. The primary area of this future research would be the creation of an effective user interface and the introduction of regular sketching-package features such as the saving and loading of images.
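    The abstract compares a list, a 3D array and an octree for storing ink points. Below is a minimal, generic point-octree sketch (in Python, whereas Sketch3D itself was written in C and C++); the bucket size and cube bounds are arbitrary, and this is only an illustration of the data structure, not the Sketch3D implementation.

        # Generic point-octree sketch (illustration only; not the Sketch3D C/C++ code).
        # Each node holds up to CAPACITY points, then splits its cube into 8 octants.
        CAPACITY = 8   # arbitrary bucket size

        class Octree:
            def __init__(self, center, half):       # cubic region: center (x, y, z), half-width
                self.center, self.half = center, half
                self.points = []
                self.children = None                # 8 sub-cubes after a split

            def insert(self, p):
                if self.children is None:
                    self.points.append(p)
                    if len(self.points) > CAPACITY and self.half > 1e-3:
                        self._split()
                    return
                self._child_for(p).insert(p)

            def _split(self):
                cx, cy, cz = self.center
                h = self.half / 2
                self.children = [Octree((cx + dx * h, cy + dy * h, cz + dz * h), h)
                                 for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
                for q in self.points:
                    self._child_for(q).insert(q)
                self.points = []

            def _child_for(self, p):
                cx, cy, cz = self.center
                index = (p[0] >= cx) * 4 + (p[1] >= cy) * 2 + (p[2] >= cz)
                return self.children[index]

        tree = Octree((0.0, 0.0, 0.0), 10.0)        # cube of half-width 10 around the origin
        for p in [(1, 2, 3), (-4, 0.5, 2), (3, 3, -1)]:
            tree.insert(p)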

    The Lexicon Graph Model : a generic model for multimodal lexicon development

    Get PDF
    Trippel T. The Lexicon Graph Model: a generic model for multimodal lexicon development. Bielefeld (Germany): Bielefeld University; 2006.
    The Lexicon Graph Model presents a model for lexicons that can be corpus-based and contain multimodal information. It adopts the perspective of lexicon theory, considering the underlying data structures of both lexicons and annotations; the latter come into view because they are regarded as the basis for building lexicons. The term lexicon here covers both dictionaries and the lexicon databases integrated into electronic applications. Existing formalisms and approaches to lexicon development reveal various problems with lexicons, such as merging existing lexicons into one, disambiguating ambiguities in the lexicon on different lexical levels, representing other modalities in the lexicon, and selecting the lexical key for lexicon entries. The present approach assumes that lexicons differ in their content but not in their fundamental structure, so that different kinds of lexicons can be combined, free of duplicates, in a unification process; the result is a declarative lexicon. Such lexicons can be modelled as graphs with the Lexicon Graph Model presented here. Lexicon graphs are viewed analogously to the annotation graphs described by Bird and Liberman and can therefore be processed in a similar way. The investigation of the lexicon formalism rests on four steps: existing lexicons are first analysed and described; the Lexicon Graph Model is then introduced as a generic representation of lexicons, which is also implemented and tested; and, based on this formalism, the relation to annotation graphs is established, including a description of the standards that annotations must meet to be usable for lexicon development.
    The Lexicon Graph Model provides a model and framework for lexicons that can be corpus-based and contain multimodal information. The focus is on the lexicon-theory perspective, looking at the underlying data structures that are part of existing lexicons and corpora. The term lexicon is used in different ways in linguistics and artificial intelligence, covering traditional print dictionaries in book form, CD-ROM editions and Web-based versions of the same, but also computerized resources of similar structure to be used by applications, ranging from systems for human-machine communication to spell checkers. In this work, lexicon is used as the most generic term covering all such lexical applications. Existing formalisms in lexicon development exhibit various problems, for example combining different kinds of lexical resources, disambiguation on different lexical levels, and the representation of different modalities in a lexicon. The Lexicon Graph Model presupposes that lexicons can have different structures but are fundamentally similar in structure, making it possible to combine lexicons in a unification process, resulting in a declarative lexicon. The underlying model is a graph, the Lexicon Graph, which is modelled similarly to the Annotation Graphs described by Bird and Liberman. The investigation of the lexicon formalism comprises four steps: the analysis of existing lexicons, the introduction of the Lexicon Graph Model as a generic representation for lexicons, the implementation of the formalism in different contexts, and an evaluation of the formalism. It is shown that Annotation Graphs and Lexicon Graphs are related not only in their formalism, and which standards have to be applied to annotations for them to be usable for lexicon development.
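    To make the graph idea concrete, here is a hypothetical sketch in which lexicon entries are edges between nodes and two small lexicons are unified without duplicating shared edges; the structures and names are invented for illustration and are not the Lexicon Graph formalism itself.

        # Hypothetical sketch of a lexicon-as-graph and duplicate-free unification.
        # Structures are invented for illustration; not the Lexicon Graph formalism itself.
        from collections import defaultdict

        def make_lexicon(edges):
            """A lexicon graph as adjacency sets: node -> set of (relation, node)."""
            graph = defaultdict(set)
            for source, relation, target in edges:
                graph[source].add((relation, target))
            return graph

        def unify(a, b):
            """Merge two lexicon graphs; shared edges appear only once (set union)."""
            merged = defaultdict(set)
            for graph in (a, b):
                for node, edges in graph.items():
                    merged[node] |= edges
            return merged

        lex_print = make_lexicon([("bank", "sense", "financial institution"),
                                  ("bank", "pos", "noun")])
        lex_speech = make_lexicon([("bank", "pronunciation", "/bæŋk/"),
                                   ("bank", "pos", "noun")])          # duplicate edge
        merged = unify(lex_print, lex_speech)
        print(sorted(merged["bank"]))   # three distinct edges; the shared one is not repeated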