
    Bertso transformation with pattern-based sampling

    This paper presents a method to generate new melodies based on conserving the semiotic structure of a template piece. A pattern discovery algorithm is applied to a template piece to extract significant segments: those that are repeated and those that are transposed in the piece. Two strategies are combined to describe the semiotic coherence structure of the template piece: inter-segment coherence and intra-segment coherence. Once the structure is described, it is used as a template for new musical content that is generated using a statistical model created from a corpus of bertso melodies and iteratively improved using a stochastic optimization method. Results show that the method presented here effectively describes the coherence structure of a piece by discovering repetition and transposition relations between segments, and also by representing the relations among notes within the segments. For bertso generation, the method correctly conserves all intra- and inter-segment coherence of the template, and the optimization method produces coherent generated melodies.
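The generate-then-optimize loop the abstract describes can be sketched in a few lines. This is a hypothetical minimal analogue, not the authors' system: the "statistical model" is reduced to a toy bigram weight table, and the semiotic structure to segment labels whose repeats must carry identical content.

```python
import random

# Hypothetical sketch: stochastic hill-climbing of a melody toward higher
# score under a bigram model, while conserving a repetition structure taken
# from a template (equal segment labels must hold identical content).

PITCHES = list(range(60, 72))  # one octave of MIDI note numbers

def bigram_score(melody, model):
    """Sum of bigram weights; `model` maps (a, b) -> weight."""
    return sum(model.get((a, b), 0.0) for a, b in zip(melody, melody[1:]))

def conserve(melody, structure):
    """Copy the first occurrence of each segment label onto its repeats."""
    seen = {}
    out = list(melody)
    seg_len = len(melody) // len(structure)
    for i, label in enumerate(structure):
        span = slice(i * seg_len, (i + 1) * seg_len)
        if label in seen:
            out[span] = seen[label]          # repeated segment: reuse content
        else:
            seen[label] = out[span]
    return out

def optimize(structure, model, length=8, steps=200, seed=0):
    rng = random.Random(seed)
    melody = conserve([rng.choice(PITCHES) for _ in range(length)], structure)
    best = bigram_score(melody, model)
    for _ in range(steps):
        cand = list(melody)
        cand[rng.randrange(length)] = rng.choice(PITCHES)
        cand = conserve(cand, structure)     # re-impose the template structure
        s = bigram_score(cand, model)
        if s > best:                         # greedy accept (stochastic ascent)
            melody, best = cand, s
    return melody

# Toy "corpus" model rewarding small ascending steps.
model = {(a, a + d): 1.0 for a in PITCHES for d in (1, 2) if a + d in PITCHES}
melody = optimize(["A", "B", "A", "B"], model)
assert melody[:2] == melody[4:6]  # inter-segment coherence conserved
```

Any mutation that breaks the template is repaired by `conserve` before scoring, so the optimizer only ever moves within the space of structure-preserving melodies.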

    Automatic Phrase Continuation from Guitar and Bass Guitar Melodies


    Implicit learning of recursive context-free grammars

    Context-free grammars are fundamental for the description of linguistic syntax. However, most artificial grammar learning experiments have explored learning of simpler finite-state grammars, while studies exploring context-free grammars have not assessed awareness and implicitness. This paper explores the implicit learning of context-free grammars employing features of hierarchical organization, recursive embedding and long-distance dependencies. The grammars also featured the distinction between left- and right-branching structures, as well as between centre- and tail-embedding, both distinctions found in natural languages. People acquired unconscious knowledge of relations between grammatical classes even for dependencies over long distances, in ways that went beyond learning simpler relations (e.g. n-grams) between individual words. The structural distinctions drawn from linguistics also proved important, as performance was greater for tail-embedding than for centre-embedding structures. The results suggest the plausibility of implicit learning of complex context-free structures, which model some features of natural languages. They support the relevance of artificial grammar learning for probing mechanisms of language learning and challenge existing theories and computational models of implicit learning.
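The centre- versus tail-embedding contrast the study draws can be made concrete with two tiny recursive generators. This is an illustrative sketch, not the grammars used in the experiments; the class pairs (`a1`/`b1`, etc.) are hypothetical stand-ins for the grammatical classes.

```python
import random

# Centre-embedding nests each new dependency inside the previous one
# (a1 a2 a3 b3 b2 b1), so the first pair spans the whole string; tail-
# embedding (right-branching) closes each pair before the next opens
# (a1 b1 a2 b2 a3 b3), so every dependency is local.

PAIRS = [("a1", "b1"), ("a2", "b2"), ("a3", "b3")]

def centre_embed(depth, rng):
    """Recursive rule S -> a S b: the next level nests between a and b."""
    if depth == 0:
        return []
    a, b = rng.choice(PAIRS)
    return [a] + centre_embed(depth - 1, rng) + [b]

def tail_embed(depth, rng):
    """Recursive rule S -> a b S: each pair is closed immediately."""
    if depth == 0:
        return []
    a, b = rng.choice(PAIRS)
    return [a, b] + tail_embed(depth - 1, rng)

rng = random.Random(1)
c = centre_embed(3, rng)
t = tail_embed(3, rng)
# In `c`, the outermost dependency spans the entire sequence: a
# long-distance dependency that never arises in `t`.
assert c[0][1] == c[-1][1] and t[0][1] == t[1][1]
```

A finite-state grammar cannot generate the centre-embedded set for unbounded depth, which is why this contrast probes genuinely context-free structure.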

    Data-driven, memory-based computational models of human segmentation of musical melody

    When listening to a piece of music, listeners often identify distinct sections or segments within the piece. Music segmentation is recognised as an important process in the abstraction of musical contents, and researchers have attempted to explain how listeners perceive and identify the boundaries of these segments. The present study seeks to develop a system capable of performing melodic segmentation in an unsupervised way, by learning from non-annotated musical data. Probabilistic learning methods have been widely used to acquire regularities in large sets of data, with many successful applications in language and speech processing. Some of these applications have found their counterparts in music research and have been used for music prediction and generation, music retrieval or music analysis, but seldom to model perceptual and cognitive aspects of music listening. We present some preliminary experiments on melodic segmentation, which highlight the importance of memory and the role of learning in music listening. These experiments have motivated the development of a computational model for melodic segmentation based on a probabilistic learning paradigm. The model uses a mixed-memory Markov model to estimate sequence probabilities from pitch- and time-based parametric descriptions of melodic data. We follow the assumption that listeners' perception of feature salience in melodies is strongly related to expectation. Moreover, we conjecture that outstanding entropy variations of certain melodic features coincide with segmentation boundaries as indicated by listeners. Model segmentation predictions are compared with results of a listening study on melodic segmentation carried out with real listeners.
Overall results show that changes in prediction entropy along the pieces exhibit significant correspondence with the listeners' segmentation boundaries. Although the model relies only on information-theoretic principles to predict the location of segmentation boundaries, most predicted segments can be matched with boundaries of groupings usually attributed to Gestalt rules. These results question previous research supporting a separation between learning-based and innate bottom-up processes of melodic grouping, and suggest that some of the latter processes can emerge from acquired regularities in melodic data.
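The core idea, that boundaries fall where prediction entropy peaks, can be sketched with a deliberately simplified model. This is a hedged analogue: the thesis uses a mixed-memory Markov model over several pitch- and time-based features, while the sketch below uses a plain first-order model over pitch alone.

```python
import math
from collections import Counter, defaultdict

# Minimal analogue of entropy-based segmentation: train a first-order
# Markov model over pitches, then predict a boundary wherever the entropy
# of the next-note distribution peaks relative to its neighbours.

def train(melodies):
    counts = defaultdict(Counter)
    for m in melodies:
        for a, b in zip(m, m[1:]):
            counts[a][b] += 1
    return counts

def entropy(counts, context):
    """Shannon entropy (bits) of the next-note distribution after `context`."""
    dist = counts.get(context)
    if not dist:
        return float("inf")  # unseen context: maximal surprise
    total = sum(dist.values())
    return -sum(c / total * math.log2(c / total) for c in dist.values())

def boundaries(melody, counts):
    """Indices where prediction entropy rises above both neighbours."""
    h = [entropy(counts, n) for n in melody[:-1]]
    return [i + 1 for i in range(1, len(h) - 1)
            if h[i] > h[i - 1] and h[i] >= h[i + 1]]

# Toy corpus: 60 and 62 continue deterministically; 64 is ambiguous,
# so entropy spikes after it and a boundary is predicted there.
corpus = [[60, 62, 64, 60, 62, 64, 67], [60, 62, 64, 65]]
counts = train(corpus)
assert boundaries([60, 62, 64, 60, 62, 64], counts) == [3]
```

The predicted boundary after the ambiguous note mirrors the thesis's conjecture that outstanding entropy variations coincide with perceived segment boundaries.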

    Perception and modeling of segment boundaries in popular music


    Content-based retrieval of melodies using artificial neural networks

    Human listeners are capable of spontaneously organizing and remembering a continuous stream of musical notes. A listener automatically segments a melody into phrases, from which an entire melody may be learnt and later recognized. This ability makes human listeners ideal for the task of retrieving melodies by content. This research introduces two neural networks, known as SONNET-MAP and ReTREEve, which attempt to model this behaviour. SONNET-MAP functions as a melody segmenter, whereas ReTREEve is specialized towards content-based retrieval (CBR). Typically, CBR systems represent melodies as strings of symbols drawn from a finite alphabet, thereby reducing the retrieval process to the task of approximate string matching. SONNET-MAP and ReTREEve, which are derived from Nigrin’s SONNET architecture, offer a novel approach to these traditional systems, and indeed to CBR in general. Based on melodic grouping cues, SONNET-MAP segments a melody into phrases. Parallel SONNET modules form independent, sub-symbolic representations of the pitch and rhythm dimensions of each phrase. These representations are then bound using associative maps, forming a two-dimensional representation of each phrase. This organizational scheme enables SONNET-MAP to segment melodies into phrases using both the pitch and rhythm features of each melody. The boundary points formed by these melodic phrase segments are then used to populate the ReTREEve network. ReTREEve is organized in the same parallel fashion as SONNET-MAP; in addition, melodic phrases are aggregated by a further layer, forming a two-dimensional, hierarchical memory structure of each entire melody. Melody retrieval is accomplished by matching input queries, whether perfect (for example, a fragment from the original melody) or imperfect (for example, a fragment derived from humming), against learned phrases and phrase-sequence templates.
Using a sample of fifty melodies composed by The Beatles, results show that the use of both pitch and rhythm during the retrieval process significantly improves retrieval results over networks that use only pitch or only rhythm. Additionally, queries that are aligned along phrase boundaries are retrieved using significantly fewer notes than those that are not, indicating the importance of a human-based approach to melody segmentation. Moreover, depending on query degradation, different melodic features prove more adept at retrieval than others. The experiments presented in this thesis represent the largest empirical test of SONNET-based networks performed to date. As far as we are aware, the combined SONNET-MAP and ReTREEve networks constitute the first self-organizing CBR system capable of automatic segmentation and retrieval of melodies using various features of pitch and rhythm.
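The "traditional" baseline the thesis contrasts itself with, melodies as symbol strings matched by approximate string matching, is easy to sketch. This is an illustrative baseline, not the SONNET-based system; the melody names and library are invented for the example.

```python
# Hedged sketch of string-based CBR: encode each melody as its sequence
# of pitch intervals (so transposed queries still match) and retrieve the
# library melody at minimal edit distance from the query.

def intervals(pitches):
    """Transposition-invariant encoding: successive semitone differences."""
    return tuple(b - a for a, b in zip(pitches, pitches[1:]))

def edit_distance(s, t):
    """Classic Levenshtein dynamic programme over two sequences."""
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[-1] + 1,            # insertion
                           prev[j - 1] + (a != b)))  # substitution/match
        prev = cur
    return prev[-1]

def retrieve(query, library):
    """Name of the library melody closest to the query by edit distance."""
    q = intervals(query)
    return min(library, key=lambda name: edit_distance(q, intervals(library[name])))

library = {"asc": [60, 62, 64, 65, 67], "desc": [67, 65, 64, 62, 60]}
# A transposed, slightly degraded fragment still finds the ascending melody.
assert retrieve([72, 74, 76, 78], library) == "asc"
```

Against this baseline, the thesis's contribution is that segmentation into human-like phrases, and combining pitch with rhythm, lets retrieval succeed with fewer query notes.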

    Melody generator: A device for algorithmic music construction

    This article describes the development of an application for generating tonal melodies. The goal of the project is to ascertain our current understanding of tonal music by means of algorithmic music generation. The method followed consists of four stages: 1) selection of music-theoretical insights, 2) translation of these insights into a set of principles, 3) conversion of the principles into a computational model having the form of an algorithm for music generation, 4) testing the “music” generated by the algorithm to evaluate the adequacy of the model. As an example, the method is implemented in Melody Generator, an algorithm for generating tonal melodies. The program has a structure suited for generating, displaying, playing and storing melodies, functions which are all accessible via a dedicated interface. The actual generation of melodies is based in part on constraints imposed by the tonal context, i.e. by meter and key, the settings of which are controlled by means of parameters on the interface. For the other part, it is based upon a set of construction principles, including the notion of a hierarchical organization and the idea that melodies consist of a skeleton that may be elaborated in various ways. After these aspects were implemented as specific sub-algorithms, the device produces simple but well-structured tonal melodies.
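The skeleton-plus-elaboration idea can be illustrated with a small two-stage generator. This is a hedged sketch of the general principle, not Melody Generator's sub-algorithms; the choice of C major, the tonic triad, and one passing tone per interval are assumptions made for the example.

```python
import random

# Stage 1: a skeleton of chord tones on strong beats, constrained by the
# tonal context (key and an ending on the tonic). Stage 2: elaboration,
# filling between skeleton tones with stepwise scale passing notes.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI pitches, one octave
TRIAD = [60, 64, 67, 72]                     # tonic chord tones

def skeleton(bars, rng):
    """One chord tone per strong beat, ending on the tonic."""
    return [rng.choice(TRIAD) for _ in range(bars - 1)] + [60]

def elaborate(skel, rng):
    """Insert at most one scale passing tone between consecutive skeleton tones."""
    out = []
    for a, b in zip(skel, skel[1:]):
        out.append(a)
        between = [p for p in C_MAJOR if min(a, b) < p < max(a, b)]
        if between:
            out.append(rng.choice(between))  # passing tone inside the interval
    out.append(skel[-1])
    return out

rng = random.Random(3)
mel = elaborate(skeleton(4, rng), rng)
assert mel[-1] == 60 and all(p in C_MAJOR for p in mel)
```

Because the tonal constraints live in stage 1 and the construction principles in stage 2, each can be swapped out independently, which mirrors the article's point that the model is assembled from separately testable sub-algorithms.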

    Infants' perception of sound patterns in oral language play


    Perception of structure in auditory patterns

    The present research utilised five tasks to investigate non-musicians' perception of phrase, rhythm, pitch and beat structure in unaccompanied Gaelic melodies and musical sequences. Perception of phrase structure was examined using: i) a segmentation task in which listeners segmented Gaelic melodies into a series of meaningful units, and ii) a novel click localisation task whereby listeners indicated where they perceived a superimposed click in the melody had occurred. Listeners consistently segmented the melodies into units of 2.4 - 5.4 seconds. Clicks which were positioned before and after perceived boundaries (identified by segmentation) were perceptually migrated towards the boundary. These results suggest that listeners perceptually differentiate between phrasal groups in melodies (see Sloboda & Gregory, 1980; Stoffer, 1985, for similar results with musicians). Short-term memory for rhythmic structure was examined using rhythm recall of computer-generated sequences and Gaelic melodies. Computer-generated rhythms with small tonal pitch intervals (1 - 4 semitones) were easier to recall than those with large atonal intervals (predominantly greater than 4 semitones). Recall of Gaelic melodies, containing repetitive rhythmic units, was better than recall of computer sequences. Pitch reversal of Gaelic melodies did not affect recall. Beat-tapping with three Gaelic melodies revealed that the majority of listeners established the underlying beat 1.5 - 3 seconds (5 - 6 notes) after the start of the melodies. Perception of meaning and content in two-note melodic intervals and three Gaelic melodies was examined using an adjective-pair two-alternative forced-choice task. Responses to musical intervals showed evidence of perceptual similarity based mainly on interval size. Perceived information content in the melodies increased significantly by the fourth note.
The results suggest that the amounts of Gaelic melody which are: i) required to establish an underlying beat, ii) remembered after one hearing, and iii) perceptually grouped into a meaningful unit, each include the unit of melody that is necessary to establish a basic meaning.