A constraint-based framework to model harmony for algorithmic composition
Music constraint systems provide a rule-based approach to composition. Existing systems allow users to constrain the harmony, but the constrainable harmonic information is restricted to pitches and intervals between pitches. More abstract analytical information, such as chord or scale types, their roots, scale degrees, enharmonic note representations, or whether a note is the third or fifth of a chord, is not supported. However, such information is important for modelling various music theories.
This research proposes a framework for modelling harmony at a high level of abstraction. It explicitly represents various kinds of analytical information to allow for complex theories of harmony, and it is designed for efficient propagation-based constraint solvers. The framework supports the common 12-tone equal temperament as well as arbitrary other equal temperaments. Users develop harmony models by applying user-defined constraints to the framework's music representation.
Three examples demonstrate the expressive power of the framework: (1) automatic melody harmonisation with a simple harmony model; (2) a more complex model implementing large parts of Schoenberg’s tonal theory of harmony; and (3) a composition in extended tonality. Schoenberg’s comprehensive theory of harmony has not been computationally modelled before, neither with constraint programming nor in any other way.
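As a concrete, drastically simplified illustration of the kind of rule-based harmonisation the abstract describes, the Python sketch below assigns a triad to each melody note by backtracking search. The representation and the two rules are invented for this example; they are not the framework's actual API.

```python
# Toy constraint-based melody harmonisation (illustration only, not the
# framework's API): assign a triad to each melody note such that the note
# is a chord tone and adjacent chords share at least one common tone.

TRIADS = {"maj": (0, 4, 7), "min": (0, 3, 7)}  # semitone intervals above root

def chord_tones(root, quality):
    return {(root + i) % 12 for i in TRIADS[quality]}

def harmonise(melody, assignment=None):
    """Backtracking search over (root, quality) pairs, one per melody note."""
    assignment = assignment or []
    if len(assignment) == len(melody):
        return assignment
    for root in range(12):
        for quality in TRIADS:
            tones = chord_tones(root, quality)
            # Rule 1: the melody pitch class must belong to the chord.
            if melody[len(assignment)] % 12 not in tones:
                continue
            # Rule 2: adjacent chords must share at least one tone.
            if assignment and not tones & chord_tones(*assignment[-1]):
                continue
            result = harmonise(melody, assignment + [(root, quality)])
            if result:
                return result
    return None

# Harmonise the pitch classes C, E, G, A.
print(harmonise([0, 4, 7, 9]))
```

A real system of the kind described would of course express such rules declaratively over a far richer representation (scale degrees, enharmonics, chord functions) and delegate search to a propagation-based solver.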
Diagnosing interoperability problems and debugging models by enhancing constraint satisfaction with case-based reasoning
Modeling, Diagnosis, and Model Debugging are the three main areas presented in this dissertation to automate the process of Interoperability Testing of networking protocols. The dissertation proposes a framework that uses the Constraint Satisfaction Problem (CSP) paradigm to define a modeling language and problem solving mechanism for interoperability testing, and uses Case-Based Reasoning (CBR) for debugging interoperability test cases.
The dissertation makes three primary contributions: (1) Definition of a new modeling language using CSP and Object-Oriented Programming. This language is simple, declarative, and transparent, and it provides a tool for testers to implement models of interoperability test cases. The dissertation introduces the notions of metavariables, metavalues, and optional metavariables to improve the capabilities of the modeling language. It proposes modeling test cases from the test suite specifications that testers usually rely on when performing interoperability testing manually. Test suite specifications are written by organizations or individuals and break the testing down into modules of test cases, which makes diagnosis of problems more meaningful to testers. (2) Diagnosis of interoperability problems using search supplemented by consistency inference methods in a CSP context to support explanations of the problem-solving behavior. These methods are adapted to the OO-based CSP context. Testers can then generate reports for individual test cases and for test groups from a test suite specification. (3) Detection and debugging of incompleteness and incorrectness in CSP models of interoperability test cases. This is done through the integration of two modes of reasoning, namely CBR and CSP. CBR manages cases that store information about updating models, as well as cases related to interoperability problems where diagnosis fails to generate a useful explanation; for the latter, CBR recalls similar useful explanations from previous cases.
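The following toy sketch gives a flavour of contribution (2): a test case modeled as a small CSP, with a brute-force solver that, when the test fails, reports the constraint that rules out every candidate assignment. All variable names, domains, and constraints are invented for illustration; this is a minimal stand-in, not the dissertation's modeling language.

```python
# Illustrative only: an interoperability test case as a tiny CSP, with a
# diagnosis step that blames the constraint rejecting each candidate.
from itertools import product

variables = {
    "client_version": ["1.0", "1.1"],
    "server_version": ["2.0"],          # hypothetical mismatch scenario
    "cipher":         ["aes", "des"],
}

constraints = [
    ("versions interoperate",
     lambda a: a["client_version"] == a["server_version"]),
    ("server 2.0 requires aes",
     lambda a: a["server_version"] != "2.0" or a["cipher"] == "aes"),
]

def diagnose():
    names, domains = zip(*variables.items())
    blame = {label: 0 for label, _ in constraints}
    for values in product(*domains):
        assignment = dict(zip(names, values))
        failed = [label for label, check in constraints if not check(assignment)]
        if not failed:
            return f"PASS: {assignment}"
        blame[failed[0]] += 1  # first constraint that rejects this candidate
    return f"FAIL, most restrictive constraint: {max(blame, key=blame.get)}"

print(diagnose())  # no version pair matches, so the version constraint is blamed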
Twenty years of coordination technologies: State-of-the-art and perspectives
Since the complexity of inter- and intra-system interactions is steadily increasing in modern application scenarios (e.g., the IoT), coordination technologies are required to take a crucial step towards maturity. In this paper we look back at the history of the COORDINATION conference in order to shed light on the current status of the coordination technologies proposed there throughout the years, in an attempt to understand success stories and limitations, and possibly to reveal the gap between actual technologies, theoretical models, and novel application needs.
Consistency Modeling in a Multi-Model Architecture: Integrate and Celebrate Diversity
Central to Model-Driven Engineering (MDE) is seeing models as objects that can be handled and organized into metamodel stacks and multi-model architectures. This work contributes a distinctive approach to consistency modeling in which the involved models are explicitly organized in a multi-model architecture: a general model for creating multi-model architectures that allows semantics to be attached is defined and applied, and the explicit attachment of semantics is demonstrated by attaching Java classes that implement different instantiation semantics in order to realize the consistency modeling and the automatic generation of consistency data.
The kind of consistency addressed concerns relations between data residing in legacy databases defined by different schemas. The consistency modeling is meant to expose inconsistencies by relating the data, and it combines visual modeling and logic (OCL) in a practical way. The approach is not limited to exposing inconsistencies; it may also be used to derive more general information from one or more data sets.
The consistency is modeled by defining a consistency model that relates elements of two given legacy models. The consistency model is expressed in a language specially designed for consistency modeling, which allows the definition of classes, associations, and invariants expressed in OCL. The interpretation of the language is special: given one conforming data set for each of the legacy models, the consistency model can be automatically instantiated to consistency data that tells whether the data sets are consistent. The invariants decide which instances to generate when building the consistency data; the amount of consistency data to create is finite and limited by the given data sets. The consistency model is instantiated until no more elements can be added without breaking some invariant or multiplicity. The resulting consistency data is presented as a model which the user can investigate.
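A rough Python analogue of this instantiation idea, with made-up legacy data and a plain predicate standing in for an OCL invariant: link instances are generated only where the invariant holds, and records left unlinked expose the inconsistencies.

```python
# Hedged sketch of "consistency data" generation (the real approach uses a
# visual modeling language with OCL invariants; this is a toy analogue).

legacy_a = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Alan"}]
legacy_b = [{"ssn": "x", "name": "Ada"}, {"ssn": "y", "name": "Grace"}]

# Invariant (OCL-like: self.a.name = self.b.name): linked records agree on name.
def invariant(a, b):
    return a["name"] == b["name"]

# Instantiate every link that does not break the invariant.
links = [(a["id"], b["ssn"]) for a in legacy_a for b in legacy_b if invariant(a, b)]

linked_a = {aid for aid, _ in links}
linked_b = {ssn for _, ssn in links}
print("links:", links)
print("unmatched in A:", [a for a in legacy_a if a["id"] not in linked_a])
print("unmatched in B:", [b for b in legacy_b if b["ssn"] not in linked_b])
```

As in the abstract, the generated consistency data is finite because it is bounded by the given data sets, and inspecting it reveals which records are mutually consistent.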
Automatic melody generation with constraint programming (Génération automatique de mélodie par la programmation par contraintes)
Constraint programming is a form of declarative programming, a paradigm naturally suited to musical problems. Musical composition can be seen as a declarative process during which the composer works to create music that respects the general rules of the art and the more specific criteria of the chosen style, while adding constraints of his own. The connection between this exercise and solving a constraint satisfaction problem is therefore instinctive. The main difficulty lies in modeling the problem: a musical piece has many dimensions with many interactions among them, and it is virtually impossible for a computer-based system to represent all of these dependencies precisely. Constraint systems designed for musical problems therefore focus on specific dimensions. One such problem is melody generation, which concerns the pitches and durations of the notes of a melodic line accompanied by a chord sequence. Modeling this problem focuses on a single sequence of notes, with no polyphony or instrumentation, which simplifies the situation. The goal of this project is to design a system that automatically generates a melody over a given chord sequence, using information from a corpus to guide composition. Two of the main challenges of this kind of problem are the arrangement of the variables and the control of the global structure of the generated melody. For the first, we hypothesized that a hierarchically structured system offers the most flexibility and therefore makes constraints easier to express. For the structure of the result, we designed an algorithm based on suffix trees that detects repeating patterns, allowing the system to replicate the structural elements of an existing melody. Our system consists of hierarchically organized blocks: the melody is made of bars, which contain chords, under which the notes are located. Each block has a variable number of notes that must be fixed first in order to instantiate the corresponding variables, so the system works in two phases. The first phase assigns a rhythm pattern to every bar, which determines both the number of notes and their durations; the second phase fixes the pitch of every note of the melody.
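A minimal Python sketch of the two-phase pipeline described above, with invented rhythm patterns and chord-tone tables, and with the suffix-tree structure replication omitted:

```python
# Toy two-phase melody generator: phase 1 picks a rhythm pattern per bar
# (fixing the number of notes and their durations), phase 2 picks each
# note's pitch, constrained to the tones of the bar's chord.
import random

RHYTHM_PATTERNS = [(1.0, 1.0, 1.0, 1.0), (2.0, 1.0, 1.0), (2.0, 2.0)]  # beats
CHORD_TONES = {"C": [60, 64, 67], "G": [55, 59, 62], "Am": [57, 60, 64]}  # MIDI

def generate(chords, seed=0):
    random.seed(seed)
    melody = []
    for chord in chords:                               # one bar per chord
        durations = random.choice(RHYTHM_PATTERNS)     # phase 1: rhythm
        for d in durations:                            # phase 2: pitches
            pitch = random.choice(CHORD_TONES[chord])  # chord-tone constraint
            melody.append((pitch, d))
    return melody

print(generate(["C", "Am", "G", "C"]))  # list of (pitch, duration) pairs
```

The actual system replaces the random choices with constraint solving guided by corpus statistics and by the repeating patterns detected with suffix trees.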
Diagnostic process monitoring with temporally uncertain models
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (leaves 67-68). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. This thesis develops a real-time trend detection and monitoring system based on previous work by Haimowitz, Le, and DeSouza [3, 5, 2]. The monitor they designed, TrenDx, used trend templates in which the temporal points where data patterns change are variable with respect to the actual process data. This thesis uses similar models to construct a monitoring system that is able to run in real time on a continuous, linearly segmented process data input stream. The instantiation of temporally significant template points against the process data is determined through a simulated annealing algorithm. The ranking of competing hypotheses in the monitor set is based on the distance of these template points from their expected temporal values, along with the area between the process data measurements and the value constraints placed on those parameters. The feasibility of the real-time monitor was evaluated in the domain of pediatric growth, particularly in comparison to previous versions of TrenDx, using an expert gold standard of diagnoses by pediatric endocrinologists. Real-time TrenDx shows promise in its monitoring abilities and should be evaluated in other domains better suited to its continuous data stream input model. by Steven M. Bull. M.Eng.
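A toy Python reconstruction of the scoring-and-search idea (not TrenDx itself): a two-segment trend template with one movable change point is scored by its distance from the expected time plus how far the observed slopes stray outside each segment's bounds, and simulated annealing searches for the best placement. The data, bounds, and cooling schedule are all invented.

```python
# Toy trend-template matching: place one change point on piecewise-linear
# data by simulated annealing, scoring temporal distance plus constraint
# violations (a scalar stand-in for the "area" term in the thesis).
import math, random

data = [(t, 1.0 * t if t < 6 else 6.0 + 0.1 * (t - 6)) for t in range(12)]
EXPECTED_CHANGE, BOUNDS = 5, [(0.5, 1.5), (-0.1, 0.3)]  # slope bounds per segment

def score(change_point):
    penalty = abs(change_point - EXPECTED_CHANGE)       # temporal distance term
    for (t0, y0), (t1, y1) in zip(data, data[1:]):
        lo, hi = BOUNDS[0] if t1 <= change_point else BOUNDS[1]
        slope = (y1 - y0) / (t1 - t0)
        penalty += max(0.0, lo - slope, slope - hi)     # out-of-bounds amount
    return penalty

def anneal(steps=200, temp=2.0):
    cp = random.randrange(1, 11)
    for _ in range(steps):
        cand = min(10, max(1, cp + random.choice((-1, 1))))
        delta = score(cand) - score(cp)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            cp = cand
        temp *= 0.98                                    # geometric cooling
    return cp, score(cp)

random.seed(1)
print(anneal())  # should settle near the true slope change around t = 5
```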
Run-time Variability with Roles
Adaptability is an intrinsic property of software systems that must adapt to cope with dynamically changing environments. Achieving adaptability is challenging. Variability is a key solution, as it enables a software system to change its behavior to match a specific need. Variability is managed in terms of variants, dynamic parts that are composed into the base system. Run-time variability realizes these variant compositions dynamically at run time to enable adaptation. Adaptation relying on variants specified at build time is called anticipated adaptation; it allows the system behavior to change with respect to a set of predefined execution environments, which implies an inability to solve practical problems in which the execution environment is not completely fixed and is often unknown until run time. Enabling unanticipated adaptation, which allows variants to be dynamically added at run time, alleviates this inability, but it brings several threats to system stability, such as inconsistency and run-time failures. Adaptation should be performed only when a system has reached a consistent state; inconsistency arises when adaptation changes the system's state and behavior while a series of method invocations is still in progress. A software bug is another source of system instability: it often resides in a variant composition and is brought into the system during adaptation. The problem is even more critical for unanticipated adaptation, as the system has no prior knowledge of the new variants.
This dissertation aims to achieve both anticipated and unanticipated adaptation while also addressing the inconsistency and software failures that may result from run-time adaptation. Roles encapsulate dynamic behavior used to adapt players representing the base system, which is the rationale for selecting roles as the software system's variants. Based on the role concept, this dissertation presents three mechanisms to comprehensively address adaptation. First, a dynamic instance binding mechanism loosely binds players and roles; dynamic binding of roles enables anticipated and unanticipated adaptation. Second, an object-level tranquility mechanism avoids inconsistency by allowing a player object to adapt only when it has reached a consistent state. Last, a rollback recovery mechanism proactively embraces and handles failures resulting from a defective composition of variants: a checkpoint of the system configuration is created before adaptation, and if a specialized bug sensor detects a failure, the system rolls back to the most recent checkpoint. These mechanisms are integrated into a role-based runtime called LyRT.
LyRT was validated with three case studies to demonstrate its practical feasibility. The validation showed that LyRT is more advanced than existing variability approaches with respect to adaptation, owing to its consistency control and failure handling. In addition, several benchmarks were set up to quantify the overhead of LyRT in terms of the execution time of adaptation. The results revealed that the overhead introduced to achieve anticipated and unanticipated adaptation is small enough for practical use, making LyRT suitable for adaptive software systems that frequently require the adaptation of large sets of objects.
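A minimal Python sketch of the three mechanisms described in this abstract; the class and method names are invented for illustration and do not reflect LyRT's actual runtime interface.

```python
# Toy role-based adaptation: dynamic binding of roles to a player, a
# tranquility check before adapting, and checkpoint-based rollback when a
# defective role is detected during composition.
import copy

class Player:
    def __init__(self):
        self.roles, self.busy = [], False   # busy ~ not in a tranquil state

    def bind(self, role):
        """Dynamic instance binding: attach a role only at a tranquil state."""
        if self.busy:
            raise RuntimeError("adaptation deferred: object not tranquil")
        checkpoint = copy.deepcopy(self.roles)  # rollback point
        self.roles.append(role)
        try:
            role.self_test()                # stand-in for the "bug sensor"
        except Exception:
            self.roles = checkpoint         # roll back the defective variant
            raise

    def call(self, method, *args):
        # The most recently bound role offering the method handles the call.
        for role in reversed(self.roles):
            if hasattr(role, method):
                return getattr(role, method)(*args)
        raise AttributeError(method)

class GreeterRole:
    def self_test(self): pass
    def greet(self, name): return f"hello, {name}"

p = Player()
p.bind(GreeterRole())                       # unanticipated: bound at run time
print(p.call("greet", "world"))
```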
Composable Probabilistic Inference with Blaise
Probabilistic inference provides a unified, systematic framework for specifying and solving problems of finding the patterns that underlie incomplete, noisy, and ambiguous data. Recent work has demonstrated the great value of probabilistic models defined over complex, structured domains. However, our ability to imagine probabilistic models has far outstripped our ability to programmatically manipulate them and to effectively implement inference, limiting the complexity of the problems that we can solve in practice. This thesis presents Blaise, a novel framework for composable probabilistic modeling and inference, designed to address these limitations. Blaise has three components: * The Blaise State-Density-Kernel (SDK) graphical modeling language, which generalizes factor graphs by (1) explicitly representing inference algorithms (and their locality) using a new type of graph node, (2) representing hierarchical composition and repeated substructures in the state space, the interest distribution, and the inference procedure, and (3) permitting the structure of the model to change during algorithm execution. * A suite of SDK graph transformations that may be used to extend a model (e.g., to construct a mixture model from a model of a mixture component) or to make inference more effective (e.g., by automatically constructing a parallel tempered version of an algorithm or by exploiting conjugacy in a model). * The Blaise Virtual Machine, a runtime environment that can efficiently execute the stochastic automata represented by Blaise SDK graphs. Blaise encourages the construction of sophisticated models by composing simpler models, allowing the designer to implement and verify small portions of the model and inference method, and to reuse model components from one task to another. Blaise decouples the implementation of the inference algorithm from the specification of the interest distribution, even in cases (such as Gibbs sampling) where the shape of the interest distribution guides the inference. This gives modelers the freedom to explore alternate models without slow, error-prone reimplementation. The compositional nature of Blaise enables novel reinterpretations of advanced Monte Carlo inference techniques (such as parallel tempering) as simple transformations of Blaise SDK graphs. In this thesis, I describe each of the components of the Blaise modeling framework and validate the framework by highlighting a variety of contemporary sophisticated models that have been developed by the Blaise user community. I also present several surprising findings stemming from the Blaise modeling framework, including that an Infinite Relational Model can be built using exactly the same inference methods as a simple mixture model, that constructing a parallel tempered inference algorithm should be a point-and-click/one-line-of-code operation, and that Markov chain Monte Carlo for probabilistic models with complicated long-distance dependencies, such as a stochastic version of Scheme, can be managed using standard Blaise mechanisms.
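The State-Density-Kernel decomposition can be loosely mimicked in ordinary Python. In the sketch below, whose names and structure are mine rather than Blaise's, states are dictionaries, a density scores a state, kernels are stochastic state-to-state maps, and an inference procedure is built by composing kernels, mirroring how SDK graphs compose inference.

```python
# Rough analogue of State-Density-Kernel composition (illustration only):
# a Metropolis-Hastings kernel over one variable, and a cycle combinator
# that chains kernels into a composite sampler.
import math, random

def density(state):                     # unnormalised N(0, 1) over state["x"]
    return math.exp(-0.5 * state["x"] ** 2)

def mh_kernel(var, step=0.5):
    """A 'kernel' node: one Metropolis-Hastings move on one variable."""
    def kernel(state):
        proposal = dict(state)
        proposal[var] = state[var] + random.gauss(0, step)
        if random.random() < density(proposal) / density(state):
            return proposal
        return state
    return kernel

def cycle(*kernels):                    # composition: apply kernels in turn
    def composed(state):
        for k in kernels:
            state = k(state)
        return state
    return composed

random.seed(0)
state = {"x": 3.0}
sampler = cycle(mh_kernel("x"), mh_kernel("x", step=2.0))
for _ in range(1000):
    state = sampler(state)
print(state)                            # x should wander near 0 under N(0, 1)
```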
Composable probabilistic inference with BLAISE
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 185-190). If we are to understand human-level cognition, we must understand how the mind finds the patterns that underlie the incomplete, noisy, and ambiguous data from our senses and that allow us to generalize our experiences to new situations. A wide variety of commercial applications face similar issues: industries from health services to business intelligence to oil field exploration critically depend on their ability to find patterns in vast amounts of data and to use those patterns to make accurate predictions. Probabilistic inference provides a unified, systematic framework for specifying and solving these problems. Recent work has demonstrated the great value of probabilistic models defined over complex, structured domains. However, our ability to imagine probabilistic models has far outstripped our ability to programmatically manipulate them and to effectively implement inference, limiting the complexity of the problems that we can solve in practice. This thesis presents BLAISE, a novel framework for composable probabilistic modeling and inference, designed to address these limitations. BLAISE has three components: * The BLAISE State-Density-Kernel (SDK) graphical modeling language, which generalizes factor graphs by (1) explicitly representing inference algorithms (and their locality) using a new type of graph node, (2) representing hierarchical composition and repeated substructures in the state space, the interest distribution, and the inference procedure, and (3) permitting the structure of the model to change during algorithm execution. * A suite of SDK graph transformations that may be used to extend a model (e.g., to construct a mixture model from a model of a mixture component) or to make inference more effective (e.g., by automatically constructing a parallel tempered version of an algorithm or by exploiting conjugacy in a model). * The BLAISE Virtual Machine, a runtime environment that can efficiently execute the stochastic automata represented by BLAISE SDK graphs. BLAISE encourages the construction of sophisticated models by composing simpler models, allowing the designer to implement and verify small portions of the model and inference method, and to reuse model components from one task to another. BLAISE decouples the implementation of the inference algorithm from the specification of the interest distribution, even in cases (such as Gibbs sampling) where the shape of the interest distribution guides the inference. This gives modelers the freedom to explore alternate models without slow, error-prone reimplementation. The compositional nature of BLAISE enables novel reinterpretations of advanced Monte Carlo inference techniques (such as parallel tempering) as simple transformations of BLAISE SDK graphs. In this thesis, I describe each of the components of the BLAISE modeling framework and validate the framework by highlighting a variety of contemporary sophisticated models that have been developed by the BLAISE user community. I also present several surprising findings stemming from the BLAISE modeling framework, including that an Infinite Relational Model can be built using exactly the same inference methods as a simple mixture model, that constructing a parallel tempered inference algorithm should be a point-and-click/one-line-of-code operation, and that Markov chain Monte Carlo for probabilistic models with complicated long-distance dependencies, such as a stochastic version of Scheme, can be managed using standard BLAISE mechanisms. by Keith Allen Bonawitz. Ph.D.