    Encoding Polyphony from Medieval Manuscripts Notated in Mensural Notation

    This panel submission for the 2021 Music Encoding Conference brings together five short papers that focus on making computer-readable encodings of polyphony in the notational style – mensural notation – in which it was originally copied. Mensural notation was used in the medieval West to encode polyphony from the late thirteenth to the sixteenth century. The Measuring Polyphony (MP) Online Editor, funded by an NEH Digital Humanities Advancement Grant, is a software application that enables non-technical users to make Humdrum and MEI encodings of mensural notation, and links these encodings to digital images of the manuscripts in which the compositions were first notated. Topics explored by the authors include: the processes of, and the goals informing, the linking of manuscript images to music encodings; the choices and compromises made in developing the MP Editor to facilitate its rapid deployment; and the implications of capturing dual encodings – a parts-based encoding that reflects the layout of the original source, and a score-based encoding. Having two encodings of the music data is useful for a variety of activities, including performance and analysis, but also within the editorial process and for sharing data with other applications. The authors present two case studies that document the possibilities and potential of interchanging music data between the MP Editor and other applications, specifically MuRET, an optical music recognition (OMR) tool, and Humdrum analysis tools.
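
    To illustrate the dual-encoding idea, here is a minimal sketch of merging a parts-based layout, where each voice is stored separately as in the source manuscript, into a score-based ordering by onset time. The data model is hypothetical; the MP Editor actually produces Humdrum and MEI encodings, not Python objects.

```python
# Hypothetical data model for illustration only; not the MP Editor's format.
from dataclasses import dataclass

@dataclass
class Note:
    pitch: str       # e.g. "c4"
    duration: float  # in semibreves

def parts_to_score(parts):
    """Merge per-part note lists into a single score ordering by onset time."""
    events = []
    for voice, notes in enumerate(parts):
        onset = 0.0
        for note in notes:
            events.append((onset, voice, note))
            onset += note.duration
    return sorted(events, key=lambda e: (e[0], e[1]))

# Two short voices in parts-based layout (invented music)
cantus = [Note("a4", 1.0), Note("c5", 1.0), Note("b4", 2.0)]
tenor = [Note("d3", 2.0), Note("f3", 1.0), Note("e3", 1.0)]

for onset, voice, note in parts_to_score([cantus, tenor]):
    print(f"t={onset:.1f}  voice {voice}: {note.pitch} ({note.duration})")
```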

    Assessing interpretability of visual symbols of varied colors across demographic profiles

    Visual symbols are often ambiguous. An icon is meant to convey a particular meaning, but viewers may interpret the image differently. This thesis shows how a viewer's demographic background and an icon's color can affect the interpretation of a symbol. A website survey, featuring a library of icons, asked users of varied demographic profiles to interpret each figure, presented randomly in one of five colors: black, blue, red, green, and orange. The qualitative text data from the participants' interpretations were compared with the quantitative icon and demographic data by means of multinomial logit analysis. The experiment found numerous noteworthy correlations, showing that the color of an icon and a person's background can have a significant and often predictable influence on interpretation. Icon designers can use this approach to determine which icon would best serve a given purpose.
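
    As a hedged sketch of the analysis method named above, the code below fits a multinomial logistic regression (multinomial logit) with scikit-learn. All column names, icons, colors, and response categories are invented placeholders, not the survey's actual data.

```python
# Toy multinomial logit over icon, color, and demographic predictors.
# All data below is fabricated for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({
    "icon":  ["bell", "bell", "gear", "gear", "bell", "gear"] * 10,
    "color": ["red", "blue", "green", "red", "black", "orange"] * 10,
    "age":   ["18-29", "30-49", "50+", "18-29", "30-49", "50+"] * 10,
    "interpretation": ["alarm", "notification", "settings",
                       "warning", "notification", "settings"] * 10,
})

# One-hot encode the categorical predictors
X = OneHotEncoder(sparse_output=False).fit_transform(df[["icon", "color", "age"]])
y = df["interpretation"]

model = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, y)
print(model.predict(X[:3]))  # predicted interpretations for the first three rows
```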

    Configurable nD-visualization for complex Building Information Models

    With the ongoing development of building information modelling (BIM) towards comprehensive coverage of all construction project information in a semantically explicit way, visual representations have become decoupled from the building information models. While traditional construction drawings implicitly contained the visual representation alongside the information, today representations are generated on the fly, hard-coded in software applications dedicated to other tasks such as analysis, simulation, structural design, or communication. Given the abstract nature of information models and the increasing amount of digital information captured during construction projects, visual representations are essential for humans to access, understand, and engage with the information. At the same time, digital media open up the new field of interactive visualizations. The full potential of BIM can only be unlocked with customized, task-specific visualizations, with engineers and architects actively involved in the design and development of these visualizations. The visualizations must be reusable and reliably reproducible during communication processes, and, to support creative problem solving, it must be possible to modify and refine them. This thesis aims at reconnecting building information models and their visual representations: on a theoretical level, on the level of methods, and in terms of tool support. First, the research seeks to improve knowledge about visualization generation in conjunction with current BIM developments such as the multimodel. The approach is based on the reference model of the visualization pipeline and addresses structural as well as quantitative aspects of visualization generation. Second, based on this theoretical foundation, a method is derived to construct visual representations from given visualization specifications; to this end, the idea of a domain-specific language (DSL) is employed. Finally, a software prototype proves the concept: using the visualization framework, visual representations can be generated from a specific building information model and a specific visualization description.
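
    To make the DSL idea concrete, here is a deliberately tiny sketch of a declarative visualization description applied to building model elements, in the spirit of the filter and mapping stages of the visualization pipeline. Entity fields, the rule format, and the color mapping are invented for this illustration and do not reproduce the thesis's actual language.

```python
# Toy "visualization description": declarative rules mapping building model
# elements to visual properties. Entity types and fields are invented.

elements = [
    {"type": "Wall",   "fire_rating": 90},
    {"type": "Door",   "fire_rating": 30},
    {"type": "Window", "fire_rating": 0},
]

# Each rule is a filter (select) plus a mapping to a visual property (color)
viz_spec = [
    {"select": lambda e: e["fire_rating"] >= 90, "color": "red"},
    {"select": lambda e: e["fire_rating"] > 0,   "color": "orange"},
    {"select": lambda e: True,                   "color": "grey"},
]

def apply_spec(elements, spec):
    """Apply the first matching rule to each element, like a filter/mapping pipeline."""
    styled = []
    for e in elements:
        for rule in spec:
            if rule["select"](e):
                styled.append({**e, "color": rule["color"]})
                break
    return styled

for e in apply_spec(elements, viz_spec):
    print(e["type"], "->", e["color"])
```

    Because the description is data rather than rendering logic hard-coded in an application, it can be stored, shared, and re-applied, which is the reusability and reproducibility property the thesis argues for.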

    A comparative study of West Slope pottery productions in the Hellenistic world


    Induction and interaction in the evolution of language and conceptual structure

    Languages evolve in response to various pressures, and this thesis adopts the view that two pressures are especially important. Firstly, the process of learning a language functions as a pressure for greater simplicity, due to a domain-general cognitive preference for simple structure. Secondly, the process of using a language in communicative scenarios functions as a pressure for greater informativeness, because ultimately languages are only useful to the extent that they allow their users to express – or indeed represent – nuanced meaning distinctions. These two fundamental properties of language – simplicity and informativeness – are often, but not always, in conflict with each other. In general, a simple language cannot be informative and an informative language cannot be simple, resulting in the simplicity–informativeness tradeoff. Typological studies in several domains, including colour, kinship, and spatial relations, have demonstrated that languages find optimal solutions to this tradeoff – balancing, on the one hand, the need for simplicity and, on the other, the need for informativeness. More specifically, the thesis explores how inductive reasoning and communicative interaction contribute to simple and informative structure respectively, with a particular emphasis on how a continuous space of meanings, such as the colour spectrum, may be divided into discrete labelled categories. The thesis first describes information-theoretic perspectives on learning and communication and highlights the fact that one of the hallmark features of conceptual structure – which I term compactness – is not subject to the simplicity–informativeness tradeoff, since it confers advantages on both learning and use. This means it is unclear whether compact structure derives from a learning pressure or from a communicative pressure. To complicate matters further, some researchers view learning as a pressure for simplicity, as outlined above, while others have argued that learning might function as a pressure for informativeness, in the sense that learners might have an a priori expectation that languages ought to be informative. The thesis attempts to resolve this by formalizing the different perspectives in a model of an idealized Bayesian learner, which is used to make specific predictions about how these perspectives will play out during individual concept induction and during the evolution of conceptual structure over time. Experimental testing of these predictions reveals overwhelming support for the simplicity account: learners have a preference for simplicity, and over generational time this preference becomes amplified, ultimately resulting in maximally simple, but nevertheless compact, conceptual structure. This emergent compact structure remains limited, however, because it only permits the expression of a small number of meaning distinctions – the emergent systems become degenerate. This issue is addressed in the second part of the thesis, which compares the outcomes of three experiments. The first replicates the finding above: compact categorical structure emerges from learning. The second and third experiments compare artificial and genuine pressures for expressivity, and show that it is only in the presence of a live communicative task that higher-level structure – a kind of statistical compositionality – can emerge. Working together, the low-level compact categorical structure derived from learning and the high-level compositional structure derived from communicative interaction provide a solution to the simplicity–informativeness tradeoff, expanding on and lending support to various claims in the literature.
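
    A minimal sketch may help clarify the contrast between the two learner perspectives. The hypothesis space, priors, and likelihood below are toy choices (a hypothesis is simply a number of categories carving up the meaning space), not the thesis's actual Bayesian model.

```python
# Toy idealized Bayesian learner: simplicity prior vs. informativeness prior.
import numpy as np

n_meanings = 10
hypotheses = list(range(1, n_meanings + 1))  # hypothesis k = number of categories

def log_prior_simplicity(k):
    return -k * np.log(2)          # ~2^(-k): fewer categories are a priori likelier

def log_prior_informativeness(k):
    return k * np.log(5)           # reversed, deliberately strong toy weighting

def log_likelihood(k, data):
    if max(data) >= k:
        return -np.inf             # hypothesis cannot generate the observed labels
    return -len(data) * np.log(k)  # each label drawn from k equiprobable categories

data = [0, 1, 0, 2, 1]             # observed category labels
for name, log_prior in [("simplicity", log_prior_simplicity),
                        ("informativeness", log_prior_informativeness)]:
    scores = np.array([log_prior(k) + log_likelihood(k, data) for k in hypotheses])
    best = hypotheses[int(np.argmax(scores))]
    print(f"{name} prior -> MAP number of categories: {best}")
```

    Under the simplicity prior the learner settles on the smallest category system consistent with the data, while the informativeness prior drives it toward the most expressive one, mirroring the two accounts the experiments adjudicate between.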

    The Manifold of Neural Responses Informs Physiological Circuits in the Visual System

    The rapid development of multi-electrode and imaging techniques is leading to a data explosion in neuroscience, opening the possibility of truly understanding the organization and functionality of our visual systems. Furthermore, the need for more natural visual stimuli greatly increases the complexity of the data. Together, these create a challenge for machine learning. Our goal in this thesis is to develop one such technique. The central pillar of our contribution is designing a manifold of neurons, and providing an algorithmic approach to inferring it. This manifold is functional, in the sense that nearby neurons on the manifold respond similarly (in time) to similar aspects of the stimulus ensemble. By organizing the neurons, our manifold differs from the standard manifolds used in visual neuroscience, which instead organize the stimuli. Our contributions to the machine learning component of the thesis are twofold. First, we develop a tensor representation of the data, adopting a multilinear view of potential circuitry. Tensor factorization then provides an intermediate representation between the neural data and the manifold. We found that the rank of the neural factor matrix can be used to select an appropriate number of tensor factors. Second, to apply manifold learning techniques, a similarity kernel on the data must be defined. Like many others, we employ a Gaussian kernel, but refine it based on a proposed graph sparsification technique, which makes the resulting manifolds less sensitive to the choice of bandwidth parameter. We apply this method to neuroscience data recorded from the retina and primary visual cortex of the mouse. For the algorithm to work, however, the underlying circuitry must be exercised to as full an extent as possible. To this end, we develop an ensemble of flow stimuli, which simulate what the mouse would 'see' running through a field. Applying the algorithm to the retina reveals that neurons form clusters corresponding to known retinal ganglion cell types. In the cortex, a continuous manifold is found, indicating that, from a functional circuit point of view, there may be a continuum of cortical function types. Interestingly, both manifolds share similar global coordinates, which hint at what the key ingredients of vision might be. Lastly, we turn to perhaps the most widely used model of the cortex: deep convolutional networks. Their feedforward architecture leads to manifolds that are even more clustered than the retina's, and not at all like that of the cortex. This suggests, perhaps, that they may not suffice as general models for artificial intelligence.
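
    The kernel-plus-sparsification step lends itself to a short sketch. The code below builds a Gaussian affinity over synthetic "neural responses", keeps only each neuron's k nearest neighbors, and embeds the neurons with Laplacian eigenvectors. The data, k, and bandwidth are placeholders, and the thesis's actual sparsification and tensor-factorization steps are not reproduced here.

```python
# Sketch: Gaussian kernel on neurons' response vectors, kNN sparsification,
# then a Laplacian-eigenvector embedding of the neurons (not the stimuli).
import numpy as np

rng = np.random.default_rng(0)
responses = rng.normal(size=(60, 200))   # 60 neurons x 200 time points (synthetic)

# Pairwise squared distances and Gaussian affinity
d2 = np.sum((responses[:, None, :] - responses[None, :, :]) ** 2, axis=-1)
sigma = np.sqrt(np.median(d2))           # crude bandwidth heuristic
W = np.exp(-d2 / (2 * sigma ** 2))

# Graph sparsification: keep each neuron's k nearest neighbors, then symmetrize
k = 8
mask = np.zeros_like(W, dtype=bool)
idx = np.argsort(d2, axis=1)[:, 1:k + 1]  # skip column 0 (self)
rows = np.arange(W.shape[0])[:, None]
mask[rows, idx] = True
W = np.where(mask | mask.T, W, 0.0)

# Embed neurons using the two smallest nontrivial graph Laplacian eigenvectors
D = np.diag(W.sum(axis=1))
L = D - W
eigvals, eigvecs = np.linalg.eigh(L)
embedding = eigvecs[:, 1:3]              # coordinates of neurons on the manifold
print(embedding.shape)                   # (60, 2)
```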

    Connecting Mathematics and Mathematics Education

    This open access book features a selection of articles written by Erich Ch. Wittmann between 1984 and 2019, showing how the “design science conception” has been continuously developed over a number of decades. The articles not only describe this conception in general terms, but also demonstrate various substantial learning environments that serve as typical examples. In terms of teacher education, the book provides clear information on how to combine (well-understood) mathematics and methods courses to the benefit of teachers. The role of mathematics in mathematics education is often explicitly and implicitly reduced to the delivery of subject matter that then has to be selected and made palpable for students using methods imported from psychology, sociology, educational research, and related disciplines. While these fields have made significant contributions to mathematics education in recent decades, it cannot be ignored that mathematics itself, if well understood, provides essential knowledge for teaching mathematics beyond the pure delivery of subject matter. For this purpose, mathematics has to be conceived of as an organism that is deeply rooted in elementary operations of the human mind and that can be seamlessly developed to higher and higher levels, so that the full richness of problems of various degrees of difficulty, different means of representation, problem-solving strategies, and forms of proof can be used in ways appropriate to the respective level. This view of mathematics is essential for designing learning environments and curricula, for conducting empirical studies on truly mathematical processes, and also for implementing the findings of mathematics education in teacher education, where it is crucial to take systemic constraints into account.

    Development and use of bioanalytical instrumentation and signal analysis methods for rapid sampling microdialysis monitoring of neuro-intensive care patients

    This thesis focuses on the development and use of analysis tools to monitor brain injury patients. For this purpose, an online amperometric analyzer of cerebral microdialysis samples for glucose and lactate has been developed and optimized within the Boutelle group. The initial aim of this thesis was to significantly improve the signal-to-noise ratio and limit of detection of the assay to allow reliable quantification of the analytical data. The first approach was to re-design the electronic instrumentation of the assay. Printed-circuit boards were fabricated and proved to be very low-noise, stable, and much smaller than the previous potentiostats. The second approach was to develop generic data-processing algorithms to remove three complex types of noise that commonly contaminate analytical signals: spikes, non-stationary ripples, and baseline drift. The general strategy consisted of identifying the types of noise, characterising them, and subsequently subtracting them from the otherwise unprocessed data set. Spikes were removed with a 96.8% success rate, and ripples were removed with minimal distortion of the signal, increasing the signal-to-noise ratio by up to 250%. This allowed reliable quantification of traces from ten patients monitored with the online microdialysis assay. Ninety-six spontaneous metabolic events in response to spreading depolarizations were resolved. These were characterized by a fall in glucose of 32.0 ÎŒM and a rise in lactate of 23.1 ÎŒM (median values) over a 20-minute period. With frequently repeating events, this led to a progressive depletion of brain glucose. Finally, to improve the temporal coupling between the metabolic data and the electro-cortical signals, a flow cell was engineered to integrate a potassium-selective electrode into the microdialysate flow stream. With good stability over hours of continuous use and a 90% response time of 65 seconds, this flow cell was used for preliminary in vivo experiments at the Max Planck Institute in Cologne.
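
    In the spirit of the despiking and drift-removal steps described above, the following sketch flags and replaces spikes using a running median and removes baseline drift by subtracting a fitted linear trend. The synthetic signal, thresholds, and window sizes are arbitrary placeholders rather than the thesis's actual algorithm parameters.

```python
# Toy despiking and baseline-drift removal for a slowly drifting noisy trace.
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(1)
t = np.arange(0, 600, 0.5)                           # 10 min sampled at 2 Hz
signal = 50 + 0.01 * t + rng.normal(0, 0.5, t.size)  # drifting baseline + noise
signal[rng.choice(t.size, 10)] += 20                 # inject artificial spikes

# Spike removal: replace samples that deviate strongly from a running median
med = medfilt(signal, kernel_size=11)
spikes = np.abs(signal - med) > 5 * np.std(signal - med)
despiked = np.where(spikes, med, signal)

# Baseline drift removal: subtract a slow (here linear) trend
coeffs = np.polyfit(t, despiked, deg=1)
detrended = despiked - np.polyval(coeffs, t)

print(f"replaced {spikes.sum()} spike samples")
```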
    • 

    corecore