14 research outputs found

    Supplementary materials for "A Model for Data-Driven Sonification Using Soundscapes," a poster presented at the ACM Intelligent User Interfaces Conference in 2015.

    We provide:
    1. An example "input" soundscape, exemplifying an existing soundscape that can be provided by a user.
    2. A clip of the "output" soundscape that was rendered for a dataset, using the above input soundscape and the mapping policy.
    3. A supplementary text describing the features of the input soundscape, sound groups, sound samples, and Twitter data, as well as the details of the mapping policy used to generate this output soundscape. This text also provides attribution for the sound files used in the sonification.

    End-user Development of Sonifications using Soundscapes

    Designing sonifications requires knowledge in many domains, including sound design, sonification design, and programming. Thus end users typically do not create sonifications on their own, but instead work with sonification experts to iteratively co-design their systems. However, once a sonification system is deployed, there is little a user can do to make adjustments. In this work, we present an approach for sonification system design that puts end users in control of the design process by allowing them to interactively generate, explore, and refine sonification designs. Our approach allows a user to start creating sonifications simply by providing an example soundscape (i.e., an example of what they might want their sonification to sound like) and an example dataset illustrating properties of the data they would like to sonify. The user is then provided with the ability to employ automated or semi-automated design of mappings from features of the data to soundscape controls. To make this possible, we describe formal models for soundscape, data, and sonification, and an optimization-based method for creating sonifications that is informed by design principles outlined in past auditory display research.
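    The abstract above describes optimization over mappings from data features to soundscape controls. The following toy sketch illustrates the general idea only: it exhaustively scores assignments of features to controls and keeps the best one. The feature and control names, ranges, and the scoring heuristic are all invented for illustration and are not the paper's actual method or design principles.

```python
import itertools

# Hypothetical data features and soundscape controls, each with a value
# range. All names and numbers here are illustrative, not from the paper.
features = {"tweet_rate": (0.0, 120.0), "sentiment": (-1.0, 1.0)}
controls = {"bird_density": (0.0, 1.0), "wind_gain": (0.0, 1.0)}

def score(feature_range, control_range):
    # Toy heuristic: prefer pairings whose normalized spans are comparable.
    f_span = feature_range[1] - feature_range[0]
    c_span = control_range[1] - control_range[0]
    return -abs(f_span / (f_span + 1.0) - c_span / (c_span + 1.0))

def best_mapping(features, controls):
    # Exhaustive search over one-to-one feature-to-control assignments.
    best, best_score = None, float("-inf")
    for perm in itertools.permutations(controls, len(features)):
        mapping = dict(zip(features, perm))
        total = sum(score(features[f], controls[c]) for f, c in mapping.items())
        if total > best_score:
            best, best_score = mapping, total
    return best

mapping = best_mapping(features, controls)
print(mapping)  # each data feature paired with one soundscape control
```

    A real system would replace the scoring function with criteria derived from auditory display design principles, but the search-and-score structure is the same.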

    A Model for Data-Driven Sonification Using Soundscapes

    A sonification is a rendering of audio in response to data, and is used in instances where visual representations of data are impossible, difficult, or unwanted. Designing sonifications often requires knowledge in multiple areas as well as an understanding of how the end users will use the system. This makes it an ideal candidate for end-user development, where the user plays a role in the creation of the design. We present a model for sonification that utilizes user-specified examples and data to generate cross-domain mappings from data to sound. As a novel contribution, we utilize soundscapes (acoustic scenes) for these user-selected examples to define a structure for the sonification. We demonstrate a proof of concept of our model using sound examples and discuss how we plan to build on this work in the future.
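    Once a data feature has been mapped to a soundscape control, the rendering step amounts to rescaling each incoming data value into the control's range. The sketch below shows the simplest possible version of that step, linear rescaling with clamping; the feature and control names are invented for illustration and are not taken from the paper.

```python
def rescale(value, src_lo, src_hi, dst_lo, dst_hi):
    """Linearly map value from [src_lo, src_hi] into [dst_lo, dst_hi]."""
    if src_hi == src_lo:
        return dst_lo
    t = (value - src_lo) / (src_hi - src_lo)
    t = min(max(t, 0.0), 1.0)  # clamp out-of-range data values
    return dst_lo + t * (dst_hi - dst_lo)

# Hypothetical example: a stream of tweet counts per minute driving a
# 0..1 "bird density" control in the output soundscape.
tweet_counts = [0, 30, 60, 120]
bird_density = [rescale(v, 0, 120, 0.0, 1.0) for v in tweet_counts]
print(bird_density)  # [0.0, 0.25, 0.5, 1.0]
```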

    Genetic effects on gene expression across human tissues

    Characterization of the molecular function of the human genome and its variation across individuals is essential for identifying the cellular mechanisms that underlie human genetic traits and diseases. The Genotype-Tissue Expression (GTEx) project aims to characterize variation in gene expression levels across individuals and diverse tissues of the human body, many of which are not easily accessible. Here we describe genetic effects on gene expression levels across 44 human tissues. We find that local genetic variation affects gene expression levels for the majority of genes, and we further identify inter-chromosomal genetic effects for 93 genes and 112 loci. On the basis of the identified genetic effects, we characterize patterns of tissue specificity, compare local and distal effects, and evaluate the functional properties of the genetic effects. We also demonstrate that multi-tissue, multi-individual data can be used to identify genes and pathways affected by human disease-associated variation, enabling a mechanistic interpretation of gene regulation and the genetic basis of disease.

    End-user development of sonifications using soundscapes

    Presented at the 21st International Conference on Auditory Display (ICAD2015), July 6-10, 2015, Graz, Styria, Austria. Designing sonifications requires knowledge in many domains, including sound design, sonification design, and programming. Thus end users typically do not create sonifications on their own, but instead work with sonification experts to iteratively co-design their systems. However, once a sonification system is deployed, there is little a user can do to make adjustments. In this work, we present an approach for sonification system design that puts end users in control of the design process by allowing them to interactively generate, explore, and refine sonification designs. Our approach allows a user to start creating sonifications simply by providing an example soundscape (i.e., an example of what they might want their sonification to sound like) and an example dataset illustrating properties of the data they would like to sonify. The user is then provided with the ability to employ automated or semi-automated design of mappings from features of the data to soundscape controls. To make this possible, we describe formal models for soundscape, data, and sonification, and an optimization-based method for creating sonifications that is informed by design principles outlined in past auditory display research.