8 research outputs found

    What is lost in translation from visual graphics to text for accessibility

    Many blind and low-vision individuals are unable to access digital media visually. Currently, the solution to this accessibility problem is to produce text descriptions of visual graphics, which are then translated via text-to-speech screen reader technology. However, if a text description can accurately convey the meaning intended by an author of a visualization, then why did the author create the visualization in the first place? This essay critically examines this problem by comparing the so-called graphic–linguistic distinction to similar distinctions between the properties of sound and speech. It also presents a provisional model for identifying visual properties of graphics that are not conveyed via text-to-speech translations, with the goal of informing the design of more effective sonic translations of visual graphics.

    Soundsense: Sonifying pyroelectric sensor data for an interactive media event

    Presented at the 11th International Conference on Auditory Display (ICAD2005). Collaborations between artists, engineers, and scientists often occur when creating new media works. These interdisciplinary efforts must overcome the ideals and practical limitations inherent in both artistic and research pursuits. In turn, successful projects may truly be greater than the sum of their parts, enabling each collaborator to gain insight into their own work. soundSense, a cooperative effort between engineers, composers, and other specialists, sonifies pyroelectric sensor data to create a novel interactive-media event. Signals generated by multiplexing pyroelectric detectors inform data-driven audio and visual displays articulating, in real time, the presence and motion of individuals within the sensed space.
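    The multiplexed-detector pipeline this abstract describes can be sketched roughly as follows; the function names, frame layout, and noise floor are illustrative assumptions, not details from the paper.

    ```python
    # Illustrative sketch: demultiplex a combined pyroelectric signal into
    # per-sensor channels, then map each channel's activity to an audio
    # amplitude for a data-driven display.

    def demultiplex(frame, n_sensors):
        """Split a time-multiplexed sample frame into per-sensor channels."""
        return [frame[i::n_sensors] for i in range(n_sensors)]

    def activity_to_amplitude(channel, noise_floor=0.2):
        """Map mean absolute sensor activity above a noise floor to a 0..1 gain."""
        mean = sum(abs(s) for s in channel) / len(channel)
        return min(1.0, max(0.0, (mean - noise_floor) / (1.0 - noise_floor)))
    ```

    In a real system each amplitude would drive a synthesis voice, so motion near a given detector becomes audible in its corresponding channel.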

    The past, present, and promise of sonification

    The use of sound to systematically communicate data has been with us for a long time, and has received considerable research attention, albeit across a broad range of distinct fields of inquiry. Sonification is uniquely capable of conveying series and patterns, trends and outliers, and it effortlessly carries affect and emotion related to those data. Sound, either by itself or in conjunction with visual, tactile, or even olfactory representations, can make data exploration more compelling and more accessible to a broader range of individuals. Nevertheless, sonification and auditory displays still occupy only a sliver of popular mindshare: most people have never thought about using non-speech sound in this manner, even though they are certainly very familiar with other intentional uses of sound to convey status, notifications, and warnings. This article provides a brief history of sonification, introduces terms, quickly surveys a range of examples, and discusses the past, present, and as-yet unrealized future promise of using sound to expand the way we can communicate about data, broaden the use of auditory displays in society, and make science more engaging and more accessible.

    Creating functional and livable soundscapes for peripheral monitoring of dynamic data

    Presented at the 10th International Conference on Auditory Display (ICAD2004). Sonifications must be studied in order to match listener expectancies about how data are represented in sound. In this study, a system was designed and implemented for dynamically rendering sonifications of simulated real-time data from the stock market. The system read and parsed the stock data, then operated unit generators and mixers through a predefined sound mapping to create a 'soundscape' of complementary ecological sounds. The sound mapping consisted of a threshold-based model in which a percentage change in price value was mapped to an ecological sound to be played whenever that threshold or gradient had been reached. The system also provided a generic mechanism for fading and transitioning between gradients. The prototype system was presented to stock-trader test subjects in their work-listening environment for evaluation as a stand-alone system and in comparison to their preferred tools.
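    The threshold-based mapping described above can be sketched minimally as below; the specific thresholds and ecological sound names are illustrative assumptions, not values from the study.

    ```python
    # Minimal sketch of a threshold-based sound mapping: a percentage price
    # change selects the ecological sound for the highest gradient reached.
    # ECO_SOUNDS thresholds and names are hypothetical.
    from typing import Optional

    ECO_SOUNDS = {0.01: "birdsong", 0.03: "rainfall", 0.05: "thunder"}

    def sound_for_change(prev_price: float, new_price: float) -> Optional[str]:
        """Return the sound for the highest threshold the change has crossed, if any."""
        if prev_price == 0:
            return None
        change = abs(new_price - prev_price) / prev_price
        crossed = [t for t in sorted(ECO_SOUNDS) if change >= t]
        return ECO_SOUNDS[crossed[-1]] if crossed else None
    ```

    A full system would also cross-fade when the active gradient changes, as the abstract's fading mechanism suggests.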

    Listening back

    Listening Back is a practice-based research project that develops a critical mode of sonic inquiry into a technique of contemporary Web surveillance: the cookie. Following creative sonification practices, cookie data is sonified as a strategy for interrupting the visual surface of the browser interface and sonically drawing attention to backend data capture. Theoretical scholarship from surveillance studies proposes that visual panopticism has been largely superseded by automated technologies of humanly incomprehensible data collection. Scholars such as Mark Andrejevic have observed how the operations of algorithmic surveillance have become post-representational. Listening Back addresses the post-representational character of Web surveillance by asking: how can artists critically render an online experience of continuous and ubiquitous surveillance? During this PhD research, I have created the Listening Back browser add-on, which sonifies Internet cookies in real time. The add-on has been enacted in live performance, installation, and personal computer usage. As a sounding Web-based arts practice, it deploys artistic approaches to browser add-ons and creative data sonification that I and others have developed within networked and sounding art fields over the last two decades. Artists such as Adriana Knouf, Allison Burtch and Michael Mandiberg have addressed the opacity and normalisation of the Web browser by creating artistic browser add-ons. These ethico-aesthetic strategies of awareness adopt Web protocols and data-mining techniques to re-navigate and expose ordinarily obscured data logics, and repurpose the browser as a site for artistic practice. In addition to repurposing and exposing hidden cookie data, sonification aims to situate an embodied listening within the real-time dynamics of Web surveillance and facilitate an engagement across critical analysis and sensing modes of online surveillance.
    Providing the opportunity to listen back creates a human-level connection to real-time data capture: an aesthetic sounding strategy for making the online capture of surveillant data tangible. Listening Back, as practice-based research, contributes a new artistic strategy to creative browser add-on practices by engaging an embodied listening experience that deploys the time-based and experiential aspects of sound. Listening Back also uses creative sonification to situate online listening as an activity that occurs at the intersection of network infrastructure, the Web browser, and personal computing.
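    As a rough illustration of how a cookie-set event might be rendered audible (this is not the Listening Back implementation; the mapping, names, and ranges are invented for the sketch):

    ```python
    # Hypothetical mapping from a cookie-set event to sound parameters:
    # the setting domain picks a pitch, the cookie's size sets the duration,
    # so heavier tracking becomes audibly longer.

    def cookie_to_tone(domain, cookie_size_bytes):
        """Derive a pitch from the domain and a duration from cookie size."""
        freq_hz = 200 + (hash(domain) % 800)          # stable per domain within a run
        duration_s = min(2.0, 0.1 + cookie_size_bytes / 1024.0)
        return {"freq_hz": freq_hz, "duration_s": duration_s}
    ```

    In a browser add-on, such a function would be triggered by a cookie-change listener so each backend capture event sounds as it happens.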

    Ambientes sonoros interativos e imersivos (Interactive and immersive sound environments)

    The use of interactive sound spatialisation techniques to create virtual sound environments makes it possible to reach a resolution superior to that of visual virtual environments. With these tools, interactive and immersive applications such as audio games can be developed. Because they do not depend on a visual component and stimulate the user's creativity, experiences of this kind can achieve a higher degree of immersion than traditional games. This research analysed the main methods of interactive sound spatialisation and the phenomenon of immersion, in order to gather the knowledge needed to design an interactive and immersive sound installation. The audio game The Sound of Horror was developed and tested: a first-person shooter of the survival-horror subgenre, controlled through an interface resembling a large firearm that allows organic gestural control of multiple functions, letting the user interact with dynamic sound targets in the form of monstrous, frightening creatures.

    Sonification of exosolar planetary systems

    The purpose of this research is to investigate sonification techniques suitable for astronomers to explore exosolar planetary data. Four studies were conducted, one with sonification specialists and three with exosolar planetary astronomers. The first study established existing practices in sonification design and obtained detailed information about design processes not fully communicated in published papers. The other studies involved designing and evaluating sonifications for three different fields of exosolar astronomy: one sonified atmospheric data of an exoplanet in a habitable zone; another sonified accretion discs located in newly developing exosolar systems; the third sonified planet detection in an asteroid belt. User-centred design was used so that mappings of the datasets would be easily comprehensible. Each sonification was designed to sound like the natural elements represented in the data. Spatial separation between overlapping datasets can make hidden information more noticeable and provide additional dimensionality for sound objects. It may also give a more realistic interpretation of the data object in a real-world capacity. Multiple psychoacoustic mappings can convey data dimensionality and immediate recognition of subtle changes. Sound-design aesthetics that mimic natural sounds were more relatable for the user. Sonification has been effective within the context of these studies, offering new insight by unmasking previously unnoticed data particulars. It has also given the astronomers a broader understanding of the dimensions of the data objects they study and their temporal-spatial behaviours. Future work pertains to the further development of a sonification model covering different aspects of exosolar astronomy, which could be developed for a platform that houses different data related to this field of study.
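    The idea of mapping one data value onto multiple psychoacoustic parameters at once, with spatial separation between datasets, can be sketched as follows; the frequency range, gain floor, and pan convention are assumptions for illustration, not values from the thesis.

    ```python
    # Hypothetical multi-parameter psychoacoustic mapping: one normalised
    # data value drives pitch and loudness together, while a fixed pan
    # position spatially separates this dataset from overlapping ones.
    import math

    def map_point(value, vmin, vmax, pan_position):
        """Map a data value to pitch (Hz), gain (0..1) and stereo pan (-1..1)."""
        t = (value - vmin) / (vmax - vmin)           # normalise to 0..1
        freq = 220.0 * math.pow(2.0, 2.0 * t)       # two octaves above A3
        gain = 0.3 + 0.7 * t                        # floor keeps quiet values audible
        return {"freq": freq, "gain": gain, "pan": pan_position}
    ```

    Assigning each overlapping dataset its own `pan_position` is one simple way to realise the spatial separation the abstract credits with unmasking hidden information.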