14 research outputs found

    TextFlow: Screenless Access to Non-Visual Smart Messaging

    Texting relies on screen-centric prompts designed for sighted users, which still pose significant barriers to people who are blind and visually impaired (BVI). Can we re-imagine texting untethered from a visual display? In an interview study, 20 BVI adults shared the situations surrounding their texting practices, recurrent topics of conversation, and challenges. Informed by these insights, we introduce TextFlow: a mixed-initiative, context-aware system that generates entirely auditory message options relevant to the user's location, activity, and time of day. Users can browse and select suggested aural messages using finger taps supported by an off-the-shelf finger-worn device, without having to hold or attend to a mobile screen. In an evaluative study, 10 BVI participants successfully interacted with TextFlow to browse and send messages in screen-free mode. The experiential responses of the users shed light on the importance of bypassing the phone and accessing rapidly controllable messages at their fingertips while preserving privacy and accuracy relative to speech- or screen-based input. We discuss how non-visual access to proactive, contextual messaging can support blind people in a variety of daily scenarios.
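
    To make the idea concrete, the following is a minimal sketch (not the authors' implementation; the message templates, tags, and names are hypothetical) of how auditory message options might be ranked against the user's current context before being read out one tap at a time.

        # Minimal illustrative sketch: rank canned messages by overlap between their
        # context tags and the user's current context; a tap handler would then read
        # the top suggestions aloud and send the selected one.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Context:
            location: str   # e.g. "bus_stop"
            activity: str   # e.g. "commuting"
            hour: int       # 0-23

        # Hypothetical message templates tagged with the contexts they suit.
        MESSAGES = [
            ("On my way, be there soon.",      {"commuting", "bus_stop"}),
            ("Running late, sorry!",           {"commuting", "daytime"}),
            ("Just got home, call you later.", {"home", "evening"}),
        ]

        def suggest(ctx: Context, k: int = 3) -> List[str]:
            """Return the k messages whose tags best match the current context."""
            tags = {ctx.location, ctx.activity, "evening" if ctx.hour >= 18 else "daytime"}
            ranked = sorted(MESSAGES, key=lambda m: len(m[1] & tags), reverse=True)
            return [text for text, _ in ranked[:k]]

        for option in suggest(Context("bus_stop", "commuting", 8)):
            print(option)   # in TextFlow these would be spoken, not printed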

    The Domestication of Voice-Activated Technology & EavesMining: Surveillance, Privacy and Gender Relations at Home

    This thesis develops a case study analysis of the Amazon Echo, the first-ever voice-activated smart speaker. The domestication of the device's feminine conversational agent, Alexa, and the integration of its microphone and digital sensor technology in home environments represents a moment of radical change in the domestic sphere. This development is interpreted according to two primary force relations: historical gender patterns of domestic servitude, and eavesmining (eavesdropping + datamining) processes of knowledge extraction and analysis. The thesis is framed around three pillars of study that together demonstrate: how routinization with voice-activated technology affects acoustic space and one's experiences of home; how online warm experts initiate a dialogue about the domestication of technology that disregards and ignores Amazon's corporate privacy framework; and finally, how the technology's conditions of use silently result in the deployment of ever-intensifying surveillance mechanisms in home environments. Eavesmining processes are beginning to construct a new world of media and surveillance where every spoken word can potentially be heard and recorded, and speaking is inseparable from identification.

    Supporting Voice-Based Natural Language Interactions for Information Seeking Tasks of Various Complexity

    Natural language interfaces have seen a steady increase in popularity over the past decade, leading to the ubiquity of digital assistants. Such digital assistants include voice-activated assistants, such as Amazon's Alexa, as well as text-based chatbots that can substitute for a human assistant in business settings (e.g., call centers, retail/banking websites) and at home. The main advantages of such systems are their ease of use and, in the case of voice-activated systems, hands-free interaction. The majority of tasks undertaken by users of these commercially available voice-based digital assistants are simple in nature, where the responses of the agent are often determined using a rules-based approach. However, such systems have the potential to support users in completing more complex and involved tasks. In this dissertation, I describe experiments investigating user behaviours when interacting with natural language systems and how improvements in the design of such systems can benefit the user experience. Currently available commercial systems tend to be designed to mimic the superficial characteristics of human-to-human conversation. However, interaction with a digital assistant differs significantly from interaction between two people, partly due to limitations of the underlying technology such as automatic speech recognition and natural language understanding. As computing technology evolves, it may make interactions with digital assistants resemble those between humans. The first part of this thesis explores how users will perceive systems that are capable of human-level interaction, how users will behave while communicating with such systems, and the new opportunities that may be opened by that behaviour. Even in the absence of technology that allows digital assistants to perform at a human level, the digital assistants that are widely adopted by people around the world are found to be beneficial for a number of use cases. The second part of this thesis describes user studies aimed at enhancing the functionality of digital assistants using the existing level of technology. In particular, chapter 6 focuses on expanding the amount of information a digital assistant is able to deliver using a voice-only channel, and chapter 7 explores how expanded capabilities of voice-based digital assistants would benefit people with visual impairments. The experiments presented throughout this dissertation produce a set of design guidelines for existing as well as potential future digital assistants. Experiments described in chapters 4, 6, and 7 focus on supporting the task of finding information online, while chapter 5 considers the case of guiding a user through a culinary recipe. The design recommendations provided by this thesis can be generalised into four categories: how naturally a user can communicate their thoughts to the system, how understandable the system's responses are to the user, how flexible the system's parameters are, and how diverse the information delivered by the system is.
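
    As a point of reference for the "rules-based approach" mentioned above, here is a minimal, hypothetical sketch of how commercial assistants often resolve simple requests: the utterance is matched against hand-written patterns and answered with a canned response, which is precisely what limits such systems to simple tasks.

        # Minimal sketch of a rules-based assistant: pattern -> canned response.
        # Anything outside the hand-written rules simply falls through.
        import re

        RULES = [
            (re.compile(r"\bweather\b", re.I), "Expect clear skies for the rest of the day."),
            (re.compile(r"\bset (?:a )?timer\b", re.I), "Timer set for ten minutes."),
            (re.compile(r"\bplay (.+)", re.I), "Playing {0}."),
        ]

        def respond(utterance: str) -> str:
            for pattern, template in RULES:
                match = pattern.search(utterance)
                if match:
                    return template.format(*match.groups())
            return "Sorry, I can't help with that yet."

        print(respond("Please play some quiet jazz"))  # -> Playing some quiet jazz.
        print(respond("How far away is the moon?"))    # falls through to the apology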

    Accessible Autonomy: Exploring Inclusive Autonomous Vehicle Design and Interaction for People who are Blind and Visually Impaired

    Autonomous vehicles are poised to revolutionize independent travel for millions of people worldwide who experience transportation-limiting visual impairments. However, the current trajectory of automotive technology is rife with roadblocks to accessible interaction and inclusion for this demographic. Inaccessible (visually dependent) interfaces and a lack of information access throughout the trip are surmountable, yet nevertheless critical, barriers to this potentially life-changing technology. To address these challenges, the programmatic dissertation research presented here includes ten studies, three published papers, and three submitted papers in high-impact outlets that together address accessibility across the complete trip of transportation. The first paper began with a thorough review of the fully autonomous vehicle (FAV) and blind and visually impaired (BVI) literature, as well as the underlying policy landscape. Results guided the study of pre-journey ridesharing needs among BVI users, which were addressed in paper two via a survey with (n=90) transit service drivers, interviews with (n=12) BVI users, and prototype design evaluations with (n=6) users, all contributing to the Autonomous Vehicle Assistant: an award-winning and accessible ridesharing app. A subsequent study with (n=12) users, presented in paper three, focused on pre-journey mapping to provide critical information access in future FAVs. Accessible in-vehicle interactions were explored in the fourth paper through a survey with (n=187) BVI users. Results prioritized nonvisual information about the trip and indicated the importance of situational awareness. This effort informed the design and evaluation of an ultrasonic haptic HMI intended to promote situational awareness with (n=14) participants (paper five), leading to a novel gestural-audio interface with (n=23) users (paper six). Strong support from users across these studies suggested positive outcomes in pursuit of actionable situational awareness and control. Cumulative results from this dissertation research program represent, to our knowledge, the single most comprehensive approach to FAV accessibility for BVI users to date. By considering both pre-journey and in-vehicle accessibility, the results pave the way for autonomous driving experiences that enable meaningful interaction for BVI users across the complete trip of transportation. This new mode of accessible travel is predicted to transform independent travel for millions of people with visual impairment, leading to increased independence, mobility, and quality of life.

    THE CHIME: POETICALLY TRANSLATING THE DISCRETE DIFFERENCE OF AGNOSTIC SENSORS INTO A SONIFICATION OF THE CITY

    Inspired by Georg Simmel’s notion of the blasé and Mark Weiser’s vision for calm technology, this document, detailing the application of critical concepts to the realization of a design intention, is a critical and creative exploration of computation and the everyday. While paying particular attention to the conceptual underappreciation of acoustic space and place, I outline a case for poetically translating data collected from inherently agnostic sensors through the design, construction and use of an instrument for sensing environmental difference (comprising 18 sensors measuring 27 data points), exemplified through a musical sonification. A generative instrument such as The Chime takes external impulses and translates them poetically into a form that naturally casts attention back upon the initial gust. In the built environment, such treatment of discrete sensing could help engender what I call acoustic places: places that, even if only for a passing moment, might resonate harmonically and reciprocally with the inspiration for their emission.
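
    By way of illustration only (the instrument itself is physical and far richer; the sensor names and ranges below are hypothetical), the basic sonification move can be sketched as a mapping from normalised sensor readings onto the notes of a scale, so that environmental differences surface as differences in pitch.

        # Illustrative sketch: map each sensor reading onto a pentatonic scale so
        # environmental differences become audible as differences in pitch.
        PENTATONIC_MIDI = [60, 62, 64, 67, 69, 72, 74, 76]  # roughly C major pentatonic

        def reading_to_note(value: float, lo: float, hi: float) -> int:
            """Scale a raw reading in [lo, hi] to a MIDI note from the scale."""
            if hi <= lo:
                return PENTATONIC_MIDI[0]
            pos = max(0.0, min(1.0, (value - lo) / (hi - lo)))
            return PENTATONIC_MIDI[round(pos * (len(PENTATONIC_MIDI) - 1))]

        # Hypothetical readings from three of the instrument's sensors.
        sensors = {"light_lux": (430.0, 0.0, 1000.0),
                   "temp_c":    (21.5, -10.0, 40.0),
                   "sound_db":  (58.0, 30.0, 100.0)}

        for name, (value, lo, hi) in sensors.items():
            print(name, "-> MIDI note", reading_to_note(value, lo, hi))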

    A critical practice-based exploration of interactive panoramas' role in helping to preserve cultural memory

    I am enclosing the content of two DVDs which are an integral part of this practice-based thesis. The rapid development of digital communication technologies in the 20th and 21st centuries has affected the way researchers look at how memory – especially cultural memory – can be preserved and enhanced. State-of-the-art communication technologies such as the Internet or immersive environments support participation and interaction and transform memory into ‘prosthetic’ experience, where digital technologies could enable the 'implantation' of events that have not actually been experienced. While there is a wealth of research on the preservation of public memory and cultural heritage sites using digital media, more can be explored on how these media can contribute to the cultivation of cultural memory. One of the most interesting phenomena related to this issue is how panoramas, which are immersive and have a well-established tradition of preserving memories, can be enhanced by recent digital technologies and image spaces. The emergence of digital panoramic video cameras and panoramic environments has opened up new opportunities for exploring the role of interactive panoramas not only as a documentary tool for visiting sites but mainly as a more complex technique for telling non-linear interactive narratives through the application of panoramic photography and panoramic videography which, when presented in a wrap-around environment, could enhance recall. This thesis attempts to explore a way of preserving inspirational environments and memory sites that combines panoramic interactive film and traversal of the panoramic environment with viewing photo-realistic panoramic content rather than a computer-generated environment. This research is based on two case studies. The case study of Charles Church in Plymouth represents the topical approach to narrative and focuses on the preservation of the memory of the Blitz in Plymouth and the ruin of Charles Church, which stands as a silent reminder of this event. The case study of Charles Causley reflects a topographical approach where, through traversing the town of Launceston, viewers learn about Causley’s life and the places that provided inspiration for his poems. The thesis explores through practice what can be done and reflects critically on the positive and less positive aspects of preserving cultural memory in these case studies. Therefore, the results and recommendations from this thesis can be seen as a valuable contribution to the study of intermedia and cultural memory in general.

    Audio augmented objects and the audio augmented reality experience

    This thesis explores the characteristics, experiential qualities and functional attributes of audio augmented objects within the context of museums and the home. Within these contexts, audio augmented objects are realised by attaching binaurally rendered and spatially positioned virtual audio content to real-world objects, museum artefacts, physical locations, architectural features, fixtures and fittings. The potential of these audio augmented objects is explored through a combination of practice-based research and ethnographically framed studies. The practical research takes the form of four sound installation environments, delivered through an augmented reality mobile phone application, that are deployed within a museum environment and in participants’ homes. Within these experiences, audio augmented objects are capable of being perceived as the actual source of virtual audio content. The findings also demonstrate how the perceived characteristics of real-world objects and physical space can be altered and manipulated through their audio augmentation. In addition, audio augmented museum objects present themselves as effective interfaces to digital audio archival content, and digital audio archival content presents itself as an effective re-animator of silenced museum objects. The thesis also presents how audio augmented objects can function as catalysts for the exploration of physical space and virtual audio space within both the home and the museum. This is achieved by uncovering a sequence of interactional phases along with the functional properties of different types of audio content and physical objects within audio augmented object realities. By way of conclusion, it is proposed that the audio augmented object reality alters the current, popular experience of acoustic virtual reality from an experience of you being there to one of it being here. This change in the perception of acoustic virtual reality has applications across an array of audio experiences, not just within cultural institutions but also within various domestic listening experiences, including the consumption and delivery of recorded music and audio-based drama.
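
    For readers unfamiliar with the mechanics, a minimal sketch follows (a simplified 2-D model, not the thesis application; object names and the audio path are hypothetical): each object anchors a virtual source, and the listener's pose determines the azimuth and distance a binaural renderer would use to make the sound appear to come from the object itself.

        # Simplified 2-D sketch: derive the azimuth and distance a binaural
        # renderer would need to place a virtual source on a real-world object.
        import math
        from dataclasses import dataclass

        @dataclass
        class AudioAugmentedObject:
            name: str
            x: float      # object position in metres
            y: float
            clip: str     # audio content attached to the object (hypothetical path)

        def render_params(obj, listener_x, listener_y, heading_deg):
            """Azimuth in degrees relative to the listener's facing, and distance in metres."""
            dx, dy = obj.x - listener_x, obj.y - listener_y
            distance = math.hypot(dx, dy)
            bearing = math.degrees(math.atan2(dy, dx))
            azimuth = (bearing - heading_deg + 180.0) % 360.0 - 180.0
            return azimuth, distance

        vase = AudioAugmentedObject("museum_vase", 2.0, 3.0, "vase_story.wav")
        print(render_params(vase, 0.0, 0.0, 90.0))  # values fed to the spatialiser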

    Clique: Perceptually Based, Task Oriented Auditory Display for GUI Applications

    Screen reading is the prevalent approach for presenting graphical desktop applications in audio. The primary function of a screen reader is to describe what the user encounters when interacting with a graphical user interface (GUI). This straightforward method allows people with visual impairments to hear exactly what is on the screen, but it introduces significant usability problems in a multitasking environment. Screen reader users must infer the state of ongoing tasks spanning multiple graphical windows from a single, serial stream of speech. In this dissertation, I explore a new approach to enabling auditory display of GUI programs. With this method, the display describes concurrent application tasks using a small set of simultaneous speech and sound streams. The user listens to and interacts solely with this display, never with the underlying graphical interfaces. Scripts support this level of adaptation by mapping GUI components to task definitions. Evaluation of this approach shows improvements in user efficiency, satisfaction, and understanding with little development effort. To develop this method, I studied the literature on existing auditory displays, working user behavior, and theories of human auditory perception and processing. I then conducted a user study to observe the problems encountered and techniques employed by users interacting with an ideal auditory display: another human being. Based on my findings, I designed and implemented a prototype auditory display, called Clique, along with scripts adapting seven GUI applications. I concluded my work by conducting a variety of evaluations on Clique. The results of these studies show the following benefits of Clique over the state of the art for users with visual impairments (1-5) and mobile sighted users (6):
    1. Faster, more accurate access to speech utterances through concurrent speech streams.
    2. Better awareness of peripheral information via concurrent speech and sound streams.
    3. Increased information bandwidth through concurrent streams.
    4. More efficient information seeking enabled by ubiquitous tools for browsing and searching.
    5. Greater accuracy in describing unfamiliar applications learned using a consistent, task-based user interface.
    6. Faster completion of email tasks in a standard GUI after exposure to those tasks in audio.
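
    To give a flavour of the scripting idea described above (an illustrative sketch, not Clique's actual API; the widget ids and tasks are hypothetical), a script can be thought of as a mapping from GUI widget identifiers to named task definitions, with the auditory display speaking each active task on its own stream rather than echoing the screen.

        # Illustrative sketch: a "script" maps GUI widget ids to task definitions,
        # and tasks are announced on separate (here simulated) speech streams.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Task:
            name: str
            steps: List[str] = field(default_factory=list)

        # Hypothetical script for a mail client.
        MAIL_SCRIPT = {
            "btn_compose": Task("Write a message", ["recipient", "subject", "body", "send"]),
            "list_inbox":  Task("Read new mail",   ["pick a message", "hear body", "reply or archive"]),
        }

        def announce(stream: str, text: str) -> None:
            print(f"[{stream}] {text}")   # stand-in for a concurrent speech stream

        def activate(widget_id: str, stream: str) -> None:
            task = MAIL_SCRIPT[widget_id]
            announce(stream, f"Task: {task.name}")
            for step in task.steps:
                announce(stream, f"Step: {step}")

        activate("btn_compose", "primary")    # foreground speech stream
        activate("list_inbox", "peripheral")  # quieter, concurrent stream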