
    Somatic ABC's: A Theoretical Framework for Designing, Developing and Evaluating the Building Blocks of Touch-Based Information Delivery

    Situations of sensory overload are steadily becoming more frequent as the ubiquity of technology approaches reality, particularly with the advent of socio-communicative smartphone applications and pervasive, high-speed wireless networks. Although the ease of accessing information has improved our communication effectiveness and efficiency, our visual and auditory modalities, the very modalities that today's computerized devices and displays largely engage, have become overloaded, creating possibilities for distraction, delay and high cognitive load, which in turn can lead to a loss of situational awareness and increase the chances of life-threatening situations such as texting while driving. Surprisingly, alternative modalities for information delivery have seen little exploration. Touch, in particular, is a promising candidate, given that the skin is our largest sensory organ and has impressive spatial and temporal acuity. Although some approaches have been proposed for touch-based information delivery, they are not without limitations, including high learning curves, limited applicability and/or limited expression. This is largely due to the lack of a versatile, comprehensive design theory; specifically, a theory that addresses the design of touch-based building blocks for expandable, efficient, rich and robust touch languages that are easy to learn and use. Moreover, beyond design, there is a lack of implementation and evaluation theories for such languages. To overcome these limitations, a unified theoretical framework, inspired by natural spoken language, is proposed: Somatic ABC's, for Articulating (designing), Building (developing) and Confirming (evaluating) touch-based languages. To evaluate the usefulness of Somatic ABC's, its design, implementation and evaluation theories were applied to create communication languages for two very different application areas: audio-described movies and motor learning. These applications were chosen because they presented opportunities for complementing communication by offloading information, typically conveyed visually and/or aurally, to the skin. For both studies, it was found that Somatic ABC's aided the design, development and evaluation of rich somatic languages with distinct and natural communication units.
    Dissertation/Thesis, Ph.D. Computer Science, 201

    Instructional eLearning technologies for the vision impaired

    The principal sensory modality employed in learning is vision, which not only makes it difficult for vision-impaired students to access existing educational media but also bars them from the new and mostly visiocentric learning materials being offered through on-line delivery mechanisms. Using the Cisco Certified Network Associate (CCNA) and IT Essentials courses as a reference, a study has been made of tools that can access such on-line systems and transcribe the materials into a form suitable for vision-impaired learning. The modalities employed included haptic, tactile, audio and descriptive text. How such a multi-modal approach can achieve equivalent success for the vision impaired is demonstrated. However, the study also shows the limits of the current understanding of human perception, especially with respect to comprehending two- and three-dimensional objects and spaces when there is no recourse to vision.

    You don't need eyes to see, you need vision: performative pedagogy, technology and teaching art to students with vision impairment

    This paper links experiential learning and Performance Art with public pedagogy on sight/visual negation, and contributes to knowledge by drawing together performance as pedagogy to demonstrate how teaching styles can accommodate those with vision impairment and how (performance) art can be adapted to make it more accessible. In so doing it seeks to develop inclusion for students with a vision impairment. Intermeshing practice, teaching and research around issues of access, participation and education, it builds upon previous work exploring teaching strategies for the visually impaired within contemporary art practice (Axel and Levent, 2003; Hayhoe, 2008; Allan, 2014) and shares useful adaptations to help make learning about art more accessible for students with vision impairment. It also sheds light upon aspects of the question, ‘What are the basics that an educator needs to know when designing art programs for persons with visual impairment?’ (Axel and Levent, 2003: 51). This paper can be read as a benchmark for critical engagement in its attempt to combine performative pedagogy with an emphasis on technological means, access and visual impairment. While vision is favoured over other senses (Jonas, 1954), and with digital and virtual realities an increasingly important component of students’ lives, never has there been a time in which the meaning of access has been so broadened by technological mediation that draws on all senses, mediation to which artworks, as suggested, respond. Relying on all senses becomes an aspect of public pedagogy that is more inclusive.

    The Knotweed Factor: Non-visual Aspects of Poetic Documentary

    This thesis is an inquiry into the creative processes of poetry and poetic expression in documentary. The practice-based element is a 60 minute video about a poet living in Exeter, UK, called James Turner. The documentary is entitled, The Knotweed Factor. This written element of the thesis contextualises the investigation as a discourse on blindness and visual impairment. There are few representations of blindness and/or visual impairment (VI) in The Knotweed Factor. Rather, the documentary is concerned with how visual information (e.g., filming a poet) is translated non-visually (e.g., the sound of the poem being recited). It also addresses the issue of how the non-visual is translated into the visual. I argue in this text that blindness/VI is marginalised in visual studies/culture. This is unfortunate because blindness/VI studies provides valuable context for understanding the dynamics of sound and vision in creative media, which is a central concern of The Knotweed Factor. The rationale for taking this approach is as follows: During the editing, it was noticed that Turner (who is sighted) provides a kind of unprompted audio description (AD) of events in his environment to the audience, as if he is participating in a radio documentary. This raised questions, not only about the ekphrastic possibilities of his technique, but also about the potential to contextualise such scenes as a disquisition on blindness/VI. Blindness/VI is an important and under-theorised element of visual studies/culture (VS/C). Many films, plays, animations, documentaries, and television programmes are audio described. AD enables the blind/visually impaired (also VI) to comprehend and enjoy visual action. It is suggested here that AD theory is an insufficient model for critically reflecting on the creative processes in The Knotweed Factor. This is because the field is presently more concerned with practicability than with aesthetics. 
It seemed more helpful to address the broader question of how blindness/VI is positioned in VS/C. Doing so has highlighted instances of exclusion and marginalisation in VS/C. In the course of the video production, it was discovered that the interaction of dreams, memories, and ideas (the mindscape) informs the temporal creative process. Most analytical models within VS/C (e.g., Deleuze) offer a dialectical approach to understanding creativity. Henri Bergson, however, proposes a theory of multiplicity, which considers the interplay of phenomenological creativity of the mindscape as a homogeneous, multifaceted process, in place of a dialectical one. Martha Blassnigg interrogates Bergson’s responses to audiovisual media and argues that Bergson’s multiplicity formula is more useful for understanding these processes, both for artist and audience. Blassnigg interprets Bergson’s theory as a universality of idea communication. This thesis considers what the universality of audiovisual experience implies for blindness/VI studies. It does so by contextualising the written research as a discourse on VS/C. In The Knotweed Factor, the emotions, sounds, and visual ideas, memories, and dreams which inform James Turner’s creativity are conveyed to the audience in two ways: 1) by sound (Turner’s recitations, interviews, and conversations), and 2) by the documentary’s abstracted audiovisualisations of Turner’s poetry and mindscape. For Turner, the ‘image’ is a personalised, innate phenomenon. It is ephemeral, intangible imagination. Turner’s experience (audiovisualised in The Knotweed Factor) is compared in this written part of the thesis to pre-Socratic ideations of image-making. It is argued that for many cultures, the image was (and for some remains) an emanation of spirit or idea. In other words, the image was considered a transcendent force, and the ‘soul’ of the image eternal and universal.
This transcendence is considered in this written element of the thesis as a bridge across the present academic gap between the fields of blindness/VI studies and visual studies/culture. In this text, The Knotweed Factor serves as a case study to test how non- and minimal-visual elements of audiovisual art and media are positioned in VS/C. Constructed here is a history of the interpretation of blindness and the image, from pre-Socratic aesthetics to the Enlightenment, where ideas concerning the phenomenology of blindness and visual impairment were transformed into epistemological inquiries. This approach enables the researcher to reflect critically on the aesthetics of The Knotweed Factor, using the framework of the non-visual (in this case recited poetry) to test and interrogate the visual (i.e., ‘poetically’ visualised poetry).

    Convey Data in Qlik¼ Sense from a Universal Design Perspective

    “The important criterion for a graph is not simply how fast we can see a result; rather it is whether through the use of the graph we can see something that would have been harder to see otherwise or that could not have been seen at all.” (William S. Cleveland) When presenting graphs for people with visual impairments, the solutions found on the market today often present pure data values only. Overview is missing, and there is no possibility of getting any additional information about the visual content of the graph. Should we accept the fact that visually impaired persons are only presented with the data, or could they actually benefit from data represented in graphical form? The aim of this project was to investigate how to provide people with visual impairments the best possible user experience when analyzing data in the business intelligence program Qlik Sense. The research showed that it is possible to convey an overview of the content of graphs with a synthetic speech solution. The synthetic speech presents the purpose of the graph and key values as well as the overall shape of the curve. In a future development it would be possible to extend the product with voice recognition, allowing the user to explore the data and make their own discoveries. The project began with a literature study to find previously conducted work in the same field. To gather proper knowledge about what information users find interesting, what they are looking for and how they can benefit from using charts, a pilot study was initiated. The pilot study was performed with people whose vision is classified as normal. Further, persons from the target group, i.e. people with visual impairments, were interviewed to gain an understanding of what is missing from today’s low-vision aids and solutions. These results were used when creating a Low Fidelity prototype in an attempt to show how to present visual data to a visually impaired user.
The prototype was tested on both sighted persons and persons from the target group. Results were then collected and analyzed to create a foundation for the High Fidelity (Hi-Fi) prototype that was developed to realize the design ideas. Finally, the Hi-Fi prototype was evaluated through usability tests that resulted in the above-mentioned conclusion.
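The kind of overview the synthetic speech solution conveys (the graph's purpose, key values, and the overall shape of the curve) can be illustrated with a small sketch. The function name, summary wording, and sample series below are hypothetical illustrations, not part of Qlik Sense or the thesis prototype; the generated string would then be handed to any text-to-speech engine.

```python
def describe_curve(label, values):
    """Build a spoken-style summary of a data series: its purpose (label),
    key values (extremes), and the overall shape of the curve.
    Illustrative sketch only; not the Qlik Sense implementation."""
    lo, hi = min(values), max(values)
    # Overall shape: compare the first and last points of the series.
    if values[-1] > values[0]:
        shape = "rising overall"
    elif values[-1] < values[0]:
        shape = "falling overall"
    else:
        shape = "flat overall"
    return (f"{label}: {len(values)} points, "
            f"minimum {lo}, maximum {hi}, curve {shape}.")

# The resulting string could be spoken by any text-to-speech engine.
print(describe_curve("Monthly sales", [3, 5, 4, 8, 9]))
# → Monthly sales: 5 points, minimum 3, maximum 9, curve rising overall.
```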

    Putting It Into Words: The Impact of Visual Impairment on Perception, Experience and Presence

    The experience of being “present” in a mediated environment, such that it appears real, is known to be affected by deficits in perception, yet little research has been devoted to disabled audiences. People with a visual impairment access audiovisual media by means of Audio Description (AD), which gives visual information in verbal form. The AD user plays an active role, engaging their own perceptual processing systems and bringing real-world experiences to the mediated environment. In exploring visual impairment and presence, this thesis concerns a question fundamental to psychology: whether propositional and experiential knowledge equate. It casts doubt on current models of sensory compensation in the blind and puts forward an alternative hypothesis of linguistic compensation. Qualitative evidence from Study 1 suggests that, in the absence of bimodal (audio-visual) cues, words can compensate for missing visual information. The role of vision in multisensory integration is explored experimentally in Studies 2 and 3. Crossmodal associations arising both from direct perception and imagery are shown to be altered by visual experience. Study 4 tests presence in an auditory environment. Non-verbal sound is shown to enhance presence in the sighted but not the blind. Both Studies 3 and 4 support neuroimaging evidence that words are processed differently in the absence of sight. Study 5, comparing mental spatial models, suggests this is explained by explicit verbal encoding by people with a visual impairment. Study 6 tests the effect of words on presence and emotion elicitation in an audiovisual environment. In the absence of coherent information from the dialogue, additional verbal information significantly improves understanding. Moreover, in certain circumstances, Audio Description significantly enhances presence and successfully elicits a target emotion. A model of Audio Description is presented. Implications are discussed for theoretical models of perceptual processing and presence in those with and without sight.

    A Haptic Study to Inclusively Aid Teaching and Learning in the Discipline of Design

    Designers are known to use a blend of manual and virtual processes to produce design prototype solutions. For modern designers, computer-aided design (CAD) tools are an essential requirement for beginning to develop design concept solutions. CAD, together with augmented reality (AR) systems, has altered the face of design practice, as witnessed by the way a designer can now change a 3D concept’s shape, form, color, pattern, and texture at the click of a button in minutes, rather than taking the classic approach of laboring over a physical model in the studio for hours. However, CAD can often limit a designer’s experience of being ‘hands-on’ with materials and processes. The rise of machine haptic (MH) tools has afforded great potential for designers to feel more ‘hands-on’ with virtual modeling processes. Through the use of MH, product designers are able to control, virtually sculpt, and manipulate virtual 3D objects on-screen. Design practitioners are well placed to make use of haptics to augment 3D concept creation, which is traditionally a highly tactile process. For similar reasons, non-sighted and visually impaired (NS, VI) communities could also benefit from using MH tools to increase touch-based interactions, thereby creating better access for NS, VI designers. In spite of this, the use of MH within the design industry (specifically product design), or by the non-sighted community, is still in its infancy, and the full benefit of haptics in aiding non-sighted designers has not yet been realised. This thesis empirically investigates the use of multimodal MH as a step closer to improving the virtual hands-on process, for the benefit of NS, VI and fully sighted (FS) Designer-Makers. The thesis comprises four experiments, embedded within four case studies (CS1-4). Case studies 1 and 2 worked with self-employed NS, VI Art Makers at Henshaws College for the Blind and Visual Impaired; these studies examined the effects of haptics on NS, VI users’ evaluations of experience. Case studies 3 and 4, featuring experiments 3 and 4, were designed to examine the effects of haptics on distance-learning design students at the Open University. The empirical results from all four case studies showed that NS, VI users were able to navigate and perceive virtual objects via the force from the haptically rendered objects on-screen. Moreover, they were assisted by the whole multimodal MH assistance, which in CS2 appeared to offer better assistance to NS than to FS participants. In CS3 and CS4, MH and multimodal assistance afforded equal assistance to NS, VI, and FS participants, but haptics were not as successful in bettering the time results recorded in manual (M) conditions. However, the collision data between the M and MH conditions showed little statistical difference. The thesis showed that multimodal MH systems, specifically used in kinesthetic mode, have enabled humans (non-disabled and disabled) to credibly judge objects within the virtual realm. It also shows that multimodal augmented tooling can improve interaction and afford better access to the graphical user interface for a wider body of users.
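Perceiving a virtual object "via the force from the haptically rendered object", as the abstract describes, is commonly implemented with penalty-based haptic rendering: when the haptic probe penetrates a virtual surface, the device pushes back with a spring force proportional to the penetration depth. The one-dimensional sketch below is a generic illustration with assumed names and a made-up stiffness value, not the thesis's implementation.

```python
def contact_force(probe_pos, surface_pos, stiffness=300.0):
    """Penalty-based haptic rendering in one dimension (illustrative sketch):
    push back against the probe with a Hooke's-law spring force whenever it
    penetrates the virtual surface; render no force in free space."""
    penetration = surface_pos - probe_pos  # positive when probe is inside the object
    if penetration <= 0.0:
        return 0.0  # free space: the haptic device renders no force
    return stiffness * penetration  # restoring force (newtons)

# Moving above the surface produces no force; pressing 0.5 units in does.
print(contact_force(0.02, 0.0))  # → 0.0
print(contact_force(-0.5, 0.0))  # → 150.0
```

In a real device loop this force would be recomputed at a high rate (typically around 1 kHz) and sent to the motors, which is what lets a user feel a stable, solid virtual surface.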