
    Extending Public Accessibility to the Mind: Designing Airports for People with Aphasia

    Imagine traveling in an airport in another country. The language is entirely foreign, and all signs are written in text you cannot understand. You have ten minutes to make your connection. How do you find your gate? How do you ask questions? This hypothetical scenario of panic is a daily reality for many travelers who suffer from aphasia. Aphasia is a language disorder, often caused by stroke or other brain injury, that makes it difficult to communicate, read, and process numbers, especially in stressful environments like an airport. More than two million people in America suffer from aphasia and have not been effectively served in the public sphere. Since 1990, the Americans with Disabilities Act (ADA) has sought to “promote equal opportunity, full participation, independent living, and economic self-sufficiency for Americans with disabilities.” Though America has made great strides in making public spaces more accessible to people with physical disabilities, fewer attempts have been made to help those with cognitive impairments. This research addresses one of the many subgroups that struggle to mentally navigate public spaces. Through secondary research, case studies, and visual analysis, it explores practical methods and solutions that enable people with aphasia to navigate and use airports independently and confidently. The solution will require a layered approach: reimagining signage and navigation tools for airports, and creating training tools that help airport employees better understand and serve people with aphasia when they travel. Equipping aphasic travelers with the necessary tools and support will empower them to fly with confidence and independence. This collection of research and tools could expand to serve other people with language disorders navigating high-traffic public spaces such as hospitals, schools, subways, and bus stations.

    Multimodal Accessibility of Documents


    Interface Design for Mobile Applications

    Interface design is arguably one of the most important issues in the development of mobile applications. Mobile users often suffer from poor interface design that seriously hinders the usability of mobile applications. The major challenge in the interface design of mobile applications stems from the unique features of mobile devices, such as small screen size, low resolution, and inefficient data-entry methods. There is therefore a pressing need for theoretical frameworks or guidelines for designing effective and user-friendly interfaces for mobile applications. Based on a comprehensive literature review, this paper proposes a novel framework for the design of effective mobile interfaces. The framework consists of four major components: information presentation, data-entry methods, mobile users, and context. We also provide a set of practical interface design guidelines and some insight into which factors should be taken into consideration when designing interfaces for mobile applications.

    FM radio: family interplay with sonic mementos

    Digital mementos are increasingly problematic, as people acquire large amounts of digital belongings that are hard to access and often forgotten. Based on fieldwork with 10 families, we designed a new type of embodied digital memento, the FM Radio. It allows families to access and play sonic mementos of their previous holidays. We describe our underlying design motivation, where recordings are presented as a series of channels on an old-fashioned radio. User feedback suggests that the device met our design goals: it was playful and intriguing, easy to use, and social. It facilitated family interaction and allowed ready access to mementos, thus sharing many of the properties of physical mementos that we intended to trigger.

    Designing multimodal interaction for the visually impaired

    Although multimodal computer input is believed to have advantages over unimodal input, little has been done to understand how to design a multimodal input mechanism that facilitates visually impaired users' information access. This research investigates sighted and visually impaired users' multimodal interaction choices when given an interaction grammar that supports speech and touch input modalities. It investigates whether task type, working memory load, or the prevalence of errors in a given modality impacts a user's choice. Theories of human memory and attention are used to explain the users' speech and touch input coordination. Among the abundant findings from this research, the following are the most important in guiding system design: (1) Multimodal input is likely to be used when it is available. (2) Users select input modalities based on the type of task undertaken: they prefer touch input for navigation operations but speech input for non-navigation operations. (3) When errors occur, users prefer to stay in the failing modality rather than switch to another modality for error correction. (4) Despite the common multimodal usage patterns, there is still a high degree of individual difference in modality choices. Additional findings include: (1) Modality switching becomes more prevalent when lower working memory and attentional resources are required for the performance of other concurrent tasks. (2) Higher error rates increase modality switching, but only under duress. (3) Training order affects modality usage: teaching a modality first rather than second increases its use in users' task performance.
In addition to discovering the multimodal interaction patterns above, this research contributes to the field of human-computer interaction design by (1) presenting a design for an eyes-free multimodal information browser, and (2) presenting a Wizard of Oz method for working with visually impaired users in order to observe their multimodal interaction. The overall contribution of this work is that it is one of the early investigations into how speech and touch might be combined into a non-visual multimodal system that can effectively be used for eyes-free tasks.

    Voice controlled audiobook reader software for visually impaired

    This thesis results in a functional proof-of-concept application that will help Pratsam Oy Ab determine whether a voice-controlled audiobook player developed for the Google Home device has the potential to become a quality product. The thesis describes the development and research of a functional system that could be used as a starting point if it were developed into a full product. Before committing to product development, software companies may want to test whether a product concept is viable. Concept viability can be verified with research into possible development problems and with proof-of-concept software. The proof of concept can explore whether the product can be built at all, and whether its quality would be high enough to yield a profitable product. In this product concept, the user uses voice to control and play audiobooks in the Daisy 2.02 format, an audiobook format developed for the visually impaired. The audiobook reading software is developed for the Google Home device without a visual user interface. The main technologies of the thesis are a voice user interface, speech recognition, text-to-speech, the Google Cloud Platform, a cloud-based MySQL database, a Java REST API backend, and a Java console application that parses data from Daisy audiobook files into the database. The proof-of-concept period did not fully prove the viability of the concept. However, Google has given indications that the required features may be added later, which could make further development of the concept worthwhile.
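    The Daisy 2.02 format referenced above stores a book's table of contents in an ncc.html file, where h1–h6 headings contain anchors pointing into SMIL files. As a rough illustration of the kind of parsing step the abstract describes (the thesis itself uses Java; the class and function names here are hypothetical, not taken from the thesis), a minimal sketch might extract that navigation structure like this:

    ```python
    from html.parser import HTMLParser

    class NccParser(HTMLParser):
        """Minimal parser for a Daisy 2.02 ncc.html navigation file.

        Collects (heading_level, smil_href, title) triples from the
        h1..h6 headings that make up the book's table of contents.
        """
        def __init__(self):
            super().__init__()
            self.entries = []      # (level, href, title)
            self._level = None     # heading level currently open
            self._href = None      # href of the anchor currently open
            self._text = []        # text fragments inside that anchor

        def handle_starttag(self, tag, attrs):
            if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
                self._level = int(tag[1])
            elif tag == "a" and self._level is not None:
                self._href = dict(attrs).get("href")

        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._level is not None:
                title = "".join(self._text).strip()
                self.entries.append((self._level, self._href, title))
                self._level, self._href, self._text = None, None, []

    def parse_ncc(html_text):
        """Return the navigation entries found in an ncc.html document."""
        parser = NccParser()
        parser.feed(html_text)
        return parser.entries
    ```

    A real importer would then walk these entries, resolve each SMIL reference to its audio clips, and write the resulting structure into the database; this sketch covers only the first step.
    
    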

    Strategic Intelligence Monitor on Personal Health Systems (SIMPHS): Report on Typology/Segmentation of the PHS Market

    This market segmentation report for Personal Health Systems (PHS) describes the methodological background and illustrates the principles of classification and typology for the different segments forming this market. It discusses different aspects of the market for PHS and highlights the challenges of defining a stringent and clear-cut typology or market segmentation. Based on these findings, a preliminary hybrid typology is created, together with indications and insights to be used in the continuation of the SIMPHS project. It concludes with an annex containing examples and case studies. JRC.DDG.J.4 - Information Society