
    Helping the Blind to Get through COVID-19: Social Distancing Assistant Using Real-Time Semantic Segmentation on RGB-D Video

    The current COVID-19 pandemic is having a major impact on our daily lives. Social distancing is one of the measures that has been implemented with the aim of slowing the spread of the disease, but it is difficult for blind people to comply with it. In this paper, we present a system that helps blind people maintain physical distance from other persons using a combination of RGB and depth cameras. We run a real-time semantic segmentation algorithm on the RGB stream to detect where persons are and use the depth camera to measure the distance to them; we then provide audio feedback through bone-conducting headphones if a person is closer than 1.5 m. Our system warns the user only when persons are nearby and does not react to non-person objects such as walls, trees, or doors; it is therefore unobtrusive and can be used in combination with other assistive devices. We tested our prototype on one blind and four blindfolded persons and found that the system is precise, easy to use, and imposes a low cognitive load.
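
    A minimal Python sketch of the detect-then-measure loop this abstract describes, assuming aligned RGB-D frames; the segmentation call (`segment_persons`) and audio callback (`play_warning`) are hypothetical stand-ins, while the 1.5 m threshold comes from the paper:

```python
import numpy as np

PERSON_THRESHOLD_M = 1.5  # warning distance from the paper

def nearest_person_distance(person_mask: np.ndarray,
                            depth_m: np.ndarray) -> float | None:
    """Distance in metres to the closest detected person pixel, or None.

    person_mask: boolean HxW mask produced by the segmentation model.
    depth_m:     HxW depth image aligned to the RGB frame, in metres.
    """
    depths = depth_m[person_mask]
    depths = depths[depths > 0]  # discard invalid/zero depth readings
    if depths.size == 0:
        return None
    # A low percentile is more robust to depth noise than the raw minimum.
    return float(np.percentile(depths, 5))

def process_frame(rgb, depth_m, segment_persons, play_warning):
    """One iteration of the assistant loop (callbacks are hypothetical)."""
    mask = segment_persons(rgb)               # person/non-person mask
    d = nearest_person_distance(mask, depth_m)
    if d is not None and d < PERSON_THRESHOLD_M:
        play_warning(d)                       # e.g. cue over bone-conduction headphones
```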

    Users' perception of relevance of spoken documents

    We present the results of a study of users' perception of the relevance of documents. The aim is to study experimentally how users' perception varies depending on the form in which retrieved documents are presented. Documents retrieved in response to a query are presented to users in a variety of ways, from full text to a machine-spoken, query-biased, automatically generated summary, and the difference in users' perception of relevance is studied. The experimental results suggest that the effectiveness of advanced multimedia information retrieval applications may be affected by the low level of users' perception of the relevance of retrieved documents.

    Design and evaluation of auditory spatial cues for decision making within a game environment for persons with visual impairments

    An audio platform game was created and evaluated in order to answer the question of whether an audio game can be designed that effectively conveys the spatial information necessary for persons with visual impairments to successfully navigate the game levels and respond to audio cues in time to avoid obstacles. The game used several types of audio cues (sounds and speech) to convey the spatial setup (map) of the game world. Most audio-only players seemed able to create a workable mental map from the game's sound cues alone, pointing to potential for the further development of similar audio games for persons with visual impairments. The research also investigated the navigational strategies used by persons with visual impairments and the accuracy of the participants' mental maps as a consequence of their navigational strategy. A comparison of the maps created by visually impaired participants with those created by sighted participants playing the game with and without graphics showed no statistically significant difference in map accuracy between groups. However, there was a marked difference in the number of invented objects between the sighted audio-only group and the other groups, which could serve as an area for future research.
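
    The abstract does not specify how the cues encode position, but a common scheme in audio-only games is stereo panning plus distance attenuation; the sketch below is one such hypothetical mapping, offered purely as an illustration rather than the game's actual design:

```python
import math

def spatial_cue(dx: float, dy: float, max_dist: float = 10.0):
    """Map an object's offset from the player to stereo cue parameters.

    dx: lateral offset in game units (negative = left of the player).
    dy: forward distance in game units.
    Returns (left_gain, right_gain): gains in [0, 1] for the cue sound.
    """
    dist = math.hypot(dx, dy)
    loudness = max(0.0, 1.0 - dist / max_dist)       # nearer objects sound louder
    pan = max(-1.0, min(1.0, dx / max(dist, 1e-6)))  # -1 = hard left, +1 = hard right
    left_gain = loudness * (1.0 - pan) / 2.0
    right_gain = loudness * (1.0 + pan) / 2.0
    return left_gain, right_gain
```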

    Using audio cues to support motion gesture interaction on mobile devices

    Motion gestures are an underutilized input modality for mobile interaction despite numerous potential advantages. Negulescu et al. found that the lack of feedback on attempted motion gestures made it difficult for participants to diagnose and correct errors, resulting in poor recognition performance and user frustration. In this paper, we describe and evaluate a training and feedback technique, Glissando, which uses audio characteristics to provide feedback on the system's interpretation of user input. The technique verbally confirms correct gestures and notifies users of errors, and it provides continuous feedback by mapping a distinct musical note to each of three axes and manipulating pitch to convey both spatial and temporal information.
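
    Based only on the mapping the abstract names (one distinct note per axis, pitch varied over the gesture), a hedged reconstruction might look like the following; the note choices and octave range are assumptions, not Glissando's published parameters:

```python
# Hypothetical reconstruction: one distinct base note per axis (C4/E4/G4
# are assumed, not taken from the paper), with pitch shifted continuously
# by displacement along that axis.
BASE_FREQ_HZ = {"x": 261.63, "y": 329.63, "z": 392.00}

def axis_feedback(axis: str, displacement: float, max_disp: float = 1.0) -> float:
    """Frequency to play for motion along one axis.

    Pitch rises (or falls) with displacement, giving continuous feedback
    on how far the device has moved; the +/- one octave range is assumed.
    """
    frac = max(-1.0, min(1.0, displacement / max_disp))
    return BASE_FREQ_HZ[axis] * (2.0 ** frac)
```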

    The Role of Sonification as a Code Navigation Aid: Improving Programming Structure Readability and Understandability For Non-Visual Users

    Integrated Development Environments (IDEs) play an important role in the workflow of many software developers, e.g. providing syntactic highlighting or other navigation aids to support the creation of lengthy codebases. Unfortunately, such complex visual information is difficult to convey with current screen-reader technologies, thereby creating barriers for programmers who are blind yet nevertheless use IDEs. This dissertation is focused on utilizing audio-based techniques to assist non-visual programmers when navigating through large amounts of code. Recently, audio generation techniques have seen major improvements in their capability to convey visually-based information to both sighted and non-visual users, making them a potential candidate for providing useful information, especially in places where information is visually structured. However, little is known about the usability of such techniques in software development. Therefore, we investigated whether audio-based techniques are capable of providing useful information about the code structure to assist non-visual programmers. The major contributions in this dissertation are split into two parts: The first part explains our prior work investigating the major challenges in software development faced by non-visual programmers, specifically code navigation difficulties. It also discusses areas of improvement where additional features could be developed in order to make the programming environment more accessible to non-visual programmers. The second part focuses on studies aimed at evaluating the usability and efficacy of audio-based techniques for conveying the structure of the codebase, as suggested by the stakeholders in Part I. Specifically, we investigated various sound effects, audio parameters, and interaction techniques to determine whether they could provide adequate support to assist non-visual programmers when navigating through lengthy codebases. In Part II, we discuss the methodological aspects of evaluating the above-mentioned techniques with the stakeholders and examine these techniques using an audio-based prototype designed to control audio timing, locations, and methods of interaction. A set of design guidelines is provided, based on the preceding evaluation, that suggests including auditory feedback in the programming environment to improve code structure readability and understandability for non-visual programmers.
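
    One plausible concrete instance of such structure sonification, offered only as an illustration, maps each line's nesting depth to pitch so that deeper scopes sound higher; the indent width and interval per level are assumptions, not parameters from the dissertation:

```python
def nesting_depths(source: str, indent_width: int = 4):
    """Yield (line_number, depth) for each non-blank line of source code."""
    for i, line in enumerate(source.splitlines(), start=1):
        if line.strip():
            indent = len(line) - len(line.lstrip(" "))
            yield i, indent // indent_width

def depth_to_frequency(depth: int, base_hz: float = 220.0) -> float:
    """Raise pitch four semitones per nesting level (an assumed interval)."""
    return base_hz * (2.0 ** (4 * depth / 12.0))
```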

    Sonancia: sonification of procedurally generated game levels

    How can creative elements from level design be effectively coupled with audio in order to create tense and engaging player experiences? In this paper, the above question is posed through the sonification of procedurally generated digital game levels. The paper details some initial approaches and methodologies for achieving this core aim. The research is supported, in part, by the FP7 ICT project C2Learn (project no. 318480) and the FP7 Marie Curie CIG project AutoGameDesign (project no. 630665).

    Exploring the Use of Wearables to Enable Indoor Navigation for Blind Users

    One of the challenges that people with visual impairments (VI) have to confront daily is navigating independently through foreign or unfamiliar spaces. Navigating through unfamiliar spaces without assistance is very time consuming and leads to lower mobility; in indoor environments, where GPS is unavailable, the task becomes even harder. However, advances in mobile and wearable computing are paving the way to inexpensive assistive technologies that can make the lives of people with VI easier. Wearable devices have great potential for assistive applications for users who are blind, as they typically feature a camera and support hands-free, eyes-free interaction. Smart watches and heads-up displays (HUDs), in combination with smartphones, can provide a basis for developing advanced algorithms capable of providing inexpensive solutions for navigation in indoor spaces. New interfaces are also introduced that make the interaction between users who are blind and mobile devices more intuitive. This work presents a set of new systems and technologies created to help users with VI navigate indoor environments. The first system presented is an indoor navigation system for people with VI that operates using sensors found in mobile devices and virtual maps of the environment. The second system helps users cross large open spaces with minimal veering. Next, a study is conducted to determine the accuracy of pedometry for different body placements of the accelerometer sensors. Finally, a gesture detection system is introduced that supports communication between the user and mobile devices using sensors in wearable devices.
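
    To make the pedometry study concrete, below is a naive accelerometer-based step counter of the kind whose accuracy such a study would measure; the threshold and timing parameters are illustrative and would need tuning per body placement, which is precisely what the study varies:

```python
import numpy as np

def count_steps(accel_xyz: np.ndarray, fs_hz: float,
                threshold_g: float = 1.15, min_step_s: float = 0.3) -> int:
    """Count steps in an Nx3 accelerometer trace (units of g).

    Detects peaks in the acceleration magnitude above `threshold_g` that
    are at least `min_step_s` apart; both parameters depend on where the
    sensor is worn, which is what a body-placement study measures.
    """
    mag = np.linalg.norm(accel_xyz, axis=1)
    min_gap = int(min_step_s * fs_hz)
    steps, last_peak = 0, -min_gap
    for i in range(1, len(mag) - 1):
        is_peak = mag[i] >= mag[i - 1] and mag[i] > mag[i + 1]
        if is_peak and mag[i] > threshold_g and i - last_peak >= min_gap:
            steps += 1
            last_peak = i
    return steps
```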