
    Coding without sight: Teaching object-oriented Java programming to a blind student

    In this paper, I describe my experience of teaching object-oriented Java programming to a blind student. This includes the particular environment setup used (a screen reader, JAWS, and an advanced Windows-based text editor, TextPad) and alterations made to the course to accommodate the blind student's special needs. I also discuss how a number of difficulties encountered by the blind student, such as compiling Java applications using the command-line interface and javac, a Java compiler, were addressed, and provide some practical recommendations based on my experience.
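    As an illustration of the command-line workflow mentioned above, the following is a minimal sketch of the compile-and-run cycle with javac; the file name, class name, and printed message are hypothetical examples rather than material from the course described.

    // Minimal sketch of the javac compile-and-run cycle described above.
    // The file name, class name, and message are hypothetical examples.
    //
    //   javac HelloAudit.java   (produces HelloAudit.class)
    //   java HelloAudit         (runs the program; console output can be read by JAWS)
    public class HelloAudit {
        public static void main(String[] args) {
            // Plain console output is accessible to screen readers.
            System.out.println("Compilation and execution succeeded.");
        }
    }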

    An aesthetics of touch: investigating the language of design relating to form

    How well can designers communicate qualities of touch? This paper presents evidence that they have some capability to do so, much of which appears to have been learned, but at present make limited use of such language. Interviews with graduate designer-makers suggest that they are aware of and value the importance of touch and materiality in their work, but lack a vocabulary for touch comparable to the detailed explanations they give of other aspects, such as their intent or selection of materials. We believe that more attention should be paid to the verbal dialogue that happens in the design process, particularly as other researchers have shown that even making-based learning has a strong verbal element. However, verbal language alone does not appear to be adequate for a comprehensive language of touch. Graduate designer-makers' descriptive practices combined non-verbal manipulation with verbal accounts. We thus argue that haptic vocabularies do not simply describe material qualities, but rather are situated competences that physically demonstrate the presence of haptic qualities. Such competences are more important than groups of verbal vocabularies in isolation. Design support for developing and extending haptic competences must take this wide range of considerations into account to comprehensively improve designers' capabilities.

    A Dataset for Movie Description

    Descriptive video service (DVS) provides linguistic descriptions of movies and allows visually impaired people to follow a movie along with their peers. Such descriptions are by design mainly visual and thus naturally form an interesting data source for computer vision and computational linguistics. In this work we propose a novel dataset of transcribed DVS that is temporally aligned to full-length HD movies. In addition, we collected the aligned movie scripts that have been used in prior work and compare the two different sources of descriptions. In total, the Movie Description dataset contains a parallel corpus of over 54,000 sentences and video snippets from 72 HD movies. We characterize the dataset by benchmarking different approaches for generating video descriptions. Comparing DVS to scripts, we find that DVS is far more visual and describes precisely what is shown, rather than what should happen according to the scripts created prior to movie production.
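    As a rough illustration of what one entry of such a parallel corpus looks like, the sketch below pairs a transcribed DVS sentence with the time span of its video snippet; the field names and example values are assumptions for illustration, not the dataset's actual schema.

    // Hypothetical sketch of a single aligned entry: a DVS sentence tied to a
    // time span within a movie. Field names and example values are illustrative
    // assumptions, not the actual schema of the Movie Description dataset.
    public class AlignedDescription {
        final String movieId;    // identifier of the source movie
        final double startSec;   // start of the aligned video snippet (seconds)
        final double endSec;     // end of the aligned video snippet (seconds)
        final String sentence;   // transcribed DVS sentence for this snippet

        AlignedDescription(String movieId, double startSec, double endSec, String sentence) {
            this.movieId = movieId;
            this.startSec = startSec;
            this.endSec = endSec;
            this.sentence = sentence;
        }

        public static void main(String[] args) {
            AlignedDescription entry = new AlignedDescription(
                    "movie_0001", 125.4, 129.1, "She opens the door and steps outside.");
            System.out.println(entry.movieId + " [" + entry.startSec + "s-"
                    + entry.endSec + "s]: " + entry.sentence);
        }
    }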

    VisionBlocks: A Social Computer Vision Framework

    Vision Blocks (http://visionblocks.org) is an on-demand, in-browser, customizable computer vision application publishing platform for the masses. It empowers end-users (consumers) to create novel solutions for themselves that they would not easily obtain off-the-shelf. By transferring design capability to the consumers, we enable creation and dissemination of custom products and algorithms. We adapt a visual programming paradigm to codify vision algorithms for general use. As a proof of concept, we implement computer vision algorithms such as motion tracking, face detection, change detection and others, and demonstrate their application on real-time video. Our studies show that end users (non-programmers) need only 50% more time to build such systems than the most experienced researchers. We made progress towards closing the gap between researchers and consumers, finding that end users rate the intuitiveness of the approach only 6% lower than researchers do. We discuss different application scenarios where such a platform will be useful and argue its benefit for the computer vision research community. We believe that enabling users to create applications is a first step towards creating social computer vision applications and platforms.
    Alfred P. Sloan Foundation (Research Fellowship)
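    Of the building blocks named above, change detection is the easiest to sketch. The following frame-differencing example assumes grayscale frames as flat integer arrays and an arbitrary threshold; it illustrates the general technique, not the VisionBlocks implementation.

    // Hedged sketch of frame-differencing change detection, one of the vision
    // building blocks mentioned above. Frames are assumed to be grayscale pixel
    // arrays in [0, 255]; the threshold is an arbitrary illustrative choice.
    public class ChangeDetector {

        /** Returns the fraction of pixels whose intensity changed by more than threshold. */
        static double changedFraction(int[] previousFrame, int[] currentFrame, int threshold) {
            int changed = 0;
            for (int i = 0; i < currentFrame.length; i++) {
                if (Math.abs(currentFrame[i] - previousFrame[i]) > threshold) {
                    changed++;
                }
            }
            return (double) changed / currentFrame.length;
        }

        public static void main(String[] args) {
            int[] previous = new int[] { 10, 10, 10, 200, 200, 200 };
            int[] current  = new int[] { 12, 10,  9,  40, 200, 198 };
            // One of six pixels differs by more than 25, so roughly 17% of the frame changed.
            System.out.println("changed fraction: " + changedFraction(previous, current, 25));
        }
    }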

    Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired

    Touchscreens have become a de facto standard of input for mobile devices, as they make optimal use of the limited input and output space imposed by their form factor. In recent years, people who are blind and visually impaired have been increasing their usage of smartphones and touchscreens. Although basic access is available, there are still many accessibility issues left to deal with in order to bring full inclusion to this population. One of the important challenges lies in accessing and creating spatial information on touchscreens. The work presented here provides three new techniques, using three different modalities, for accessing spatial information on touchscreens. The first system makes geometry and diagram creation accessible on a touchscreen through the use of text-to-speech and gestural input. This first study is informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes directions through maps accessible using multiple vibration sensors without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make various types of applications accessible, such as physics simulations, astronomy, and video games.
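    To make the binaural idea concrete, the sketch below maps the horizontal offset between a touch point and a target to left/right channel gains with an equal-power pan; the panning law, parameter names, and values are assumptions for illustration, not the system described in this work.

    // Hedged sketch of the general idea behind stereo/binaural feedback on a
    // touchscreen: map the horizontal offset between the finger and a target to
    // left/right channel gains. The equal-power panning law and all parameters
    // are illustrative assumptions.
    public class StereoCue {

        /** Returns {leftGain, rightGain} for a finger at touchX relative to a target at targetX. */
        static double[] gainsFor(double touchX, double targetX, double screenWidth) {
            // Normalize the offset to [-1, 1]: negative means the target is to the left.
            double offset = Math.max(-1.0, Math.min(1.0, (targetX - touchX) / screenWidth));
            // Equal-power pan: angle 0 = hard left, PI/2 = hard right.
            double angle = (offset + 1.0) * Math.PI / 4.0;
            return new double[] { Math.cos(angle), Math.sin(angle) };
        }

        public static void main(String[] args) {
            double[] g = gainsFor(100.0, 700.0, 800.0);
            // The target is far to the right of the finger, so the right channel dominates.
            System.out.printf("left=%.2f right=%.2f%n", g[0], g[1]);
        }
    }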

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user so as to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed; we show that it is possible to use a software-only method to estimate user emotion.
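    To give a flavour of the fuzzy-logic approach, the sketch below maps a single normalized in-game signal to an emotion estimate via triangular membership functions and weighted-average defuzzification; the input variable, membership shapes, and weights are illustrative assumptions, not the FLAME-based model used in this work.

    // Minimal sketch of a fuzzy-logic style emotion estimate. The input signal
    // (recent damage taken, normalized to [0, 1]), the triangular membership
    // functions, and the rule weights are illustrative assumptions, not the
    // FLAME-based model described in the paper.
    public class FuzzyEmotionSketch {

        /** Triangular membership function with peak at b and support (a, c). */
        static double tri(double x, double a, double b, double c) {
            if (x <= a || x >= c) return 0.0;
            return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
        }

        /** Maps a normalized "recent damage taken" signal to a frustration level in [0, 1]. */
        static double frustration(double damage) {
            double low  = tri(damage, -0.5, 0.0, 0.5);   // little damage -> relaxed
            double med  = tri(damage,  0.0, 0.5, 1.0);   // moderate damage -> challenged
            double high = tri(damage,  0.5, 1.0, 1.5);   // heavy damage -> frustrated
            // Weighted-average defuzzification over the three rules.
            return (low * 0.1 + med * 0.5 + high * 0.9) / (low + med + high);
        }

        public static void main(String[] args) {
            for (double d : new double[] { 0.1, 0.5, 0.9 }) {
                System.out.printf("damage=%.1f -> frustration=%.2f%n", d, frustration(d));
            }
        }
    }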