287 research outputs found

    Towards Sustainable Research Data Management in Human-Computer Interaction

    We discuss important aspects of Research Data Management (RDM) in HCI research, aiming at better publication processes and higher reuse of HCI research results. Various context elements of RDM for HCI are discussed, including examples of existing and emerging RDM infrastructures. We briefly review existing approaches and identify additional aspects that need to be addressed in order to apply the FAIR principles fully, which, besides making data findable and accessible, also require interoperability and reusability. We also briefly discuss the kinds of research data that play a role here and propose building on existing work and involving the HCI scientific community to improve current practices.
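
    Though the abstract stays at the level of principles, a concrete picture may help: the sketch below shows a minimal machine-readable metadata record of the kind FAIR-oriented RDM infrastructures index. The field names loosely follow DataCite conventions and are illustrative assumptions, not a schema proposed by the authors.

        # Minimal sketch of a machine-readable metadata record for an HCI study
        # dataset. Field names loosely follow DataCite conventions; they are
        # illustrative assumptions, not a schema mandated by the paper.
        import json

        record = {
            "identifier": "doi:10.0000/example",          # findable: persistent ID (placeholder DOI)
            "title": "Questionnaire responses, study XYZ",
            "creators": ["Doe, Jane"],
            "license": "CC-BY-4.0",                       # reusable: explicit usage terms
            "format": "text/csv",                         # interoperable: open, documented format
            "relatedIdentifiers": ["doi:10.0000/paper"],  # links the data to the publication
        }

        print(json.dumps(record, indent=2))  # serialized form a repository could index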

    Tele-media-art: web-based inclusive teaching of body expression

    International conference held in Olhão, Algarve, 26-28 April 2018. The Tele-Media-Art project aims to improve online distance learning and artistic teaching, applied in two test scenarios, the doctorate in digital art-media and the lifelong learning course "The Experience of Diversity", by exploiting multimodal telepresence facilities that encompass diversified visual, auditory, and sensory channels, as well as rich forms of gestural and body interaction. To this end, a telepresence system was developed and installed at Palácio Ceia in Lisbon, Portugal, headquarters of the Portuguese Open University, from which mixed-regime artistic teaching methodologies, face-to-face and online distance, that are inclusive of blind and partially sighted students can be delivered. This system has already been tested with a group of subjects, including blind people. Although positive results were achieved, further development and testing will be carried out in the future. This project was financed by the Calouste Gulbenkian Foundation under grant number 142793.

    The Graphical Access Challenge for People with Visual Impairments: Positions and Pathways Forward

    Graphical access is one of the most pressing challenges for individuals who are blind or visually impaired. This chapter discusses some of the factors underlying the graphics access challenge, reviews prior approaches to addressing this long-standing information access barrier, and describes some promising new solutions. We specifically focus on touchscreen-based smart devices, a relatively new class of information access technologies, which our group believes represent an exemplary model of user-centered, needs-based design. We highlight both the challenges and the vast potential of these technologies for alleviating the graphics accessibility gap and share the latest results in this line of research. We close with recommendations on ideological shifts in mindset about how we approach solving this vexing access problem, which will complement both technological and perceptual advancements that are rapidly being uncovered through a growing research community in this domain.
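
    The abstract does not spell out the interaction technique, but a common touchscreen-based approach in this literature is vibro-audio rendering: vibration and tone feedback whenever the finger rests on an inked part of the on-screen graphic. The following is a minimal sketch under that assumption; trigger_vibration and play_tone are hypothetical stand-ins for platform haptics and audio APIs.

        # Sketch of touchscreen vibro-audio rendering: emit haptic/audio feedback
        # when the finger is over an "inked" pixel of the graphic. The feedback
        # functions are stubs standing in for real platform APIs (assumption).
        from PIL import Image

        def trigger_vibration():
            print("bzz")                      # stub: a real app would pulse the vibrator

        def play_tone(pitch_hz):
            print(f"tone {pitch_hz} Hz")      # stub: a real app would synthesize audio

        def is_foreground(pixel_rgb):
            r, g, b = pixel_rgb
            return (r + g + b) / 3 < 128      # dark pixel => part of the graphic

        def on_touch_move(x, y, image):
            """Called by the touch framework for each finger position update."""
            if is_foreground(image.getpixel((x, y))[:3]):
                trigger_vibration()           # haptic cue: finger is on the figure
                play_tone(440)                # auditory cue; pitch could encode line type
            # silence over background pixels conveys empty space

        img = Image.new("RGB", (100, 100), "white")
        img.putpixel((50, 50), (0, 0, 0))     # one black "line" pixel for the demo
        on_touch_move(50, 50, img)            # -> bzz / tone 440 Hz
        on_touch_move(10, 10, img)            # -> nothing (background)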

    An empirical evaluation of a graphics creation technique for blind and visually impaired individuals

    The representation of pictorial data by people who are blind or visually impaired has gathered momentum in research and development; however, little research has focused on using a screen layout to give blind and visually impaired users the spatial orientation needed to create and reuse graphics. This article contributes an approach to navigating the screen and manipulating computer graphics and user-defined images. The technique described in this article supports features such as zooming, grouping, and drawing by invoking primitive and user-defined shapes, enabling blind people to engage in and experience drawing and art production on their own. The navigation technique gives an intuitive sense of autonomy through compass directions, is easy to learn, makes shapes efficient to manipulate via a simple drawing language, and reduces completion time through system support features. An empirical evaluation was conducted to validate the suitability of the SETUP09 technique and to assess the accuracy and efficiency of the proposed navigation and drawing techniques. The drawing experiment results confirmed high accuracy (88%) and efficiency among blind and visually impaired (BVI) users.
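
    The abstract does not reproduce the SETUP09 command set, but a compass-direction drawing language of the kind described might look like the sketch below. The MOVE/DRAW command names and their semantics are invented for illustration and are not the authors' syntax.

        # Illustrative sketch of a compass-direction drawing language: the cursor
        # moves in named compass directions and stamps primitive shapes. Command
        # names are invented for illustration; they are not the SETUP09 syntax.
        COMPASS = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0),
                   "NE": (1, -1), "NW": (-1, -1), "SE": (1, 1), "SW": (-1, 1)}

        def run(script, grid_step=10):
            x, y = 0, 0          # cursor starts at the screen origin
            shapes = []          # accumulated drawing (shape name + position)
            for line in script.strip().splitlines():
                cmd, *args = line.split()
                if cmd == "MOVE":               # e.g. "MOVE NE 3": 3 grid steps north-east
                    dx, dy = COMPASS[args[0]]
                    n = int(args[1])
                    x, y = x + dx * n * grid_step, y + dy * n * grid_step
                elif cmd == "DRAW":             # e.g. "DRAW circle": stamp a primitive here
                    shapes.append((args[0], x, y))
            return shapes

        print(run("""
        MOVE E 2
        DRAW circle
        MOVE SE 1
        DRAW square
        """))   # -> [('circle', 20, 0), ('square', 30, 10)]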

    Analyzing the Impact of Cognitive Load in Evaluating Gaze-based Typing

    Gaze-based virtual keyboards provide an effective interface for text entry by eye movements. The efficiency and usability of these keyboards have traditionally been evaluated with conventional text entry performance measures such as words per minute, keystrokes per character, backspace usage, etc. However, in comparison to the traditional text entry approaches, gaze-based typing involves natural eye movements that are highly correlated with human brain cognition. Employing eye gaze as an input could lead to excessive mental demand, and in this work we argue the need to include cognitive load as an eye typing evaluation measure. We evaluate three variations of gaze-based virtual keyboards, which implement variable designs in terms of word suggestion positioning. The conventional text entry metrics indicate no significant difference in the performance of the different keyboard designs. However, STFT (Short-time Fourier Transform) based analysis of EEG signals indicates variances in the mental workload of participants while interacting with these designs. Moreover, the EEG analysis provides insights into the user's cognition variation for different typing phases and intervals, which should be considered in order to improve eye typing usability.
    Comment: 6 pages, 4 figures, IEEE CBMS 201
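
    A minimal sketch of the kind of STFT-based workload analysis the abstract describes, using SciPy on a placeholder single-channel EEG signal; the sampling rate, band limits, and theta/alpha workload ratio are common conventions assumed here, not details taken from the paper.

        # Sketch of STFT-based workload analysis on one EEG channel: compute the
        # spectrogram, then track band power over time. Rising theta (4-8 Hz)
        # with falling alpha (8-13 Hz) is a common workload signature; the exact
        # channels and bands used in the paper are not reproduced here.
        import numpy as np
        from scipy.signal import stft

        fs = 256                                   # assumed sampling rate (Hz)
        eeg = np.random.randn(fs * 60)             # placeholder: 60 s of one channel

        f, t, Z = stft(eeg, fs=fs, nperseg=fs)     # 1-s windows -> 1 Hz resolution
        power = np.abs(Z) ** 2                     # power spectrogram, shape (freqs, times)

        def band_power(lo, hi):
            band = (f >= lo) & (f <= hi)
            return power[band].mean(axis=0)        # mean power in band, per time slice

        theta, alpha = band_power(4, 8), band_power(8, 13)
        workload_index = theta / alpha             # higher ratio ~ higher mental workload
        print(workload_index[:5])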

    Author Reflections on Creating Accessible Academic Papers

    ASL Citizen: A Community-Sourced Dataset for Advancing Isolated Sign Language Recognition

    Sign languages are used as a primary language by approximately 70 million D/deaf people world-wide. However, most communication technologies operate in spoken and written languages, creating inequities in access. To help tackle this problem, we release ASL Citizen, the first crowdsourced Isolated Sign Language Recognition (ISLR) dataset, collected with consent and containing 83,399 videos for 2,731 distinct signs filmed by 52 signers in a variety of environments. We propose that this dataset be used for sign language dictionary retrieval for American Sign Language (ASL), where a user demonstrates a sign to their webcam to retrieve matching signs from a dictionary. We show that training supervised machine learning classifiers with our dataset advances the state-of-the-art on metrics relevant for dictionary retrieval, achieving 63% accuracy and a recall-at-10 of 91%, evaluated entirely on videos of users who are not present in the training or validation sets. An accessible PDF of this article is available at the following link: https://aashakadesai.github.io/research/ASLCitizen_arxiv_updated.pd
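
    A sketch of how the reported retrieval metrics can be computed: for each query video, rank every sign in the vocabulary by classifier score, then check whether the true sign is ranked first (accuracy) or within the top ten (recall-at-10). The scores and labels below are synthetic placeholders.

        # Sketch of dictionary-retrieval metrics: given per-query scores over the
        # sign vocabulary, accuracy is recall@1, and recall@10 asks whether the
        # true sign appears among the top 10 ranked candidates. Data is synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        n_queries, vocab = 1000, 2731              # vocabulary size matches ASL Citizen
        scores = rng.random((n_queries, vocab))    # placeholder classifier scores
        truth = rng.integers(0, vocab, n_queries)  # placeholder ground-truth sign IDs

        def recall_at_k(scores, truth, k):
            topk = np.argsort(-scores, axis=1)[:, :k]     # highest-scoring k signs per query
            hits = (topk == truth[:, None]).any(axis=1)
            return hits.mean()

        print("accuracy  :", recall_at_k(scores, truth, 1))
        print("recall@10 :", recall_at_k(scores, truth, 10))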