2,655 research outputs found

    MINDtouch embodied ephemeral transference: Mobile media performance research

    Get PDF
    The aim of the author's media art research has been to uncover new understandings of the sensations of liveness and presence that may emerge in participatory networked performance using mobile phones and wearable physiological devices. To investigate these concepts in practice, a mobile media performance series called MINDtouch was created. The MINDtouch project proposed the mobile videophone as a new way to communicate non-verbally, visually and sensually across space. It explored notions of ephemeral transference, distance collaboration and participant-as-performer to study the presence and liveness that emerge from the use of wireless mobile technologies in real-time, mobile performance contexts. Through the participation of in-person and remote interactors creating mobile video-streamed mixes, the project interweaves and embodies a daisy chain of technologies through the network space. MINDtouch was conducted as practice-based Ph.D. research at the SMARTlab Digital Media Institute at the University of East London, under the direction of Professor Lizbeth Goodman and with sponsorship from BBC R&D. This article discusses the project research, recently completed for submission, in terms of the technical and aesthetic developments from 2008 to the present, as well as the final phase of staging the events from July 2009 to February 2010. It builds on an earlier article (Baker 2008) that focused on the outcomes of phase 1 of the research project and the initial developments in phase 2; the outcomes of phases 2 and 3 are discussed here.

    Developing and evaluating a gestural and tactile mobile interface to support user authentication

    Get PDF
    As awareness grows of the importance of protecting sensitive data stored on or accessed through a mobile device, a need has been identified to develop authentication schemes that better match the needs of users and are more resistant to observer attacks. This paper describes the design and evaluation of H4Plock (pronounced "Hap-lock"), a novel authentication mechanism to address the situation. To authenticate, the user enters up to four pre-selected on-screen gestures, informed by tactile prompts. The system has been designed so that the sequence of gestures varies on each authentication attempt, reducing the ability of a shoulder surfer to recreate entry. 94.1% of participants were able to authenticate successfully using H4Plock, with 73.3% successfully accessing the system after a gap of five days without rehearsal. Only 23.5% of participants were able to recreate passcodes in a video-based attack scenario where gestures were unique in design and entered at different locations around the interface.
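    The core idea the abstract describes, re-prompting the enrolled gestures in a fresh order on every attempt so that one observed login cannot simply be replayed, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the published H4Plock design: the gesture names, the challenge format, and the string-matching verification are all invented for the example.

```python
import hmac
import secrets

# Hypothetical enrolled gesture set; H4Plock supports up to four gestures.
GESTURES = ["swipe_up", "circle", "zigzag", "double_tap"]

def new_challenge(n):
    """Return a fresh random prompt order for the n enrolled gestures.
    In H4Plock the order would be conveyed to the user via tactile prompts."""
    order = list(range(n))
    secrets.SystemRandom().shuffle(order)  # unpredictable across attempts
    return order

def verify(enrolled, order, response):
    """Accept only if the response matches the enrolled gestures in the
    prompted order; compare_digest avoids leaking a match via timing."""
    expected = "|".join(enrolled[i] for i in order)
    return hmac.compare_digest(expected, "|".join(response))

order = new_challenge(len(GESTURES))       # varies on every attempt
attempt = [GESTURES[i] for i in order]     # a correct user response
print(verify(GESTURES, order, attempt))    # True
```

    Because the prompted order changes each time, a recording of one successful entry does not reveal which gesture answers which prompt on the next attempt, which is consistent with the low 23.5% video-attack success rate reported above.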

    Pedestrian Detection with Wearable Cameras for the Blind: A Two-way Perspective

    Full text link
    Blind people have limited access to information about their surroundings, which is important for ensuring one's safety, managing social interactions, and identifying approaching pedestrians. With advances in computer vision, wearable cameras can provide equitable access to such information. However, the always-on nature of these assistive technologies poses privacy concerns for parties that may get recorded. We explore this tension from both perspectives, those of sighted passersby and blind users, taking into account camera visibility, in-person versus remote experience, and extracted visual information. We conduct two studies: an online survey with MTurkers (N=206) and an in-person experience study between pairs of blind (N=10) and sighted (N=40) participants, where blind participants wear a working prototype for pedestrian detection and pass by sighted participants. Our results suggest that the perspectives of both users and bystanders, together with the factors mentioned above, need to be carefully considered to mitigate potential social tensions. Comment: The 2020 ACM CHI Conference on Human Factors in Computing Systems (CHI 2020).
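    For readers unfamiliar with the detection side of such a prototype, a generic wearable-camera pedestrian detection loop can be sketched with OpenCV's stock HOG-plus-linear-SVM people detector. The paper does not describe its prototype at this level of detail, so this is purely an illustration of the technique, not the authors' implementation; the camera device index and frame size are assumptions.

```python
import cv2  # pip install opencv-python

# OpenCV's built-in HOG descriptor with its default people detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # assume the wearable camera appears as device 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 480))  # smaller frames keep latency low
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in rects:
        # In an assistive device, this is where an audio or haptic cue
        # would announce the approaching pedestrian to the blind user.
        print(f"pedestrian at x={x}, y={y}, size={w}x{h}")
cap.release()
```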

    Tele-media-art: web-based inclusive teaching of body expression

    Get PDF
    International conference held in Olhão, Algarve, 26-28 April 2018. The Tele-Media-Art project aims to improve the online distance-learning and artistic teaching process in two test scenarios, the doctorate in digital art-media and the lifelong learning course "The Experience of Diversity", by exploiting multimodal telepresence facilities encompassing diverse visual, auditory and sensory channels, as well as rich forms of gestural/body interaction. To this end, a telepresence system was developed and installed at Palácio Ceia in Lisbon, Portugal, headquarters of the Portuguese Open University, from which methodologies of artistic teaching in a mixed regime, face-to-face and online distance, that are inclusive of blind and partially sighted students can be delivered. The system has already been tested with a group of subjects, including blind people. Although positive results were achieved, further development and testing will be carried out in future work. This project was financed by the Calouste Gulbenkian Foundation under grant number 142793.

    A System to Generate SignWriting for Video Tracks Enhancing Accessibility of Deaf People

    Get PDF
    The amount of video content on the Internet has grown considerably in recent years. Despite the efforts of different organizations and governments to increase the accessibility of websites, most multimedia content on the Internet is not accessible. This paper describes a system that contributes to making multimedia content on the Web more accessible by automatically translating oral-language subtitles into SignWriting, a way of writing sign language. The system extends the functionality of a general web platform that can provide accessible web content for different needs. This platform has a core component that automatically converts any web page into a web page compliant with level AA of the WAI guidelines. Around this core component, different adapters complete the conversion according to the needs of specific users. One adapter is the Deaf People Accessibility Adapter, which provides accessible web content for the Deaf, based on SignWriting. The functionality of this adapter has been extended with the video subtitle translator system. A first prototype of this system has been tested through different methods, including usability and accessibility tests, and the results show that this tool can enhance the accessibility of video content available on the Web for Deaf people.
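    The core-plus-adapters architecture described above can be sketched as a simple pipeline: a shared pass that brings a page to WAI level AA, followed by need-specific adapter passes. The paper does not publish an API, so every class and function name below is invented for illustration, and the two translation steps are placeholders.

```python
from abc import ABC, abstractmethod

def convert_to_wai_aa(page: str) -> str:
    """Placeholder for the core component that rewrites any web page
    to comply with level AA of the WAI guidelines."""
    return page

def translate_subtitles_to_signwriting(page: str) -> str:
    """Placeholder for the subtitle-to-SignWriting translator that
    extends the Deaf People Accessibility Adapter."""
    return page

class Adapter(ABC):
    @abstractmethod
    def adapt(self, page: str) -> str: ...

class DeafPeopleAccessibilityAdapter(Adapter):
    def adapt(self, page: str) -> str:
        return translate_subtitles_to_signwriting(page)

def make_accessible(page: str, adapters: list[Adapter]) -> str:
    page = convert_to_wai_aa(page)   # core pass shared by all users
    for adapter in adapters:         # need-specific passes
        page = adapter.adapt(page)
    return page

print(make_accessible("<html>...</html>", [DeafPeopleAccessibilityAdapter()]))
```

    The design keeps the expensive, general conversion in one place while letting each user group plug in only the adaptation it needs, which is how the abstract describes the subtitle translator being added without changing the core.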

    Parental mediation, YouTube’s networked public, and the baby-iPad encounter:mobilizing digital dexterity

    Get PDF
    This study collected a sample of YouTube videos in which parents recorded their young children using mobile touchscreen devices. Focusing on the more frequently viewed and highly discussed videos, the paper analyzes the ways in which babies' 'digital dexterity' is coded and understood in terms of contested notions of 'naturalness', and how the display of these capabilities is produced for a networked public. This reading of the 'baby-iPad encounter' helps expand existing scholarly concepts such as parental mediation and technology domestication. Drawing on several theoretical frameworks, the paper seeks to go beyond concerns of mobile devices and immobile children by analyzing children's digital dexterity not just as a kind of mobility, but also as a set of reciprocal mobilizations that work across domestic, virtual and publicly networked spaces.

    Discoverable Free Space Gesture Sets for Walk-Up-and-Use Interactions

    Get PDF
    Advances in technology are fueling a movement toward ubiquity for beyond-the-desktop systems. Novel interaction modalities, such as free-space or full-body gestures, are becoming more common, as demonstrated by the rise of systems such as the Microsoft Kinect. However, much of the interaction design research for such systems is still focused on desktop and touch interactions. Current thinking in free-space gestures is limited in capability and imagination, and most gesture studies have not attempted to identify gestures appropriate for public walk-up-and-use applications. A walk-up-and-use display must be discoverable, so that first-time users can operate the system without any training, as well as flexible and not fatiguing, especially in the case of longer-term interactions. One mechanism for defining gesture sets for walk-up-and-use interactions is a participatory design method called gesture elicitation. This method has been used to identify several user-generated gesture sets and has shown that user-generated sets are preferred by users over those defined by system designers. However, for these studies to be successfully implemented in walk-up-and-use applications, there is a need to understand which components of these gestures are semantically meaningful (i.e. do users distinguish between using their left and right hand, or are those semantically the same thing?). Thus, defining a standardized gesture vocabulary for coding, characterizing, and evaluating gestures is critical. This dissertation presents three gesture elicitation studies for walk-up-and-use displays that employ a novel gesture elicitation methodology, alongside a novel coding scheme for gesture elicitation data that focuses on the features most important to users' mental models. Generalizable design principles, based on the three studies, are then derived and presented (e.g. changes in speed are meaningful for scroll actions in walk-up-and-use displays but not for paging or selection). The major contributions of this work are: (1) an elicitation methodology that aids users in overcoming biases from existing interaction modalities; (2) a better understanding of the gestural features that matter, i.e. those that capture the intent of the gestures; and (3) generalizable design principles for walk-up-and-use public displays.
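    To make the coding-scheme idea concrete: gesture elicitation studies typically group identically coded proposals per referent and compute an agreement score over those groups. The scoring function below is the standard measure from the elicitation literature; the feature coding shown (gesture shape plus speed, with handedness deliberately omitted) is an invented example in the spirit of the dissertation's findings, not its actual vocabulary.

```python
from collections import Counter

def agreement(proposals):
    """Agreement for one referent: the sum, over groups of identically
    coded proposals, of (|group| / |all proposals|) squared."""
    n = len(proposals)
    return sum((count / n) ** 2 for count in Counter(proposals).values())

# Proposals for a hypothetical 'scroll' referent, coded only by features
# users treat as meaningful (speed matters for scrolling; which hand was
# used is ignored, since users may treat left and right as equivalent).
scroll = [("swipe", "fast"), ("swipe", "fast"),
          ("swipe", "slow"), ("wave", "fast")]
print(f"scroll agreement: {agreement(scroll):.2f}")  # 0.38
```

    Note how the choice of coded features drives the score: collapsing fast and slow swipes into one group would raise agreement, so deciding which features are semantically meaningful, the question the dissertation addresses, directly determines which gesture wins for each referent.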