
    Presenting in Virtual Worlds: An Architecture for a 3D Anthropomorphic Presenter

    Multiparty-interaction technology is changing entertainment, education, and training. Deployed examples of such technology include embodied agents and robots that act as museum guides, news presenters, teachers, receptionists, or sales agents for insurance, homes, or tickets. In all these cases, the embodied agent needs to explain and describe. This article describes the design of a 3D virtual presenter that uses different output channels (including speech and animation of posture, pointing, and involuntary movements) to present and explain. The behavior is scripted and synchronized with a 2D display containing associated text and regions (slides, drawings, and paintings) at which the presenter can point. This article is part of a special issue on interactive entertainment.

    Classifying public display systems: an input/output channel perspective

    Public display screens are relatively recent additions to our world, and while they may be as simple as a large screen with minimal input/output features, more recent developments have introduced much richer interaction possibilities supporting a variety of interaction styles. In this paper we propose a framework for classifying public display systems with a view to better understanding how they differ in terms of their interaction channels and how future installations are likely to evolve. This framework is explored through 15 existing public display systems that use mobile phones for interaction in the display space.

    Pervasive Displays Research: What's Next?

    Reports on the 7th ACM International Symposium on Pervasive Displays, which took place June 6-8 in Munich, Germany.

    Seamless and Secure VR: Adapting and Evaluating Established Authentication Systems for Virtual Reality

    Virtual reality (VR) headsets are enabling a wide range of new opportunities for the user. For example, in the near future users may be able to visit virtual shopping malls and virtually join international conferences. These and many other scenarios pose new questions with regard to privacy and security, in particular the authentication of users within the virtual environment. As a first step towards seamless VR authentication, this paper investigates the direct transfer of well-established concepts (PIN, Android unlock patterns) into VR. In a pilot study (N = 5) and a lab study (N = 25), we adapted existing mechanisms and evaluated their usability and security for VR. The results indicate that both PINs and patterns are well suited for authentication in VR. We found that the usability of both methods matched the performance known from the physical world. In addition, the private visual channel makes authentication harder to observe, indicating that authentication in VR using traditional concepts already achieves a good balance in the trade-off between usability and security. The paper contributes to a better understanding of authentication within VR environments by providing the first investigation of established authentication methods within VR, and lays the groundwork for the design of future authentication schemes created specifically for VR environments.

    Touch or Touchless? Evaluating Usability of Interactive Displays for Persons with Autistic Spectrum Disorders

    Interactive public displays have been explored in several previous studies as a means of engaging interaction. In this context, applications have focused on supporting learning or entertainment activities, specifically designed for people with special needs. This includes, for example, those with Autism Spectrum Disorders (ASD). In this paper, we present a comparison study aimed at understanding the difference in terms of usability, effectiveness, and enjoyment perceived by users with ASD between two interaction modalities usually supported by interactive displays: touch-based and touchless gestural interaction. We present the outcomes of a within-subject setup involving 8 ASD users (age 18-25 y.o., IQ 40-60), based on the use of two similar user interfaces differing only in the interaction modality. We show that touch interaction provides a higher usability level and results in more effective actions, although touchless interaction performs better in terms of enjoyment and engagement.

    Direct and gestural interaction with relief: A 2.5D shape display

    Actuated shape output provides novel opportunities for experiencing, creating, and manipulating 3D content in the physical world. While various shape displays have been proposed, a common approach utilizes an array of linear actuators to form 2.5D surfaces. By identifying a set of common interactions for viewing and manipulating content on shape displays, we argue why input modalities beyond direct touch are required. The combination of freehand gestures and direct touch provides additional degrees of freedom and resolves input ambiguities, while keeping the locus of interaction on the shape output. To demonstrate the proposed combination of input modalities and explore applications for 2.5D shape displays, two example scenarios are implemented on a prototype system.

    Gesturing on the steering wheel, a comparison with speech and touch interaction modalities

    This paper compares an emergent interaction modality for the In-Vehicle Infotainment System (IVIS), i.e., gesturing on the steering wheel, with two more popular modalities in modern cars: touch and speech. We conducted a between-subjects experiment with 20 participants per modality to assess interaction performance with the IVIS and the impact on driving performance. Moreover, we compared the three modalities in terms of usability, subjective workload, and emotional response. The results showed no statistically significant differences between the three interaction modalities in the various indicators of driving task performance, while significant differences were found in measures of IVIS interaction performance: users performed fewer interactions to complete the secondary tasks with the speech modality, while, on average, a lower task completion time was registered with the touch modality. The three interfaces were comparable in all the subjective metrics.