
    Tactons: structured tactile messages for non-visual information display

    Tactile displays are now becoming available in a form that can be easily used in a user interface. This paper describes a new form of tactile output. Tactons, or tactile icons, are structured, abstract messages that can be used to communicate information non-visually. A range of different parameters can be used for Tacton construction, including the frequency, amplitude and duration of a tactile pulse, plus higher-level parameters such as rhythm and body location. Tactons have the potential to improve interaction in a range of different areas, particularly where the visual display is overloaded, limited in size or not available, such as interfaces for blind people or in mobile and wearable devices. This paper describes Tactons, the parameters used to construct them and some possible ways to design them. Examples of where Tactons might prove useful in user interfaces are given.
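
    The construction parameters listed above map naturally onto a small data structure. The Python sketch below is purely illustrative (the paper defines no API; all names here are invented) and shows how the frequency, amplitude and duration of pulses, plus rhythm and body location, might parameterize a Tacton:

```python
# Hypothetical Tacton representation; all names are invented for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class Pulse:
    frequency_hz: float   # vibration frequency of the pulse
    amplitude: float      # normalized intensity, 0.0-1.0
    duration_ms: int      # how long the pulse lasts

@dataclass
class Tacton:
    """A structured tactile message built from a rhythmic pulse sequence."""
    name: str
    rhythm: List[Pulse]   # the temporal pattern encodes one dimension
    body_location: str    # where it is played encodes another dimension

# Two Tactons distinguished by rhythm and location.
incoming_call = Tacton(
    name="incoming_call",
    rhythm=[Pulse(250.0, 1.0, 200), Pulse(250.0, 1.0, 200)],
    body_location="wrist",
)
low_battery = Tacton(
    name="low_battery",
    rhythm=[Pulse(100.0, 0.5, 600)],
    body_location="back",
)
```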

    User-Defined Gestural Interaction: a Study on Gesture Memorization

    In this paper we study the memorization of user-created gestures for 3D user interfaces (3DUI). Mainstream applications mostly use standardized gestures for interaction with simple content. This work is motivated by two application cases for which a standardized approach is not possible and user-specific or dedicated interfaces are therefore needed. The first is applications for people with limited sensory-motor abilities, for whom generic interaction methods may not be suitable. The second is creative arts applications, for which gestural freedom is part of the creative process. In this work, users are asked to create gestures for a set of tasks in a dedicated phase prior to using the system. We propose a user study to explore the question of gesture memorization. Gestures are recorded and recognized with a Hidden Markov Model. Results show that it seems difficult to recall more than two abstract gestures. Affordances strongly improve memorization, whereas the use of colocalization has no significant effect.
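
    As a concrete illustration of the recognition pipeline, the sketch below trains one Hidden Markov Model per user-created gesture and classifies a new gesture by maximum log-likelihood. The paper reports using HMMs, but not this library or these parameters; hmmlearn, the Gaussian emission model and the component count are assumptions:

```python
# Hypothetical per-gesture HMM recognition sketch (hmmlearn, Gaussian
# emissions and n_components=5 are assumptions, not the paper's setup).
import numpy as np
from hmmlearn import hmm

def train_gesture_models(training_data):
    """training_data: {gesture_name: list of (T_i, n_features) arrays}."""
    models = {}
    for name, sequences in training_data.items():
        X = np.concatenate(sequences)           # stack all sequences
        lengths = [len(s) for s in sequences]   # per-sequence lengths
        m = hmm.GaussianHMM(n_components=5, covariance_type="diag",
                            n_iter=100)
        m.fit(X, lengths)
        models[name] = m
    return models

def recognize(models, sequence):
    """Return the gesture whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(sequence))
```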

    Enabling collaboration in virtual reality navigators

    In this paper we characterize a feature superset for Collaborative Virtual Reality Environments (CVRE) and derive a component framework to transform stand-alone VR navigators into full-fledged multithreaded collaborative environments. The contribution of our approach lies in a cost-effective and extensible technique for loading software components into separate POSIX threads for rendering, user interaction and network communications, and in adding a top layer for managing session collaboration. The framework recasts a VR navigator under a distributed peer-to-peer topology for scene and object sharing, using callback hooks for broadcasting remote events and multi-camera perspective sharing with avatar interaction. We validate the framework by applying it to our own ALICE VR Navigator. Experimental results show that our approach performs well in the collaborative inspection of complex models.
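
    The thread separation described above can be illustrated with a minimal sketch. Python's threading module stands in for POSIX threads here, and the loop bodies and event bus are invented placeholders; the framework's real component interfaces are not reproduced:

```python
# Illustrative sketch of one thread each for rendering, interaction and
# networking; all components here are invented stand-ins.
import queue
import threading
import time

event_bus = queue.Queue()  # events exchanged between components

def render_loop(stop):
    while not stop.is_set():
        time.sleep(0.016)  # draw the shared scene at ~60 Hz

def interaction_loop(stop):
    while not stop.is_set():
        event_bus.put("local-input-event")  # poll devices, publish events
        time.sleep(0.01)

def network_loop(stop):
    while not stop.is_set():
        try:
            event = event_bus.get(timeout=0.1)  # broadcast to remote peers
        except queue.Empty:
            continue

stop = threading.Event()
threads = [threading.Thread(target=f, args=(stop,), daemon=True)
           for f in (render_loop, interaction_loop, network_loop)]
for t in threads:
    t.start()
time.sleep(0.1)  # let the components run briefly
stop.set()
```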

    Interactive form creation: exploring the creation and manipulation of free form through the use of interactive multiple input interface

    Most current CAD systems support only the two most common input devices: a mouse and a keyboard, which limits the degree of interaction a user can have with the system. However, it is not uncommon for users to work together on the same computer during a collaborative task. Besides that, people tend to use both hands to manipulate 3D objects: one hand is used to orient the object while the other hand is used to perform some operation on it. The same approach could be applied to computer modelling in the conceptual phase of the design process. A designer can rotate and position an object with one hand, and manipulate its shape (deforming it) with the other. Accordingly, the 3D object can be easily and intuitively changed through interactive manipulation with both hands.

    The research investigates the manipulation and creation of free-form geometries through the use of interactive interfaces with multiple input devices. First, the creation of the 3D model is discussed and several different types of models are illustrated. Furthermore, different tools that allow the user to control the 3D model interactively are presented. Three experiments were conducted using different interactive interfaces; two bi-manual techniques were compared with the conventional one-handed approach. Finally, it is demonstrated that the use of new and multiple input devices can offer many opportunities for form creation. The problem is that few, if any, systems make it easy for the user or the programmer to use new input devices.
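
    The bi-manual division of labor described above (one hand orients, the other deforms) can be sketched as two independent event handlers. Everything in this Python sketch, including the Model and HandEvent types, is a hypothetical stand-in; no real device API is assumed:

```python
# Hypothetical bi-manual mapping: non-dominant hand orients the model,
# dominant hand deforms the free-form surface. All types are invented.
from dataclasses import dataclass, field

@dataclass
class Model:
    rotation: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
    control_points: dict = field(default_factory=dict)

@dataclass
class HandEvent:
    delta: tuple = (0.0, 0.0, 0.0)      # rotation increment (degrees)
    point_id: int = 0                   # control point being grabbed
    position: tuple = (0.0, 0.0, 0.0)   # new control-point position

def on_nondominant_hand(event, model):
    """Orient: accumulate rotation from the device held in one hand."""
    model.rotation = [r + d for r, d in zip(model.rotation, event.delta)]

def on_dominant_hand(event, model):
    """Deform: move a control point of the free-form surface."""
    model.control_points[event.point_id] = event.position

model = Model()
on_nondominant_hand(HandEvent(delta=(0.0, 5.0, 0.0)), model)
on_dominant_hand(HandEvent(point_id=3, position=(0.1, 0.2, 0.0)), model)
```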

    Enhancing Perception and Immersion in Pre-Captured Environments through Learning-Based Eye Height Adaptation

    Pre-captured immersive environments using omnidirectional cameras support a wide range of virtual reality applications. Previous research has shown that manipulating the eye height in egocentric virtual environments can significantly affect distance perception and immersion. However, the influence of eye height in pre-captured real environments has received less attention because of the difficulty of altering the perspective after the capture process is finished. To explore this influence, we first conduct a pilot study that captures real environments with multiple eye heights and asks participants to judge egocentric distances and immersion. If a significant influence is confirmed, an effective image-based approach to adapt pre-captured real-world environments to the user's eye height would be desirable. Motivated by the study, we propose a learning-based approach for synthesizing novel views for omnidirectional images with altered eye heights. This approach employs a multitask architecture that learns depth and semantic segmentation in two formats and generates high-quality depth and semantic segmentation to facilitate the inpainting stage. With the improved omnidirectional-aware layered depth image, our approach synthesizes natural and realistic visuals for eye height adaptation. Quantitative and qualitative evaluation shows favorable results against state-of-the-art methods, and an extensive user study verifies improved perception and immersion for pre-captured real-world environments.
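
    As a rough illustration of the multitask idea, the PyTorch sketch below shares one encoder between a depth head and a semantic segmentation head and trains them with a joint loss. The paper's actual architecture, losses and layer sizes are not reproduced; every choice here is an invented placeholder:

```python
# Generic multitask sketch: shared encoder, per-pixel depth and segmentation
# heads, joint loss. Layer sizes and losses are invented placeholders.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, n_classes=13):
        super().__init__()
        self.encoder = nn.Sequential(                    # shared features
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.depth_head = nn.Conv2d(64, 1, 1)            # per-pixel depth
        self.seg_head = nn.Conv2d(64, n_classes, 1)      # per-pixel labels

    def forward(self, x):
        f = self.encoder(x)
        return self.depth_head(f), self.seg_head(f)

net = MultiTaskNet()
rgb = torch.randn(1, 3, 64, 128)          # dummy omnidirectional crop
depth_gt = torch.rand(1, 1, 64, 128)
seg_gt = torch.randint(0, 13, (1, 64, 128))
pred_depth, pred_seg = net(rgb)
loss = nn.functional.l1_loss(pred_depth, depth_gt) + \
       nn.functional.cross_entropy(pred_seg, seg_gt)
loss.backward()                           # train both tasks jointly
```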

    Could people with stereo-deficiencies have a rich 3D experience using HMDs?

    People with stereo-deficiencies usually have problems perceiving depth with stereo devices. This paper presents a study involving participants who did not have stereopsis and participants who did. The two groups performed a maze navigation task in a 3D environment under two conditions: using an HMD and using a large stereo screen. Fifty-nine adults participated in the study. The results showed no statistically significant differences in task performance between the participants with stereopsis and those without. We found statistically significant differences between the two conditions in favor of the HMD for both groups of participants. The participants who did not have stereopsis and could not perceive 3D when looking at the Lang 1 Stereotest did have the illusion of depth perception when using the HMD. The study suggests that for people without stereopsis, head tracking largely drives the 3D experience.
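
    For illustration, the two comparisons reported above (between groups, and within subjects across conditions) could be analyzed roughly as follows. The paper's actual statistical tests are not stated here; the use of t-tests and the invented score arrays are assumptions:

```python
# Hypothetical analysis sketch for the two reported comparisons; the tests
# and the random score arrays are illustrative assumptions only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Task performance, one invented score per participant.
stereo_group = rng.random(30)
no_stereo_group = rng.random(29)

# Between groups: stereopsis vs. no stereopsis (independent samples).
t_groups, p_groups = stats.ttest_ind(stereo_group, no_stereo_group)

# Within subjects: HMD vs. large stereo screen (paired samples).
hmd_scores = rng.random(59)
screen_scores = rng.random(59)
t_cond, p_cond = stats.ttest_rel(hmd_scores, screen_scores)
```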
