    Smoothness perception: investigation of beat rate effect on frame rate perception

    Despite the complexity of the Human Visual System (HVS), research over the last few decades has highlighted a number of its limitations. These limitations can be exploited in computer graphics to significantly reduce computational cost, and thus the required rendering time, without a viewer perceiving any difference in resultant image quality. Furthermore, cross-modal interaction between different modalities, such as the influence of audio on visual perception, has been shown to be significant in both psychology and computer graphics. In this paper we investigate the effect of beat rate on temporal visual perception, i.e. frame rate perception. For the visual quality and perception evaluation, a series of psychophysical experiments was conducted and the data analysed. The results indicate that beat rates in some cases do affect temporal visual perception and that certain beat rates can be used to reduce the amount of rendering required to achieve perceptually high quality. This is another step towards a comprehensive understanding of auditory-visual cross-modal interaction and could potentially be used in high-fidelity interactive multi-sensory virtual environments.

    First Person Perspective of Seated Participants Over a Walking Virtual Body Leads to Illusory Agency Over the Walking

    Agency, the attribution of authorship to an action of our body, requires the intention to carry out the action and, subsequently, a match between its predicted and actual sensory consequences. However, illusory agency can be generated through priming of the action together with perception of bodily action, even when there has been no actual corresponding action. Here we show that participants can have the illusion of agency over the walking of a virtual body even though in reality they are seated and only allowed head movements. The experiment (n = 28) had two factors: Perspective (1PP or 3PP) and Head Sway (Sway or NoSway). Participants in the 1PP condition saw a life-sized virtual body spatially coincident with their own from a first-person perspective; in the 3PP condition they saw the virtual body from a third-person perspective. In the Sway condition the viewpoint included a walking animation, but not in NoSway. The results show strong illusions of body ownership, agency and walking in the 1PP compared to the 3PP condition, and an enhanced level of arousal while the walking was up a virtual hill. Sway reduced the level of agency. We conclude with a discussion of the results in the light of current theories of agency.

    NaviFields: relevance fields for adaptive VR navigation

    Virtual Reality allows users to explore virtual environments naturally, by moving their head and body. However, the size of the environments they can explore is limited by real-world constraints, such as the tracking technology or the physical space available. Existing techniques that remove these limitations often break the metaphor of natural navigation in VR (e.g. steering techniques), involve control commands (e.g. teleporting) or hinder precise navigation (e.g. scaling the user's displacements). This paper proposes NaviFields, which quantify the requirements for precise navigation at each point of the environment, allowing natural navigation within relevant areas while scaling users' displacements when travelling across non-relevant spaces. This expands the size of the navigable space, retains the natural navigation metaphor and still allows areas with precise control of the virtual head. We present a formal description of our NaviFields technique, which we compared against two alternative solutions (homogeneous scaling and natural navigation). Our results demonstrate the ability to cover larger spaces, introduce minimal disruption when travelling across bigger distances and significantly improve precise control of the viewpoint inside relevant areas.
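
    The core mechanism described above, scaling each real-world displacement by a gain derived from a per-point relevance field, can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: the Gaussian falloff around hand-placed hotspots, the hotspot representation and the maximum gain value are placeholders, not the field definition or calibration used in the paper.

    import numpy as np

    def navifield_relevance(pos, hotspots):
        # Relevance in [0, 1] at a virtual position; Gaussian falloff around
        # hand-placed hotspots is an illustrative assumption.
        values = [np.exp(-np.sum((pos - c) ** 2) / (2.0 * r ** 2)) for c, r in hotspots]
        return max(values) if values else 0.0

    def scaled_displacement(real_step, pos, hotspots, max_gain=8.0):
        # Gain stays near 1 inside relevant areas (natural, precise navigation)
        # and grows towards max_gain in irrelevant space, expanding the
        # navigable area without breaking the natural-walking metaphor.
        relevance = navifield_relevance(pos, hotspots)
        gain = 1.0 + (1.0 - relevance) * (max_gain - 1.0)
        return real_step * gain

    # One relevant area of roughly 2 m radius around the origin.
    hotspots = [(np.array([0.0, 0.0, 0.0]), 2.0)]
    step = np.array([0.1, 0.0, 0.0])
    print(scaled_displacement(step, np.array([0.0, 0.0, 0.0]), hotspots))   # gain close to 1
    print(scaled_displacement(step, np.array([20.0, 0.0, 0.0]), hotspots))  # gain close to max_gain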

    Shake-Your-Head: Revisiting Walking-In-Place for Desktop Virtual Reality

    The Walking-In-Place interaction technique was introduced to navigate infinitely in 3D virtual worlds by walking in place in the real world. The technique was initially developed for users standing in immersive setups and was built upon sophisticated visual displays and tracking equipment. In this paper, we propose to revisit the whole pipeline of the Walking-In-Place technique to match a larger set of configurations and apply it notably to the context of desktop Virtual Reality. With our novel "Shake-Your-Head" technique, the user can remain seated and use small screens and standard input devices, such as a basic webcam, for tracking. The locomotion simulation can compute various motions such as turning, jumping and crawling, using as sole input the head movements of the user. We also introduce the use of additional visual feedback based on camera motions to enhance the walking sensations. An experiment was conducted to compare our technique with classical input devices used for navigating in desktop VR. Interestingly, the results showed that our technique could even allow faster navigation when sitting, after a short learning period. Our technique was also perceived as more fun and as increasing presence, and was generally more appreciated for VR navigation.
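
    A minimal sketch of such a head-only locomotion mapping is given below: vertical head oscillation drives forward speed, lateral head offset steers, and clearly lowered or raised head positions trigger crawling or jumping. All thresholds and gains are assumptions for illustration; the paper's actual locomotion simulation and camera-motion feedback are not reproduced here.

    class HeadDrivenLocomotion:
        def __init__(self, neutral_height):
            self.neutral = neutral_height   # head height at rest (metres)
            self.prev_y = neutral_height

        def update(self, head_x, head_y, dt):
            # Vertical head oscillation ("bobbing") drives forward speed.
            bob_speed = abs(head_y - self.prev_y) / dt
            forward_speed = min(2.0 * bob_speed, 1.5)   # capped at 1.5 m/s

            # Lateral head offset (metres from screen centre) steers the turn.
            turn_rate = 0.8 * head_x                    # rad/s

            # A clearly lowered head triggers crawling, a raised head a jump.
            crawling = head_y < self.neutral - 0.25
            jumping = head_y > self.neutral + 0.20

            self.prev_y = head_y
            return forward_speed, turn_rate, crawling, jumping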

    Simulated Viewpoint Jitter Shakes Sensory Conflict Accounts of Vection

    Camera Motions Improve the Sensation of Walking in Virtual Environments

    Computational interaction techniques for 3D selection, manipulation and navigation in immersive VR

    3D interaction provides a natural form of human-computer interaction (HCI). Many techniques involving diverse sets of hardware and software components have been proposed, which has generated an explosion of Interaction Techniques (ITes), Interactive Tasks (ITas) and input devices, thus increasing the heterogeneity of tools in 3D User Interfaces (3DUIs). Moreover, most of those techniques are based on general formulations that fail to fully exploit human capabilities for interaction. This is because while 3D interaction enables naturalness, it also produces complexity and limitations when using 3DUIs. In this thesis, we aim to generate approaches that better exploit the potential of human capabilities for interaction by combining human factors, mathematical formalizations and computational methods. Our approach focuses on exploring the close coupling between specific ITes and ITas while addressing common issues of 3D interaction. We specifically focus on the stages of interaction within Basic Interaction Tasks (BITas), i.e., data input, manipulation, navigation and selection. Common limitations of these tasks are: (1) the complexity of mapping generation for input devices, (2) fatigue in mid-air object manipulation, (3) space constraints in VR navigation; and (4) low accuracy in 3D mid-air selection. Along with two chapters of introduction and background, this thesis presents five main works. Chapter 3 focuses on the design of mid-air gesture mappings based on human tacit knowledge. Chapter 4 presents a solution to address user fatigue in mid-air object manipulation. Chapter 5 addresses space limitations in VR navigation. Chapter 6 describes an analysis and a correction method for the drift effects involved in scale-adaptive VR navigation; and Chapter 7 presents a hybrid 3D/2D technique that allows for precise selection of virtual objects in highly dense environments (e.g., point clouds). Finally, we conclude by discussing how the contributions obtained from this exploration provide techniques and guidelines to design more natural 3DUIs.

    Real-walking models improve walking-in-place systems

    Many Virtual Environment (VE) systems require a walking interface to travel through the virtual world. Real walking, the preferred interface, is only feasible if the virtual world is smaller than the real-world tracked space. When the to-be-explored virtual world is larger than the real-world tracked space, Walking-In-Place (WIP) systems are frequently used: the user's in-place stepping gestures are measured and identified by the WIP system to move the virtual viewpoint through the virtual world. When the system-generated forward motions do not match the user's intended motions, the user becomes frustrated and the VE experience degrades. This dissertation presents two real-walking-based models that enable WIP systems to generate walking-like speeds; direction from in-place stepping gestures is not covered. The first model (GUD WIP) calculates in-place step frequency, even when only a portion of a step has been completed. The second (the Forward Walking Model) measures a user's step-frequency-to-walk-speed function in real time from head-tracking data alone. The two models combined enable per-user-calibrated, real-walking-like speeds from in-place stepping gestures. This dissertation also presents validation studies for each model and a usability study demonstrating that WIP systems employing these models outperform a previous speed-focused WIP system.
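
    How the two models might fit together can be illustrated with a short sketch: estimate the in-place step frequency from the vertical head trajectory, then map it to a walking speed through a per-user function. The zero-crossing estimator and the linear coefficients below are placeholder assumptions; they stand in for GUD WIP's biomechanics-based frequency estimation and the calibrated Forward Walking Model, which are not reproduced here.

    import numpy as np

    def step_frequency(head_heights, timestamps):
        # Rough step-frequency estimate (Hz) from sign changes of the
        # mean-removed vertical head trajectory, assuming one vertical
        # oscillation per in-place step.
        y = np.asarray(head_heights, dtype=float)
        y -= y.mean()
        crossings = np.count_nonzero(np.signbit(y[1:]) != np.signbit(y[:-1]))
        duration = timestamps[-1] - timestamps[0]
        # Two sign changes per oscillation, one oscillation per step.
        return crossings / (2.0 * duration) if duration > 0 else 0.0

    def walk_speed(freq, a=0.6, b=0.2):
        # Per-user step-frequency-to-walk-speed function. The linear form
        # v = a * f + b and its coefficients are placeholders for values that
        # would be calibrated per user from head-tracking data.
        return a * freq + b if freq > 0 else 0.0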

    Understanding and designing for control in camera operation

    Cinematographers traditionally use supportive tools to craft desired camera moves. Recent technological advances have added new tools to the palette, such as gimbals, drones and robots. The combination of motor-driven actuation, computer vision and machine learning in such systems has also made new interaction techniques possible. In particular, a content-based interaction style was introduced in addition to the established axis-based style. On the one hand, content-based co-creation between humans and automated systems made it easier to reach high-level goals. On the other hand, the increased use of automation also introduced negative side effects. Creatives usually want to feel in control while executing the camera motion and, in the end, to feel like the authors of the recorded shots. While automation can assist experts or enable novices, it unfortunately also takes away desired control from operators. Thus, if we want to support cinematographers with new tools and interaction techniques, the following question arises: how should we design interfaces for camera motion control that, despite being increasingly automated, provide cinematographers with an experience of control? Camera control has been studied for decades, especially in virtual environments. Applying content-based interaction to physical environments opens up new design opportunities but also faces less researched, domain-specific challenges. To suit the needs of cinematographers, designs need to be crafted with care; in particular, they must adapt to the constraints of recording on location, which makes an interplay with established practices essential. Previous work has mainly focused on a technology-centered understanding of camera travel, which consequently influenced the design of camera control systems. In contrast, this thesis contributes to an understanding of the motives of cinematographers and how they operate on set, and provides a user-centered foundation informing cinematography-specific research and design. The contribution of this thesis is threefold: First, we present ethnographic studies on expert users and their shooting practices on location. These studies highlight the challenges of introducing automation to a creative task (assistance vs. feeling in control). Second, we report on a domain-specific prototyping toolkit for in-situ deployment. The toolkit provides open-source software for low-cost replication, enabling the exploration of design alternatives. To better inform design decisions, we further introduce an evaluation framework for estimating the resulting quality and sense of control. By extending established methodologies with a recent neuroscientific technique, it provides data at explicit as well as implicit levels and is designed to be applicable to other domains of HCI. Third, we present evaluations of designs based on our toolkit and framework. We explored a dynamic interplay of manual control with various degrees of automation, and examined different content-based interaction styles. Occlusion caused by graphical elements was identified and addressed by exploring visual reduction strategies and mid-air gestures. Our studies demonstrate that high degrees of quality and sense of control are achievable with our tools, which also support creativity and established practices.