
    Seamless and Secure VR: Adapting and Evaluating Established Authentication Systems for Virtual Reality

    Virtual reality (VR) headsets are enabling a wide range of new opportunities for users. For example, in the near future users may be able to visit virtual shopping malls and virtually join international conferences. These and many other scenarios pose new questions with regard to privacy and security, in particular the authentication of users within the virtual environment. As a first step towards seamless VR authentication, this paper investigates the direct transfer of well-established concepts (PINs, Android unlock patterns) into VR. In a pilot study (N = 5) and a lab study (N = 25), we adapted existing mechanisms and evaluated their usability and security for VR. The results indicate that both PINs and patterns are well suited for authentication in VR: the usability of both methods matched the performance known from the physical world. In addition, the private visual channel makes authentication harder to observe, indicating that authentication in VR using traditional concepts already achieves a good balance in the trade-off between usability and security. The paper contributes to a better understanding of authentication within VR environments by providing the first investigation of established authentication methods in VR, and it lays the groundwork for the design of future authentication schemes intended exclusively for VR environments.
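
    The paper evaluates direct transfers of PIN entry into VR. As a minimal sketch of such a transfer (not the authors' implementation), a controller ray can be intersected with a virtual PIN pad to pick digits; the pad layout, key size, and all names below are illustrative assumptions.

    ```python
    import numpy as np

    # Illustrative 3x4 PIN pad laid out on the plane z = 0, lower-left at the origin.
    KEYS = [["1", "2", "3"], ["4", "5", "6"], ["7", "8", "9"], ["", "0", ""]]
    KEY_SIZE = 0.08  # key edge length in metres (assumed)

    def pick_digit(ray_origin, ray_dir):
        """Intersect a controller ray with the pad plane and return the
        digit under the hit point, or None if the ray misses the pad."""
        o, d = np.asarray(ray_origin, float), np.asarray(ray_dir, float)
        if abs(d[2]) < 1e-6:
            return None                      # ray parallel to the pad plane
        t = -o[2] / d[2]
        if t < 0:
            return None                      # pad is behind the controller
        x, y, _ = o + t * d
        col, row = int(x // KEY_SIZE), int(y // KEY_SIZE)
        if 0 <= row < 4 and 0 <= col < 3:
            return KEYS[3 - row][col] or None  # row 0 of KEYS is the top row
        return None
    ```

    Selected digits would then be buffered and compared against the stored PIN, as on a physical keypad.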

    3DTouch: A wearable 3D input device with an optical sensor and a 9-DOF inertial measurement unit

    We present 3DTouch, a novel 3D wearable input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the gap for a 3D input device that is self-contained, mobile, and works universally across various 3D platforms. This paper presents a low-cost solution to designing and implementing such a device. Our approach relies on a relative positioning technique that combines an optical laser sensor with a 9-DOF inertial measurement unit. The device employs touch input for the benefits of passive haptic feedback and movement stability; as a result, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices. We propose a set of 3D interaction techniques, including selection, translation, and rotation, using 3DTouch. An evaluation demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. Modular solutions like 3DTouch open up a whole new design space for interaction techniques to build upon.
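
    The abstract names the two sensors but not the fusion step. A minimal sketch of one plausible fusion, lifting the optical sensor's 2D surface displacement into 3D using the IMU orientation, is shown below; the scale factor, quaternion convention, and function names are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def rotate(q, v):
        """Rotate vector v by the unit quaternion q = (w, x, y, z)."""
        w, r = q[0], np.asarray(q[1:], float)
        return v + 2.0 * np.cross(r, np.cross(r, v) + w * v)

    def update_position(pos, q_imu, dx, dy, counts_to_m=0.001):
        """Advance the 3D cursor: the optical sensor reports a 2D displacement
        (dx, dy) in its own plane; the IMU orientation q_imu lifts that plane
        into world space. counts_to_m is an assumed calibration constant."""
        local = np.array([dx, dy, 0.0]) * counts_to_m
        return np.asarray(pos, float) + rotate(q_imu, local)
    ```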

    Laser Pointer Tracking in Projector-Augmented Architectural Environments

    We present a system that applies a custom-built pan-tilt-zoom camera for laser-pointer tracking in arbitrary real environments. Once placed in a building environment, it carries out a fully automatic self-registration, registration of projectors, and sampling of surface parameters such as geometry and reflectivity. After these steps, it can be used to track a laser spot on a surface as well as an LED marker in 3D space, using an interplay of a fisheye context camera and a controllable detail camera. The captured surface information can be used to mask out areas that are critical to laser-pointer tracking, and to guide geometric and radiometric image-correction techniques that enable projector-based augmentation on arbitrary surfaces. We describe a distributed software framework that couples laser-pointer tracking for interaction, projector-based AR, and video see-through AR for visualization with the domain-specific functionality of existing desktop tools for architectural planning, simulation, and building surveying.
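
    The full pipeline (PTZ control, self-registration, surface sampling) is beyond a snippet, but the basic spot-detection step can be sketched: find the brightest blob in a camera frame and accept it only above a brightness threshold. The threshold value and function name are assumptions, not the paper's implementation.

    ```python
    import cv2

    def find_laser_spot(frame_bgr, min_brightness=220):
        """Return the (x, y) pixel of the brightest spot in a camera frame,
        or None if nothing is bright enough to be a laser spot."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (11, 11), 0)   # suppress sensor noise
        _, max_val, _, max_loc = cv2.minMaxLoc(gray)
        return max_loc if max_val >= min_brightness else None
    ```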

    Identifying Inexpensive Off-the-Shelf Laser Pointers for Multi-User Interaction on Large Scale Displays

    We present a method for identifying inexpensive, off-the-shelf laser pointers in a multi-user interaction environment on large-scale displays. We identify a laser pointer's personality, a measure of its output in a particular context. Our method requires a set of inexpensive and unmodified green lasers, a large screen, a projector, and a camera with an infrared (IR) filter. The camera detects the IR spillover from the green laser beam while ignoring color information projected onto the screen. During a calibration phase, a radial histogram of each laser's IR spillover is used to represent the laser's personality. Our system is able to identify the spots of a specific laser, allowing multiple users to interact simultaneously in the environment. In addition, we present a series of applications that take advantage of tracked and identified laser pointers to demonstrate large-scale, multi-user interactions.
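
    As an illustrative sketch of the 'personality' descriptor, the radial histogram of IR spillover around a detected spot could be computed as below; the bin count, radius, and names are assumptions. Identification would then pick the calibrated laser whose descriptor is nearest, e.g. in Euclidean distance.

    ```python
    import numpy as np

    def radial_histogram(ir_patch, center, n_bins=16, r_max=30.0):
        """Mean IR intensity per radial distance bin around a detected spot;
        this radial profile acts as the laser's 'personality' descriptor."""
        h, w = ir_patch.shape
        ys, xs = np.mgrid[0:h, 0:w]
        r = np.hypot(xs - center[0], ys - center[1])
        bins = np.minimum((r / r_max * n_bins).astype(int), n_bins - 1)
        sums = np.bincount(bins.ravel(),
                           weights=ir_patch.ravel().astype(float),
                           minlength=n_bins)
        counts = np.bincount(bins.ravel(), minlength=n_bins)
        return sums / np.maximum(counts, 1)
    ```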

    The Performance of Knowledge: Pointing and Knowledge in Powerpoint Presentations

    This publication is freely accessible with the permission of the rights owner under an Alliance licence or a national licence (funded by the DFG, German Research Foundation). Powerpoint and similar technologies have contributed to a profound transformation of lecturing and presenting information. In focusing on pointing in Powerpoint presentations, the article addresses aspects of this transformation of speech into 'presentations'. As opposed to popular attacks against Powerpoint, the analysis of a large number of audio-visually recorded presentations (mainly in German) demonstrates the creativity of these 'performances', based on the interplay of slides (and other aspects of this technology), speech, pointing, and body formations. Pointing seems to be a particular feature of this kind of presentation, allowing knowledge to be located in space. Considering Powerpoint as one of the typical technologies of so-called 'knowledge societies', this aspect provides some indication as to the social understanding of knowledge. Instead of 'representing' reality, knowledge is defined by the circularity of speaking and showing, thus becoming presented knowledge rather than representing knowledge.

    Services surround you: physical-virtual linkage with contextual bookmarks

    Our daily life is pervaded by digital information and devices, not least the common mobile phone. However, a seamless connection between our physical world, such as a movie trailer on a screen in a main railway station, and its digital counterparts, such as an online ticket service, remains difficult. In this paper, we present contextual bookmarks, which enable users to capture information of interest with a mobile camera phone. Depending on the user's context, the snapshot is mapped to a digital service such as ordering tickets for a nearby movie theater or a link to the upcoming movie's Web page.
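
    The abstract leaves the recognition technique unspecified. One plausible sketch maps a camera snapshot to the service whose reference image it matches best, here using ORB features; the service-list schema, thresholds, and names are assumptions, not the paper's method.

    ```python
    import cv2

    def best_service(snapshot, services, min_good=25):
        """Map a camera snapshot to the best-matching service URL.
        services: list of (reference_image, url) pairs; all images are
        8-bit grayscale arrays."""
        orb = cv2.ORB_create(nfeatures=500)
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        _, d_query = orb.detectAndCompute(snapshot, None)
        if d_query is None:
            return None
        best_url, best_score = None, 0
        for ref, url in services:
            _, d_ref = orb.detectAndCompute(ref, None)
            if d_ref is None:
                continue
            good = [m for m in bf.match(d_query, d_ref) if m.distance < 40]
            if len(good) > best_score:
                best_url, best_score = url, len(good)
        return best_url if best_score >= min_good else None
    ```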

    Interaktion mit Medienfassaden: Design und Implementierung interaktiver Systeme für große urbane Displays (Interaction with Media Facades: Design and Implementation of Interactive Systems for Large Urban Displays)

    Media facades are a prominent example of the digital augmentation of urban spaces. They denote the concept of turning the surface of a building into a large-scale urban screen. Due to their enormous size, they require interaction at a distance, and they have a high level of visibility. Additionally, they are situated in highly dynamic urban environments with rapidly changing conditions, which results in settings that are neither comparable nor reproducible. Altogether, this makes the development of interactive media facade installations a challenging task. This thesis investigates the design of interactive installations for media facades holistically. A theoretical analysis of the design space for interactive media facade installations is conducted to derive taxonomies that put such installations into context. Along with this, a set of observations and guidelines is provided to derive properties of the interaction from the technical characteristics of an interactive media facade installation. The thesis further contributes three novel interaction techniques that address the form factor and resolution of the facade without the need to additionally instrument the space around it. Finally, it contributes to the design of interactive media facade installations by providing a generalized media facade toolkit for rapidly prototyping and simulating such installations, independent of the media facade's size, form factor, technology, and underlying hardware.

    Interacting "Through the Display"

    The increasing availability of displays at lower costs has led to their proliferation in our everyday lives. Additionally, mobile devices are ready at hand and have been proposed as interaction devices for external screens. However, only their input mechanisms have been taken into account, without considering three additional factors in environments hosting several displays: first, a connection needs to be established to the desired target display (modality). Second, screens in the environment may be re-arranged (flexibility). And third, displays may be out of the user's reach (distance). In our research we aim to overcome the problems resulting from these characteristics. The overall goal is a new interaction model that allows for (1) a non-modal connection mechanism for impromptu use of various displays in the environment, (2) interaction on and across displays in highly flexible environments, and (3) interaction at variable distances. In this work we propose a new interaction model called through-the-display interaction, which enables users to interact with remote content on their personal device in an absolute and direct fashion. To gain a better understanding of the effects of these additional characteristics, we implemented two prototypes, each of which investigates a different distance to the target display: LucidDisplay allows users to place their mobile device directly on top of a larger external screen, whereas MobileVue enables users to interact with an external screen at a distance. In each of these prototypes we analyzed the effects on the remaining two criteria, namely the modality of the connection mechanism and the flexibility of the environment. With the findings gained in this initial phase we designed Shoot & Copy, a system that detects screens purely based on their visual content. Users aim their personal device's camera at the target display, which then appears in the live video shown in the viewfinder. To select an item, users take a picture, which is analyzed to determine the targeted region. We further extended this approach to multiple displays by using a centralized component serving as a gateway to the display environment. In Tap & Drop we refined this prototype to support real-time feedback: instead of taking pictures, users can now aim their mobile device at the display and start interacting immediately. In doing so, we broke the rigid sequential interaction of content selection and content manipulation. Both prototypes allow for (1) connections in a non-modal way (i.e., aim at the display and start interacting with it) from the user's point of view and (2) fully flexible environments (i.e., the mobile device tracks itself with respect to displays in the environment). However, the wide-angle lenses, and thus large fields of view, of current mobile devices still do not allow for variable distances. In Touch Projector, we overcome this limitation by introducing zooming in combination with temporarily freezing the video image. Based on our extensions to the taxonomy of mobile device interaction on external displays, we created a refined model of interacting through the display for mobile use. It enables users to interact impromptu without explicitly establishing a connection to the target display (non-modal). As the mobile device tracks itself with respect to displays in the environment, the model further allows for full flexibility of the environment (i.e., displays can be re-arranged without affecting the interaction).
And above all, users can interact with external displays, regardless of their actual size, at variable distances without any loss of accuracy.
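
    The thesis describes Shoot & Copy's detection as purely content-based. A plausible reconstruction, not the author's implementation, is feature matching plus a RANSAC homography between the phone's camera image and the known display content; the sketch below (all names and parameters are illustrative) maps the camera's image centre, i.e. the point the user aimed at, into display coordinates.

    ```python
    import cv2
    import numpy as np

    def locate_aim_point(camera_img, display_img):
        """Where on the (known) display content is the phone camera aimed?
        Both images are 8-bit grayscale arrays."""
        orb = cv2.ORB_create(1000)
        kq, dq = orb.detectAndCompute(camera_img, None)
        kt, dt = orb.detectAndCompute(display_img, None)
        if dq is None or dt is None:
            return None
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(dq, dt), key=lambda m: m.distance)[:100]
        src = np.float32([kq[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kt[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            return None
        h, w = camera_img.shape[:2]
        centre = np.float32([[[w / 2.0, h / 2.0]]])  # the aimed-at pixel
        return tuple(cv2.perspectiveTransform(centre, H)[0, 0])
    ```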

    Detection of Non-Stationary Photometric Perturbations on Projection Screens

    Interfaces based on projection screens have become increasingly popular in recent years, mainly due to the large screen size and resolution that they provide, as well as their stereo-vision capabilities. This work presents a local method for the real-time detection of non-stationary photometric perturbations in projected images by means of computer vision techniques. The method is based on computing differences between the images in the projector's frame buffer and the corresponding images on the projection screen as observed by a camera. It is robust under spatial variations in the intensity of light emitted by the projector onto the projection surface, and also under stationary photometric perturbations caused by external factors. Moreover, we describe the experiments carried out to show the reliability of the method.
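
    The core step, differencing the frame-buffer image (warped into camera space) against the observed camera image while tolerating stationary differences, might look like the sketch below. The baseline model, threshold rule, and names are illustrative assumptions, not the authors' method.

    ```python
    import cv2
    import numpy as np

    def detect_perturbations(expected, observed, baseline, k=3.0, floor=10.0):
        """Flag non-stationary perturbations on the projection screen.
        expected: frame-buffer image warped into camera space (8-bit grayscale)
        observed: current camera image of the screen (8-bit grayscale)
        baseline: per-pixel mean absolute difference learned while the scene
                  is unperturbed; it absorbs stationary photometric effects
        Returns a binary mask of perturbed pixels."""
        diff = cv2.absdiff(observed, expected).astype(np.float32)
        mask = (diff > k * baseline + floor).astype(np.uint8) * 255
        # Remove isolated noise pixels with a morphological opening.
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    ```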