
    Adaptive User Perspective Rendering for Handheld Augmented Reality

    Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user's head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands of user perspective rendering by applying lightweight optical flow tracking and an estimation of the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and compare it to device perspective rendering, to head-tracked user perspective rendering, and to fixed-point-of-view user perspective rendering.
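The lightweight motion estimation the abstract describes could be sketched, very loosely, as follows: given sparse feature correspondences between two consecutive camera frames (as produced by any optical flow tracker), take the median displacement as a robust 2D motion estimate. The function names, the median-based aggregation, and the sample coordinates are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch: estimate dominant 2D motion from sparse optical flow
# correspondences, using the median so single tracking failures are ignored.

def median(values):
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

def estimate_motion(prev_pts, next_pts):
    """Robust 2D translation estimate from tracked feature point pairs."""
    dx = [nx - px for (px, _), (nx, _) in zip(prev_pts, next_pts)]
    dy = [ny - py for (_, py), (_, ny) in zip(prev_pts, next_pts)]
    return median(dx), median(dy)

prev_pts = [(10, 10), (50, 20), (80, 60), (30, 70)]
# Most features shift by (+5, -2); the last pair simulates a tracking outlier.
next_pts = [(15, 8), (55, 18), (85, 58), (90, 130)]
print(estimate_motion(prev_pts, next_pts))  # (5.0, -2.0)
```

A real implementation would obtain the correspondences from a pyramidal Lucas-Kanade tracker on the front camera and feed the estimate to the renderer until full head tracking takes over.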

    EyeSpot: leveraging gaze to protect private text content on mobile devices from shoulder surfing

    As mobile devices allow access to an increasing amount of private data, using them in public can potentially leak sensitive information through shoulder surfing. This includes personal private data (e.g., in chat conversations) and business-related content (e.g., in emails). Leaking the former might infringe on users’ privacy, while leaking the latter is considered a breach of the EU’s General Data Protection Regulation as of May 2018. This creates a need for systems that protect sensitive data in public. We introduce EyeSpot, a technique that displays content through a spot that follows the user’s gaze while hiding the rest of the screen from an observer’s view through overlaid masks. We explore different configurations for EyeSpot in a user study in terms of users’ reading speed, text comprehension, and perceived workload. While our system is a proof of concept, we identify crystallized masks as a promising design candidate for further evaluation with regard to the security of the system in a shoulder surfing scenario.
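The gaze-contingent spot could be reduced, in the simplest reading of the abstract, to a circular visibility test around the current gaze point. The following sketch renders one row of such a mask; the spot radius, the character-grid rendering, and the function names are illustrative assumptions, not the authors' design (which uses overlaid graphical masks).

```python
# Hypothetical sketch: only pixels inside a circular spot centred on the
# current gaze point are revealed; everything else stays masked.

def visible(pixel, gaze, radius):
    """True if a screen pixel falls inside the gaze-following spot."""
    dx, dy = pixel[0] - gaze[0], pixel[1] - gaze[1]
    return dx * dx + dy * dy <= radius * radius

def mask_row(width, y, gaze, radius):
    """Render one row of the mask: '#' hides a pixel, '.' reveals it."""
    return "".join("." if visible((x, y), gaze, radius) else "#"
                   for x in range(width))

# With gaze at (5, 0) and radius 2, only a small spot of the row is readable.
print(mask_row(12, 0, (5, 0), 2))  # ###.....####
```

An observer without access to the gaze signal sees mostly masked content, which is the core of the shoulder-surfing protection being evaluated.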

    GazeTouchPass: Multimodal Authentication Using Gaze and Touch on Mobile Devices

    We propose a multimodal scheme, GazeTouchPass, that combines gaze and touch for shoulder-surfing-resistant user authentication on mobile devices. GazeTouchPass allows passwords with multiple switches between input modalities during authentication. This requires attackers to simultaneously observe the device screen and the user's eyes to find the password. We evaluate the security and usability of GazeTouchPass in two user studies. Our findings show that GazeTouchPass is usable and significantly more secure than single-modal authentication against basic and even advanced shoulder-surfing attacks.
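The security argument hinges on the number of modality switches an attacker must follow. A minimal sketch of that idea, assuming a password is a sequence of (modality, symbol) tokens (the token format and function name are illustrative, not the paper's notation):

```python
# Hypothetical sketch: count transitions between gaze and touch input in a
# multimodal password; each switch forces an observer to move attention
# between the screen and the user's eyes.

def modality_switches(password):
    """Number of adjacent token pairs that change input modality."""
    modalities = [modality for modality, _ in password]
    return sum(1 for a, b in zip(modalities, modalities[1:]) if a != b)

# touch "1", gaze left, gaze right, touch "3": two modality switches.
secret = [("touch", "1"), ("gaze", "left"), ("gaze", "right"), ("touch", "3")]
print(modality_switches(secret))  # 2
```

Under this reading, an all-touch PIN has zero switches and is fully observable from the screen alone, which is exactly the single-modal baseline the studies compare against.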

    Continuous glucose monitoring sensors: Past, present and future algorithmic challenges

    Continuous glucose monitoring (CGM) sensors are portable devices that allow measuring and visualizing the glucose concentration in real time, almost continuously, for several days, and are provided with hypo/hyperglycemic alerts and glucose trend information. CGM sensors have revolutionized Type 1 diabetes (T1D) management, improving glucose control when used adjunctively to self-monitoring blood glucose systems. Furthermore, CGM devices have stimulated the development of applications that were impossible to create without a continuous-time glucose signal, e.g., real-time predictive alerts of hypo/hyperglycemic episodes based on the prediction of future glucose concentration, automatic basal insulin attenuation methods for hypoglycemia prevention, and the artificial pancreas. However, the limited accuracy and reliability of CGM sensors has constrained their usability in clinical practice, calling upon the academic community to develop suitable signal-processing methods to improve CGM performance. The aim of this paper is to review the past and present algorithmic challenges of CGM sensors, to show how they have been tackled by our research group, and to identify possible future ones.
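One of the applications the abstract mentions, predictive hypoglycemia alerts, is often illustrated in the CGM literature with simple trend extrapolation. The sketch below fits a least-squares line to recent samples and extrapolates over a prediction horizon; the 5-minute sampling interval, the 70 mg/dL threshold, the 30-minute horizon, and all names are illustrative assumptions, not the authors' algorithms.

```python
# Hypothetical sketch of a real-time predictive hypoglycemia alert:
# linear (first-order) extrapolation of the recent CGM trend.

def linear_forecast(samples, horizon_steps):
    """Least-squares line through (index, glucose) pairs, extrapolated."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + horizon_steps)

def hypo_alert(samples, horizon_steps=6, threshold=70.0):
    """Alert if glucose is predicted below threshold at the horizon
    (6 steps of 5 minutes = 30 minutes ahead, under our assumptions)."""
    return linear_forecast(samples, horizon_steps) < threshold

falling = [120, 114, 108, 102, 96, 90]   # dropping 6 mg/dL per sample
print(hypo_alert(falling))   # True  (predicted 54 mg/dL in 30 min)
print(hypo_alert([100] * 6)) # False (flat trend)
```

Real CGM prediction methods are considerably more sophisticated (e.g., autoregressive models and Kalman filtering), and must also cope with the sensor noise and delays that motivate the signal-processing work reviewed in the paper.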

    Seamless and Secure VR: Adapting and Evaluating Established Authentication Systems for Virtual Reality

    Virtual reality (VR) headsets are enabling a wide range of new opportunities for the user. For example, in the near future users may be able to visit virtual shopping malls and virtually join international conferences. These and many other scenarios pose new questions with regard to privacy and security, in particular the authentication of users within the virtual environment. As a first step towards seamless VR authentication, this paper investigates the direct transfer of well-established concepts (PIN, Android unlock patterns) into VR. In a pilot study (N = 5) and a lab study (N = 25), we adapted existing mechanisms and evaluated their usability and security for VR. The results indicate that both PINs and patterns are well suited for authentication in VR. We found that the usability of both methods matched the performance known from the physical world. In addition, the private visual channel makes authentication harder to observe, indicating that authentication in VR using traditional concepts already achieves a good balance in the trade-off between usability and security. The paper contributes to a better understanding of authentication within VR environments by providing the first investigation of established authentication methods within VR, and presents the base layer for the design of future authentication schemes intended for use in VR environments only.

    Web-Based Visualization of Very Large Scientific Astronomy Imagery

    Visualizing and navigating through large astronomy images from a remote location with current astronomy display tools can be a frustrating experience in terms of speed and ergonomics, especially on mobile devices. In this paper, we present a high-performance, versatile and robust client-server system for remote visualization and analysis of extremely large scientific images. Applications of this work include survey image quality control, interactive data query and exploration, citizen science, as well as public outreach. The proposed software is entirely open source and is designed to be generic and applicable to a variety of datasets. It provides access to floating-point data at terabyte scales, with the ability to precisely adjust image settings in real time. The proposed clients are lightweight, platform-independent web applications built on standard HTML5 web technologies and compatible with both touch- and mouse-based devices. We put the system to the test, assess its performance, and show that a single server can comfortably handle more than a hundred simultaneous users accessing full-precision 32-bit astronomy data.
    Comment: Published in Astronomy & Computing. IIPImage server available from http://iipimage.sourceforge.net . Visiomatic code and demos available from http://www.visiomatic.org
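Remote viewers of this kind typically serve each resolution level of the image as fixed-size tiles and let the client fetch only the tiles intersecting its viewport. A minimal sketch of that access pattern, assuming 256-pixel tiles (the tile size, function name, and coordinate convention are illustrative, not necessarily what IIPImage uses):

```python
# Hypothetical sketch: compute which fixed-size tiles of one resolution
# level overlap a pixel-space viewport, so only those are requested.

def tiles_for_viewport(x, y, w, h, tile=256):
    """Return (col, row) indices of tiles overlapping the viewport."""
    first_col, last_col = x // tile, (x + w - 1) // tile
    first_row, last_row = y // tile, (y + h - 1) // tile
    return [(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]

# A 600x400 viewport at offset (300, 100) touches a 3x2 block of tiles.
print(len(tiles_for_viewport(300, 100, 600, 400)))  # 6
```

Because each pan or zoom only changes a handful of tile requests, a single server can stay responsive for many concurrent clients, which is consistent with the scalability result reported above.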