
    Natural navigation in space and time

    Faculty at the Department of Computer Science at RIT developed Spiegel, a scientific data visualization framework. The system needed a natural interface for controlling 3D data visualizations in real time. This thesis describes an extendable system for testing remote control interfaces for 3-dimensional virtual spaces. We developed and compared four remote controls: the multi-touch TouchPad, the gyroscope-based GyroPad, a wearable Data Glove, and a Kinect-based Hands controller. Our study revealed the TouchPad to be the most natural remote control.
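
    The thesis abstract does not specify the testing system's API; purely as an illustration of how such an extendable harness can stay controller-agnostic, here is a minimal Python sketch in which every tested device maps its raw input to a common camera delta (all class and field names are hypothetical):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class CameraDelta:
    """Incremental camera change: yaw/pitch in radians, dolly along the view axis."""
    yaw: float = 0.0
    pitch: float = 0.0
    dolly: float = 0.0

class RemoteControl(ABC):
    """Common interface each tested controller would implement."""
    @abstractmethod
    def poll(self) -> CameraDelta:
        """Read the device and translate raw input into a camera delta."""

class TouchPad(RemoteControl):
    """Example implementation: a one-finger drag rotates the view."""
    def __init__(self, sensitivity: float = 0.01):
        self.sensitivity = sensitivity
        self._dx = self._dy = 0.0  # filled in by the platform's touch events

    def poll(self) -> CameraDelta:
        return CameraDelta(yaw=self._dx * self.sensitivity,
                           pitch=self._dy * self.sensitivity)
```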

    Sensor-based user interface concepts for continuous, around-device and gestural interaction on mobile devices

    A generally observable trend of the past ten years is that the number of sensors embedded in mobile devices such as smartphones and tablets is rising steadily. Arguably, the available sensors are mostly underutilized by existing mobile user interfaces. In this dissertation, we explore sensor-based user interface concepts for mobile devices, with the goal of making better use of the available sensing capabilities as well as gaining insights into the types of sensor technologies that could be added to future mobile devices. We are particularly interested in how novel sensor technologies could be used to implement engaging mobile user interface concepts. We explore three areas of interest: continuous interaction, around-device interaction, and motion gestures.

    For continuous interaction, we explore the use of dynamic state-space systems to implement user interfaces based on a continuous sensor data stream. In particular, we examine zoom automation in tilt-based map scrolling interfaces. We show that although fully automatic zooming is desirable in certain situations, adding a manual override of the zoom level (Semi-Automatic Zooming, sketched below) increases the usability of such a system, as shown by decreased task completion times and improved user ratings in a user study. The work on continuous interaction also highlights how the sensors embedded in current mobile devices can support complex interaction tasks.

    We go on to introduce the concept of Around-Device Interaction (ADI). By extending the interactive area of the mobile device to its entire surface and the physical volume surrounding it, we aim to show how the expressivity of mobile input can be improved. We derive a design space for ADI and evaluate three prototypes in this context. HoverFlow is a prototype allowing coarse hand gesture recognition around a mobile device using only a simple set of sensors. PalmSpace is a prototype exploring the use of depth cameras on mobile devices to track the user's hands in direct manipulation interfaces through spatial gestures. Lastly, the iPhone Sandwich is a prototype supporting dual-sided pressure-sensitive multi-touch interaction. Through the results of user studies, we show that ADI can lead to improved usability for mobile user interfaces. Furthermore, the work on ADI contributes suggestions for the types of sensors that could be incorporated into future mobile devices to expand their input capabilities.

    In order to broaden the scope of uses for mobile accelerometer and gyroscope data, we conducted research on motion gesture recognition. With the aim of supporting practitioners and researchers in integrating motion gestures into their user interfaces at early development stages, we developed two motion gesture recognition algorithms, the $3 Gesture Recognizer and Protractor 3D, which are easy to incorporate into existing projects, have good recognition rates, and require little training data. To exemplify an application area for motion gestures, we present the results of a study on the feasibility and usability of gesture-based authentication. With the goal of making it easier to connect meaningful functionality with gesture-based input, we developed Mayhem, a graphical end-user programming tool for users without prior programming skills. Mayhem can be used for rapid prototyping of mobile gestural user interfaces.
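    The dissertation's actual control law for Semi-Automatic Zooming is not reproduced in this abstract; the sketch below only illustrates the general idea under assumed constants: the zoom level follows panning speed (faster panning zooms out, as in speed-dependent automatic zooming), and a manual offset from the user is blended on top.

```python
import math

def automatic_zoom(scroll_speed: float, z_min: float = 1.0,
                   z_max: float = 18.0, k: float = 1.5) -> float:
    """Map panning speed (screen units/s) to a zoom level: faster panning
    zooms out so that on-screen motion stays readable."""
    return max(z_min, z_max - k * math.log1p(abs(scroll_speed)))

def semi_automatic_zoom(scroll_speed: float, manual_offset: float,
                        blend: float = 0.5) -> float:
    """Blend the automatic zoom level with the user's manual override;
    blend=0 is fully automatic, blend=1 gives the override full weight."""
    return automatic_zoom(scroll_speed) + blend * manual_offset

# e.g. fast panning while the user manually pulls the zoom back in:
level = semi_automatic_zoom(400.0, manual_offset=2.0)
```
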
    The main contribution of this dissertation is the development of a number of novel user interface concepts for sensor-based interaction. These concepts will help developers of mobile user interfaces make better use of the existing sensing capabilities of mobile devices. Furthermore, manufacturers of mobile hardware obtain suggestions for the types of novel sensor technologies needed to expand the input capabilities of mobile devices, enabling future mobile user interfaces with increased input capabilities, more expressiveness, and improved usability.
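
    The $3 Gesture Recognizer and Protractor 3D are specified in their respective publications; as a rough reference only, the sketch below shows the template-matching core that the $-family of recognizers shares: resample a 3D motion trace to a fixed length, normalize its position and scale, and return the label of the nearest stored template (NumPy, all names illustrative):

```python
import numpy as np

N = 32  # resampled trace length, typical for $-family recognizers

def normalize(trace: np.ndarray) -> np.ndarray:
    """Resample a (k, 3) motion trace to N points, center it, scale it."""
    seg = np.linalg.norm(np.diff(trace, axis=0), axis=1)
    d = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    t = np.linspace(0.0, d[-1], N)                # equally spaced stations
    pts = np.column_stack([np.interp(t, d, trace[:, i]) for i in range(3)])
    pts -= pts.mean(axis=0)                       # translate centroid to origin
    return pts / (np.abs(pts).max() or 1.0)       # uniform unit scale

def recognize(trace: np.ndarray, templates: dict) -> str:
    """Label of the stored template with the smallest mean point distance."""
    c = normalize(trace)
    return min(templates, key=lambda name: np.mean(
        np.linalg.norm(c - normalize(templates[name]), axis=1)))
```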

    Generalized Trackball and 3D Touch Interaction

    This thesis addresses the problem of 3D interaction by means of touch and mouse input. We propose a multi-touch-enabled adaptation of the classical mouse-based trackball interaction scheme. In addition, we introduce a new interaction metaphor based on visiting the space around a virtual object while remaining at a given distance from it. This approach allows intuitive navigation of topologically complex shapes, enabling inexperienced users to reach parts that are otherwise hard to reach.
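
    For reference, the classical trackball scheme that the thesis generalizes maps 2D pointer motion onto a virtual sphere and derives a rotation from successive contact points. A minimal sketch of that baseline (not of the generalized variant proposed in the thesis):

```python
import numpy as np

def to_sphere(x: float, y: float) -> np.ndarray:
    """Map normalized screen coords ([-1, 1]^2) onto the virtual trackball:
    a unit sphere near the center, blended into a hyperbola at the rim."""
    r2 = x * x + y * y
    z = np.sqrt(1.0 - r2) if r2 < 0.5 else 0.5 / np.sqrt(r2)
    v = np.array([x, y, z])
    return v / np.linalg.norm(v)

def drag_rotation(p0, p1):
    """Axis-angle rotation taking the previous pointer position to the current one."""
    a, b = to_sphere(*p0), to_sphere(*p1)
    axis = np.cross(a, b)
    angle = float(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))
    return axis, angle
```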

    Multi-touch selection and interaction on and above the surface

    Multi-touch input is the de facto standard for interaction with mobile devices. As mobile devices become more powerful, users perform increasingly complex computing tasks on them, necessitating interaction techniques and interfaces that are both precise and expressive. Selection is a fundamental operation in graphical user interfaces; a survey of existing selection techniques provides an overview of previous work on selection. Most techniques for touch input are based on traditional selection techniques that were developed for mouse-based interfaces and use only a single touch. True multi-touch input presents an opportunity, but also a challenge, to create more powerful and expressive interaction techniques. We present a selection technique for arbitrary regions and an interface for tagging and editing complex groups, both facilitated by the capabilities of multiple touches. In the second part of this thesis, multi-touch input is combined with a stereoscopic projection to access the third dimension above the surface.
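
    The thesis' own region-selection technique is not detailed in this abstract; as a generic illustration of selecting by an arbitrary region, the sketch below treats a recorded touch path as a closed polygon and selects objects by an even-odd point-in-polygon test (pure Python, names hypothetical):

```python
def point_in_polygon(pt, polygon):
    """Even-odd ray-casting test: is pt inside the closed 2D polygon?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        if (y0 > y) != (y1 > y):  # edge crosses the horizontal ray at y
            if x < x0 + (y - y0) * (x1 - x0) / (y1 - y0):
                inside = not inside
    return inside

def select_in_region(objects, lasso):
    """Select every object whose screen position falls inside the lasso path."""
    return [o for o in objects if point_in_polygon(o["pos"], lasso)]

# e.g. a square lasso catching one object at its center:
hits = select_in_region([{"pos": (0.5, 0.5)}], [(0, 0), (1, 0), (1, 1), (0, 1)])
```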

    Scalable exploration of highly detailed and annotated 3D models

    With the widespread availability of mobile graphics terminals and WebGL-enabled browsers, 3D graphics over the Internet is thriving. Thanks to recent advances in 3D acquisition and modeling systems, high-quality 3D models are becoming increasingly common, and are now potentially available for ubiquitous exploration. In current 3D repositories, such as Blend Swap, 3D Café, or Archive3D, 3D models available for download are mostly presented through a few user-selected static images. Online exploration is limited to simple orbiting and/or low-fidelity explorations of simplified models, since photorealistic rendering quality of complex synthetic environments is still hardly achievable within the real-time constraints of interactive applications, especially on low-powered mobile devices or script-based Internet browsers. Moreover, navigating inside 3D environments, especially on the now pervasive touch devices, is a non-trivial task, and usability is consistently improved by employing assisted navigation controls. In addition, 3D annotations are often used in order to integrate and enhance the visual information by providing spatially coherent contextual information, typically at the expense of introducing visual clutter. In this thesis, we focus on efficient representations for interactive exploration and understanding of highly detailed 3D meshes on common 3D platforms. For this purpose, we present several approaches exploiting constraints on the data representation for improving the streaming and rendering performance, and camera movement constraints in order to provide scalable navigation methods for interactive exploration of complex 3D environments. Furthermore, we study visualization and interaction techniques to improve the exploration and understanding of complex 3D models by exploiting guided motion control techniques to aid the user in discovering contextual information while avoiding cluttering the visualization. We demonstrate the effectiveness and scalability of our approaches both in large-screen museum installations and on mobile devices, by performing interactive exploration of models ranging from 9 million to 940 million triangles.
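
    The thesis' specific multiresolution structures are not given in this abstract, but the scalability argument rests on a standard idea that is easy to sketch: refine a node of a multiresolution hierarchy only while its geometric error, projected to screen space, exceeds a pixel tolerance. A minimal sketch with dict-based nodes and hypothetical field names:

```python
import math

def projected_error(err: float, dist: float, fov_y: float, viewport_h: int) -> float:
    """Object-space error (world units) projected to screen pixels."""
    return err * viewport_h / (2.0 * max(dist, 1e-6) * math.tan(fov_y / 2.0))

def collect_renderable(node, cam, tol_px=1.0, out=None):
    """Descend the hierarchy while the projected error is above the pixel
    tolerance; otherwise this level of detail is accurate enough to render."""
    out = [] if out is None else out
    dist = math.dist(cam["pos"], node["center"])
    if node["children"] and projected_error(node["error"], dist,
                                            cam["fov_y"], cam["h"]) > tol_px:
        for child in node["children"]:
            collect_renderable(child, cam, tol_px, out)
    else:
        out.append(node)
    return out
```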

    Optimizing Human Performance in Mobile Text Entry

    Text entry on mobile phones is ubiquitous, yet research still strives to achieve desktop typing performance "on the go". But how can researchers evaluate new and existing mobile text entry techniques? How can they ensure that evaluations are conducted in a consistent manner that facilitates comparison? What forms of input are possible on a mobile device? Do the audio and haptic feedback options of most touchscreen keyboards affect performance? What influences users' preference for one feedback or another? Can rearranging the characters and keys of a keyboard improve performance? This dissertation answers these questions and more. The developed TEMA software allows researchers to evaluate mobile text entry methods in an easy, detailed, and consistent manner; many in academia and industry have adopted it. TEMA was used to evaluate a typical QWERTY keyboard with multiple options for audio and haptic feedback. Though feedback did not have a significant effect on performance, a survey revealed that users' choice of feedback is influenced by social and technical factors. Another study using TEMA showed that novice users entered text faster using a tapping technique than with a gesture or handwriting technique. This motivated rearranging the keys and characters to create a new keyboard, MIME, that would provide better performance for expert users. Data on character frequency and key selection times were gathered and used to design MIME. A longitudinal user study using TEMA revealed an entry speed of 17 wpm and a total error rate of 1.7% for MIME, compared to 23 wpm and 5.2% for QWERTY. Although MIME's entry speed did not surpass QWERTY's during the study, it is projected to do so after twelve hours of practice. MIME's error rate was consistently low and significantly lower than QWERTY's. In addition, participants found MIME more comfortable to use, with some reporting hand soreness after using QWERTY for extended periods.
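
    TEMA's logging format is not given here, but the metrics the study reports are standard in the text entry literature and easy to sketch. Below, entry speed uses the conventional five-characters-per-word factor, and the uncorrected error component is derived from the minimum string distance between presented and transcribed phrases; note that the study's total error rate additionally counts corrected keystrokes, which this sketch omits:

```python
def wpm(transcribed: str, seconds: float) -> float:
    """Words per minute; |T| - 1 because timing starts at the first keystroke,
    and one word is conventionally five characters."""
    return ((len(transcribed) - 1) / seconds) * 60.0 / 5.0

def msd(a: str, b: str) -> int:
    """Minimum string distance (Levenshtein) between presented and transcribed."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def uncorrected_error_rate(presented: str, transcribed: str) -> float:
    return msd(presented, transcribed) / max(len(presented), len(transcribed))
```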

    Rapid development of applications for the interactive visual analysis of multimodal medical data

    With multimodal volumetric medical data sets becoming more common due to the increasing availability of scanning hardware, software for the visualization and analysis of such data sets needs to become more efficient as well in order to prevent overloading the user with data. This dissertation presents several interactive techniques for the visual analysis of medical volume data. All applications are based on extensions to the Voreen volume rendering framework, which we discuss first. Since visual analysis applications are interactive by definition, we propose a general-purpose navigation technique for volume data. Next, we discuss our concepts for the interactive planning of brain tumor resections. Finally, we present two systems designed to work with images of vasculature: an interactive vessel segmentation system enabling an efficient, visually supported workflow, and an application for the visual analysis of PET tracer uptake along vessels.
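
    Voreen's actual rendering pipeline is GPU-based and far more involved; purely as a reference for the volume raycasting at its core, here is a minimal CPU sketch of front-to-back compositing of scalar samples along one ray through a transfer function (all names illustrative):

```python
import numpy as np

def composite_ray(samples: np.ndarray, transfer) -> np.ndarray:
    """Front-to-back alpha compositing of scalar samples along one ray.
    `transfer` maps a scalar to (r, g, b, a); stops early once opaque."""
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        r, g, b, a = transfer(s)
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:  # early ray termination
            break
    return np.append(color, alpha)

# e.g. a trivial grayscale transfer function:
rgba = composite_ray(np.linspace(0, 1, 64), lambda s: (s, s, s, 0.05 * s))
```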