638 research outputs found

    Developing a Multi-Touch Map Application for a Large Screen in a Nature Centre

    The paper describes the development of a research prototype of a multi-touch map application for a large multi-touch screen intended for multi-user operation in a nature centre. The presented system and its development steps provide insight into what can be expected when similar systems are designed. A number of new considerations regarding multi-touch interaction, map browsing, and the needs of multiple simultaneous users have been taken into account during the challenging, ongoing development. These include a simple user interface operated with intuitive, continuous, and simultaneous gestures for map browsing, and accounting for different kinds of users and their need to interact with each other. Since multi-user map applications in multi-touch environments are still rare, these considerations may be helpful for the future development of similar map applications intended for public spaces.
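As a concrete illustration of the continuous map-browsing gestures such systems need, the core of a pinch-to-zoom handler is keeping the map point under the users' fingers fixed on screen while the zoom level changes. A minimal sketch, assuming a hypothetical view model (a world-coordinate centre plus a pixels-per-unit zoom factor; not the paper's actual implementation):

```python
def zoom_about(center, zoom, focus_screen, factor, screen_size):
    """Zoom a map view by `factor` while keeping the world point under
    the pinch focus fixed on screen. `center` is the world coordinate
    shown at the screen centre; `zoom` is pixels per world unit
    (hypothetical view model, not from the paper)."""
    cx, cy = center
    fx, fy = focus_screen
    w, h = screen_size
    # World point currently under the pinch focus.
    wx = cx + (fx - w / 2) / zoom
    wy = cy + (fy - h / 2) / zoom
    new_zoom = zoom * factor
    # Recompute the centre so (wx, wy) stays under (fx, fy).
    new_cx = wx - (fx - w / 2) / new_zoom
    new_cy = wy - (fy - h / 2) / new_zoom
    return (new_cx, new_cy), new_zoom
```

Panning falls out of the same model by shifting the centre, which is why pinch, pan, and zoom can run simultaneously in one gesture handler.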

    Modeling On and Above a Stereoscopic Multitouch Display

    We present a semi-immersive environment for conceptual design in which virtual mockups are obtained from gestures; we aim to get closer to the way people conceive, create, and manipulate three-dimensional shapes. We developed on-and-above-the-surface interaction techniques based on asymmetric bimanual interaction for creating and editing 3D models in a stereoscopic environment. Our approach combines hand and finger tracking in the space on and above a multitouch surface. This combination brings forth an alternative design environment where users can seamlessly switch between interacting on the surface or in the space above it to leverage the benefits of both interaction spaces.

    Exploring Alternative Control Modalities for Unmanned Aerial Vehicles

    Unmanned aerial vehicles (UAVs), commonly known as drones, are defined by the International Civil Aviation Organization (ICAO) as aircraft without a human pilot on board. They are currently utilized primarily in the defense and security sectors but are moving towards the general market in surprisingly powerful and inexpensive forms. While drones are presently restricted to non-commercial recreational use in the USA, it is expected that they will soon be widely adopted for both commercial and consumer use. Potentially, UAVs can revolutionize various business sectors including private security, agricultural practices, product transport, and perhaps even aerial advertising. Business Insider foresees that 12% of the expected $98 billion cumulative global spending on aerial drones over the following decade will be for business purposes.[28] At the moment, most drones are controlled by some sort of classic joystick or multitouch remote controller. While drone manufacturers have improved the overall controllability of their products, most drones shipped today are still quite challenging for inexperienced users to pilot. To help mitigate these controllability challenges and flatten the learning curve, gesture controls can be utilized to improve the piloting of UAVs. The purpose of this study was to develop and evaluate an improved and more intuitive method of flying UAVs by supporting the use of hand gestures and other non-traditional control modalities. The goal was to employ and test an end-to-end UAV system that provides an easy-to-use control interface for novice drone users. The expectation was that, with gesture-based navigation, novice users would have an enjoyable and safe experience, quickly learning to navigate a drone with ease and avoiding losing or damaging the vehicle while on the initial learning curve.
During the course of this study we learned that, while this approach offers a great deal of promise, a number of technical challenges make the problem much harder than anticipated. This thesis details our approach to the problem, analyzes the user data we collected, and summarizes the lessons learned.

    Generalized Trackball and 3D Touch Interaction

    This thesis addresses the problem of 3D interaction by means of touch and mouse input. We propose a multitouch-enabled adaptation of the classical mouse-based trackball interaction scheme. In addition, we introduce a new interaction metaphor based on visiting the space around a virtual object while remaining at a given distance from it. This approach allows intuitive navigation of topologically complex shapes, enabling inexperienced users to visit hard-to-reach parts.
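The classical trackball scheme that the thesis adapts maps 2D pointer positions onto a virtual sphere and derives a 3D rotation from successive points. A minimal sketch of one common formulation (the sphere-plus-hyperbola projection; the variant and names are illustrative, not taken from the thesis):

```python
import math

def to_sphere(x, y, radius=1.0):
    """Project a normalized screen point onto the virtual trackball:
    the sphere near the centre, a hyperbolic sheet further out, so the
    mapping stays continuous at the boundary."""
    d2 = x * x + y * y
    r2 = radius * radius
    if d2 <= r2 / 2:
        z = math.sqrt(r2 - d2)          # on the sphere
    else:
        z = r2 / (2 * math.sqrt(d2))    # on the hyperbola
    return (x, y, z)

def trackball_rotation(p_from, p_to):
    """Axis-angle rotation taking one projected point to another."""
    a = to_sphere(*p_from)
    b = to_sphere(*p_to)
    # Rotation axis: cross product of the two projected points.
    axis = (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
    # Rotation angle from the normalized dot product.
    na = math.sqrt(sum(c * c for c in a))
    nb = math.sqrt(sum(c * c for c in b))
    dot = sum(x * y for x, y in zip(a, b)) / (na * nb)
    angle = math.acos(max(-1.0, min(1.0, dot)))
    return axis, angle
```

A multitouch adaptation can feed single-finger drags through the same mapping while reserving two-finger gestures for the distance-preserving navigation the abstract describes.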

    Exploring the Multi-touch Interaction Design Space for 3D Virtual Objects to Support Procedural Training Tasks

    Multi-touch interaction has the potential to be an important input method for realistic training in 3D environments. However, multi-touch interaction has not been explored much in 3D tasks, especially when trying to leverage realistic, real-world interaction paradigms. A systematic inquiry into what realistic gestures look like for 3D environments is required to understand how users translate real-world motions to multi-touch motions. Once those gestures are defined, it is important to see how we can leverage them to enhance training tasks. To explore the interaction design space for 3D virtual objects, we began by conducting a first study of user-defined gestures. From this work we identified a taxonomy and design guidelines for 3D multi-touch gestures and how the perspective view plays a role in the chosen gesture. We also identified a desire to use pressure on capacitive touch screens. Since the best way to implement pressure still required investigation, our second study evaluated two different pressure-estimation techniques in two different scenarios. Once we had a taxonomy of gestures, we wanted to examine whether implementing these realistic multi-touch interactions in a training environment provided training benefits. Our third study compared multi-touch interaction to standard 2D mouse interaction and to actual physical training, and found that multi-touch interaction performed better than the 2D mouse and as well as physical training. This study showed us that multi-touch training using a realistic gesture set can perform as well as training on the actual apparatus. One limitation of the first training study was that the user's perspective was constrained, to allow us to focus on isolating the gestures. Since users can change their perspective in a real-life training scenario and thereby gain spatial knowledge of components, we wanted to see whether allowing users to alter their perspective helped or hindered training.
Our final study compared training with Unconstrained multi-touch interaction, Constrained multi-touch interaction, or training on the actual physical apparatus. Results show that the Unconstrained multi-touch interaction and Physical groups had significantly better performance scores than the Constrained multi-touch interaction group, with no significant difference between the Unconstrained multi-touch and Physical groups. Our results demonstrate that allowing users more freedom to manipulate objects as they would in the real world benefits training. In addition to the research already performed, we propose several avenues for future research into the interaction design space for 3D virtual objects that we believe will be of value to researchers and designers of 3D multi-touch training environments.

    Natural user interfaces in games for mobile devices

    Integrated master's thesis. Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    Gaze+RST: Integrating Gaze and Multitouch for Remote Rotate-Scale-Translate Tasks

    Our work investigates the use of gaze and multitouch to fluidly perform rotate-scale-translate (RST) tasks on large displays. The work specifically aims to understand whether gaze can provide benefit in such a task, how task complexity affects performance, and how gaze and multitouch can be combined to create an integral input structure suited to RST. We present four techniques that individually strike a different balance between gaze-based and touch-based translation while maintaining concurrent rotation and scaling operations. A 16-participant empirical evaluation revealed that three of our four techniques present viable options for this scenario, and that larger distances and rotation/scaling operations can significantly affect a gaze-based translation configuration. Furthermore, we uncover new insights regarding multimodal integrality, finding that gaze and touch can be combined into configurations that pertain to integral or separable input structures.
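The RST parameters that such techniques manipulate can be recovered from the motion of two touch points in a standard way: the change in finger distance gives scale, the angle between the finger vectors gives rotation, and the displacement of the midpoint gives translation. A minimal sketch of this standard computation (not the paper's code):

```python
import math

def rst_from_touches(p1, p2, q1, q2):
    """Derive rotate-scale-translate parameters from the motion of two
    touch points: (p1, p2) at the previous frame, (q1, q2) now."""
    # Vector between the two fingers before and after the move.
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    wx, wy = q2[0] - q1[0], q2[1] - q1[1]
    # Scale: ratio of finger distances.
    scale = math.hypot(wx, wy) / math.hypot(vx, vy)
    # Rotation: signed angle between the two finger vectors.
    angle = math.atan2(wy, wx) - math.atan2(vy, vx)
    # Translation: displacement of the midpoint between the fingers.
    tx = (q1[0] + q2[0]) / 2 - (p1[0] + p2[0]) / 2
    ty = (q1[1] + q2[1]) / 2 - (p1[1] + p2[1]) / 2
    return scale, angle, (tx, ty)
```

The paper's techniques vary which of these components is driven by touch and which by gaze; the decomposition itself stays the same.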

    Software support for multitouch interaction: the end-user programming perspective

    Empowering users with tools for developing multitouch interaction is a promising step toward the materialization of ubiquitous computing. This survey frames the state of the art of existing multitouch software development tools from an end-user programming perspective. This research has been partially funded by the EU FP7 project meSch (grant agreement 600851) and the CREAx grant (Spanish Ministry of Economy and Competitivity, TIN2014-56534-R).

    Supporting collaborative work using interactive tabletop

    PhD Thesis. Collaborative working is a key to success for organisations. People work together around tables at work, home, school, and coffee shops. With the explosion of the internet and computer systems, there are a variety of tools to support collaboration in groups, such as groupware and tools that support online meetings. However, in co-located, face-to-face meetings, facial expressions, body language, and verbal communication have a significant influence on the group decision-making process, and people often have a natural preference for traditional pen-and-paper decision-support solutions in such situations. It is therefore a challenge to implement tools that rely on advanced technological interfaces, such as interactive multi-touch tabletops, to support collaborative work. This thesis proposes a novel tabletop application to support group work and investigates the effectiveness and usability of the proposed system. The requirements for the developed system are based on a review of previous literature and on requirements elicited from potential users. The innovative aspect of our system is that it allows the use of personal devices that afford some level of privacy to the participants in the group work; we expect that these personal devices may contribute to the effectiveness of tabletops in supporting collaborative work. For the evaluation experiments we chose the collaborative development of mind maps by groups, which has been investigated earlier as a representative form of collaborative work. Two controlled laboratory experiments were designed to examine the usability features and associated emotional attitudes for the tabletop mind-map application in comparison with the conventional pen-and-paper approach in the context of collaborative work. The evaluation clearly indicates that the combination of the tabletop and personal devices supports and encourages multiple people working collaboratively.
The comparison of the associated emotional attitudes indicates that the interactive tabletop facilitates the active involvement of participants in group decision making significantly more than the pen-and-paper conditions. The work reported here contributes significantly to our understanding of the usability and effectiveness of interactive tabletop applications in supporting collaborative work. Funded by the Royal Thai Government.