
    Evaluating Controls for a Point and Shoot Mobile Game: Augmented Reality, Tilt and Touch

    Controls based on Augmented Reality (AR), Tilt and Touch were evaluated in a point-and-shoot game for mobile devices. A user study (n=12) compared the three control schemes in terms of player experience and accuracy. Tilt and AR controls provided players with more enjoyment, immersion and accuracy than Touch; nonetheless, Touch caused fewer nuisances and was playable in a wider range of situations. Despite current technical limitations, we suggest incorporating AR controls into mobile games that can support them. Nowadays, AR controls can be implemented on handheld devices as easily as the more established Tilt and Touch controls. However, this study is the first comparison of the three, so its findings should be of interest to game developers.
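A minimal sketch of a Tilt control of this kind, built on the standard browser orientation API. It is not the control scheme from the study; the sensitivity constant and the assumed resting angle are illustrative.

```typescript
// Tilt-aiming sketch (illustrative, not the paper's implementation).
const SENSITIVITY = 8;  // pixels per degree of tilt (assumed)
const REST_BETA = 45;   // assumed comfortable holding angle, in degrees

const crosshair = { x: window.innerWidth / 2, y: window.innerHeight / 2 };

function clamp(v: number, lo: number, hi: number): number {
  return Math.min(hi, Math.max(lo, v));
}

window.addEventListener('deviceorientation', (e: DeviceOrientationEvent) => {
  if (e.gamma === null || e.beta === null) return;
  // gamma: left/right tilt, beta: front/back tilt, both in degrees.
  crosshair.x = clamp(window.innerWidth / 2 + e.gamma * SENSITIVITY,
                      0, window.innerWidth);
  crosshair.y = clamp(window.innerHeight / 2 + (e.beta - REST_BETA) * SENSITIVITY,
                      0, window.innerHeight);
});
```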

    An Approach of Integrating Communication Services in Applications for Android-Based Digital TV Receivers

    Digital TV receivers are becoming increasingly powerful devices that allow consumers not only to watch television broadcasts but also to access the Internet or communicate with other devices on the same local area network, through either an Ethernet or a wireless connection. As the living room is a place where family and friends gather and socialize, the possibility of playing informal games using the television set as the interaction device is very attractive. This paper presents an application that integrates the new communication capabilities of digital TV receivers running the Android OS. The application is a game that shows its content overlaid on top of a television program, while Android mobile devices are used as controllers. The performance of the application is tested by measuring the response times of the various communication services and by analyzing feedback from a selected group of users.
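A minimal sketch of the phone-as-controller messaging such an architecture implies, assuming a WebSocket transport and a JSON message shape; the endpoint, message format and port are assumptions, not the paper's protocol.

```typescript
// Hypothetical controller-to-receiver link over the local network.
type ControllerEvent =
  | { kind: 'button'; id: string; pressed: boolean }
  | { kind: 'tilt'; x: number; y: number };

class ControllerLink {
  private socket: WebSocket;

  constructor(receiverUrl: string) {
    this.socket = new WebSocket(receiverUrl); // receiver address assumed
  }

  send(event: ControllerEvent): void {
    if (this.socket.readyState === WebSocket.OPEN) {
      // The timestamp lets the receiver measure response times.
      this.socket.send(JSON.stringify({ ...event, t: Date.now() }));
    }
  }
}

const link = new ControllerLink('ws://192.168.1.20:8080/game'); // assumed
link.send({ kind: 'button', id: 'fire', pressed: true });
```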

    Automatic Speed Control For Navigation in 3D Virtual Environment

    As technology progresses, the scale and complexity of 3D virtual environments increase proportionally. This leads to multiscale virtual environments: environments that contain groups of objects with extremely unequal levels of scale. Ideally, the user should be able to navigate such environments efficiently and robustly, yet most previous methods for automatically controlling navigation speed do not generalize well to environments with widely varying scales. I present an improved method to automatically control the navigation speed of the user in 3D virtual environments. The main benefit of my approach is that it automatically adapts the navigation speed in multiscale environments in a manner that enables efficient navigation with maximum freedom, while still avoiding collisions. The results of a usability test show a significant reduction in completion time for a multiscale navigation task.
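A common baseline for this problem ties navigation speed to the distance of the nearest geometry, so travel is fast in open space and slow (collision-safe) near surfaces. The sketch below illustrates only that baseline, not the thesis's improved method; castRayDistance is an assumed scene query and the constants are illustrative.

```typescript
// Distance-scaled navigation speed (generic baseline sketch).
const SPEED_FACTOR = 0.5;  // fraction of the free distance covered per second
const MIN_SPEED = 0.01;    // avoid stalling when touching geometry
const MAX_SPEED = 1e6;     // cap for very large scales

function navigationSpeed(nearest: number): number {
  // Proportional control: halving the free space halves the speed, so the
  // camera cannot overshoot the nearest surface in a single step.
  return Math.min(MAX_SPEED, Math.max(MIN_SPEED, nearest * SPEED_FACTOR));
}

function nearestDistance(
  castRayDistance: (dir: [number, number, number]) => number,
): number {
  // Sample a few rays around the view axis (assumed scene query).
  const dirs: [number, number, number][] = [
    [0, 0, -1], [0.3, 0, -1], [-0.3, 0, -1], [0, 0.3, -1], [0, -0.3, -1],
  ];
  return Math.min(...dirs.map(castRayDistance));
}
```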

    mobileWheel: A Mobile Driving Station

    Current mobile devices (e.g., smartphones) are equipped with several sensors that allow different forms of user interaction. They also offer several connectivity options and growing computing power, which supports their use in new Human-Computer Interaction (HCI) scenarios. This paper presents the mobileWheel, a system that exploits the capabilities of current mobile devices as a means of interacting with a real-time graphical driving simulation running on a desktop computer. The application on the mobile device acquires data from various sensors (focusing on the 3D accelerometer) and also provides different types of feedback to the user. The system represents a ubiquitous, simple and affordable alternative to the traditional control of virtual vehicles in driving simulators and could also be applied in other, similar architectures. To evaluate and validate this approach, several tests were conducted with volunteer users. The control mode in which the virtual vehicle is fully controlled by the accelerometer had the highest acceptance and produced the best results.
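A plausible reconstruction of the accelerometer-as-steering-wheel idea, assuming a roll angle derived from the gravity vector and mapped to a normalized steering value; the axis choice and constants are assumptions, not the paper's mapping.

```typescript
// Hypothetical steering-wheel mapping from device motion data.
let steering = 0; // -1 = full left, +1 = full right

window.addEventListener('devicemotion', (e: DeviceMotionEvent) => {
  const g = e.accelerationIncludingGravity;
  if (!g || g.x === null || g.z === null) return;
  // Roll of the device around its long axis, in radians.
  const roll = Math.atan2(g.x, g.z);
  const MAX_ROLL = Math.PI / 4; // 45 degrees of tilt = full lock (assumed)
  steering = Math.max(-1, Math.min(1, roll / MAX_ROLL));
  // The steering value would then be streamed to the desktop simulator.
});
```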

    Building Web Based Programming Environments for Functional Programming

    Functional programming offers an accessible and powerful algebraic model for computing. JavaScript is the language of the ubiquitous Web, but it does not support functional programs well due to its single-threaded, asynchronous nature and lack of rich control flow operators. The purpose of this work is to extend JavaScript to a language environment that satisfies the needs of functional programs on the Web. This extended language environment uses sophisticated control operators to provide an event-driven functional programming model that cooperates with the browser's DOM, along with synchronous access to JavaScript's asynchronous APIs. The results of this work are used toward two projects: (1) a programming environment called WeScheme that runs in the web browser and supports a functional programming curriculum, and (2) a tool-chain called Moby that compiles event-driven functional programs to smartphones, with access to phone-specific features.
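A rough modern analogue of that "synchronous access to asynchronous APIs" style, sketched with a TypeScript generator driver rather than the continuation-based control operators the work actually uses; the analogy, not the thesis's machinery, is what the sketch shows.

```typescript
// A driver that suspends a generator at each async call, so the program body
// reads as straight-line code while still cooperating with the event loop.
function run<T>(gen: Generator<Promise<unknown>, T, unknown>): Promise<T> {
  function step(input: unknown): Promise<T> {
    const { value, done } = gen.next(input);
    if (done) return Promise.resolve(value as T);
    return (value as Promise<unknown>).then(step);
  }
  return step(undefined);
}

const sleep = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

run(function* () {
  console.log('waiting...');
  yield sleep(1000); // looks synchronous, yields to the browser meanwhile
  console.log('done');
});
```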

    Designing a Natural User Interface to Support Information Sharing among Co-Located Mobile Devices

    Users of mobile devices share their information through various methods supported by those devices. However, the information sharing process of these methods is typically redundant and sometimes tedious, because it may require the user to repeatedly perform a series of steps to share one or more selected files with another individual. The proliferation of mobile devices enables new, more intuitive, and less complicated approaches to information sharing in the field of mobile computing. The aim of this paper is to present MotionShare, a natural user interface (NUI) application that supports information sharing among co-located mobile devices. Unlike existing systems, MotionShare does not rely on additional assisting technologies, such as an external camera, to determine the positions of devices in a spatial environment. An analytical evaluation investigated the accuracy of device positioning and gesture recognition, with positive results. An empirical evaluation investigated usability issues; its results showed high levels of user satisfaction and that participants preferred touch gestures to point gestures.
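A sketch of the kind of gesture classification such a system needs: turning a touch flick into a compass bearing that could be matched against the positions of nearby devices. The thresholds and the dispatch function are assumptions, not MotionShare's actual recognizer.

```typescript
// Hypothetical flick classifier; thresholds and sendToDeviceAt are assumed.
interface Point { x: number; y: number; t: number; }
let start: Point | null = null;

declare function sendToDeviceAt(bearingDegrees: number): void; // hypothetical

function onTouchStart(e: TouchEvent): void {
  const touch = e.touches[0];
  start = { x: touch.clientX, y: touch.clientY, t: Date.now() };
}

function onTouchEnd(e: TouchEvent): void {
  if (!start) return;
  const touch = e.changedTouches[0];
  const dx = touch.clientX - start.x;
  const dy = touch.clientY - start.y;
  const dt = Date.now() - start.t;
  start = null;
  // A flick must be fast and long enough to count (thresholds assumed).
  if (dt > 300 || Math.hypot(dx, dy) < 50) return;
  const bearing = (Math.atan2(dx, -dy) * 180) / Math.PI; // 0 degrees = up
  sendToDeviceAt(bearing);
}
```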

    The Virtual Armory: Virtual Jousting Simulator

    This project presents a sport of the past, using current technologies to recreate the experience of jousting for visitors to the Higgins Armory Museum. Through collaboration with museum staff, intensive historical research, and a rigorous, iterative software development cycle, the project team developed a jousting simulation built on Java, Flash, TCP/IP sockets, Bluetooth and XML. Nintendo Wii remotes, embedded in a lance stub and attached to horse reins, were used to further enhance realism in the user-application interface.
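A minimal sketch of the TCP/XML relay such an architecture implies, assuming newline-delimited <lance .../> readings; the element name, attributes and port are hypothetical, and the parsing is deliberately naive.

```typescript
// Hypothetical TCP relay for controller readings.
import * as net from 'net';

const server = net.createServer(socket => {
  let buffer = '';
  socket.on('data', chunk => {
    buffer += chunk.toString('utf8');
    let end: number;
    while ((end = buffer.indexOf('\n')) >= 0) {
      const line = buffer.slice(0, end);
      buffer = buffer.slice(end + 1);
      // One <lance .../> element per line (format assumed).
      const m = /<lance x="([-\d.]+)" y="([-\d.]+)" z="([-\d.]+)"\s*\/>/.exec(line);
      if (m) {
        const [x, y, z] = m.slice(1).map(Number);
        console.log(`lance accel: ${x} ${y} ${z}`); // forward to the simulation
      }
    }
  });
});

server.listen(4555); // port assumed
```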

    What you see is what you feel: on the simulation of touch in graphical user interfaces

    This study introduces a novel method of simulating touch with merely visual means. Interactive animations are used to create an optical illusion that evokes haptic percepts like stickiness, stiffness and mass within a standard graphical user interface. The technique, called optically simulated haptic feedback, exploits the dominance of the visual over the haptic modality and the general human tendency to integrate information from the various senses.

The study began with an aspiration to increase the sensorial qualities of the graphical user interface. With the introduction of the graphical user interface – and in particular the desktop metaphor – computers have become accessible to almost anyone; all over the world, people from various cultures use the same icons, folders, buttons and trashcans. From a sensorial point of view, however, this computing paradigm is still extremely limited.

Touch can play a powerful role in communication. It can offer an immediacy and intimacy unparalleled by words or images. Although few doubt this intrinsic value of touch perception in everyday life, examples of modern technology in which human-machine communication uses the tactile and kinesthetic senses as additional channels of information flow are scarce. Hence, it has often been suggested that improvements in the sensorial qualities of computers could lead to more natural interfaces. Various researchers have been creating scenarios and technologies intended to enrich the sensorial qualities of our digital environment. Some have developed mechanical force feedback devices that enable people to experience haptics while interacting with a digital display. Others have suggested that the computer should ‘disappear’ into the environment and have proposed tangible objects as a means of connecting the digital and the physical environment.

While the scenarios of force feedback, tangible interaction and the disappearing computer are maturing, millions of people still work with a desktop computer interface every day. In spite of its obvious drawbacks, the desktop computing model has penetrated deeply into our society and cannot be expected to disappear overnight. Radically different computing paradigms will require the development of radically different hardware; this takes time, and it is as yet uncertain when, if ever, other computing paradigms will replace the current desktop setup. It is for that reason that we pursued another approach towards physical computing. Inspired by Renaissance painters, who centuries ago invented illusionary techniques like perspective and trompe-l'oeil to increase the presence of their paintings, we aim to improve the physicality of the graphical user interface without resorting to special hardware.

Optically simulated haptic feedback, described in this thesis, has much in common with mechanical force feedback systems, except that in mechanical systems the location of the cursor is manipulated as a result of the force sent to the haptic device (a force feedback mouse, trackball, etc.), whereas in our system the cursor location is manipulated directly, resulting in purely visual force feedback. By applying tiny displacements to the cursor's movement, tactile sensations like stickiness, touch, or mass can be simulated.

In chapter 2 we suggest that this active cursor technique can be applied to create richer interactions without the need for special hardware: the cursor channel is transformed from an input-only channel into an input/output channel.
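To make the active cursor idea concrete, here is a minimal sketch assuming a pointer-lock setup in which the application draws its own cursor and is therefore free to displace it. A radial "hole" field pulls the cursor toward its center; a bump would use the opposite sign. The field shape and gain are illustrative, not the thesis's parameters.

```typescript
// Active-cursor sketch: displace a self-drawn cursor down a simulated slope.
const cursor = { x: 300, y: 300 };
const hole = { x: 400, y: 300, radius: 80, strength: 0.6 }; // values assumed

document.addEventListener('mousemove', (e: MouseEvent) => {
  // Raw user motion; movementX/Y require an active pointer lock.
  let dx = e.movementX;
  let dy = e.movementY;

  const offX = cursor.x - hole.x;
  const offY = cursor.y - hole.y;
  const dist = Math.hypot(offX, offY);
  if (dist > 0 && dist < hole.radius) {
    // Pull toward the center: falling in feels easy, climbing out takes effort.
    dx -= (offX / dist) * hole.strength;
    dy -= (offY / dist) * hole.strength;
  }
  cursor.x += dx;
  cursor.y += dy;
  // redrawCursor(cursor.x, cursor.y); // hypothetical renderer
});
```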
The active cursor displacements can be used to create various (dynamic) slopes as well as textures and material properties, which can provide the user with feedback while navigating the on-screen environment.

In chapter 3 the perceptual illusion of touch, resulting from the dominance of the visual over the haptic modality, is described in the larger context of prior research and tested experimentally. Using both the active cursor technique and a mechanical force feedback device, we generated bump and hole structures. In a controlled experiment the perception of the slopes was measured, comparing the optical with the mechanical simulation. The results show that people can recognize optically simulated bump and hole structures, and that active cursor displacements influence the haptic perception of bumps and holes. Depending on the simulated strength of the force, optically simulated haptic feedback can take precedence over mechanically simulated haptic feedback, but also the other way around. When optically and mechanically simulated haptic feedback counteract each other, however, the weight attributed to each source of haptic information differs between users. It is concluded that active cursor displacements can be used to optically simulate the operation of mechanical force feedback devices.

An obvious application of optically simulated haptic feedback in graphical user interfaces is to assist the user in pointing at icons and objects on the screen. Given the pervasiveness of pointing in graphical interfaces, every small improvement in a target-acquisition task represents a substantial improvement in usability. Can active cursor displacements be applied to help users reach their goal? In chapter 4 we test the usability of optically simulated haptic feedback in a pointing task, again in comparison with the force feedback generated by a mechanical device. In a controlled Fitts'-law type experiment, subjects were asked to point and click at targets of different sizes and distances. The results show that rendering hole-type structures underneath the targets improves the effectiveness, efficiency and satisfaction of the target-acquisition task. Optically simulated haptic feedback results in lower error rates, more satisfaction, and a higher index of performance, which can be attributed to the shorter movement times realized for the smaller targets. For larger targets, optically simulated haptic feedback resulted in movement times comparable to those of mechanically simulated haptic feedback.

Since current graphical interfaces are not designed with tactility in mind, the development of novel interaction styles is also an important research path. Before optically simulated haptic feedback can be fully brought into play in more complex interaction styles, designers and researchers need to experiment further with the technique. In chapter 5 we describe a software prototyping toolkit, called PowerCursor, which enables designers to create interaction styles using optically simulated haptic feedback without elaborate programming. The software engine consists of a set of ready-made force field objects – holes, hills, ramps, rough and slick objects, walls, whirls, and more – that can be added to any Flash project, as well as force behaviours that can be added to custom-made shapes and objects. These basic building blocks can be combined to create more complex and dynamic force objects; a sketch of how such composable fields might look is given below.
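The PowerCursor engine itself is implemented in Flash, so the TypeScript interface below is only a guess at the underlying concept, not the toolkit's actual API: each force object maps a cursor position to a displacement vector, and the engine sums the active fields each frame.

```typescript
// Composable force fields (conceptual sketch).
type Vec = { x: number; y: number };
type ForceField = (cursor: Vec) => Vec;

// A ramp: constant push inside a rectangular region.
function ramp(x0: number, y0: number, x1: number, y1: number,
              push: Vec): ForceField {
  return c =>
    c.x >= x0 && c.x <= x1 && c.y >= y0 && c.y <= y1 ? push : { x: 0, y: 0 };
}

// A whirl: push perpendicular to the offset from the center.
function whirl(cx: number, cy: number, radius: number,
               gain: number): ForceField {
  return c => {
    const ox = c.x - cx, oy = c.y - cy;
    const d = Math.hypot(ox, oy);
    if (d === 0 || d > radius) return { x: 0, y: 0 };
    return { x: (-oy / d) * gain, y: (ox / d) * gain };
  };
}

// The engine applies the summed displacement to the cursor each frame.
function totalForce(fields: ForceField[], cursor: Vec): Vec {
  return fields.reduce((acc, f) => {
    const v = f(cursor);
    return { x: acc.x + v.x, y: acc.y + v.y };
  }, { x: 0, y: 0 });
}
```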
Such a setup should allow users of the toolkit to creatively design their own interaction styles with optically simulated haptic feedback. The toolkit is implemented in Adobe Flash and can be downloaded at www.powercursor.com. Furthermore, in chapter 5 we present a preliminary framework for the expected applicability of optically simulated haptic feedback. Illustrated with examples created so far with the beta version of the PowerCursor toolkit, we discuss ideas for novel interaction styles. Besides assisting the user while navigating, optically simulated haptic feedback might be applied to create so-called mixed-initiative interfaces; one can, for instance, think of an installation wizard that guides the cursor towards the recommended next step. Furthermore, since optically simulated haptic feedback can communicate material properties of textures or 3D objects, it can be applied to create aesthetically pleasing interactions, which are becoming more relevant as computers migrate into domains other than the office environment. Finally, we discuss opportunities for applications outside the desktop computer model, and how, in principle, optically simulated haptic feedback can play a role in any graphical interface where the input and output channels are decoupled.

In chapter 6 we draw conclusions and discuss future directions. We conclude that optically simulated haptic feedback can increase the physicality and quality of current graphical user interfaces without resorting to specialized hardware. Users are able to recognize haptic structures simulated by applying active cursor displacements to their mouse movements. Our technique of simulating haptic feedback optically opens up an additional communication channel with the user that can enhance the usability of the graphical interface. However, the active cursor technique cannot be expected to replace mechanical haptic feedback altogether, since it can be applied only in combination with a visual display and thus will not work for visually impaired people. Rather, we expect that the ability to employ tactile interaction styles in a standard graphical user interface could catalyze the development of novel physical interaction styles and, in the long term, might instigate the acceptance of haptic devices.

With this research we hope to have contributed to a more sensorial and richer graphical user interface. Moreover, we have aimed to increase awareness and understanding of media technology and simulations in general. Our scientific research results are therefore deliberately presented within a social-cultural context that reflects on the dominance of the visual modality in our society and the ever-increasing role of media and simulations in people's everyday lives.

    Designing for Effective Freehand Gestural Interaction
