
    An investigation into alternative human-computer interaction in relation to ergonomics for gesture interface design

    Recent, innovative developments in the field of gesture interfaces as input techniques have the potential to provide a basic, lower-cost, point-and-click function for graphical user interfaces (GUIs). Since these gesture interfaces are not yet widely used, and no tilt-based gesture interface is currently on the market, there is neither an international standard for the testing procedure nor a guideline for their ergonomic design and development. Hence, the research area demands more design case studies on a practical basis. The purpose of the research is to investigate the design factors of gesture interfaces for the point-and-click task in the desktop computer environment. The key function of gesture interfaces is to transfer a specific body movement, in particular arm movement, into cursor movement on the two-dimensional graphical user interface (2D GUI) in real time. The initial literature review identified limitations related to cursor movement behaviour with gesture interfaces. Since cursor movement is the machine output of the gesture interfaces to be designed, a new accuracy measure based on the calculation of cursor movement distance, together with an associated model, was proposed in order to validate continuous cursor movement. Furthermore, a design guideline with detailed design requirements and specifications for tilt-based gesture interfaces was suggested. In order to collect human performance data and cursor movement distances, a graphical measurement platform was designed and validated with an ordinary mouse. Since there are typically two types of gesture interface, i.e. the sweep-based and the tilt-based, and no commercial tilt-based gesture interface has yet been developed, a commercial sweep-based gesture interface, namely the P5 Glove, was studied, and the causes and effects of its discrete cursor movement on usability were investigated.
According to the proposed design guideline, two versions of the tilt-based gesture interface were designed and validated through an iterative design process. The inter-related phenomena and results from the trials undertaken were analysed and discussed. The research has contributed new knowledge through the design improvement of tilt-based gesture interfaces and the improvement of discrete cursor movement by eliminating manual error compensation. This research reveals a relation between cursor movement behaviour and the adjusted R² for the prediction of movement time across models expanded from Fitts' Law. In such a situation, the actual working area and the joint ranges are large and appreciably different from those that had been planned. Further studies are suggested. The research was associated with the University Alliance Scheme, technically supported by Freescale Semiconductor Co., U.S.
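The movement-time models mentioned above build on Fitts' Law. A minimal sketch of the standard Shannon formulation and the R² comparison it implies (the coefficients here are illustrative placeholders, not the thesis's fitted values):

```python
import math

def fitts_mt(distance, width, a=0.2, b=0.15):
    """Shannon formulation of Fitts' Law: MT = a + b * log2(D/W + 1).
    Coefficients a and b are hypothetical; in practice they are fitted
    to observed movement times by linear regression."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

def r_squared(observed, predicted):
    """Coefficient of determination, the measure used to compare how
    well competing movement-time models fit the observed data."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot
```

A higher adjusted R² for one expanded model over another would indicate that it better predicts movement time for a given cursor movement behaviour.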

    Adapting Multi-touch Systems to Capitalise on Different Display Shapes

    The use of multi-touch interaction has become more widespread. With this increase in use, the change in input technique has prompted developers to reconsider other elements of typical computer design, such as the shape of the display. There is an emerging need for software to be capable of functioning correctly with different display shapes. This research asked: 'What must be considered when designing multi-touch software for use on different shaped displays?' The results of two structured literature surveys highlighted the lack of support for multi-touch software that utilises more than one display shape. From a prototype system, observations on the issues of using different display shapes were made. An evaluation framework to judge potential solutions to these issues in multi-touch software was produced and employed. Solutions highlighted as suitable were implemented into existing multi-touch software. A structured evaluation was then used to determine the success of the design and implementation of the solutions. The hypothesis of the evaluation stated that the implemented solutions would allow the applications to be used with a range of different display shapes in such a way that did not leave visual content items unfit for purpose. The majority of the results conformed to this hypothesis despite minor deviations from the designs of the solutions being discovered in the implementation. This work highlights how developers, when producing multi-touch software intended for more than one display shape, must consider the issue of visual content items being occluded. Developers must produce, or identify, solutions to this issue which conform to the criteria outlined in this research. This research shows that it is possible for multi-touch software to be made display-shape independent.
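The occlusion issue above can be made concrete with a small geometric check. A minimal sketch, assuming a circular display as one example of a non-rectangular shape (the function name and the corner-testing approach are illustrative, not the thesis's actual solution):

```python
def rect_inside_circle(x, y, w, h, cx, cy, r):
    """Check whether an axis-aligned content item (x, y, w, h) lies
    fully within a circular display of centre (cx, cy) and radius r.
    Any corner falling outside the circle means part of the item would
    be clipped by the display edge, i.e. occluded from the user."""
    corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    return all((px - cx) ** 2 + (py - cy) ** 2 <= r ** 2
               for px, py in corners)
```

Software targeting multiple display shapes could run a test like this per shape and reposition or resize items that fail it.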

    Enhancing User Immersion and Virtual Presence in Interactive Multiuser Virtual Environments through the Development and Integration of a Gesture-Centric Natural User Interface Developed from Existing Virtual Reality Technologies

    Immersion, referring to the level of physical or psychological submergence of a user within a virtual space relative to that user's consciousness of the real-world environment, has predominantly been established as an indispensable part of interactive media design. This is most prevalent in Virtual Reality (VR) platforms, as their applications typically rely on user believability. Among the wide variety of possible methodologies for enhancing this feature, the collectively recognised paradigm lies in the emphasis of naturalism in the design of the virtual system [7]. Though widely used by some specialised VR applications [4], such concepts are yet to be fully explored in more contemporary virtual systems such as Social Immersive Virtual Environments (SIVEs). The focus of the study described in this paper is the set of techniques being developed to enhance user immersion, virtual presence and co-presence in a SIVE application, through the design and integration of a VR-based Natural User Interface (NUI) that allows users to interact naturally and intuitively with the virtual environment and other networked users through full-body gesture controls. These gestural controls prioritise the emulation of the virtual equivalent of such real-world interactions, whilst also providing an interface for the seamless and unobtrusive translation of the user's real-world physical state into the virtual environment through intuitive user-to-virtual-avatar proprioceptive coordination. © Springer International Publishing Switzerland 2014

    Light on horizontal interactive surfaces: Input space for tabletop computing

    In the last 25 years we have witnessed the rise and growth of interactive tabletop research, both in academic and in industrial settings. The rising demand for the digital support of human activities motivated the need to bring computational power to table surfaces. In this article, we review the state of the art of tabletop computing, highlighting core aspects that frame the input space of interactive tabletops: (a) developments in hardware technologies that have caused the proliferation of interactive horizontal surfaces and (b) issues related to new classes of interaction modalities (multitouch, tangible, and touchless). A classification is presented that aims to give a detailed view of the current development of this research area and to define opportunities and challenges for novel touch- and gesture-based interactions between the human and the surrounding computational environment. © 2014 ACM. This work has been funded by the Integra (Amper Sistemas and CDTI, Spanish Ministry of Science and Innovation) and TIPEx (TIN2010-19859-C03-01) projects and the Programa de Becas y Ayudas para la Realización de Estudios Oficiales de Máster y Doctorado en la Universidad Carlos III de Madrid, 2010.

    Eye-gaze interaction techniques for use in online games and environments for users with severe physical disabilities.

    Multi-User Virtual Environments (MUVEs) and Massively Multi-player Online Games (MMOGs) are a popular, immersive genre of computer game. For some disabled users, eye-gaze offers the only input modality with the potential for sufficiently high bandwidth to support the range of time-critical interaction tasks required to play. Although there has been much research into gaze interaction techniques for computer interaction over the past twenty years, much of it has focused on 2D desktop application control. There has been some work investigating gaze interaction as an additional input device for gaming, but very little on using gaze on its own. Further, configuring these techniques usually requires expert knowledge, often beyond the capabilities of a parent, carer or support worker. The work presented in this thesis addresses these issues through the investigation of novel gaze-only interaction techniques. These aim to enable at least a beginner level of game play, together with a means of adapting the techniques to suit an individual. To achieve this, a collection of novel gaze-based interaction techniques has been evaluated through empirical studies. These have been encompassed within an extensible software architecture that has been made available for free download. Further, a metric of reliability is developed that, when used as a measure within a specially designed diagnostic test, allows the interaction technique to be adapted to suit an individual. Methods of selecting interaction techniques based upon the game task are also explored, and a novel methodology based on expert task analysis is developed to aid selection.
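Gaze-only selection is commonly built on dwell time: the gaze must rest on the same target for a threshold duration before a selection fires. A minimal sketch of that general pattern (class name, threshold and reset behaviour are illustrative assumptions, not the thesis's actual techniques):

```python
class DwellSelector:
    """Gaze-only selection by dwell: fire when the gaze stays on the
    same target for at least dwell_ms milliseconds. The 800 ms default
    is a hypothetical value; in practice the threshold would be tuned
    per individual, e.g. via a diagnostic test."""

    def __init__(self, dwell_ms=800):
        self.dwell_ms = dwell_ms
        self.current = None      # target currently under the gaze
        self.entered_at = None   # timestamp when the gaze entered it

    def update(self, target, timestamp_ms):
        """Feed one gaze sample; return the target on selection, else None."""
        if target != self.current:
            # Gaze moved to a new target (or off all targets): restart timing.
            self.current, self.entered_at = target, timestamp_ms
            return None
        if target is not None and timestamp_ms - self.entered_at >= self.dwell_ms:
            self.entered_at = timestamp_ms  # reset to avoid repeat-firing
            return target
        return None
```

For time-critical game tasks, a fixed dwell is often too slow, which is one reason gaze-only game control needs the richer techniques the thesis investigates.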

    An Exploration of Multi-touch Interaction Techniques

    Research in multi-touch interaction has typically focused on direct spatial manipulation; techniques have been created to achieve the most intuitive mapping between the movement of the hand and the resultant change in the virtual object. As we attempt to design for more complex operations, the effectiveness of spatial manipulation as a metaphor becomes weak. We introduce two new contributions to multi-touch computing: a gesture recognition system and a new interaction technique. I present Multi-Tap Sliders, a new interaction technique for operating in what we call non-spatial parametric spaces. Such spaces do not have an obvious literal spatial representation (e.g. exposure, brightness, contrast and saturation for image editing). The multi-tap sliders encourage the user to keep her visual focus on the target, instead of requiring her to look back at the interface. My research emphasizes ergonomics, clear visual design, and fluid transition between modes of operation. Through a series of iterations, I develop a new technique for quickly selecting and adjusting multiple numerical parameters. Evaluations of multi-tap sliders show improvements over traditional sliders. To facilitate further research on multi-touch gestural interaction, I developed mGestr: a training and recognition system using hidden Markov models for designing a multi-touch gesture set. Our evaluation shows successful recognition rates of up to 95%. The recognition framework is packaged as a service for easy integration with existing applications.
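HMM-based gesture recognition of the kind mGestr describes generally works by training one model per gesture and classifying a new touch sequence by whichever model assigns it the highest likelihood. A minimal sketch using the standard forward algorithm for discrete HMMs (the toy parameters and function names are illustrative, not mGestr's implementation):

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Forward algorithm for a discrete HMM: log-probability of an
    observation sequence under one gesture's model.
    start[i]   : initial probability of state i
    trans[i][j]: transition probability from state i to j
    emit[i][k] : probability that state i emits symbol k"""
    n = len(start)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [emit[j][o] * sum(alpha[i] * trans[i][j] for i in range(n))
                 for j in range(n)]
    return math.log(sum(alpha))

def classify(obs, models):
    """Return the gesture whose HMM assigns obs the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))
```

In a real system, each gesture's parameters would be learned from training examples (e.g. via Baum-Welch) over quantised touch-trajectory features rather than hand-set as here.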