
    Geometric anticipation: assisting users in 2D layout tasks

    We describe an experimental interface that anticipates a user's intentions and accommodates predicted changes in advance. Our canonical example is an interactive version of "magnetic poetry" in which rectangular blocks containing single words can be juxtaposed to form arbitrary sentences or "poetry." The user can rearrange the blocks at will, forming and dissociating word sequences. A crucial attribute of the blocks in our system is that they anticipate insertions and gracefully rearrange themselves in time to make space for a new word or phrase. The challenges in creating such an interface are threefold: 1) the user's intentions must be inferred from noisy input, 2) arrangements must be altered smoothly and intuitively in response to anticipated changes, and 3) new and changing goals must be handled gracefully at any time, even in mid-animation. We describe a general approach for handling the dynamic creation and deletion of organizational goals. Fluid motion is achieved by continually applying and correcting goal-directed forces to the objects. Future applications of this idea include the manipulation of text and graphical elements within documents and the manipulation of symbolic information such as equations.
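
    The anticipation mechanism lends itself to a force-based animation loop. Below is a minimal sketch of continually applying goal-directed forces, assuming a simple one-dimensional spring-damper model; the names (Block, k_goal, damping) and the constants are illustrative and not taken from the paper.

```python
# Minimal sketch of goal-directed forces for anticipatory layout.
# The spring-damper model and all names/constants are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Block:
    x: float           # current position (1D for brevity)
    v: float = 0.0     # current velocity
    goal: float = 0.0  # anticipated target position

def step(blocks, dt=1 / 60, k_goal=40.0, damping=8.0):
    """Advance one animation frame: pull each block toward its current goal."""
    for b in blocks:
        force = k_goal * (b.goal - b.x) - damping * b.v  # goal-directed force plus damping
        b.v += force * dt
        b.x += b.v * dt

def anticipate_insertion(blocks, index, width):
    """Shift the goals of later blocks to open a gap for the incoming word.
    Goals can be reassigned at any time, even in mid-animation."""
    for b in blocks[index:]:
        b.goal += width
```

    Because each frame only corrects toward whatever the current goals are, reassigning goals mid-animation naturally yields the fluid, interruptible motion the abstract describes.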

    Gestural interface for defining assembly conditions to generate digital mock-ups

    We present an experimental prototype called GEGROSS (GEsture & Geometric RecOnstruction based Sketch System), which aims to simplify as much as possible the process of assembling the parts needed to create a digital mock-up by encoding the assembly conditions through a language of graphical gestures. The elements used to make this possible were the ACIS geometric kernel, the 3D DCM assembly constraint manager from D-Cubed, and the CALI library for the definition of gestural interfaces. The paper presents the strategy followed to integrate these complex tools and the gestural alphabet developed for the different assembly conditions. This work was funded by the Universidad de La Laguna through the "Programa de Ayudas a la Investigación para la Formación y Promoción del Profesorado. Ayudas para Estancias de Investigadores Invitados" and by the Generalidad Valenciana through project CTIDIB/2002/51 of the 2002 call for R&D projects.
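
    To illustrate how a gestural alphabet can drive an assembly constraint manager, the sketch below maps recognized gesture labels to assembly conditions and hands them to a solver object. The gesture names, the AssemblyCondition values and the solver.add_constraint call are hypothetical stand-ins; the actual APIs of CALI, ACIS and 3D DCM are not shown here.

```python
# Illustrative dispatch from recognized gestures to assembly constraints.
# Gesture labels, conditions and the solver interface are assumptions.
from enum import Enum, auto

class AssemblyCondition(Enum):
    COINCIDENT = auto()
    CONCENTRIC = auto()
    PARALLEL = auto()
    DISTANCE = auto()

# Hypothetical gestural alphabet: gesture name -> assembly condition.
GESTURE_ALPHABET = {
    "cross": AssemblyCondition.COINCIDENT,
    "circle": AssemblyCondition.CONCENTRIC,
    "double_line": AssemblyCondition.PARALLEL,
    "arrow": AssemblyCondition.DISTANCE,
}

def apply_gesture(gesture_name, part_a, part_b, solver):
    """Translate a recognized gesture into a constraint and pass it to the
    constraint solver (stand-in for a manager such as 3D DCM)."""
    condition = GESTURE_ALPHABET.get(gesture_name)
    if condition is None:
        raise ValueError(f"Unrecognized gesture: {gesture_name}")
    solver.add_constraint(condition, part_a, part_b)
```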

    A new modeling interface for the pen-input displays

    Sketch interactions based on interpreting multiple pen markings into a 3D shape are easy to design but not to use. First, it is difficult for the user to memorize a complete set of pen markings for a given 3D shape. Second, the system must wait for the user to complete the sequence of pen markings, which often causes mode errors. To address these problems, we present a novel interaction framework suitable for interpretations based on single-stroke markings on pen-input displays; within this framework, 3D shape modeling operations are designed to create appropriate communication protocols.
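
    The single-stroke idea can be illustrated with a small dispatcher that interprets each completed stroke on its own and immediately invokes a modeling operation, so the system never waits for a multi-stroke sequence to finish. The classify_stroke heuristic and the operation names below are assumptions made for illustration, not the paper's actual protocol.

```python
# Sketch of single-stroke interpretation: one stroke, one operation,
# no pending multi-stroke state. Heuristics and operation names are assumed.

def classify_stroke(points):
    """Crude placeholder classifier: closed strokes extrude, open strokes cut."""
    (x0, y0), (xn, yn) = points[0], points[-1]
    closed = abs(x0 - xn) < 10 and abs(y0 - yn) < 10
    return "extrude" if closed else "cut"

OPERATIONS = {
    "extrude": lambda pts: print(f"extrude profile with {len(pts)} points"),
    "cut": lambda pts: print(f"cut along path with {len(pts)} points"),
}

def on_stroke_finished(points):
    """Called by the pen-input display as soon as the pen is lifted."""
    OPERATIONS[classify_stroke(points)](points)

# Example: a roughly closed rectangle is interpreted as an extrusion.
on_stroke_finished([(0, 0), (50, 0), (50, 50), (0, 50), (2, 3)])
```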

    User-defined multimodal interaction to enhance children's number learning

    Children today are already exposed to new technology and have experienced excellent number-learning applications at an early age. Despite that, most children's applications either fail in their interaction design or are not child-friendly. Involving children in the design phase of any children's application is therefore essential, as adults and developers do not know children's needs and requirements. In other words, designing children's computer applications adapted to children's capabilities is an important part of today's software development methodology. The goal of this research is to propose a new interaction technique and to evaluate children's number-learning performance. The new interaction technique was designed through participatory design, in which children are involved in the design process. A VisionMath interface was implemented with the proposed user-defined multimodal interaction dialogues to evaluate the children's learning ability and subjective satisfaction. An evaluation with 20 participants was conducted using usability testing methods. The results show a significant difference in number-learning performance between tactile interaction and multimodal interaction. This study reveals that the proposed user-defined multimodal interaction dialogue succeeded in providing a new interaction technique for children's number learning by offering an alternative input modality, and it potentially opens a rich field for future research.

    Unpacking Non-Dualistic Design: The Soma Design Case

    We report on a somaesthetic design workshop and the subsequent analytical work aiming to demystify what is entailed in a non-dualistic design stance on embodied interaction and why a first-person engagement is crucial to its unfoldings. However, as we will uncover through a detailed account of our process, these first-person engagements are deeply entangled with second- and third-person perspectives, sometimes even overlapping. The analysis furthermore reveals some strategies for bridging the body-mind divide by attending to our inner universe and dissolving or traversing dichotomies between inside and outside; individual and social; body and technology. By detailing the creative process, we show how soma design becomes a process of designing with and through kinesthetic experience, in turn letting us confront several dualisms that run like fault lines through HCI's engagement with embodied interaction.

    Gesture-based Human-Machine Interaction in Industrial Environments

    Traditional human-machine interaction systems for manufacturing processes use peripheral devices that require physical contact between the end user and the machines. This physical contact alters the elements that constitute the system through the transmission of undesirable particles such as oil, chemical substances and pollution. This Master's thesis offers a solution to this issue based on integrating a device, used by the game industry in entertainment applications, that enables human-machine interaction through a contact-free modality. Several options are available on the market, such as the Kinect sensor and the Leap Motion Controller (LMC). The solution used in this thesis focuses on the hand gesture-based device known as the LMC, whose promising features make it an attractive tool for future solutions in the industrial domain. The result was the integration of this gesture sensor with different technologies, enabling interaction via hand gestures with a monitoring system and with a robot cell that is part of a manufacturing system.
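
    A contact-free control loop of the kind described can be sketched as below, assuming a hypothetical read_gesture() wrapper around the Leap Motion Controller driver and a made-up mapping from hand gestures to monitoring-system and robot-cell commands; the real LMC SDK calls are not shown.

```python
# Sketch of gesture-based, contact-free machine control.
# read_gesture() and the gesture/command names are illustrative assumptions.
import time

# Hypothetical mapping from recognized hand gestures to machine commands.
GESTURE_COMMANDS = {
    "swipe_left":  "monitor.previous_screen",
    "swipe_right": "monitor.next_screen",
    "fist":        "robot_cell.stop",
    "open_palm":   "robot_cell.resume",
}

def read_gesture():
    """Stand-in for the LMC driver; would return the name of a recognized gesture."""
    return None  # no gesture detected in this stub

def control_loop(send_command, poll_hz=30):
    """Poll the sensor and forward mapped commands without any physical contact."""
    while True:
        gesture = read_gesture()
        if gesture in GESTURE_COMMANDS:
            send_command(GESTURE_COMMANDS[gesture])
        time.sleep(1.0 / poll_hz)
```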

    Context-aware gestural interaction in the smart environments of the ubiquitous computing era

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Technology is becoming pervasive and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have started to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interactions. Many issues are still open in this emerging domain and, in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of context information. The framework is conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity and the number of gestures in taxonomies, and to improve usability. To validate this framework, a proof of concept has been developed. A prototype has been built by implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests have been conducted to assess the gesture recognition accuracy and the usability of the interfaces developed following the proposed framework. The results show that the method provides optimal gesture recognition from very different viewpoints, while the usability tests have yielded high scores. Further investigation of the context information has been performed, tackling the problem of user status, intended here as human activity, and a technique based on an innovative application of electromyography is proposed. The tests show that the proposed technique achieves good activity recognition accuracy. The context is also treated as system status. In ubiquitous computing, the system can adopt different paradigms: wearable, environmental and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
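
    The notion of functional gestures can be illustrated by a small lookup that disambiguates one gesture through the current context, so a short gesture vocabulary covers many concrete actions. The bindings below are a made-up smart-home example, not taken from the thesis.

```python
# Sketch of context-aware resolution of functional gestures.
# The (gesture, context) -> action table is an illustrative example.
CONTEXT_BINDINGS = {
    ("activate", "living_room.lamp"): "lamp.on",
    ("activate", "kitchen.oven"):     "oven.preheat",
    ("dismiss",  "living_room.lamp"): "lamp.off",
    ("dismiss",  "kitchen.oven"):     "oven.off",
}

def resolve(gesture, context):
    """Map one functional gesture onto the action implied by the current context."""
    try:
        return CONTEXT_BINDINGS[(gesture, context)]
    except KeyError:
        return None  # unknown combination: ignore rather than guess

print(resolve("activate", "kitchen.oven"))  # -> "oven.preheat"
```

    The same "activate" gesture maps to different actions depending on context, which is how a functional vocabulary keeps the taxonomy small while reducing ambiguity.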

    Applying touch gesture to improve application accessing speed on mobile devices.

    The touch gesture shortcut is one of the most significant contributions to Human-Computer Interaction (HCI). It is used in many fields: e.g., performing web browsing tasks (i.e., moving to the next page, adding bookmarks, etc.) on a smartphone, manipulating a virtual object on a tabletop device and communicating between two touch screen devices. Compared with the traditional Graphical User Interface (GUI), the touch gesture shortcut is more efficient, more natural, more intuitive and easier to use. With the rapid development of smartphone technology, an increasing number of data items are accumulating on users' mobile devices, such as contacts, installed apps and photos. As a result, it has become troublesome to find a target item on a mobile device with a traditional GUI. For example, to find a target app, the user may have to slide and browse through several screens. This thesis addresses this challenge by proposing two alternative methods of using a touch gesture shortcut to find a target item (an app, as an example) on a mobile device. Current touch gesture shortcut methods either employ a universal built-in system-defined shortcut template or a gesture-item set that is defined by users before using the device. In either case, users need to learn or define the gestures first and then recall and draw them to reach the target item according to the template or predefined set. Evidence has shown that, compared with the GUI, the touch gesture shortcut has an advantage when performing several types of tasks, e.g., text editing, picture drawing, audio control, etc., but it is unknown whether it is quicker or more effective than the traditional GUI for finding target apps. This thesis first conducts an exploratory study to understand user memorisation of Personalized Gesture Shortcuts (PGS) for 15 frequently used mobile apps. An experiment is then conducted to investigate (1) the users' recall accuracy of the PGS for finding both frequently and infrequently used target apps, and (2) the speed with which users are able to access the target apps relative to the GUI. The results show that the PGS produced a clear speed advantage (1.3 s faster on average) over the traditional GUI, while there was an approximately 20% failure rate due to unsuccessful recall of the PGS. To address the unsuccessful recall problem, this thesis explores a new interactive approach based on the touch gesture shortcut that requires neither recall nor predefinition before use. Named the Intelligent Launcher in this thesis, it predicts and launches any intended target app from an unconstrained gesture drawn by the user. To explore how to achieve this, a third experiment investigates the relationship between the reasons underlying a user's gesture creation and the gesture shape (handwriting, non-handwriting or abstract) they used as their shortcut. According to the results, and unlike the existing approaches, the thesis proposes that the launcher should predict the user's intended app from three types of gestures: first, non-handwriting gestures, via the visual similarity between the gesture and the app's icon; second, handwriting gestures, via the app's library name plus functionality; and third, abstract gestures, via the app's usage history. In light of these findings, we designed and developed the Intelligent Launcher, which is based on the assumptions drawn from the empirical data. This thesis introduces the interaction, the architecture and the technical details of the launcher, and describes how the data from the third experiment are used to improve the predictions with a machine learning method, i.e., a Markov model. An evaluation experiment shows that the Intelligent Launcher achieved user satisfaction with a prediction accuracy of 96%. As of now, it is still difficult to know which type of gesture a user tends to use. Therefore, a fourth experiment, focused on exploring the factors that influence the choice of touch gesture shortcut type for accessing a target app, is also conducted in this thesis. The results show that (1) those who preferred a name-based method used it more consistently and created more letter gestures than those who preferred the other three methods; (2) those who preferred the keyword app search method created more letter gestures than other types; (3) those who preferred an iOS system created more drawing gestures than other types; (4) letter gestures were more often used for frequently used apps, whereas drawing gestures were more often used for infrequently used apps; (5) participants tended to use the same creation method as their preferred method on different days of the experiment. This thesis contributes to the body of Human-Computer Interaction knowledge. It proposes two alternative methods that are more efficient and flexible for finding a target item among a large number of items. The PGS method has been confirmed as effective and has a clear speed advantage. The Intelligent Launcher has been developed and demonstrates a novel way of predicting a target item from the gesture the user draws. The findings concerning the relationship between the user's choice of gesture for the shortcut and individual factors have informed the design of a more flexible touch gesture shortcut interface for "target item finding" tasks. Among the different types of data items one might search for, the Intelligent Launcher is a prototype for finding target apps, since the variety in an app's visual appearance and functionality makes it more difficult to predict than other targets, such as a standard phone setting, a contact or a website. However, we believe that the ideas presented in this thesis can be extended to other types of items, such as videos or photos in a photo library, places on a map or clothes in an online store. What is more, this study also leads the way in exploiting the advantages of machine learning methods in touch gesture shortcut interactions.
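
    The three-way prediction strategy described above can be sketched as a dispatcher that classifies the drawn gesture and ranks candidate apps with a matching scorer. The classifier and scoring functions below are illustrative stubs and their names are our assumptions; the thesis additionally refines the predictions with a Markov model trained on the experiment data, which is not reproduced here.

```python
# Sketch of the Intelligent Launcher's three-way prediction dispatch.
# All function names and scoring stubs are illustrative assumptions.

def classify_gesture(stroke):
    """Stand-in classifier: returns 'handwriting', 'non_handwriting' or 'abstract'."""
    return "abstract"  # placeholder decision

def score_by_icon_similarity(stroke, apps):
    """Non-handwriting gestures: compare the stroke's shape with each app icon (stub)."""
    return {app: 0.0 for app in apps}

def score_by_name_and_function(stroke, apps):
    """Handwriting gestures: match recognized letters against app names and functionality (stub)."""
    return {app: 0.0 for app in apps}

def score_by_usage_history(apps):
    """Abstract gestures: fall back on how often and how recently each app was used (stub)."""
    return {app: 1.0 for app in apps}

def predict_app(stroke, apps):
    """Rank candidate apps for an unconstrained gesture and return the best match."""
    kind = classify_gesture(stroke)
    if kind == "non_handwriting":
        scores = score_by_icon_similarity(stroke, apps)
    elif kind == "handwriting":
        scores = score_by_name_and_function(stroke, apps)
    else:  # abstract gesture
        scores = score_by_usage_history(apps)
    return max(apps, key=lambda app: scores.get(app, 0.0))
```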