
    Algorithmic Efficiency of Stroke Gesture Recognizers: a Comparative Analysis

    Gesture interaction is today recognized as a natural, intuitive way to execute commands in an interactive system. For this purpose, several stroke gesture recognizers have become increasingly efficient at recognizing end-user gestures from a training set. Although these algorithms report their own recognition rates, it remains unclear which algorithm is the most recommendable for a given use. Moreover, the experiments reported for the most successful algorithms have been carried out under different conditions, resulting in non-comparable results. To better understand their respective algorithmic efficiency, this paper compares the recognition rate, the error rate, and the recognition time of five reference stroke gesture recognition algorithms, i.e., $1, $P, $Q, !FTL, and Penny Pincher, on three diverse gesture sets, i.e., NicIcon, HHReco, and Utopiano Alphabet, in a user-independent scenario. Similar conditions were applied to all algorithms, so that they were executed under the same characteristics. For the algorithms studied, an agreed method was used to evaluate the error rate and performance rate, as well as the execution time of each algorithm. A software testing environment was developed in JavaScript to perform the comparative analysis. The results of this analysis help recommend a recognizer for the conditions where it turns out to be the most efficient. !FTL (NLSD) achieved the best recognition rate and was the most efficient algorithm for the HHReco and NicIcon datasets. However, Penny Pincher was the fastest algorithm for the HHReco dataset. Finally, $1 obtained the best recognition rate for the Utopiano Alphabet dataset.
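    A minimal sketch of the user-independent evaluation loop the abstract describes: hold one user out, use the remaining users' samples as templates, and accumulate the recognition rate, error rate, and per-gesture recognition time. The `recognizer(templates, candidate)` interface and the data layout are assumptions for illustration; the authors' actual testbed was written in JavaScript and is not reproduced here.

```python
import random
import time

def evaluate(recognizer, gestures_by_user, rounds=100):
    """User-independent protocol: hold one user out, then recognise that
    user's samples against templates drawn from all the other users."""
    correct = total = 0
    elapsed = 0.0
    users = list(gestures_by_user)
    for _ in range(rounds):
        test_user = random.choice(users)
        templates = [(label, sample)
                     for u in users if u != test_user
                     for label, samples in gestures_by_user[u].items()
                     for sample in samples]
        for label, samples in gestures_by_user[test_user].items():
            for candidate in samples:
                t0 = time.perf_counter()
                predicted = recognizer(templates, candidate)
                elapsed += time.perf_counter() - t0  # recognition time
                correct += predicted == label
                total += 1
    return {"recognition_rate": correct / total,
            "error_rate": 1 - correct / total,
            "mean_time_ms": 1000 * elapsed / total}
```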

    Design and evaluation of braced touch for touchscreen input stabilisation

    Incorporating touchscreen interaction into cockpit flight systems offers several potential advantages to aircraft manufacturers, airlines, and pilots. However, vibration and turbulence are challenges to reliable interaction. We examine the design space for braced touch interaction, which allows users to mechanically stabilise selections by bracing multiple fingers on the touchscreen before completing a selection. Our goal is to enable fast and accurate target selection during high levels of vibration, without impeding interaction performance when vibration is absent. Three variant methods of braced touch are evaluated, using double-tap, dwell, or a force threshold in combination with heuristic selection criteria to discriminate intentional selections from concurrent braced contacts. We carried out an experiment to test the performance of these methods in both abstract selection tasks and more realistic flight tasks. The study results confirm that bracing improves performance during vibration, and show that double-tap was the best of the tested methods.
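    As a rough illustration of the discrimination problem the abstract names, the sketch below treats the single contact crossing a force threshold as the intended selection and all other contacts as bracing fingers. The threshold value and the `Contact` structure are assumptions for illustration, not the paper's calibrated heuristics.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    id: int
    force: float  # normalised 0..1 from the touch controller

# Assumed value for illustration, not the paper's calibrated threshold.
FORCE_THRESHOLD = 0.6

def intended_selection(contacts):
    """Return the one contact that looks like a deliberate selection,
    or None while all fingers are merely bracing on the surface.
    (The double-tap and dwell variants would key on tap timing or on
    hold time over a target instead of force.)"""
    forceful = [c for c in contacts if c.force >= FORCE_THRESHOLD]
    if len(forceful) == 1:  # exactly one finger pressing hard
        return forceful[0]
    return None  # ambiguous: zero or several forceful contacts
```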

    Not Just Pointing: Shannon's Information Theory as a General Tool for Performance Evaluation of Input Techniques

    This article was submitted to the ACM CHI conference in September 2017 and rejected in December 2017. It is currently under revision. Since input techniques serve, quite literally, to allow users to send information to the computer, the information-theoretic approach seems tailor-made for their quantitative evaluation. Shannon's framework makes it straightforward to measure the performance of any technique as an effective information transmission rate, in bits/s. Apart from pointing, however, evaluators of input techniques have generally ignored Shannon, contenting themselves with less rigorous speed and accuracy measurements borrowed from psychology. We plead for serious consideration in HCI of Shannon's information theory as a tool for the evaluation of all sorts of input techniques. We start with a primer on Shannon's basic quantities and the theoretical entities of his communication model. We then discuss how these concepts should be applied to the input technique evaluation problem. Finally, we outline two concrete methodologies, one focused on the discrete timing and the other on the continuous time course of information gain by the computer.
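    The core measurement is straightforward to sketch: estimate the mutual information between intended and recognised symbols from a confusion matrix, then divide by the mean trial time to obtain an effective transmission rate in bits/s. This is a generic illustration of Shannon's quantities, not the article's two specific methodologies; the example matrix and timing are made up.

```python
import numpy as np

def effective_rate(confusion, mean_trial_time_s):
    """Mutual information (bits/trial) between intended (rows) and
    recognised (columns) symbols, divided by mean trial time,
    giving an effective information transmission rate in bits/s."""
    p_xy = confusion / confusion.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)  # intended-symbol marginal
    p_y = p_xy.sum(axis=0, keepdims=True)  # recognised-symbol marginal
    nz = p_xy > 0                          # skip zero cells in the sum
    bits = (p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum()
    return bits / mean_trial_time_s

# Example: a near-diagonal confusion matrix over 4 commands,
# with an average of 1.2 s per selection.
conf = np.array([[48, 1, 1, 0],
                 [2, 46, 1, 1],
                 [0, 1, 49, 0],
                 [1, 0, 2, 47]])
print(f"{effective_rate(conf, 1.2):.2f} bits/s")
```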

    GestUI: A Model-driven Method and Tool for Including Gesture-based Interaction in User Interfaces

    [EN] Among the technological advances in touch-based devices, gesture-based interaction has become a prevalent feature in many application domains. Information systems are starting to explore this type of interaction. As a result, gesture specifications are now being hard-coded by developers at the source code level, which hinders their reusability and portability. Similarly, defining new gestures that reflect user requirements is a complex process. This paper describes a model-driven approach to include gesture-based interaction in desktop information systems. It incorporates a tool prototype that captures user-sketched multi-stroke gestures and transforms them into a model, automatically generating both the gesture catalogue for gesture-based interaction technologies and the source code of the gesture-based user interface. We demonstrate our approach in several applications, ranging from CASE tools to form-based information systems. This work was supported by SENESCYT and Universidad de Cuenca from Ecuador, and received financial support from Generalitat Valenciana under Project IDEO (PROMETEOII/2014/039).
    Parra-González, L.O.; España Cubillo, S.; Pastor López, O. (2016). GestUI: A Model-driven Method and Tool for Including Gesture-based Interaction in User Interfaces. Complex Systems Informatics and Modeling Quarterly, 6:73-92. https://doi.org/10.7250/csimq.2016-6.05

    Augmented Touch Interactions with Finger Contact Shape and Orientation

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of - even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom - the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions - but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen, and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures - a result that was confirmed in another study that used the augmented touches for a screen lock application.
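    One plausible way to derive the extra degrees of freedom the study examines, sketched under the assumption that the touchscreen exposes the finger-contact blob as a boolean image: the blob's principal axis gives an orientation estimate, and the ratio of the principal eigenvalues separates round from oblong touch shapes. This is illustrative only, not the sensing pipeline used in the paper.

```python
import numpy as np

def contact_orientation(mask):
    """Estimate finger-contact orientation as the angle of the principal
    axis of the contact blob (mask: boolean capacitive image)."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack((xs, ys)).astype(float)
    pts -= pts.mean(axis=0)                    # centre the blob
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts, rowvar=False))
    major = eigvecs[:, np.argmax(eigvals)]     # principal axis
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180
    aspect = np.sqrt(eigvals.max() / max(eigvals.min(), 1e-9))
    return angle, aspect  # large aspect ratio => oblong contact shape
```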

    Eliciting Sketched Expressions of Command Intentions in an IDE

    Software engineers routinely use sketches (informal, ad-hoc drawings) to visualize and communicate complex ideas, for colleagues or for themselves. We hypothesize that sketching could also be used as a novel interaction modality in integrated software development environments (IDEs), allowing developers to express desired source code manipulations by sketching right on top of the IDE, rather than remembering keyboard shortcuts or using a mouse to navigate menus and dialogs. For an initial assessment of the viability of this idea, we conducted an elicitation study that prompted software developers to express a number of common IDE commands through sketches. For many of our task prompts, we observed considerable agreement in how developers would express the respective commands through sketches, suggesting that further research on a more formal sketch-based visual command language for IDEs would be worthwhile. (Icelandic Research Fund, grant no. 196228.)

    Features extraction scheme for behavioral biometric authentication in touchscreen mobile devices

    Common authentication mechanisms in mobile devices, such as passwords and Personal Identification Numbers, have failed to keep up with the rapid pace of challenges associated with the use of ubiquitous devices over the Internet, since they can easily be lost or stolen. It is therefore important to develop authentication mechanisms that can be adapted to such an environment. Biometric-based person recognition is a good alternative to overcome the difficulties of password and token approaches, since biometrics cannot be easily stolen or forgotten. An important characteristic of biometric authentication is that there is an explicit connection with the user's identity, since biometrics rely entirely on behavioral and physiological characteristics of the human being. A variety of biometric authentication options have emerged so far, all of which can be used on a mobile phone, including but not limited to face recognition via camera, fingerprint, voice recognition, and keystroke and gesture recognition via touch screen. Touch gesture behavioural biometrics are commonly used as an alternative to existing traditional biometric mechanisms. However, current touch gesture authentication schemes are fraught with accuracy problems. In fact, the extracted features used in some research on touch gesture schemes are limited to speed, time, position, finger size, and finger pressure, and extracting a few touch features from individual touches is not enough to accurately distinguish various users. In this research, behavioural features are extracted from recorded touch screen data and a discriminative classifier is trained on these features for authentication. While the user performs a gesture, the touch screen sensor is leveraged and twelve of the user's finger touch features are extracted. Eighty-four different users participated in this work; each user drew six gestures, for a total of 504 instances. The extracted touch gesture features are normalised by scaling the values so that they fall within a small specified range. Thereafter, five different feature selection algorithms were used to choose the most significant feature subsets. Six different machine learning classifiers were used to classify each instance in the data set into one of the predefined classes. Experiments conducted with the proposed touch gesture behavioural biometric scheme achieved an average False Reject Rate (FRR) of 7.84%, an average False Accept Rate (FAR) of 1%, an average Equal Error Rate (EER) of 4.02%, and an authentication accuracy of 91.67%. Comparative results showed that the proposed scheme outperforms existing touch gesture authentication schemes in terms of FAR, EER, and authentication accuracy by 1.67%, 6.74%, and 4.65%, respectively. The results of this research affirm that user authentication through gestures is promising, highly viable, and usable for mobile devices.
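    For reference, the Equal Error Rate reported above is the operating point at which the False Reject Rate and False Accept Rate curves cross as the decision threshold sweeps over the classifier's scores. A minimal sketch of that computation, with synthetic scores standing in for real classifier output:

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep the decision threshold and return the point where the
    False Reject Rate (genuine scores below threshold) meets the
    False Accept Rate (impostor scores at or above threshold)."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(frr - far))       # closest crossing point
    return (frr[i] + far[i]) / 2, thresholds[i]

# Toy scores from a hypothetical classifier (higher = more genuine-like).
rng = np.random.default_rng(0)
eer, thr = equal_error_rate(rng.normal(0.8, 0.10, 500),
                            rng.normal(0.4, 0.15, 500))
print(f"EER = {eer:.2%} at threshold {thr:.2f}")
```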

    Contributions to Pen & Touch Human-Computer Interaction

    [EN] Computers are now present everywhere, but their potential is not fully exploited due to a lack of acceptance. In this thesis, the pen computer paradigm is adopted, whose main idea is to replace all input devices by a pen and/or the fingers, given that the origin of the rejection lies in unfriendly interaction devices that must be replaced by something easier for the user. This paradigm, which was proposed several years ago, has only recently been fully implemented in products such as smartphones. But computers are, in effect, illiterates that do not understand gestures or handwriting, so a recognition step is required to "translate" the meaning of these interactions into a computer-understandable language. For this input modality to be actually usable, its recognition accuracy must be high enough. To realistically consider the broader deployment of pen computing, it is necessary to improve the accuracy of handwriting and gesture recognizers. This thesis studies different approaches to improve the recognition accuracy of those systems.
    First, we investigate how to take advantage of interaction-derived information to improve the accuracy of the recognizer. In particular, we focus on interactive transcription of text images. Here the system initially proposes an automatic transcript. If necessary, the user makes some corrections, implicitly validating a correct part of the transcript. The system must then take this validated prefix into account to suggest a suitable new hypothesis. Given that in such an application the user is constantly interacting with the system, it makes sense to adapt this interactive application to be used on a pen computer. User corrections are provided by means of pen strokes, and it is therefore necessary to introduce a recognizer in charge of decoding this kind of nondeterministic user feedback. This recognizer's performance can, however, be boosted by taking advantage of interaction-derived information, such as the user-validated prefix.
    The thesis then focuses on the study of human movements, in particular hand movements, from a generation point of view, by tapping into the kinematic theory of rapid human movements and the Sigma-Lognormal model. Understanding how the human body generates movements and, particularly, the origin of human movement variability is important in the development of a recognition system. The contribution of this thesis to this topic is significant, since a new technique (which improves on previous results) for extracting the Sigma-Lognormal model parameters is presented.
    Closely related to the previous work, the thesis studies the benefits of using synthetic data for training. The easiest way to train a recognizer is to provide "infinite" data representing all possible variations. In general, the more training data, the smaller the error. But it is usually not possible to increase the size of a training set indefinitely: recruiting participants, collecting data, labeling, etc., can be time-consuming and expensive. One way to overcome this problem is to create and use synthetically generated data that look like human-produced data. We study how to create these synthetic data and explore different approaches to using them, both for handwriting and for gesture recognition.
    The different contributions of this thesis have obtained good results, producing several publications in international conferences and journals.
    Finally, three applications related to the work of this thesis are presented. First, we created Escritorie, a digital desk prototype based on the pen computer paradigm for transcribing handwritten text images. Second, we developed "Gestures à Go Go", a web application for bootstrapping gestures. Finally, we study how translation reviewing can be done more ergonomically using a pen, in another interactive application under the pen computer paradigm.
    Martín-Albo Simón, D. (2016). Contributions to Pen & Touch Human-Computer Interaction [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/68482
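    The Sigma-Lognormal model mentioned in the thesis abstract describes the speed profile of a rapid movement as a sum of lognormal components, one per elementary stroke: v(t) = Σᵢ Dᵢ Λ(t; t0ᵢ, μᵢ, σᵢ). Below is a minimal sketch of that forward (generation) direction, with purely illustrative parameters; the thesis's contribution, a technique for extracting these parameters from observed strokes, is the much harder inverse problem and is not reproduced here.

```python
import numpy as np

def sigma_lognormal_speed(t, strokes):
    """Speed profile of the Sigma-Lognormal model: each stroke i with
    amplitude D, onset t0, and lognormal parameters (mu, sigma)
    contributes a bump D * Lambda(t; t0, mu, sigma) for t > t0."""
    v = np.zeros_like(t)
    for D, t0, mu, sigma in strokes:
        dt = t - t0
        valid = dt > 0  # a stroke contributes only after its onset
        v[valid] += (D / (sigma * np.sqrt(2 * np.pi) * dt[valid])
                     * np.exp(-(np.log(dt[valid]) - mu) ** 2
                              / (2 * sigma ** 2)))
    return v

# Two overlapping strokes with illustrative (not fitted) parameters.
t = np.linspace(0, 1.5, 600)
profile = sigma_lognormal_speed(t, [(5.0, 0.05, -1.6, 0.25),
                                    (3.0, 0.30, -1.4, 0.30)])
```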

    Design and semantics of form and movement (DeSForM 2006)

    Design and Semantics of Form and Movement (DeSForM) grew from applied research exploring emerging design methods and practices to support new-generation product and interface design. The products and interfaces are concerned with the context of ubiquitous computing and ambient technologies, and with the need for greater empathy in the pre-programmed behaviour of the 'machines' that populate our lives. Such explorative research in the CfDR has been led by Young, supported by Kyffin, Visiting Professor from Philips Design, and sponsored by Philips Design over a period of four years (research funding £87k). DeSForM1 was the first of a series of three conferences enabling the presentation and debate of international work within this field:
    • 1st European conference on Design and Semantics of Form and Movement (DeSForM1), Baltic, Gateshead, 2005, Feijs L., Kyffin S. & Young R.A. eds.
    • 2nd European conference on Design and Semantics of Form and Movement (DeSForM2), Evoluon, Eindhoven, 2006, Feijs L., Kyffin S. & Young R.A. eds.
    • 3rd European conference on Design and Semantics of Form and Movement (DeSForM3), New Design School Building, Newcastle, 2007, Feijs L., Kyffin S. & Young R.A. eds.
    Philips' sponsorship of practice-based enquiry led to research by three teams of research students over three years and to on-going sponsorship of research through the Northumbria University Design and Innovation Laboratory (nuDIL). Young has been invited onto the steering panel of the UK Thinking Digital Conference concerning the latest developments in digital and media technologies. Informed by this research is the work of PhD student Yukie Nakano, who examines new technologies in relation to eco-design textiles.