14 research outputs found

    Assessing Inconspicuous Smartphone Authentication for Blind People

    Full text link
    As people store more personal data on their smartphones, the consequences of having it stolen or lost become an increasing concern. A typical countermeasure to avoid this risk is to set up a secret code that has to be entered to unlock the device after a period of inactivity. However, for blind users, PINs and passwords are inadequate, since entry 1) consumes a non-trivial amount of time, e.g. using screen readers, 2) is susceptible to observation, where nearby people can see or hear the secret code, and 3) might collide with social norms, e.g. disrupting personal interactions. Tap-based authentication methods have been presented that allow unlocking to be performed in a short time and, by being usable with a single hand, support naturally occurring inconspicuous behavior (e.g. concealing the device inside a jacket). This paper presents a study with blind users (N = 16) in which an authentication method based on tap phrases is evaluated. Results showed the method to be usable and to support the desired inconspicuity. Comment: 4 pages, 1 figure.

    Music and HCI

    Get PDF
    Music is an evolutionarily deep-rooted, abstract, real-time, complex, non-verbal, social activity. Consequently, interaction design in music can be a valuable source of challenges and new ideas for HCI. This workshop will reflect on the latest research in Music and HCI (Music Interaction for short), with the aim of strengthening the dialogue between the Music Interaction community and the wider HCI community. We will explore recent ideas from Music Interaction that may contribute new perspectives to general HCI practice, and conversely, recent HCI research in non-musical domains with implications for Music Interaction. We will also identify any concerns of Music Interaction that may require unique approaches. Contributors engaged in research in any area of Music Interaction or HCI who would like to contribute to a sustained widening of this dialogue will be welcome.

    EarRumble: Discreet Hands- and Eyes-Free Input by Voluntary Tensor Tympani Muscle Contraction

    Get PDF
    We explore how discreet input can be provided using the tensor tympani, a small muscle in the middle ear that some people can voluntarily contract to induce a dull rumbling sound. We investigate the prevalence of and ability to control the muscle through an online questionnaire (N=192), in which 43.2% of respondents reported the ability to ear rumble. Data collected from participants (N=16) shows how in-ear barometry can be used to detect voluntary tensor tympani contraction in the sealed ear canal. This data was used to train a classifier based on three simple ear rumble gestures, which achieved 95% accuracy. Finally, we evaluate the use of ear rumbling for interaction, grounded in three manual, dual-task application scenarios (N=8). This highlights the applicability of EarRumble as a low-effort and discreet eyes- and hands-free interaction technique that users found "magical" and "almost telepathic".
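
The abstract does not detail the detection pipeline, but the idea of sensing rumbles via in-ear barometry can be illustrated with a rough sketch. Assuming, hypothetically, that a voluntary contraction shows up as a transient pressure deviation in the sealed ear canal, a simple detector might look like the following; the threshold, minimum gap, and gesture mapping are all illustrative assumptions, not the authors' classifier.

```python
# Illustrative sketch (not the paper's implementation): detecting voluntary
# ear rumbles as transient pressure deviations in a sealed ear canal,
# then mapping rumble counts to simple gestures.

def detect_rumbles(pressure, baseline=0.0, threshold=0.5, min_gap=3):
    """Return sample indices where a rumble event starts.

    pressure  -- sequence of in-ear pressure readings (arbitrary units)
    threshold -- hypothetical deviation from baseline counted as a rumble
    min_gap   -- samples that must separate two distinct events
    """
    events, last = [], -min_gap - 1
    for i, p in enumerate(pressure):
        if abs(p - baseline) > threshold and i - last > min_gap:
            events.append(i)
            last = i
    return events

def classify_gesture(n_rumbles):
    # Hypothetical mapping of rumble counts to three simple gestures.
    return {1: "single", 2: "double", 3: "triple"}.get(n_rumbles, "none")

signal = [0.0, 0.1, 0.9, 0.8, 0.1, 0.0, 0.0, 1.0, 0.9, 0.1]
events = detect_rumbles(signal)
print(classify_gesture(len(events)))  # two deviations -> "double"
```

A real classifier would work on filtered sensor data and learned features; this sketch only conveys the event-detection idea.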

    Supporting Transitions To Expertise In Hidden Toolbars

    Get PDF
    Hidden toolbars are becoming common on mobile devices. These techniques maximize the space available for application content by keeping tools off-screen until needed. However, current designs require several actions to make a selection, and they do not provide shortcuts for users who have become familiar with the toolbar. To better understand the performance capabilities and tradeoffs involved in hidden toolbars, we outline a design space that captures the key elements of these controls and report on an empirical evaluation of four designs. Two of our designs provide shortcuts that are based on the user's spatial memory of item locations. The study found that toolbars with spatial-memory shortcuts performed significantly better (700 ms faster) than standard designs currently in use. Participants quickly learned the shortcut selection method (although switching to a memory-based method led to higher error rates than the visually guided techniques). Participants strongly preferred one of the shortcut methods, which allowed selections by swiping across the screen bezel at the location of the desired item. This work shows that shortcut techniques are feasible and desirable on touch devices, and that spatial memory can provide a foundation for designing shortcuts.
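
As a rough illustration of the bezel-swipe shortcut idea, the sketch below maps the position of a swipe along the bezel to a toolbar item by dividing the edge into equal slots. The function name, slot scheme, and parameters are hypothetical, not the studied designs.

```python
# Illustrative sketch (hypothetical parameters): mapping the position of a
# bezel swipe to a toolbar item, in the spirit of the spatial-memory
# shortcut that participants preferred.

def item_at(swipe_x, screen_width, items):
    """Divide the bezel into equal slots, one per toolbar item,
    and return the item whose slot contains the swipe position."""
    slot = int(swipe_x / screen_width * len(items))
    return items[min(slot, len(items) - 1)]  # clamp the far edge

tools = ["cut", "copy", "paste", "undo", "redo"]
print(item_at(594.0, 1080, tools))  # swipe just past centre -> "paste"
```

With such a mapping, an expert who remembers item locations can select directly from spatial memory, without waiting for the toolbar to appear.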

    Building and evaluating an inconspicuous smartphone authentication method

    Get PDF
    Master's thesis in Informatics Engineering, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2013. The smartphones we carry are increasingly entwined in our intimate lives. These devices enable new ways of working, socializing, and even entertaining ourselves. However, they have also created new risks to our privacy. A common way to mitigate these risks is to configure the device to lock after a period of inactivity; to use it again, an authentication barrier must be overcome. This way, if the device falls into someone else's hands, that person cannot use it in a way that constitutes a threat. Authenticated unlocking is thus the mechanism that commonly guards the privacy of smartphone users. However, the authentication methods in use today are largely a legacy of desktop computers. Passwords and personal identification numbers are made less secure by the fact that people devise mechanisms to memorize them more easily. Moreover, entering these codes is inconvenient, especially in the mobile context, where interactions tend to be short and the need to authenticate gets in the way of other tasks. Recently, Android smartphones began to offer another authentication method, which has gained considerable adoption: the user's secret code is a sequence of strokes drawn over a 3-by-3 grid of points shown on the touchscreen. However, both textual/numeric codes and Android patterns are susceptible to rudimentary attacks. In both cases the input channel is touch on the screen and the output channel is visual, which allows others to directly observe the code being entered, or to later distinguish the marks left by fingers on the touch surface. Moreover, these methods are not accessible to some classes of users, notably blind people.

This dissertation proposes that smartphone authentication methods can be better adapted to the mobile context. In particular, the ability to interact with the device inconspicuously could give users a greater degree of control and the ability to protect themselves against observation of their secret code. To that end, an input modality that does not require the visual channel was identified: sequences of location-independent taps on the touchscreen. These patterns can resemble (but are not limited to) rhythms or Morse code. The first contribution of this work is an algorithmic technique for recognizing these tap sequences, or tap phrases, as authentication keys. The recognizer requires only a single demonstration for setup, which distinguishes it from other approaches that need several examples to train an algorithm. The recognizer was evaluated and shown to be accurate and computationally efficient. This contribution is complemented by an Android application that demonstrates the concept. The second contribution is an exploration of the human factors involved in using tap phrases for authentication, grounded in three user studies in which the proposed method is compared with the most common alternatives: the PIN and the Android pattern. The first study (N=30) compares the three methods with respect to observation resistance and usability, understood in a broad sense that includes user experience (UX). The results suggest that the usability of the three approaches is comparable, and that under perfect observation conditions an attacker has a high chance of success against all three. The second study (N=19) again compares the three methods, this time in an inconspicuous authentication scenario: participants entered their codes with the device held under a table, out of sight. In this case, authentication with tap phrases is shown to remain usable, while the usability measures of the other alternatives drop substantially. This suggests that tap-phrase authentication supports inconspicuous interaction, creating the possibility for users to protect themselves against potential attackers. The third study (N=16) evaluates the usability and acceptance of the authentication method with blind users, and also elicits concealment strategies supported by tap-phrase authentication. The results suggest that the technique is suitable for these users as well.

As our intimate lives become more tangled with the smartphones we carry, privacy has become an increasing concern. A widely available option to mitigate security risks is to set a device so that it locks after a period of inactivity, requiring users to authenticate for subsequent use. Current methods for establishing one's identity are known to be susceptible to even rudimentary observation attacks. The mobile context in which interactions with smartphones occur further facilitates shoulder-surfing. We submit that smartphone authentication methods can be better adapted to the mobile context. Namely, the ability to interact with the device in an inconspicuous manner could offer users more control and the ability to self-protect against observation. Tapping is a communication modality between a user and a device that can be appropriated for that purpose. This work presents a technique for employing sequences of taps, or tap phrases, as authentication codes. An efficient and accurate tap phrase recognizer, which does not require training, is presented. Three user studies were conducted to compare this approach to the current leading methods. Results indicate that the tapping method remains usable even in inconspicuous authentication scenarios. Furthermore, we found that it is appropriate for blind users, for whom usability barriers and security risks are of special concern.
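
The abstract describes a tap-phrase recognizer that works from a single demonstration, without training. One plausible way to get that property, sketched below as an assumption (this is not the thesis's algorithm), is to compare tempo-normalized inter-tap intervals against the single enrolled demonstration.

```python
# Illustrative sketch (not the thesis recognizer): matching a tap phrase
# against one enrolled demonstration by comparing tempo-normalized
# inter-tap intervals, so the same rhythm played faster or slower matches.

def intervals(timestamps):
    """Inter-tap intervals, scaled so they sum to 1 (tempo-invariant)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    total = sum(gaps)
    return [g / total for g in gaps] if total else []

def matches(template, attempt, tolerance=0.1):
    """Accept if the attempt has the same tap count and every normalized
    interval is within `tolerance` (a hypothetical value) of the template."""
    t, a = intervals(template), intervals(attempt)
    if len(t) != len(a) or not t:
        return False
    return all(abs(x - y) <= tolerance for x, y in zip(t, a))

enrolled = [0.0, 0.2, 0.4, 1.0]                 # one demonstration, in seconds
print(matches(enrolled, [0.0, 0.3, 0.6, 1.5]))  # same rhythm, slower tempo
print(matches(enrolled, [0.0, 0.5, 0.6, 1.0]))  # different rhythm
```

Because taps are location-independent, only timestamps matter, which is what makes the scheme usable without the visual channel.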

    Leveraging finger identification to integrate multi-touch command selection and parameter manipulation

    Get PDF
    Identifying which fingers are touching a multi-touch surface provides a very large input space. We describe FingerCuts, an interaction technique inspired by desktop keyboard shortcuts to exploit this potential. FingerCuts enables integrated command selection and parameter manipulation, uses feed-forward and feedback to increase discoverability, is backward compatible with current touch input techniques, and is adaptable to different touch device form factors. We implemented three variations of FingerCuts, each tailored to a different device form factor: tabletop, tablet, and smartphone. Qualitative and quantitative studies conducted on the tabletop suggest that, with some practice, FingerCuts is expressive, easy to use, and increases a sense of continuous interaction flow, and that interaction with FingerCuts is as fast as, or faster than, using a graphical user interface. A theoretical analysis of FingerCuts using the Fingerstroke-Level Model (FLM) matches our quantitative study results, justifying our use of FLM to analyse and validate performance for the other device form factors.

    Motion correlation: selecting objects by matching their movement

    Get PDF
    Selection is a canonical task in user interfaces, commonly supported by presenting objects for acquisition by pointing. In this article, we consider motion correlation as an alternative for selection. The principle is to represent available objects by motion in the interface, have users identify a target by mimicking its specific motion, and use the correlation between the system's output and the user's input to determine the selection. The resulting interaction has compelling properties, as users are guided by motion feedback and only need to copy a presented motion. Motion correlation has been explored in earlier work but has only recently begun to feature in holistic interface designs. We provide a first comprehensive review of the principle and present an analysis of five previously published works in which motion correlation underpinned the design of novel gaze and gesture interfaces for diverse application contexts. We derive guidelines for motion correlation algorithms, motion feedback, choice of modalities, and the overall design of motion correlation interfaces, and identify opportunities and challenges for future research and design.
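
The core of the principle, matching the user's input trajectory against each object's displayed motion and selecting the best correlate, can be sketched as follows. The trajectories, threshold, and function names are illustrative assumptions, not any specific published algorithm.

```python
# Illustrative sketch of motion correlation: each candidate object moves
# along its own trajectory; the user's input trajectory is correlated
# against each, and the best match above a (hypothetical) threshold wins.

import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_target(objects, user_input, threshold=0.8):
    """objects: {name: 1-D trajectory sampled at the same instants as user_input}."""
    best, best_r = None, threshold
    for name, traj in objects.items():
        r = pearson(traj, user_input)
        if r > best_r:
            best, best_r = name, r
    return best

objects = {
    "dot_a": [0, 1, 2, 3, 4, 3, 2, 1],  # back-and-forth motion
    "dot_b": [4, 3, 2, 1, 0, 1, 2, 3],  # opposite phase
}
user = [0.1, 1.2, 2.0, 2.9, 4.1, 3.2, 1.9, 1.1]  # user mimics dot_a
print(select_target(objects, user))  # -> dot_a
```

Real systems correlate 2-D trajectories over a sliding window and must also suppress accidental matches, but the selection logic reduces to this comparison.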

    Dynamic motion coupling of body movement for input control

    Get PDF
    Touchless gestures are used for input when touch is unsuitable or unavailable, such as when interacting with displays that are remote, large, or public, or when touch is prohibited for hygienic reasons. Traditionally, user input is spatially or semantically mapped to system output; in the context of touchless gestures, however, these interaction principles suffer from several disadvantages, including poor memorability, fatigue, and ill-defined mappings. This thesis investigates motion correlation as a third interaction principle for touchless gestures, one that maps user input to system output based on spatiotemporal matching of reproducible motion. We demonstrate the versatility of motion correlation by using movement as the primary sensing principle, relaxing the restrictions on how a user provides input. Using TraceMatch, a novel computer-vision-based system, we show how users can provide effective input with different parts of the body, through an investigation of input performance, and how they can spontaneously switch modes of input in realistic application scenarios. Secondly, spontaneous spatial coupling shows how motion correlation can bootstrap spatial input, allowing any body movement, or movement of tangible objects, to be appropriated for ad hoc touchless pointing on a per-interaction basis. We operationalise the concept in MatchPoint, and demonstrate its unique capabilities through an exploration of the design space with application examples. Finally, we explore how users synchronise with moving targets in the context of motion correlation, revealing how simple harmonic motion leads to better synchronisation. Using the insights gained, we explore the robustness of algorithms used for motion correlation, showing how it is possible to successfully detect a user's intent to interact whilst suppressing accidental activations from common spatial and semantic gestures. We conclude by looking across our work to distil guidelines for interface design, along with further considerations of how motion correlation can be used, both in general and for touchless gestures.