466 research outputs found

    SemanticLock: An authentication method for mobile devices using semantically-linked images

    We introduce SemanticLock, a single-factor graphical authentication solution for mobile devices. SemanticLock uses a set of graphical images as password tokens that construct a semantically memorable story representing the user's password. All that is required to use our solution is the familiar, quick action of dragging the images into their respective positions on the touchscreen, either in a continuous flow or in discrete movements. The authentication strength of SemanticLock is based on the large number of possible semantic constructs derived from the positioning of the image tokens and the type of images selected. SemanticLock has high resistance to smudge attacks and equally exhibits a high level of memorability due to its graphical paradigm. In a three-week user study with 21 participants comparing SemanticLock against other authentication systems, we discovered that SemanticLock outperformed the PIN and matched the PATTERN on speed, memorability, user acceptance and usability. Furthermore, qualitative tests also showed that SemanticLock was rated superior in likeability. SemanticLock was also evaluated while participants walked unencumbered and walked encumbered carrying "everyday" items, to analyze the effects of such activities on its usage.
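    The size of a password space built from positioned image tokens can be sketched with a back-of-the-envelope calculation. All parameters below (library size, token count, drop positions) are hypothetical illustrations, not figures from the paper:

```python
from math import perm

# Hypothetical parameters, NOT taken from the paper: the user composes
# a "story" from k image tokens drawn from a library of n, and drags
# each token to one of p distinct screen positions.
n_images = 20      # size of the image library (assumed)
k_tokens = 4       # tokens composing one story (assumed)
p_positions = 6    # distinct drop positions (assumed)

# Ordered token choice times an independent position per token.
space = perm(n_images, k_tokens) * p_positions ** k_tokens

print(space)        # theoretical secrets an attacker must search
print(10 ** 4)      # compare: the space of a four-digit PIN
```

    Even with these modest assumed parameters, the sketch shows how combining token choice with token position multiplies the search space far beyond a four-digit PIN.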

    Recent advances in mobile touch screen security authentication methods: a systematic literature review

    The security of the smartphone touch screen has attracted considerable attention from academics as well as industry and security experts. Maximum security of the mobile phone touch screen is necessary to protect the user’s stored information in the event of loss. Previous reviews in this research domain have focused primarily on biometrics and graphical passwords while leaving out PIN, gesture/pattern and other methods. In this paper, we present a comprehensive literature review of the recent advances made in mobile touch screen authentication techniques, covering PIN, pattern/gesture, biometrics, graphical passwords and others. A new comprehensive taxonomy of multiple-class authentication techniques is presented in order to expand the existing taxonomies on single-class authentication techniques. The review reveals that the most recent studies proposing new techniques for maximising smartphone touch screen security pose multi-objective optimization problems. In addition, open research problems and promising future research directions are presented. Expert researchers can benefit from the review by gaining new insights into touch screen cyber security, and novice researchers may use this paper as a starting point for their inquiry.

    HapticLock: Eyes-Free Authentication for Mobile Devices

    Smartphones provide access to increasing amounts of personal and sensitive information, yet are often only secured using methods that are prone to observational attacks. We present HapticLock, a novel authentication method for mobile devices that uses non-visual interaction modalities for discreet PIN entry that is difficult to attack by shoulder surfing. A usability experiment (N=20) finds effective PIN entry in secure conditions: e.g., in 23.5s with a 98.3% success rate for a four-digit PIN entered from a random start digit. A shoulder surfing experiment (N=15) finds that HapticLock is highly resistant to observational attacks. Even when interaction is highly visible, attackers need to guess the first digit when PIN entry begins with a random number, yielding a very low success rate for shoulder surfing. Furthermore, the device can be hidden from view during authentication. Our use of haptic interaction modalities gives privacy-conscious mobile device users a usable and secure authentication alternative for sensitive situations.
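    Why a random start digit defeats an observer can be sketched as follows. This is a simplified model of the principle using a hypothetical relative-movement encoding; it does not reproduce HapticLock's actual haptic interface:

```python
import random

def steps_for_pin(pin, start):
    """Relative moves (mod 10) needed to reach each PIN digit from a
    random start digit. The moves are what an observer could see; the
    start digit itself is conveyed to the user non-visually."""
    moves, current = [], start
    for digit in pin:
        moves.append((digit - current) % 10)  # observable movement
        current = digit
    return moves

def pins_consistent_with(moves):
    """Every PIN that would produce the observed moves: one candidate
    per possible (unseen) start digit."""
    candidates = []
    for start in range(10):
        current, pin = start, []
        for m in moves:
            current = (current + m) % 10
            pin.append(current)
        candidates.append(pin)
    return candidates

pin = [4, 9, 0, 3]                       # hypothetical PIN
observed = steps_for_pin(pin, random.randrange(10))
candidates = pins_consistent_with(observed)
assert pin in candidates and len(candidates) == 10   # 1-in-10 guess
```

    Even a perfect observation of the movements leaves the attacker with ten equally plausible PINs, matching the abstract's point that the first digit must be guessed.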

    Building and evaluating an inconspicuous smartphone authentication method

    Master's thesis in Informatics Engineering, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2013.

    The smartphones we carry with us are ever more entangled in our intimate lives. These devices enable new ways of working, socialising and even entertaining ourselves. However, they have also created new risks to our privacy. A common way to mitigate these risks is to configure the device to lock after a period of inactivity; to use it again, an authentication barrier must then be overcome. Thus, if the device falls into someone else's hands, that person cannot use it in a way that poses a threat. Unlocking with authentication is therefore the mechanism that commonly guards the privacy of smartphone users. However, the authentication methods in use today are largely a legacy of desktop computers. Passwords and personal identification numbers are made less secure by the fact that people create mechanisms to memorise them more easily. Moreover, entering these codes is inconvenient, especially in the mobile context, where interactions tend to be short and the need to authenticate gets in the way of other tasks. Recently, Android smartphones began to offer another authentication method, which has gained considerable adoption: the user's secret code is a sequence of strokes drawn over a 3-by-3 grid of dots shown on the touchscreen. However, both textual/numeric codes and Android patterns are susceptible to rudimentary attacks. In both cases, the input channel is touch on the touchscreen and the output channel is visual. This allows other people to observe the entry of the secret directly, or to later make out the marks left by the fingers on the touch surface. Moreover, these methods are not accessible to some classes of users, notably blind people.

    This dissertation proposes that smartphone authentication methods can be better adapted to the mobile context; in particular, that the ability to interact with the device inconspicuously could offer users a greater degree of control and the capacity to protect themselves against observation of their secret code. To that end, an input modality that does not require the visual channel was identified: sequences of location-independent taps on the touchscreen. These patterns can resemble (but are not limited to) rhythms or Morse code. The first contribution of this work is an algorithmic technique for detecting these tap sequences, or tap phrases, as authentication keys. This recogniser requires only a single demonstration for setup, which distinguishes it from other approaches that need several examples to train the algorithm. The recogniser was evaluated and shown to be accurate and computationally efficient. This contribution was complemented by an Android application that demonstrates the concept. The second contribution is an exploration of the human factors involved in using tap phrases for authentication, grounded in three user studies in which the proposed authentication method is compared with the most common alternatives: PIN and the Android pattern. The first study (N=30) compares the three methods with respect to resistance to observation and to usability, understood in a broad sense that includes user experience (UX). The results suggest that the usability of the three approaches is comparable and that, under perfect observation conditions, an attacker has a high chance of success against all three. The second study (N=19) again compares the three methods, this time in an inconspicuous authentication scenario: participants tried to enter their codes with the device held under a table, out of view. In this case, authentication with tap phrases is shown to remain usable, whereas the other alternatives suffer a substantial drop in usability measures. This suggests that tap-phrase authentication supports inconspicuous interaction, giving users the possibility of protecting themselves against potential attackers. The third study (N=16) is an evaluation of the usability and acceptance of the authentication method with blind users; it also elicits concealment strategies supported by tap-phrase authentication. The results suggest that the technique is suitable for these users as well.

    As our intimate lives become more tangled with the smartphones we carry, privacy has become an increasing concern. A widely available option to mitigate security risks is to set a device so that it locks after a period of inactivity, requiring users to authenticate for subsequent use. Current methods for establishing one's identity are known to be susceptible to even rudimentary observation attacks. The mobile context in which interactions with smartphones are prone to occur further facilitates shoulder-surfing. We submit that smartphone authentication methods can be better adapted to the mobile context. Namely, the ability to interact with the device in an inconspicuous manner could offer users more control and the ability to self-protect against observation. Tapping is a communication modality between a user and a device that can be appropriated for that purpose. This work presents a technique for employing sequences of taps, or tap phrases, as authentication codes. An efficient and accurate tap phrase recognizer, that does not require training, is presented. Three user studies were conducted to compare this approach to the current leading methods.
Results indicate that the tapping method remains usable even under inconspicuous authentication scenarios. Furthermore, we found that it is appropriate for blind users, to whom usability barriers and security risks are of special concern.
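    A one-demonstration tap-phrase recogniser of the kind described can be sketched by comparing normalised inter-tap intervals. The tolerance and the matching rule below are invented for illustration and are simpler than the thesis's algorithm:

```python
def intervals(timestamps):
    """Inter-tap gaps of a tap phrase, normalised by total duration so
    that tapping the same rhythm faster or slower still matches."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    total = sum(gaps)
    return [g / total for g in gaps]

def matches(template_ts, attempt_ts, tol=0.12):
    """Accept iff the tap count matches and every normalised gap is
    within `tol` of the single enrolled demonstration. A sketch of the
    one-demonstration idea only; `tol` is an assumed value."""
    if len(template_ts) != len(attempt_ts):
        return False
    return all(abs(t - a) <= tol
               for t, a in zip(intervals(template_ts), intervals(attempt_ts)))

enrolled = [0.0, 0.2, 0.4, 1.0]                     # one demonstration (hypothetical)
assert matches(enrolled, [0.0, 0.3, 0.6, 1.5])      # same rhythm, tapped slower
assert not matches(enrolled, [0.0, 0.5, 0.6, 1.0])  # different rhythm
```

    Because only gap ratios are compared, a single demonstration suffices for enrolment, mirroring the property the abstract highlights.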

    Lightweight Algorithms for Depth Sensor Equipped Embedded Devices

    Depth sensors have appeared in a variety of embedded devices, including tablets, smartphones and web cameras. This has provided a new mode of sensing, where it is possible to record an image together with the distance to everything in it. Some pervasive computing applications have taken advantage of depth sensors, such as crowd-sourced 3D indoor mapping. However, research in this area is still in its infancy, and some questions remain before widespread adoption: what kinds of applications can take advantage of depth sensor equipped embedded devices, and how can the necessary algorithms be implemented efficiently on resource-constrained embedded hardware? The purpose of this thesis is to address these questions. We do so by presenting three prototype systems and accompanying lightweight algorithms. Each algorithm uses depth sensors to overcome problems in visual pattern matching and is lightweight enough to run on embedded platforms, while achieving better results than the current state of the art by several metrics: pattern matching accuracy, asymptotic complexity, run time and memory use. The first contribution of this thesis is QuickFind, a fast segmentation and object detection algorithm, applied to a prototype augmented reality assembly aid implemented on a Raspberry Pi. We test it against two related algorithms: Histogram of Oriented Gradients (HOG), a popular object detection algorithm, and Histogram of Oriented Normal Vectors (HONV), a state-of-the-art algorithm specifically designed for use with depth sensors. Our test data is the RGB-D Scenes v1 dataset, consisting of 6 object classes in 1434 scenes of domestic and office environments. On our test platform QuickFind achieved the best results, with 1/18 the run time, 1/18 the power use and 1/3 the memory use of HOG, and 1/279 the run time, 1/279 the power use and 1/15 the memory use of HONV. QuickFind also has a lower asymptotic upper bound and almost double the average precision compared to HOG and HONV. The second contribution of this thesis is WashInDepth, a fast hand gesture recognition algorithm, applied to a prototype that monitors correct hand washing and implemented on a Compute Stick; we again test it against HOG and HONV. WashInDepth is an extension of QuickFind: segmentation is replaced with a background removal step, and QuickFind features are used to perform hand gesture recognition on video recorded from a depth sensor. We tested with 15 participants, with 3 videos each for a total of 45 videos. WashInDepth achieved the best results, with an average of 94% accuracy and a run time of 11 ms; HOG achieved 86% average accuracy and 19 ms average run time; HONV achieved 88% average accuracy and 22 ms average run time. All three algorithms had average memory usage within 4 KiB of each other. The third contribution of this thesis is VeinDeep, which performs identification using vein pattern recognition. We repurpose depth sensors for this task; as far as we are aware, this is the first instance where depth sensors have been used for this purpose. The prototype application for VeinDeep is designed for securing smartphones with an integrated depth sensor. As such devices were not widely available at the time of writing, the system is simulated on a Compute Stick with an attached depth sensor. We test VeinDeep against two related algorithms: Hausdorff distance, an older but popular algorithm for vein pattern recognition, and Kernel distance, an algorithm more recently applied to vein pattern recognition. We tested with 20 participants, with 6 images per hand for a total of 240 images. On our embedded platform VeinDeep achieved the best results, with 1/6 the run time and 2/3 the memory use of Hausdorff distance, and 1/3 the run time and 1/2 the memory use of Kernel distance. VeinDeep had a precision of 0.98 at a recall of 0.83; at the same recall level, Hausdorff distance had a precision of 0.5 and Kernel distance a precision of 0.9. VeinDeep also had lower average complexity than Hausdorff and Kernel distance. Although the prototypes in this thesis focus on three specific problems, the accompanying algorithms are general purpose. We hope that the presented content enlightens the reader and encourages new applications that make use of depth sensor equipped embedded devices.
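    The kind of depth-based background removal described for WashInDepth can be sketched as a simple depth-band filter. The band limits and the toy depth image below are illustrative, not values from the thesis:

```python
import numpy as np

def remove_background(depth, near=400, far=900):
    """Keep only pixels within a working depth band (millimetres,
    assumed limits). Pixels outside the band, e.g. a distant wall,
    are zeroed so later feature extraction sees only the hand."""
    mask = (depth >= near) & (depth <= far)
    return np.where(mask, depth, 0), mask

# Toy 3x4 "depth image": a hand at ~500 mm in front of a wall at ~1200 mm.
depth = np.array([[1200, 1200, 1200, 1200],
                  [1200,  510,  495, 1200],
                  [1200,  505, 1200, 1200]])
fg, mask = remove_background(depth)
print(int(mask.sum()))   # number of foreground pixels that survive
```

    A fixed depth band is far cheaper than full segmentation, which is consistent with the thesis's goal of running on resource-constrained embedded platforms.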

    Investigating New Forms of Single-handed Physical Phone Interaction with Finger Dexterity

    With phones becoming more powerful and such an essential part of our lives, manufacturers are creating new device forms and interactions to better support even more diverse functions. A common goal is to enable a larger input space and expand the input vocabulary using new physical phone interactions beyond touchscreen input. This thesis explores how our hand and finger dexterity can expand physical phone interactions. To understand how we can physically manipulate a phone using the fine motor skills of the fingers, we identify and evaluate single-handed "dexterous gestures". Four manipulations are defined: shift, spin (yaw axis), rotate (roll axis) and flip (pitch axis), with a formative survey showing that all except flip have been performed for various reasons. A controlled experiment examines the speed, behaviour, and preference of these manipulations in the form of dexterous gestures, considering two directions and two movement magnitudes. Using a heuristic recognizer for spin, rotate, and flip, a one-week usability experiment finds that increased practice and familiarity improve the speed and comfort of dexterous gestures. With the confirmation that users can loosen their grip and perform gestures with finger dexterity, we investigate the performance of one-handed touch input on the side of a mobile phone. An experiment examines grip change and subjective preference when reaching for side targets using different fingers. Two follow-up experiments examine taps and flicks using the thumb and index finger in a new two-dimensional input space. We simulate a side-touch sensor with a combination of capacitive sensing and motion tracking to distinguish touches on the lower, middle, or upper edges. We further focus on physical phone interaction with a new phone form factor by exploring and evaluating single-handed folding interactions suitable for "modern flip phones": smartphones with a bendable full-screen touch display. Three categories of interactions are identified: only-fold, touch-enhanced fold, and fold-enhanced touch, in which gestures are created using fold direction, fold magnitude, and touch position. A prototype evaluation device is built to resemble current flip phones, but with a modified spring system to enable folding in both directions. A study investigates performance and preference for 30 fold gestures, revealing which are most promising. Overall, our exploration shows that users can loosen their grip to physically interact with phones in new ways, and these interactions could be practically integrated into daily phone applications.
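    A heuristic recognizer for spin, rotate, and flip of the kind mentioned above can be sketched by picking the dominant gyroscope axis. The axis convention (x=pitch, y=roll, z=yaw) and the motion threshold below are assumptions, not the thesis's implementation:

```python
def classify(gyro_samples, min_energy=1.0):
    """Label a motion burst as flip (pitch), rotate (roll) or spin
    (yaw) by its dominant rotation axis. `gyro_samples` is a list of
    (gx, gy, gz) angular rates in rad/s; `min_energy` is an assumed
    rejection threshold for incidental hand motion."""
    energy = [sum(abs(s[axis]) for s in gyro_samples) for axis in range(3)]
    if max(energy) < min_energy:
        return None                       # too little motion: no gesture
    # Assumed mapping: x -> pitch (flip), y -> roll (rotate), z -> yaw (spin).
    return ("flip", "rotate", "spin")[energy.index(max(energy))]

burst = [(0.1, 0.0, 3.2), (0.0, 0.2, 4.1), (0.1, 0.1, 3.8)]
print(classify(burst))   # dominant z-axis rotation, so "spin"
```

    Comparing per-axis energy rather than fitting a full motion model keeps the recognizer cheap enough to run continuously on a phone, in line with the heuristic approach the abstract describes.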

    Interacting with mobile devices using magnetic fields

    Enhancing the user experience on mobile devices by taking advantage of embedded sensors has become an increasingly popular research field. In this thesis we investigate how magnetic sensors, together with magnets, can be exploited to create a new type of input device for current smartphones. In particular, we use a small set of magnets, varied in shape and strength, and an Android-powered smartphone. We alter the magnetic field sensed by the magnetometer by moving a magnet close to it in different ways, then try to identify a recurring pattern and associate it with a predefined action. We created an open-source Android application to show the possible interactions on and around the device. Readings from the sensor are filtered and displayed in a number of different and more convenient ways. We conclude our work with a list of possible scenarios that could benefit from this approach.
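    The idea of detecting a nearby magnet from magnetometer readings can be sketched as a deviation test against the Earth-field baseline. All values below are illustrative, not taken from the thesis:

```python
from math import sqrt

def magnitude(sample):
    """Euclidean magnitude of one (x, y, z) magnetometer reading."""
    x, y, z = sample
    return sqrt(x * x + y * y + z * z)

def detect_magnet(samples, baseline=50.0, threshold=150.0):
    """Flag samples whose field magnitude deviates strongly from an
    assumed Earth-field baseline (the natural field is roughly
    25-65 uT); a magnet near the sensor easily exceeds this. The
    thesis's app additionally filters the signal before matching
    recurring patterns to actions."""
    return [abs(magnitude(s) - baseline) > threshold for s in samples]

# Hypothetical readings in uT: ambient, magnet nearby, ambient again.
readings = [(30, 20, 35), (400, 100, 50), (28, 22, 33)]
print(detect_magnet(readings))
```

    Using the field magnitude rather than the raw components makes the test independent of device orientation, which is why it is a common first filtering step before pattern matching.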

    Moving usable security research out of the lab: evaluating the use of VR studies for real-world authentication research

    Empirical evaluations of real-world research artefacts that derive results from observations and experiments are a core aspect of usable security research. Expert interviews as part of this thesis revealed that the costs associated with developing and maintaining physical research artefacts often amplify human-centred usability and security research challenges. On top of that, ethical and legal barriers often make usability and security research in the field infeasible. Researchers have begun simulating real-life conditions in the lab to contribute to ecological validity. However, studies of this type are still restricted to what can be replicated in physical laboratory settings. Furthermore, when evaluating hardware prototypes, user study subjects have historically been recruited mainly from local areas. The human-centred research communities have recognised and partially addressed these challenges using online studies such as surveys that allow for the recruitment of large and diverse samples as well as learning about user behaviour. However, human-centred security research involving hardware prototypes is often concerned with human factors and their impact on the prototypes’ usability and security, which cannot be studied using traditional online surveys. To work towards addressing the current challenges and facilitating research in this space, this thesis explores if – and how – virtual reality (VR) studies can be used for real-world usability and security research. It first validates the feasibility and then demonstrates the use of VR studies for human-centred usability and security research through six empirical studies, including remote and lab VR studies as well as video prototypes as part of online surveys. It was found that VR-based usability and security evaluations of authentication prototypes, where users provide touch, mid-air, and eye-gaze input, closely match the findings from the original real-world evaluations. This thesis further investigated the effectiveness of VR studies by exploring three core topics in the authentication domain. First, the challenges around in-the-wild shoulder surfing studies were addressed: two novel VR shoulder surfing methods were implemented to contribute towards realistic shoulder surfing research and to explore the use of VR studies for security evaluations, which was found to allow researchers to bridge the methodological gap between lab and field studies. Second, the ethical and legal barriers to conducting in situ usability research on authentication systems were addressed: it was found that VR studies can represent plausible authentication environments and that a prototype’s in situ usability evaluation results deviate from traditional lab evaluations. Finally, this thesis contributes a novel evaluation method to remotely study interactive VR replicas of real-world prototypes, allowing researchers to move experiments that involve hardware prototypes out of physical laboratories and potentially increase a sample’s diversity and size. The thesis concludes by discussing the implications of using VR studies for prototype usability and security evaluations. It lays the foundation for establishing VR studies as a powerful, well-evaluated research method and sets out its methodological advantages and disadvantages.