
    Different strokes for different folks? Revealing the physical characteristics of smartphone users from their swipe gestures

    Anthropometrics show that the lengths of many human body segments follow a common proportional relationship. Knowing the length of one body segment – such as the thumb – therefore potentially provides a predictive route to other physical characteristics, such as overall standing height. In this study, we examined whether the length of a person's thumb could feasibly be revealed from the way in which they complete swipe gestures on a touchscreen-based smartphone. From a corpus of approx. 19,000 swipe gestures captured from 178 volunteers, we found that people with longer thumbs complete swipe gestures with shorter completion times, higher speeds and higher accelerations than people with shorter thumbs. These differences were also observed between our male and female volunteers, along with additional differences in the amount of touch pressure applied to the screen. Results are discussed in terms of linking behavioural and physical biometrics.
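    A minimal sketch, under assumed data structures, of how per-swipe kinematic features like those reported above (completion time, speed, acceleration) could be derived from raw touch samples. The (x, y, t) tuple format and the feature names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def swipe_features(samples):
    """samples: list of (x_px, y_px, t_seconds) points for one swipe (assumed format)."""
    pts = np.asarray(samples, dtype=float)
    xy, t = pts[:, :2], pts[:, 2]
    seg = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # distance per sample step
    dt = np.diff(t)
    speed = seg / dt                                   # instantaneous speed (px/s)
    accel = np.diff(speed) / dt[1:]                    # instantaneous acceleration
    return {
        "completion_time": t[-1] - t[0],
        "path_length": seg.sum(),
        "mean_speed": speed.mean(),
        "peak_acceleration": np.abs(accel).max(),
    }

print(swipe_features([(10, 300, 0.00), (60, 295, 0.05),
                      (140, 290, 0.10), (230, 288, 0.15)]))
```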

    Using Hover to Compromise the Confidentiality of User Input on Android

    We show that the new hover (floating touch) technology, available in a number of today's smartphone models, can be abused by any Android application running with the common SYSTEM_ALERT_WINDOW permission to record all touchscreen input into other applications. Leveraging this attack, a malicious application running on the system is able to profile the user's behavior, capture sensitive input such as passwords and PINs, and record all of the user's social interactions. To evaluate our attack we implemented Hoover, a proof-of-concept malicious application that runs in the system background and records all input to foreground applications. We evaluated Hoover with 40 users, across two different Android devices and two input methods, stylus and finger. In the case of touchscreen input by finger, Hoover estimated the positions of users' clicks within an error of 100 pixels and keyboard input with an accuracy of 79%. Hoover captured users' input by stylus even more accurately, estimating users' clicks within 2 pixels and keyboard input with an accuracy of 98%. We discuss ways of mitigating this attack and show that this cannot be done by simply restricting access to permissions or imposing additional cognitive load on users, since this would significantly constrain the intended use of the hover technology.
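    The abstract does not give Hoover's internals, but the inference step it describes can be sketched: hover events stop the instant the finger actually touches the screen, so a gap in the hover stream marks a click, and the last hover coordinate before the gap approximates its position. The event format and gap threshold below are assumptions for illustration, not the Hoover code.

```python
def estimate_clicks(hover_events, gap_threshold=0.15):
    """hover_events: time-sorted (t_seconds, x, y) hover samples (assumed format)."""
    clicks = []
    for prev, cur in zip(hover_events, hover_events[1:]):
        if cur[0] - prev[0] > gap_threshold:   # hover stream paused -> finger was down
            clicks.append((prev[1], prev[2]))  # last pre-touch hover position
    return clicks

stream = [(0.00, 120, 410), (0.03, 122, 408), (0.06, 124, 405),
          (0.40, 240, 300), (0.43, 242, 299)]  # gap after t=0.06 s implies a click
print(estimate_clicks(stream))                 # -> [(124, 405)]
```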

    Machine learning techniques for implicit interaction using mobile sensors

    Interactions on mobile devices normally happen in an explicit manner, which means that they are initiated by the users. Yet users also interact implicitly with their devices, typically without being aware of it. For instance, our hand pose changes naturally when we type text messages. Whilst the touchscreen captures finger touches, the hand movements that accompany this interaction go unused. If this implicit hand movement is observed, it can be used as additional information to support or to enhance the users' text-entry experience. This thesis investigates how implicit sensing can be used to improve the quality of existing, standard interaction techniques. In particular, it looks into enhancing front-of-device interaction through implicit sensing of back-of-device touch and hand movement. We conduct this investigation using machine learning techniques, examining how sensor data obtained via implicit sensing can be used to predict certain aspects of an interaction. For instance, one of the questions this thesis attempts to answer is whether hand movement during a touch-targeting task correlates with the touch position. This is a complex relationship to understand, but it can be explained well through machine learning: such correlation can be measured, quantified, understood and used to make predictions of future touch positions. Furthermore, this thesis evaluates the predictive power of the sensor data through a number of studies. In Chapter 5 we show that probabilistic modelling of sensor inputs and recorded touch locations can be used to predict the general area of future touches on the touchscreen. In Chapter 7, using SVM classifiers, we show that implicit sensor data from general mobile interactions is user-specific and can be used to identify users implicitly. In Chapter 6, we show that touch-interaction errors can be detected from sensor data: in our experiment, there are sufficiently distinguishable patterns between normal interaction signals and signals that are strongly correlated with interaction errors. In all studies, we show that a performance gain can be achieved by combining sensor inputs.
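    As a hedged sketch of the Chapter 7 idea (not the thesis code), the snippet below trains an SVM to tell two users apart from implicit motion features; the synthetic Gaussian features stand in for real accelerometer/gyroscope statistics captured around each touch.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two simulated users with slightly different motion signatures (assumption).
X = np.vstack([rng.normal(0.0, 1.0, (200, 6)),
               rng.normal(0.8, 1.2, (200, 6))])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"user-identification accuracy: {clf.score(X_te, y_te):.2f}")
```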

    Behaviour-aware mobile touch interfaces

    Mobile touch devices have become ubiquitous everyday tools for communication and information, as well as for capturing, storing and accessing personal data. They are often seen as personal devices, linked to individual users, who access the digital part of their daily lives via hand-held touchscreens. This personal use and the importance of the touch interface motivate the main assertion of this thesis: Mobile touch interaction can be improved by enabling user interfaces to assess and take into account how the user performs these interactions. This thesis introduces the new term "behaviour-aware" to characterise such interfaces. These behaviour-aware interfaces aim to improve interaction by utilising behaviour data: Since users perform touch interactions for their main tasks anyway, inferring extra information from said touches may, for example, save users' time and reduce distraction, compared to explicitly asking them for this information (e.g. user identity, hand posture, further context). Behaviour-aware user interfaces may utilise this information in different ways, in particular to adapt to users and contexts. Important questions for this research thus concern understanding behaviour details and influences, modelling said behaviour, and integrating inference and (re)action into the user interface. In several studies covering both analyses of basic touch behaviour and a set of specific prototype applications, this thesis addresses these questions and explores three application areas and goals: 1) Enhancing input capabilities – by modelling users' individual touch targeting behaviour to correct future touches and increase touch accuracy. The research reveals challenges and opportunities of behaviour variability arising from factors including target location, size and shape, hand and finger, stylus use, mobility, and device size. The work further informs modelling and inference based on targeting data, and presents approaches for simulating touch targeting behaviour and detecting behaviour changes. 2) Facilitating privacy and security – by observing touch targeting and typing behaviour patterns to implicitly verify user identity or distinguish multiple users during use. The research shows and addresses mobile-specific challenges, in particular changing hand postures. It also reveals that touch targeting characteristics provide useful biometric value both in the lab and in everyday typing. Influences of common evaluation assumptions are assessed and discussed as well. 3) Increasing expressiveness – by enabling interfaces to pass on behaviour variability from input to output space, studied with a keyboard that dynamically alters the font based on current typing behaviour. Results show that with these fonts users can distinguish basic contexts as well as individuals. They can also explicitly control the font influences for personal communication with creative effects. This thesis further contributes concepts and implemented tools for collecting touch behaviour data, analysing and modelling touch behaviour, and creating behaviour-aware and adaptive mobile touch interfaces. Together, these contributions support researchers and developers in investigating and building such user interfaces. Overall, this research shows how variability in mobile touch behaviour can be addressed and exploited for the benefit of the users.
The thesis further discusses opportunities for transfer and reuse of touch behaviour models and information across applications and devices, for example to address trade-offs between privacy/security and usability. Finally, the work concludes by reflecting on the general role of behaviour-aware user interfaces, proposing to view them as a way of embedding expectations about user input into interactive artefacts.
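    A minimal sketch, on simulated data, of the touch-offset idea behind goal 1: learn a user's systematic targeting error from pairs of recorded touches and intended targets, then shift future touches accordingly. The linear model and pixel values are assumptions, not the thesis implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
targets = rng.uniform(0, 1000, (300, 2))            # intended target centres (px)
user_bias = np.array([-8.0, 12.0])                  # simulated systematic offset
touches = targets + user_bias + rng.normal(0, 4, (300, 2))

model = LinearRegression().fit(touches, targets)    # recorded touch -> intended point
raw_touch = np.array([[500.0, 500.0]])
print("corrected touch:", model.predict(raw_touch)) # roughly (508, 488)
```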

    Autenticación Biométrica basada en interacción con pantalla táctil (Biometric Authentication Based on Touchscreen Interaction)

    The great popularity of smartphones and their increasing use in everyday applications mean that they now carry sensitive information such as bank account details, passwords and emails. Motivated by the limited security of traditional mechanisms (e.g. PIN codes, secret patterns), which can be easily broken, new methods using biometrics to authenticate users have been developed. One of these methods is active (continuous) authentication, in which the user is passively authenticated in the background based on their biometrics. This way, security is guaranteed beyond the entry point, ensuring that the person using the device is the same user who enrolled. Among the methods for active authentication, this work studies users' normal interaction with touchscreens. Every person behaves differently when swiping their fingers on a touchscreen. Given the frequency with which touch operations are performed, characteristic habits such as the strength, rhythm or angle used result in discriminative patterns that can be used to authenticate users. In the present work, we explore two recognition approaches for authentication based on touchscreen interaction: a discriminative approach based on Support Vector Machines, and a statistical approach based on adapted Gaussian Mixture Models. Additionally, a system based on the fusion of the two is studied. The database used for the analysis consists of touch data from the most common operations, i.e. swipes made with one finger on the screen, collected from 190 subjects. A study from the literature is used as a reference, and its results are improved upon. Using blocks of ten strokes for authentication, Equal Error Rates between 8% and 22% are obtained for different kinds of touch operations. While the statistical approach performs slightly worse than the Support Vector Machines, it is capable of authenticating users who perform badly with the other system because of large intra-user variability; in this way, the two systems complement each other when fused. The performance across different kinds of touch operations shows that some gestures hold more user-specific information and are more discriminative than others (in particular, horizontal swipes appear to be more discriminative than vertical ones). The experimental results show that touch biometrics are discriminative enough for person recognition and are a promising method for active authentication.
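    A hedged sketch of the evaluation just described: per-block scores from an SVM-style and a GMM-style verifier are fused by averaging, and the Equal Error Rate is read off the ROC curve. All scores below are synthetic; the real systems and data are as described in the abstract.

```python
import numpy as np
from sklearn.metrics import roc_curve

def eer(labels, scores):
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    i = np.argmin(np.abs(fpr - fnr))       # operating point where FAR ~= FRR
    return (fpr[i] + fnr[i]) / 2

rng = np.random.default_rng(2)
labels = np.r_[np.ones(500), np.zeros(500)]          # genuine vs impostor blocks
svm = np.r_[rng.normal(1.0, 1.0, 500), rng.normal(0.0, 1.0, 500)]
gmm = np.r_[rng.normal(0.9, 1.1, 500), rng.normal(0.0, 1.1, 500)]

for name, s in [("SVM", svm), ("GMM", gmm), ("fused", (svm + gmm) / 2)]:
    print(f"{name:>5} EER: {eer(labels, s):.3f}")
```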

    Effects of age on smartphone and tablet usability, based on eye-movement tracking and touch-gesture interactions

    The aim of this thesis is to provide insight into the effects of user age on interactions with smartphone and tablet applications. The study considered two interaction methods to investigate the effects of user age on the usability of smartphones and tablets of different sizes: 1) eye movements during browsing and 2) touch-gesture interactions. In the eye-movement studies, an eye tracker was used to trace and record users' eye movements, which were later analysed to understand the effects of age and screen size on browsing effectiveness. In the gesture-interaction studies, an application developed for smartphones traced and recorded users' touch-gesture data, which were later analysed to investigate the effects of age and screen size on touch-gesture performance. The motivation for our studies is summarised as follows: 1) the increasing number of elderly people in our society, 2) the widespread use of smartphone technology across the world, 3) the need to understand the difficulties elderly people face when interacting with smartphone technology, and 4) the aim of providing the existing body of literature with new understanding of the effects of ageing on smartphone usability. The work of this thesis comprises five research projects conducted in two stages. Stage One included two studies that used eye-movement analysis to investigate the effects of user age and the influence of screen size on browsing smartphone interfaces. The first study examined the scan-path dissimilarity of browsing smartphone applications for elderly users (60+) and younger users (20-39). The results revealed that scan-path dissimilarity in browsing smartphone applications was higher for elderly users (i.e., age-driven) than for younger users. The results also revealed that browsing smartphone applications was stimulus-driven rather than screen-size-driven. The second study was conducted to understand the difficulties of information processing when browsing smartphone applications for elderly (60+), middle-aged (40-59) and younger (20-39) users. The evaluation was performed using three different screen sizes of smartphone and tablet devices. The results revealed that processing both local and global information on smartphone/tablet interfaces was more difficult for elderly users than for the other age groups. Across all age groups, browsing on the smaller smartphone proved more difficult than on the larger screen sizes. Stage Two included three studies that investigated the difficulties elderly users face in interacting with gesture-based applications compared to younger users, and evaluated the possibility of classifying a user's age group from on-screen gestures. The first study investigated the effects of user age and screen size on performing intuitive swipe gestures in four directions: down, left, right and up. The results revealed that swipe-gesture performance was influenced by user age, screen size and swiping orientation. The second study investigated the effects of user age, screen size and gesture complexity on performing accurate gestures on smartphones and tablets using gesture-based features. The results revealed that the elderly were less accurate, less efficient, slower and exerted more pressure on the touchscreen when performing gestures than the younger users. On a small smartphone, all users were less accurate in gesture performance – more so the elderly – compared to mini-sized tablets. Users, especially the elderly, were also less efficient and less accurate when performing complex gestures on the small smartphone compared to the mini-tablet. The third study investigated the possibility of classifying a user's age group using touch-gesture features (i.e., gesture speed, gesture accuracy, movement time, and finger pressure) on smartphones; we provide evidence that such classification is possible in both user-dependent and user-independent scenarios. The accuracy of age-group classification on smaller screens was higher than on devices with larger screens, since larger screens are much easier to use for users across both age groups. In addition, age-group classification accuracy was higher for younger users than for elderly users: some elderly users performed gestures in the same way as younger users do, possibly owing to longer experience with smartphones than the typical elderly user. Overall, our results provide evidence that elderly users encounter difficulties when interacting with smartphones and tablet devices compared to younger users, and that it is possible to classify a user's age group from their ability to perform touch gestures on smartphones and tablets. Designers of smartphone interfaces should remove barriers that make browsing and processing local and global information in smartphone applications difficult. Furthermore, larger screen sizes should be considered for elderly users, and smartphones could include automatically customisable user interfaces suited to elderly users' abilities, accommodating their needs so that they can be as efficient as younger users. The outcomes of this research could enhance the design of smartphones and tablets, as well as the applications that run on such devices, especially those aimed at elderly users. Such devices and applications could play an effective role in enhancing elderly people's activities of daily living.
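    An illustrative sketch (on synthetic data, not the study's) of age-group classification from the four gesture features named above: gesture speed, gesture accuracy, movement time and finger pressure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# Columns: speed, accuracy, movement time (s), pressure (assumed distributions).
young = np.column_stack([rng.normal(1.6, 0.3, 300), rng.normal(0.90, 0.05, 300),
                         rng.normal(0.7, 0.2, 300), rng.normal(0.45, 0.10, 300)])
elderly = np.column_stack([rng.normal(1.1, 0.3, 300), rng.normal(0.80, 0.08, 300),
                           rng.normal(1.1, 0.3, 300), rng.normal(0.60, 0.12, 300)])
X = np.vstack([young, elderly])
y = np.array([0] * 300 + [1] * 300)      # 0 = younger, 1 = elderly

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```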

    Continuous touchscreen biometrics: authentication and privacy concerns

    In the age of instant communication, smartphones have become an integral part of our daily lives, with a significant portion of the population using them for tasks such as messaging, banking, and even recording sensitive health information. However, this increasing reliance on smartphones has also made them a prime target for cybercriminals, who can use various tactics to gain access to our sensitive data. In light of this, it is crucial that individuals and organisations prioritise the security of their smartphones to protect against the abundance of threats around us. While there are dozens of methods to verify the identity of users before granting them access to a device, many lack effectiveness in terms of usability or carry potential vulnerabilities. In this thesis, we aim to advance the field of touchscreen biometrics, which promises to alleviate some of these recurring issues. This area of research deals with the use of touch interactions, such as gestures and finger movements, as a means of identifying or authenticating individuals. First, we provide a detailed explanation of the common procedure for evaluating touch-based authentication systems and examine the potential pitfalls and concerns that can arise during this process. The impact of these pitfalls is evaluated and quantified on a newly collected large-scale dataset. We also discuss the prevalence of these issues in the related literature and provide recommendations for best practice when developing continuous touch-based authentication systems. Then we provide a comprehensive overview of the techniques commonly used for modelling touch-based authentication, including the various features, classifiers and aggregation methods employed in this field. We compare the approaches under controlled, fair conditions in order to determine the top-performing techniques and, based on our findings, introduce methods that outperform the current state of the art. Finally, as a conclusion to our advancements in the development of touchscreen authentication technology, we explore the negative effects our work may have on ordinary users of mobile websites and applications. In particular, we look into threats that can affect the privacy of the user, such as tracking them and revealing their personal information based on their behaviour on smartphones.
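    One aggregation step the abstract alludes to can be sketched as follows, under assumed score semantics: noisy per-gesture authentication scores are smoothed over a sliding window before being compared to a decision threshold. The score distributions and threshold are illustrative, not the thesis's models.

```python
import numpy as np

def continuous_decision(scores, window=10, threshold=0.5):
    """scores: per-gesture genuine-likeness scores in [0, 1], newest last."""
    recent = np.asarray(scores[-window:], dtype=float)
    return recent.mean() >= threshold      # accept only if the windowed mean clears it

rng = np.random.default_rng(4)
genuine = rng.beta(5, 2, 30)               # synthetic genuine-user scores
impostor = rng.beta(2, 5, 30)              # synthetic impostor scores
print("genuine accepted: ", continuous_decision(list(genuine)))
print("impostor accepted:", continuous_decision(list(impostor)))
```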