645 research outputs found

    Exploring the Touch and Motion Features in Game-Based Cognitive Assessments

    Early detection of cognitive decline is important for timely intervention and treatment strategies to prevent further deterioration or the development of more severe cognitive impairment, as well as to identify at-risk individuals for research. In this paper, we explore the feasibility of using data collected from the built-in sensors of mobile phones, together with gameplay performance, in mobile-game-based cognitive assessments. Twenty-two healthy participants took part in a two-session experiment in which they were asked to take a series of standard cognitive assessments and then play three popular mobile games while user-game interaction data were passively collected. The results from bivariate analysis reveal correlations between our proposed features and scores obtained from paper-based cognitive assessments. Our results show that touch gestural interaction and device motion patterns can be used as supplementary features in mobile-game-based cognitive measurement. This study provides initial evidence that game-related metrics from existing off-the-shelf games have the potential to be used as proxies for conventional cognitive measures, specifically for visuospatial function, visual search capability, mental flexibility, memory and attention.
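The bivariate analysis described above amounts to correlating each proposed touch/motion feature with each paper-based assessment score. The sketch below illustrates this with synthetic data; the feature name (mean swipe speed), the coupling between the variables, and the data itself are assumptions for illustration, not the study's pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 22  # sample size matching the study's 22 participants

# Hypothetical per-participant feature: mean swipe speed (px/s) during gameplay
swipe_speed = rng.normal(300.0, 50.0, n)
# Synthetic cognitive assessment score, weakly coupled to the feature
cog_score = 0.05 * swipe_speed + rng.normal(0.0, 5.0, n)

r, p = pearsonr(swipe_speed, cog_score)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```

In practice one would repeat this for every (feature, assessment) pair and correct for multiple comparisons before interpreting the correlations.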

    Discoverable Free Space Gesture Sets for Walk-Up-and-Use Interactions

    Advances in technology are fueling a movement toward ubiquity for beyond-the-desktop systems. Novel interaction modalities, such as free-space or full-body gestures, are becoming more common, as demonstrated by the rise of systems such as the Microsoft Kinect. However, much of the interaction design research for such systems is still focused on desktop and touch interactions. Current thinking on free-space gestures is limited in capability and imagination, and most gesture studies have not attempted to identify gestures appropriate for public walk-up-and-use applications. A walk-up-and-use display must be discoverable, such that first-time users can use the system without any training; flexible; and not fatiguing, especially in the case of longer-term interactions. One mechanism for defining gesture sets for walk-up-and-use interactions is a participatory design method called gesture elicitation. This method has been used to identify several user-generated gesture sets and has shown that user-generated sets are preferred by users over those defined by system designers. However, for these studies to be successfully implemented in walk-up-and-use applications, there is a need to understand which components of these gestures are semantically meaningful (i.e. do users distinguish between using their left and right hand, or are those semantically the same thing?). Thus, defining a standardized gesture vocabulary for coding, characterizing, and evaluating gestures is critical. This dissertation presents three gesture elicitation studies for walk-up-and-use displays that employ a novel gesture elicitation methodology, alongside a novel coding scheme for gesture elicitation data that focuses on the features most important to users’ mental models. Generalizable design principles, based on the three studies, are then derived and presented (e.g. changes in speed are meaningful for scroll actions in walk-up-and-use displays but not for paging or selection).
The major contributions of this work are: (1) an elicitation methodology that aids users in overcoming biases from existing interaction modalities; (2) a better understanding of the gestural features that matter, i.e. those that capture the intent of the gestures; and (3) generalizable design principles for walk-up-and-use public displays.

    An Examination into the Putative Mechanisms Underlying Human Sensorimotor Learning and Decision Making

    Sensorimotor learning can be defined as a process by which an organism benefits from its experience, such that its future behaviour is better adapted to its environment. Humans are sensorimotor learners par excellence, and neurologically intact adults possess an incredible repertoire of skilled behaviours. Nevertheless, despite the topic fascinating scientists for centuries, there remains a lack of understanding about how humans truly learn. There is a need to better understand sensorimotor learning mechanisms in order to develop treatments for individuals with movement problems, improve training regimes (e.g. surgery) and accelerate motor learning in tasks such as handwriting in children and stroke rehabilitation. This thesis set out to improve our understanding of sensorimotor learning processes and to develop methodologies and tools that enable other scientists to tackle these research questions using the power of recent developments in computer science (particularly immersive technologies). Errors in sensorimotor learning are the specific focus of the experimental chapters of this thesis, where the goal is to advance our understanding of error perception and correction in motor learning and to provide a computational understanding of how we process different types of error to inform subsequent behaviour. A brief summary of the approaches employed and the tools developed over the course of this thesis is presented below. Chapter 1 of this thesis provides a concise overview of the literature on human sensorimotor learning. It introduces the concept of internal models of human interactions with the environment, constructed and refined by the brain in the learning process. Highlighted in this chapter are potential mechanisms for promoting learning (e.g. error augmentation, motor variability) and outstanding challenges for the field (e.g. redundancy, credit assignment). In Chapter 2, a computational model based on information acquisition is developed.
The model suggests that disruptive forces applied to human movements during training could improve learning because they allow the learner to sample more information from their environment. Chapter 3 investigates whether sensorimotor learning can be accelerated through forcing participants to explore (and thus acquire more information) a novel workspace. The results imply that exploration may be a necessary component of learning but manipulating it in this way is not sufficient to accelerate learning. This work serves to highlight the critical role of error correction in learning. The process of conducting the experimental work in Chapters 2 and 3 highlighted the need for an application programme interface that would allow researchers to rapidly deploy experiments that allow one to examine learning in a controlled but ecologically relevant manner. Virtual reality systems (that measure human interactions with computer generated worlds) provide a powerful tool for exploring sensorimotor learning and their use in the study of human behaviour is now more feasible due to recent technological advances. To this end, Chapter 4 reports the development of the Unity Experiment Framework - a new tool to assist in the development of virtual reality experiments in the Unity game engine. Chapter 5 builds on the findings from Chapters 2 & 3 on learning by addressing the specific contributions of visual error. It utilises the Unity Experiment Framework to explore whether visually increasing the error signal in a novel aiming task can accelerate motor learning. A novel aiming task is developed which requires participants to learn the mapping between rotations of the handheld virtual reality controllers and the movement of a cursor in Cartesian space. 
The results show that the visual disturbance does not accelerate the learning of skilled movements, implying a crucial role for mechanical forces, or physical error correction, which is consistent with the findings reported in Chapter 2. Uncontrolled manifold analysis provides insight into how the variability in selected solutions related to learning and performance, as the task deliberately allowed a variety of solutions from a redundant parameter space. Chapter 6 extends the scope of this thesis by examining how error information from the sensorimotor system influences higher order action selection processes. Chapter 5 highlighted the loose definition of “error” in sensorimotor learning and here, the goal was to advance our understanding of error learning by discriminating between different sources of error to better understand their contributions to future behaviour. This issue is illustrated through the example of a tennis player who, on a given point, has the options of selecting a backhand or forehand shot available to her. If the shot is ineffective (and produces an error signal), to optimise future behaviour, the brain needs to rapidly determine whether the error was due to poor shot selection, or whether the correct shot was selected but just poorly executed. To examine these questions, a novel ‘action bandit’ task was developed where participants made reaching movements towards targets, with each target having distinct probabilities of execution and selection error. The results revealed a significant selection bias towards a target that produced a higher frequency of execution errors (rather than a target associated with more selection error) despite no difference in expected value. This behaviour may be explained by a gating mechanism, where learning from the lack of reward is discounted following sensorimotor errors. 
However, execution errors also increase uncertainty about the appropriateness of a selected choice, and the need to reduce uncertainty could equally account for these results. Subsequent experiments test these competing hypotheses and show that this putative gating mechanism can be dynamically regulated through the coupling of selection and execution errors. Development of models of these processes highlighted the dynamics of the mechanisms that drive the behaviour. In Chapter 7, the motor component of the task was removed to examine whether this effect is not unique to execution errors but is instead a feature of any two-stage decision-making process with multiple error types that are presumed to be dissociated. These observations highlight the complex role error plays in learning and suggest that the credit assignment process is guided and modulated by internal models of the task at hand. Finally, Chapter 8 closes this thesis with a summary of the key findings arising from this work in the context of the literature on motor learning and decision making. It is noted here that this thesis sought to cover two broad research topics, motor learning and decision making, that have, until recently, been studied by separate groups of researchers, with very little overlap in literature. A key goal of this programme of research was to contribute towards bringing together these hitherto disparate fields by focussing on breadth to establish common ground. As the experimental work developed, it became clear that the processing of error required a multi-pronged approach. Within each experimental chapter, the focus on error was accordingly narrowed and definitions refined. This culminated in developing and testing how individuals discriminate between errors in the sensorimotor and cognitive domains, thus presenting a framework for understanding how motor learning and decision making interact.
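The "action bandit" design described above hinges on matching targets for expected value while varying which error type produces the failures. The toy simulation below illustrates that matching; the specific error rates and trial counts are assumptions for illustration, not the thesis's parameters.

```python
import random

random.seed(1)

def reward_rate(p_exec_err, p_sel_err, trials=10000):
    """Fraction of trials rewarded for a target with the given error rates."""
    rewards = 0
    for _ in range(trials):
        if random.random() < p_exec_err:   # movement failed to hit the target
            continue
        if random.random() < p_sel_err:    # target was the wrong choice this trial
            continue
        rewards += 1
    return rewards / trials

# Two targets matched for expected value (~0.56 reward rate each), but with
# failures coming mainly from execution error vs. mainly from selection error
high_exec_target = reward_rate(p_exec_err=0.40, p_sel_err=0.0667)
high_sel_target = reward_rate(p_exec_err=0.0667, p_sel_err=0.40)
print(high_exec_target, high_sel_target)
```

Because (1 - 0.40)(1 - 0.0667) equals (1 - 0.0667)(1 - 0.40), any preference between the two targets cannot be explained by expected value alone, which is what lets the design isolate how each error type is credited.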

    Continuous touchscreen biometrics: authentication and privacy concerns

    In the age of instant communication, smartphones have become an integral part of our daily lives, with a significant portion of the population using them for a variety of tasks such as messaging, banking, and even recording sensitive health information. However, the increasing reliance on smartphones has also made them a prime target for cybercriminals, who can use various tactics to gain access to our sensitive data. In light of this, it is crucial that individuals and organisations prioritise the security of their smartphones to protect against the abundance of threats around us. While there are dozens of methods to verify the identity of users before granting them access to a device, many of them lack effectiveness in terms of usability and potential vulnerabilities. In this thesis, we aim to advance the field of touchscreen biometrics which promises to alleviate some of the recurring issues. This area of research deals with the use of touch interactions, such as gestures and finger movements, as a means of identifying or authenticating individuals. First, we provide a detailed explanation of the common procedure for evaluating touch-based authentication systems and examine the potential pitfalls and concerns that can arise during this process. The impact of the pitfalls is evaluated and quantified on a newly collected large-scale dataset. We also discuss the prevalence of these issues in the related literature and provide recommendations for best practices when developing continuous touch-based authentication systems. Then we provide a comprehensive overview of the techniques that are commonly used for modelling touch-based authentication, including the various features, classifiers, and aggregation methods that are employed in this field. We compare the approaches under controlled, fair conditions in order to determine the top-performing techniques. Based on our findings, we introduce methods that outperform the current state-of-the-art. 
Finally, as a conclusion to our advancements in the development of touchscreen authentication technology, we explore any negative effects our work may have on an ordinary user of mobile websites and applications. In particular, we look into threats that can affect the privacy of the user, such as tracking them and revealing their personal information based on their behaviour on smartphones.
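Touch-based authentication systems of the kind evaluated above are commonly summarised by an equal error rate (EER): the operating point where the false accept rate equals the false reject rate. The sketch below computes an approximate EER from synthetic genuine and impostor similarity scores; the score distributions are assumptions for illustration, not data from the thesis.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Approximate EER: smallest max(FAR, FRR) over candidate thresholds."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostor samples accepted
        frr = np.mean(genuine < t)     # genuine samples rejected
        best = min(best, max(far, frr))
    return best

rng = np.random.default_rng(42)
genuine = rng.normal(0.7, 0.1, 500)    # similarity scores for the enrolled user
impostor = rng.normal(0.4, 0.1, 500)   # scores for other users' touch samples
print(f"EER = {equal_error_rate(genuine, impostor):.3f}")
```

A lower EER indicates better separation between genuine and impostor behaviour; continuous systems typically also aggregate scores over several consecutive touch events before deciding.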

    A case report of COVID-19 monitoring in the Austrian professional football league

    Since the beginning of the COVID-19 pandemic, many contact sport teams have faced major challenges in safely continuing training and competition. We present the design and implementation of a structured monitoring concept for the Austrian national football league. 146 professional players from five clubs of the professional Austrian football league were monitored for a period of 12 weeks. Subjective health parameters, PCR test results and data obtained from a geo-tracking app were collected. Simulations modelling the consequences of a COVID-19 case with increasing reproduction numbers were computed. No COVID-19 infection occurred in the players during the observation period. Infections in the players' nearer surroundings led to an increased perceived risk of infection. Geo-tracking was particularly hindered by technical problems and the reluctance of users. Simulation models suggested that a hypothetical COVID-19 case could lead to a shut-down of all training and competition activities. A structured monitoring concept can help to continue contact sports safely in times of a pandemic. Cooperation of all involved is essential. Trial registration: ID: DRKS00022166, 15/6/2020, https://www.who.int/ictrp/search/en/
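Simulations of "a COVID-19 case with increasing reproduction number" can be sketched, under the simplifying assumption of a Poisson branching process; this is an illustrative stand-in, not necessarily the model the authors used.

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_outbreak(r0, generations=4, trials=5000):
    """Average total infections from one index case, with each case
    infecting Poisson(r0) others per generation (simple branching process)."""
    totals = np.zeros(trials)
    for t in range(trials):
        cases, total = 1, 1
        for _ in range(generations):
            cases = rng.poisson(r0, cases).sum() if cases else 0
            total += cases
        totals[t] = total
    return totals.mean()

for r0 in (0.8, 1.5, 2.5):
    print(f"R0={r0}: mean total cases over 4 generations = {mean_outbreak(r0):.1f}")
```

Even this toy model makes the policy tension visible: below R0 = 1 outbreaks fizzle out, while modestly larger reproduction numbers quickly produce squad-wide case counts, which is what motivates pre-emptive shut-down scenarios.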

    USER AUTHENTICATION ACROSS DEVICES, MODALITIES AND REPRESENTATION: BEHAVIORAL BIOMETRIC METHODS

    Biometrics eliminate the need for a person to remember and reproduce complex secretive information or carry additional hardware in order to authenticate oneself. Behavioral biometrics is a branch of biometrics that focuses on using a person’s behavior, or way of doing a task, as a means of authentication. These tasks can be any common, day-to-day tasks like walking, sleeping, talking, typing and so on. As interactions with computers and other smart devices like phones and tablets have become an essential part of modern life, a person’s style of interaction with them can be used as a powerful means of behavioral biometrics. In this dissertation, we present insights from the analysis of our proposed set of context-sensitive or word-specific keystroke features on desktop, tablet and phone. We show that the conventional features are not highly discriminatory on desktops and are only marginally better on hand-held devices for user identification. By using information about the context, our proposed word-specific features offer superior discrimination among users on all devices. Classifiers built using our proposed features perform user identification with high accuracies in the range of 90% to 97%, with average precision and recall values of 0.914 and 0.901 respectively. Analysis of the word-based impact factors reveals that four- or five-character words, words with about 50% vowels, and words ranked higher on frequency lists might give better results for the extraction and use of the proposed features for user identification. We also examine a large umbrella of behavioral biometric data, such as keystroke latencies, gait and swipe data on desktop, phone and tablet, for the assumption of an underlying normal distribution, which is common in many research works. Using suitable normality tests (the Lilliefors test and the Shapiro-Wilk test) we show that a majority of the features, from all activities and all devices, do not follow a normal distribution.
In most cases fewer than 25% of the samples that were tested had p-values > 0.05. We discuss alternate solutions to address the non-normality in behavioral biometric data. Openly available datasets did not provide the wide range of modalities and activities required for our research. Therefore, we have collected and shared an open access, large benchmark dataset for behavioral biometrics on IEEE DataPort. We describe the collection and analysis of our Syracuse University and Assured Information Security - Behavioral Biometrics Multi-device and multi-Activity data from Same users (SU-AIS BB-MAS) Dataset, an open access dataset on IEEE DataPort with data from 117 subjects for typing (both fixed and free text), gait (walking, upstairs and downstairs) and touch on desktop, tablet and phone. The dataset consists of a total of about 3.5 million keystroke events; 57.1 million data-points each for the accelerometer and gyroscope; and 1.7 million data-points for swipes, and is listed as one of the most popular datasets on the portal (through IEEE emails to all members on 05/13/2020 and 07/21/2020). We also show that keystroke dynamics (KD) on a desktop can be used to classify the type of activity, either benign or adversarial, that a text sample originates from. We show the inefficiencies of popular temporal features for this task. With our proposed set of 14 features we achieve high accuracies (93% to 97%) and low Type 1 and Type 2 errors (3% to 8%) in classifying text samples of different sizes. We also present exploratory research in (a) authenticating users through musical notes generated by mapping their keystroke latencies to music and (b) authenticating users through the relationship between their keystroke latencies on multiple devices.
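Normality checks like those described above are straightforward to reproduce. The sketch below applies the Shapiro-Wilk test to a synthetic latency-like feature; the log-normal shape is an assumption standing in for real keystroke hold times, which are typically right-skewed.

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(7)
# Hypothetical keystroke hold-time feature (seconds): latency data is often
# right-skewed, so a log-normal sample stands in for it here
hold_times = rng.lognormal(mean=-2.0, sigma=0.5, size=200)

stat, p = shapiro(hold_times)
print(f"Shapiro-Wilk W = {stat:.3f}, p = {p:.4f}")
if p <= 0.05:
    print("Normality rejected: prefer nonparametric methods or transform the data")
```

When normality is rejected, common remedies include rank-based statistics, distribution-free classifiers, or a log transform of the latency features before modelling.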

    Touch-screen Behavioural Biometrics on Mobile Devices

    Robust user verification on mobile devices is one of the top priorities globally from a financial security and privacy viewpoint, and has led to biometric verification complementing or replacing PIN and password methods. Research has shown that behavioural biometric methods, with their promise of improved security due to their inimitable nature and the lure of unintrusive, implicit, continuous verification, could define the future of privacy and cyber security in an increasingly mobile world. Considering the real-life nature of problems relating to mobility, this study aims to determine the impact of user interaction factors that affect verification performance and usability for behavioural biometric modalities on mobile devices. Building on existing work on biometric performance assessment, it asks: to what extent does biometric performance remain stable when faced with movement, changes of environment, the passage of time and other device-related factors that influence the usage of mobile devices in real-life applications? Further, it seeks to answer: what could further improve the performance of behavioural biometric modalities? Based on a review of the literature, a series of experiments was executed to collect a dataset of touch-dynamics-based behavioural data mirroring various real-life usage scenarios of a mobile device. Responses were analysed using various uni-modal and multi-modal frameworks. The analysis demonstrated that existing verification methods using the touch modalities of swipes, signatures and keystroke dynamics adapt poorly when faced with a variety of usage scenarios and have challenges related to time persistence. The results indicate that a multi-modal solution has a positive impact on verification performance. On this basis, it is recommended to explore alternatives in the form of dynamic, variable thresholds and smarter template selection strategies, which hold promise.
We believe that the evaluation results presented in this thesis will streamline the development of future solutions for improving the security of behavioural-based modalities in mobile biometrics.
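The positive impact of multi-modal fusion reported above can be illustrated with a simple mean-rule, score-level fusion of two synthetic modalities; the score distributions and the fusion rule are assumptions for illustration, not the thesis's method. Separability is summarised here by the AUC: the probability that a genuine score outranks an impostor score.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 500
# Hypothetical per-sample similarity scores from two behavioural modalities
swipe_gen, swipe_imp = rng.normal(0.65, 0.15, n), rng.normal(0.45, 0.15, n)
keys_gen, keys_imp = rng.normal(0.60, 0.15, n), rng.normal(0.40, 0.15, n)

def auc(gen, imp):
    """Probability that a genuine score outranks an impostor score."""
    return float(np.mean(gen[:, None] > imp[None, :]))

# Mean-rule score-level fusion of the two modalities
fused_gen, fused_imp = (swipe_gen + keys_gen) / 2, (swipe_imp + keys_imp) / 2

for name, g, i in [("swipe", swipe_gen, swipe_imp),
                   ("keystroke", keys_gen, keys_imp),
                   ("fused", fused_gen, fused_imp)]:
    print(f"{name}: AUC = {auc(g, i):.3f}")
```

Averaging roughly independent scores shrinks the noise in each class without shrinking the gap between classes, which is why even this naive fusion rule separates genuine from impostor samples better than either modality alone.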

    Machine learning techniques for implicit interaction using mobile sensors

    Interactions on mobile devices normally happen in an explicit manner, which means that they are initiated by the users. Yet users are typically unaware that they also interact implicitly with their devices. For instance, our hand pose changes naturally when we type text messages. Whilst the touchscreen captures finger touches, the hand movements during this interaction go unused. If this implicit hand movement is observed, it can be used as additional information to support or enhance the users’ text entry experience. This thesis investigates how implicit sensing can be used to improve the quality of existing, standard interaction techniques. In particular, this thesis looks into enhancing front-of-device interaction through implicit sensing of back-of-device and hand movement. We approach this investigation through machine learning techniques. We examine how sensor data obtained via implicit sensing can be used to predict a certain aspect of an interaction. For instance, one of the questions this thesis attempts to answer is whether hand movement during a touch targeting task correlates with the touch position. This is a complex relationship to understand, but it can be best explained through machine learning. Using machine learning as a tool, such correlation can be measured, quantified, understood and used to make predictions about future touch positions. Furthermore, this thesis also evaluates the predictive power of the sensor data. We show this through a number of studies. In Chapter 5 we show that probabilistic modelling of sensor inputs and recorded touch locations can be used to predict the general area of future touches on the touchscreen. In Chapter 7, using SVM classifiers, we show that data from implicit sensing during general mobile interactions is user-specific, which can be used to identify users implicitly. In Chapter 6, we also show that touch interaction errors can be detected from sensor data.
In our experiment, we show that there are sufficiently distinguishable patterns between normal interaction signals and signals that are strongly correlated with interaction error. In all studies, we show that a performance gain can be achieved by combining sensor inputs.
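The Chapter 7 idea, that implicit sensor data is user-specific enough for an SVM to identify users, can be sketched as follows. The data here is synthetic (three users, each given a distinct sensor signature), so the high accuracy is a property of this toy setup, not a reproduction of the thesis's results.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Synthetic "implicit sensing" features: 3 users, each with a distinct
# accelerometer/gyroscope signature centred on a different mean
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(100, 4))
               for m in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 100)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"User identification accuracy: {clf.score(X_te, y_te):.2f}")
```

With real sensor streams the features would come from windows of accelerometer and gyroscope readings, and the separation between users would be far less clean than in this constructed example.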

    QUALITATIVE CONSIDERATIONS FOR KINAESTHETICS IN HCI

    With recent technological developments in motion capture there is an opportunity to redefine the physical interactions we have with products, considering human needs in movement at the forefront rather than subservient to the machine. This paper reports on an exploration of emotional reaction to gestural interface design using Laban's Movement Analysis from the field of dance and drama. After outlining the current status of gesture-controlled user interfaces and why the use of Laban is appropriate to help understand the effects of movement, the results of a workshop on new interface design are presented. Teams were asked to re-imagine a number of product experiences that utilised appropriate Laban effort actions and to prototype and present these to the group. Several categories of devices, including direct manipulation, remote control and gesture recognition, were identified. In aligning appropriate movements to device functionality, utilising culture and analogy and, where necessary, increasing complexity, the interfaces embody a number of gestural interface design concepts.