135 research outputs found

    Improving the Security of Mobile Devices Through Multi-Dimensional and Analog Authentication

    Mobile devices are ubiquitous in today's society, and the use of these devices for secure tasks like corporate email, banking, and stock trading grows by the day. The first, and often only, defense against attackers who gain physical access to a device is the lock screen: the authentication task required to gain access to the device. To date, mobile devices have languished under insecure authentication schemes like PINs, Pattern Unlock, and biometrics, or slow schemes like alphanumeric passwords. This work addresses the design and creation of five proof-of-concept authentication schemes that seek to increase the security of mobile authentication without compromising memorability or usability. These proof-of-concept schemes demonstrate Multi-Dimensional Authentication, a method of combining data from unrelated dimensions of information, and Analog Authentication, a method utilizing continuous rather than discrete information. Security analysis will show that these schemes can be designed to exceed the security strength of alphanumeric passwords, resist shoulder-surfing in all but the worst-case scenarios, and offer significantly fewer hotspots than existing approaches. Usability analysis, including data collected from user studies of each of the five schemes, will show promising entry times, in some cases on par with existing PIN or Pattern Unlock approaches, and qualitative ratings comparable to existing approaches. Memorability results will demonstrate that the psychological advantages utilized by these schemes can lead to real-world improvements in recall, in some instances leading to near-perfect recall after two weeks, significantly exceeding the recall rates of similarly secure alphanumeric passwords.
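    The "security strength" comparison above rests on theoretical keyspace entropy. As a minimal sketch (the scheme parameters below are illustrative assumptions, not the thesis's actual schemes), the gap between a PIN and an alphanumeric password can be quantified like this:

```python
import math

def password_space_bits(alphabet_size: int, length: int) -> float:
    """Theoretical entropy in bits of a uniformly random secret:
    log2(alphabet_size ** length) = length * log2(alphabet_size)."""
    return length * math.log2(alphabet_size)

# Illustrative comparison (parameters are assumptions for this sketch):
pin_bits = password_space_bits(10, 4)     # 4-digit PIN
alnum_bits = password_space_bits(62, 8)   # 8-char mixed-case alphanumeric
print(f"4-digit PIN:         {pin_bits:.1f} bits")
print(f"8-char alphanumeric: {alnum_bits:.1f} bits")
```

    Any scheme claiming to "exceed the security strength of alphanumeric passwords" must offer a keyspace larger than the second figure, which is why multi-dimensional and analog inputs are attractive: each added dimension multiplies the space.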

    Evaluation of the Accessibility of Touchscreens for Individuals who are Blind or have Low Vision: Where to go from here

    Touchscreen devices are well integrated into daily life and can be found in both personal and public spaces, but the inclusion of accessible features and interfaces continues to lag behind technology’s exponential advancement. This thesis aims to explore the experiences of individuals who are blind or have low vision (BLV) while interacting with non-tactile touchscreens, such as smartphones, tablets, smartwatches, coffee machines, smart home devices, kiosks, ATMs, and more. The goal of this research is to create a set of recommended guidelines that can be used in designing and developing either personal devices or shared public technologies with accessible touchscreens. This study consists of three phases: first, an exploration of existing research related to the accessibility of non-tactile touchscreens; next, semi-structured interviews of 20 BLV individuals to address accessibility gaps in previous work; and finally, a survey to better understand the experiences, thoughts, and barriers of BLV individuals while interacting with touchscreen devices. Common themes found include loss of independence, lack or uncertainty of accessibility features, and the need and desire for improvements. Common approaches to interaction were the use of high markings, asking for sighted assistance, and avoiding touchscreen devices. These findings were used to create a set of recommended guidelines, which include a universal feature setup, the setup of accessibility settings, a universal headphone jack position, tactile feedback, an ask-for-help button, situational lighting, and the consideration of time.

    A survey on touch dynamics authentication in mobile devices

    © 2016 Elsevier Ltd. All rights reserved. Research on keystroke dynamics biometrics on physical keyboards (desktop computers or conventional mobile phones) has been undertaken over the past three decades. However, on touch dynamics biometrics on virtual keyboards (modern touchscreen mobile devices), little work has been published. In particular, there is a lack of an extensive survey and evaluation of the methodologies adopted in the area. Owing to the widespread use of touchscreen mobile devices, it is necessary to examine these techniques and their effectiveness in the domain of touch dynamics biometrics. The aim of this paper is to provide insights and a comparative analysis of the current state of the art in the topic area, including data acquisition protocols, feature data representations, decision making techniques, as well as experimental settings and evaluations. With such a survey, we can gain a better understanding of the current state of the art, thus identifying challenging issues and knowledge gaps for further research.
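    Two of the most common feature representations in this literature are dwell time (how long a key is held) and flight time (the latency between releasing one key and pressing the next). A minimal sketch, with hypothetical timestamps rather than real touch logs:

```python
def touch_features(events):
    """Compute basic touch-dynamics features from (key, down_t, up_t) tuples:
    dwell time  = up - down for each key press;
    flight time = next key's down - current key's up."""
    dwells = [up - down for _, down, up in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwells, flights

# Hypothetical timestamps (seconds) for typing "abc" on a virtual keyboard
events = [("a", 0.00, 0.12), ("b", 0.30, 0.41), ("c", 0.62, 0.75)]
dwells, flights = touch_features(events)
print([round(d, 2) for d in dwells])   # per-key hold durations
print([round(f, 2) for f in flights])  # inter-key latencies
```

    A verification system would compare such feature vectors against an enrolled template; touchscreen devices additionally expose pressure and contact area, which the surveyed systems often fold into the same vector.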

    An Investigation of Power Saving and Privacy Protection on Smartphones

    With the advancements in mobile technology, smartphones have become ubiquitous in people's daily lives and have greatly facilitated users in many aspects. For a smartphone user, power saving and privacy protection are two important issues that draw serious attention from research communities. In this dissertation, we present our studies on specific issues of power saving and privacy protection on smartphones. Although the IEEE 802.11 standards provide Power Save Mode (PSM) to help mobile devices conserve energy, PSM fails to bring the expected benefits in many real scenarios. We define an energy conserving model to describe the general PSM traffic contention problem, and propose a solution called HPSM to address one specific case, in which multiple PSM clients associate to a single AP. In HPSM, we first use a basic sociological concept to define the richness of a PSM client based on the link resource it consumes. We then separate poor PSM clients from rich PSM clients in terms of link resource consumption, and favor the former to save power when they face PSM transmission contention. Our evaluations show that HPSM can help the poor PSM clients effectively save power while only slightly degrading the rich clients' performance in comparison to existing PSM solutions. Traditional user authentication methods using passcodes or finger movement on smartphones are vulnerable to shoulder surfing, smudge, and keylogger attacks. These attacks are able to infer a passcode from collected information about the user's finger movement or tapping input. As an alternative user authentication approach, eye tracking can effectively reduce the risk of these attacks because no hand input is required. We propose a new eye tracking method for user authentication on a smartphone. It utilizes the smartphone's front camera to capture a user's eye movement trajectories, which are used as the input for user authentication.
No special hardware or calibration process is needed. We develop a prototype and evaluate its effectiveness on an Android smartphone. Our evaluation results show that the proposed eye tracking technique achieves very high accuracy in user authentication. While location-based service (LBS) apps facilitate users in many application scenarios, they raise concerns about the breach of privacy related to location access. We perform the first measurement of this background action on the Google app market. Our investigation demonstrates that many popular apps conduct location access in the background within short intervals. This enables these apps to collect a user's location trace, from which important personal information, Points of Interest (PoIs), can be recognized. We further extract a user's movement pattern from the PoIs, and utilize it to measure the potential privacy breach. The measurement results also show that combining movement pattern related metrics with other PoI related metrics can help detect a privacy breach earlier than using either alone. We then propose a preliminary solution to properly handle these location requests from the background.
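    The measurement of background location access hinges on detecting apps whose consecutive requests arrive within short intervals. A minimal sketch of that check (the 60-second threshold and the log format are assumptions for illustration, not values from the dissertation):

```python
def flag_frequent_background_access(timestamps, min_interval=60.0):
    """Flag an app whose consecutive background location requests are ever
    closer together than min_interval seconds (threshold is an assumption)."""
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return any(g < min_interval for g in gaps)

# Hypothetical per-app request logs (seconds since app start)
print(flag_frequent_background_access([0, 30, 65, 100]))  # frequent: 30 s gaps
print(flag_frequent_background_access([0, 300, 900]))     # sparse requests
```

    Apps flagged this way can reconstruct a location trace dense enough for PoI extraction, which is why the interval between requests, not just the number of requests, matters for the privacy analysis.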

    Context-aware gestural interaction in the smart environments of the ubiquitous computing era

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Technology is becoming pervasive and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have started to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interactions. Many issues remain open in this emerging domain and, in particular, there is a lack of common guidelines for coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of context information. The framework is conceived to support a novel approach using functional gestures to reduce gesture ambiguity, reduce the number of gestures in taxonomies, and improve usability. To validate this framework, a proof-of-concept has been developed. A prototype has been built by implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests have been conducted to assess the gesture recognition accuracy and the usability of the interfaces developed following the proposed framework. The results show that the method provides optimal gesture recognition from very different viewpoints, while the usability tests have yielded high scores. Further investigation of the context information has been performed, tackling the problem of user status. User status is understood here as human activity, and a technique based on an innovative application of electromyography is proposed. The tests show that the proposed technique has achieved good activity recognition accuracy. The context is also treated as system status. In ubiquitous computing, the system can adopt different paradigms: wearable, environmental, and pervasive.
A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
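    Activity recognition from electromyography commonly starts from windowed amplitude features such as the root-mean-square (RMS) of the signal. The sketch below is a generic illustration of that idea, not the thesis's method; the window size and threshold are assumptions:

```python
import math

def window_rms(signal, window):
    """Root-mean-square amplitude over non-overlapping windows,
    a common EMG activity feature."""
    out = []
    for i in range(0, len(signal) - window + 1, window):
        chunk = signal[i:i + window]
        out.append(math.sqrt(sum(x * x for x in chunk) / window))
    return out

def classify(rms_values, threshold=0.5):
    """Naive per-window rest/active labels; the threshold is an assumption."""
    return ["active" if r > threshold else "rest" for r in rms_values]

# Synthetic EMG: quiet baseline followed by a burst of muscle activity
sig = [0.05, -0.04, 0.06, -0.05, 0.9, -1.1, 1.0, -0.8]
print(classify(window_rms(sig, 4)))  # → ['rest', 'active']
```

    Real systems replace the fixed threshold with a trained classifier over several such features, but the windowed-feature pipeline is the same.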

    Assisting Navigation and Object Selection with Vibrotactile Cues

    Our lives have been drastically altered by information technology in recent decades, leading to evolutionary mismatches between human traits and the modern environment. One particular mismatch occurs when visually demanding information technology overloads the perceptual, cognitive, or motor capabilities of the human nervous system. This information overload could be partly alleviated by complementing visual interaction with haptics. The primary aim of this thesis was to investigate how to assist movement control with vibrotactile cues. Vibrotactile cues refer to technology-mediated vibrotactile signals that notify users of perceptual events, propose that users make decisions, and give users feedback from actions. To explore vibrotactile cues, we carried out five experiments in two contexts of movement control: navigation and object selection. The goal was to find ways to reduce information load in these tasks, thus helping users to accomplish them more effectively. We employed measurements such as reaction times, error rates, and task completion times. We also used subjective rating scales, short interviews, and free-form participant comments to assess the vibrotactile-assisted interactive systems. The findings of this thesis can be summarized as follows. First, if the context of movement control allows the use of both feedback and feedforward cues, feedback cues are a reasonable first option. Second, when using vibrotactile feedforward cues, using low-level abstractions and supporting the interaction with other modalities can keep the information load as low as possible. Third, the temple area is a feasible actuation location for vibrotactile cues in movement control, including navigation cues and object selection cues with head turns. However, the usability of the area depends on contextual factors such as spatial congruency, the actuation device, and the pace of the interaction task.

    User experience guidelines for mobile natural user interfaces: a case study of physically disabled users

    Motor impaired people are faced with many challenges, one being the lack of integration into certain spheres of society. Access to information is seen as a major issue for the motor impaired, since most forms of interaction or interactive devices are not suited to their needs. People with motor impairments, like the rest of the population, are increasingly using mobile phones. As a result of the current devices and methods used for interaction with content on mobile phones, various factors prohibit a pleasant experience for users with motor impairments. To counter these factors, this study recognizes the need to implement better suited methods of interaction and navigation to improve accessibility, usability, and user experience for motor impaired users. The objective of the study was to gain an understanding of the nature of motor impairments and the challenges that this group of people face when using mobile phones. Once this was determined, a solution to address this problem was found in the form of natural user interfaces (NUIs). To gain a better understanding of this technology, various forms of NUIs and their benefits were studied by the researcher to determine how this technology can be implemented to meet the needs of motor impaired people. To test this theory, the Samsung Galaxy S5 was selected as the NUI device for the study. It must be noted that this study started in 2013, and the Galaxy S5 was the latest device claiming to improve interaction for disabled people at the time. This device was used in a case study that made use of various data collection methods, including participant interviews. Various motor impaired participants were requested to perform predefined tasks on the device, along with completing a set of user experience questionnaires.
Based on the results of the study, it was found that interaction with mobile phones is an issue for people with motor impairments and that alternative methods of interaction need to be implemented. These results contributed to the final output of this study, namely a set of user experience guidelines for the design of mobile human-computer interaction for motor impaired users.

    Improving Mobile MOOC Learning via Implicit Physiological Signal Sensing

    Massive Open Online Courses (MOOCs) have become a promising solution for delivering high-quality education on a large scale at low cost in recent years. Despite this great potential, today’s MOOCs also suffer from challenges such as low student engagement, lack of personalization, and, most importantly, the lack of direct, immediate feedback channels from students to instructors. This dissertation explores the use of physiological signals implicitly collected via a "sensorless" approach as a rich feedback channel to understand, model, and improve learning in mobile MOOC contexts. I first demonstrate AttentiveLearner, a mobile MOOC system which captures learners' physiological signals implicitly during learning on unmodified mobile phones. AttentiveLearner uses on-lens finger gestures for video control and monitors learners’ photoplethysmography (PPG) signals based on the fingertip transparency change captured by the back camera. Through a series of usability studies and follow-up analyses, I show that the tangible video control interface of AttentiveLearner is intuitive to use and easy to operate, and that the PPG signals implicitly captured by AttentiveLearner can be used to infer both learners’ cognitive states (boredom and confusion levels) and divided attention (multitasking and external auditory distractions). Building on top of AttentiveLearner, I design, implement, and evaluate a novel intervention technology, Context and Cognitive State triggered Feed-Forward (C2F2), which infers and responds to learners’ boredom and disengagement events in real time via a combination of PPG-based cognitive state inference and learning topic importance monitoring. C2F2 proactively reminds a student of important upcoming content (feed-forward interventions) when disengagement is detected.
A 48-participant user study shows that C2F2 on average improves learning gains by 20.2% compared with a non-interactive baseline system, and is especially effective for bottom performers (improving their learning gains by 41.6%). Finally, to gain a holistic understanding of the dynamics of MOOC learning, I investigate the temporal dynamics of affective states of MOOC learners in a 22-participant study. Through both a quantitative analysis of the temporal transitions of affective states and a qualitative analysis of subjective feedback, I investigate differences between mobile MOOC learning and complex learning activities in terms of affect dynamics, and discuss pedagogical implications in detail.
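    The camera-based PPG idea above reduces to a simple signal-processing pipeline: each video frame of the covered lens is collapsed to a single brightness value, and the periodic blood-volume modulation of that trace reflects the pulse. A minimal sketch with synthetic frames (the frame format and peak-counting heuristic are assumptions, far cruder than AttentiveLearner's actual processing):

```python
def ppg_from_frames(frames):
    """Reduce each camera frame (2-D list of pixel intensities) to its mean
    brightness; fingertip blood volume changes modulate this over time."""
    return [sum(map(sum, f)) / (len(f) * len(f[0])) for f in frames]

def count_peaks(signal):
    """Count local maxima - a crude proxy for heartbeats in a PPG trace."""
    return sum(
        1 for i in range(1, len(signal) - 1)
        if signal[i - 1] < signal[i] > signal[i + 1]
    )

# Synthetic 2x2 frames whose brightness oscillates like a pulse waveform
frames = [[[v, v], [v, v]] for v in [10, 14, 11, 9, 13, 10, 8, 12, 9]]
trace = ppg_from_frames(frames)
print(count_peaks(trace))  # → 3
```

    From such a trace, features like heart rate and heart rate variability can be derived, which is the raw material for the boredom and divided-attention inference described above.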