158 research outputs found

    Handedness detection via the smartphone's orientation sensors when picking up the device

    People often switch hands while holding their phones, based on task and context. Ideally, we would be able to detect which hand they are using to hold the device, and use this information to optimize the interaction. We introduce a method that uses built-in orientation sensors to detect which hand is holding a smartphone prior to the first interaction. Based on logs of people picking up and unlocking a smartphone in a controlled study, we show that a dynamic time warping approach trained with user-specific examples achieves 83.6% accuracy in determining which hand is holding the phone, prior to touching the screen.
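
A minimal sketch of the kind of dynamic time warping classification described above, assuming each pickup is logged as a short sequence of orientation samples (e.g. pitch and roll per frame) and matched against a user's own labelled examples; the function names and feature layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of orientation samples.

    a, b: arrays of shape (length, dims), e.g. (pitch, roll) per frame.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify_hand(pickup, templates):
    """1-nearest-neighbour decision over user-specific templates.

    templates: list of (sequence, label) pairs, with label 'left' or 'right'.
    """
    return min(templates, key=lambda t: dtw_distance(pickup, t[0]))[1]
```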

    Sensor-based user interface concepts for continuous, around-device and gestural interaction on mobile devices

    A generally observable trend of the past 10 years is that the number of sensors embedded in mobile devices such as smartphones and tablets is rising steadily. Arguably, the available sensors are mostly underutilized by existing mobile user interfaces. In this dissertation, we explore sensor-based user interface concepts for mobile devices, with the goal of making better use of the available sensing capabilities on mobile devices as well as gaining insights into the types of sensor technologies that could be added to future mobile devices. We are particularly interested in how novel sensor technologies could be used to implement engaging mobile user interface concepts. We explore three areas of interest for research into sensor-based user interface concepts for mobile devices: continuous interaction, around-device interaction, and motion gestures. For continuous interaction, we explore the use of dynamic state-space systems to implement user interfaces based on a constant sensor data stream. In particular, we examine zoom automation in tilt-based map scrolling interfaces. We show that although fully automatic zooming is desirable in certain situations, adding a manual override of the zoom level (Semi-Automatic Zooming) increases the usability of such a system, as shown by a decrease in task completion times and improved ratings in a user study. The presented work on continuous interaction also highlights how the sensors embedded in current mobile devices can be used to support complex interaction tasks. We go on to introduce the concept of Around-Device Interaction (ADI). By extending the interactive area of the mobile device to its entire surface and the physical volume surrounding it, we aim to show how the expressivity and possibilities of mobile input can be improved. We derive a design space for ADI and evaluate three prototypes in this context. HoverFlow is a prototype allowing coarse hand gesture recognition around a mobile device using only a simple set of sensors. PalmSpace is a prototype exploring the use of depth cameras on mobile devices to track the user's hands in direct manipulation interfaces through spatial gestures. Lastly, the iPhone Sandwich is a prototype supporting dual-sided pressure-sensitive multi-touch interaction. Through the results of user studies, we show that ADI can lead to improved usability for mobile user interfaces. Furthermore, the work on ADI contributes suggestions for the types of sensors that could be incorporated in future mobile devices to expand their input capabilities. In order to broaden the scope of uses for mobile accelerometer and gyroscope data, we conducted research on motion gesture recognition. With the aim of supporting practitioners and researchers in integrating motion gestures into their user interfaces at early development stages, we developed two motion gesture recognition algorithms, the $3 Gesture Recognizer and Protractor 3D, which are easy to incorporate into existing projects, have good recognition rates, and require little training data. To exemplify an application area for motion gestures, we present the results of a study on the feasibility and usability of gesture-based authentication. With the goal of making it easier to connect meaningful functionality with gesture-based input, we developed Mayhem, a graphical end-user programming tool for users without prior programming skills. Mayhem can be used for rapid prototyping of mobile gestural user interfaces.
The main contribution of this dissertation is the development of a number of novel user interface concepts for sensor-based interaction. They will help developers of mobile user interfaces make better use of the existing sensory capabilities of mobile devices. Furthermore, manufacturers of mobile device hardware obtain suggestions for the types of novel sensor technologies that are needed to expand the input capabilities of mobile devices. This allows the implementation of future mobile user interfaces with increased input capabilities, more expressiveness, and improved usability.
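
As a rough illustration of template-based motion gesture recognition in the spirit of the $3 Gesture Recognizer and Protractor 3D mentioned above, the sketch below resamples and normalizes 3D accelerometer traces and matches them to stored templates by nearest neighbour; it deliberately omits the rotation-invariance search of the published algorithms, and all names are illustrative assumptions.

```python
import numpy as np

def resample(trace, n=32):
    """Linearly resample a (length, 3) accelerometer trace to n points."""
    t_old = np.linspace(0.0, 1.0, len(trace))
    t_new = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(t_new, t_old, trace[:, k])
                            for k in range(trace.shape[1])])

def normalize(trace):
    """Centre the trace on its centroid and scale it to unit norm."""
    centred = trace - trace.mean(axis=0)
    return centred / (np.linalg.norm(centred) + 1e-9)

def recognize(trace, templates):
    """Return the label of the closest stored template.

    templates: list of (label, raw_trace) training examples.
    """
    query = normalize(resample(trace))
    scores = [(np.linalg.norm(query - normalize(resample(t))), label)
              for label, t in templates]
    return min(scores)[1]
```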

    Touch-screen Behavioural Biometrics on Mobile Devices

    Robust user verification on mobile devices is a top global priority from a financial security and privacy viewpoint, and has led to biometric verification complementing or replacing PIN and password methods. Research has shown that behavioural biometric methods, with their promise of improved security due to their inimitable nature and the lure of unobtrusive, implicit, continuous verification, could define the future of privacy and cyber security in an increasingly mobile world. Considering the real-life nature of problems relating to mobility, this study aims to determine the impact of user interaction factors that affect verification performance and usability for behavioural biometric modalities on mobile devices. Building on existing work on biometric performance assessments, it asks: to what extent does biometric performance remain stable when faced with movement, changes of environment, the passage of time, and other device-related factors influencing the usage of mobile devices in real-life applications? Further, it seeks to answer: what could further improve the performance of behavioural biometric modalities? Based on a review of the literature, a series of experiments was executed to collect a dataset consisting of touch-dynamics-based behavioural data mirroring various real-life usage scenarios of a mobile device. Responses were analysed using various uni-modal and multi-modal frameworks. The analysis demonstrated that existing verification methods using the touch modalities of swipes, signatures and keystroke dynamics adapt poorly when faced with a variety of usage scenarios, and have challenges related to time persistence. The results indicate that a multi-modal solution has a positive impact on verification performance. On this basis, it is recommended to explore alternatives in the form of dynamic, variable thresholds and smarter template selection strategies, which hold promise. We believe that the evaluation results presented in this thesis will streamline the development of future solutions for improving the security of behavioural-based modalities in mobile biometrics.
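
For concreteness, a minimal sketch of how a swipe-based touch-dynamics verifier of the kind evaluated in this work is often set up: each swipe is reduced to a feature vector and compared with an enrolled template under a threshold. The feature set, normalisation, and threshold below are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

def swipe_features(swipe):
    """Summarise one swipe as a fixed-length feature vector.

    swipe: array of (t, x, y, pressure) samples for a single swipe.
    """
    t, x, y, p = swipe.T
    dx, dy, dt = np.diff(x), np.diff(y), np.diff(t) + 1e-9
    speed = np.hypot(dx, dy) / dt
    return np.array([
        t[-1] - t[0],                          # duration
        np.hypot(x[-1] - x[0], y[-1] - y[0]),  # end-to-end distance
        speed.mean(), speed.max(),             # velocity statistics
        p.mean(),                              # average pressure
    ])

def verify(swipe, template_mean, template_std, threshold=1.5):
    """Accept if the swipe's features lie close to the enrolled template."""
    z = np.abs((swipe_features(swipe) - template_mean) / (template_std + 1e-9))
    return z.mean() < threshold  # smaller normalised deviation -> same user
```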

    Dynamic motion coupling of body movement for input control

    Touchless gestures are used for input when touch is unsuitable or unavailable, such as when interacting with displays that are remote, large, or public, or when touch is prohibited for hygienic reasons. Traditionally, user input is spatially or semantically mapped to system output; however, in the context of touchless gestures these interaction principles suffer from several disadvantages, including poor memorability, fatigue, and ill-defined mappings. This thesis investigates motion correlation as the third interaction principle for touchless gestures, which maps user input to system output based on spatiotemporal matching of reproducible motion. We demonstrate the versatility of motion correlation by using movement as the primary sensing principle, relaxing the restrictions on how a user provides input. Using TraceMatch, a novel computer-vision-based system, we show how users can provide effective input through an investigation of input performance with different parts of the body, and how users can switch modes of input spontaneously in realistic application scenarios. Secondly, spontaneous spatial coupling shows how motion correlation can bootstrap spatial input, allowing any body movement, or movement of tangible objects, to be appropriated for ad hoc touchless pointing on a per-interaction basis. We operationalise the concept in MatchPoint, and demonstrate its unique capabilities through an exploration of the design space with application examples. Thirdly, we explore how users synchronise with moving targets in the context of motion correlation, revealing how simple harmonic motion leads to better synchronisation. Using the insights gained, we explore the robustness of the algorithms used for motion correlation, showing how it is possible to successfully detect a user's intent to interact whilst suppressing accidental activations from common spatial and semantic gestures. Finally, we look across our work to distil guidelines for interface design, and further considerations of how motion correlation can be used, both in general and for touchless gestures.
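
The core motion correlation principle can be sketched as follows: over a sliding window, correlate the user's tracked movement with the motion of each displayed target and select the best match above a threshold. This is only an illustration of the principle, not the TraceMatch or MatchPoint implementation; window length and threshold are assumptions.

```python
import numpy as np

def motion_correlation(user_xy, target_xy):
    """Mean Pearson correlation between user and target velocities.

    user_xy, target_xy: arrays of shape (window, 2) with positions over the
    same time window; differencing compares velocities rather than positions.
    """
    dv_user = np.diff(user_xy, axis=0)
    dv_target = np.diff(target_xy, axis=0)
    corr = [np.corrcoef(dv_user[:, k], dv_target[:, k])[0, 1] for k in range(2)]
    return float(np.nanmean(corr))

def select_target(user_xy, targets, threshold=0.8):
    """Pick the moving target whose trajectory best matches the user's movement.

    targets: dict of target name -> (window, 2) position array.
    """
    scores = {name: motion_correlation(user_xy, xy) for name, xy in targets.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None  # None -> no intent detected
```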

    Advances in automated surgery skills evaluation

    Training a surgeon to be skilled and competent in performing a given surgical procedure is an important step in providing a high quality of care and reducing the risk of complications. Traditional surgical training is carried out by expert surgeons who observe and assess the trainees directly during a given procedure. However, these traditional training methods are time-consuming, subjective, and costly, and do not offer an overall surgical expertise evaluation criterion. An alternative to these subjective evaluation methods is a sensor-based methodology able to objectively assess the surgeon's skill level. Developments and advances in sensor technologies enable capturing and studying the information obtained from complex surgical procedures. If the surgical activities that occur during a procedure are captured using a set of sensors, then skill evaluation can be framed as a motion and time series analysis problem. This work aims at developing machine learning approaches for automated surgical skill assessment based on hand motion analysis. Specifically, this work presents several contributions to the field of objective surgical technique assessment using multi-dimensional time series: 1) introducing a new distance measure for surgical activities based on the alignment of two multi-dimensional time series; 2) developing an automated classification framework to identify the surgeon's proficiency level using wrist-worn sensors; 3) developing a classification technique to identify elementary surgical tasks: suturing, needle passing, and knot tying; 4) introducing a new surgeme mean feature reduction technique which helps improve the machine learning algorithms; 5) developing a framework for surgical gesture classification employing the mean feature reduction method; and 6) designing an unsupervised method to identify the surgemes in a given procedure.
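
As one concrete example of contribution 2 above, a classifier of proficiency level from wrist-worn motion data might be assembled as sketched below: windowed statistical features feed a standard classifier, and a trial is scored by majority vote over its windows. The feature set, window sizes, and choice of random forest are illustrative assumptions, not the specific methods developed in this work.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(imu, window=128, step=64):
    """Statistical features per sliding window of a wrist-worn IMU recording.

    imu: array of shape (samples, 6) with accelerometer and gyroscope axes.
    """
    feats = []
    for start in range(0, len(imu) - window + 1, step):
        w = imu[start:start + window]
        feats.append(np.concatenate([
            w.mean(axis=0),                        # average posture/force
            w.std(axis=0),                         # movement smoothness
            np.abs(np.diff(w, axis=0)).mean(axis=0)  # jerkiness
        ]))
    return np.array(feats)

# Windows from trials labelled by proficiency (e.g. novice / intermediate / expert)
# train the classifier; a new trial is then scored by majority vote over its windows.
clf = RandomForestClassifier(n_estimators=200)
# clf.fit(np.vstack(training_windows), training_labels)  # hypothetical training data
```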

    Effective Identity Management on Mobile Devices Using Multi-Sensor Measurements

    Due to the dramatic increase in popularity of mobile devices in the past decade, sensitive user information is stored and accessed on these devices every day. Securing the sensitive data stored on and accessed from mobile devices makes user-identity management a problem of paramount importance. The tension between security and usability renders the task of user-identity verification on mobile devices challenging. Meanwhile, an appropriate identity management approach is missing, since most existing technologies for user-identity verification either perform one-shot verification or only work in restricted, controlled environments. To solve the aforementioned problems, we investigated approaches based on the sensor data generated by human-mobile interaction. The data are collected from the on-board sensors, including voice data from the microphone, acceleration data from the accelerometer, angular velocity data from the gyroscope, magnetic field data from the magnetometer, and multi-touch gesture input data from the touchscreen. We studied the feasibility of extracting biometric and behavioural features from the on-board sensor data, and how to efficiently employ the extracted features to perform user-identity verification on the smartphone. Based on the experimental results for the single-sensor modalities, we further investigated how to integrate them with hardware such as fingerprint sensors and TrustZone to build a practical, usable identity management system for both local application and remote service control. User studies and on-device testing sessions were held for privacy and usability evaluation.
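
A minimal sketch of the multi-sensor idea described above, using score-level fusion: each modality produces a match score, and a weighted combination decides acceptance. The modalities, weights, and threshold here are hypothetical, not the system evaluated in the dissertation.

```python
def fuse_scores(scores, weights, threshold=0.5):
    """Weighted score-level fusion of per-modality verification scores.

    scores:  dict of modality -> match score in [0, 1], higher = more likely genuine.
    weights: dict of modality -> weight, e.g. set from validation accuracy.
    Returns the fused score and the accept/reject decision.
    """
    total = sum(weights[m] for m in scores)
    fused = sum(scores[m] * weights[m] for m in scores) / total
    return fused, fused >= threshold

# Hypothetical example: touch is weighted lower because it is noisier in this setup.
fused, accepted = fuse_scores({'voice': 0.72, 'motion': 0.88, 'touch': 0.55},
                              {'voice': 1.0, 'motion': 1.0, 'touch': 0.5})
```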

    Exploring At-Your-Side Gestural Interaction for Ubiquitous Environments

    Free-space gestural systems are faced with two major issues: a lack of subtlety due to explicit mid-air arm movements, and the highly effortful nature of such interactions. With an ever-growing ubiquity of interactive devices, displays, and appliances with non-standard interfaces, lower-effort and more socially acceptable interaction paradigms are essential. To address these issues, we explore at-one's-side gestural input. Within this space, we present the results of two studies that investigate the use of side-gesture input for interaction. First, we investigate end-user preference through a gesture elicitation study, present a gesture set, and validate the need for dynamic, diverse, and variable-length gestures. We then explore the feasibility of designing such a gesture recognition system, dubbed WatchTrace, which supports alphanumeric gestures of up to length three with an average accuracy of up to 82%, providing a rich, dynamic, and feasible gestural vocabulary.

    Gesture passwords: concepts, methods and challenges

    Biometrics are a convenient alternative to traditional forms of access control such as passwords and pass-cards since they rely solely on user-specific traits. Unlike alphanumeric passwords, biometrics cannot be given or told to another person, and unlike pass-cards, are always “on-hand.” Perhaps the most well-known biometrics with these properties are face, speech, iris, and gait. This dissertation proposes a new biometric modality: gestures. A gesture is a short body motion that contains static anatomical information and changing behavioral (dynamic) information. This work considers both full-body gestures, such as a large wave of the arms, and hand gestures, such as a subtle curl of the fingers and palm. For access control, a specific gesture can be selected as a “password” and used for identification and authentication of a user. If this particular motion were somehow compromised, a user could readily select a new motion as a “password,” effectively changing and renewing the behavioral aspect of the biometric. This thesis describes a novel framework for acquiring, representing, and evaluating gesture passwords for the purpose of general access control. The framework uses depth sensors, such as the Kinect, to record gesture information from which depth maps or pose features are estimated. First, various distance measures, such as the log-Euclidean distance between feature covariance matrices and distances based on feature sequence alignment via dynamic time warping, are used to compare two gestures and to train a classifier to either authenticate or identify a user. In authentication, this framework yields an equal error rate on the order of 1-2% for body and hand gestures in non-adversarial scenarios. Next, through a novel decomposition of gestures into posture, build, and dynamic components, the relative importance of each component is studied. The dynamic portion of a gesture is shown to have the largest impact on biometric performance, with its removal causing a significant increase in error. In addition, the effects of two types of threats are investigated: one due to self-induced degradations (personal effects and the passage of time) and the other due to spoof attacks. For body gestures, both spoof attacks (with only the dynamic component) and self-induced degradations increase the equal error rate, as expected. Further, the benefits of adding additional sensor viewpoints to this modality are empirically evaluated. Finally, a novel framework that leverages deep convolutional neural networks to learn a user-specific “style” representation from a set of known gestures is proposed and compared to a similar representation for gesture recognition. This deep convolutional neural network yields significantly improved performance over prior methods. A byproduct of this work is the creation and release of multiple publicly available, user-centric (as opposed to gesture-centric) datasets based on both body and hand gestures.
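
The log-Euclidean distance between feature covariance matrices mentioned above has a simple closed form: the Frobenius norm of the difference of the matrix logarithms. A minimal sketch, with the per-frame feature layout left as an assumption:

```python
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(features):
    """Covariance of per-frame gesture features (e.g. joint positions over time).

    features: array of shape (frames, d); a small ridge keeps the matrix
    positive definite so its matrix logarithm is well defined.
    """
    return np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])

def log_euclidean_distance(cov_a, cov_b):
    """Frobenius norm of the difference of matrix logarithms."""
    return np.linalg.norm(logm(cov_a) - logm(cov_b), ord='fro')

# Two gestures are compared via their covariance descriptors; a nearest-neighbour
# or threshold rule on this distance then authenticates or identifies the user.
```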

    Enabling mobile microinteractions

    While much attention has been paid to the usability of desktop computers, mobile computers are quickly becoming the dominant platform. Because mobile computers may be used in nearly any situation, including while the user is actually in motion or performing other tasks, interfaces designed for stationary use may be inappropriate, and alternative interfaces should be considered. In this dissertation I consider the idea of microinteractions: interactions with a device that take less than four seconds to initiate and complete. Microinteractions are desirable because they may minimize interruption; that is, they allow for a tiny burst of interaction with a device so that the user can quickly return to the task at hand. My research concentrates on methods for applying microinteractions through wrist-based interaction. I consider two modalities for this interaction: touchscreens and motion-based gestures. In the case of touchscreens, I consider the interface implications of making touchscreen watches usable with the finger, instead of the usual stylus, and investigate users' performance with a round touchscreen. For gesture-based interaction, I present a tool, MAGIC, for designing gesture-based interactive systems, and detail the evaluation of the tool.