9 research outputs found

    Using Hidden Markov Models to Segment and Classify Wrist Motions Related to Eating Activities

    Get PDF
    Advances in body sensing and mobile health technology have created new opportunities for empowering people to take a more active role in managing their health. Measurements of dietary intake are commonly used for the study and treatment of obesity. However, the most widely used tools rely upon self-report and require considerable manual effort, leading to underreporting of consumption, non-compliance, and discontinued use over the long term. We are investigating the use of wrist-worn accelerometers and gyroscopes to automatically recognize eating gestures. In order to improve recognition accuracy, we studied the sequential dependency of actions during eating. In chapter 2 we first undertook the task of finding a set of wrist motion gestures that was small yet descriptive enough to model the actions performed by an eater during consumption of a meal. We found a set of four actions: rest, utensiling, bite, and drink; any other motion is referred to as the 'other' gesture. The stability of the gesture definitions was evaluated using an inter-rater reliability test. Later, in chapter 3, 25 meals were hand labeled and used to study the existence of sequential dependence among the gestures. To study this, three types of classifiers were built: 1) a K-nearest neighbor classifier, which uses no sequential context, 2) a hidden Markov model (HMM), which captures the sequential context of sub-gesture motions, and 3) HMMs that model inter-gesture sequential dependencies. We built first-order to sixth-order HMMs to evaluate the usefulness of increasing amounts of sequential dependence for recognition. The first two were our baseline algorithms. We found that adding knowledge of the sequential dependence of gestures achieved an accuracy of 96.5%, an improvement of 20.7% and 12.2% over the KNN and the sub-gesture HMM, respectively. Lastly, in chapter 4, we automatically segmented a continuous wrist motion signal and assessed the classification performance of each of the three classifiers. Again, knowledge of sequential dependence enhanced the recognition of gestures in unsegmented data, achieving 90% accuracy, an improvement of 30.1% and 18.9% over the KNN and the sub-gesture HMM, respectively.
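
    As a concrete illustration of the inter-gesture classifiers, the sketch below decodes a sequence of wrist-motion observations with a first-order HMM over the five gesture labels, using the standard Viterbi algorithm. The transition and start probabilities are invented placeholders, not the values trained on the labeled meals.

```python
import numpy as np

# Illustrative first-order HMM over the four eating gestures plus "other".
# Transition and start probabilities are made-up placeholders, not the
# values trained in the thesis.
states = ["rest", "utensiling", "bite", "drink", "other"]

# trans[i, j] = P(gesture j at time t | gesture i at time t-1)
trans = np.array([
    [0.50, 0.20, 0.15, 0.10, 0.05],
    [0.10, 0.40, 0.40, 0.05, 0.05],
    [0.30, 0.40, 0.15, 0.10, 0.05],
    [0.50, 0.20, 0.10, 0.15, 0.05],
    [0.40, 0.20, 0.15, 0.10, 0.15],
])
start = np.full(5, 0.2)

def viterbi(log_emit):
    """log_emit[t, s]: log-likelihood of the wrist-motion features at
    time t under gesture s (e.g., from per-gesture sub-HMMs)."""
    T, S = log_emit.shape
    log_trans = np.log(trans)
    delta = np.log(start) + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans      # S x S candidate scores
        back[t] = scores.argmax(axis=0)          # best predecessor per state
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                # backtrack the best path
        path.append(back[t, path[-1]])
    return [states[s] for s in reversed(path)]
```

    Higher-order dependence, as in the thesis's second- to sixth-order models, can be encoded in the same framework by expanding the state space to tuples of recent gestures.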

    Multimodal, Embodied and Location-Aware Interaction

    Get PDF
    This work demonstrates the development of mobile, location-aware, eyes-free applications which utilise multiple sensors to provide a continuous, rich and embodied interaction. We bring together ideas from the fields of gesture recognition, continuous multimodal interaction, probability theory and audio interfaces to design and develop location-aware applications and embodied interaction in both a small-scale, egocentric body-based case and a large-scale, exocentric 'world-based' case. BodySpace is a gesture-based application, which utilises multiple sensors and pattern recognition enabling the human body to be used as the interface for an application. As an example, we describe the development of a gesture controlled music player, which functions by placing the device at different parts of the body. We describe a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based interaction techniques and the use of real world constraints can shape the gestural interaction. GpsTunes is a mobile, multimodal navigation system equipped with inertial control that enables users to actively explore and navigate through an area in an augmented physical space, incorporating and displaying uncertainty resulting from inaccurate sensing and unknown user intention. The system propagates uncertainty appropriately via Monte Carlo sampling and output is displayed both visually and in audio, with audio rendered via granular synthesis. We demonstrate the use of uncertain prediction in the real world and show that appropriate display of the full distribution of potential future user positions with respect to sites-of-interest can improve the quality of interaction over a simplistic interpretation of the sensed data. We show that this system enables eyes-free navigation around set trajectories or paths unfamiliar to the user for varying trajectory width and context. We demonstrate the possibility of creating a simulated model of user behaviour, which may be used to gain insight into the user behaviour observed in our field trials. The extension of this application to provide a general mechanism for highly interactive context-aware applications via density exploration is also presented. AirMessages is an example application enabling users to take an embodied approach to scanning a local area to find messages left in their virtual environment.
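
    The following sketch illustrates the kind of Monte Carlo propagation GpsTunes performs: noisy position, heading, and speed estimates are sampled, stepped forward, and the resulting distribution is summarised with respect to a site-of-interest. The constant-velocity motion model and all noise scales are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(pos_gps, heading_deg, speed_mps, dt=1.0, n=1000,
              pos_sigma=5.0, heading_sigma_deg=15.0, speed_sigma=0.3):
    """Sample n hypotheses of where the user will be after dt seconds,
    given a noisy GPS position (metres, east/north), compass heading and
    walking speed. All noise scales here are illustrative assumptions."""
    pos = pos_gps + rng.normal(0.0, pos_sigma, size=(n, 2))
    heading = np.deg2rad(heading_deg + rng.normal(0.0, heading_sigma_deg, n))
    speed = np.maximum(0.0, speed_mps + rng.normal(0.0, speed_sigma, n))
    # Heading measured clockwise from north: east = sin, north = cos.
    step = np.column_stack([np.sin(heading), np.cos(heading)]) * (speed * dt)[:, None]
    return pos + step

# Probability that the predicted position falls within 10 m of a
# site-of-interest -- the kind of quantity the audio display can render.
samples = propagate(pos_gps=np.array([0.0, 0.0]), heading_deg=30.0, speed_mps=1.4)
site = np.array([10.0, 15.0])
p_near = np.mean(np.linalg.norm(samples - site, axis=1) < 10.0)
print(f"P(within 10 m of site) ~= {p_near:.2f}")
```

    Rendering the full sample distribution, rather than a single best guess, is what allows the audio display to convey uncertainty to the user.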

    The Dollar General: Continuous Custom Gesture Recognition Techniques At Everyday Low Prices

    Get PDF
    Humans use gestures to emphasize ideas and disseminate information. Their importance is apparent in how we continuously augment social interactions with motion, gesticulating in harmony with nearly every utterance to ensure observers understand that which we wish to communicate, and their relevance has not escaped the HCI community's attention. For almost as long as computers have been able to sample human motion at the user interface boundary, software systems have been made to understand gestures as command metaphors. Customization, in particular, has great potential to improve user experience, whereby users map specific gestures to specific software functions. However, custom gesture recognition remains a challenging problem, especially when training data is limited, input is continuous, and designers who wish to use customization in their software are limited by mathematical attainment, machine learning experience, domain knowledge, or a combination thereof. Data collection, filtering, segmentation, pattern matching, synthesis, and rejection analysis are all non-trivial problems a gesture recognition system must solve. To address these issues, we introduce The Dollar General (TDG), a complete pipeline composed of several novel continuous custom gesture recognition techniques. Specifically, TDG comprises an automatic low-pass filter tuner that we use to improve signal quality, a segmenter for identifying gesture candidates in a continuous input stream, a classifier for discriminating gesture candidates from non-gesture motions, and a synthetic data generation module we use to train the classifier. Our system achieves high recognition accuracy with as little as one or two training samples per gesture class, is largely input-device agnostic, and does not require advanced mathematical knowledge to understand and implement. In this dissertation, we motivate the importance of gestures and customization, describe each pipeline component in detail, and introduce strategies for data collection and prototype selection.
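
    To make the pipeline's first two stages concrete, here is a minimal sketch of a low-pass filter and a motion-energy segmenter for a continuous 2-D input stream. TDG's actual tuner selects the filter parameter automatically and its segmenter is more sophisticated; the functions and thresholds below are illustrative assumptions only.

```python
import numpy as np

def lowpass(points, alpha=0.3):
    """Exponential moving-average low-pass filter over (T, 2) input points.
    TDG tunes its filter automatically; here alpha is a free parameter."""
    out = np.empty_like(points, dtype=float)
    out[0] = points[0]
    for i in range(1, len(points)):
        out[i] = alpha * points[i] + (1 - alpha) * out[i - 1]
    return out

def segment_candidates(points, rest_thresh=2.0, min_len=10):
    """Cut a continuous (T, 2) stream into gesture candidates wherever the
    per-frame movement magnitude rises above a rest threshold.
    Thresholds are illustrative, not TDG's actual values."""
    speed = np.linalg.norm(np.diff(points, axis=0), axis=1)
    moving = speed > rest_thresh
    candidates, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i                      # motion begins: open a candidate
        elif not m and start is not None:
            if i - start >= min_len:       # ignore very short blips
                candidates.append(points[start:i + 1])
            start = None                   # motion ends: close the candidate
    if start is not None and len(moving) - start >= min_len:
        candidates.append(points[start:])
    return candidates
```

    Each candidate would then be passed to the classifier stage, which accepts it as a gesture or rejects it as incidental motion.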

    Advancing Gesture Recognition with Millimeter Wave Radar

    Full text link
    Wireless sensing has attracted significant interest over the years, and with the dawn of emerging technologies, it has become more integrated into our daily lives. Among the various wireless communication platforms, WiFi has gained widespread deployment in indoor settings. Consequently, the utilization of ubiquitous WiFi signals for detecting indoor human activities has garnered considerable attention in the past decade. However, more recently, mmWave radar-based sensing has emerged as a promising alternative, offering advantages such as enhanced sensitivity to motion and increased bandwidth. This thesis introduces innovative approaches to enhance contactless gesture recognition by leveraging emerging low-cost millimeter wave radar technology. It makes three key contributions. Firstly, a cross-modality training technique is proposed, using mmWave radar as a supplementary aid for training WiFi-based deep learning models. The proposed model enables precise gesture detection based solely on WiFi signals, significantly improving WiFi-based recognition. Secondly, a novel beamforming-based gesture detection system is presented, utilizing commodity mmWave radars for accurate detection in low signal-to-noise scenarios. By steering multiple beams around the gesture performer, independent views of the gesture are captured. A self-attention-based deep neural network intelligently fuses information from these beams, surpassing single-beam accuracy. The model incorporates a unique data augmentation algorithm accounting for Doppler shift and multipath effects, enhancing generalization. Notably, the proposed method achieves superior gesture classification performance, outperforming state-of-the-art approaches by 31-43% with only two beams. Thirdly, the research explores receiver antenna diversity in mmWave radars, using deep learning techniques to combine data from multiple receiver antennas and leverage their inherent diversity, to further improve gesture recognition accuracy. Extensive experimentation and evaluation demonstrate substantial advancements in contactless gesture recognition using low-cost mmWave radar technology.
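
    A minimal sketch of the multi-beam fusion idea, assuming PyTorch: one embedding per beam is fused with self-attention and pooled before classification. The feature sizes, pooling, and head below are illustrative guesses, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn

class BeamFusion(nn.Module):
    """Fuse per-beam radar features with self-attention, then classify.
    Dimensions and depth are illustrative, not the thesis's design."""
    def __init__(self, feat_dim=64, n_classes=10):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4,
                                          batch_first=True)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, beam_feats):
        # beam_feats: (batch, n_beams, feat_dim), one embedding per beam,
        # produced by any per-beam encoder (e.g., a small CNN over that
        # beam's micro-Doppler spectrogram).
        fused, _ = self.attn(beam_feats, beam_feats, beam_feats)
        return self.head(fused.mean(dim=1))    # pool over beams

logits = BeamFusion()(torch.randn(8, 2, 64))   # 8 samples, 2 beams
print(logits.shape)                            # torch.Size([8, 10])
```

    Attention lets the network weight each beam by how informative its view of the gesture is, which is the intuition behind surpassing single-beam accuracy.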

    Risks and potentials of graphical and gesture-based authentication for touchscreen mobile devices

    Get PDF
    While a few years ago mobile phones were mainly used for making phone calls and texting short messages, the functionality of mobile devices has grown massively. We surf the web, send emails, and check our bank accounts on the go. As a consequence, these internet-enabled devices store a lot of potentially sensitive data and require enhanced protection. We argue that authentication often represents the only countermeasure to protect mobile devices from unwanted access. Knowledge-based concepts (e.g., PIN) are the most used authentication schemes on mobile devices. They serve as the main protection barrier for many users and represent the fallback solution whenever alternative mechanisms fail (e.g., fingerprint recognition). This thesis focuses on the risks and potentials of gesture-based authentication concepts that particularly exploit the touch feature of mobile devices. The contribution of our work is threefold. Firstly, the problem space of mobile authentication is explored. Secondly, the design space is systematically evaluated using interactive prototypes. Finally, we provide generalized insights into the impact of specific design factors and present recommendations for the design and evaluation of graphical gesture-based authentication mechanisms. The problem space exploration is based on four research projects that reveal important real-world issues of gesture-based authentication on mobile devices. The first part focuses on authentication behavior in the wild and shows that the mobile context makes great demands on the usability of authentication concepts. The second part explores usability features of established concepts and indicates that gesture-based approaches have several benefits in the mobile context. The third part focuses on observability and presents a prediction model for the vulnerability of a given grid-based gesture. Finally, the fourth part investigates the predictability of user-selected gesture-based secrets. The design space exploration is based on a design-oriented research approach and presents several practical solutions to existing real-world problems. The novel authentication mechanisms are implemented as working prototypes and evaluated in the lab and in the field. In the first part, we discuss smudge attacks and present alternative authentication concepts that are significantly more secure against such attacks. The second part focuses on observation attacks. We illustrate how relative touch gestures can support eyes-free authentication and how they can be utilized to make traditional PIN entry secure against observation attacks. The third part addresses the problem of predictable gesture choice and presents two concepts which nudge users to select a more diverse set of gestures. Finally, the results of the basic research and the design-oriented applied research are combined to discuss the interconnection of design space and problem space. We contribute by outlining crucial requirements for mobile authentication mechanisms and present empirically proven objectives for future designs. In addition, we illustrate a systematic goal-oriented development process and provide recommendations for the evaluation of authentication on mobile devices.

    Robust and reliable hand gesture recognition for myoelectric control

    Get PDF
    Surface Electromyography (sEMG) is a physiological signal that records the electrical activity of muscles through electrodes applied to the skin. In the context of Muscle Computer Interaction (MCI), systems are controlled by transforming myoelectric signals into interaction commands that convey the intent of user movement, mostly for rehabilitation purposes. Taking myoelectric hand prosthetic control as an example, using sEMG recorded from the remaining muscles of the stump can be considered the most natural way for amputees who have lost their limbs to perform activities of daily living with the aid of prostheses. Although the earliest myoelectric control research dates back to the 1950s, considerable challenges remain in addressing the significant gap between academic research and industrial applications. Most recently, pattern recognition-based control has been developing rapidly to improve the dexterity of myoelectric prosthetic devices, owing to recent advances in machine learning and deep learning techniques. It is clear that the performance of Hand Gesture Recognition (HGR) plays an essential role in pattern recognition-based control systems. However, in reality, the tremendous success in achieving very high sEMG-based HGR accuracy (≥ 90%) reported in scientific articles has produced only limited clinical or commercial impact. As many have reported, real-time performance tends to degrade significantly as a result of many confounding factors, such as electrode shift, sweating, fatigue, and day-to-day variation. The main interest of the present thesis is, therefore, to improve the robustness of sEMG-based HGR by taking advantage of the most recent deep learning techniques to address several practical concerns. The challenge of this research problem is reinforced by considering only raw sparse multichannel sEMG signals as input. Firstly, a framework for designing an uncertainty-aware sEMG-based hand gesture classifier is proposed. Applying it allows us to quickly build a model that can make its inference along with explainable, quantified, multidimensional uncertainties, directly addressing the black-box concern of the HGR process. Secondly, to fill the gap left by the lack of consensus on the definition of model reliability in this field, a proper definition of model reliability is proposed. Based on it, reliability analysis can be performed as a new dimension of evaluation to help select the best model without relying only on classification accuracy. Our extensive experimental results show the efficiency of the proposed reliability analysis, which encourages researchers to use it as a supplementary tool for model evaluation. Next, an uncertainty-aware model is designed based on the proposed framework to address the low robustness of hand grasp recognition. This offers an opportunity to investigate whether reliable models can achieve robust performance. The results show that the proposed model can improve the long-term robustness of hand grasp recognition by rejecting highly uncertain predictions. Finally, a simple but effective normalisation approach is proposed to improve the robustness of inter-subject HGR, thus addressing the clinical challenge of having only a limited amount of data from any individual. Comparison results show that it obtains better performance than a state-of-the-art (SoA) transfer learning method when only one training cycle is available. In summary, this study presents promising methods for pursuing an accurate, robust, and reliable classifier, which is the overarching goal of sEMG-based HGR. A direction for future work would be the inclusion of these methods in real-time myoelectric control applications.
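
    As a hedged sketch of the rejection idea, the snippet below abstains from classifying sEMG windows whose predictive entropy exceeds a threshold. The thesis quantifies richer, multidimensional uncertainties; plain softmax entropy stands in for them here, and the threshold is an arbitrary choice.

```python
import numpy as np

def predict_with_rejection(probs, entropy_thresh=0.5):
    """Reject predictions whose predictive entropy exceeds a threshold.
    probs: (n_windows, n_gestures) class probabilities per sEMG window.
    Returns the predicted class per window, or -1 where the model abstains.
    Softmax entropy is only a stand-in for the thesis's uncertainties."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    preds = probs.argmax(axis=1)
    preds[entropy > entropy_thresh] = -1   # abstain instead of guessing
    return preds

probs = np.array([[0.9, 0.05, 0.05],    # confident -> keep
                  [0.4, 0.35, 0.25]])   # uncertain -> reject
print(predict_with_rejection(probs))    # [ 0 -1]
```

    In a prosthetic controller, an abstention would typically hold the previous command rather than execute an unreliable one.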

    Semi-supervised machine learning techniques for classification of evolving data in pattern recognition

    Get PDF
    The amount of data recorded and processed over recent years has increased exponentially. To create intelligent systems that can learn from this data, we need to be able to identify patterns hidden in the data itself, learn these patterns, and predict future results based on our current observations. If we think about this system in the context of time, the data itself evolves and so does the nature of the classification problem. As more data become available, different classification algorithms become suitable for the setting at hand. At the beginning of the learning cycle, when we have a limited amount of data, online learning algorithms are more suitable. When truly large amounts of data become available, we need algorithms that can handle data that might be only partially labeled, a result of the bottleneck that human labeling creates in the learning pipeline. An excellent example of evolving data is gesture recognition, and it is present throughout our work. We need a gesture recognition system to work fast and with very few examples at the beginning. Over time, we are able to collect more data and the system can improve. As the system evolves, the user expects it to work better and not to have to become involved when the classifier is unsure about decisions. This latter situation produces additional unlabeled data. Another example of an application is medical classification, where experts' time is a rare resource and the amount of received and labeled data increases disproportionately over time. Although the process of data evolution is continuous, we identify three main discrete areas of contribution in different scenarios. When the system is very new and not enough data are available, online learning is used to learn after every single example and to capture the knowledge very fast. With increasing amounts of data, offline learning techniques become applicable. Once the amount of data is overwhelming and the teacher cannot provide labels for all of it, we have another setup that combines labeled and unlabeled data. These three setups define our areas of contribution, and our techniques contribute to each of them with applications to pattern recognition scenarios such as gesture recognition and sketch recognition. An online learning setup significantly restricts the range of techniques that can be used. In our case, the selected baseline technique is the Evolving TS-Fuzzy Model. The semi-supervised aspect we use is a relation between rules created by this model. Specifically, we propose a transductive similarity model that utilizes the relationship between generated rules based on their decisions about a query sample at inference time. The activation of each of these rules is adjusted according to the transductive similarity, and the new decision is obtained using the adjusted activation. We also propose several new variations of the transductive similarity itself. Once the amount of data increases, we are not limited to the online learning setup, and we can take advantage of the offline learning scenario, which normally performs better than the online one because of its independence from sample ordering and its global optimization with respect to all samples. We use generative methods to obtain data outside of the training set. Specifically, we aim to improve the previously mentioned TS Fuzzy Model by incorporating semi-supervised learning into the offline learning setup without unlabeled data. We use the Universum learning approach and have developed a method called UFuzzy. This method relies on artificially generated examples with high uncertainty (the Universum set), and it adjusts the cost function of the algorithm to force the decision boundary to be close to the Universum data. We were able to confirm the hypothesis behind the design of the UFuzzy classifier, that Universum learning can improve the TS Fuzzy Model, and achieved improved performance on more than two dozen datasets and applications. With increasing amounts of data, we use the last scenario, in which the data comprise both labeled data and additional unlabeled data. This setting is one of the most common for semi-supervised learning problems. In this part of our work, we aim to improve the widely popular techniques of self-training (and its successor, help-training), which are both meta-frameworks over regular classifier methods but require a probabilistic representation of the output, which can be hard to obtain in the case of discriminative classifiers. Therefore, we develop a new algorithm that uses the modified active learning technique Query-by-Committee (QbC) to sample data with high certainty from the unlabeled set and subsequently embed them into the original training set. Our new method achieves increased performance over both a range of datasets and a range of classifiers. These three works are connected by gradually relaxing the constraints on the learning setting in which we operate. Although our main motivation was to increase performance in various real-world tasks (gesture recognition, sketch recognition), we formulated our work as general methods that can be used outside a specific application setup, the only restriction being that the underlying data evolve over time. Each of these methods can successfully exist on its own. The best setting for them is a learning problem where the data evolve over time and it is possible to discretize the evolutionary process. Overall, this work represents a significant contribution to the areas of both semi-supervised learning and pattern recognition. It presents new state-of-the-art techniques that outperform baseline solutions, and it opens up new possibilities for future research.
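
    A minimal sketch of the QbC-based self-training idea, under stated assumptions: a bootstrap committee of decision trees votes on the unlabeled pool, and the samples with the highest committee agreement are pseudo-labeled and absorbed into the training set, sidestepping the need for probabilistic outputs. Committee size, base learner, rounds, and selection fraction are arbitrary choices for illustration, not the thesis's configuration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def qbc_self_train(X_lab, y_lab, X_unl, n_committee=5, rounds=3, frac=0.1):
    """Self-training via committee agreement instead of class probabilities.
    Assumes integer class labels in y_lab. A plain sketch of the idea;
    the committee size, rounds and base learner are arbitrary here."""
    rng = np.random.default_rng(0)
    for _ in range(rounds):
        if len(X_unl) == 0:
            break
        votes = []
        for _ in range(n_committee):
            # Bootstrap-resample the labeled set to diversify the committee.
            idx = rng.choice(len(X_lab), size=len(X_lab), replace=True)
            clf = DecisionTreeClassifier().fit(X_lab[idx], y_lab[idx])
            votes.append(clf.predict(X_unl))
        votes = np.array(votes)                       # (committee, n_unl)
        majority = np.apply_along_axis(
            lambda v: np.bincount(v).argmax(), 0, votes)
        agreement = (votes == majority).mean(axis=0)  # committee consensus
        # Move the most-agreed-upon samples into the training set.
        take = np.argsort(-agreement)[:max(1, int(frac * len(X_unl)))]
        X_lab = np.vstack([X_lab, X_unl[take]])
        y_lab = np.concatenate([y_lab, majority[take]])
        X_unl = np.delete(X_unl, take, axis=0)
    return DecisionTreeClassifier().fit(X_lab, y_lab)
```

    Because agreement is computed from hard votes, the same scheme works with any discriminative base classifier, which is the point of replacing self-training's probability requirement.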
