300 research outputs found

    To Draw or Not to Draw: Recognizing Stroke-Hover Intent in Gesture-Free Bare-Hand Mid-Air Drawing Tasks

    Over the past several decades, technological advancements have introduced new modes of communicating with computers, marking a shift away from traditional mouse and keyboard interfaces. While touch-based interactions are abundant today, recent developments in computer vision, body-tracking stereo cameras, and augmented and virtual reality now enable communicating with computers through spatial input in physical 3D space. These techniques are being integrated into design-critical tasks such as sketching and modeling through sophisticated methodologies and specialized instrumented devices. One of the prime challenges in design research is to make this spatial interaction with the computer as intuitive as possible for users. Drawing curves in mid-air with the fingers is a fundamental task with applications to 3D sketching, geometric modeling, handwriting recognition, and authentication. Sketching in general is a crucial mode of effective idea communication between designers. Mid-air curve input is typically accomplished through instrumented controllers, specific hand postures, or pre-defined hand gestures in the presence of depth- and motion-sensing cameras, and the user may use any of these modalities to express the intention to start or stop sketching. However, apart from lacking robustness, such gestures, specific postures, and instrumented controllers impose an additional cognitive load on the user during design tasks. To address these problems, the presented research discusses the design, development, and evaluation of data-driven models for intent recognition in non-instrumented, gesture-free, bare-hand mid-air drawing tasks. The research is motivated by a behavioral study demonstrating the need for such an approach, given the lack of robustness and intuitiveness of hand postures and instrumented devices. The main objective is to study how users move during mid-air sketching, develop qualitative insights regarding such movements, and consequently implement a computational approach that determines when the user intends to draw in mid-air without an explicit mechanism (such as an instrumented controller or a specified hand posture). The idea is to record the user's hand trajectory and classify each recorded point as either hover or stroke; the resulting model thus labels every point on the user's spatial trajectory. Drawing inspiration from the way users sketch in mid-air, this research first establishes the need for an alternative approach that processes bare-hand mid-air curves in a continuous fashion. It then presents a novel drawing-intent recognition workflow applied to every recorded drawing point, using three different approaches. We begin by recording mid-air drawing data and developing a classification model based on geometric properties extracted from the recorded data; the goal of this model is to identify drawing intent from critical geometric and temporal features. In the second approach, we explore how the prediction quality of the model varies as the dimensionality of the mid-air curve input is increased. In the third approach, we seek to understand drawing intention from mid-air curves using dimensionality-reduction neural networks such as autoencoders. The thesis concludes by discussing the broader implications of this research, along with potential areas for development in the design and research of mid-air interactions.
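    The thesis's model code is not reproduced here, but the first approach (classifying each trajectory point from geometric and temporal features) can be illustrated with a minimal sketch. The feature set below (speed, acceleration magnitude, curvature) and the random-forest classifier are illustrative assumptions, not the thesis's actual features or model, and the trajectory and labels are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def point_features(traj, t):
    """Per-point geometric/temporal features of a 3D hand trajectory.

    traj: (N, 3) positions, t: (N,) timestamps.
    Returns an (N, 3) array of [speed, acceleration magnitude, curvature].
    """
    dt = np.gradient(t)[:, None]
    vel = np.gradient(traj, axis=0) / dt
    acc = np.gradient(vel, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    # Curvature of a space curve: |v x a| / |v|^3 (guarded against zero speed).
    curvature = np.linalg.norm(np.cross(vel, acc), axis=1) / np.maximum(speed, 1e-9) ** 3
    return np.column_stack([speed, np.linalg.norm(acc, axis=1), curvature])

# Stand-in recording: a random-walk trajectory with synthetic hover/stroke labels.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 200)
traj = np.cumsum(rng.normal(scale=0.005, size=(200, 3)), axis=0)
feats = point_features(traj, t)
labels = (feats[:, 0] < np.median(feats[:, 0])).astype(int)  # slower points ~ "stroke"

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(feats, labels)
hover_or_stroke = clf.predict(feats)  # per-point intent prediction
```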

    Applying touch gesture to improve application accessing speed on mobile devices.

    The touch gesture shortcut is one of the most significant contributions to Human-Computer Interaction (HCI). It is used in many fields, e.g., performing web-browsing tasks on a smartphone (moving to the next page, adding bookmarks, etc.), manipulating a virtual object on a tabletop device, and communicating between two touch-screen devices. Compared with the traditional Graphical User Interface (GUI), the touch gesture shortcut is more efficient, more natural, more intuitive, and easier to use. With the rapid development of smartphone technology, an increasing number of data items are accumulating on users' mobile devices, such as contacts, installed apps, and photos. As a result, it has become troublesome to find a target item on a mobile device with the traditional GUI; to find a target app, for example, sliding and browsing through several screens is a necessity. This thesis addresses this challenge by proposing two alternative methods of using a touch gesture shortcut to find a target item (an app, as an example) on a mobile device. Current touch gesture shortcut methods either employ a universal built-in system-defined shortcut template or a gesture-item set defined by users before using the device. In either case, users must learn or define the gestures first and then recall and draw them to reach the target item according to the template or predefined set. Evidence has shown that, compared with the GUI, the touch gesture shortcut has an advantage when performing several types of tasks, e.g., text editing, picture drawing, and audio control, but it is unknown whether it is quicker or more effective than the traditional GUI for finding target apps. This thesis first conducts an exploratory study to understand user memorisation of Personalized Gesture Shortcuts (PGS) for 15 frequently used mobile apps. An experiment is then conducted to investigate (1) users' recall accuracy of the PGS for finding both frequently and infrequently used target apps, and (2) the speed with which users are able to access the target apps relative to the GUI. The results show that the PGS produced a clear speed advantage (1.3 s faster on average) over the traditional GUI, while there was an approximately 20% failure rate due to unsuccessful recall of the PGS. To address the unsuccessful-recall problem, this thesis explores ways of developing a new interactive approach based on the touch gesture shortcut but without requiring recall or predefinition before use. Named the Intelligent Launcher in this thesis, it predicts and launches any intended target app from an unconstrained gesture drawn by the user. To explore how to achieve this, a third experiment investigates the relationship between the reasons underlying users' gesture creation and the gesture shape (handwriting, non-handwriting, or abstract) they used as their shortcut. Based on the results, and unlike existing approaches, the thesis proposes that the launcher should predict the user's intended app from three types of gestures: first, non-handwriting gestures, via the visual similarity between the gesture and the app's icon; second, handwriting gestures, via the app's library name plus functionality; and third, abstract gestures, via the app's usage history. In light of these findings, we designed and developed the Intelligent Launcher based on assumptions drawn from the empirical data.
    This thesis introduces the interaction, the architecture, and the technical details of the launcher, and describes how data from the third experiment are used to improve predictions with a machine-learning method, i.e., a Markov Model. An evaluation experiment shows that the Intelligent Launcher achieved user satisfaction with a prediction accuracy of 96%. As of now, it is still difficult to know which type of gesture a user tends to use; therefore, a fourth experiment explores the factors that influence the choice of touch gesture shortcut type for accessing a target app. The results show that (1) those who preferred a name-based method used it more consistently and created more letter gestures than those who preferred the other three methods; (2) those who preferred the keyword app-search method created more letter gestures than other types; (3) those who preferred an iOS system created more drawing gestures than other types; (4) letter gestures were more often used for frequently used apps, whereas drawing gestures were more often used for infrequently used apps; and (5) participants tended to use the same creation method as their preferred method on different days of the experiment. This thesis contributes to the body of Human-Computer Interaction knowledge by proposing two alternative methods that are more efficient and flexible for finding a target item among a large number of items. The PGS method has been confirmed as effective, with a clear speed advantage. The Intelligent Launcher demonstrates a novel way of predicting a target item from the gesture the user draws. The findings concerning the relationship between the user's choice of gesture for the shortcut and individual factors have informed the design of a more flexible touch gesture shortcut interface for "target item finding" tasks. Among the different types of data items one might search for, the Intelligent Launcher is a prototype for finding target apps, since the variety in an app's visual appearance and functionality makes it more difficult to predict than other targets, such as a standard phone setting, a contact, or a website. However, we believe that the ideas presented in this thesis can be extended to other types of items, such as videos or photos in a photo library, places on a map, or clothes in an online store. Moreover, this study leads the way in demonstrating the advantage of machine-learning methods in touch gesture shortcut interactions.
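    The abstract names a Markov Model as the machine-learning component but gives no implementation detail. As a hedged illustration of how usage history can drive app prediction, the sketch below builds a first-order Markov chain over app-launch transitions; the app names and the first-order structure are assumptions for illustration, not the thesis's actual model.

```python
from collections import Counter, defaultdict

# Hypothetical usage history: a sequence of app launches.
history = ["mail", "browser", "mail", "maps", "browser", "mail", "browser"]

# First-order Markov model: count transitions app_i -> app_{i+1}.
transitions = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    transitions[prev][nxt] += 1

def predict_next(current_app, k=3):
    """Rank candidate apps by estimated transition probability from current_app."""
    counts = transitions[current_app]
    total = sum(counts.values()) or 1
    return [(app, n / total) for app, n in counts.most_common(k)]

print(predict_next("mail"))  # e.g. [('browser', 0.666...), ('maps', 0.333...)]
```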

    Usability, Efficiency and Security of Personal Computing Technologies

    New personal computing technologies such as smartphones and personal fitness trackers are widely integrated into users' lifestyles. Users possess a wide range of skills, attributes, and backgrounds. It is important to understand user technology practices to ensure that new designs are usable and productive. Conversely, it is important to leverage our understanding of user characteristics to optimize the efficiency and effectiveness of new technology. Our work initially focused on studying older users and personal fitness tracker users. We applied the insights from these investigations to develop new techniques that improve user security protections and computational efficiency while also enhancing the user experience. We propose that by increasing the usability, efficiency, and security of personal computing technology, users will enjoy greater privacy protections along with greater enjoyment of their personal computing devices. Our first project resulted in an improved authentication system for older users based on familiar facial images. Our investigation revealed that older users are often challenged by traditional text passwords, resulting in decreased technology use or less-than-optimal password practices. Our graphical password-based system relies on memorable images from the user's personal past. Our usability study demonstrated that this system was easy to use, enjoyable, and fast, and we show that the technique is extendable to smartphones. Personal fitness trackers are very popular devices, often worn by users all day. Our personal fitness tracker investigation provides the first quantitative baseline of usage patterns with this device. By exploring public data, we discerned real-world user motivations, reliability concerns, activity levels, and fitness-related socialization patterns. This knowledge lends insight into active user practices. Personal user movement data is captured by sensors and then analyzed to provide benefits to the user. The dynamic time warping technique enables comparison of data sequences of unequal length and sequences containing events at offset times; existing techniques target short data sequences. Our Phase-aware Dynamic Time Warping algorithm focuses on a class of sinusoidal user movement patterns, resulting in improved efficiency over existing methods. Lastly, we address user data privacy concerns in an environment where user data increasingly flows to manufacturers' remote cloud servers for analysis. Our secure computation technique protects the user's privacy while data is in transit and while resident on cloud computing resources, and it also protects important data on cloud servers from exposure to individual users.
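    The Phase-aware Dynamic Time Warping algorithm itself is not given in the abstract; as a reference point, the following is a minimal sketch of the classic dynamic time warping baseline it improves upon, showing why phase-offset sinusoidal traces still compare as similar. The traces are synthetic stand-ins.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible alignments.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two sinusoidal movement traces with a phase offset align closely under DTW...
t = np.linspace(0, 2 * np.pi, 100)
print(dtw_distance(np.sin(t), np.sin(t + 0.5)))  # small
# ...whereas a flat trace does not.
print(dtw_distance(np.sin(t), np.zeros(100)))    # much larger
```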

    Integrating passive ubiquitous surfaces into human-computer interaction

    Mobile technologies enable people to interact with computers ubiquitously. This dissertation investigates how ordinary, ubiquitous surfaces can be integrated into human-computer interaction to extend the interaction space beyond the edge of the display. It turns out that acoustic and tactile features generated during an interaction can be combined to identify input events, the user, and the surface. In addition, it is shown that a heterogeneous distribution of different surfaces is particularly suitable for realizing versatile interaction modalities. However, privacy concerns must be considered when selecting sensors, and context can be crucial in determining whether and what interaction to perform.
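    The abstract's central claim, that acoustic and tactile features can be combined to identify input events, suggests a straightforward feature-fusion pipeline. The sketch below is a generic illustration under assumed features (coarse FFT magnitudes of a microphone window plus accelerometer statistics) and stand-in random data; it is not the dissertation's actual sensing pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def fused_features(audio_win, accel_win):
    """Concatenate acoustic (spectral) and tactile (vibration) features."""
    spec = np.abs(np.fft.rfft(audio_win))[:32]  # coarse spectrum of the contact sound
    tact = [accel_win.mean(), accel_win.std(), np.ptp(accel_win)]  # vibration stats
    return np.concatenate([spec / (spec.max() + 1e-9), tact])

# Hypothetical labeled windows: 0 = knock, 1 = swipe (random stand-in data).
rng = np.random.default_rng(1)
X = np.array([fused_features(rng.normal(size=256), rng.normal(size=64))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))  # predicted input-event classes
```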

    From wearable towards epidermal computing : soft wearable devices for rich interaction on the skin

    Human skin provides a large, always-available, and easy-to-access real estate for interaction. Recent advances in new materials, electronics, and human-computer interaction have led to the emergence of electronic devices that reside directly on the user's skin. These conformal devices, referred to as Epidermal Devices, have mechanical properties compatible with human skin: they are very thin, often thinner than a human hair; they deform elastically when the body is moving; and they stretch with the user's skin. Firstly, this thesis provides a conceptual understanding of Epidermal Devices in the HCI literature. We compare and contrast them with other technical approaches that enable novel on-skin interactions, and then, through a multi-disciplinary analysis of Epidermal Devices, we identify the design goals and challenges that need to be addressed to advance this emerging research area in HCI. Following this, our fundamental empirical research investigated how epidermal devices of different rigidity levels affect passive and active tactile perception. In general, a correlation was found between device rigidity and tactile sensitivity thresholds as well as roughness discrimination ability. Based on these findings, we derive design recommendations for realizing epidermal devices. Secondly, this thesis contributes novel Epidermal Devices that enable rich on-body interaction. SkinMarks contributes to the fabrication and design of novel Epidermal Devices that are highly skin-conformal and enable touch, squeeze, and bend sensing with co-located visual output. These devices can be deployed on highly challenging body locations, enabling novel interaction techniques and expanding the design space of on-body interaction. Multi-Touch Skin enables high-resolution multi-touch input on the body. We present the first non-rectangular and high-resolution multi-touch sensor overlays for use on skin and introduce a design tool that generates such sensors in custom shapes and sizes. Empirical results from two technical evaluations confirm that the sensor achieves a high signal-to-noise ratio on the body under various grounding conditions and has high spatial accuracy even when subjected to strong deformations. Thirdly, because Epidermal Devices are in contact with the skin, they offer opportunities for sensing rich physiological signals from the body. To leverage this unique property, this thesis presents rapid fabrication and computational design techniques for realizing Multi-Modal Epidermal Devices that can measure multiple physiological signals from the human body, including ECG (Electrocardiogram), EMG (Electromyogram), and EDA (Electro-Dermal Activity). We also contribute a computational design and optimization method, based on underlying human anatomical models, that creates optimized device designs providing an optimal trade-off between physiological signal acquisition capability and device size. The graphical tool allows designers to easily specify design preferences and to visually analyze the generated designs in real time, enabling designer-in-the-loop optimization. Experimental results show high quantitative agreement between the optimizer's predictions and experimentally collected physiological data. Finally, taking a multi-disciplinary perspective, we outline a roadmap for future research in this area by highlighting the next important steps, opportunities, and challenges. Taken together, this thesis contributes towards a holistic understanding of Epidermal Devices: it provides an empirical and conceptual understanding as well as technical insights through contributions in DIY (Do-It-Yourself), rapid fabrication, and computational design techniques.
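    The computational design method is described only at a high level; a size-versus-signal trade-off of this kind is often posed as a scalarized objective over candidate layouts. Everything in the sketch below (the quality and size models, the weight, the candidate spacings) is an assumed stand-in for illustration, not the thesis's optimizer.

```python
import numpy as np

# Hypothetical candidate inter-electrode spacings (cm) for an EMG patch.
spacings = np.linspace(1.0, 6.0, 50)

def signal_quality(d):
    """Assumed model: acquisition quality saturates with electrode distance."""
    return 1.0 - np.exp(-d / 2.0)

def device_size(d):
    """Assumed model: normalized patch area grows with spacing."""
    return (d / spacings.max()) ** 2

alpha = 0.5  # designer preference: weight on compactness vs. signal quality
score = signal_quality(spacings) - alpha * device_size(spacings)
best = spacings[np.argmax(score)]
print(f"best spacing under this trade-off: {best:.2f} cm")
```

    Sweeping the preference weight (alpha here) is one simple way a graphical tool could let a designer explore the trade-off interactively.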

    Personalized Interaction with High-Resolution Wall Displays

    Falling hardware prices and an increasing openness towards more diverse interaction modalities have made very large interactive vertical displays feasible, and consequently, applications in settings such as visualization, education, and meeting support have been demonstrated successfully. Their size makes wall displays inherently suited to multi-user interaction. At the same time, we can assume that access to personal data and settings, and thus personalized interaction, will remain essential in most use cases. In most current desktop and mobile user interfaces, access is regulated via an initial login, and the complete user interface is then personalized to this user: access to personal data, configurations, and communications all assume a single user per screen. When multiple people use one screen, this is not a feasible solution and alternatives must be found.
    Therefore, this thesis addresses the research question: How can we provide personalized interfaces in the context of multi-user interaction with wall displays? The scope spans personalized interaction both close to the wall (using touch as the input modality) and further away (using mobile devices). Technical solutions that identify users at each interaction can replace logins and enable personalized interaction for multiple users at once. This thesis explores two alternative means of user identification: tracking users with RGB+depth cameras and locating users' mobile devices via ultrasound positioning. Building on this, techniques that support personalized interaction using personal mobile devices are proposed. In the first contribution on interaction, HyDAP, we examine pointing from the perspective of moving users; in the second, SleeD, we propose an arm-worn device that facilitates access to private data and personalized interface elements. Additionally, the work contributes insights into the practical implications of personalized interaction at wall displays: we present a qualitative study that analyzes interaction using the multi-user cooperative game Miners as an application case, finding awareness and occlusion issues. The final contribution is GIAnT, a corresponding analysis toolkit that visualizes users' movements, touch interactions, and gaze points when interacting with wall displays, and thus allows fine-grained investigation of the interactions.
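    Ultrasound positioning of mobile devices rests on time-of-flight measurements to fixed receivers. The multilateration sketch below is a generic illustration (nonlinear least squares over assumed receiver positions around a wall display), not the thesis's implementation; the geometry and the synthetic device position are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s at room temperature

# Assumed positions (m) of ultrasound receivers mounted around the wall display.
receivers = np.array([[0.0, 0.0, 2.0], [5.0, 0.0, 2.0],
                      [0.0, 3.0, 2.0], [5.0, 3.0, 2.0]])

def locate(tof):
    """Estimate a device position from time-of-flight (s) to each receiver."""
    dists = SPEED_OF_SOUND * np.asarray(tof)
    residual = lambda p: np.linalg.norm(receivers - p, axis=1) - dists
    return least_squares(residual, x0=np.array([2.5, 1.5, 1.5])).x

# Synthetic check: a device held at (2.0, 1.0, 1.2) m in front of the wall.
true_pos = np.array([2.0, 1.0, 1.2])
tof = np.linalg.norm(receivers - true_pos, axis=1) / SPEED_OF_SOUND
print(locate(tof))  # ~ [2.0, 1.0, 1.2]
```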

    Ubiquitous Computing

    The aim of this book is to give a treatment of the actively developed domain of Ubiquitous computing. Originally proposed by Mark D. Weiser, the concept of Ubiquitous computing enables real-time global sensing, context-aware information retrieval, multi-modal interaction with the user, and enhanced visualization capabilities. In effect, Ubiquitous computing environments give us fundamentally new abilities to observe and interact with our habitat at any time and from anywhere. In this domain, researchers are confronted with many foundational, technological, and engineering issues that were not known before, and detailed cross-disciplinary coverage of these issues is needed for further progress and a widening of the application range. This book collects twelve original works by researchers from eleven countries, clustered into four sections: Foundations, Security and Privacy, Integration and Middleware, and Practical Applications.

    Big Data Security (Volume 3)

    After a short description of the key concepts of big data, the book explores the secrecy and security threats posed especially by cloud-based data storage. It delivers conceptual frameworks and models along with case studies of recent technology.
    • 
