4,496 research outputs found

    Intimate interfaces in action: assessing the usability and subtlety of EMG-based motionless gestures

    Mobile communication devices, such as mobile phones and networked personal digital assistants (PDAs), allow users to be constantly connected and to communicate anywhere and at any time, often resulting in personal and private communication taking place in public spaces. This private-public contrast can be problematic. As a remedy, we promote intimate interfaces: interfaces that allow subtle and minimal mobile interaction without disrupting the surrounding environment. In particular, motionless gestures sensed through the electromyographic (EMG) signal have been proposed as a solution for subtle input in a mobile context. In this paper we present an expansion of the work on EMG-based motionless gestures, including (1) a novel study of their usability in a mobile context for controlling a realistic, multimodal interface and (2) a formal assessment of how noticeable they are to informed observers. Experimental results confirm that subtle gestures can be profitably used within a multimodal interface and that it is difficult for observers to guess when someone is performing a gesture, confirming the hypothesis of subtlety.
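
    The abstract does not describe the sensing pipeline, but the core idea, detecting a subtle isometric contraction from the EMG signal envelope rather than from visible motion, can be sketched. In the Python sketch below, the sampling rate, smoothing window, threshold, and function names are all assumptions for illustration only.

```python
# Minimal sketch of EMG onset detection for a motionless (isometric)
# gesture. Sampling rate, window size, and thresholds are illustrative
# assumptions, not the pipeline used in the paper.
import numpy as np

def emg_envelope(raw: np.ndarray, fs: int = 1000, win_ms: int = 100) -> np.ndarray:
    """Remove the DC offset, rectify, and smooth with a moving average."""
    rectified = np.abs(raw - raw.mean())
    win = max(1, int(fs * win_ms / 1000))
    return np.convolve(rectified, np.ones(win) / win, mode="same")

def detect_gesture(raw: np.ndarray, fs: int = 1000,
                   k: float = 3.0, min_ms: int = 200) -> bool:
    """Flag a subtle contraction: the envelope must exceed the resting
    baseline by k standard deviations for at least min_ms. Assumes the
    first second of the recording is rest."""
    env = emg_envelope(raw, fs)
    baseline = env[:fs]
    threshold = baseline.mean() + k * baseline.std()
    run = longest = 0
    for above in env > threshold:
        run = run + 1 if above else 0
        longest = max(longest, run)
    return longest >= int(fs * min_ms / 1000)
```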

    Enhancing Usability, Security, and Performance in Mobile Computing

    We have witnessed the prevalence of smart devices in every aspect of human life. However, the ever-growing number of smart devices presents significant challenges in terms of usability, security, and performance. First, we need to design new interfaces to improve device usability, which has been neglected during the rapid shift from hand-held mobile devices to wearables. Second, we need to protect smart devices, with their abundance of private data, against unauthorized users. Last, new applications with compute-intensive tasks demand the integration of emerging mobile backend infrastructure. This dissertation focuses on addressing these challenges. First, we present GlassGesture, a system that improves the usability of Google Glass through a head-gesture user interface with gesture recognition and authentication. We accelerate recognition by employing a novel similarity search scheme, and improve authentication performance by applying new features of head movements in an ensemble learning method. As a result, GlassGesture achieves 96% gesture recognition accuracy. Furthermore, GlassGesture accepts authorized users in nearly 92% of trials and rejects attackers in nearly 99% of trials. Next, we investigate authentication between a smartphone and a paired smartwatch. We design and implement WearLock, a system that uses one's smartwatch to unlock one's smartphone via acoustic tones. We build an acoustic modem with sub-channel selection and adaptive modulation, which generates modulated acoustic signals to maximize the unlocking success rate against ambient noise. We leverage the motion similarities of the two devices to eliminate unnecessary unlocking, and we offload heavy computation tasks from the smartwatch to the smartphone to shorten response time and save energy. The acoustic modem achieves a low bit error rate (BER) of 8%. Compared to traditional manual entry of personal identification numbers (PINs), WearLock not only automates the unlocking but also speeds it up by at least 18%. Last, we consider low-latency video analytics on mobile devices, leveraging emerging mobile backend infrastructure. We design and implement LAVEA, a system that offloads computation from mobile clients to edge nodes to accomplish computation-intensive tasks closer to users in a timely manner. We formulate an optimization problem for offloading task selection and prioritize offloading requests received at the edge node to minimize response time. We design and compare various task placement schemes for inter-edge collaboration to further improve overall response time. Our results show that the client-edge configuration achieves a speedup ranging from 1.3x to 4x over running solely on the client, and 1.2x to 1.7x over the client-cloud configuration.
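
    The abstract mentions a "novel similarity search scheme" for gesture recognition without naming it. As an assumed illustration of the underlying matching problem only, the sketch below classifies a motion trace by nearest-neighbor dynamic time warping (DTW), a common baseline for this kind of template matching; it is not the GlassGesture algorithm.

```python
# Illustrative nearest-neighbor matching of head-motion traces using
# plain dynamic time warping (DTW); shown only as a common baseline.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two 1-D motion traces (e.g., gyroscope yaw)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def classify(trace: np.ndarray, templates: dict) -> str:
    """Return the label of the stored template closest to the trace."""
    return min(templates, key=lambda label: dtw_distance(trace, templates[label]))

templates = {"nod": np.array([0.0, 1.0, 2.0, 1.0, 0.0]),
             "shake": np.array([0.0, -1.0, 1.0, -1.0, 0.0])}
print(classify(np.array([0.0, 1.0, 2.0, 2.0, 1.0, 0.0]), templates))  # nod
```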

    Enabling mobile microinteractions

    While much attention has been paid to the usability of desktop computers, mobile computers are quickly becoming the dominant platform. Because mobile computers may be used in nearly any situation, including while the user is actually in motion or performing other tasks, interfaces designed for stationary use may be inappropriate, and alternative interfaces should be considered. In this dissertation I consider the idea of microinteractions: interactions with a device that take less than four seconds to initiate and complete. Microinteractions are desirable because they may minimize interruption; that is, they allow for a tiny burst of interaction with a device so that the user can quickly return to the task at hand. My research concentrates on methods for applying microinteractions through wrist-based interaction. I consider two modalities for this interaction: touchscreens and motion-based gestures. In the case of touchscreens, I consider the interface implications of making touchscreen watches usable with the finger, instead of the usual stylus, and investigate users' performance with a round touchscreen. For gesture-based interaction, I present a tool, MAGIC, for designing gesture-based interactive systems, and detail the evaluation of the tool.
    Ph.D. Committee Chair: Starner, Thad; Committee Members: Abowd, Gregory; Isbell, Charles; Landay, James; McIntyre, Blair

    Design and recognition of microgestures for always-available input

    Gestural user interfaces for computing devices most commonly require the user to have at least one hand free to interact with the device, for example, moving a mouse, touching a screen, or performing mid-air gestures. Consequently, users find it difficult to operate computing devices while holding or manipulating everyday objects. This prevents users from interacting with the digital world during a significant portion of their everyday activities, such as using tools in the kitchen or workshop, carrying items, or working out with sports equipment. This thesis pushes the boundaries towards the bigger goal of enabling always-available input. Microgestures have been recognized for their potential to facilitate direct and subtle interactions. However, it remains an open question how to interact with computing devices using gestures when both of the user's hands are occupied holding everyday objects. We take a holistic approach and focus on three core contributions: i) To understand end-user preferences, we present an empirical analysis of users' choice of microgestures when holding objects of diverse geometries. Instead of designing a gesture set for a specific object or geometry, and in order to identify gestures that generalize, this thesis leverages the taxonomy of grasp types established by prior research. ii) We tackle the critical problem of avoiding false activation by introducing a novel gestural input concept that leverages a single-finger movement which stands out from everyday finger motions during holding and manipulating objects. Through a data-driven approach, we also systematically validate the concept's robustness against different everyday actions. iii) While full sensor coverage of the user's hand would allow detailed hand-object interaction, minimal instrumentation is desirable for real-world use. This thesis addresses the problem of identifying sparse sensor layouts. We present the first rapid computational method, along with a GUI-based design tool that enables iterative design based on the designer's high-level requirements. Furthermore, we demonstrate that minimal form-factor devices, like smart rings, can be used to effectively detect microgestures in hands-free and busy scenarios. Overall, the presented findings serve as both conceptual and technical foundations for enabling interaction with computing devices wherever and whenever users need them.
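
    The abstract does not detail the rapid computational method for finding sparse sensor layouts. One generic way to pose the problem is greedy forward selection of sensor sites under a placement budget, sketched below; the site names and toy scoring function are invented stand-ins and not the thesis's method.

```python
# Illustrative greedy forward selection of sensor sites under a budget.
# The site names and toy score below are invented stand-ins for a real
# cross-validated recognition-accuracy measure.
from typing import Callable, FrozenSet, List

def greedy_layout(candidates: List[str],
                  score: Callable[[FrozenSet[str]], float],
                  budget: int) -> FrozenSet[str]:
    """Repeatedly add the sensor site that most improves score()."""
    chosen: FrozenSet[str] = frozenset()
    for _ in range(budget):
        remaining = [c for c in candidates if c not in chosen]
        if not remaining:
            break
        best = max(remaining, key=lambda c: score(chosen | {c}))
        chosen = chosen | {best}
    return chosen

sites = ["thumb_tip", "index_mid", "index_base", "wrist", "ring_finger"]
toy_score = lambda layout: sum(1.0 if "index" in s else 0.3 for s in layout)
print(greedy_layout(sites, toy_score, budget=2))  # both index sites win
```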

    Implicit Smartphone User Authentication with Sensors and Contextual Machine Learning

    Authentication of smartphone users is important because a great deal of sensitive data is stored on the smartphone, and the smartphone is also used to access various cloud data and services. However, smartphones are easily stolen or co-opted by an attacker. Beyond the initial login, it is highly desirable to re-authenticate end-users who are continuing to access security-critical services and data. Hence, this paper proposes a novel system for implicit, continuous authentication of the smartphone user based on behavioral characteristics, leveraging the sensors already ubiquitously built into smartphones. We propose novel context-based authentication models to differentiate the legitimate smartphone owner from other users. We systematically show how to achieve high authentication accuracy with different design alternatives in sensor and feature selection, machine learning techniques, context detection, and multiple devices. Our system achieves excellent authentication performance, with 98.1% accuracy, negligible system overhead, and less than 2.4% battery consumption.
    Comment: Published at the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN) 2017. arXiv admin note: substantial text overlap with arXiv:1703.0352
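
    The paper's models are not specified in the abstract, but the general approach, windowed sensor features feeding a binary owner-versus-others classifier, can be sketched. Everything below (window length, feature set, classifier choice, synthetic data) is an assumption for illustration.

```python
# Illustrative owner-vs-others classification from windowed motion data.
# Window length, features, classifier, and the synthetic data are all
# assumptions; the paper's context-based models and features are richer.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc: np.ndarray, fs: int = 50, win_s: int = 2) -> np.ndarray:
    """Split an (N, 3) accelerometer stream into fixed windows and compute
    simple per-axis statistics plus magnitude statistics."""
    step = fs * win_s
    feats = []
    for start in range(0, len(acc) - step + 1, step):
        w = acc[start:start + step]
        mag = np.linalg.norm(w, axis=1)
        feats.append(np.concatenate([w.mean(0), w.std(0), [mag.mean(), mag.std()]]))
    return np.array(feats)

# Synthetic stand-in data: the owner and other users move differently.
rng = np.random.default_rng(0)
owner = window_features(rng.normal(0.0, 1.0, (3000, 3)))
other = window_features(rng.normal(0.5, 1.5, (3000, 3)))
X = np.vstack([owner, other])
y = np.array([1] * len(owner) + [0] * len(other))
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
# A deployed system would re-authenticate whenever this probability drops.
print(clf.predict_proba(owner[:3])[:, 1])
```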
