
    Around-Body Interaction: Leveraging Limb Movements for Interacting in a Digitally Augmented Physical World

    Recent technological advances have made head-mounted displays (HMDs) smaller and untethered, fostering the vision of ubiquitous interaction with information in a digitally augmented physical world. Beyond finger gestures, which are often unintuitive, three main types of input for such devices have emerged so far: 1) touch input on the frame of the device, 2) touch input on accessories (controllers), and 3) voice input. While these techniques have advantages and disadvantages depending on the user's current situation, they largely ignore the skills and dexterity that we show when interacting with the real world: throughout our lives, we have trained extensively to use our limbs to interact with and manipulate the physical world around us. This thesis explores how the skills and dexterity of our upper and lower limbs, acquired and trained in interacting with the real world, can be transferred to interaction with HMDs. It develops the vision of around-body interaction, in which we use the space around our body, defined by the reach of our limbs, for fast, accurate, and enjoyable interaction with such devices. This work contributes four interaction techniques, two for the upper limbs and two for the lower limbs: the first contribution shows how the proximity between our head and hand can be used to interact with HMDs. The second contribution extends upper-limb interaction to multiple users and illustrates how the registration of augmented information in the real world can support cooperative use cases. The third contribution shifts the focus to the lower limbs and discusses how foot taps can be leveraged as an input modality for HMDs. The fourth contribution presents how lateral shifts of the walking path can be exploited for mobile, hands-free interaction with HMDs while walking. Comment: thesis
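    The first of these contributions, using head-hand proximity as an input dimension, is easy to illustrate in code. The following is a minimal Python sketch that maps the distance between tracked head and hand positions to coarse interaction zones; the position values, zone names, and the 30 cm threshold are illustrative assumptions, not taken from the thesis.

```python
import math

# Hypothetical tracked positions (in metres) from an HMD's head and hand
# trackers; the values, zone names, and 30 cm threshold are illustrative
# assumptions, not taken from the thesis.
HEAD_POS = (0.0, 1.70, 0.0)
HAND_POS = (0.05, 1.55, 0.20)

PROXIMITY_THRESHOLD = 0.30  # metres: hand within 30 cm of head counts as "near"


def proximity_zone(head, hand, threshold=PROXIMITY_THRESHOLD):
    """Map the head-hand distance to a coarse interaction zone."""
    d = math.dist(head, hand)  # Euclidean distance between the 3-D points
    if d < threshold:
        return "near"   # e.g., reveal detail or confirm a selection
    if d < 2 * threshold:
        return "mid"    # e.g., show a preview
    return "far"        # e.g., idle, no interaction


print(proximity_zone(HEAD_POS, HAND_POS))  # -> near
```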

    User interface guidelines for the control of interactive television systems via smart phone applications

    A growing number of smart phone applications allow users to control their television, set-top box, or other entertainment devices, yet the success of these applications has been limited. Based on findings from media studies in Austria and France focusing on how people currently use their TV and iTV systems and associated devices, this article describes recommendations for the design of a smart phone application enabling users to control Internet Protocol Television (IPTV) systems, including all connected entertainment devices. Recommendations include the need to let users control devices that are related to the IPTV experience (not only the set-top box or television set) and a focus on usage scenarios such as listening to music, enjoying a movie, or controlling the connected home. Based on similarities and differences found in the two samples, future smart phone applications for controlling TV will only succeed if they provide meaningful functionalities that satisfy (varying) user needs, support personalisation and personal usage, and respect the limitations of mobile phones with respect to possible parallel activities.

    TicTacToes: Assessing Toe Movements as an Input Modality

    From carrying grocery bags to holding onto handles on the bus, there are many situations where one or both hands are busy, hindering the vision of ubiquitous interaction with technology. Voice commands, as a popular hands-free alternative, struggle with ambient noise and privacy issues. As an alternative approach, research has explored movements of various body parts (e.g., head, arms) as input modalities, with foot-based techniques proving particularly suitable for hands-free interaction. Whereas previous research only considered the movement of the foot as a whole, in this work we argue that our toes offer further degrees of freedom that can be leveraged for interaction. To explore the viability of toe-based interaction, we contribute the results of a controlled experiment with 18 participants assessing the impact of five factors on the accuracy, efficiency, and user experience of such interfaces. Based on the findings, we provide design recommendations for future toe-based interfaces. Comment: To appear in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23), April 23-28, 2023, Hamburg, Germany. ACM, New York, NY, USA, 17 pages.
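    To make the idea of toe-level input concrete, here is a minimal Python sketch of detecting discrete toe taps from a stream of per-toe pressure samples. The sensor layout, threshold, and debounce window are illustrative assumptions, not the apparatus used in the paper.

```python
# Minimal sketch of toe-tap detection from a pressure stream, assuming one
# normalised pressure sample per toe per frame; threshold and debounce
# values are illustrative assumptions.

TAP_THRESHOLD = 0.6   # pressure above which a toe counts as pressed
DEBOUNCE_FRAMES = 3   # frames a toe must stay released before a new tap


def detect_taps(frames, n_toes=5):
    """Yield (frame_index, toe_index) for each rising edge above threshold."""
    released_for = [DEBOUNCE_FRAMES] * n_toes  # frames since each toe released
    for i, frame in enumerate(frames):
        for toe, pressure in enumerate(frame):
            if pressure >= TAP_THRESHOLD:
                if released_for[toe] >= DEBOUNCE_FRAMES:
                    yield (i, toe)          # rising edge: register a tap
                released_for[toe] = 0
            else:
                released_for[toe] += 1


# Example: big toe (index 0) taps twice, second toe (index 1) taps once.
stream = [
    [0.1, 0.1, 0.0, 0.0, 0.0],
    [0.8, 0.1, 0.0, 0.0, 0.0],  # tap on toe 0
    [0.2, 0.1, 0.0, 0.0, 0.0],
    [0.1, 0.9, 0.0, 0.0, 0.0],  # tap on toe 1
    [0.1, 0.2, 0.0, 0.0, 0.0],
    [0.1, 0.1, 0.0, 0.0, 0.0],
    [0.7, 0.1, 0.0, 0.0, 0.0],  # second tap on toe 0
]
print(list(detect_taps(stream)))  # -> [(1, 0), (3, 1), (6, 0)]
```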

    Naval Reserve support to Information Operations Warfighting

    Since the mid-1990s, the Fleet Information Warfare Center (FIWC) has led the Navy's Information Operations (IO) support to the Fleet. Within the FIWC manning structure, there are in total 36 officer and 84 enlisted Naval Reserve billets, manned at approximately 75 percent and located in the Norfolk and San Diego Naval Reserve Centers. These Naval Reserve Force personnel could provide support to FIWC above and beyond what they now contribute, specifically in the areas of Computer Network Operations, Psychological Operations, Military Deception, and Civil Affairs. Historically, personnel conducting IO were primarily reservists and civilians in uniform, with regular military officers being by far the minority. The Naval Reserve Force has the personnel to provide skilled IO operators, but the lack of an effective manning document and training plans is hindering their opportunity to enhance FIWC's capabilities in full-spectrum IO. This research investigates the skill requirements of personnel in IO to verify that the Naval Reserve Force has the talent base for IO support, and the feasibility of their expanded use in IO. http://archive.org/details/navalreservesupp109451098

    From wearable towards epidermal computing: soft wearable devices for rich interaction on the skin

    Human skin provides a large, always available, and easy-to-access real estate for interaction. Recent advances in new materials, electronics, and human-computer interaction have led to the emergence of electronic devices that reside directly on the user's skin. These conformal devices, referred to as Epidermal Devices, have mechanical properties compatible with human skin: they are very thin, often thinner than a human hair; they deform elastically when the body is moving, and stretch with the user's skin. Firstly, this thesis provides a conceptual understanding of Epidermal Devices in the HCI literature. We compare and contrast them with other technical approaches that enable novel on-skin interactions. Then, through a multi-disciplinary analysis of Epidermal Devices, we identify the design goals and challenges that need to be addressed to advance this emerging research area in HCI. Following this, our fundamental empirical research investigated how Epidermal Devices of different rigidity levels affect passive and active tactile perception. Generally, a correlation was found between device rigidity and tactile sensitivity thresholds as well as roughness discrimination ability. Based on these findings, we derive design recommendations for realizing Epidermal Devices. Secondly, this thesis contributes novel Epidermal Devices that enable rich on-body interaction. SkinMarks contributes to the fabrication and design of novel Epidermal Devices that are highly skin-conformal and enable touch, squeeze, and bend sensing with co-located visual output. These devices can be deployed on highly challenging body locations, enabling novel interaction techniques and expanding the design space of on-body interaction. Multi-Touch Skin enables high-resolution multi-touch input on the body. We present the first non-rectangular and high-resolution multi-touch sensor overlays for use on skin and introduce a design tool that generates such sensors in custom shapes and sizes. Empirical results from two technical evaluations confirm that the sensor achieves a high signal-to-noise ratio on the body under various grounding conditions and has high spatial accuracy even when subjected to strong deformations. Thirdly, because Epidermal Devices are in contact with the skin, they offer opportunities for sensing rich physiological signals from the body. To leverage this unique property, this thesis presents rapid fabrication and computational design techniques for realizing Multi-Modal Epidermal Devices that can measure multiple physiological signals from the human body. Devices fabricated through these techniques can measure ECG (Electrocardiogram), EMG (Electromyogram), and EDA (Electro-Dermal Activity). We also contribute a computational design and optimization method, based on underlying human anatomical models, that creates optimized device designs providing an optimal trade-off between physiological signal acquisition capability and device size. The graphical tool allows designers to easily specify design preferences and to visually analyze the generated designs in real time, enabling designer-in-the-loop optimization. Experimental results show high quantitative agreement between the predictions of the optimizer and experimentally collected physiological data. Finally, taking a multi-disciplinary perspective, we outline a roadmap for future research in this area by highlighting the next important steps, opportunities, and challenges.
    Taken together, this thesis contributes towards a holistic understanding of Epidermal Devices: it provides an empirical and conceptual understanding as well as technical insights through contributions in DIY (Do-It-Yourself), rapid fabrication, and computational design techniques.
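    The trade-off between signal acquisition capability and device size described above can be illustrated with a toy selection procedure. The Python sketch below scores hypothetical candidate designs with a simple weighted objective under a size budget; the real optimizer operates on anatomical models, and all names and numbers here are invented for illustration.

```python
# Toy illustration of the signal-quality vs. device-size trade-off.
# Each candidate design is (name, estimated signal quality in [0, 1],
# area in cm^2); all values are invented for illustration.

candidates = [
    ("compact", 0.62,  9.0),
    ("medium",  0.81, 16.0),
    ("large",   0.90, 30.0),
]

MAX_AREA = 25.0  # designer-specified size budget (assumption)


def score(quality, area, weight=0.7, max_area=MAX_AREA):
    """Weighted trade-off: favour signal quality, penalise relative size."""
    return weight * quality - (1 - weight) * (area / max_area)


# Filter out designs exceeding the size budget, then pick the best score.
feasible = [c for c in candidates if c[2] <= MAX_AREA]
best = max(feasible, key=lambda c: score(c[1], c[2]))
print(best)  # -> ('medium', 0.81, 16.0)
```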

    Understanding Mode and Modality Transfer in Unistroke Gesture Input

    Unistroke gestures are an attractive input method with an extensive research history, but one challenge with their usage is that the gestures are not always self-revealing. To help users gain expertise with these gestures, interaction designers often deploy a guided novice mode -- where users can rely on recognizing visual UI elements to perform a gestural command. Once a user knows the gesture and the associated command, they can perform it without guidance, thus relying on recall. The primary aim of my thesis is to obtain a comprehensive understanding of why, when, and how users transfer from guided modes or modalities to potentially more efficient, or novel, methods of interaction -- through symbolic-abstract unistroke gestures. The goal of my work is not only to study user behaviour as it moves from novice to more efficient interaction mechanisms, but also to extend the concept of intermodal transfer to different contexts. We garner this understanding by empirically evaluating three different use cases of mode and/or modality transitions. Leveraging marking menus, the first study investigates whether designers should force expertise transfer by penalizing use of the guided mode, in an effort to encourage use of the recall mode. Second, we investigate how well users can transfer skills between modalities, particularly when it is impractical to present guidance in the target or recall modality. Lastly, we assess how well users' pre-existing spatial knowledge of an input method (the QWERTY keyboard layout) transfers to performance in a new modality. Applying lessons from these three assessments, we segment intermodal transfer into three possible characterizations -- beyond the traditional novice-to-expert contextualization. This is followed by a series of implications and potential areas of future exploration arising from our work.
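    Although the thesis's own recognizers are not described here, the flavour of symbolic-abstract unistroke gesture recognition can be sketched with a nearest-template matcher in the spirit of $1-style recognizers: resample each stroke to a fixed number of points, normalise for position and scale, and pick the template with the smallest mean point distance. Everything below, including the tiny template set, is an illustrative assumption, not the thesis's method.

```python
import math

# Minimal nearest-template unistroke matcher. Strokes are lists of (x, y).

N = 32  # points per stroke after resampling


def resample(points, n=N):
    """Resample a stroke to n evenly spaced points along its path."""
    dists = [math.dist(points[i - 1], points[i]) for i in range(1, len(points))]
    step = sum(dists) / (n - 1)
    out, acc = [points[0]], 0.0
    pts = list(points)
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step and d > 0:
            t = (step - acc) / d  # fraction along the current segment
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)      # continue walking from the inserted point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:           # guard against floating-point round-off
        out.append(pts[-1])
    return out[:n]


def normalise(points):
    """Translate to the centroid and scale to a unit bounding box."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    w = (max(x for x, _ in pts) - min(x for x, _ in pts)) or 1.0
    h = (max(y for _, y in pts) - min(y for _, y in pts)) or 1.0
    s = max(w, h)
    return [(x / s, y / s) for x, y in pts]


def recognise(stroke, templates):
    """Return the template name with the smallest mean point distance."""
    probe = normalise(resample(stroke))

    def dist_to(template):
        pts = normalise(resample(template))
        return sum(math.dist(a, b) for a, b in zip(probe, pts)) / N

    return min(templates, key=lambda name: dist_to(templates[name]))


templates = {
    "line":  [(0, 0), (10, 0)],
    "caret": [(0, 0), (5, 8), (10, 0)],
}
print(recognise([(1, 1), (9, 2)], templates))  # -> line
```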

    Move, hold and touch: A framework for tangible gesture interactive systems

    Technology is spreading through our everyday world, and digital interaction beyond the screen, with real objects, lets us take advantage of our natural manipulative and communicative skills. Tangible gesture interaction takes advantage of these skills by bridging two popular domains in Human-Computer Interaction: tangible interaction and gestural interaction. In this paper, we present the Tangible Gesture Interaction Framework (TGIF) for classifying and guiding work in this field. We propose a classification of gestures according to three relationships with objects: move, hold, and touch. Following this classification, we analyzed previous work in the literature to obtain guidelines and common practices for designing and building new tangible gesture interactive systems. We describe four interactive systems as application examples of the TGIF guidelines, and we discuss the descriptive, evaluative, and generative power of TGIF.
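    The move, hold, and touch relationships at the core of TGIF lend themselves to a small data-structure sketch. The Python below encodes the three relationships from the paper together with a naive compatibility check; the gesture examples and the matching logic are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ObjectRelation(Enum):
    """TGIF's three gesture-object relationships, as named in the paper."""
    MOVE = auto()   # the object itself is moved
    HOLD = auto()   # the object is held while the gesture is performed
    TOUCH = auto()  # the object is touched as part of the gesture


@dataclass
class TangibleGesture:
    name: str
    relations: set  # a gesture may combine relations, e.g. hold + move

    def is_compatible(self, sensed):
        """Naive matching: does the sensed relation set cover this gesture?"""
        return self.relations <= sensed


# Illustrative gesture vocabulary (examples invented, not from the paper).
shake = TangibleGesture("shake", {ObjectRelation.HOLD, ObjectRelation.MOVE})
tap = TangibleGesture("tap", {ObjectRelation.TOUCH})

sensed = {ObjectRelation.HOLD, ObjectRelation.MOVE}
print([g.name for g in (shake, tap) if g.is_compatible(sensed)])  # -> ['shake']
```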