3D Printing Magnetophoretic Displays
We present a pipeline for printing interactive and always-on magnetophoretic
displays using affordable Fused Deposition Modeling (FDM) 3D printers. Using
our pipeline, an end-user can convert the surface of a 3D shape into a matrix
of voxels. The generated model can be sent to an FDM 3D printer equipped with
an additional syringe-based injector. During the printing process, an oil and
iron powder-based liquid mixture is injected into each voxel cell, allowing the
appearance of the object, once printed, to be edited with external magnetic
sources. To achieve this, we modified the 3D printer's hardware and firmware.
We also developed a 3D editor to prepare printable models. We
demonstrate our pipeline with a variety of examples, including a printed
Stanford bunny with customizable appearances, a small espresso mug that can be
used as a post-it note surface, a board game figurine with a computationally
updated display, and a collection of flexible wearable accessories with
editable visuals.
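The pipeline's first step, converting a shape's surface into a matrix of voxels, is not detailed in the abstract; a minimal sketch of one plausible approach is to bin sampled surface points into integer grid cells (the function name and the point-sampling strategy are illustrative assumptions, not the authors' method):

```python
def surface_voxels(points, voxel_size):
    """Map sampled surface points to the set of voxel cells they occupy.

    `points` is an iterable of (x, y, z) samples on the shape's surface;
    each point is binned into an integer grid cell of edge `voxel_size`.
    """
    cells = set()
    for x, y, z in points:
        # Integer floor-division bins each coordinate into a grid index.
        cells.add((int(x // voxel_size),
                   int(y // voxel_size),
                   int(z // voxel_size)))
    return cells

# Example: four points on a unit square face with 0.5-unit voxels
pts = [(0.1, 0.1, 0.0), (0.9, 0.1, 0.0), (0.1, 0.9, 0.0), (0.9, 0.9, 0.0)]
print(len(surface_voxels(pts, 0.5)))  # 4 occupied cells
```

In a real pipeline each resulting cell would become one injectable voxel in the printable model.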
Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies
How can designers determine highly effective and intuitive gesture sets for interactive systems tailored to end users' preferences? A substantial body of knowledge is available on this topic, among which gesture elicitation studies stand out distinctively. In these studies, end users are invited to propose gestures for specific referents, which are the functions to control in an interactive system. The vast majority of gesture elicitation studies conclude with a consensus gesture set identified through a process of consensus or agreement analysis. However, the information about specific gesture sets determined for specific applications is scattered across a wide landscape of disconnected scientific publications, which makes it challenging for researchers and practitioners to effectively harness this body of knowledge. To address this challenge, we conducted a systematic literature review and examined a corpus of N=267 studies encompassing a total of 187,265 gestures elicited from 6,659 participants for 4,106 referents. To understand similarities in users' gesture preferences within this extensive dataset, we analyzed a sample of 2,304 gestures extracted from the studies identified in our literature review. Our approach consisted of (i) identifying the context of use represented by end users, devices, platforms, and gesture sensing technology, (ii) categorizing the referents, (iii) classifying the gestures elicited for those referents, and (iv) cataloging the gestures based on their representation and implementation modalities. Drawing from the findings of this review, we propose guidelines for conducting future end-user gesture elicitation studies.
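The agreement analysis such studies rely on is commonly quantified with the Vatavu-Wobbrock agreement rate AR(r) for a referent r. A minimal sketch (the function name and the use of string labels for similarity-grouped proposals are assumptions for illustration):

```python
from collections import Counter

def agreement_rate(proposals):
    """Vatavu-Wobbrock agreement rate AR(r) for one referent.

    `proposals` is a list of gesture labels, one per participant,
    where identical labels mean the proposals were judged similar.
    AR ranges from 0 (no two proposals agree) to 1 (full consensus).
    """
    n = len(proposals)
    if n < 2:
        return 0.0  # AR is undefined for fewer than two proposals
    # Sum of squared proportions of each group of identical proposals
    sq = sum((k / n) ** 2 for k in Counter(proposals).values())
    return (n / (n - 1)) * sq - 1 / (n - 1)

# Example: 10 participants; 6 propose "swipe", 3 "tap", 1 "circle"
props = ["swipe"] * 6 + ["tap"] * 3 + ["circle"]
print(round(agreement_rate(props), 3))  # 0.4
```

Referents with higher AR values are the ones for which a consensus gesture set can be most confidently recommended.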
A Closer Look into Recent Video-based Learning Research: A Comprehensive Review of Video Characteristics, Tools, Technologies, and Learning Effectiveness
People increasingly use videos on the Web as a source for learning. To
support this way of learning, researchers and developers are continuously
developing tools, proposing guidelines, analyzing data, and conducting
experiments. However, it is still not clear what characteristics a video should
have to be an effective learning medium. In this paper, we present a
comprehensive review of 257 articles on video-based learning for the period
from 2016 to 2021. One of the aims of the review is to identify the video
characteristics that have been explored by previous work. Based on our
analysis, we suggest a taxonomy which organizes the video characteristics and
contextual aspects into eight categories: (1) audio features, (2) visual
features, (3) textual features, (4) instructor behavior, (5) learner
activities, (6) interactive features (quizzes, etc.), (7) production style, and
(8) instructional design. Also, we identify four representative research
directions: (1) proposals of tools to support video-based learning, (2) studies
with controlled experiments, (3) data analysis studies, and (4) proposals of
design guidelines for learning videos. We find that the most explored
characteristics are textual features followed by visual features, learner
activities, and interactive features. Text of transcripts, video frames, and
images (figures and illustrations) are most frequently used by tools that
support learning through videos. The learner activity is heavily explored
through log files in data analysis studies, and interactive features have been
frequently scrutinized in controlled experiments. We complement our review by
contrasting research findings that investigate the impact of video
characteristics on learning effectiveness, report on tasks and technologies
used to develop tools that support learning, and summarize trends of design
guidelines to produce learning videos.
Brotate and Tribike: Designing Smartphone Control for Cycling
The more people commute by bicycle, the more cyclists use their smartphones
while cycling, compromising traffic safety. We have
designed, implemented and evaluated two prototypes for smartphone control
devices that do not require the cyclists to remove their hands from the
handlebars - the three-button device Tribike and the rotation-controlled
Brotate. The devices were the result of a user-centred design process where we
identified the key features needed for an on-bike smartphone control device. We
evaluated the devices in a biking exercise with 19 participants, where users
completed a series of common smartphone tasks. The study showed that Brotate
allowed for significantly more lateral control of the bicycle and both devices
reduced the cognitive load required to use the smartphone. Our work contributes
insights into designing interfaces for cycling.
Comment: 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '20), October 5--8, 2020, Oldenburg, Germany
Blending the Material and Digital World for Hybrid Interfaces
The development of digital technologies in the 21st century is progressing continuously and new device classes such as tablets, smartphones or smartwatches are finding their way into our everyday lives. However, this development also poses problems, as these prevailing touch and gestural interfaces often lack tangibility, take little account of haptic qualities and therefore require full attention from their users. Compared to traditional tools and analog interfaces, the human skills to experience and manipulate material in its natural environment and context remain unexploited. To combine the best of both, a key question is how it is possible to blend the material world and digital world to design and realize novel hybrid interfaces in a meaningful way. Research on Tangible User Interfaces (TUIs) investigates the coupling between physical objects and virtual data. In contrast, hybrid interfaces, which specifically aim to digitally enrich analog artifacts of everyday work, have not yet been sufficiently researched and systematically discussed.
Therefore, this doctoral thesis rethinks how user interfaces can provide useful digital functionality while maintaining their physical properties and familiar patterns of use in the real world. However, the development of such hybrid interfaces raises overarching research questions about their design: Which kinds of physical interfaces are worth exploring? What type of digital enhancement will improve existing interfaces? How can hybrid interfaces retain their physical properties while enabling new digital functions? What are suitable methods for exploring different designs? And how can technology-enthusiast users be supported in prototyping?
For a systematic investigation, the thesis builds on a design-oriented, exploratory, and iterative development process using digital fabrication methods and novel materials. As its main contribution, four specific research projects are presented that apply and discuss different visual and interactive augmentation principles in real-world applications. The applications range from digitally enhanced paper and interactive cords to visual watch strap extensions and novel prototyping tools for smart garments. While almost all of them integrate visual feedback and haptic input, none of them are built on rigid, rectangular pixel screens or use standard input modalities, as they all aim to reveal new design approaches. The dissertation shows how valuable it can be to rethink familiar, analog applications while thoughtfully extending them digitally. Finally, this thesis' extensive work of engineering versatile research platforms is accompanied by overarching conceptual work, user evaluations, technical experiments, and literature reviews.
VIMES: A Wearable Memory Assistance System for Automatic Information Retrieval
The advancement of artificial intelligence and wearable computing is triggering radical innovation in cognitive applications. In this work, we propose VIMES, an augmented reality-based memory assistance system that helps users recall declarative memory, such as whom they met and what they chatted about. Through a collaborative design process with 20 participants, we designed VIMES, a system that runs on smartglasses, takes first-person audio and video as input, and extracts personal profiles and event information to display on the embedded display or a smartphone. We performed an extensive evaluation with 50 participants to show the effectiveness of VIMES for memory recall. VIMES outperforms (90% memory accuracy) traditional methods such as self-recall (34%) while offering the best memory experience (Vividness, Coherence, and Visual Perspective all score over 4/5). The user study results show that most participants find VIMES useful (3.75/5) and easy to use (3.46/5).
Peer reviewed
Design Patterns for Situated Visualization in Augmented Reality
Situated visualization has become an increasingly popular research area in
the visualization community, fueled by advancements in augmented reality (AR)
technology and immersive analytics. Visualizing data in spatial proximity to
their physical referents affords new design opportunities and considerations
not present in traditional visualization, which researchers are now beginning
to explore. However, the AR research community has an extensive history of
designing graphics that are displayed in highly physical contexts. In this
work, we leverage the richness of AR research and apply it to situated
visualization. We derive design patterns which summarize common approaches of
visualizing data in situ. The design patterns are based on a survey of 293
papers published in the AR and visualization communities, as well as our own
expertise. We discuss design dimensions that help to describe both our patterns
and previous work in the literature. This discussion is accompanied by several
guidelines which explain how to apply the patterns given the constraints
imposed by the real world. We conclude by discussing future research directions
that will help establish a complete understanding of the design of situated
visualization, including the role of interactivity, tasks, and workflows.
Comment: To appear in IEEE VIS 202
Ethical and Social Aspects of Self-Driving Cars
As an envisaged future of transportation, self-driving cars are being
discussed from various perspectives, including social, economical, engineering,
computer science, design, and ethics. On the one hand, self-driving cars
present new engineering problems that are gradually being solved.
On the other hand, social and ethical problems are typically being presented in
the form of an idealized unsolvable decision-making problem, the so-called
trolley problem, which is grossly misleading. We argue that an applied
engineering-ethics approach to the development of new technology is what is
needed: applied in the sense that it focuses on the analysis of complex
real-world engineering problems. Software plays a crucial
role for the control of self-driving cars; therefore, software engineering
solutions should seriously handle ethical and social considerations. In this
paper we take a closer look at the regulative instruments, standards, design,
and implementations of components, systems, and services and we present
practical social and ethical challenges that have to be met, as well as novel
expectations for software engineering.
Comment: 11 pages, 3 figures, 2 tables
Understanding Context to Capture when Reconstructing Meaningful Spaces for Remote Instruction and Connecting in XR
Recent technological advances are enabling HCI researchers to explore
interaction possibilities for remote XR collaboration using high-fidelity
reconstructions of physical activity spaces. However, these reconstructions
are often created without user involvement, with an overt focus on capturing
sensory context that does not necessarily augment an informal social
experience. This work seeks to understand the social context that can be important
for reconstruction to enable XR applications for informal instructional
scenarios. Our study involved the evaluation of an XR remote guidance prototype
by 8 intergenerational groups of closely related gardeners using
reconstructions of personally meaningful spaces in their gardens. Our findings
contextualize physical objects and areas with various motivations related to
gardening and detail perceptions of XR that might affect the use of
reconstructions for remote interaction. We discuss implications for user
involvement to create reconstructions that better translate real-world
experience, encourage reflection, incorporate privacy considerations, and
preserve shared experiences with XR as a medium for informal intergenerational
activities.
Comment: 26 pages, 5 figures, 4 tables