
    Cross-Dimensional Gestural Interaction Techniques for Hybrid Immersive Environments

    We present a set of interaction techniques for a hybrid user interface that integrates existing 2D and 3D visualization and interaction devices. Our approach is built around one- and two-handed gestures that support the seamless transition of data between co-located 2D and 3D contexts. Our testbed environment combines a 2D multi-user, multi-touch, projection surface with 3D head-tracked, see-through, head-worn displays and 3D tracked gloves to form a multi-display augmented reality. We also address some of the ways in which we can interact with private data in a collaborative, heterogeneous workspace.
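    To make the cross-dimensional transition concrete, here is a minimal Python sketch of how a pinch gesture might move a data item between a shared 2D surface and a user's private 3D context. The classes, field names, and gesture handlers are hypothetical illustrations, not the paper's actual system.

```python
# Minimal sketch of a cross-dimensional "pull" gesture; all names are
# hypothetical illustrations, not the paper's implementation.
from dataclasses import dataclass

@dataclass
class DataItem:
    id: str
    context: str      # "2d" (on the touch surface) or "3d" (in the AR view)
    position: tuple   # (x, y) on the surface, or (x, y, z) in world space

def pull_to_3d(item: DataItem, pinch_world_pos: tuple) -> DataItem:
    """Transition an item from the shared 2D surface into the user's
    private 3D AR context when a pinch gesture grabs it."""
    if item.context == "2d":
        item.context = "3d"
        item.position = pinch_world_pos  # re-anchor at the pinching hand
    return item

def drop_to_2d(item: DataItem, surface_xy: tuple) -> DataItem:
    """Return an item to the co-located multi-touch surface on release."""
    if item.context == "3d":
        item.context = "2d"
        item.position = surface_xy
    return item

# Example: a user pinches an item off the table and later drops it back.
chart = DataItem("chart-42", "2d", (0.3, 0.7))
chart = pull_to_3d(chart, (0.3, 0.7, 1.2))
chart = drop_to_2d(chart, (0.5, 0.5))
```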

    MoPeDT: A Modular Head-Mounted Display Toolkit to Conduct Peripheral Vision Research

    Peripheral vision plays a significant role in human perception and orientation. However, its relevance for human-computer interaction, especially head-mounted displays, has not been fully explored yet. In the past, a few specialized appliances were developed to display visual cues in the periphery, each designed for a single specific use case only; a multi-purpose headset to exclusively augment peripheral vision did not yet exist. We introduce MoPeDT: Modular Peripheral Display Toolkit, a freely available, flexible, reconfigurable, and extendable headset for conducting peripheral vision research. MoPeDT can be built with a 3D printer and off-the-shelf components. It features multiple spatially configurable near-eye display modules and full 3D tracking inside and outside the lab. With our system, researchers and designers may easily develop and prototype novel peripheral vision interaction and visualization techniques. We demonstrate the versatility of our headset with several possible applications for spatial awareness, balance, interaction, feedback, and notifications. We conducted a small study to evaluate the usability of the system. We found that participants were largely not irritated by the peripheral cues, but the headset's comfort could be further improved. We also evaluated our system based on established heuristics for human-computer interaction toolkits to show how MoPeDT adapts to changing requirements, lowers the entry barrier for peripheral vision research, and facilitates expressive power in the combination of modular building blocks.
    Comment: Accepted IEEE VR 2023 conference paper.

    Interactive form creation: exploring the creation and manipulation of free form through the use of interactive multiple input interface

    Most current CAD systems support only the two most common input devices: a mouse and a keyboard, which imposes a limit on the degree of interaction a user can have with the system. However, it is not uncommon for users to work together on the same computer during a collaborative task. Besides that, people tend to use both hands to manipulate 3D objects: one hand orients the object while the other performs some operation on it. The same approach can be applied to computer modelling in the conceptual phase of the design process: a designer can rotate and position an object with one hand, and manipulate its shape (deform it) with the other. Accordingly, the 3D object can be easily and intuitively changed through interactive manipulation with both hands. The research investigates the manipulation and creation of free-form geometries through the use of interactive interfaces with multiple input devices. First, the creation of the 3D model is discussed and several different types of models are illustrated. Furthermore, different tools that allow the user to control the 3D model interactively are presented. Three experiments were conducted using different interactive interfaces; two bi-manual techniques were compared with the conventional one-handed approach. Finally, it is demonstrated that the use of new and multiple input devices can offer many opportunities for form creation. The problem is that few, if any, systems make it easy for the user or the programmer to use new input devices.
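    As an illustration of the bimanual split described above (one hand orients, the other deforms), here is a minimal sketch assuming two tracked 6-DoF inputs. The function names, falloff model, and radius are assumptions for illustration, not the thesis' implementation.

```python
# Minimal sketch of bimanual 3D manipulation: the non-dominant hand
# rotates the whole model, the dominant hand pulls nearby vertices.
import numpy as np

def orient(vertices: np.ndarray, rotation: np.ndarray) -> np.ndarray:
    """Non-dominant hand: rotate the whole model (3x3 rotation matrix)."""
    return vertices @ rotation.T

def deform(vertices: np.ndarray, cursor: np.ndarray,
           pull: np.ndarray, radius: float = 0.2) -> np.ndarray:
    """Dominant hand: pull vertices within `radius` of the 3D cursor,
    with a smooth falloff so nearby geometry follows gradually."""
    dist = np.linalg.norm(vertices - cursor, axis=1)
    weight = np.clip(1.0 - dist / radius, 0.0, 1.0) ** 2
    return vertices + weight[:, None] * pull

# Both hands act in the same frame: orient first, then deform.
verts = np.random.rand(100, 3)
rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)  # 90 deg about z
verts = deform(orient(verts, rz), cursor=np.array([0.5, 0.5, 0.5]),
               pull=np.array([0.0, 0.0, 0.1]))
```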

    Digital fabrication of custom interactive objects with rich materials

    As ubiquitous computing is becoming a reality, people interact with an increasing number of computer interfaces embedded in physical objects. Today, interaction with those objects largely relies on integrated touchscreens. In contrast, humans are capable of rich interaction with physical objects and their materials through sensory feedback and dexterous manipulation skills. However, developing physical user interfaces that offer versatile interaction and leverage these capabilities is challenging. It requires novel technologies for prototyping interfaces with custom interactivity that support rich materials of everyday objects. Moreover, such technologies need to be accessible to empower a wide audience of researchers, makers, and users. This thesis investigates digital fabrication as a key technology to address these challenges. It contributes four novel design and fabrication approaches for interactive objects with rich materials. The contributions enable easy, accessible, and versatile design and fabrication of interactive objects with custom stretchability, input and output on complex geometries and diverse materials, tactile output on 3D-object geometries, and the capability to change their shape and material properties. Together, the contributions of this thesis advance the fields of digital fabrication, rapid prototyping, and ubiquitous computing towards the bigger goal of exploring interactive objects with rich materials as a new generation of physical interfaces.

    Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants

    The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with a diverse background, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”, as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing” at the European Conference on Computer Vision (ECCV) as well as “Eyewear Computing” at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).

    Exploring Users' Pointing Performance on Large Displays with Different Curvatures in Virtual Reality

    Large curved displays inside Virtual Reality environments are becoming popular for visualizing high-resolution content during analytical tasks, gaming, or entertainment. Prior research showed that such displays provide a wide field of view and offer users a high level of immersion. However, little is known about users' performance (e.g., pointing speed and accuracy) on them. We explore users' pointing performance on large virtual curved displays. We investigate standard pointing factors (e.g., target width and amplitude) in combination with relevant curve-related factors, namely display curvature and both linear and angular measures. Our results show that the less curved the display, the higher the performance, i.e., the faster the movement time. This result holds for pointing tasks controlled via their visual properties (linear widths and amplitudes) or their motor properties (angular widths and amplitudes). Additionally, display curvature significantly affects the error rate for both linear and angular conditions. Furthermore, we observe that curved displays perform better than or similarly to flat displays based on throughput analysis. Finally, we discuss our results and provide suggestions regarding pointing tasks on large curved displays in VR.
    Comment: IEEE Transactions on Visualization and Computer Graphics (2023).
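    The standard pointing measures this abstract refers to can be made concrete with a short sketch: Fitts' index of difficulty in the Shannon formulation, ID = log2(A/W + 1), throughput as ID divided by movement time, and a conversion from linear to angular (motor) measures for a viewer at distance d from the display. The numbers below are illustrative, not the paper's data.

```python
# Back-of-the-envelope Fitts' law measures for pointing studies.
import math

def index_of_difficulty(amplitude: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(amplitude / width + 1)

def throughput(amplitude: float, width: float, movement_time: float) -> float:
    """Throughput in bits/s for one amplitude/width condition."""
    return index_of_difficulty(amplitude, width) / movement_time

def angular_size(linear_size: float, distance: float) -> float:
    """Visual angle (radians) subtended by a linear extent at `distance`."""
    return 2 * math.atan(linear_size / (2 * distance))

# Linear condition: 0.8 m amplitude, 0.05 m target, 0.9 s movement time.
print(throughput(0.8, 0.05, 0.9))  # ~4.5 bits/s
# Angular (motor) condition for a viewer 2 m from the display.
print(throughput(angular_size(0.8, 2.0), angular_size(0.05, 2.0), 0.9))
```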

    Exploring Users' Pointing Performance on Virtual and Physical Large Curved Displays

    Large curved displays have emerged as a powerful platform for collaboration, data visualization, and entertainment. These displays provide highly immersive experiences, a wider field of view, and higher satisfaction levels. Yet, large curved displays are not commonly available due to their high costs. With the recent advancement of Head Mounted Displays (HMDs), large curved displays can be simulated in Virtual Reality (VR) with minimal cost and space requirements. However, to consider the virtual display as an alternative to the physical display, it is necessary to uncover user performance differences (e.g., pointing speed and accuracy) between these two platforms. In this paper, we explored users' pointing performance on both physical and virtual large curved displays. Specifically, with two studies, we investigate users' performance between the two platforms for standard pointing factors such as target width and target amplitude, as well as users' position relative to the screen. Results from the user studies reveal no significant difference in pointing performance between the two platforms when users are located at the same position relative to the screen. In addition, we observe that users' pointing performance improves when they are located at the center of a semi-circular display compared to off-centered positions. We conclude by outlining design implications for pointing on large curved virtual displays. These findings show that large curved virtual displays are a viable alternative to physical displays for pointing tasks.
    Comment: In the 29th ACM Symposium on Virtual Reality Software and Technology (VRST 2023).
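    One plausible geometric reading of the center versus off-center result: from the center of a semi-circular display, every target lies at the same distance (the display radius) and so subtends a comparable visual angle, whereas off-center positions make the distance, and thus the angular target size, vary along the arc. The sketch below illustrates this simplified model (ignoring foreshortening); it is an illustration, not the paper's analysis.

```python
# Simplified geometry: visual angle of a fixed-width target on a
# semi-circular display, for a centered vs. off-center viewer.
import math

def target_distance(viewer, radius, theta):
    """Distance from the viewer to a target at arc angle theta (radians)."""
    tx, ty = radius * math.cos(theta), radius * math.sin(theta)
    return math.hypot(tx - viewer[0], ty - viewer[1])

def visual_angle(width, distance):
    """Approximate visual angle (radians) of a target of linear `width`."""
    return 2 * math.atan(width / (2 * distance))

radius, width = 2.0, 0.05
for theta in (math.radians(a) for a in (10, 90, 170)):
    centered = visual_angle(width, target_distance((0.0, 0.0), radius, theta))
    offset = visual_angle(width, target_distance((0.8, 0.0), radius, theta))
    print(f"theta={math.degrees(theta):5.1f}  "
          f"centered={centered:.4f}  off-center={offset:.4f}")
```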

    Detecting Surface Interactions via a Wearable Microphone to Improve Augmented Reality Text Entry

    This thesis investigates whether we can detect and distinguish between surface interaction events, such as tapping or swiping, on a surface using a wearable microphone. It also explores the advantages of new text entry methods, such as tapping with two fingers simultaneously to enter capital letters and punctuation. For this purpose, we conducted a remote study to collect audio and video of three different ways people might interact with a surface, and we built a CNN classifier to detect taps. Our results show that we can detect and distinguish between surface interaction events such as taps and swipes via a wearable microphone on the user's head.
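    As a rough illustration of such a tap detector, here is a minimal CNN sketch over short log-mel spectrogram windows. The input shape, class layout (tap, swipe, background), and architecture are assumptions for illustration, not the thesis' exact model.

```python
# Minimal CNN sketch for classifying surface interaction sounds from
# log-mel spectrogram windows of shape (1, 64, 32): 64 mel bands,
# 32 frames. Architecture and classes are illustrative assumptions.
import torch
import torch.nn as nn

class SurfaceEventCNN(nn.Module):
    def __init__(self, n_classes: int = 3):  # tap, swipe, background
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # -> (16, 32, 16)
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # -> (32, 16, 8)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 8, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One batch of 8 spectrogram windows -> per-class logits.
model = SurfaceEventCNN()
logits = model(torch.randn(8, 1, 64, 32))
print(logits.shape)  # torch.Size([8, 3])
```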

    Freeform 3D interactions in everyday environments

    Personal computing is continuously moving away from traditional input using mouse and keyboard as new input technologies emerge. Recently, natural user interfaces (NUI) have led to interactive systems that are inspired by our physical interactions in the real world, and focus on enabling dexterous freehand input in 2D or 3D. Another recent trend is Augmented Reality (AR), which follows a similar goal to further reduce the gap between the real and the virtual, but predominantly focuses on output, by overlaying virtual information onto a tracked real-world 3D scene. Whilst AR and NUI technologies have been developed for both immersive 3D output as well as seamless 3D input, these have mostly been looked at separately. NUI focuses on sensing the user and enabling new forms of input; AR traditionally focuses on capturing the environment around us and enabling new forms of output that are registered to the real world. The output of NUI systems is mainly presented on a 2D display, while the input technologies for AR experiences, such as data gloves and body-worn motion trackers, are often uncomfortable and restricting when interacting in the real world. NUI and AR can be seen as very complementary, and bringing these two fields together can lead to new user experiences that radically change the way we interact with our everyday environments. The aim of this thesis is to enable real-time, low-latency, dexterous input and immersive output without heavily instrumenting the user. The main challenge is to retain and to meaningfully combine the positive qualities that are attributed to both NUI and AR systems. I review work in the intersecting research fields of AR and NUI, and explore freehand 3D interactions with varying degrees of expressiveness, directness, and mobility in various physical settings. There are a number of technical challenges that arise when designing a mixed NUI/AR system, which I address in this work: What can we capture, and how? How do we represent the real in the virtual? And how do we physically couple input and output? This is achieved by designing new systems, algorithms, and user experiences that explore the combination of AR and NUI.
    • 

    corecore