Expressive cutting, deforming, and painting of three-dimensional digital shapes through asymmetric bimanual haptic manipulation
Practitioners of the geosciences, design, and engineering disciplines communicate complex ideas about shape by manipulating three-dimensional digital objects to match their conceptual models. However, the two-dimensional control interfaces common in software applications create a disconnect from three-dimensional manipulation. This research examines cutting, deforming, and painting manipulations for expressive three-dimensional interaction. It presents a cutting algorithm specialized for planning cuts on a triangle mesh, the extension of a deformation algorithm to inhomogeneous meshes, and the definition of inhomogeneous meshes by painting into a deformation property map. The thesis explores two-handed interactions with haptic force feedback in which each hand can fulfill an asymmetric bimanual role. These digital shape manipulations demonstrate a step toward the creation of expressive three-dimensional interactions.
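The idea of defining an inhomogeneous mesh by painting into a deformation property map can be pictured as a per-vertex scalar field edited with a falloff brush. The sketch below is illustrative only, not the thesis's implementation; the function name, the cosine falloff curve, and the parameters are all assumptions.

```python
import math

def paint_property(vertices, prop, center, radius, strength):
    """Blend a scalar deformation property (e.g. stiffness) into a
    per-vertex map using a smooth brush falloff (hypothetical scheme)."""
    new_prop = list(prop)
    for i, v in enumerate(vertices):
        d = math.dist(v, center)
        if d < radius:
            # cosine falloff: weight 1 at the brush center, 0 at the rim
            w = 0.5 * (1 + math.cos(math.pi * d / radius))
            new_prop[i] = (1 - w) * prop[i] + w * strength
    return new_prop

# a tiny patch of four vertices, initially uniform stiffness 1.0
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
stiff = [1.0] * 4
# paint a soft (low-stiffness) region around the first vertex
stiff = paint_property(verts, stiff, center=(0, 0, 0), radius=1.5, strength=0.1)
```

A deformation solver could then read this map to make painted regions bend more easily than the rest of the mesh.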
Physically Interacting With Four Dimensions
Thesis (Ph.D.) - Indiana University, Computer Sciences, 2009
People have long been fascinated with understanding the fourth dimension. While making pictures of 4D objects by projecting them to 3D can help reveal basic geometric features, 3D graphics images by themselves are of limited value. For example, just as 2D shadows of 3D curves may have lines crossing one another in the shadow, 3D graphics projections of smooth 4D topological surfaces can be interrupted where one surface intersects another.
The research presented here creates physically realistic models for simple interactions with objects and materials in a virtual 4D world. We provide methods for the construction, multimodal exploration, and interactive manipulation of a wide variety of 4D objects. One basic achievement of this research is to exploit the free motion of a computer-based haptic probe to support a continuous motion that follows the local continuity of a 4D surface, allowing collision-free exploration in the 3D projection. In 3D, this interactive probe follows the full local continuity of the surface as though we were in fact physically touching the actual static 4D object.
Our next contribution is to support dynamic 4D objects that can move, deform, and collide with other objects as well as with themselves. By combining graphics, haptics, and collision-sensing physical modeling, we can thus enhance our 4D visualization experience. Since we cannot actually place interaction devices in 4D, we develop fluid methods for interacting with a 4D object in its 3D shadow image using adapted reduced-dimension 3D tools for manipulating objects embedded in 4D. By physically modeling the correct properties of 4D surfaces, their bending forces, and their collisions in the 3D interactive or haptic controller interface, we can support full-featured physical exploration of 4D mathematical objects in a manner that is otherwise far beyond the real-world experience accessible to human beings.
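The notion of a 3D "shadow image" of a 4D object can be illustrated with a standard perspective projection along the w-axis. This is a generic textbook construction, not the thesis's specific pipeline; the `eye_w` viewing distance is an assumed parameter.

```python
def project_4d_to_3d(points, eye_w=3.0):
    """Perspective-project 4D points onto the w=0 hyperplane, viewed
    from an eye on the w-axis at w=eye_w (a generic 'shadow' projection)."""
    shadow = []
    for x, y, z, w in points:
        s = eye_w / (eye_w - w)  # points nearer the eye cast larger shadows
        shadow.append((x * s, y * s, z * s))
    return shadow

# the 16 vertices of a tesseract (4D hypercube) centered at the origin
verts = [(x, y, z, w)
         for x in (-1, 1) for y in (-1, 1)
         for z in (-1, 1) for w in (-1, 1)]
cube_in_cube = project_4d_to_3d(verts)
```

Under this projection, the tesseract's two cubic cells at w = -1 and w = +1 appear as a small cube nested inside a large one, the familiar "cube within a cube" shadow.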
Pseudo-haptics survey: Human-computer interaction in extended reality & teleoperation
Pseudo-haptic techniques are becoming increasingly popular in human-computer interaction. They replicate haptic sensations by leveraging primarily visual feedback rather than mechanical actuators. These techniques bridge the gap between the real and virtual worlds by exploring the brain’s ability to integrate visual and haptic information. One of the many advantages of pseudo-haptic techniques is that they are cost-effective, portable, and flexible. They eliminate the need for direct attachment of haptic devices to the body, which can be heavy and large and require a lot of power and maintenance. Recent research has focused on applying these techniques to extended reality and mid-air interactions. To better understand the potential of pseudo-haptic techniques, the authors developed a novel taxonomy encompassing tactile feedback, kinesthetic feedback, and combined categories in multimodal approaches, ground not covered by previous surveys. This survey highlights multimodal strategies and potential avenues for future studies, particularly regarding integrating these techniques into extended reality and collaborative virtual environments.
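One common pseudo-haptic trick is manipulating the control/display (C/D) gain: the cursor is drawn moving more slowly than the hand inside a "resistant" region, which the visual system tends to interpret as friction or weight. A minimal illustrative sketch, where the gain law and the sticky region are assumptions rather than anything specified in the survey:

```python
def apply_pseudo_haptic_gain(cursor, hand_delta, resistance_map):
    """Scale hand motion by a control/display (C/D) gain that drops
    inside 'resistant' regions, simulating friction purely visually."""
    x, y = cursor
    gain = 1.0 / (1.0 + resistance_map(x, y))  # more resistance -> slower cursor
    return (x + hand_delta[0] * gain, y + hand_delta[1] * gain)

# a hypothetical 'sticky' circular region around (100, 100)
def resistance(x, y):
    return 3.0 if (x - 100) ** 2 + (y - 100) ** 2 < 50 ** 2 else 0.0

free = apply_pseudo_haptic_gain((0, 0), (10, 0), resistance)       # full speed
sticky = apply_pseudo_haptic_gain((100, 100), (10, 0), resistance)  # slowed
```

The same hand movement produces a quarter of the on-screen displacement inside the sticky region, which users typically perceive as the cursor being "heavier" there.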
Amplifying Actions - Towards Enactive Sound Design
Recently, artists and designers have begun to use digital technologies to stimulate bodily interaction, while scientists keep revealing new findings about sensorimotor contingencies, changing the way in which we understand human knowledge. However, implicit knowledge generated in artistic projects can be difficult to transfer, and scientific research frequently remains isolated due to discipline-specific languages and methodologies. By mutually enriching holistic creative approaches and highly specific scientific ways of working, this doctoral dissertation aims to set the foundation for Enactive Sound Design. It focuses on sound that engages sensorimotor experience, an area neglected by existing design practices. The premise is that such a foundation is best developed if grounded in transdisciplinary methods that bring together scientific and design approaches.
The methodology adopted to achieve this goal is practice-based and supported by theoretical research and project analysis. Three different methodologies were formulated and evaluated during this doctoral study, based on a convergence of existing methods from design, psychology, and human-computer interaction. First, a basic design approach was used to engage in a reflective creation process and to extend the existing work on interaction gestalt through hands-on activities. Second, psychophysical experiments were carried out and adapted to suit the needed shift from reception-based tests to performance-based quantitative evaluation. Last, a set of participatory workshops was developed and conducted, within which enactive sound exercises were iteratively tested through direct and participatory observation, questionnaires, and interviews.
The foundation for Enactive Sound Design developed in this dissertation includes novel methods generated by extensive exploration of the fertile ground between basic design education, psychophysical experiments, and participatory design. Combining creative practices with traditional task analysis further developed the basic design approach; the results were a number of abstract sonic artefacts conceptualised as experimental apparatuses that allow psychologists to study enactive sound experience. Furthermore, a collaboration between designers and scientists on a psychophysical study produced a new methodology for the evaluation of sensorimotor performance with tangible sound interfaces. These performance experiments revealed that sonic feedback can support enactive learning. Finally, participatory workshops resulted in a number of novel methods focused on a holistic perspective fostered through the subjective experience of self-producing sound, and they indicated the influence that such an approach may have on both artists and scientists in the future. The role of the designer, as a scientific collaborator within psychological research and as a facilitator of participatory workshops, has also been evaluated.
Thus, this dissertation recommends a number of collaborative methods and strategies that can help designers to understand and reflectively create enactive sound objects. It is hoped that the examples of successful collaborations between designers and scientists presented in this thesis will encourage further projects and connections between different disciplines, with the final goal of creating a more engaging and more aware sonic future.
European Commission 6th Framework and European Science Foundation (COST Action
Understanding interaction mechanics in touchless target selection
Indiana University-Purdue University Indianapolis (IUPUI)
We use gestures frequently in daily life, to interact with people, pets, or objects. But interacting with computers using mid-air gestures continues to challenge the design of touchless systems. Traditional approaches to touchless interaction focus on exploring gesture inputs and evaluating user interfaces. I shift the focus from gesture elicitation and interface evaluation to touchless interaction mechanics. I argue for a novel approach to generating design guidelines for touchless systems: using fundamental interaction principles instead of reactively adapting to the sensing technology. In five sets of experiments, I explore visual and pseudo-haptic feedback, motor intuitiveness, handedness, and perceptual Gestalt effects. In particular, I study the interaction mechanics of touchless target selection. To that end, I introduce two novel interaction techniques: touchless circular menus, which allow command selection using directional strokes, and interface topographies, which use pseudo-haptic feedback to guide steering–targeting tasks. The results illuminate different facets of touchless interaction mechanics. For example, motor-intuitive touchless interactions explain how our sensorimotor abilities inform touchless interface affordances: we often make a single holistic oblique gesture, rather than several orthogonal hand gestures, while reaching toward a distant display. Following the Gestalt theory of visual perception, we found that similarity between user interface (UI) components decreased user accuracy, while good continuity made users faster. Other findings include hemispheric asymmetry affecting the transfer of training between dominant and nondominant hands, and pseudo-haptic feedback improving touchless accuracy. The results of this dissertation contribute design guidelines for future touchless systems.
Practical applications of this work include the use of touchless interaction techniques in domains such as entertainment, consumer appliances, surgery, patient-centric health settings, smart cities, interactive visualization, and collaboration.
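The directional-stroke selection behind a touchless circular menu can be sketched as mapping the angle of a stroke vector to one of the menu's sectors. This is an illustrative reconstruction, not the dissertation's code; the sector layout, angular offset, and minimum stroke length are assumptions.

```python
import math

def select_sector(stroke_start, stroke_end, n_items, min_length=20.0):
    """Map a directional mid-air stroke to one of n_items commands
    arranged on a circular menu; return None for too-short strokes."""
    dx = stroke_end[0] - stroke_start[0]
    dy = stroke_end[1] - stroke_start[1]
    if math.hypot(dx, dy) < min_length:
        return None  # too short to count as an intentional stroke
    angle = math.atan2(dy, dx) % (2 * math.pi)
    sector = 2 * math.pi / n_items
    # offset by half a sector so item 0 is centered on the 'east' direction
    return int(((angle + sector / 2) // sector) % n_items)

east = select_sector((0, 0), (100, 0), n_items=8)   # item 0
north = select_sector((0, 0), (0, 100), n_items=8)  # two sectors away
```

Because selection depends only on stroke direction, not on hitting a precise target, this style of menu tolerates the spatial noise typical of mid-air hand tracking.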