Symmetric and asymmetric action integration during cooperative object manipulation in virtual environments
Cooperation between multiple users in a virtual environment (VE) can take place at one of three levels. These
are defined as where users can perceive each other (Level 1), individually change the scene (Level 2), or
simultaneously act on and manipulate the same object (Level 3). Despite representing the highest level of
cooperation, multi-user object manipulation has rarely been studied. This paper describes a behavioral
experiment in which the piano movers' problem (maneuvering a large object through a restricted space) was
used to investigate object manipulation by pairs of participants in a VE. Participants' interactions with the object
were integrated either symmetrically or asymmetrically. The former allowed only the common
component of the participants' actions to take place, whereas the latter used their mean. Symmetric action integration was
superior for sections of the task when both participants had to perform similar actions, but if participants had to
move in different ways (e.g., one maneuvering themselves through a narrow opening while the other traveled
down a wide corridor), then asymmetric integration was superior. With both forms of integration, participants
coordinated their actions poorly, which led to a substantial cooperation overhead (the
reduction in performance caused by having to cooperate with another person).
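The abstract does not give the exact integration formulas, but the two schemes can be sketched per axis of the object's motion. The symmetric rule below is one plausible reading (motion only where both users push in the same direction, bounded by the smaller input); the function names and the min-magnitude choice are assumptions, not the paper's definitions.

```python
import numpy as np

def integrate_symmetric(a, b):
    """Keep only the 'common' component of two users' action vectors.
    Assumption: per axis, the object moves only where both users push in
    the same direction, and then by the smaller of the two magnitudes."""
    same_sign = np.sign(a) == np.sign(b)
    return np.where(same_sign,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)),
                    0.0)

def integrate_asymmetric(a, b):
    """Average the two users' actions, so one user acting alone can
    still move the object (at reduced speed)."""
    return (a + b) / 2.0

# Example: user A pushes along x only, user B pushes along x and y.
a = np.array([1.0, 0.0])
b = np.array([1.0, 2.0])
print(integrate_symmetric(a, b))   # only the shared x-motion survives
print(integrate_asymmetric(a, b))  # the mean lets B's y-motion through
```

Under this reading, the symmetric rule blocks the "different actions" phases of the piano movers' task (one user's input is zeroed wherever the other does not match it), which is consistent with the asymmetric rule performing better there.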
Implementing flexible rules of interaction for object manipulation in cluttered virtual environments
Object manipulation in cluttered virtual environments (VEs)
brings additional challenges to the design of interaction
algorithms, when compared with open virtual spaces. As the
complexity of the algorithms increases so does the flexibility with
which users can interact, but this is at the expense of much
greater difficulties in implementation for developers. Three rules
that increase the realism and flexibility of interaction are outlined:
collision response, order of control, and physical compatibility.
The implementation of each is described, highlighting the
substantial increase in algorithm complexity that arises. Data are
reported from an experiment in which participants manipulated a
bulky virtual object through parts of a virtual building (the piano
movers’ problem). These data illustrate the benefits to users that
accrue from implementing flexible rules of interaction.
Interactive form creation: exploring the creation and manipulation of free form through the use of interactive multiple input interface
Most current CAD systems support only the two most common input devices, a mouse and a keyboard, which limits the degree of interaction a user can have with the system. However, it is not uncommon for users to work together on the same computer during a collaborative task. Besides that, people tend to use both hands to manipulate 3D objects: one hand is used to orient the object while the other hand performs some operation on it. The same approach could be applied to computer modelling in the conceptual phase of the design process: a designer can rotate and position an object with one hand, and manipulate its shape (deform it) with the other. Accordingly, the 3D object can be changed easily and intuitively through interactive manipulation with both hands. The research investigates the manipulation and creation of free-form geometries through the use of interactive interfaces with multiple input devices. First the creation of the 3D model will be discussed, and several different types of models will be illustrated. Furthermore, different tools that allow the user to control the 3D model interactively will be presented. Three experiments were conducted using different interactive interfaces; two bi-manual techniques were compared with the conventional one-handed approach. Finally, it will be demonstrated that the use of new and multiple input devices can offer many opportunities for form creation. The problem is that few, if any, systems make it easy for the user or the programmer to use new input devices.
A new navigation paradigm for virtual reality: the guided visit through a virtual world
The three main navigation paradigms for virtual worlds, i.e., free navigation, automatic tours, and multiuser navigation, show important limitations when dealing with guided visits that involve interactive cooperation among several users in 3D virtual worlds over the Internet. In this paper, we present our research into this issue and some important results. We propose a new navigation paradigm, called the guided visit through a virtual world, in which the capacity of a user to guide several remote users through the virtual world is enriched with the capacity to dynamically interchange the guiding role among the connected users. The user who acts as guide moves freely through the virtual world, and his/her movements are reproduced by the browsers of the other, guided users. We also present the architecture and the system we developed to implement this paradigm, as well as its integration into a working real-world application that demonstrates its use.
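The paradigm's two core mechanisms, reproducing the guide's movements in the followers' browsers and dynamically handing over the guiding role, can be sketched in a few lines. This is a minimal illustration of the idea only; the class, its method names, and the single-shared-pose model are hypothetical and not taken from the paper's architecture.

```python
class GuidedVisit:
    """Minimal sketch (hypothetical API) of the guided-visit paradigm:
    one connected user is the guide, the guide's viewpoint updates are
    mirrored to every guided user, and the role can change at runtime."""

    def __init__(self, users):
        self.users = list(users)
        self.guide = self.users[0]  # initial guide: an assumption
        self.poses = {u: (0.0, 0.0, 0.0) for u in self.users}

    def move_guide(self, pose):
        """Only the guide moves freely; the guided users' browsers
        reproduce the guide's movements."""
        for u in self.users:
            self.poses[u] = pose

    def hand_over(self, new_guide):
        """Dynamically interchange the guiding role between users."""
        if new_guide in self.users:
            self.guide = new_guide

# Example: "ana" guides "bo", then passes the role to "bo".
visit = GuidedVisit(["ana", "bo"])
visit.move_guide((1.0, 2.0, 3.0))  # both users now share ana's viewpoint
visit.hand_over("bo")
```

In a networked implementation the `move_guide` step would be a broadcast from the guide's client rather than a shared dictionary, but the control flow is the same.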
Light on horizontal interactive surfaces: Input space for tabletop computing
In the last 25 years we have witnessed the rise and growth of interactive tabletop research, both in academic and in industrial settings. The rising demand for the digital support of human activities motivated the need to bring computational power to table surfaces. In this article, we review the state of the art of tabletop computing, highlighting core aspects that frame the input space of interactive tabletops: (a) developments in hardware technologies that have caused the proliferation of interactive horizontal surfaces and (b) issues related to new classes of interaction modalities (multitouch, tangible, and touchless). A classification is presented that aims to give a detailed view of the current development of this research area and define opportunities and challenges for novel touch- and gesture-based interactions between the human and the surrounding computational environment. © 2014 ACM. This work has been funded by Integra (Amper Sistemas and CDTI, Spanish Ministry of Science and Innovation) and TIPEx (TIN2010-19859-C03-01) projects and Programa de Becas y Ayudas para la Realización de Estudios Oficiales de Máster y Doctorado en la Universidad Carlos III de Madrid, 2010.