Learning 3D Navigation Protocols on Touch Interfaces with Cooperative Multi-Agent Reinforcement Learning
Using touch devices to navigate in virtual 3D environments such as computer
assisted design (CAD) models or geographical information systems (GIS) is
inherently difficult for humans, as the 3D operations have to be performed by
the user on a 2D touch surface. This ill-posed problem is classically solved
with a fixed and handcrafted interaction protocol, which must be learned by the
user. We propose to automatically learn a new interaction protocol that maps
2D user input to 3D actions in virtual environments using reinforcement
learning (RL). A fundamental problem of RL methods is the vast amount of
interactions often required, which are difficult to come by when humans are
involved. To overcome this limitation, we make use of two collaborative agents.
The first agent models the human by learning to perform the 2D finger
trajectories. The second agent acts as the interaction protocol, interpreting
and translating to 3D operations the 2D finger trajectories from the first
agent. We restrict the learned 2D trajectories to be similar to a training set
of collected human gestures by first performing state representation learning,
prior to reinforcement learning. This state representation learning is
addressed by projecting the gestures into a latent space learned by a
variational auto-encoder (VAE). Comment: 17 pages, 8 figures. Accepted at The European Conference on Machine
Learning and Principles and Practice of Knowledge Discovery in Databases 2019
(ECMLPKDD 2019).
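The abstract above describes projecting collected 2D gestures into a latent space learned by a VAE, so that the reinforcement-learned trajectories stay close to human ones. A minimal sketch of that projection step, using plain NumPy with hypothetical linear encoder weights (the actual architecture, dimensions, and trajectory length are assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes: a gesture is a (T, 2) finger trajectory; LATENT is the
# VAE latent dimensionality. Both values are illustrative.
T, LATENT = 16, 4
W_mu = rng.normal(scale=0.1, size=(2 * T, LATENT))      # hypothetical encoder weights
W_logvar = rng.normal(scale=0.1, size=(2 * T, LATENT))

def encode(trajectory):
    """Project a (T, 2) trajectory to latent mean and log-variance."""
    x = trajectory.reshape(-1)          # flatten to a 2T-vector
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

gesture = rng.uniform(-1.0, 1.0, size=(T, 2))  # a synthetic 2D gesture
mu, logvar = encode(gesture)
z = reparameterize(mu, logvar, rng)
print(z.shape)
```

In the paper's setup the first agent would then act in this latent space, which constrains its 2D trajectories to resemble the collected human gestures.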
Interactive form creation: exploring the creation and manipulation of free form through the use of interactive multiple input interface
Most current CAD systems support only the two most common input devices: a mouse and a keyboard, which impose a limit on the degree of interaction a user can have with the system. However, it is not uncommon for users to work together on the same computer during a collaborative task. Besides that, people tend to use both hands to manipulate 3D objects: one hand orients the object while the other performs some operation on it. The same applies to computer modelling in the conceptual phase of the design process: a designer can rotate and position an object with one hand, and manipulate its shape (deform it) with the other. Accordingly, the 3D object can be easily and intuitively changed through interactive manipulation with both hands. The research investigates the manipulation and creation of free-form geometries through the use of interactive interfaces with multiple input devices. First the creation of the 3D model will be discussed, and several different types of models will be illustrated. Furthermore, different tools that allow the user to control the 3D model interactively will be presented. Three experiments were conducted using different interactive interfaces; two bi-manual techniques were compared with the conventional one-handed approach. Finally it will be demonstrated that the use of new and multiple input devices can offer many opportunities for form creation. The problem is that few, if any, systems make it easy for the user or the programmer to use new input devices
Discrete event simulation and virtual reality use in industry: new opportunities and future trends
This paper reviews the area of combined discrete
event simulation (DES) and virtual reality (VR) use within industry.
While establishing a state of the art for progress in this
area, this paper makes the case for VR DES as the vehicle of choice
for complex data analysis through interactive simulation models,
highlighting both its advantages and current limitations. This paper
reviews active research topics such as VR and DES real-time
integration, communication protocols, system design considerations,
model validation, and applications of VR and DES. While
summarizing future research directions for this technology combination,
the case is made for smart factory adoption of VR DES as
a new platform for scenario testing and decision making. It is argued
that, in order for VR DES to fully meet the visualization requirements
of both Industry 4.0 and Industrial Internet visions of digital
manufacturing, further research is required in the areas of lower
latency image processing, DES delivery as a service, gesture recognition
for VR DES interaction, and linkage of DES to real-time data streams and Big Data sets
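At the core of the DES side reviewed above is an event queue processed in time order, with each state change available to drive a VR visualization. A minimal sketch of that loop, using Python's standard heap; the event names and horizon are illustrative assumptions, not taken from the paper:

```python
import heapq

def run_des(events, horizon):
    """Minimal discrete event simulation loop: pop the earliest event and
    record it until the simulation horizon is reached."""
    queue = list(events)            # (time, seq, name) tuples; seq breaks ties
    heapq.heapify(queue)
    log = []
    while queue:
        t, seq, name = heapq.heappop(queue)
        if t > horizon:
            break
        log.append((t, name))       # in a VR DES system, this state change
                                    # would also update the 3D scene in real time
    return log

# Hypothetical factory events, deliberately supplied out of order:
trace = run_des([(5.0, 1, "part_done"), (1.0, 0, "part_start"), (9.0, 2, "inspect")],
                horizon=8.0)
print(trace)  # -> [(1.0, 'part_start'), (5.0, 'part_done')]
```

Linking such a loop to real-time data streams, as the paper proposes, amounts to pushing externally observed events into the same queue.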
A Conceptual Framework for Motion Based Music Applications
Imaginary projections are the core of the framework for motion
based music applications presented in this paper. Their design depends
on the space covered by the motion tracking device, but also
on the musical feature involved in the application. They can be considered
a very powerful tool because they make it possible not only to project
the image of a traditional acoustic instrument into the virtual environment,
but also to express any spatially defined abstract concept.
The system pipeline starts from the musical content and, through a
geometrical interpretation, arrives at its projection in the physical
space. Three case studies involving different motion tracking devices
and different musical concepts will be analyzed. The three
examined applications have been programmed and already tested
by the authors. They aim, respectively, at expressive musical interaction
(Disembodied Voices), tonal music knowledge (Harmonic
Walk), and 20th-century music composition (Hand Composer)
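The pipeline described above maps tracked motion, through a geometrical interpretation, onto musical content. A minimal sketch in the spirit of Harmonic Walk, where the tracked floor space is partitioned into regions mapped to chords; the region layout and chord names are illustrative assumptions, not the authors' actual mapping:

```python
# Assumed four-region tonal layout across a 4-metre-wide tracked space.
CHORDS = ["C", "G", "Am", "F"]

def project(x, width=4.0):
    """Map a tracked horizontal position x (in metres) to a chord region."""
    col = min(int(x / width * len(CHORDS)), len(CHORDS) - 1)
    return CHORDS[col]

print(project(0.5))  # left of the space -> "C"
print(project(3.9))  # right of the space -> "F"
```

A real system would feed `project` from the motion-tracking device's position stream and trigger the corresponding harmony on each region change.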
Tangible user interfaces: past, present and future directions
In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from cognitive science, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research
Evaluating distributed cognitive resources for wayfinding in a desktop virtual environment.
As 3D interfaces, and in particular virtual environments, become increasingly realistic, there is a need to investigate the location and configuration of information resources, as distributed in the human-computer system, to support any required activities. It is important for the designer of 3D interfaces to be aware of information resource availability and distribution when considering issues such as cognitive load on the user. This paper explores how a model of distributed resources can support the design of alternative aids to virtual environment wayfinding with varying levels of cognitive load. The wayfinding aids have been implemented and evaluated in a desktop virtual environment
A First Step Towards Nuance-Oriented Interfaces for Virtual Environments
Designing usable interfaces for virtual environments (VEs) is not a trivial task. Much of the difficulty stems from the complexity and volume of the input data. Many VEs, in the creation of their interfaces, ignore much of the input data as a result of this. Using machine learning (ML), we introduce the notion of a nuance that can be used to increase the precision and power of a VE interface. An experiment verifying the existence of nuances using a neural network (NN) is discussed and a listing of guidelines to follow is given. We also review reasons why traditional ML techniques are difficult to apply to this problem
Enabling Self-aware Smart Buildings by Augmented Reality
Conventional HVAC control systems are usually incognizant of the physical
structures and materials of buildings. These systems merely follow pre-set HVAC
control logic based on abstract building thermal response models, which are
rough approximations to true physical models, ignoring dynamic spatial
variations in built environments. To enable more accurate and responsive HVAC
control, this paper introduces the notion of "self-aware" smart buildings, such
that buildings are able to explicitly construct physical models of themselves
(e.g., incorporating building structures and materials, and thermal flow
dynamics). The question is how to enable self-aware buildings that
automatically acquire dynamic knowledge of themselves. This paper presents a
novel approach using "augmented reality". The extensive user-environment
interactions in augmented reality not only can provide intuitive user
interfaces for building systems, but also can capture the physical structures
and possibly materials of buildings accurately to enable real-time building
simulation and control. This paper presents a building system prototype
incorporating augmented reality, and discusses its applications. Comment: This paper appears in ACM International Conference on Future Energy
Systems (e-Energy), 201
- …