Mental Vision: a computer graphics platform for virtual reality, science and education
Despite the wide range of computer graphics frameworks and solutions available for virtual reality, it is still difficult to find one that simultaneously satisfies the many constraints of research and educational contexts. Advanced functionality and user-friendliness, rendering speed and portability, or scalability and image quality are opposing characteristics rarely found in the same approach. Furthermore, access to virtual-reality-specific devices such as CAVEs or wearable systems is limited by their cost and availability, most of these innovations being reserved to institutions and specialists able to afford and manage them through a strong background in programming. Finally, computer graphics and virtual reality are complex and difficult subjects to learn, due to the heterogeneity of notions a developer needs to practice with before attempting to implement a full virtual environment. In this thesis we describe our contributions to these topics, assembled in what we call the Mental Vision platform. Mental Vision is a framework composed of three main entities. First, a teaching- and research-oriented graphics engine, simplifying access to 2D/3D real-time rendering on mobile devices, personal computers and CAVE systems. Second, a series of pedagogical modules to introduce and practice computer graphics and virtual reality techniques. Third, two advanced VR systems: a wearable, lightweight and hands-free mixed reality setup, and a four-sided CAVE built from off-the-shelf hardware. In this dissertation we explain our conceptual, architectural and technical approach, pointing out how we created a robust and coherent solution that reduces the complexity of cross-platform and multi-device 3D rendering while simultaneously answering the often contradictory needs of computer graphics and virtual reality researchers and students.
A series of case studies evaluates how Mental Vision concretely satisfies these needs and achieves its goals, both in in vitro benchmarks and in in vivo scientific and educational projects.
Interactive Low-Dimensional Human Motion Synthesis by Combining Motion Models and PIK
This paper explores interactive low-dimensional human motion synthesis. We compare the performance of two motion models, Principal Component Analysis (PCA) and Probabilistic PCA (PPCA), for solving a constrained optimization problem within a low-dimensional latent space. We use PCA or PPCA as a preprocessing step to reduce the dimensionality of the database, making it tractable and encapsulating only the essential aspects of a specific motion pattern. Interactive user control is provided by formulating a low-dimensional optimization framework that uses a Prioritized Inverse Kinematics (PIK) strategy. The key insight of PIK is that the user can adjust a motion by adding constraints with different priorities. We demonstrate the robustness of our approach by synthesizing various styles of golf swing. This movement is challenging in that it is highly coordinated and requires great precision at high speed, so any artifact is clearly noticeable in the resulting motion. We also present results comparing local and global motion models in terms of synthesis realism and performance. Finally, the quality of the synthesized animations is assessed by comparing our results against a per-frame PIK technique.
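The core pipeline described above, reducing a motion database with PCA and then solving constraints in the latent space, can be sketched as follows. This is a minimal illustration, not the paper's actual method: the database is random stand-in data, a single joint value serves as a stand-in for an end-effector goal, and the constraint is solved by plain least squares rather than prioritized IK.

```python
import numpy as np

# Hypothetical motion database: 200 example poses, each a 60-DOF joint-angle vector.
rng = np.random.default_rng(0)
poses = rng.normal(size=(200, 60))

# PCA: centre the data and keep the first k principal components.
mean = poses.mean(axis=0)
_, _, vt = np.linalg.svd(poses - mean, full_matrices=False)
k = 8
basis = vt[:k]                      # (k, 60) latent basis

def encode(pose):
    """Project a full pose into the k-dimensional latent space."""
    return (pose - mean) @ basis.T

def decode(alpha):
    """Reconstruct a full 60-DOF pose from latent coordinates alpha."""
    return mean + alpha @ basis

# A constraint expressed through the latent space: drive one joint value
# (a stand-in for an end-effector goal) to a target by least squares in alpha.
target_joint, target_value = 5, 1.0
a = basis[:, target_joint]          # decode(alpha)[j] = mean[j] + alpha @ a
alpha = a * (target_value - mean[target_joint]) / (a @ a)  # minimum-norm solution
pose = decode(alpha)
print(abs(pose[target_joint] - target_value))  # constraint met to round-off
```

Optimizing over the k latent coordinates instead of all 60 joint angles is what keeps the constrained problem tractable while confining the solution to the motion pattern captured by the database.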
DIVE on the internet
This dissertation reports research and development of a platform for Collaborative Virtual Environments (CVEs). It has particularly focused on two major challenges: supporting the rapid development of scalable applications and easing their deployment on the Internet. This work employs a research method based on prototyping and refinement and promotes the use of this method for application development. A number of the solutions herein are in line with other CVE systems. One of the strengths of this work consists in a global approach to the issues raised by CVEs and the recognition that such complex problems are best tackled using a multi-disciplinary approach that understands both user and system requirements.
CVE application deployment is aided by an overlay network able to complement any IP multicast infrastructure in place. Apart from compensating for the weak worldwide deployment of multicast, this infrastructure provides a degree of introspection, remote control and visualisation. As such, it is an important aid in assessing the scalability of running applications. Scalability is further facilitated by specialised object distribution algorithms and an open framework for the implementation of novel partitioning techniques.
CVE application development is eased by a scripting language, which enables rapid development and favours experimentation. This scripting language interfaces with many aspects of the system and enables the prototyping of distribution-related components as well as user interfaces. It is the key construct of a distributed environment to which components written in different languages connect and on which they operate in a network-abstracted manner. The proposed solutions are exemplified and strengthened by three collaborative applications. The Dive room system is a virtual environment modelled after the room metaphor and supporting asynchronous and synchronous cooperative work. WebPath is a companion application to a Web browser that seeks to make the current history of page visits more visible and usable. Finally, the London travel demonstrator supports travellers by providing an environment where they can explore the city, use group collaboration facilities, rehearse particular journeys and access tourist information data.
An Information-Theoretic Framework for Consistency Maintenance in Distributed Interactive Applications
Distributed Interactive Applications (DIAs) enable geographically dispersed users to interact with each other in a virtual environment. A key factor in the success of a DIA is the maintenance of a consistent view of the shared virtual world for all participants. However, maintaining consistent states in DIAs is difficult under real networks. State changes communicated by messages over such networks suffer latency, leading to inconsistency across the application. Predictive Contract Mechanisms (PCMs) combat this problem by reducing the number of messages transmitted in return for perceptually tolerable inconsistency. This thesis examines the operation of PCMs using concepts and methods derived from information theory. This perspective results in a novel information model of PCMs that quantifies and analyzes the efficiency of such methods in communicating the reduced state information, and in a new adaptive multiple-model-based framework for improving consistency in DIAs.
The first part of this thesis introduces information measurements of user behavior in DIAs and formalizes the information model for PCM operation. In presenting the information model, the statistical dependence in the entity state, which makes it possible to use extrapolation models to predict future user behavior, is evaluated. The efficiency of a PCM in exploiting such predictability to reduce the amount of network resources required to maintain consistency is also investigated. It is demonstrated that, from the information theory perspective, PCMs can be interpreted as a form of information reduction and compression.
The second part of this thesis proposes an Information-Based Dynamic Extrapolation Model for dynamically selecting between extrapolation algorithms based on information evaluation and inferred network conditions. This model adapts PCM configurations to both user behavior and network conditions, making the most information-efficient use of the available network resources. In doing so, it improves PCM performance and consistency in DIAs.
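The message-reduction idea behind PCMs can be illustrated with the simplest such mechanism, first-order dead reckoning: the sender mirrors the receiver's extrapolation and transmits a state update only when the prediction error exceeds a tolerance. This is a hedged sketch, not the thesis's information model; the trajectory, threshold, and tick rate are arbitrary assumptions.

```python
import math

THRESHOLD = 0.5   # perceptually tolerable inconsistency (assumed units)
DT = 0.1          # simulation tick, seconds (assumed)

def true_position(t):
    """Hypothetical user trajectory on the sender's side."""
    return math.sin(t) * 10.0

# Each entry is a (time, position, velocity) update actually put on the wire.
pos0, vel0, t0 = true_position(0.0), 0.0, 0.0
sent = [(t0, pos0, vel0)]

prev = pos0
for step in range(1, 201):                   # 200 ticks
    t = step * DT
    actual = true_position(t)
    vel = (actual - prev) / DT               # finite-difference velocity estimate
    prev = actual
    # Receiver's view: first-order extrapolation from the last transmitted state.
    predicted = pos0 + vel0 * (t - t0)
    if abs(actual - predicted) > THRESHOLD:  # contract violated: resynchronise
        pos0, vel0, t0 = actual, vel, t
        sent.append((t0, pos0, vel0))

print(f"updates sent: {len(sent)} of 200 ticks")
```

The gap between updates is exactly the "perceptually tolerable inconsistency" traded for bandwidth; the thesis's contribution is quantifying that trade in information-theoretic terms and switching extrapolation models adaptively rather than fixing one.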
Exploring Engineering Applications of Visual Analytics in Virtual Reality
Recent advancements and technological breakthroughs in the development of so-called immersive interfaces, such as augmented (AR), mixed (MR), and virtual reality (VR), coupled with the growing mass-market adoption of such devices, have started to attract attention from academia and industry alike. Of these technologies, VR offers the most mature option in terms of both hardware and software, as well as the widest range of off-the-shelf offerings. VR is a term used interchangeably to denote both head-mounted displays (HMDs) and the fully immersive, bespoke 3D environments these devices transport their users to. With modern devices, developers can leverage a range of interaction modalities, including visual, audio, and even haptic feedback, in the creation of these virtual worlds. With such a rich interaction space, it is natural to think of VR as a well-suited environment for interactive visualisation and analytical reasoning over complex multidimensional data.
Research in visual analytics (VA) combines these two themes. Spanning the last decade and a half, it has produced a number of findings, including a range of new, more effective visualisation and analysis tools for ever more complex, noisier and larger data sets. Furthermore, the extension of this research through immersive interfaces to facilitate visual analytics has spun off a new field of research: immersive analytics (IA). Immersive analytics leverages the potential of immersive interfaces to aid the user in swift and effective data analysis.
Some of the most promising industrial application domains of such immersive interfaces are various branches of engineering, including aerospace design and civil engineering. The range of potential applications is vast and growing as new stakeholders adopt these immersive tools. However, the use of these technologies brings its own challenges. One such difficulty is the design of appropriate interaction techniques: there is no single optimal choice; rather, the choice varies with the available hardware, the user's prior experience, the task at hand, and the nature of the dataset.
To this end, my PhD work has focused on designing and analysing various interactive, VR-based immersive systems for engineering visual analytics. One of the key elements of such an immersive system is the selection of an adequate interaction method. In a series of both qualitative and quantitative studies, I have explored the potential of various interaction techniques that can be used to support the user in swift and effective data analysis.
Here, I investigated the feasibility of techniques such as hand-held controllers, gaze-tracking and hand-tracking input methods, used alone or in combination, in a variety of challenging use cases and scenarios. For instance, I developed and verified the usability and effectiveness of the AeroVR system for aerospace design in VR. This research allowed me to trim the very large design space of such systems, which had not been sufficiently explored thus far. Building on this work, I designed, developed, and tested a system for digital twin assessment in aerospace that coupled gaze-tracking and hand-tracking, achieved via an additional sensor attached to the front of the VR headset, with no need for the user to hold a controller. The analysis of results from a qualitative study with domain experts allowed me to distill and propose design implications for similar systems. Furthermore, I worked towards an effective VR-based visualisation of complex, multidimensional abstract datasets. Here, I developed and evaluated an immersive version of the well-known Parallel Coordinates Plots visualisation technique (IPCP). The results of a series of qualitative user studies yielded a list of design suggestions for IPCP, as well as tentative evidence that IPCP can be an effective tool for multidimensional data analysis. Lastly, I also worked on the design, development, and verification of a system allowing its users to capture information while conducting engineering surveys in VR.
Furthermore, conducting a meaningful evaluation of immersive analytics interfaces remains an open problem. It is difficult, and often not feasible, to use traditional A/B comparisons in controlled experiments, as the aim of immersive analytics is to provide users with new insights into their data rather than to optimise easily quantified factors. To this end, I developed a generative process for synthesising clustered datasets for VR analytics experiments that can be used in interface evaluation. I further validated this approach by designing and carrying out two user studies. The statistical analysis of the gathered data revealed that this generative process did indeed yield datasets that can be used in experiments without the datasets themselves being the dominant contributor to the variability between conditions.
Engineering and Physical Sciences Research Council (EPSRC-1788814); Trinity Hall and Cambridge Commonwealth, European & International Trust; Cambridge Philosophical Society
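A generative process of this kind, producing matched clustered datasets so that the data itself does not dominate between-condition variance, can be sketched as a simple Gaussian-mixture sampler. This is an illustrative assumption, not the thesis's actual generator; all parameter values and the function name are hypothetical.

```python
import numpy as np

def make_clustered_dataset(n_clusters=4, points_per_cluster=50,
                           n_dims=3, spread=0.5, seed=0):
    """Sample cluster centres uniformly, then draw points from an
    isotropic Gaussian around each centre. Returns (points, labels)."""
    rng = np.random.default_rng(seed)
    centres = rng.uniform(-5.0, 5.0, size=(n_clusters, n_dims))
    data, labels = [], []
    for idx, centre in enumerate(centres):
        pts = rng.normal(loc=centre, scale=spread,
                         size=(points_per_cluster, n_dims))
        data.append(pts)
        labels.extend([idx] * points_per_cluster)
    return np.vstack(data), np.array(labels)

# Two datasets from different seeds share the same statistical structure
# (cluster count, size, spread), so two experimental conditions can be run
# on distinct but comparable data.
X_a, y_a = make_clustered_dataset(seed=1)
X_b, y_b = make_clustered_dataset(seed=2)
print(X_a.shape, X_b.shape)
```

Holding the generator's parameters fixed across conditions while varying only the seed is what allows the experimenter to attribute observed differences to the interface rather than to the dataset.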
Indirect interaction in virtual reality through a mediator
Much current research in the field of multimodal interfaces (input and output) aims to make complex tasks achievable simply, naturally, and quickly. Expert interfaces should take into account the risks resulting from a commanded action, prevent any harmful action, and suggest possible alternatives. Given the complexity of the tasks to achieve and the exponential growth of information, adaptive systems are henceforth essential to enable and facilitate the operator's work. A good human-machine interface is thus strongly required. We note that multiple interaction and manipulation techniques are currently available, but the characteristic tools of the WIMP paradigm (Windows, Icons, Menus and Pointing device) have not yet found their equivalent in 3D interfaces. There is still a long way to go to find the perfect tool and establish it as a standard for 3D interfaces and applications. Our research therefore gradually focused on the proposal of a mediating interface: a highly adaptive and functional interface intended to simplify, as much as possible, human interaction in the execution of complex work. The concept of the "mediator" can be clarified as follows: a user in a fully immersive system, called the mediator world, is able to control or interact with, at a distance and through intermediary haptic devices, another virtual or real world called the controlled world. Let us recall that humans need simple tools to achieve complicated tasks. One of the ultimate goals is thus to make the machine adapt to the human instead of forcing the human to adapt to the machine.
Generating, animating, and rendering varied individuals for real-time crowds
To simulate realistic crowds of virtual humans in real time, three main requirements must be satisfied. First, quantity, i.e., the ability to simulate thousands of characters. Second, quality, because each virtual human composing a crowd needs to look unique in its appearance and animation. Finally, efficiency is paramount, for an operation that is usually cheap on a single virtual human becomes extremely costly when applied to large crowds. Developing an architecture able to manage all three aspects is a challenging problem that we have addressed in our research. Our first contribution is an efficient and versatile architecture called YaQ, able to simulate thousands of characters in real time. This platform, developed at EPFL-VRLab, results from several years of research and integrates state-of-the-art techniques at all levels: YaQ aims to provide efficient algorithms and real-time solutions for massively populating large-scale empty environments. YaQ thus fits various application domains, such as video games and virtual reality. Our architecture is especially efficient in managing the large quantity of data used to simulate crowds. To simulate large crowds, many instances of a small set of human templates have to be generated. From this starting point, if no care is taken to vary each character individually, many clones appear in the crowd. We present several algorithms to make each individual in the crowd unique. First, we introduce a new method to distinguish the body parts of a human and apply detailed color variety and patterns to each of them. Second, we present two techniques to modify the shape and profile of a virtual human: a simple and efficient method for attaching accessories to individuals, and efficient tools to scale the skeleton and mesh of an instance.
Finally, we also contribute to varying individuals' animation by introducing variations in the upper-body movements, allowing characters to make a phone call, keep a hand in their pocket, or carry heavy accessories. To achieve quantity in a crowd, levels of detail need to be used. We explore the most adequate solutions for simulating large crowds with levels of detail, while avoiding disturbing switches between two different representations of a virtual human. To do so, we develop solutions that make most variety techniques scalable across all levels of detail.
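The per-body-part color variety idea can be sketched as follows: each crowd instance of a shared human template gets a unique appearance by jittering a base color for every body part in HSV space. This is an illustrative sketch, not YaQ's actual pipeline; the part list, base colors, and jitter ranges are invented for the example.

```python
import colorsys
import random

BODY_PARTS = ["skin", "hair", "torso", "legs"]
BASE_HSV = {                 # assumed template base colours (h, s, v in [0, 1])
    "skin":  (0.08, 0.45, 0.85),
    "hair":  (0.09, 0.70, 0.30),
    "torso": (0.60, 0.60, 0.60),
    "legs":  (0.65, 0.50, 0.40),
}

def vary_instance(rng):
    """Return one RGB colour per body part, jittered around the template base."""
    colours = {}
    for part in BODY_PARTS:
        h, s, v = BASE_HSV[part]
        h = (h + rng.uniform(-0.03, 0.03)) % 1.0              # small hue shift
        s = min(1.0, max(0.0, s + rng.uniform(-0.15, 0.15)))  # clamp saturation
        v = min(1.0, max(0.0, v + rng.uniform(-0.15, 0.15)))  # clamp value
        colours[part] = colorsys.hsv_to_rgb(h, s, v)
    return colours

rng = random.Random(42)
crowd = [vary_instance(rng) for _ in range(1000)]  # 1000 distinct-looking instances
print(len({tuple(c["torso"]) for c in crowd}), "distinct torso colours")
```

Jittering in HSV rather than RGB keeps the variations plausible: hue stays near the template's identity while saturation and brightness provide most of the visible individuality, which is why clones become much harder to spot even with few templates.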
Multimedia augmented reality environments in the field of art
The relationship between Science and Art has, throughout history, gone through moments of closeness and of distance, to the point of being understood as two different cultures; yet there have also been interdisciplinary situations of collaboration and exchange which nowadays share digital culture and the use of the computer as a common nexus. According to Berenguer (2002), since the appearance of the computer, scientists and artists have been finding a common space of work and understanding. Through the use of new technologies, the distance separating the two disciplines grows ever shorter. This thesis, entitled "Entornos Multimedia de Realidad Aumentada en el Campo del Arte" (Multimedia Augmented Reality Environments in the Field of Art), presents a theoretical and practical investigation of augmented reality technology applied to art and related fields, such as edutainment (education + entertainment). The research is organised in two blocks: the first addresses the technology from the different factors considered relevant to its understanding and operation; the second presents a total of six essays that constitute the practical part of this thesis.
Portalés Ricart, C. (2008). Entornos multimedia de realidad aumentada en el campo del arte [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/3402