The Iray Light Transport Simulation and Rendering System
While ray tracing has become increasingly common and path tracing is well
understood by now, a major challenge lies in crafting an easy-to-use and
efficient system implementing these technologies. Following a purely
physically-based paradigm while still allowing for artistic workflows, the Iray
light transport simulation and rendering system allows for rendering complex
scenes by the push of a button and thus makes accurate light transport
simulation widely available. In this document we discuss the challenges and
implementation choices that follow from our primary design decisions,
demonstrating that such a rendering system can be made a practical, scalable,
and efficient real-world application that has been adopted by various companies
across many fields and is in use by many industry professionals today.
Interactive inspection of complex multi-object industrial assemblies
The final publication is available at Springer via http://dx.doi.org/10.1016/j.cad.2016.06.005. The use of virtual prototypes and digital models containing thousands of individual objects is commonplace in complex industrial applications like the cooperative design of huge ships. Designers are interested in selecting and editing specific sets of objects during interactive inspection sessions. This is, however, not supported by standard visualization systems for huge models. In this paper we discuss in detail the concept of the rendering front in multiresolution trees, its properties, and the algorithms that construct the hierarchy and efficiently render it, applied to very complex CAD models so that the model structure and the identities of objects are preserved. We also propose an algorithm for the interactive inspection of huge models which uses a rendering budget and supports selection of individual objects and sets of objects, displacement of the selected objects, and real-time collision detection during these displacements. Our solution, based on the analysis of several existing view-dependent visualization schemes, uses a Hybrid Multiresolution Tree that mixes layers of exact geometry, simplified models, and impostors, together with a time-critical, view-dependent algorithm and a Constrained Front. The algorithm has been successfully tested in real industrial environments; the models involved are presented and discussed in the paper.
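The budget-driven refinement of a rendering front can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the node layout, the screen-space error metric, and the greedy largest-error-first refinement order are all assumptions made for the example.

```python
import heapq
from dataclasses import dataclass, field
from typing import List

@dataclass(eq=False)
class Node:
    error: float              # screen-space error if rendered as-is (higher = coarser)
    cost: int                 # rendering cost of this node (e.g., triangle count)
    children: List["Node"] = field(default_factory=list)

def select_front(root: Node, budget: int) -> List[Node]:
    """Greedily refine the rendering front, always splitting the node with
    the largest error, as long as the total cost stays within the budget."""
    front = [root]
    spent = root.cost
    # max-heap on error (heapq is a min-heap, so negate; id() breaks ties)
    heap = [(-root.error, id(root), root)]
    while heap:
        _, _, node = heapq.heappop(heap)
        if not node.children:
            continue  # leaf: exact geometry, cannot refine further
        extra = sum(c.cost for c in node.children) - node.cost
        if spent + extra > budget:
            continue  # refining this node would exceed the rendering budget
        front.remove(node)
        spent += extra
        for c in node.children:
            front.append(c)
            heapq.heappush(heap, (-c.error, id(c), c))
    return front
```

A constrained front as in the paper would additionally pin selected objects to their exact-geometry layer; here the budget alone decides the cut.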
A Haptic Modeling System
Haptics has been studied as a means of providing users with natural and immersive haptic sensations in various real, augmented, and virtual environments, but it is still relatively unfamiliar to the general public. One reason is the lack of abundant haptic content in areas familiar to the general public. Even though some modeling tools do exist for creating haptic content, the addition of haptic data to graphic models is still relatively primitive, time-consuming, and unintuitive. In order to establish a comprehensive and efficient haptic modeling system, this chapter first defines the haptic modeling processes and their scope. It then proposes a haptic modeling system that can, based on depth images and an image data structure, create and edit haptic content easily and intuitively for virtual objects. This system can also efficiently handle non-uniform haptic properties per pixel, and can effectively represent diverse haptic properties (stiffness, friction, etc.).
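A per-pixel haptic property map aligned with a depth image can be sketched as below. The class and field names are hypothetical and the representation is deliberately simplified; the chapter's actual data structure may differ.

```python
from dataclasses import dataclass

@dataclass
class HapticPixel:
    depth: float      # depth-image value (m)
    stiffness: float  # surface stiffness (N/m)
    friction: float   # Coulomb friction coefficient

class HapticMap:
    """Non-uniform haptic properties stored per pixel of a depth image."""
    def __init__(self, width, height, default):
        self.w, self.h = width, height
        # one independent HapticPixel per pixel, initialized to the default
        self.pixels = [[HapticPixel(*default) for _ in range(width)]
                       for _ in range(height)]

    def paint(self, x, y, stiffness=None, friction=None):
        """Edit haptic properties at one pixel, leaving others untouched."""
        p = self.pixels[y][x]
        if stiffness is not None:
            p.stiffness = stiffness
        if friction is not None:
            p.friction = friction

    def query(self, x, y):
        """Look up the haptic properties at the contact pixel."""
        return self.pixels[y][x]
```

During rendering, the contact point would be projected into the depth image and `query` would supply the stiffness and friction for the force computation.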
Unstructured Human Activity Detection from RGBD Images
Being able to detect and recognize human activities is essential for several
applications, including personal assistive robotics. In this paper, we perform
detection and recognition of unstructured human activity in unstructured
environments. We use an RGBD sensor (Microsoft Kinect) as the input sensor, and
compute a set of features based on human pose and motion, as well as based on
image and pointcloud information. Our algorithm is based on a hierarchical
maximum entropy Markov model (MEMM), which considers a person's activity as
composed of a set of sub-activities. We infer the two-layered graph structure
using a dynamic programming approach. We test our algorithm on detecting and
recognizing twelve different activities performed by four people in different
environments, such as a kitchen, a living room, an office, etc., and achieve
good performance even when the person was not seen before in the training set.
Comment: 2012 IEEE International Conference on Robotics and Automation. (A preliminary version of this work was presented at the AAAI workshop on Pattern, Activity and Intent Recognition, 2011.)
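The dynamic-programming inference over the sub-activity sequence can be illustrated with a standard Viterbi-style decoder on a single layer of the model. The states, scores, and transitions below are toy values for illustration, not the paper's learned hierarchical MEMM.

```python
def viterbi(obs_scores, trans, init):
    """Dynamic-programming decoding of the most likely state sequence.

    obs_scores: list of dicts, per frame: state -> log observation score
    trans:      dict (prev_state, state) -> log transition score
    init:       dict state -> log initial score
    """
    states = list(init)
    # best[t][s] = (best log-score of any path ending in s at t, backpointer)
    best = [{s: (init[s] + obs_scores[0][s], None) for s in states}]
    for t in range(1, len(obs_scores)):
        layer = {}
        for s in states:
            prev_s, score = max(
                ((p, best[t - 1][p][0] + trans[(p, s)]) for p in states),
                key=lambda kv: kv[1])
            layer[s] = (score + obs_scores[t][s], prev_s)
        best.append(layer)
    # backtrack from the best final state
    last = max(states, key=lambda s: best[-1][s][0])
    path = [last]
    for t in range(len(obs_scores) - 1, 0, -1):
        path.append(best[t][path[-1]][1])
    return list(reversed(path))
```

In the two-layered model, a decoder of this kind would run over sub-activity labels while a higher layer scores the overall activity; the hierarchy itself is omitted here for brevity.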
Multi Layered Multi Task Marker Based Interaction in Information Rich Virtual Environments
Simple and cheap interaction has a key role in the operation and exploration of any Virtual Environment (VE). In this paper, we propose an interaction technique that provides two different ways of interaction (information and control) on complex objects in a simple and computationally cheap way. The interaction is based on the use of multiple embedded markers in a specialized manner. The proposed marker works as an interaction peripheral, much like a touch pad, which can perform any type of interaction in a 3D VE. The proposed marker is used for interaction not only with Augmented Reality (AR), but also with Mixed Reality. A biological virtual learning application was developed and used for evaluation and experimentation. We conducted our experiments in two phases. First, we compared a simple VE with the proposed layered VE. Second, a comparative study was conducted between the proposed marker, a simple layered marker, and multiple single markers. We found that the proposed marker improved learning, eased interaction, and took comparatively less task execution time. The results showed improved learning for the layered VE as compared to the simple VE.
New Geometric Data Structures for Collision Detection
We present new geometric data structures for collision detection and more, including: Inner Sphere Trees, the first data structure to compute the penetration volume efficiently; Protosphere, a new algorithm to compute space-filling sphere packings for arbitrary objects; Kinetic AABBs, a bounding volume hierarchy that is optimal in the number of updates when the objects deform; and the Kinetic Separation-List, an algorithm that is able to perform continuous collision detection for complex deformable objects in real-time. Moreover, we present applications of these new approaches to hand animation, real-time collision avoidance in dynamic environments for robots, and haptic rendering, including a user study that explores the influence of the degrees of freedom in complex haptic interactions. Last but not least, we present a new benchmarking suite for both performance and quality benchmarks, and a theoretical analysis of the running time of bounding-volume-based collision detection algorithms.
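The core idea behind computing a penetration volume from inner sphere packings can be sketched with the standard sphere–sphere lens-volume formula. This brute-force pairwise sum is only an illustration: the actual Inner Sphere Trees prune sphere pairs hierarchically rather than testing all of them.

```python
import math

def sphere_overlap_volume(c1, r1, c2, r2):
    """Exact intersection volume of two spheres (centers c1, c2; radii r1, r2)."""
    d = math.dist(c1, c2)
    if d >= r1 + r2:
        return 0.0                      # disjoint
    if d <= abs(r1 - r2):               # one sphere contained in the other
        r = min(r1, r2)
        return 4.0 / 3.0 * math.pi * r ** 3
    # lens volume of two intersecting spheres
    return (math.pi * (r1 + r2 - d) ** 2 *
            (d * d + 2 * d * (r1 + r2) - 3 * (r1 - r2) ** 2)) / (12 * d)

def penetration_volume(spheres_a, spheres_b):
    """Approximate penetration volume of two objects, each represented by
    a packing of non-overlapping inner spheres [(center, radius), ...]."""
    return sum(sphere_overlap_volume(ca, ra, cb, rb)
               for ca, ra in spheres_a
               for cb, rb in spheres_b)
```

Because the inner spheres of one object do not overlap each other, the summed pairwise overlaps approximate the intersection volume of the two objects, which is what makes the measure useful as a continuous penalty for penalty-based force computation.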
Analysis domain model for shared virtual environments
The field of shared virtual environments, which also
encompasses online games and social 3D environments, has a
system landscape consisting of multiple solutions that share substantial functional overlap. However, there is little system interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper has two main contributions. The first contribution is a broad domain analysis of shared virtual environments, which enables developers to have a better understanding of the whole rather than the part(s). The second contribution is a reference domain model for discussing and describing solutions: the Analysis Domain Model.
Kimera: from SLAM to Spatial Perception with 3D Dynamic Scene Graphs
Humans are able to form a complex mental model of the environment they move
in. This mental model captures geometric and semantic aspects of the scene,
describes the environment at multiple levels of abstraction (e.g., objects,
rooms, buildings), and includes static and dynamic entities and their relations
(e.g., a person being in a room at a given time). In contrast, current robots'
internal representations still provide a partial and fragmented understanding
of the environment, either in the form of a sparse or dense set of geometric
primitives (e.g., points, lines, planes, voxels) or as a collection of objects.
This paper attempts to reduce the gap between robot and human perception by
introducing a novel representation, a 3D Dynamic Scene Graph (DSG), that
seamlessly captures metric and semantic aspects of a dynamic environment. A DSG
is a layered graph where nodes represent spatial concepts at different levels
of abstraction, and edges represent spatio-temporal relations among nodes. Our
second contribution is Kimera, the first fully automatic method to build a DSG
from visual-inertial data. Kimera includes state-of-the-art techniques for
visual-inertial SLAM, metric-semantic 3D reconstruction, object localization,
human pose and shape estimation, and scene parsing. Our third contribution is a
comprehensive evaluation of Kimera in real-life datasets and photo-realistic
simulations, including a newly released dataset, uHumans2, which simulates a
collection of crowded indoor and outdoor scenes. Our evaluation shows that
Kimera achieves state-of-the-art performance in visual-inertial SLAM, estimates
an accurate 3D metric-semantic mesh model in real-time, and builds a DSG of a
complex indoor environment with tens of objects and humans in minutes. Our
final contribution shows how to use a DSG for real-time hierarchical semantic
path-planning. The core modules in Kimera are open-source.
Comment: 34 pages, 25 figures, 9 tables. arXiv admin note: text overlap with arXiv:2002.0628
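The layered-graph structure of a DSG, with nodes on abstraction layers and edges encoding spatio-temporal relations, can be sketched as follows. The specific layer names and relation labels here are illustrative assumptions, not the paper's exact layer set or API.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Abstraction layers, bottom to top (illustrative; the paper's layers differ
# in naming and granularity).
LAYERS = ["mesh", "objects_agents", "places", "rooms", "building"]

@dataclass
class DsgNode:
    node_id: str
    layer: str
    attributes: dict = field(default_factory=dict)

class DynamicSceneGraph:
    """A layered graph: nodes are spatial concepts at different levels of
    abstraction; edges are spatio-temporal relations within or across layers."""
    def __init__(self):
        self.nodes: Dict[str, DsgNode] = {}
        self.edges: List[Tuple[str, str, str]] = []  # (src, dst, relation)

    def add_node(self, node_id, layer, **attrs):
        assert layer in LAYERS, f"unknown layer: {layer}"
        self.nodes[node_id] = DsgNode(node_id, layer, attrs)

    def add_edge(self, src, dst, relation):
        # cross-layer edges (e.g., object -> room) encode containment;
        # time-stamped edges can encode dynamic relations
        self.edges.append((src, dst, relation))

    def nodes_in_layer(self, layer):
        return [n for n in self.nodes.values() if n.layer == layer]
```

Hierarchical semantic path-planning, as in the paper's final contribution, would then plan coarsely over room nodes before refining over places and the metric mesh.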