A multi-projector CAVE system with commodity hardware and gesture-based interaction
Spatially-immersive systems such as CAVEs provide users with surrounding worlds by projecting 3D models on multiple screens around the viewer. Compared to alternative immersive systems such as HMDs, CAVE systems are a powerful tool for collaborative inspection of virtual environments due to better use of peripheral vision, less sensitivity to tracking errors, and higher communication possibilities among users. Unfortunately, traditional CAVE setups require sophisticated equipment including stereo-ready projectors and tracking systems with high acquisition and maintenance costs. In this paper we present the design and construction of a passive-stereo, four-wall CAVE system based on commodity hardware. Our system works with any mix of a wide range of projector models that can be replaced independently at any time, and achieves high resolution and brightness at a minimum cost. The key ingredients of our CAVE are a self-calibration approach that guarantees continuity across the screens, as well as a gesture-based interaction approach based on a clever combination of skeletal data from multiple Kinect sensors.
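The abstract leaves the multi-Kinect fusion step at a high level. As an illustration only (a minimal sketch with assumed function names and data, not the paper's actual method), skeletal data from several sensors can be combined per joint with a confidence-weighted average, after each skeleton has been transformed into a common world frame via the sensors' extrinsic calibration:

```python
import numpy as np

def fuse_joint(positions, confidences):
    """Confidence-weighted average of one joint observed by several sensors.

    positions:   (n_sensors, 3) joint coordinates, already transformed into
                 a common world frame using each sensor's extrinsics.
    confidences: (n_sensors,) per-sensor tracking confidence in [0, 1].
    """
    positions = np.asarray(positions, dtype=float)
    w = np.asarray(confidences, dtype=float)
    if w.sum() == 0:                 # no sensor trusts its estimate
        return positions.mean(axis=0)
    return (w[:, None] * positions).sum(axis=0) / w.sum()

# Two hypothetical Kinects see the same hand joint; the second reading
# (occlusion-free) carries more weight.
fused = fuse_joint([[0.0, 1.0, 2.0], [0.2, 1.0, 2.0]], [0.25, 0.75])
```

Weighting by per-sensor confidence lets a sensor with a partially occluded view contribute less without being discarded entirely.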
Adaptive User Perspective Rendering for Handheld Augmented Reality
Handheld Augmented Reality commonly implements some variant of magic lens
rendering, which turns only a fraction of the user's real environment into AR
while the rest of the environment remains unaffected. Since handheld AR devices
are commonly equipped with video see-through capabilities, AR magic lens
applications often suffer from spatial distortions, because the AR environment
is presented from the perspective of the camera of the mobile device. Recent
approaches counteract this distortion based on estimations of the user's head
position, rendering the scene from the user's perspective. To this end,
approaches usually apply face-tracking algorithms on the front camera of the
mobile device. However, this demands high computational resources and therefore
commonly affects the performance of the application beyond the already high
computational load of AR applications. In this paper, we present a method to
reduce the computational demands for user perspective rendering by applying
lightweight optical flow tracking and an estimation of the user's motion before
head tracking is started. We demonstrate the suitability of our approach for
computationally limited mobile devices and we compare it to device perspective
rendering, to head tracked user perspective rendering, as well as to fixed
point of view user perspective rendering.
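The abstract does not spell out the lightweight optical-flow stage. As a generic sketch (hypothetical function and data, not the authors' implementation), a robust estimate of the user's apparent 2D motion can be obtained from sparse feature correspondences in consecutive front-camera frames by taking the median flow vector, which tolerates a fraction of mistracked points:

```python
import numpy as np

def estimate_translation(prev_pts, next_pts):
    """Estimate the dominant image-space motion from tracked feature pairs.

    prev_pts, next_pts: (n, 2) arrays of feature locations in two
    consecutive frames (e.g. from a pyramidal Lucas-Kanade tracker).
    The per-axis median is robust to a minority of outlier tracks.
    """
    flow = np.asarray(next_pts, float) - np.asarray(prev_pts, float)
    return np.median(flow, axis=0)

# Three of four features agree on a (+2, -1) pixel shift; one is an outlier.
prev = np.array([[10, 10], [50, 20], [30, 40], [80, 80]])
nxt  = np.array([[12,  9], [52, 19], [32, 39], [60, 95]])
dx, dy = estimate_translation(prev, nxt)
```

Such a cheap translational estimate could then gate when the heavier face-tracking pipeline actually needs to run.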
An affordable surround-screen virtual reality display
Building a projection-based virtual reality display is a time, cost, and resource intensive enterprise, and many details contribute to the final display quality. This is especially true for surround-screen displays, most of which are one-of-a-kind systems or custom-made installations with specialized projectors, framing, and projection screens. In general, the costs of acquiring these types of systems have been in the hundreds of thousands and even millions of dollars, specifically for those supporting synchronized stereoscopic projection across multiple screens. Furthermore, the maintenance of such systems adds an additional recurrent cost, which makes them hard to afford for a general introduction in a wider range of industry, academic, and research communities. We present a low-cost, easy to maintain surround-screen design based on off-the-shelf affordable components for the projection screens, framing, and display system. The resulting system quality is comparable to significantly more expensive commercially available solutions. Additionally, users with average knowledge can implement our design, and it has the added advantage that single components can be individually upgraded based on necessity as well as available funds.
Pervasive Displays Research: What's Next?
Reports on the 7th ACM International Symposium on Pervasive Displays, which took place June 6-8 in Munich, Germany.
Collaboration in Augmented Reality: How to establish coordination and joint attention?
Schnier C, Pitsch K, Dierker A, Hermann T. Collaboration in Augmented Reality: How to establish coordination and joint attention? In: Boedker S, Bouvin NO, Lutters W, Wulf V, Ciolfi L, eds. Proceedings of the 12th European Conference on Computer Supported Cooperative Work (ECSCW 2011). Springer-Verlag London; 2011: 405-416.
We present an initial investigation from a semi-experimental setting in which an HMD-based AR system has been used for real-time collaboration in a task-oriented scenario (design of a museum exhibition). The analysis points out the specific conditions of interacting in an AR environment and focuses on one particular practical problem for the participants in coordinating their interaction: how to establish joint attention towards the same object or referent. The analysis offers insights into how the pair of users begins to familiarize themselves with the environment, into the limitations and opportunities of the setting, and into how they establish new routines, e.g. for solving the 'joint attention' problem.
Synthetic content generation for auto-stereoscopic displays
Due to the emergence of auto-stereoscopic visualization as one of the most prominent trends in displays, new content generation techniques for this kind of visualization are required. In this paper we present a study of the generation of multi-view synthetic content, examining several camera setups (planar, cylindrical, and hyperbolic) and their configurations. We discuss the different effects obtained by varying the parameters of these setups. A user study was conducted to analyze visual perception, asking participants for their optimal visualization. To create the virtual content, a multi-view system has been integrated into a powerful game engine, which allows us to use the latest graphics hardware advances. This integration is detailed, and several demos and videos are attached with this paper, which present a virtual world for auto-stereoscopic displays and the same scenario in a two-view anaglyph representation that can be visualized on any conventional display. In all these demos, the parameters studied can be modified, making it easy to appreciate their effects in a virtual scene.
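The two-view anaglyph representation mentioned above can be sketched in a few lines. A standard red-cyan anaglyph takes the red channel from the left view and the green and blue channels from the right view, so red-cyan glasses route one view to each eye (a generic composition, not necessarily the authors' exact pipeline):

```python
import numpy as np

def make_anaglyph(left, right):
    """Compose a red-cyan anaglyph from a rendered stereo pair.

    left, right: (H, W, 3) uint8 RGB images from the two camera views.
    """
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]      # red   channel <- left eye view
    out[..., 1] = right[..., 1]     # green channel <- right eye view
    out[..., 2] = right[..., 2]     # blue  channel <- right eye view
    return out

# Tiny synthetic stereo pair: left view is pure red, right view pure cyan,
# so the composed anaglyph comes out white everywhere.
left  = np.zeros((2, 2, 3), np.uint8); left[..., 0] = 255
right = np.zeros((2, 2, 3), np.uint8); right[..., 1:] = 255
ana = make_anaglyph(left, right)
```

This simple channel split ignores retinal-rivalry color correction, which production anaglyph encoders typically add.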
New visual coding exploration in MPEG: Super-MultiView and free navigation in free viewpoint TV
ISO/IEC MPEG and ITU-T VCEG have recently jointly issued a new multiview video compression standard, called 3D-HEVC, which reaches unprecedented compression performance for linear, dense camera arrangements. In view of supporting future high-quality, auto-stereoscopic 3D displays and Free Navigation virtual/augmented reality applications with sparse, arbitrarily arranged camera setups, innovative depth estimation and virtual view synthesis techniques with global optimizations over all camera views should be developed. Preliminary studies in response to the MPEG-FTV (Free viewpoint TV) Call for Evidence suggest these targets are within reach, with at least 6% bitrate gains over 3D-HEVC technology.
3D-Stereoscopic Immersive Analytics Projects at Monash University and University of Konstanz
Immersive Analytics investigates how novel interaction and display technologies may support analytical reasoning and decision making. The Immersive Analytics initiative of Monash University started in early 2014. Over the last few years, a number of projects have been developed or extended in this context to meet the requirements of semi- or fully-immersive stereoscopic environments. Different technologies are used for this purpose: CAVE2™ (a 330-degree large-scale visualization environment which can be used for educational and scientific group presentations, analyses, and discussions), stereoscopic Powerwalls (miniCAVEs, representing a segment of the CAVE2 and used for development and communication), Fishtanks, and/or HMDs (such as Oculus, VIVE, and mobile HMD approaches). Apart from CAVE2™, all systems are or will be employed at both Monash University and the University of Konstanz, especially to investigate collaborative Immersive Analytics. In addition, sensiLab extends most of the previous approaches by involving all senses: 3D visualization is combined with multi-sensory feedback, 3D printing, and robotics in a scientific-artistic-creative environment.