A multi-projector CAVE system with commodity hardware and gesture-based interaction
Spatially-immersive systems such as CAVEs provide users with surrounding worlds by projecting 3D models on multiple screens around the viewer. Compared to alternative immersive systems such as HMDs, CAVE systems are a powerful tool for collaborative inspection of virtual environments due to better use of peripheral vision, less sensitivity to tracking errors, and better communication among users. Unfortunately, traditional CAVE setups require sophisticated equipment, including stereo-ready projectors and tracking systems, with high acquisition and maintenance costs. In this paper we present the design and construction of a passive-stereo, four-wall CAVE system based on commodity hardware. Our system works with any mix of a wide range of projector models that can be replaced independently at any time, and achieves high resolution and brightness at minimum cost. The key ingredients of our CAVE are a self-calibration approach that guarantees continuity across the screen, as well as a gesture-based interaction approach based on a clever combination of skeletal data from multiple Kinect sensors.
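The abstract does not detail how the skeletal data from multiple Kinects are combined. One common approach, shown purely as an illustrative sketch (not the authors' method), is confidence-weighted averaging of corresponding joints after each sensor's skeleton has been transformed into a shared world frame:

```python
import numpy as np

def fuse_joint(observations, confidences):
    """Fuse one skeleton joint seen by several Kinect sensors.

    observations: list of (3,) joint positions, each already transformed
                  into a common world frame via per-sensor calibration.
    confidences:  per-sensor tracking confidences (e.g. low when a joint
                  is only inferred, high when it is directly tracked).
    """
    P = np.asarray(observations, dtype=float)   # shape (n_sensors, 3)
    w = np.asarray(confidences, dtype=float)    # shape (n_sensors,)
    # Weighted mean: sensors that track the joint confidently dominate.
    return (w[:, None] * P).sum(axis=0) / w.sum()
```

For example, `fuse_joint([[0, 0, 0], [1, 1, 1]], [1, 3])` yields `[0.75, 0.75, 0.75]`, i.e. the sensor with three times the confidence pulls the fused joint three quarters of the way toward its observation.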
Future Directions in Astronomy Visualisation
Despite the large budgets spent annually on astronomical research equipment
such as telescopes, instruments and supercomputers, the general trend is to
analyse and view the resulting datasets using small, two-dimensional displays.
We report here on alternative advanced image displays, with an emphasis on
displays that we have constructed, including stereoscopic projection, multiple
projector tiled displays and a digital dome. These displays can provide
astronomers with new ways of exploring the terabyte and petabyte datasets that
are now regularly being produced from all-sky surveys, high-resolution computer
simulations, and Virtual Observatory projects. We also present a summary of the
Advanced Image Displays for Astronomy (AIDA) survey which we conducted from
March-May 2005, in order to raise some issues pertinent to the current and
future level of use of advanced image displays.
Comment: 13 pages, 2 figures, accepted for publication in PAS
An affordable surround-screen virtual reality display
Building a projection-based virtual reality display is a time, cost, and resource intensive enterprise, and many details contribute to the final display quality. This is especially true for surround-screen displays, most of which are one-of-a-kind systems or custom-made installations with specialized projectors, framing, and projection screens. In general, the costs of acquiring these types of systems have been in the hundreds of thousands and even millions of dollars, specifically for those supporting synchronized stereoscopic projection across multiple screens. Furthermore, the maintenance of such systems adds a recurrent cost, which makes them hard to afford for general introduction in a wider range of industry, academic, and research communities. We present a low-cost, easy-to-maintain surround-screen design based on off-the-shelf affordable components for the projection screens, framing, and display system. The resulting system quality is comparable to significantly more expensive commercially available solutions. Additionally, users with average knowledge can implement our design, and it has the added advantage that single components can be individually upgraded based on necessity as well as available funds.
Developing a mixed reality assistance system based on projection mapping technology for manual operations at assembly workstations.
ABSTRACT
Manual tasks play an important role in socially sustainable manufacturing enterprises. Commonly, manual operations are used for low-volume production, but they are not limited to it. Operational models in manufacturing systems based on x-to-order paradigms (e.g. assembly-to-order) may require manual operations to speed up the ramp-up time of new product configuration assemblies. The involvement of manual operations in any production line may make the manufacturing or assembly process more susceptible to human errors, which translate into delays, defects and/or poor product quality. In this scenario, virtual and augmented realities can offer significant advantages in supporting the human operator during manual operations. This research work presents the development of a mixed (virtual and augmented) reality assistance system that permits real-time support of manual operations. A review of mixed reality techniques and technologies was conducted, from which a projection mapping solution was chosen for the proposed assistance system. According to the specific requirements of the demonstration environment, hardware and software components were selected. The developed mixed reality assistance system was able to guide a user without any prior knowledge through the successful completion of a specific assembly task.
The Universal Media Book
We explore the integration of projected imagery with a physical book that acts as a tangible interface to multimedia data. Using a camera and projector pair, a tracking framework is presented wherein the 3D positions of planar pages are monitored as they are turned back and forth by a user, and data is correctly warped and projected onto each page at interactive rates to provide the user with an intuitive mixed-reality experience. The book pages are blank, so traditional camera-based approaches to tracking physical features on the display surface do not apply. Instead, in each frame, feature points are independently extracted from the camera and projector images and matched to recover the geometry of the pages in motion. The book can be loaded with multimedia content, including images and videos. In addition, volumetric datasets can be explored by removing a page from the book and using it as a tool to navigate through a virtual 3D volume.
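Since each page is planar, the camera-projector feature matches described above determine a homography between the two views of the page. As a minimal sketch of the standard direct linear transform (DLT) estimation, in plain NumPy and not the authors' implementation:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous),
    from >= 4 point matches. src, dst: (N, 2) arrays of pixel coordinates."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each match contributes two linear constraints on the 9 entries of H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A: last right-singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]   # fix the arbitrary scale
```

With the homography (or, with calibrated devices, the full plane pose derived from it), the projected content can be pre-warped so that it lands undistorted on the moving page.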
Advanced Calibration of Automotive Augmented Reality Head-Up Displays
This work presents advanced calibration methods for augmented reality head-up displays (AR-HUDs) in motor vehicles, based on parametric perspective projections and non-parametric distortion models. AR-HUD calibration is important for placing virtual objects correctly in relevant applications such as navigation systems or parking maneuvers. Although the state of the art offers some useful approaches to this problem, this dissertation aims to develop more advanced yet less complicated approaches. As a prerequisite for the calibration, we defined several relevant coordinate systems, including the three-dimensional (3D) world, the viewpoint space, the HUD field-of-view (HUD-FOV) space, and the two-dimensional (2D) virtual image space. We describe the projection of images from an AR-HUD projector toward the driver's eyes as a view-dependent pinhole camera model consisting of intrinsic and extrinsic matrices. Under this assumption, we first estimate the intrinsic matrix using the boundaries of the HUD viewing area. Next, we calibrate the extrinsic matrices at different viewpoints within a selected "eyebox", accounting for the driver's changing eye positions. The 3D positions of these viewpoints are tracked by a driver camera. For each individual viewpoint we obtain a set of 2D-3D correspondences between points in the virtual image space and their matching control points in front of the windshield. Once these correspondences are available, we compute the extrinsic matrix at the corresponding viewpoint.
By comparing the re-projected and real pixel positions of these virtual points, we obtain a 2D distribution of bias vectors, from which we reconstruct warping maps that contain the information about the image distortion. For completeness, we repeat the above extrinsic calibration procedure at all selected viewpoints. With the calibrated extrinsic parameters, we recover the viewpoints in the world coordinate system. Since we simultaneously track these points in the driver-camera space, we further calibrate the transformation from the driver camera to world space using these 3D-3D correspondences. To handle non-participating viewpoints within the eyebox, we obtain their extrinsic parameters and warping maps through non-parametric interpolation. Our combination of parametric and non-parametric models outperforms the state of the art in terms of target complexity and time efficiency while maintaining comparable calibration accuracy. In all our calibration schemes, the projection errors in the evaluation phase are within a few millimeters at a distance of 7.5 meters, corresponding to an angular accuracy of about 2 arc minutes, which is close to the resolving power of the eye.
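The extrinsic step described above is an instance of camera resection: given the intrinsic matrix and 2D-3D correspondences, recover the extrinsic matrix [R|t]. The following is a minimal DLT sketch under the same pinhole assumption; it is illustrative only and not the dissertation's actual solver:

```python
import numpy as np

def estimate_extrinsics_dlt(K, pts3d, pts2d):
    """Recover rotation R and translation t from >= 6 non-degenerate
    2D-3D correspondences, given the 3x3 intrinsic matrix K."""
    A = np.zeros((2 * len(pts3d), 12))
    for i, ((X, Y, Z), (u, v)) in enumerate(zip(pts3d, pts2d)):
        # Two linear constraints per point on the 12 entries of P = K[R|t].
        A[2 * i]     = [X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u]
        A[2 * i + 1] = [0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v]
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)          # projection matrix, up to scale
    Rt = np.linalg.inv(K) @ P         # s * [R|t]
    Rt /= np.linalg.norm(Rt[:, 0])    # rotation columns have unit length
    if np.linalg.det(Rt[:, :3]) < 0:  # resolve the sign ambiguity
        Rt = -Rt
    U, _, Vt2 = np.linalg.svd(Rt[:, :3])
    return U @ Vt2, Rt[:, 3]          # nearest proper rotation, translation
```

Repeating this at every tracked eyebox viewpoint yields the set of view-dependent extrinsics; reprojecting the control points through the recovered model then gives the per-pixel bias vectors from which the warping maps are built.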
3D (embodied) projection mapping and sensing bodies : a study in interactive dance performance
This dissertation identifies the synergies between physical and virtual environments when designing for immersive experiences in interactive dance performances. The integration of virtual information in physical space is transforming our interactions and experiences with the world. By using the body and creative expression as the interface between real and virtual worlds, dance performance creates a privileged framework to research and design interactive mixed reality environments and immersive augmented architectures. The research is primarily situated in the fields of visual art and interaction design. It combines performance with transdisciplinary fields and intertwines practice with theory. The theoretical and conceptual implications involved in designing and experiencing immersive hybrid environments are analyzed using the reality-virtuality continuum. These theories helped frame the ways augmented reality architectures are achieved through the integration of dance performance with digital software and reception displays. They also helped identify the main artistic affordances and restrictions in the design of augmented reality and augmented virtuality environments for live performance. These pervasive media architectures were materialized in three field experiments, the live dance performances. Each performance was created in three different stages of conception, design and production. The first stage was to "digitize" the performer's movement and brain activity into the virtual environment and our system. This was accomplished through the use of depth-sensor cameras, 3D motion capture, and brain-computer interfaces. The second stage was the creation of the computational architecture and software that aggregates the connections and mapping between the physical body and the spatial dynamics of the virtual environment. This process created real-time interactions between the performer's behavior and motion and the real-time generative computer 3D graphics.
Finally, the third stage consisted of the output modality: 3D projector-based augmentation techniques were adopted in order to overlay the virtual environment onto physical space. This thesis proposes and lays out theoretical, technical, and artistic frameworks between 3D digital environments and moving bodies in dance performance. By sensing the body and the brain with the 3D virtual environments, new layers of augmentation and interaction are established, and ultimately this generates mixed reality environments for embodied improvisational self-expression.
KAVE - Kinect Cave: design, tools and comparative analysis with other VR technologies
Virtual reality has been delivered through many different forms and iterations. One of them is the CAVE. CAVE systems have developed over the years, but they still have prohibitive costs and are rather complex to implement. In this thesis we propose our own low-cost CAVE system, including details of the setup as well as calibration software developed to help achieve the goals of this thesis, and compare it to other low-cost CAVEs found in the literature. This thesis also encompasses a presence study that was performed to assess the resulting CAVE. The study compared CAVE, PC and head-mounted display in terms of presence and workload, using validated questionnaires from the literature. The resulting data showed that the HMD induced a higher sense of presence than the CAVE, and the CAVE induced a higher sense of presence than the PC. Regarding workload, the data showed no statistically meaningful differences between the three technologies, except for the physical demand of performing a task in the CAVE compared to performing the same task on the PC.
State of the art 3D technologies and MVV end to end system design
The subject of this thesis is the analysis and review of all 3D technologies, both existing and under development for domestic environments, taking multiview video (MVV) technologies as the point of reference. All sections of the chain, from the capture stage to reproduction, are analyzed. The aim is to design a possible satellite architecture for a future MVV television system, in the context of two possible scenarios, broadcast or interactive. The analysis covers technical considerations as well as commercial limitations.
- …