55 research outputs found

    Tele-immersive display with live-streamed video.

    Tang Wai-Kwan. Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 88-95). Abstracts in English and Chinese. Contents: Abstract; Acknowledgement;
    1 Introduction: Applications; Motivation and Goal; Thesis Outline
    2 Background and Related Work: Panoramic Image Navigation; Image Mosaicing (Image Registration; Image Composition); Immersive Display; Video Streaming (Video Coding; Transport Protocol)
    3 System Design: System Architecture (Video Capture Module; Video Streaming Module; Stitching and Rendering Module; Display Module); Design Issues (Modular Design; Scalability; Workload Distribution)
    4 Panoramic Video Mosaic: Video Mosaic to Image Mosaic (Assumptions; Processing Pipeline); Camera Calibration (Perspective Projection; Distortion; Calibration Procedure); Panorama Generation (Cylindrical and Spherical Panoramas; Homography; Homography Computation; Error Minimization; Stitching Multiple Images; Seamless Composition); Image Mosaic to Video Mosaic (Varying Intensity; Video Frame Management)
    5 Immersive Display: Human Perception System; Creating Virtual Scene; VisionStation (F-Theta Lens; VisionStation Geometry; Sweet Spot Relocation and Projection; Sweet Spot Relocation in Vector Representation)
    6 Video Streaming: Video Compression; Transport Protocol; Latency and Jitter Control; Synchronization
    7 Implementation and Results: Video Capture; Video Streaming (Video Encoding; Streaming Protocol); Implementation Results (Indoor Scene; Outdoor Scene); Evaluation
    8 Conclusion: Summary; Future Directions
    Appendix A: Parallax
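    The pipeline outlined above (camera calibration, homography estimation, cylindrical stitching) follows standard panoramic mosaicing practice. Below is a minimal sketch of the cylindrical warp such a pipeline typically applies to each frame before stitching, assuming the focal length f and principal point (cx, cy) come from the calibration step; the function name is illustrative, not taken from the thesis.

```python
import numpy as np

def cylindrical_warp_coords(x, y, f, cx, cy):
    """Map a pixel (x, y) of a perspective frame onto cylindrical
    coordinates, assuming focal length f (in pixels) and principal
    point (cx, cy) obtained from camera calibration."""
    theta = np.arctan2(x - cx, f)         # angle around the cylinder axis
    h = (y - cy) / np.hypot(x - cx, f)    # normalised height on the cylinder
    return f * theta + cx, f * h + cy     # scaled back to pixel units
```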

    Acting rehearsal in collaborative multimodal mixed reality environments

    This paper presents the use of our multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors, located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication was performed in a spatial and multimodal collaborative mixed reality environment based on the 'destination-visitor' paradigm, which we define and put into use. We detail our heterogeneous system architecture, which spans the three distributed and technologically asymmetric sites and features a range of capture, display, and transmission technologies. The actors' and director's experiences of rehearsing a scene via the system are then discussed, exploring the successes and failures of this heterogeneous form of telecollaboration. Overall, the common spatial frame of reference the system presented to all parties was highly conducive to theatrical acting and directing, allowing blocking, gross gesture, and unambiguous instruction to be issued. The relative inexpressivity of the actors' embodiments was identified as the central limitation of the telecommunication: moments that relied on performing and reacting to consequential facial expression and subtle gesture were less successful.

    Implementation of computer visualisation in UK planning

    PhD thesis. Within the processes of public consultation and development management, planners are required to consider spatial information and to appreciate spatial transformations and future scenarios. In the past, conventional media such as maps, plans, illustrations, sections, and physical models have been used. These traditional visualisations are highly abstract, sometimes difficult for lay people to understand, and inflexible in the range of scenarios they can represent. Due to technical advances and falling costs, however, the potential of computer-based visualisation has much improved, and it has been increasingly adopted within the planning process. Despite the growth in this field, insufficient consideration has been given to the possible weaknesses of computerised visualisations. Reflecting this lack of research, this study critically evaluates the use and potential of computerised visualisation within the planning process. The research has two components: case-study analysis and the author's reflections following his involvement in the design and use of visualisations in a series of planning applications; and in-depth interviews with experienced practitioners in the field. Based on a critical review of the existing literature, the research explores in particular the issues of credibility, realism, and cost of production. The findings illustrate the importance of the credibility of visualisations, a topic given insufficient consideration in the academic literature. Whereas the realism of visualisations has been the focus of much previous research, the case studies and practitioner interviews undertaken here suggest that a photo-realistic level of detail may not be required as long as the observer considers the visualisations a credible reflection of the underlying reality. Although visualisations will always be a simplification of reality and their level of realism is subjective, there is still potential for developing guidelines or protocols for image production based on commonly agreed standards. In the absence of such guidelines there is a danger that scepticism about the credibility of computer visualisations will prevent the approach from being used to its full potential. These findings suggest the need for a balance between scientific protocols and artistic licence in the production of computer visualisations. To be sufficiently credible for use in decision making within the planning process, the production of computer visualisations needs to follow a clear methodology and scientific protocols set out in good-practice guidance published by professional bodies and governmental organisations.

    Modeling and Simulation in Engineering

    This book provides an open platform for scholars, scientists, and engineers from all over the world to establish and share knowledge about various applications of modeling and simulation in the design process of products across engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques are applied, along with some of the most accurate and sophisticated software for treating complex systems. All the original contributions in this book are united by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to manipulate the simplifying assumptions in a way that reduces the complexity of the model (in order to make real-time simulation possible) without compromising the precision of the results.

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, surgical instruments, and the patient. Suboptimal interaction with patient data and the difficulty of mastering 3D anatomy from ill-posed 2D interventional images are essential concerns in image-guided therapies. State-of-the-art approaches often support the surgeon with external navigation systems or ill-conditioned image-based registration methods, both of which have certain drawbacks. Augmented reality (AR) has been introduced into operating rooms in the last decade; however, in image-guided interventions it has often been considered merely a visualization device improving traditional workflows. Consequently, the technology has gained little of the maturity it requires to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We revealed how exemplary workflows can be redefined via AR by taking full advantage of head-mounted displays that are co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating-room environment and exploiting all involved frustums to move spatial information between different bodies. The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches to improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out effective interventions with reduced complications.
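    The co-localization idea described above reduces, at its core, to chaining rigid transforms through a shared reference frame. A minimal sketch, with hypothetical frame names and poses (not the dissertation's actual system), of moving a point from an imaging device's frame into a head-mounted display's frame:

```python
import numpy as np

def rigid(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical tracked poses, both expressed in a shared room frame:
T_room_hmd = rigid(np.eye(3), np.array([0.0, 1.6, 0.0]))   # HMD pose
T_room_carm = rigid(np.eye(3), np.array([1.2, 1.1, 0.5]))  # imaging-device pose

# A point known in the imaging-device frame is re-expressed in the
# HMD frame by going through the common room frame:
T_hmd_carm = np.linalg.inv(T_room_hmd) @ T_room_carm
p_carm = np.array([0.0, 0.0, 0.3, 1.0])   # homogeneous point, metres
p_hmd = T_hmd_carm @ p_carm
```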

    Interactive mixed reality rendering in a distributed ray tracing framework

    The recent availability of interactive ray tracing has opened the way for new applications and for improving existing ones in terms of quality. Since today's CPUs are still too slow for this purpose, the necessary computing power is obtained by connecting a number of machines and using distributed algorithms. Mixed reality rendering - the art of convincingly combining real and virtual parts into a new composite scene - needs a powerful rendering method to obtain photorealistic results. The ray tracing algorithm provides an excellent basis for photorealistic rendering as well as advantages over other methods, so it is worth exploring its abilities for interactive mixed reality rendering. This thesis shows the applicability of interactive ray tracing for mixed reality (MR) and augmented reality (AR) applications on the basis of the OpenRT framework. Two extensions to the OpenRT system are introduced and serve as basic building blocks: streaming video textures and in-shader AR view compositing. Streaming video textures allow the real world to be included in interactive applications in the form of imagery. The AR view compositing mechanism is needed to fully exploit the advantages of modular shading in a ray tracer. A number of example applications from the entire spectrum of the Milgram reality-virtuality continuum illustrate the practical implications. An implementation of a classic AR scenario, inserting a virtual object into live video, shows how a differential rendering method can be used in combination with a custom-built real-time light-probe device to capture the incident light and include it in the rendering process, achieving convincing shading and shadows. Another field of mixed reality rendering is the real-time insertion of real actors into a virtual scene; two methods, video billboards and live 3D visual hull reconstruction, are discussed. The implementation of live mixed reality systems rests on a number of technologies besides rendering, and a comprehensive understanding of related methods and hardware is necessary; large parts of this thesis therefore deal with technical implementations and design alternatives. A final summary discusses the benefits and drawbacks of interactive ray tracing for mixed reality rendering.
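    The differential rendering step mentioned above composites the captured video with the difference between two synthetic renderings, one with and one without the virtual object, so that shadows and other indirect effects carry over into the live image. A minimal sketch under that assumption (helper and argument names are illustrative, not the OpenRT API):

```python
import numpy as np

def differential_composite(background, with_virtual, without_virtual, mask):
    """Differential rendering: add to the live video frame the change the
    virtual object causes in the synthetic scene (shadows, reflections),
    and use the full synthetic result where the object itself is visible.

    background      -- captured video frame (H x W x 3, floats in [0, 1])
    with_virtual    -- rendering of the local scene model plus virtual object
    without_virtual -- rendering of the local scene model alone
    mask            -- H x W array, 1 where the virtual object covers the pixel
    """
    delta = with_virtual - without_virtual          # shadows, indirect effects
    composite = np.clip(background + delta, 0.0, 1.0)
    return np.where(mask[..., None] > 0, with_virtual, composite)
```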
