
    Cosmic cookery: making a stereoscopic 3D animated movie.

    This paper describes our experience making a short stereoscopic movie visualizing the development of structure in the universe during the 13.7 billion years from the Big Bang to the present day. Aimed at a general audience for the Royal Society's 2005 Summer Science Exhibition, the movie illustrates how the latest cosmological theories based on dark matter and dark energy are capable of producing structures as complex as spiral galaxies, and allows the viewer to directly compare observations from the real universe with theoretical results. 3D is an inherent feature of the cosmology data sets, and stereoscopic visualization provides a natural way to present the images to the viewer, in addition to allowing researchers to visualize these vast, complex data sets. The presentation of the movie used passive, linearly polarized projection onto a 2 m-wide screen, but it was also required to play back on a Sharp RD3D display and in anaglyph projection at venues without dedicated stereoscopic display equipment. Additionally, lenticular prints were made from key images in the movie. We discuss the following technical challenges during the stereoscopic production process: 1) controlling the depth presentation, 2) editing the stereoscopic sequences, and 3) generating compressed movies in display-specific formats. We conclude that the generation of high-quality stereoscopic movie content using desktop tools and equipment is feasible. This does require careful quality control and manual intervention, but we believe these overheads are worthwhile when presenting inherently 3D data, as the result is significantly increased impact and better understanding of complex 3D scenes.
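
    The anaglyph fallback mentioned above can be illustrated with a minimal sketch (not the authors' actual production pipeline): a red-cyan anaglyph frame takes its red channel from the left-eye view and its green and blue channels from the right-eye view. The Frame struct below is a hypothetical stand-in for whatever frame format the desktop tools actually used.

```cpp
#include <cstdint>
#include <vector>

// Minimal stand-in for one RGB video frame (not the authors' actual format).
struct Frame {
    int width = 0, height = 0;
    std::vector<uint8_t> rgb;          // interleaved R,G,B, row-major

    uint8_t& at(int x, int y, int c)       { return rgb[3 * (y * width + x) + c]; }
    uint8_t  at(int x, int y, int c) const { return rgb[3 * (y * width + x) + c]; }
};

// Compose a red-cyan anaglyph: red channel from the left eye,
// green and blue channels from the right eye.
Frame makeAnaglyph(const Frame& left, const Frame& right) {
    Frame out = left;                  // copy size and left-eye data
    for (int y = 0; y < out.height; ++y) {
        for (int x = 0; x < out.width; ++x) {
            out.at(x, y, 0) = left.at(x, y, 0);   // R from left view
            out.at(x, y, 1) = right.at(x, y, 1);  // G from right view
            out.at(x, y, 2) = right.at(x, y, 2);  // B from right view
        }
    }
    return out;
}
```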

    Object-based 2D-to-3D video conversion for effective stereoscopic content generation in 3D-TV applications

    Three-dimensional television (3D-TV) has gained increasing popularity in the broadcasting domain, as it enables enhanced viewing experiences in comparison to conventional two-dimensional (2D) TV. However, its application has been constrained by the lack of essential content, i.e., stereoscopic videos. To alleviate this content shortage, an economical and practical solution is to reuse the huge media resources that are available in monoscopic 2D and convert them to stereoscopic 3D. Although stereoscopic video can be generated from monoscopic sequences using depth measurements extracted from cues like focus blur, motion and size, the quality of the resulting video may be poor, as such measurements are usually arbitrarily defined and appear inconsistent with the real scenes. To help solve this problem, a novel method for object-based stereoscopic video generation is proposed which features i) optical-flow-based occlusion reasoning for determining the depth ordinal, ii) object segmentation using improved region growing from masks of the determined depth layers, and iii) a hybrid depth estimation scheme using content-based matching (inside a small library of true stereo image pairs) and depth-ordinal-based regularization. Comprehensive experiments have validated the effectiveness of our proposed 2D-to-3D conversion method in generating stereoscopic videos of consistent depth measurements for 3D-TV applications.
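
    Of the three components, segmentation by region growing from a depth-layer mask is the easiest to sketch generically. The code below is a minimal illustration under assumed details (4-connectivity, a running-mean intensity criterion, a caller-chosen tolerance), not the paper's "improved" variant.

```cpp
#include <cmath>
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

// Hypothetical single-channel frame: 'pixels' holds row-major 8-bit intensities.
struct Gray {
    int w = 0, h = 0;
    std::vector<uint8_t> pixels;
    uint8_t at(int x, int y) const { return pixels[y * w + x]; }
};

// Grow an object mask outward from seed pixels of one depth layer:
// a neighbouring pixel joins the region when its intensity is within
// 'tol' of the region's running mean.
std::vector<uint8_t> growRegion(const Gray& img,
                                const std::vector<uint8_t>& seed, int tol) {
    std::vector<uint8_t> mask = seed;              // 1 = inside region
    std::queue<std::pair<int, int>> frontier;
    double sum = 0.0;
    long   count = 0;
    for (int y = 0; y < img.h; ++y)
        for (int x = 0; x < img.w; ++x)
            if (seed[y * img.w + x]) {
                frontier.push({x, y});
                sum += img.at(x, y);
                ++count;
            }
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!frontier.empty()) {
        auto [x, y] = frontier.front();
        frontier.pop();
        for (int k = 0; k < 4; ++k) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || ny < 0 || nx >= img.w || ny >= img.h) continue;
            if (mask[ny * img.w + nx]) continue;
            double mean = sum / count;
            if (std::abs(img.at(nx, ny) - mean) <= tol) {
                mask[ny * img.w + nx] = 1;         // accept and keep growing
                frontier.push({nx, ny});
                sum += img.at(nx, ny);
                ++count;
            }
        }
    }
    return mask;
}
```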

    Redundancy of stereoscopic images: Experimental Evaluation

    With the advancement in visualization devices over recent years, we are seeing a growing market for stereoscopic content. In order to convey 3D content by means of stereoscopic displays, one needs to transmit and display at least two views of the video content. This has profound implications for the resources required to transmit the content, as well as for the complexity of the visualization system. It is known that stereoscopic images are redundant, which may prove useful for compression and may have a positive effect on the construction of the visualization device. In this paper we describe an experimental evaluation of data redundancy in color stereoscopic images. In experiments with computer-generated and real-life test stereo images, several observers visually tested the stereopsis threshold and the accuracy of parallax measurement in anaglyphs and stereograms as functions of the degree of blur applied to one of the two stereo images, as well as the color saturation threshold in one of the two images for which full-color 3D perception with no visible color degradation is maintained. The experiments support a theoretical estimate that, to the data required to reproduce one of the two stereoscopic images, one has to add only a few percent of that amount of data in order to achieve stereoscopic perception.
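
    The experiment hinges on progressively degrading only one image of a stereo pair while the other stays sharp. A minimal way to reproduce that setup, assuming a plain grayscale buffer and a simple box blur rather than the authors' actual test harness, is sketched below; the larger the blur radius observers can tolerate, the more redundant the second view.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical grayscale view of a stereo pair (row-major 8-bit samples).
struct View {
    int w = 0, h = 0;
    std::vector<uint8_t> px;
    uint8_t at(int x, int y) const { return px[y * w + x]; }
};

// Box-blur one view with the given radius; the other view is left untouched.
View boxBlur(const View& in, int radius) {
    View out = in;
    for (int y = 0; y < in.h; ++y) {
        for (int x = 0; x < in.w; ++x) {
            long sum = 0, n = 0;
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= in.w || ny >= in.h) continue;
                    sum += in.at(nx, ny);           // accumulate neighbourhood
                    ++n;
                }
            }
            out.px[y * in.w + x] = static_cast<uint8_t>(sum / n);
        }
    }
    return out;
}
```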

    3D Capturing with Monoscopic Camera

    This article presents a new concept: using the auto-focus function of a monoscopic camera sensor to estimate depth-map information, which avoids not only the use of auxiliary equipment or human interaction, but also the computational complexity introduced by structure from motion (SfM) or depth analysis. The system architecture, which supports capturing, processing and display of both stereo images and video data, is discussed. A novel stereo image pair generation algorithm using Z-buffer-based 3D surface recovery is proposed. Based on the depth map, we are able to calculate the disparity map (the distance in pixels between the image points in the two views) for the image. The presented algorithm uses a single image with depth information (e.g., a z-buffer) as input and produces two images, one for the left and one for the right eye.
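
    As a rough sketch of the final rendering step described above (depth map to disparity map to left/right views), the code below applies the standard pinhole relation disparity = focal × baseline / depth and shifts each pixel symmetrically into the two views. The camera parameters, the black unfilled holes and the lack of Z-buffer occlusion resolution are simplifying assumptions, not the paper's algorithm.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical inputs: a colour image and a per-pixel depth (z-buffer) value,
// both row-major. Camera parameters are illustrative, not from the paper.
struct Rgb { uint8_t r, g, b; };

void renderStereoPair(const std::vector<Rgb>& image,
                      const std::vector<float>& depth,   // metres, > 0
                      int w, int h,
                      float focalPx, float baselineM,
                      std::vector<Rgb>& left, std::vector<Rgb>& right) {
    left.assign(image.size(), Rgb{0, 0, 0});              // holes stay black;
    right.assign(image.size(), Rgb{0, 0, 0});             // real systems inpaint them
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // Classic pinhole relation: disparity (pixels) = f * B / Z.
            float d = focalPx * baselineM / depth[y * w + x];
            int shift = static_cast<int>(std::lround(d * 0.5f));
            int xl = x + shift, xr = x - shift;            // symmetric half-shifts
            if (xl >= 0 && xl < w) left[y * w + xl]  = image[y * w + x];
            if (xr >= 0 && xr < w) right[y * w + xr] = image[y * w + x];
        }
    }
}
```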

    Stereoscopic Sketchpad: 3D Digital Ink

    --Context-- This project looked at the development of a stereoscopic 3D environment in which a user is able to draw freely in all three dimensions. The main focus was on the storage and manipulation of the ‘digital ink’ with which the user draws. For a drawing and sketching package to be effective it must not only have an easy-to-use interface, it must also be able to handle all input data quickly and efficiently so that the user is able to focus fully on their drawing. --Background-- When it comes to sketching in three dimensions, the majority of applications currently available rely on vector-based drawing methods. This is primarily because the applications are designed to take a user's two-dimensional input and transform it into a three-dimensional model. Having the sketch represented as vectors makes it simpler for the program to act upon its geometry and thus convert it to a model. There are a number of methods to achieve this aim, including Gesture Based Modelling, Reconstruction and Blobby Inflation. Other vector-based applications focus on the creation of curves, allowing the user to draw within or on existing 3D models. They also allow the user to create wire-frame-type models. These stroke-based applications bring the user closer to traditional sketching than the more structured modelling methods detailed above. While at present the field is inundated with vector-based applications mainly focused upon sketch-based modelling, there are significantly fewer voxel-based applications. The majority of these applications focus on the deformation and sculpting of voxmaps, almost the opposite of drawing and sketching, and on the creation of three-dimensional voxmaps from standard two-dimensional pixmaps. How to actually sketch freely within a scene represented by a voxmap has rarely been explored. This comes as a surprise when so many of the standard 2D drawing programs in use today are pixel based. --Method-- As part of this project a simple three-dimensional drawing program was designed and implemented using C and C++. This tool is known as Sketch3D and was created using a Model View Controller (MVC) architecture. Due to the modular nature of Sketch3D's system architecture it is possible to plug a range of different data structures into the program to represent the ink in a variety of ways. A series of data structures have been implemented and tested for efficiency. These structures were a simple list, a 3D array, and an octree. They were tested for: the time it takes to insert or remove points from the structure; how easy it is to manipulate points once they are stored; and how the number of points stored affects the draw and rendering times. One of the key issues brought up by this project was devising a means by which a user is able to draw in three dimensions while using only two-dimensional input devices. The method settled upon and implemented involves using the mouse or a digital pen to sketch as one would in a standard 2D drawing package, while linking the up and down keyboard keys to the current depth. This allows the user to move in and out of the scene as they draw. A couple of user-interface tools were also developed to assist the user. A 3D cursor was implemented, along with a toggle which, when on, highlights all of the points intersecting the depth plane on which the cursor currently resides. These tools allow the user to see exactly where they are drawing in relation to previously drawn lines.
    --Results-- The tests conducted on the data structures clearly revealed that the octree was the most effective data structure. While not the most efficient in every area, it manages to avoid the major pitfalls of the other structures. The list was extremely quick to render and draw to the screen but suffered severely when it came to finding and manipulating points already stored. In contrast, the three-dimensional array was able to erase or manipulate points effectively, but its draw time rendered the structure effectively useless, taking huge amounts of time to draw each frame. The focus of this research was on how a 3D sketching package would go about storing and accessing the digital ink. This is just a basis for further research in this area, and many issues touched upon in this paper will require a more in-depth analysis. The primary area of this future research would be the creation of an effective user interface and the introduction of regular sketching-package features such as the saving and loading of images.
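
    For context on the data structure the tests favoured, a minimal point octree over 3D ink samples might look like the sketch below. Sketch3D's actual implementation is not shown in the abstract, so the class layout, leaf capacity and splitting rule here are illustrative assumptions.

```cpp
#include <array>
#include <cstddef>
#include <memory>
#include <vector>

struct Point { float x, y, z; };   // one sample of digital ink

// Minimal point octree: leaves hold up to kLeafCapacity points, then split.
class Octree {
public:
    Octree(Point centre, float halfSize) : centre_(centre), half_(halfSize) {}

    void insert(const Point& p) {
        if (!children_[0] && points_.size() < kLeafCapacity) {
            points_.push_back(p);              // still a leaf with room
            return;
        }
        if (!children_[0]) subdivide();        // split a full leaf
        children_[childIndex(p)]->insert(p);
    }

private:
    static constexpr std::size_t kLeafCapacity = 32;   // illustrative value

    int childIndex(const Point& p) const {     // which octant holds p
        return (p.x >= centre_.x) | ((p.y >= centre_.y) << 1) |
               ((p.z >= centre_.z) << 2);
    }

    void subdivide() {
        float q = half_ * 0.5f;
        for (int i = 0; i < 8; ++i) {
            Point c{centre_.x + ((i & 1) ? q : -q),
                    centre_.y + ((i & 2) ? q : -q),
                    centre_.z + ((i & 4) ? q : -q)};
            children_[i] = std::make_unique<Octree>(c, q);
        }
        for (const Point& p : points_) children_[childIndex(p)]->insert(p);
        points_.clear();                        // points now live in children
    }

    Point centre_;
    float half_;
    std::vector<Point> points_;
    std::array<std::unique_ptr<Octree>, 8> children_{};
};
```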

    An Advanced, Three-Dimensional Plotting Library for Astronomy

    We present a new, three-dimensional (3D) plotting library with advanced features, and support for standard and enhanced display devices. The library - S2PLOT - is written in C and can be used by C, C++ and FORTRAN programs on GNU/Linux and Apple/OSX systems. S2PLOT draws objects in a 3D (x,y,z) Cartesian space and the user interactively controls how this space is rendered at run time. With a PGPLOT-inspired interface, S2PLOT provides astronomers with elegant techniques for displaying and exploring 3D data sets directly from their program code, and the potential to use stereoscopic and dome display devices. The S2PLOT architecture supports dynamic geometry and can be used to plot time-evolving data sets, such as might be produced by simulation codes. In this paper, we introduce S2PLOT to the astronomical community, describe its potential applications, and present some example uses of the library. Comment: 12 pages, 10 EPS figures (higher-resolution versions available from http://astronomy.swin.edu.au/s2plot/paperfigures). The S2PLOT library is available for download from http://astronomy.swin.edu.au/s2plo
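
    A minimal S2PLOT program, in the spirit of the "hello world" examples distributed with the library, is sketched below. The call names are quoted from memory of those examples and should be checked against the installed release.

```cpp
// Minimal sketch of an S2PLOT program (callable from C or C++);
// verify function names against the S2PLOT release you have installed.
#include "s2plot.h"

int main(int argc, char** argv) {
    s2opend("/?", argc, argv);                 // prompt for a display device
    s2swin(-1.f, 1.f, -1.f, 1.f, -1.f, 1.f);   // set the 3D world-coordinate window
    s2pt1(0.f, 0.f, 0.f, 1);                   // draw a single point at the origin
    s2show(1);                                 // hand control to the interactive loop
    return 0;
}
```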

    New metric products, movies and 3D models from old stereopairs and their application to the in situ palaeontological site of Ambrona

    This paper is based on the information gathered in the following project: LDGP_mem_006-1, "[S_Ambrona_Insitu] Levantamiento fotogramétrico del yacimiento paleontológico “Museo in situ” de Ambrona (Soria)", http://hdl.handle.net/10810/7353. 3D modelling tools based on photographic images have experienced significant improvements in recent years. One of the most outstanding changes is the spread of photogrammetric systems based on algorithms referred to as Structure from Motion (SfM), in contrast with the traditional stereoscopic pairs. Nevertheless, the availability of important collections of stereoscopic records gathered during past decades invites us to explore the possibilities of re-using these photographs in order to generate new multimedia products, especially since many of the documented elements have been largely altered or have even disappeared. This article analyses an example of such re-use applied to a collection of photographs from the palaeontological site of Ambrona (Soria, Spain). More specifically, different pieces of software based on Structure from Motion (SfM) algorithms are tested for the generation of 3D models with photographic textures, and some derived products such as orthoimages, videos or Augmented Reality (AR) applications are presented.