63,589 research outputs found

    Depth mapping of integral images through viewpoint image extraction with a hybrid disparity analysis algorithm

    Integral imaging is a technique capable of displaying 3-D images with continuous parallax in full natural color, and is one of the most promising methods for producing smooth 3-D images. Extracting depth information from integral images has applications ranging from remote inspection, robotic vision, medical imaging and virtual reality to content-based image coding and manipulation for integral-imaging-based 3-D TV. This paper presents a method of generating a depth map from unidirectional integral images through viewpoint image extraction, using a hybrid disparity analysis algorithm that combines multi-baseline, neighbourhood-constraint and relaxation strategies. It is shown that a depth map with few areas of uncertainty can be obtained from both computer-generated and photographically captured integral images using this approach, and that acceptable depth maps can be achieved from photographically captured integral images containing complicated object scenes.
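    The abstract does not spell out the disparity analysis, but the core idea of recovering depth from displaced viewpoint images can be sketched as a minimal block-matching search. This is a toy single-baseline sketch on hypothetical 1-D image rows; the paper's actual algorithm adds multi-baseline, neighbourhood-constraint and relaxation stages on 2-D viewpoint images.

    ```python
    def ssd(a, b):
        """Sum of squared differences between two equal-length windows."""
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def disparity_map(left, right, window=1, max_disp=4):
        """Per-pixel disparity by minimising SSD between windows taken from
        two viewpoint images (flattened here to 1-D greyscale rows)."""
        n = len(left)
        disp = [0] * n
        for i in range(window, n - window):
            ref = left[i - window:i + window + 1]
            best, best_d = float("inf"), 0
            for d in range(0, max_disp + 1):
                j = i - d  # candidate position in the other viewpoint image
                if j - window < 0:
                    break
                cost = ssd(ref, right[j - window:j + window + 1])
                if cost < best:
                    best, best_d = cost, d
            disp[i] = best_d
        return disp

    # Synthetic rows: the bright feature in `left` sits 2 pixels to the
    # right of its position in `right`, so textured pixels should get d = 2.
    right = [10, 10, 50, 90, 50, 10, 10, 10]
    left  = [10, 10, 10, 10, 50, 90, 50, 10]
    print(disparity_map(left, right, window=1, max_disp=3))
    ```

    Flat (textureless) regions are ambiguous under pure SSD matching, which is exactly the uncertainty the paper's neighbourhood-constraint and relaxation strategies are there to reduce.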

    Cosmic cookery : making a stereoscopic 3D animated movie.

    This paper describes our experience making a short stereoscopic movie visualizing the development of structure in the universe during the 13.7 billion years from the Big Bang to the present day. Aimed at a general audience for the Royal Society's 2005 Summer Science Exhibition, the movie illustrates how the latest cosmological theories based on dark matter and dark energy are capable of producing structures as complex as spiral galaxies, and allows the viewer to directly compare observations from the real universe with theoretical results. 3D is an inherent feature of the cosmology data sets, and stereoscopic visualization provides a natural way to present the images to the viewer, in addition to allowing researchers to visualize these vast, complex data sets. The presentation of the movie used passive, linearly polarized projection onto a 2 m wide screen, but it was also required to play back on a Sharp RD3D display and in anaglyph projection at venues without dedicated stereoscopic display equipment. Additionally, lenticular prints were made from key images in the movie. We discuss the following technical challenges during the stereoscopic production process: (1) controlling the depth presentation, (2) editing the stereoscopic sequences, and (3) generating compressed movies in display-specific formats. We conclude that the generation of high quality stereoscopic movie content using desktop tools and equipment is feasible. This does require careful quality control and manual intervention, but we believe these overheads are worthwhile when presenting inherently 3D data, as the result is significantly increased impact and better understanding of complex 3D scenes.

    A comparison of land-use determinations using data from ERTS-1 and high altitude aircraft

    A manual interpretation of ERTS-1 MSS system-corrected imagery has been performed on a study area within the Houston Area Test Site (HATS) to classify land use using the Level 1 categories proposed by the Department of the Interior. The two types of imagery used included: (1) black-and-white transparencies of each band enlarged to a scale of approximately 1:250,000 and (2) color transparencies composited from the computer-compatible tapes using the film recorder on a multispectral data analysis station. The results of this interpretation have been compared with the 1970 land use inventory of HATS, which was compiled using color Ektachrome imagery from high altitude aircraft (scale 1:120,000). Urban data from the same scene were also analyzed using a computer-aided (clustering) technique. The resulting clusters, representing areas of similar content, were compared with existing land use patterns in Houston. A technique was developed to correlate the spectral clusters to specific urban features on aircraft imagery by the location of specific, high-contrast objects in particular resolution elements. It was concluded that ERTS-1 data could be used to develop Level 1 and many Level 2 land use categories for regional inventories, and perhaps to some degree on a local level.
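    The computer-aided clustering step is not detailed in the abstract; unsupervised clustering of multispectral pixel vectors in that spirit can be sketched with a minimal k-means loop. The band values and the deterministic first/last-pixel initialisation below are assumptions of this sketch, not taken from the study.

    ```python
    def kmeans(pixels, k=2, iters=20):
        """Minimal k-means over multispectral pixel vectors (lists of band
        values). Deterministic initialisation: first and last pixel."""
        centres = [list(pixels[0]), list(pixels[-1])][:k]

        def nearest(p):
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centres]
            return dists.index(min(dists))

        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in pixels:
                clusters[nearest(p)].append(p)
            for ci, members in enumerate(clusters):
                if members:  # recompute centre as the per-band mean
                    centres[ci] = [sum(v) / len(members) for v in zip(*members)]
        return [nearest(p) for p in pixels], centres

    # Two synthetic 2-band spectral classes (e.g. low vs high reflectance).
    pixels = [[12, 8], [11, 9], [13, 7], [60, 55], [62, 58], [59, 54]]
    labels, centres = kmeans(pixels, k=2)
    print(labels)
    ```

    The comparison step in the paper, mapping each cluster back to known urban features via high-contrast reference objects, is what turns these unlabelled clusters into land-use categories.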

    High dynamic range imaging for archaeological recording

    This paper notes the adoption of digital photography as a primary recording means within archaeology, and reviews some issues and problems that this presents. Particular attention is given to the problems of recording high-contrast scenes in archaeology, and High Dynamic Range (HDR) imaging using multiple exposures is suggested as a means of providing an archive of high-contrast scenes that can later be tone-mapped to provide a variety of visualisations. Exposure fusion is also considered, although it is noted that this has some disadvantages. Three case studies are then presented: (1) a very high contrast photograph taken from within a rock-cut tomb at Cala Morell, Menorca; (2) an archaeological test pitting exercise requiring rapid acquisition of photographic records in challenging circumstances; and (3) legacy material consisting of three differently exposed colour positive (slide) photographs of the same scene. In each case, HDR methods are shown to significantly aid the generation of a high quality illustrative record photograph, and it is concluded that HDR imaging could serve an effective role in archaeological photographic recording, although there remain problems of archiving and distributing HDR radiance map data.
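    The abstract does not give the merging procedure. A common way to build a relative radiance map from bracketed exposures is a weighted average of each exposure's radiance estimate, sketched below under the simplifying assumption of a linear camera response; the hat weighting function and the pixel/exposure values are illustrative, not from the paper.

    ```python
    def weight(z, z_min=0.0, z_max=255.0):
        """Hat weighting: trust mid-range pixel values, distrust values
        near the noise floor or near saturation."""
        mid = 0.5 * (z_min + z_max)
        return (z - z_min) if z <= mid else (z_max - z)

    def merge_hdr(exposures, times):
        """Merge bracketed exposures of one scene into a relative radiance
        map. `exposures` are images as flat lists of 0-255 values (linear
        response assumed); `times` are the matching exposure times in s."""
        n = len(exposures[0])
        radiance = []
        for i in range(n):
            num = den = 0.0
            for img, t in zip(exposures, times):
                w = weight(img[i])
                num += w * (img[i] / t)   # per-exposure radiance estimate
                den += w
            radiance.append(num / den if den > 0 else 0.0)
        return radiance

    # A pixel clipped at 255 in the long exposure gets zero weight there,
    # so its radiance is recovered entirely from the short exposure.
    short = [10, 200]   # exposed for 1/100 s
    long_ = [100, 255]  # exposed for 1/10 s (second pixel saturated)
    print(merge_hdr([short, long_], [0.01, 0.1]))
    ```

    Real cameras are not linear, so practical pipelines first recover the camera response curve before merging; the archiving concern the paper raises is about storing the resulting floating-point radiance maps.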

    A computer vision model for visual-object-based attention and eye movements

    This is the post-print version of the final paper published in Computer Vision and Image Understanding; the published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document, and changes may have been made to this work since it was submitted for publication. Copyright @ 2008 Elsevier B.V. This paper presents a new computational framework for modelling visual-object-based attention and attention-driven eye movements within an integrated system, in a biologically inspired approach. Attention operates at multiple levels of visual selection (space, feature, object and group) depending on the nature of targets and visual tasks. Attentional shifts and gaze shifts are built upon common process circuits and control mechanisms but are also separated by their different functional roles, working together to fulfil flexible visual selection tasks in complicated visual environments. The framework integrates the important aspects of human visual attention and eye movements, resulting in sophisticated performance in complicated natural scenes. The proposed approach aims at exploring a useful visual selection system for computer vision, especially for use in cluttered natural visual environments. Funded by the National Natural Science Foundation of China.

    Feminist Phenomenology and the films of Sally Potter


    An Extended Virtual Aperture Imaging Model for Through-the-wall Sensing and Its Environmental Parameters Estimation

    Through-the-wall imaging (TWI) radar has been given increasing attention in recent years. However, prior knowledge about environmental parameters, such as wall thickness and dielectric constant, and the standoff distance between an array and a wall, is generally unavailable in real applications. Thus, targets behind the wall suffer from defocusing and displacement under conventional imaging operations. To solve this problem, in this paper we first set up an extended imaging model of a virtual aperture obtained by a multiple-input-multiple-output array, which accounts for the array's position relative to the wall and is thus more applicable to real situations. We then present a method to estimate the environmental parameters and calibrate the TWI, without requiring multiple measurements or dominant scatterers behind the wall to assist. Simulation and field experiments were performed to illustrate the validity of the proposed imaging model and the environmental parameter estimation method.
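    The displacement effect described above can be illustrated with a first-order delay calculation: inside the wall the wave slows by a factor of sqrt(eps_r), so a matched filter that assumes free space places the target too far away. This sketch assumes normal incidence and hypothetical wall parameters; the paper's model handles oblique rays and the full array geometry.

    ```python
    import math

    C = 299_792_458.0  # speed of light in vacuum, m/s

    def one_way_delay(standoff, wall_thickness, eps_r, behind):
        """One-way propagation time for a ray normal to the wall:
        free space (standoff) + wall (slowed by sqrt(eps_r)) + free space
        behind the wall."""
        return (standoff + behind) / C + wall_thickness * math.sqrt(eps_r) / C

    # 0.3 m standoff, 0.2 m brick wall (eps_r ~ 4), target 2 m behind it.
    t = one_way_delay(0.3, 0.2, 4.0, 2.0)

    # Delay excess over a pure free-space path of the same geometric length:
    extra = t - (0.3 + 0.2 + 2.0) / C
    print(extra * C)  # apparent extra range in metres if the wall is ignored
    ```

    Even this simple case shows why unknown wall parameters displace targets: a 0.2 m wall with eps_r = 4 adds 0.2 m of apparent one-way range, which is exactly the kind of bias the paper's parameter estimation is meant to remove.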