60 research outputs found

    A generalized sub-pixel mapping algorithm for auto-stereoscopic displays using slanted optical plates

    Auto-stereoscopic displays using slanted optical plates have an inherent subpixel rasterization, unlike normal 2D displays, and the mappings between subpixel positions and multi-view indices vary with the number of views and the slant angle of the optical plates. In this paper, we derive a simple but generalized formula for subpixel mapping from a naïve ray-tracing analysis of RGB-stripe panels. To verify the proposed algorithm, a prototype auto-stereoscopic display using slanted parallax barriers was built and examined in the experiment. The proposed algorithm is expected to facilitate converting multi-view 3D inputs into various types of auto-stereoscopic content in real time.
    2nd International Conference on Artificial Intelligence in Information and Communication (ICAIIC 2020), February 19-21, 2020, Fukuoka, Japan
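
    As an illustration of what such a slanted-plate subpixel mapping can look like in practice, the sketch below implements a van Berkel-style view assignment for an RGB-stripe panel; the function name, the parameters, and the exact form of the formula are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def view_index_map(rows, cols, n_views, slant_tan, x_offset=0.0):
    """Assign a view index to every RGB subpixel of an RGB-stripe panel.

    Hypothetical van Berkel-style mapping (not the paper's exact formula):
    the view phase of subpixel column k on pixel row l is
        phase(k, l) = (k + x_offset - 3 * l * slant_tan) mod X,
    where X is the horizontal period of the optical plate in subpixels.
    """
    X = float(n_views)               # assume the barrier/lens period spans n_views subpixels
    k = np.arange(3 * cols)          # subpixel column index (R, G, B per pixel)
    l = np.arange(rows)[:, None]     # pixel row index
    phase = (k + x_offset - 3.0 * l * slant_tan) % X
    return np.floor(phase / X * n_views).astype(int)   # integer view index in [0, n_views)

# Example: an 8-view panel with a barrier slanted by arctan(1/6)
views = view_index_map(rows=1080, cols=1920, n_views=8, slant_tan=1.0 / 6.0)
print(views.shape, views.min(), views.max())   # (1080, 5760) 0 7
```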

    Quality of Experience in Immersive Video Technologies

    Over the last decades, several technological revolutions have impacted the television industry, such as the shifts from black & white to color and from standard to high-definition. Nevertheless, further considerable improvements can still be achieved to provide a better multimedia experience, for example with ultra-high-definition, high dynamic range & wide color gamut, or 3D. These so-called immersive technologies aim at providing better, more realistic, and emotionally stronger experiences. To measure quality of experience (QoE), subjective evaluation is the ultimate means since it relies on a pool of human subjects. However, reliable and meaningful results can only be obtained if experiments are properly designed and conducted following a strict methodology. In this thesis, we build a rigorous framework for subjective evaluation of new types of image and video content. We propose different procedures and analysis tools for measuring QoE in immersive technologies. As immersive technologies capture more information than conventional technologies, they have the ability to provide more details, enhanced depth perception, as well as better color, contrast, and brightness. To measure the impact of immersive technologies on the viewers' QoE, we apply the proposed framework for designing experiments and analyzing collected subjects' ratings. We also analyze eye movements to study human visual attention during immersive content playback. Since immersive content carries more information than conventional content, efficient compression algorithms are needed for storage and transmission using existing infrastructures. To determine the required bandwidth for high-quality transmission of immersive content, we use the proposed framework to conduct meticulous evaluations of recent image and video codecs in the context of immersive technologies. Subjective evaluation is time-consuming, expensive, and not always feasible. Consequently, researchers have developed objective metrics to automatically predict quality. To measure the performance of objective metrics in assessing immersive content quality, we perform several in-depth benchmarks of state-of-the-art and commonly used objective metrics. For this aim, we use ground-truth quality scores collected under our subjective evaluation framework. To improve QoE, we propose different systems for stereoscopic and autostereoscopic 3D displays in particular. The proposed systems can help reduce the artifacts generated at the visualization stage, which impact picture quality, depth quality, and visual comfort. To demonstrate the effectiveness of these systems, we use the proposed framework to measure viewers' preference between these systems and standard 2D & 3D modes. In summary, this thesis tackles the problems of measuring, predicting, and improving QoE in immersive technologies. To address these problems, we build a rigorous framework and apply it through several in-depth investigations. We put essential concepts of multimedia QoE under this framework. These concepts are not only of a fundamental nature, but have also shown their impact in very practical applications. In particular, the JPEG, MPEG, and VCEG standardization bodies have adopted these concepts to select technologies proposed for standardization and to validate the resulting standards in terms of compression efficiency.
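
    As a reminder of how such metric benchmarks are typically scored, the short sketch below computes the Pearson (PLCC) and Spearman (SROCC) correlations between a metric's predictions and subjective mean opinion scores; the numbers are placeholder data, not results from the thesis.

```python
import numpy as np
from scipy import stats

# Placeholder data: subjective mean opinion scores (MOS) from a hypothetical
# experiment, and the corresponding predictions of an objective quality metric.
mos = np.array([4.2, 3.8, 2.1, 1.5, 3.3, 4.7, 2.9])
metric_scores = np.array([0.92, 0.85, 0.40, 0.22, 0.70, 0.97, 0.55])

# PLCC measures prediction accuracy, SROCC measures prediction monotonicity;
# both are commonly reported when benchmarking objective quality metrics.
plcc, _ = stats.pearsonr(metric_scores, mos)
srocc, _ = stats.spearmanr(metric_scores, mos)
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```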

    Spatial Displays and Spatial Instruments

    The conference proceedings topics are divided into two main areas: (1) issues of spatial and picture perception raised by graphical electronic displays of spatial information; and (2) design questions raised by the practical experience of designers actually defining new spatial instruments for use in new aircraft and spacecraft. Each topic is considered from both a theoretical and an applied direction. Emphasis is placed on the discussion of phenomena and the determination of design principles.

    Geometric Accuracy Testing, Evaluation and Applicability of Space Imagery to the Small Scale Topographic Mapping of the Sudan

    The geometric accuracy, interpretability and the applicability of using space imagery for the production of small-scale topographic maps of the Sudan have been assessed. Two test areas were selected. The first test area was in the central Sudan, including the area between the Blue Nile and the White Nile and extending to Atbara in the Nile Province. The second test area was in the Red Sea Hills area, which has modern 1:100,000 scale topographic map coverage and has been covered by six types of imagery: Landsat MSS, TM and RBV; MOMS; Metric Camera (MC); and Large Format Camera (LFC). Geometric accuracy testing was carried out using a test field of well-defined control points whose terrain coordinates were obtained from the existing maps. The same points were measured on each of the images in a Zeiss Jena stereocomparator (Stecometer C II) and transformed into the terrain coordinate system using polynomial transformations in the case of the scanner and RBV images, and space resection/intersection, relative/absolute orientation and bundle adjustment in the case of the MC and LFC photographs. The two sets of coordinates were then compared. The planimetric accuracies (root mean square errors) obtained for the scanner and RBV images were: Landsat MSS +/-80 m; TM +/-45 m; RBV +/-40 m; and MOMS +/-28 m. The accuracies of the 3-dimensional coordinates obtained from the photographs were: MC: X=+/-16 m, Y=+/-16 m, Z=+/-30 m; and LFC: X=+/-14 m, Y=+/-14 m, Z=+/-20 m. The planimetric accuracy figures are compatible with the specifications for topographic maps at scales of 1:250,000 in the case of MSS; 1:125,000 in the case of TM and RBV; and 1:100,000 in the case of MOMS. The planimetric accuracies (vector = +/-20 m) achieved with the two space cameras are compatible with topographic mapping at 1:60,000 to 1:70,000 scale. However, the spot height accuracies of +/-20 to +/-30 m - equivalent to a contour interval of 50 to 60 m - fall short of the required heighting accuracies for 1:60,000 to 1:100,000 scale mapping. The interpretation tests carried out on the MSS, TM, and RBV images showed that, while the main terrain features (hills, ridges, wadis, etc.) can be mapped reasonably well, there was an almost complete failure to pick up the cultural features - towns, villages, roads, railways, etc. - present in the test areas. The high-resolution MOMS images and the space photographs were much more satisfactory in this respect, though the cultural features were still difficult to pick up because the buildings and roads are built of local materials and exhibit little contrast on the images.
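
    To make the accuracy-testing procedure concrete, the sketch below fits a first-order polynomial (affine) transformation from image to terrain coordinates at control points and reports the planimetric root mean square error at check points; it is a simplified illustration under assumed data, not the study's actual adjustment, which also used higher-order polynomials and photogrammetric bundle adjustment.

```python
import numpy as np

def fit_affine_transform(img_xy, terrain_xy):
    """Least-squares first-order polynomial mapping image (x, y) -> terrain (E, N)."""
    A = np.column_stack([np.ones(len(img_xy)), img_xy[:, 0], img_xy[:, 1]])  # [1, x, y]
    coeffs, *_ = np.linalg.lstsq(A, terrain_xy, rcond=None)                  # shape (3, 2)
    return coeffs

def planimetric_rmse(img_xy, terrain_xy, coeffs):
    """RMSE of transformed check points, per axis and as a planimetric vector."""
    A = np.column_stack([np.ones(len(img_xy)), img_xy[:, 0], img_xy[:, 1]])
    residuals = A @ coeffs - terrain_xy
    rmse_en = np.sqrt(np.mean(residuals ** 2, axis=0))    # RMSE in Easting, Northing
    return rmse_en, float(np.hypot(*rmse_en))              # vector RMSE

# Hypothetical usage with comparator measurements (mm) and map coordinates (m):
# coeffs = fit_affine_transform(control_img_xy, control_terrain_xy)
# rmse_en, rmse_vec = planimetric_rmse(check_img_xy, check_terrain_xy, coeffs)
```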

    Camera based Display Image Quality Assessment

    This thesis presents the outcomes of research carried out by the PhD candidate Ping Zhao from 2012 to 2015 at Gjøvik University College. The underlying research was part of the HyPerCept project, in the program of Strategic Projects for University Colleges, funded by The Research Council of Norway. The research was conducted under the supervision of Professor Jon Yngve Hardeberg and the co-supervision of Associate Professor Marius Pedersen, from The Norwegian Colour and Visual Computing Laboratory, Faculty of Computer Science and Media Technology, Gjøvik University College, as well as the co-supervision of Associate Professor Jean-Baptiste Thomas, from the Laboratoire Electronique, Informatique et Image, Faculty of Computer Science, Université de Bourgogne. The main goal of this research was to develop a fast and inexpensive camera-based display image quality assessment framework. Due to the limited time frame, we decided to focus only on projection displays with static images displayed on them. However, the proposed methods were not limited to projection displays, and they were expected to work with other types of displays, such as desktop monitors, laptop screens, smart phone screens, etc., with limited modifications. The primary contributions from this research can be summarized as follows:
    1. We proposed a camera-based display image quality assessment framework, which was originally designed for projection displays but can be used for other types of displays with limited modifications.
    2. We proposed a method to calibrate the camera in order to eliminate the unwanted vignetting artifact, which is mainly introduced by the camera lens.
    3. We proposed a method to optimize the camera's exposure with respect to the measured luminance of incident light, so that after the calibration all camera sensors share a common linear response region.
    4. We proposed a marker-less and view-independent method to register a captured image with its original at a sub-pixel level, so that we can incorporate existing full-reference image quality metrics without modifying them.
    5. We identified spatial uniformity, contrast and sharpness as the most important image quality attributes for projection displays, and we used the proposed framework to evaluate the prediction performance of state-of-the-art image quality metrics regarding these attributes.
    The proposed image quality assessment framework is the core contribution of this research. Compared to conventional image quality assessment approaches, which are largely based on colorimeter or spectroradiometer measurements, using a camera as the acquisition device has the advantages of quickly recording all displayed pixels in one shot and of being relatively inexpensive. Therefore, the consumption of time and resources for image quality assessment can be largely reduced. We proposed a method to calibrate the camera in order to eliminate the unwanted vignetting artifact primarily introduced by the camera lens. We used a hazy sky as a closely uniform light source, and the vignetting mask was generated from the median sensor responses over only a few rotated shots of the same spot on the sky. We also proposed a method to quickly determine whether all camera sensors were sharing a common linear response region.
In order to incorporate existing full-reference image quality metrics without modifying them, an accurate registration of pairs of pixels between a captured image and its original is required. We proposed a marker-less and view-independent image registration method to solve this problem. The experimental results showed that the proposed method worked well in viewing conditions with low ambient light. We further identified spatial uniformity, contrast and sharpness as the most important image quality attributes for projection displays. Subsequently, we used the developed framework to objectively evaluate the prediction performance of state-of-the-art image quality metrics regarding these attributes in a robust manner. In this process, the metrics were benchmarked with respect to the correlations between their predictions and the perceptual ratings collected from subjective experiments. The analysis of the experimental results indicated that our proposed methods were effective and efficient. Subjective experiments are an essential component of image quality assessment; however, they can be time- and resource-consuming, especially in cases where additional image distortion levels are required to extend existing subjective experimental results. For this reason, we investigated the possibility of extending subjective experiments with a baseline adjustment method, and we found that the method could work well if appropriate strategies were applied. These strategies concern which distortion levels to include in the baseline, as well as how many of them.
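
A minimal sketch of the flat-field idea behind the vignetting calibration is given below: a mask is estimated from the median of several shots of a nearly uniform target (such as the same patch of hazy sky with the camera rotated between shots) and then divided out of captured display images. The function names and details are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def estimate_vignetting_mask(flat_shots):
    """Estimate a vignetting mask from shots of a nearly uniform light source.

    Illustrative assumption: take the per-pixel median over the shots and
    normalize so the least vignetted pixel has value 1.0.
    """
    median_response = np.median(np.stack(flat_shots, axis=0), axis=0)
    return median_response / median_response.max()

def correct_vignetting(captured, mask, eps=1e-6):
    """Flatten lens fall-off by dividing the captured image by the mask."""
    return captured / np.clip(mask, eps, None)

# Hypothetical usage (grayscale float images taken at the same exposure):
# mask = estimate_vignetting_mask([shot1, shot2, shot3])
# corrected = correct_vignetting(captured_display_image, mask)
```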

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Optimization of the holographic process for imaging and lithography

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 272-297).
    Since their invention in 1948 by Dennis Gabor, holograms have proven to be important components of a variety of optical systems, and their implementation in new fields and methods is expected to continue growing. Their ability to encode 3D optical fields on a 2D plane opened the possibility of novel applications for imaging and lithography. In the traditional form, holograms are produced by the interference of a reference wave and an object wave, recording the phase and amplitude of the complex field. The holographic process has been extended to include different recording materials and methods. The increasing demand for holographic-based systems is accompanied by a need for efficient optimization tools designed to maximize the performance of the optical system. In this thesis, a variety of multi-domain optimization tools designed to improve the performance of holographic optical systems are proposed. These tools are designed to be robust, computationally efficient and sufficiently general to be applied when designing various holographic systems. All the major forms of holographic elements are studied: computer generated holograms, thin and thick conventional holograms, numerically simulated holograms and digital holograms. Novel holographic optical systems for imaging and lithography are proposed. In the case of lithography, a high-resolution system based on Fresnel-domain computer generated holograms (CGHs) is presented. The holograms are numerically designed using a reduced-complexity hybrid optimization algorithm (HOA) based on genetic algorithms (GAs) and the modified error reduction (MER) method. The algorithm is efficiently implemented on a graphics processing unit. Simulations as well as experimental results for CGHs fabricated using electron-beam lithography are presented. A method for extending the system's depth of focus is proposed. The HOA is extended for the design and optimization of multispectral CGHs applied to high-efficiency solar concentration and spectral splitting. A second lithographic system based on optically recorded total internal reflection (TIR) holograms is studied. A comparative analysis between scalar and vector diffraction theories for the modeling and simulation of the system is performed. A complete numerical model of the system is developed, including the photoresist response and first-order models for shrinkage of the holographic emulsion. A novel block-stitching algorithm is introduced for the calculation of large diffraction patterns, which allows overcoming current computational limitations of memory and processing time. The numerical model is used to optimize the system's performance as well as to redesign the mask to account for potential fabrication errors. The simulation results are compared to experimentally measured data. In the case of imaging, a segmented-aperture thin imager based on holographically corrected gradient index (GRIN) lenses is proposed. The compound system is constrained to a maximum thickness of 5 mm and utilizes an optically recorded hologram to correct high-order optical aberrations of the GRIN lens array. The imager is analyzed using system and information theories.
A multi-domain optimization approach based on GAs is implemented to maximize the system's channel capacity and hence improve the information extraction or encoding process. A decoding or reconstruction strategy is implemented using a superresolution algorithm. Experimental results for the optimization of the hologram's recording process and the tomographic measurement of the system's space-variant point spread function are presented. A second imaging system, for the measurement of complex fluid flows by tracking micron-sized particles using digital holography, is studied. A stochastic theoretical model based on a stability metric, similar to the channel capacity of a Gaussian channel, is presented and used to optimize the system. The theoretical model is first derived for the extreme case of point-source particles using Rayleigh scattering and scalar diffraction theory formulations. The model is then extended to account for particles of variable sizes using Mie theory for the scattering of homogeneous dielectric spherical particles. The influence and statistics of the particle-density-dependent cross-talk noise are studied. Simulation and experimental results for finding the optimum particle density based on the stability metric are presented. For all the studied systems, a sensitivity analysis is performed to predict and assist in the correction of potential fabrication or calibration errors.
by José Antonio Domínguez-Caballero. Ph.D.
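
The modified error reduction step mentioned above builds on the classic Gerchberg-Saxton alternating-projection idea. As orientation only, the sketch below shows a plain Fourier-domain error-reduction loop for a phase-only CGH; the thesis combines a modified version of this step with genetic algorithms and works in the Fresnel domain, so all choices here are illustrative assumptions.

```python
import numpy as np

def error_reduction_cgh(target_amplitude, iterations=50, seed=0):
    """Plain Gerchberg-Saxton-style error reduction for a phase-only CGH.

    Illustrative only: alternate between enforcing the desired amplitude in the
    reconstruction (Fourier) plane and the phase-only constraint in the
    hologram plane.
    """
    rng = np.random.default_rng(seed)
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amplitude.shape))
    for _ in range(iterations):
        far_field = np.fft.fftshift(np.fft.fft2(field))
        far_field = target_amplitude * np.exp(1j * np.angle(far_field))  # keep phase, fix amplitude
        field = np.fft.ifft2(np.fft.ifftshift(far_field))
        field = np.exp(1j * np.angle(field))                             # phase-only hologram constraint
    return np.angle(field)                                               # optimized phase map

# Hypothetical usage with a 256x256 target intensity pattern:
# hologram_phase = error_reduction_cgh(np.sqrt(target_intensity))
```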

    Remote Sensing Applications in Coastal Environment

    Coastal regions are susceptible to rapid changes, as they constitute the boundary between the land and the sea. The resilience of a particular segment of coast depends on many factors, including climate change, sea-level changes, natural and technological hazards, extraction of natural resources, population growth, and tourism. Recent research highlights the strong capabilities of remote sensing applications for monitoring, inventorying, and analyzing the coastal environment. This book contains 12 high-quality and innovative scientific papers that explore, evaluate, and implement the use of remote sensing sensors within both natural and built coastal environments.

    NASA Tech Briefs, August 1991

    Topics: New Product Ideas; NASA TU Services; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; Mathematics and Information Sciences; Life Sciences