
    End-to-end Projector Photometric Compensation

    Projector photometric compensation aims to modify a projector input image so that it compensates for the disturbance introduced by the appearance of the projection surface. In this paper, for the first time, we formulate the compensation problem as an end-to-end learning problem and propose a convolutional neural network, named CompenNet, to implicitly learn the complex compensation function. CompenNet consists of a UNet-like backbone network and an autoencoder subnet. This architecture encourages rich multi-level interactions between the camera-captured projection surface image and the input image, and thus captures both photometric and environment information of the projection surface. In addition, the visual details and interaction information are carried to deeper layers along the multi-level skip convolution layers. The architecture is of particular importance for the projector compensation task, for which only a small training dataset is allowed in practice. Another contribution we make is a novel evaluation benchmark, which is independent of system setup and thus quantitatively verifiable. Such a benchmark was not previously available, to our best knowledge, because conventional evaluation requires the hardware system to actually project the final results. Our key idea, motivated by our end-to-end problem formulation, is to use a reasonable surrogate to avoid the projection process and so remain setup-independent. Our method is evaluated carefully on the benchmark, and the results show that our end-to-end learning solution outperforms state-of-the-art methods both qualitatively and quantitatively by a significant margin.
    Comment: To appear in the 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Source code and dataset are available at https://github.com/BingyaoHuang/compenne
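The classical baseline that CompenNet generalizes can be sketched in a few lines: under a purely multiplicative surface model (camera-perceived image ≈ projector input × surface albedo), compensation is a per-pixel division. This is a minimal illustrative sketch, not the paper's learned method; the function name, the multiplicative model, and the clipping behaviour are assumptions for illustration.

```python
import numpy as np

def compensate(target, surface_albedo, eps=1e-6):
    """Per-pixel photometric compensation under a simple multiplicative
    surface model: camera ~ projector_input * albedo. CompenNet learns a
    far richer, spatially coupled mapping; this is only the baseline.
    All names and the model itself are illustrative assumptions."""
    comp = target / np.maximum(surface_albedo, eps)
    # A projector cannot emit more than full intensity, so clip to [0, 1];
    # dark surface patches therefore saturate (a limit CompenNet must
    # also respect).
    return np.clip(comp, 0.0, 1.0)

# A uniform grey target over patches of decreasing albedo needs an
# increasingly boosted projector input.
target = np.full((2, 2), 0.5)
albedo = np.array([[1.0, 0.8],
                   [0.5, 0.25]])
x = compensate(target, albedo)
```

Note how the lowest-albedo patch saturates at 1.0: the required boost (0.5 / 0.25 = 2.0) exceeds the projector's range, which is exactly the kind of limitation that motivates learning the full compensation function rather than inverting a simple model.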

    Future Directions in Astronomy Visualisation

    Despite the large budgets spent annually on astronomical research equipment such as telescopes, instruments and supercomputers, the general trend is to analyse and view the resulting datasets using small, two-dimensional displays. We report here on alternative advanced image displays, with an emphasis on displays that we have constructed, including stereoscopic projection, multiple projector tiled displays and a digital dome. These displays can provide astronomers with new ways of exploring the terabyte and petabyte datasets that are now regularly being produced from all-sky surveys, high-resolution computer simulations, and Virtual Observatory projects. We also present a summary of the Advanced Image Displays for Astronomy (AIDA) survey, which we conducted from March-May 2005 in order to raise some issues pertinent to the current and future level of use of advanced image displays.
    Comment: 13 pages, 2 figures, accepted for publication in PAS

    3D Camouflaging Object using RGB-D Sensors

    This paper proposes a new optical camouflage system that uses RGB-D cameras for acquiring a point cloud of the background scene and for tracking the observer's eyes. This system enables a user to conceal an object located behind a display that is surrounded by 3D objects. If the tracked position of the observer's eyes is treated as a light source, the system estimates the shadow shape of the display device that falls on the objects in the background. The system uses the 3D position of the observer's eyes and the locations of the display corners to predict their shadow points, which have nearest neighbors in the constructed point cloud of the background scene.
    Comment: 6 pages, 12 figures, 2017 IEEE International Conference on SM
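The eye-as-light-source idea can be sketched directly: cast a ray from the eye through a display corner, then find the background point nearest to that ray, which approximates where the corner's "shadow" lands. This is a brute-force illustrative sketch under assumed names and parameters, not the paper's implementation.

```python
import numpy as np

def shadow_point(eye, corner, cloud, n_samples=100, reach=5.0):
    """Cast a ray from the eye position through a display corner and
    return the background-cloud point nearest to that ray (a brute-force
    nearest-neighbour query over sampled ray points). The sampling scheme
    and parameter names are illustrative assumptions."""
    d = corner - eye
    d = d / np.linalg.norm(d)
    # Sample the ray beyond the display out to a fixed reach.
    ts = np.linspace(0.0, reach, n_samples)
    samples = corner + ts[:, None] * d                      # (n_samples, 3)
    # Distance from every cloud point to every ray sample, then pick the
    # cloud point with the smallest distance to the ray.
    dists = np.linalg.norm(cloud[None, :, :] - samples[:, None, :], axis=2)
    return cloud[np.argmin(dists.min(axis=0))]

eye = np.array([0.0, 0.0, 0.0])
corner = np.array([0.0, 0.0, 1.0])     # display corner straight ahead
cloud = np.array([[0.0, 0.0, 3.0],     # background point on the ray
                  [1.0, 1.0, 3.0]])    # off-ray background point
p = shadow_point(eye, corner, cloud)
```

Running this for all four display corners would outline the display's shadow region on the background cloud; in practice a spatial index (e.g. a k-d tree) would replace the brute-force distance matrix.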

    An Advanced, Three-Dimensional Plotting Library for Astronomy

    We present a new, three-dimensional (3D) plotting library with advanced features, and support for standard and enhanced display devices. The library - S2PLOT - is written in C and can be used by C, C++ and FORTRAN programs on GNU/Linux and Apple/OSX systems. S2PLOT draws objects in a 3D (x,y,z) Cartesian space and the user interactively controls how this space is rendered at run time. With a PGPLOT-inspired interface, S2PLOT provides astronomers with elegant techniques for displaying and exploring 3D data sets directly from their program code, and the potential to use stereoscopic and dome display devices. The S2PLOT architecture supports dynamic geometry and can be used to plot time-evolving data sets, such as might be produced by simulation codes. In this paper, we introduce S2PLOT to the astronomical community, describe its potential applications, and present some example uses of the library.
    Comment: 12 pages, 10 eps figures (higher resolution versions available from http://astronomy.swin.edu.au/s2plot/paperfigures). The S2PLOT library is available for download from http://astronomy.swin.edu.au/s2plo

    Conceptual design study of a visual system for a rotorcraft simulator and some advances in platform motion utilization

    A conceptual design of a visual system for a rotorcraft flight simulator is presented. Drive logic elements for a coupled motion base for such a simulator are also given. The design is the result of an assessment of many potential arrangements of electro-optical elements and is a concept considered feasible for the application. The motion drive elements represent an example logic for a coupled motion base and are essentially an appeal to the designers of such logic to combine their washout and braking functions.

    Temporal phase unwrapping using deep learning

    The multi-frequency temporal phase unwrapping (MF-TPU) method, as a classical phase unwrapping algorithm for fringe projection profilometry (FPP), is capable of eliminating phase ambiguities even in the presence of surface discontinuities or spatially isolated objects. For the simplest and most efficient case, two sets of 3-step phase-shifting fringe patterns are used: the high-frequency set is for 3D measurement and the unit-frequency set is for unwrapping the phase obtained from the high-frequency pattern set. The final measurement precision or sensitivity is determined by the number of fringes used within the high-frequency pattern, under the precondition that the phase can be successfully unwrapped without triggering fringe order errors. Consequently, in order to guarantee a reasonable unwrapping success rate, the fringe number (or period number) of the high-frequency fringe patterns is generally restricted to about 16, resulting in limited measurement accuracy. On the other hand, using additional intermediate sets of fringe patterns can unwrap phases of higher frequency, but at the expense of a prolonged pattern sequence. Inspired by recent successes of deep learning techniques in computer vision and computational imaging, in this work we report that deep neural networks can learn to perform TPU after appropriate training, termed deep-learning-based temporal phase unwrapping (DL-TPU), which can substantially improve the unwrapping reliability compared with MF-TPU even in the presence of different types of error sources, e.g., intensity noise, low fringe modulation, and projector nonlinearity. We further experimentally demonstrate for the first time, to our knowledge, that the high-frequency phase obtained from 64-period 3-step phase-shifting fringe patterns can be directly and reliably unwrapped from one unit-frequency phase using DL-TPU.
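The classical MF-TPU rule that the network improves upon is compact: the unit-frequency phase (which is already unambiguous) predicts the integer fringe order k of the wrapped high-frequency phase, and the unwrapped phase is recovered as phi + 2*pi*k. The sketch below shows this standard formulation, not the paper's network; variable names are illustrative.

```python
import numpy as np

def mf_tpu(phi_high, phi_unit, freq_ratio):
    """Classical multi-frequency temporal phase unwrapping.
    phi_high : wrapped high-frequency phase in (-pi, pi]
    phi_unit : unit-frequency phase (no wrapping ambiguity)
    freq_ratio : ratio of high to unit frequency (e.g. 16)
    The fringe order k is the integer that makes the scaled unit phase
    agree with the wrapped high-frequency phase. Noise in phi_unit shifts
    the rounding and triggers fringe-order errors, which is why the
    abstract restricts freq_ratio to about 16 for MF-TPU."""
    k = np.round((freq_ratio * phi_unit - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Noise-free demonstration: a 16-period phase ramp wrapped into
# (-pi, pi] is recovered exactly from the unit-frequency phase.
true_phase = np.linspace(0, 16 * 2 * np.pi, 200, endpoint=False)
wrapped = np.angle(np.exp(1j * true_phase))   # wrap to (-pi, pi]
unit_phase = true_phase / 16                  # ideal unit-frequency phase
unwrapped = mf_tpu(wrapped, unit_phase, 16)
```

With noise added to `unit_phase`, the rounding in `k` starts to fail as `freq_ratio` grows, which is exactly the failure mode DL-TPU is reported to suppress.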

    Locating image presentation technology within pedagogic practice

    This article presents data gathered through a University for the Creative Arts Learning and Teaching Research Grant (2009-2010), including a study of existing image presentation tools, both digital and non-digital, and an analysis of data from four interviews and an online questionnaire. The aim of the research was to look afresh at available technology from the point of view of a lecturer in the visual arts, and to use the information gathered to examine that technology more critically.