
    Projector-Based Augmentation

    Get PDF
    Projector-based augmentation approaches hold the potential of combining the advantages of well-established spatial virtual reality and spatial augmented reality. Immersive, semi-immersive, and augmented visualizations can be realized in everyday environments – without the need for special projection screens and dedicated display configurations. Limitations of mobile devices, such as low resolution, small field of view, focus constraints, and ergonomic issues, can be overcome in many cases by the use of projection technology. Thus, applications that do not require mobility can benefit from efficient spatial augmentations. Examples range from edutainment in museums (such as storytelling projections onto natural stone walls in historical buildings) to architectural visualizations (such as augmentations of complex illumination simulations or modified surface materials in real building structures). This chapter describes projector-camera methods and multi-projector techniques that aim at correcting geometric aberrations, compensating local and global radiometric effects, and improving the focus properties of images projected onto everyday surfaces.
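    The geometric correction mentioned above is commonly bootstrapped with a projector-camera pair: the camera observes a known pattern projected onto the surface, and the projector image is pre-warped so that it appears undistorted from the viewing position. A minimal sketch using OpenCV, assuming a planar surface and hypothetical correspondence data (the point arrays and the simple homography model are assumptions of this illustration; curved surfaces need a dense per-pixel mapping):

        import cv2
        import numpy as np

        # Hypothetical correspondences: corners of a projected calibration
        # pattern as seen by the camera, and their known projector-space
        # coordinates. In practice these come from detecting a checkerboard
        # or structured-light pattern in the camera image.
        camera_pts = np.array([[102, 88], [518, 95], [510, 402], [98, 395]], dtype=np.float32)
        projector_pts = np.array([[0, 0], [799, 0], [799, 599], [0, 599]], dtype=np.float32)

        # Homography mapping desired on-surface (camera-view) positions to
        # projector pixels; valid only for a planar projection surface.
        H, _ = cv2.findHomography(camera_pts, projector_pts)

        def prewarp(target_image, projector_size=(800, 600)):
            """Pre-warp the target so it appears geometrically correct
            on the surface when projected."""
            return cv2.warpPerspective(target_image, H, projector_size)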

    A revised radiometric normalisation standard for SAR

    Full text link
    Improved geometric accuracy in SAR sensors implies that more complex models of the Earth may be used not only to geometrically rectify imagery, but also to more robustly calibrate their radiometry. Current beta, sigma, and gamma nought SAR radiometry conventions all assume a simple “flat as Kansas” Earth ellipsoid model. We complement these simple models with improved radiometric calibration that accounts for local terrain variations. In the era of ERS-1 and RADARSAT-1, image geolocation accuracy was in the order of multiple samples, and tiepointfree establishment of the relationship between radar and map geometries was not possible. Newer sensors such as ASAR, PALSAR, and TerraSAR-X all support accurate geolocation based on product annotations alone. We show that high geolocation accuracy, combined with availability of high-resolution accurate elevation models, enables a more robust radiometric calibration standard for modern SAR sensors that is based on gamma nought normalised using an Earth terrain-model
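    For reference, the ellipsoid-based conventions named above relate the measured radar brightness \beta^0 to the backscatter coefficients via the ellipsoidal incident angle \theta_E; the terrain-based normalisation replaces the ellipsoid reference area with one derived from the elevation model. A sketch of the standard relations (the area notation A_\beta / A_\gamma for the reference and DEM-derived areas is an assumption of this summary):

        \sigma^0_E = \beta^0 \, \sin\theta_E, \qquad
        \gamma^0_E = \frac{\sigma^0_E}{\cos\theta_E} = \beta^0 \, \tan\theta_E, \qquad
        \gamma^0_T = \beta^0 \, \frac{A_\beta}{A_\gamma}

    Here the terrain-flattened \gamma^0_T normalises by the local illuminated area A_\gamma integrated from the terrain model rather than by an angle on the ellipsoid, so the radiometry no longer assumes a “flat as Kansas” Earth.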

    Projector Compensation for Unconventional Projection Surface

    Get PDF
    Projecting onto irregular textured surfaces found on buildings, automobiles, and theatre stages calls for radiometric and geometric compensation algorithms that require no user intervention and compensate for the patterning and colourization of the background surface. This process needs a projector-camera setup in which feedback from the camera is used to learn the background's geometric and radiometric properties. In this thesis, radiometric compensation, which is used to correct for the distortion introduced by the background texture, is discussed in detail.
    Existing compensation frameworks assume no inter-pixel coupling and develop an independent compensation model for each projector pixel. This assumption is valid on backgrounds with uniform texture variation but fails at sharp contrast differences, leading to visible edge artifacts in the compensated image. To overcome the edge artifacts, a novel radiometric compensation approach is presented that directly learns the compensation model rather than inverting a learned forward model. That is, the proposed method uses spatially uniform camera images to learn the projector images that successfully hide the background. The proposed approach can be used with any existing radiometric compensation algorithm to improve its performance. Comparisons with classical and state-of-the-art methods show the superiority of the proposed method in terms of perceived image quality and computational complexity.
    The modified target image from the radiometric compensation algorithm can exceed the limited dynamic range of the projector, resulting in saturation artifacts in the compensated image. Since the achievable range of luminance on the background surface with a given projector is limited, projector compensation should also consider the contents of the target image, along with the background properties, when calculating the projector image. A novel spatially optimized luminance modification approach is proposed that uses properties of the human visual system to reduce the saturation artifacts. Here, the tolerance of the human visual system is exploited to make perceptually less sensitive modifications to the target image, which in turn reduces the luminance demands on the projector. The proposed spatial modification approach can be combined with any radiometric compensation model to improve its performance. Simulated results of the proposed luminance modification are evaluated to show the improvement in perceptual performance. The inverse approach combined with the spatial luminance modification concludes the proposed projector compensation, which enables optimal compensated projection on an arbitrary background surface.
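    For context, a widely used per-pixel forward model in classical projector radiometric compensation treats each camera pixel's RGB response C as an affine function of the projector input P: C = V P + F, where V is a 3x3 colour-mixing matrix and F captures ambient light reflected by the surface; compensation then inverts this model per pixel. A minimal numpy sketch of that baseline (the array names are illustrative; this is the forward-model inversion the thesis improves upon, not the thesis's own method):

        import numpy as np

        def fit_forward_model(P_samples, C_samples):
            """Fit the per-pixel affine model C = V @ P + F from calibration
            pairs of projector inputs (P) and observed camera colours (C).
            P_samples, C_samples: (N, 3) arrays for one pixel, N >= 4."""
            # Augment projector inputs with a constant 1 to absorb F.
            A = np.hstack([P_samples, np.ones((P_samples.shape[0], 1))])
            # Least-squares solve for the stacked [V | F] parameters.
            M, _, _, _ = np.linalg.lstsq(A, C_samples, rcond=None)
            V, F = M[:3].T, M[3]
            return V, F

        def compensate(C_target, V, F):
            """Projector input that should make the camera observe C_target."""
            P = np.linalg.solve(V, C_target - F)
            # Clip to the projector's valid range; values pushed outside it
            # are the saturation artifacts the thesis addresses.
            return np.clip(P, 0.0, 1.0)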

    Radiometric Compensation of Nonlinear Projector Camera Systems by Modeling Human Visual Systems

    Get PDF
    Radiometric compensation is the process of adjusting the luminance and colour output of images on a display to compensate for non-uniformity of the display. In the case of projector-camera systems, this non-uniformity can be a product of both the light source and of the projection surface. Conventional radiometric compensation techniques have been demonstrated to compensate the output of a projector to appear correct to a camera, but a camera does not possess the colour sensitivity and response of a human. By correctly modelling the interaction between a projector stimulus and camera and human colour responses, radiometric compensation can be performed for a human tristimulus colour model rather than that of the camera. The result is a colour gamut which is seen to be correct for a human viewer but not necessarily the camera. A novel radiometric compensation method for projector-camera systems and textured surfaces is introduced based on the human visual system (HVS) colour response. The proposed method for modelling human colour response can extend established compensation methods to produce colours which are human-perceived to be correct (egocentric modelling). As a result, this method performs radiometric compensation which is not only consistent and precise, but also produces images which are visually accurate to an external colour reference. Additionally, conventional radiometric compensation relies on a solution of a linear system for the colour response of each pixel in an image, but this is insufficient for modelling systems containing a nonlinear projector or camera. In the proposed method, nonlinear projector output or camera response has been modelled in a separable fashion to allow for the linear system solution for the human visual space to be applied to nonlinear projector-camera systems. The performance of the system is evaluated by comparison with conventional solutions in terms of computational speed, memory requirements, and accuracy of the colour compensation. Studies include the qualitative and quantitative assessment of the proposed compensation method on a variety of adverse surfaces, with varying colour and specularity which demonstrate the colour accuracy of the proposed method. By using a spectroradiometer outside of the calibration loop, this method is shown to produce generally the lowest average radiometric compensation error when compared to compensation performed using only the response of a camera, demonstrated through quantitative analysis of compensated colours, and supported by qualitative results

    Correction of Errors in Time of Flight Cameras

    Get PDF
    This thesis addresses the correction of errors in time-of-flight (ToF) depth cameras. Among the most recent technologies, continuous wave modulation (CWM) ToF cameras are a promising alternative for building compact and fast sensors. However, a wide variety of errors notably affect the depth measurement, compromising potential applications. Correcting these errors poses a challenging problem. Currently, two main sources of error are considered: i) systematic and ii) non-systematic. While the former admits calibration, the latter depends on the geometry and relative motion of the scene. This thesis proposes methods that address i) the systematic depth distortion and two of the most relevant sources of non-systematic error: ii.a) multipath interference (MpI) and ii.b) motion artifacts.
    Systematic depth distortion in ToF cameras arises mainly from the use of imperfect sinusoidal modulation signals. As a result, depth measurements appear distorted, and this distortion can be reduced with a calibration stage. This thesis proposes a calibration method based on showing the camera a plane at different positions and orientations. The method does not require calibration patterns and can therefore use the planes that naturally appear in the scene. The proposed method finds a function that yields the depth correction corresponding to each pixel, improving on existing methods in terms of accuracy, efficiency, and applicability.
    Multipath interference arises from the superposition of the signal reflected along different paths with the direct reflection, producing distortions that become most noticeable on convex surfaces. MpI is the cause of significant depth estimation errors in CWM ToF cameras. This thesis proposes a method that removes MpI from a single depth map. The proposed approach requires no information about the scene beyond the ToF measurements themselves. The method is based on a radiometric model of the measurements, which is used to estimate the undistorted depth map very accurately.
    One of the leading technologies for ToF depth imaging is based on the Photonic Mixer Device (PMD), which obtains depth by sequentially sampling the correlation between the modulation signal and the signal returning from the scene at different phase shifts. Under motion, PMD pixels capture different depths at each sampling stage, producing motion artifacts. The method proposed in this thesis for correcting these artifacts stands out for its speed and simplicity, and can easily be included in the camera hardware. The depth of each pixel is recovered by enforcing consistency between the correlation samples at the PMD pixel and in its local neighbourhood. This method yields accurate corrections, greatly reducing motion artifacts. Moreover, as a by-product of this method, the optical flow at moving contours can be obtained from a single capture.
    Despite being a very promising alternative for depth acquisition, ToF cameras still have to solve challenging problems concerning the correction of systematic and non-systematic errors. This thesis proposes effective methods to deal with these errors.
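    For background, the conventional four-phase CWM reconstruction referenced above recovers phase (and hence depth) from four correlation samples A0..A3 taken at phase offsets of 0°, 90°, 180°, and 270°; it is precisely because the samples are taken sequentially that motion between them creates artifacts. A minimal sketch of that standard reconstruction (not the thesis's correction method; the sign convention of the phase varies between sensors):

        import numpy as np

        C = 299_792_458.0  # speed of light, m/s

        def pmd_depth(A0, A1, A2, A3, f_mod=20e6):
            """Conventional 4-phase depth reconstruction for a CWM ToF/PMD
            pixel. A0..A3 are correlation samples at 0/90/180/270 degree
            phase offsets; f_mod is the modulation frequency in Hz."""
            # Phase delay of the returning signal, wrapped to [0, 2*pi).
            phase = np.arctan2(A3 - A1, A0 - A2) % (2 * np.pi)
            # Depth is half the round-trip distance implied by the phase delay.
            depth = C * phase / (4 * np.pi * f_mod)
            amplitude = 0.5 * np.sqrt((A3 - A1) ** 2 + (A0 - A2) ** 2)
            return depth, amplitude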

    Content creation for seamless augmented experiences with projection mapping

    Get PDF
    This dissertation explores systems and methods for creating projection mapping content that seamlessly merges the virtual and the physical. Most virtual reality and augmented reality technologies rely on screens for display and interaction, where a mobile device or head mounted display mediates the user's experience. In contrast, projection mapping uses off-the-shelf video projectors to augment the appearance of physical objects, and with projection mapping there is no screen to mediate the experience. The physical world simply becomes the display. Projection mapping can provide users with a seamless augmented experience, where virtual and physical become indistinguishable in an apparently unmediated way.
    Projection mapping is an old concept, dating to Disney's 1969 Haunted Mansion. The core technical foundations were laid back in 1999 with UNC's Office of the Future and Shader Lamps projects. Since then, projectors have gotten brighter and higher resolution, and have drastically decreased in price. Yet projection mapping has not crossed the chasm into mainstream use. The largest remaining challenge for projection mapping is that content creation is very difficult and time consuming. Content for projection mapping is still created via a tedious manual process, warping a 2D video file onto a 3D physical object using existing tools (e.g. Adobe Photoshop) that were not made for defining animated, interactive effects on 3D object surfaces. With existing tools, content must be created for each specific display object and cannot be re-used across experiences. For each object the artist wants to animate, the artist must manually create a custom texture for that specific object and warp the texture to the physical object. This limits projection mapped experiences to controlled environments and static scenes. If the artist wants to project onto a different object, they must start from scratch, creating custom content for that object. This manual content creation process is time consuming, expensive, and doesn't scale.
    This thesis explores new methods for creating projection mapping content. Our goal is to make projection mapping easier, cheaper, and more scalable. We explore methods for adaptive projection mapping, which enables artists to create content once and have that content adapt based on the color and geometry of the display surface. Content can be created once and re-used on any surface. This thesis is composed of three proof-of-concept prototypes exploring new methods of content creation for projection mapping. IllumiRoom expands video game content beyond the television screen and into the physical world, using a standard video projector to surround a television with projected light. IllumiRoom works in any living room; the projected content dynamically adapts based on the color and geometry of the room. RoomAlive expands on this idea, using multiple projectors to cover an entire living room in input/output pixels and dynamically adapting gaming experiences to fill the entire room. Finally, Projectibles focuses on the physical aspect of projection mapping. Projectibles optimizes the display surface color to increase the contrast and resolution of the overall experience, enabling artists to design the physical object along with the virtual content.
    The proof-of-concept prototypes presented in this thesis are aimed at the not-too-distant future. The projects in this thesis are not theoretical concepts, but fully working prototype systems that demonstrate the practicality of projection mapping for creating immersive experiences. It is the sincere hope of the author that these experiences quickly move out of the lab and into the real world.

    Extrinsic Parameter Calibration for Line Scanning Cameras on Ground Vehicles with Navigation Systems Using a Calibration Pattern

    Full text link
    Line scanning cameras, which capture only a single line of pixels, have been increasingly used on ground-based mobile and robotic platforms. In applications where it is advantageous to directly georeference the camera data to world coordinates, an accurate estimate of the camera's 6D pose is required. This paper focuses on the common case where a mobile platform is equipped with a rigidly mounted line scanning camera, whose pose is unknown, and a navigation system providing vehicle body pose estimates. We propose a novel method that estimates the camera's pose relative to the navigation system. The approach involves imaging and manually labelling a calibration pattern with distinctly identifiable points, triangulating these points from camera and navigation system data, and reprojecting them in order to compute a likelihood, which is maximised to estimate the 6D camera pose. Additionally, a Markov Chain Monte Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset. Tested on two different platforms, the method was able to estimate the pose to within 0.06 m / 1.05° and 0.18 m / 2.39°. We also propose several approaches to displaying and interpreting the 6D results in a human-readable way.
    Comment: Published in MDPI Sensors, 30 October 201
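    The calibration objective described above can be sketched as a maximum-likelihood optimisation over the six pose parameters, scoring candidate poses by the reprojection error of the labelled pattern points. The sketch below is a generic illustration under a Gaussian error model; the project() function, pose parameterisation, and focal length are hypothetical stand-ins for the paper's line-scanning camera model:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.spatial.transform import Rotation

        FOCAL = 2000.0  # assumed focal length in pixels (illustrative)

        def project(points_body, pose6d):
            """Hypothetical projection of 3D points (vehicle body frame) into
            the line camera, given the 6D body-to-camera offset
            pose6d = [tx, ty, tz, roll, pitch, yaw]."""
            R = Rotation.from_euler("xyz", pose6d[3:]).as_matrix()
            cam = (points_body - pose6d[:3]) @ R  # transform into camera frame
            # A line camera measures only one image coordinate per point.
            return FOCAL * cam[:, 1] / cam[:, 2]

        def neg_log_likelihood(pose6d, points_body, observed_u, sigma=1.0):
            """Gaussian reprojection likelihood of the labelled pattern points;
            minimising this maximises the pose likelihood."""
            residuals = project(points_body, pose6d) - observed_u
            return 0.5 * np.sum((residuals / sigma) ** 2)

        # points_body: (N, 3) triangulated points; observed_u: (N,) pixel coords
        # result = minimize(neg_log_likelihood, x0=np.zeros(6),
        #                   args=(points_body, observed_u))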