Dynamic Shadow Removal from Front Projection Displays
A technique and system for detecting radiometric variations/artifacts in a front-projected dynamic display region under observation by at least one camera. The display is comprised of one or more images projected from one or more of a plurality of projectors; the system is preferably calibrated using a projective relationship. A predicted image of the display region as seen by the camera is constructed using frame-buffer information from each projector contributing to the display, geometrically transformed for the camera and adjusted for relative image intensity. A detectable difference between the predicted image and the display region under observation triggers corrective adjustment of the image being projected from at least one projector. The corrective adjustment may be achieved by way of a pixel-wise approach (an alpha mask is constructed from the delta pixels/images) or a bounding-region approach (a difference/bounding region is sized to include the area of the display affected by the radiometric variation). Also disclosed: a technique, or method, for detecting a radiometric variation of a display region under observation, as well as associated computer-executable program code on a computer-readable storage medium therefor.
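As a rough illustration of the pixel-wise approach described above, the sketch below builds an alpha mask from the difference ("delta") between the predicted camera image and the observed one, then brightens the projector frame inside the shadowed region. The function names and the threshold/gain parameters are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def shadow_alpha_mask(predicted, observed, threshold=0.1):
    """Build an alpha mask from the delta between the predicted camera
    view and the observed display region (both float arrays in [0, 1])."""
    delta = predicted - observed          # positive where light is missing (shadow)
    mask = np.clip(delta, 0.0, 1.0)
    mask[mask < threshold] = 0.0          # ignore small differences (sensor noise)
    return mask

def corrected_frame(frame, mask, gain=1.0):
    """Boost the projector frame inside the shadowed region."""
    return np.clip(frame + gain * mask, 0.0, 1.0)
```

A bounding-region variant would instead take the mask's nonzero extent and adjust the whole enclosed rectangle.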
Monitoring and Correction of Geometric Distortion in Projected Displays
A technique, and associated system and computer-executable program code on a computer-readable storage medium, for automatically correcting distortion of a front-projected display under observation by at least one camera. The technique may be employed in a myriad of front-projected display environments, e.g., those in which single or multiple projectors and cameras are used. The technique includes: observing a first image, projected from at least one projector, comprising at least one target distribution of light intensities; for each conglomeration of white pixels of a difference image, computing a bounding box comprising the corresponding conglomeration of pixels in the camera's framebuffer information, computing a bounding box comprising the corresponding conglomeration of pixels in the projector's framebuffer information, computing an initial homography matrix, Htemp, mapping pixels of the projector's bounding box to those of the camera's bounding box, optimizing the initial homography matrix, and computing a central location, (Cx, Cy), of the camera's bounding box using the initial homography matrix; and, using a plurality of correspondence values comprising the correspondence, computing a corrective transform to aid in the automatic correction of the display.
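The homography-estimation step can be illustrated with a standard direct linear transform (DLT), a common way to compute such an initial 3x3 matrix from four or more point correspondences (e.g., bounding-box corners). This is a generic sketch, not the patented method; the function names are invented for illustration.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography H mapping src -> dst (lists of (x, y)
    pairs, at least 4, no three collinear) via the direct linear
    transform: stack two linear constraints per correspondence and
    take the SVD null-space vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize scale

def apply_h(H, pt):
    """Map a 2D point through the homography (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

The central location (Cx, Cy) of a camera-side box could then be found by pushing the projector box's center through the estimated H.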
Super-Resolution Overlay in Multi-Projector Displays
A technique, and associated system and computer-executable program code, for projecting a superimposed image onto a target display surface under observation by one or more cameras. A projective relationship between each projector being used and the target display surface is determined using a suitable calibration technique. A component image for each projector is then estimated using the information from the calibration and represented in the frequency domain. Each component image is estimated by using the projective relationship to determine a set of sub-sampled, regionally shifted images, represented in the frequency domain; each component image is then composed of a respective set of the sub-sampled, regionally shifted images. In an optimization step, the difference between a sum of the component images and a frequency-domain representation of a target image is minimized to produce a second, or subsequent, component image for each projector.
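A toy version of the optimization step might look like the following: find k component images whose sum matches a target in the frequency domain by iterating a simple gradient step on the squared residual. Treating each projector's geometric transform as the identity is an assumption made here for brevity; the actual method applies the calibrated projective relationship, which is what makes the sub-sampled, shifted structure non-trivial.

```python
import numpy as np

def optimize_components(target, k=2, iters=50, lr=0.5):
    """Toy frequency-domain optimization: estimate k component images
    whose sum approximates the target image, starting from blank frames
    and shrinking the residual by a fixed gradient step each iteration."""
    T = np.fft.fft2(target)
    comps = [np.zeros_like(T) for _ in range(k)]        # initial estimates
    for _ in range(iters):
        residual = T - sum(comps)                       # error vs. target spectrum
        comps = [c + lr * residual / k for c in comps]  # step on ||sum - T||^2
    return [np.real(np.fft.ifft2(c)) for c in comps]
```

Each step shrinks the residual by a factor of (1 - lr), so the component sum converges to the target spectrum.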
Automatic model acquisition and aerial image understanding
This thesis introduces a model-based technique for the automatic recognition and three-dimensional reconstruction of buildings directly from a single range image or stereo processing of multiple optical views of an urban site. Initially, focus-of-attention regions that are likely to contain buildings are segmented from the scene. A perceptual grouping algorithm detects building boundaries as closed polygons in the optical image. When a digital elevation map (DEM) is the only input source available, building regions are detected through direct analysis of the elevation data. Both methods then utilize the key idea of matching a database of shape models against the DEM using a model-indexing procedure that compares orientation histograms for each parameterized model in the database to a histogram that corresponds to the DEM region. The set of models (surfaces) that most closely match the DEM region are used as the initial estimates in a robust surface fitting technique that refines the model parameters (such as orientation and peak-roof angle) of each hypothesized roof surface. The surface model that converges to the DEM with the lowest residual fit error is retained as the most likely description of the surface. The database of surface models contains a limited number of canonical shapes common to rooftops, such as planes, peaks, domes, and gables. Reconstruction of complex shapes is achieved through a composition of different parameterizations of the canonical shape models. We show how the technique can be recursively applied to a range image to segment and reconstruct buildings as well as rooftop substructure. The ability of the model-indexing technique to separate surface models under different resolutions of the parameter space and different levels of noise in the DEM is studied. 
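The orientation-histogram comparison at the core of the model-indexing procedure can be sketched as follows: bin the gradient directions of a DEM patch and rank candidate shape models by histogram distance. The bin count, L1 distance, and function names are illustrative choices, not the thesis's exact formulation.

```python
import numpy as np

def orientation_histogram(dem, bins=8):
    """Normalized histogram of gradient directions for a DEM patch --
    the index key used to compare a region against candidate roof models."""
    gy, gx = np.gradient(dem.astype(float))
    angles = np.arctan2(gy, gx)
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def index_models(region, model_dems, bins=8):
    """Return the index of the candidate model whose orientation
    histogram is closest (L1 distance) to the region's."""
    h = orientation_histogram(region, bins)
    scores = [np.abs(h - orientation_histogram(m, bins)).sum() for m in model_dems]
    return int(np.argmin(scores))
```

The top-ranked models would then seed the robust surface fitting that refines orientation, peak-roof angle, and the other model parameters.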
The approach is evaluated on several datasets, and we demonstrate that this two-phase reconstruction approach allows robust and accurate reconstruction of a wide variety of building types. The building reconstruction process is at the heart of a general knowledge-driven system called Ascender II that incorporates contextual control of computer vision algorithms comprising a processing library. The system operates in the aerial image domain and is composed of a number of different computer vision algorithms that discriminate object classes based on evidence extracted from the available data. Algorithms are stored in evidence policies that encode contextual information about their data requirements and expected performance. Explicit knowledge about a site is stored in a Bayesian network that is used to fuse information gathered from the execution of a subset of the evidence policies on an image and forms the basis for automatic control of the library of algorithms. Based on the state of the Bayesian network and information encoded in the evidence policies, algorithms are selectively applied to the data in order to segment and recognize different object classes. Using this mechanism, the building reconstruction processes are more likely to be applied to building regions that have already been discriminated from other objects present in an urban area. Our conjecture is that this will lead to significantly better performance of the algorithms (fewer false positives, for example). The Ascender II system is evaluated on three different data sets. Acquired models are evaluated with respect to both geometric and semantic accuracy. Furthermore, the robustness of the system is analyzed with respect to incorrect and incomplete knowledge within the Bayesian network and errors within the vision algorithms. (Abstract shortened by UMI.)
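A minimal sketch of the evidence-policy idea, with all names hypothetical: each policy records the context its algorithm requires plus an expected-performance score, and the controller runs only the applicable policies in order of expectation. The real system fuses evidence through a Bayesian network rather than a simple score sort; this only illustrates the contextual-selection mechanism.

```python
from dataclasses import dataclass

@dataclass
class EvidencePolicy:
    name: str              # algorithm identifier (hypothetical)
    requires: set          # data/context the algorithm needs to run
    expected_score: float  # prior expectation of performance in context

def select_policies(available, policies):
    """Keep only policies whose requirements are met by the available
    context, ordered best-expected-performance first."""
    applicable = [p for p in policies if p.requires <= available]
    return sorted(applicable, key=lambda p: -p.expected_score)
```

Under this scheme, a building-reconstruction policy would simply never fire on regions lacking the evidence it requires.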
Building Reconstruction from Optical and Range Images
A technique is introduced for extracting and reconstructing a wide class of building types from a registered range image and optical image. An attentional focus stage, followed by model indexing, allows top-down robust surface fitting to reconstruct the 3D nature of the buildings in the data. Because of the effectiveness of model selection, top-down processing of noisy range data still succeeds, and the algorithm is capable of detecting and reconstructing several different building roof classes, including flat single-level, flat multi-leveled, peaked, and curved rooftops. The algorithm is applicable to range data that may have been collected from several different range sensor types. We demonstrate reconstructions of different building classes in the presence of large amounts of noise. Our results underline the usefulness of range data when processed in the context of a focus-of-attention area derived from the monocular optical image.
Knowledge Directed Reconstruction from Multiple Aerial Images
Image understanding (IU) techniques for automatic site reconstruction have demonstrated success within restricted domains and for small numbers of model classes. However, these techniques often fail when applied out of context and do not "scale up" into a more general solution. Under the APGD program, we are constructing a knowledge-based site reconstruction system that automatically selects the correct algorithm according to the current context, applies it to a focused subset of the data, and constrains the interpretation of the result through the explicit use of knowledge. The extraction and reconstruction of building models from aerial images has become an important area of research in recent years. Significant progress has been made, and several systems perform reasonably well within their appropriate domains [Collins'95, Herman'94, Lin et al.'94, Chellapa et al.'94]. For example, recent testing of the Ascender I system has shown it capable of automatically extractin..
Three-Dimensional Grouping and Information Fusion for Site Modeling from Aerial Images
This paper demonstrates the utility of data fusion when applied to the problem of site model reconstruction. We combined the results from hierarchical image matching, feature-based building detection, robust plane fitting, and heuristic assembly algorithms to form an accurate, robust site model reconstruction system. In the future, the techniques described here will be extended to more complex buildings, including gabled and curved roofs, by fitting the elevation data to a wider variety of geometric models. An additional goal is to use the partially closed chains as focus-of-attention mechanisms and to explore approaches for recovering surface structure between arbitrary sequences of corners and lines. Overall, we expect to continue the investigation of plausible strategies for grouping generic elements into complex structures and for simultaneously fusing information from multiple sources into coherent models. The results shown here are encouraging. The accuracy of the final reconstructions can be observed from the visually consistent renderings. Through the careful combination of primitive elements and special-purpose strategies, we have the beginnings of an automatic, accurate, and functional system.
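Robust plane fitting of elevation data, one of the fused components, can be sketched as a trimmed least-squares loop: fit a plane, discard points with large residuals, and refit. This is a generic stand-in for the paper's robust fitting; the iteration count and outlier threshold are arbitrary illustrative choices.

```python
import numpy as np

def fit_plane_lsq(points):
    """Least-squares plane z = a*x + b*y + c from an Nx3 point array."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def fit_plane_robust(points, iters=5, k=2.5):
    """Trimmed least squares: repeatedly drop points whose residual
    exceeds k standard deviations, then refit on the survivors."""
    pts = points
    coeffs = fit_plane_lsq(pts)
    for _ in range(iters):
        a, b, c = coeffs
        resid = np.abs(pts[:, 2] - (a * pts[:, 0] + b * pts[:, 1] + c))
        keep = resid <= k * max(resid.std(), 1e-9)
        if keep.all():
            break                      # no outliers left
        pts = pts[keep]
        coeffs = fit_plane_lsq(pts)
    return coeffs
```

Fitting a richer model family (peaks, gables, domes) in the same trimmed loop is the natural extension the paper anticipates.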