24 research outputs found

    Fast Correction of Tiled Display Systems on Planar Surfaces

    Full text link
    A method for fast colour and geometric correction of a tiled display system is presented in this paper. Such displays are a common choice for virtual reality applications and simulators, where a high-resolution image is required. They are the cheapest and most flexible alternative for large-image generation, but they require precise geometric and colour correction. The purpose of the proposed method is to correct the projection system as quickly as possible, so that any recalibration does not interfere with the normal operation of the simulator or virtual reality application. The technique makes use of a single conventional webcam for both geometric and photometric correction. Some prior assumptions are made, namely a planar projection surface and negligible intra-projector colour variation and black-offset levels. If these assumptions hold, geometric and photometric seamlessness can be achieved for this kind of display system. The method described in this paper scales to an arbitrary number of projectors and is completely automatic.
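
    Because the paper assumes a planar projection surface, the mapping from each projector's pixels to screen coordinates observed by the webcam can be modelled by a single 3x3 homography. The following is a minimal sketch of that standard formulation; the function names and the use of cv2.findHomography are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of planar geometric correction for one projector of
# the tiled display. Assumes a planar screen, so a single 3x3
# homography relates projector pixels to webcam pixels; names and the
# calibration-pattern detection are illustrative, not from the paper.
import numpy as np
import cv2

def estimate_projector_homography(proj_pts, cam_pts):
    """proj_pts: Nx2 pattern positions in projector space.
    cam_pts:  Nx2 positions of the same points seen by the webcam."""
    H, _ = cv2.findHomography(np.float32(proj_pts), np.float32(cam_pts),
                              cv2.RANSAC)
    return H

def prewarp_content(image, H, proj_size):
    # Warp content with the inverse mapping so the projected result
    # appears geometrically correct on the shared screen area.
    return cv2.warpPerspective(image, np.linalg.inv(H), proj_size)
```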

    Distortion Correction for Non-Planar Deformable Projection Displays through Homography Shaping and Projected Image Warping

    Get PDF
    Video projectors have advanced from being tools for delivering presentations on flat or planar surfaces to tools for delivering media content in applications such as augmented reality, simulated sports practice, and invisible displays. With the use of non-planar surfaces for projection come geometric and radiometric distortions. This work focuses on correcting geometric distortions occurring when images or video frames are projected onto static and deformable non-planar display surfaces. The distortion-correction process involves (i) detecting feature points from the camera images and creating a desired shape of the undistorted view through a 2D homography, (ii) transforming the feature points on the camera images to control points on the projected images, (iii) calculating Radial Basis Function (RBF) warping coefficients from the control points, and (iv) warping the projected image to obtain an undistorted image of the projection on the projection surface. Several novel aspects of this work include (i) developing a theoretical framework that explains the cause of distortion and provides a general warping pattern to be applied to the projection, (ii) carrying out the distortion-correction process without the use of a distortion-measuring calibration image or structured light pattern, (iii) carrying out the distortion-correction process on a projection display that deforms with time, using a single uncalibrated projector and an uncalibrated camera, and (iv) optimising the distortion-correction process to operate in real time. The geometric distortion-correction process designed in this work has been tested for both static projection systems, in which the components remain fixed in position, and dynamic projection systems, in which the positions of components or the shape of the display change with time. The results of these tests show that the geometric distortion-correction technique developed in this work improves the observed image geometry by as much as 31% based on a normalised correlation measure. The optimisation of the distortion-correction process resulted in a 98% improvement in its speed of operation, thereby demonstrating the applicability of the proposed approach to real projection systems with deformable projection displays.
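
    Step (iii) above is standard scattered-data interpolation: the control-point displacements determine RBF coefficients by solving a linear system, and the resulting displacement field warps the projected image. A minimal sketch follows, assuming a Gaussian kernel and hypothetical names; the thesis may use a different basis or solver.

```python
# A minimal sketch of RBF warping, assuming a Gaussian kernel and
# hypothetical names; the thesis may use a different basis or solver.
import numpy as np

def rbf_warp_coefficients(src, dst, sigma=50.0):
    """src, dst: Nx2 control points (current and desired positions)."""
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    Phi = np.exp(-(d / sigma) ** 2)            # N x N kernel matrix
    return np.linalg.solve(Phi, dst - src)     # N x 2 coefficients

def apply_rbf_warp(points, src, coeffs, sigma=50.0):
    # Displace arbitrary image points by the interpolated warp field.
    d = np.linalg.norm(points[:, None, :] - src[None, :, :], axis=-1)
    return points + np.exp(-(d / sigma) ** 2) @ coeffs
```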

    Integrated tactile-optical coordinate measurement for the reverse engineering of complex geometry

    Get PDF
    Complex design specifications and tighter tolerances are increasingly required in modern engineering applications, for either functional or aesthetic reasons. Multiple sensors are therefore exploited to achieve both holistic measurement information and improved reliability or reduced uncertainty of measurement data. Multi-sensor integration systems can combine data from several information sources (sensors) into a common representational format so that the measurement evaluation can benefit from all available sensor information and data. This means a multi-sensor system is able to provide more efficient solutions and better performance than a single-sensor system. This thesis develops a compensation approach for reverse engineering applications based on a hybrid tactile-optical multi-sensor system. In a multi-sensor integration system, each individual sensor should be configured to its optimum for satisfactory measurement results, and all the data measured by the different equipment have to be precisely integrated into a common coordinate system. To solve this problem, this thesis proposes an accurate and flexible method to unify the coordinates of optical and tactile sensors for reverse engineering. A sphere-plate artefact with nine spheres is created, and a set of routines is developed for data integration in a multi-sensor system. Experimental results prove that this novel centroid approach is more accurate than the traditional method. Thus, data sampled by different measuring devices, irrespective of their location, can be accurately unified. This thesis describes a competitive integration for reverse engineering applications in which the point cloud data scanned by the fast optical sensor are compensated and corrected by the slower but more accurate tactile probe measurements to improve overall accuracy. A new competitive approach for rapid and accurate reverse engineering of geometric features from multi-sensor systems, based on a geometric algebra approach, is proposed, and a set of programs based on the MATLAB platform has been generated for verification of the proposed method. After data fusion, measurement efficiency is improved by 90% in comparison to the tactile method, and the accuracy of the reconstructed geometric model is improved from 45 micrometres to 7 micrometres in comparison to the optical method, as validated by a case study.
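
    One common way to realise the coordinate unification described here is to measure the centres of the nine artefact spheres with both sensors and fit a least-squares rigid transformation between the two point sets. The sketch below uses the standard Kabsch/SVD solution as a stand-in; the thesis' centroid procedure may differ in detail.

```python
# Sketch of unifying two sensors' frames from the nine sphere centres,
# using the standard least-squares rigid fit (Kabsch/SVD). This stands
# in for the thesis' centroid procedure, which may differ in detail.
import numpy as np

def rigid_transform(optical_pts, tactile_pts):
    """Find R, t minimising ||R @ p_optical + t - p_tactile||^2.
    Both inputs: Nx3 sphere-centre coordinates (here N = 9)."""
    co, ct = optical_pts.mean(axis=0), tactile_pts.mean(axis=0)
    H = (optical_pts - co).T @ (tactile_pts - ct)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                  # proper rotation, det(R) = +1
    t = ct - R @ co
    return R, t
```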

    Training in Virtual Environments: A Safe, Cost Effective, and Engaging Approach to Training

    Get PDF

    Towards Predictive Rendering in Virtual Reality

    Get PDF
    Generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling through efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed by this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved before truly predictive image generation is achieved.
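
    The abstract does not commit to a particular BTF compression scheme, but a widely used baseline is a truncated SVD/PCA factorisation of the BTF matrix (texels by view-light samples), which permits cheap per-sample reconstruction at render time. The following sketch shows only that baseline, as an assumption, not the thesis' specific method.

```python
# Baseline BTF compression via truncated SVD (PCA): factor the matrix
# of texels x view-light samples and keep k components. An assumed
# baseline for illustration, not the thesis' specific scheme.
import numpy as np

def compress_btf(btf, k=16):
    """btf: (num_texels, num_viewlight_samples) reflectance matrix."""
    U, S, Vt = np.linalg.svd(btf, full_matrices=False)
    return U[:, :k] * S[:k], Vt[:k]     # per-texel weights, basis

def eval_btf(weights, basis, texel, viewlight):
    # Reconstruct one reflectance sample at render time.
    return float(weights[texel] @ basis[:, viewlight])
```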

    Towards 3D Scanning from Digital Images by Novice Users

    Get PDF
    The uptake of hobbyist 3D printers is being held back, in part, by the barriers associated with creating a computer model to be printed. One way of creating such a computer model is to take a 3D scan of a pre-existing object using multiple digital images showing the object from different points of view. This document details one way of doing this, with particular emphasis on camera calibration: the process of estimating camera parameters for the camera that took an image. In common calibration scenarios, multiple images are used, where it is assumed that the internal parameters, such as zoom and focus settings, are fixed between images and the relative placement of the camera between images needs to be estimated. This is not ideal for a novice doing 3D scanning with a “point and shoot” camera, where these internal parameters may not have been held fixed between images. A common coordinate system between images with a known relationship to real-world measurements is also desirable. Additionally, in some 3D scanning scenarios that use digital images, where a trained individual is expected to do the photography and internal settings can be held constant throughout the process, the images used for calibration are different from those used for object capture. A technique has been developed to overcome these shortcomings. It uses a known printed sheet of paper, called the calibration sheet, on which the object to be scanned sits, so that object acquisition and camera calibration can be done from the same image. Each image is processed independently with reference to the known size of the calibration sheet, so the output is automatically to scale, and minor camera calibration errors in one image do not propagate to affect estimates of camera calibration parameters for other images. The calibration process developed also works where large parts of the calibration sheet are obscured.
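
    The key idea, processing each image independently against a calibration sheet of known physical size, can be illustrated with a plane-to-image homography that maps detected sheet corners to millimetre coordinates, giving metric scale per image. This is a simplified sketch with assumed names and an assumed A4 sheet; the thesis recovers fuller per-image camera parameters and tolerates occlusion of the sheet.

```python
# Sketch of per-image, to-scale calibration against the printed sheet,
# assuming its four outer corners are detected and an A4 size; the
# thesis recovers fuller per-image parameters and handles occlusion.
import numpy as np
import cv2

SHEET_W_MM, SHEET_H_MM = 210.0, 297.0   # assumed A4 calibration sheet

def sheet_homography(corner_pixels):
    """corner_pixels: 4x2 detected corners, ordered TL, TR, BR, BL."""
    world = np.float32([[0, 0], [SHEET_W_MM, 0],
                        [SHEET_W_MM, SHEET_H_MM], [0, SHEET_H_MM]])
    H, _ = cv2.findHomography(np.float32(corner_pixels), world)
    return H    # maps image pixels to millimetres on the sheet plane
```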

    Meshfree Approximation Methods For Free-form Optical Surfaces With Applications To Head-worn Displays

    Get PDF
    Compact and lightweight optical designs that achieve acceptable image quality, field of view, eye clearance, and eyebox size while operating across the visible spectrum are key to the success of next-generation head-worn displays. The first part of this thesis reports on the design, fabrication, and analysis of off-axis magnifier designs. The first design is catadioptric and consists of two elements. The lens utilizes a diffractive optical element, and the mirror has a free-form surface described with an x-y polynomial. A comparison of color correction between doublets and single-layer diffractive optical elements in an eyepiece as a function of eye clearance is provided to justify the use of a diffractive optical element. The dual-element design has an 8 mm diameter eyebox, 15 mm eye clearance, and 20 degree diagonal full field, and is designed to operate across the visible spectrum between 450 and 650 nm. 20% MTF at the Nyquist frequency with less than 3% distortion has been achieved in the dual-element head-worn display. An ideal solution for a head-worn display would be a single free-form mirror design. A single-surface mirror does not introduce dispersion; therefore, color correction is not required. A single-surface mirror can be made see-through by machining the appropriate surface shape on the opposite side to form a zero-power shell. The second design consists of a single off-axis free-form mirror described with an x-y polynomial, which achieves a 3 mm diameter exit pupil, 15 mm eye relief, and a 24 degree diagonal full field of view. The second design achieves 10% MTF at the Nyquist frequency set by the pixel spacing of the VGA microdisplay with less than 3% distortion. Both designs have been fabricated using diamond-turning techniques. Finally, this thesis addresses the question of what the optimal surface shape is for a single mirror constrained in an off-axis magnifier configuration with multiple fields. Typical optical surfaces implemented in raytrace codes today are functions mapping two-dimensional vectors to real numbers. The majority of optical designs to date have relied on conic sections and polynomials as the functions of choice. The choice of conic sections is justified since conic sections are stigmatic surfaces under certain imaging geometries. The choice of polynomials as a surface description, however, can be challenged. A polynomial surface description may link a designer's understanding of the wavefront aberrations to the surface description. The limitations of using multivariate polynomials are described by a theorem due to Mairhuber and Curtis from approximation theory. This thesis proposes and applies radial basis functions to represent free-form optical surfaces as an alternative to multivariate polynomials. We compare the polynomial descriptions to radial basis functions using the MTF criterion. The benefits of using radial basis functions for surface description are summarized in the context of specific head-worn displays. The benefits include, for example, the performance increase measured by the MTF and the ability to increase the field of view or pupil size. Even though Zernike polynomials form a complete and orthogonal basis over the unit circle, and can be orthogonalized for rectangular or hexagonal pupils using Gram-Schmidt, practical considerations such as optimization time and the maximum number of variables available in current raytrace codes must be taken into account. For the specific case of the single off-axis magnifier with a 3 mm pupil, 15 mm eye relief, and 24 degree diagonal full field of view, we found Gaussian radial basis functions to yield a 20% gain in average MTF at 17 field points compared with a Zernike polynomial (using 66 terms) and an x-y polynomial up to and including 10th order. The radial basis function representation, being a linear combination of local basis functions, is not limited to circular apertures. Visualization tools such as the field map plots provided by nodal aberration theory have been applied during the analysis of the off-axis systems discussed in this thesis. Full-field displays are used to establish node locations within the field of view for the dual-element head-worn display. The judicious separation of the nodes along the x-direction in the field of view results in well-behaved MTF plots. This is in contrast to the expectation of achieving better performance by restoring symmetry via collapsing the nodes to yield field-quadratic astigmatism.
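
    The Gaussian radial basis function surface description referred to above expresses the sag as a weighted sum of Gaussians placed over the aperture. A minimal sketch of evaluating such a surface follows; the centre layout, the shape parameter epsilon, and any base conic term are unspecified design choices, not values from the thesis.

```python
# Sketch of a Gaussian RBF free-form surface: the sag is a weighted
# sum of Gaussians over the aperture. Centre layout, epsilon, and any
# base conic term are unspecified design choices, not thesis values.
import numpy as np

def rbf_sag(x, y, centers, weights, epsilon):
    """z(x, y) = sum_i w_i * exp(-epsilon^2 * ||(x, y) - c_i||^2)."""
    dx = x[..., None] - centers[:, 0]       # centers: Nx2 array
    dy = y[..., None] - centers[:, 1]
    return np.sum(weights * np.exp(-(epsilon ** 2) * (dx**2 + dy**2)),
                  axis=-1)
```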

    NASA Tech Briefs, February 1993

    Get PDF
    Topics include: Communication Technology; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; Mathematics and Information Sciences; Life Sciences.

    International Colloquium on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering: 4 to 6 July 2012, Bauhaus-Universität Weimar

    Get PDF
    The 19th International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering will be held at the Bauhaus-Universität Weimar from 4 to 6 July 2012. Architects, computer scientists, mathematicians, and engineers from all over the world will meet in Weimar for an interdisciplinary exchange of experiences and to report on and discuss their results in research, development, and practice. The conference covers a broad range of research areas: numerical analysis, function-theoretic methods, partial differential equations, continuum mechanics, engineering applications, coupled problems, computer science, and related topics. Several plenary lectures in the aforementioned areas will take place during the conference. We invite architects, engineers, designers, computer scientists, mathematicians, planners, project managers, and software developers from business, science, and research to participate in the conference.