35 research outputs found

    Morphological Antialiasing and Topological Reconstruction

    Get PDF
    Morphological antialiasing is a post-processing approach which does not require computing additional samples. The algorithm acts as a non-linear filter and is, as such, ill-suited to massively parallel hardware architectures. We redesigned the initial method using multiple passes, with, in particular, a new approach to line length computation. We also introduce the notion of topological reconstruction into the method to correct the weaknesses of post-processing antialiasing techniques. Our method runs as a pure post-process filter providing full-image antialiasing at high framerates, competing with traditional MSAA.
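As a rough illustration of the first pass of such a filter, discontinuities can be found by thresholding luminance differences between neighbouring pixels (a minimal Python sketch, not the paper's implementation; the threshold value is an assumption):

```python
import numpy as np

def detect_edges(lum, threshold=0.1):
    """Flag pixels whose luminance differs from the neighbour to the
    right / below by more than `threshold` (first pass of a
    morphological-antialiasing-style filter; illustrative only)."""
    right = np.abs(lum[:, 1:] - lum[:, :-1]) > threshold  # vertical edges
    down = np.abs(lum[1:, :] - lum[:-1, :]) > threshold   # horizontal edges
    return right, down
```

Later passes would walk these edge masks to measure line lengths and derive blending weights; this sketch covers only the detection step.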

    Discontinuity Edge Overdraw

    Get PDF
    Aliasing is an important problem when rendering triangle meshes. Efficient antialiasing techniques such as mipmapping greatly improve the filtering of textures defined over a mesh. A major component of the remaining aliasing occurs along discontinuity edges such as silhouettes, creases, and material boundaries. Framebuffer supersampling is a simple remedy, but 2x2 supersampling leaves behind significant temporal artifacts, while greater supersampling demands even more fill-rate and memory. We present an alternative that focuses effort on discontinuity edges by overdrawing such edges as antialiased lines. Although the idea is simple, several subtleties arise. Visible silhouette edges must be detected efficiently. Discontinuity edges need consistent orientations. They must be blended as they approach the silhouette to avoid popping. Unfortunately, edge blending results in blurriness. Our technique balances these two competing objectives of temporal smoothness and spatial sharpness. Finally, the best results are obtained when discontinuity edges are sorted by depth. Our approach proves surprisingly effective at reducing temporal artifacts commonly referred to as "crawling jaggies," with little added cost.
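The silhouette test at the heart of this approach can be sketched as follows: an interior edge lies on the silhouette when its two adjacent faces point in opposite directions relative to the viewer (a hypothetical Python helper, not the paper's exact detection code):

```python
import numpy as np

def is_silhouette(n1, n2, view_dir):
    """An interior edge is a silhouette edge when one adjacent face is
    front-facing and the other back-facing with respect to the viewer.
    `n1`, `n2` are the face normals; `view_dir` points toward the faces.
    Illustrative helper, not the paper's optimized detection scheme."""
    return (np.dot(n1, view_dir) > 0) != (np.dot(n2, view_dir) > 0)
```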

    The A-buffer, an antialiased hidden surface method

    Full text link

    Segmentation of Photovoltaic Module Cells in Electroluminescence Images

    Full text link
    High-resolution electroluminescence (EL) images captured in the infrared spectrum allow the quality of photovoltaic (PV) modules to be inspected visually and non-destructively. Currently, however, such visual inspection requires trained experts to discern different kinds of defects, which is time-consuming and expensive. Automated segmentation of cells is therefore a key step in automating the visual inspection workflow. In this work, we propose a robust automated segmentation method for extracting individual solar cells from EL images of PV modules. This enables controlled studies on large amounts of data to understand the effects of module degradation over time, a process not yet fully understood. The proposed method infers in several steps a high-level solar module representation from low-level edge features. An important step in the algorithm is to formulate the segmentation problem in terms of lens calibration by exploiting the plumbline constraint. We evaluate our method on a dataset of various solar module types containing a total of 408 solar cells with various defects. Our method robustly solves this task with a median weighted Jaccard index of 94.47% and an F1 score of 97.54%, both indicating a very high similarity between automatically segmented and ground truth solar cell masks.
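The two reported metrics compare a predicted binary mask against the ground truth; a minimal, unweighted version can be computed as follows (illustrative only; the paper reports a *weighted* Jaccard index):

```python
import numpy as np

def jaccard_and_f1(pred, gt):
    """Jaccard index (intersection over union) and F1 score between two
    binary segmentation masks. Unweighted sketch of the evaluation
    metrics; per-cell weighting from the paper is omitted."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    jaccard = inter / union
    f1 = 2 * inter / (pred.sum() + gt.sum())
    return jaccard, f1
```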

    Spacetime Catmull Recursive Subdivision Facilitated with Occlusion Culling

    Get PDF
    We describe an extension and a generalization of the Catmull recursive subdivision algorithm: first, an image-based occlusion culling stage is added; second, all rendering stages, that is, view-frustum and occlusion culling, subdivision of geometric primitives into micropolygons, and rasterization, are performed in spacetime. Operating in spacetime makes it possible to exploit temporal coherence in animated scenes.
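The core of Catmull-style recursion is splitting a primitive until its projected size falls below roughly half a pixel, at which point it becomes a micropolygon; a toy sketch operating on screen-space extents (the paper's spacetime and occlusion-culling stages are omitted, and the half-pixel bound is an assumption):

```python
def subdivide(extent, max_px=0.5):
    """Recursively split a (width, height) screen-space extent along its
    longer axis until both dimensions fit within ~half a pixel,
    mimicking Catmull-style dicing into micropolygons. Toy sketch:
    real subdivision splits the geometric primitive, not just a box."""
    w, h = extent
    if w <= max_px and h <= max_px:
        return [extent]  # micropolygon reached
    if w >= h:
        half = (w / 2, h)
    else:
        half = (w, h / 2)
    return subdivide(half, max_px) + subdivide(half, max_px)
```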

    Rendering and display for multi-viewer tele-immersion

    Get PDF
    Video teleconferencing systems are widely deployed for business, education and personal use to enable face-to-face communication between people at distant sites. Unfortunately, the two-dimensional video of conventional systems does not correctly convey several important non-verbal communication cues such as eye contact and gaze awareness. Tele-immersion refers to technologies aimed at providing distant users with a more compelling sense of remote presence than conventional video teleconferencing. This dissertation is concerned with the particular challenges of interaction between groups of users at remote sites. The problems of video teleconferencing are exacerbated when groups of people communicate. Ideally, a group tele-immersion system would display views of the remote site at the right size and location, from the correct viewpoint for each local user. However, it is not practical to put a camera in every possible eye location, and it is not clear how to provide each viewer with correct and unique imagery. I introduce rendering techniques and multi-view display designs to support eye contact and gaze awareness between groups of viewers at two distant sites. With a shared 2D display, virtual camera views can improve local spatial cues while preserving scene continuity, by rendering the scene from novel viewpoints that may not correspond to a physical camera. I describe several techniques, including a compact light field, a plane sweeping algorithm, a depth-dependent camera model, and video-quality proxies, suitable for producing useful views of a remote scene for a group of local viewers. The first novel display provides simultaneous, unique monoscopic views to several users, with fewer user position restrictions than existing autostereoscopic displays. The second is a random hole barrier autostereoscopic display that eliminates the viewing zones and user position requirements of conventional autostereoscopic displays, and provides unique 3D views for multiple users in arbitrary locations.

    A Pyramid Approach to Subpixel Registration Based on Intensity

    Get PDF
    We present an automatic sub-pixel registration algorithm that minimizes the mean-square difference of intensities between a reference and a test data set, which can be either three-dimensional (3-D) volumes or two-dimensional (2-D) images. It uses spline processing, is based on a coarse-to-fine strategy (pyramid approach), and performs minimization according to a new variation of the iterative Marquardt-Levenberg algorithm for non-linear least-squares optimization (MLA). The geometric deformation model is a global 3-D affine transformation, which one may restrict to the case of rigid-body motion (isometric scale, rotation and translation). It may also include a parameter to adjust for image contrast differences. We obtain excellent results for the registration of intra-modality Positron Emission Tomography (PET) and functional Magnetic Resonance Imaging (fMRI) data. We conclude that the multi-resolution refinement strategy is more robust than a comparable single-scale method, being less likely to get trapped in a false local optimum. In addition, it is also faster.
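The coarse-to-fine idea can be sketched with a translation-only, brute-force variant: estimate the shift on a downsampled pair first, then refine at full resolution (illustrative Python; the paper uses a full affine model optimized with the Marquardt-Levenberg algorithm, not an exhaustive search):

```python
import numpy as np

def best_shift(ref, test, max_shift):
    """Brute-force integer shift minimising the mean-square difference."""
    best, score = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            diff = np.roll(test, (dy, dx), axis=(0, 1)) - ref
            s = np.mean(diff ** 2)
            if s < score:
                best, score = (dy, dx), s
    return best

def pyramid_register(ref, test, levels=2):
    """Coarse-to-fine registration sketch: each pyramid level searches
    only +/-1 pixel around the estimate propagated from the coarser
    level, so large shifts are found cheaply at low resolution."""
    shift = (0, 0)
    for lvl in reversed(range(levels)):
        f = 2 ** lvl                       # downsampling factor
        r, t = ref[::f, ::f], test[::f, ::f]
        t = np.roll(t, (shift[0] // f, shift[1] // f), axis=(0, 1))
        dy, dx = best_shift(r, t, max_shift=1)
        shift = (shift[0] + dy * f, shift[1] + dx * f)
    return shift
```

This mirrors only the multi-resolution strategy, not the spline model or sub-pixel accuracy of the actual method.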

    Implementation of an OpenVG Rasterizer with Configurable Anti-aliasing and Multi-window Scissoring

    Get PDF
    This paper describes an OpenVG-compliant hardware rasterizer with configurable anti-aliasing and multi-window scissoring. This rasterizer requires 129K logic gates with 2KB on-chip SRAM and provides satisfactory image quality with a reasonable rasterizer speed at an operational frequency of 100MHz. In this paper, we propose an optimized scanline algorithm, which provides better performance than the conventional scanline algorithm with supersampling while maintaining flexibility and hardware simplicity. We also propose a fast LUT-based scissoring algorithm, which has zero latency in most cases. The hardware implementation of this rasterizer is explained in detail.
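One way a scanline rasterizer can anti-alias without supersampling is to accumulate, per pixel, the fraction of the pixel covered by each span; a minimal sketch for a single horizontal span (hypothetical helper, not the paper's hardware algorithm):

```python
def span_coverage(x0, x1, width):
    """Per-pixel coverage of the half-open span [x0, x1) on one
    scanline: the overlap of the span with each unit pixel cell.
    Area coverage like this replaces counting supersamples;
    illustrative only, not the paper's optimized hardware scheme."""
    cov = []
    for px in range(width):
        left, right = max(x0, px), min(x1, px + 1)
        cov.append(max(0.0, right - left))
    return cov
```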

    Perceptually Optimized Visualization on Autostereoscopic 3D Displays

    Get PDF
    The family of displays that aim to visualize a 3D scene with realistic depth is known as "3D displays". Due to technical limitations and design decisions, such displays create visible distortions, which are interpreted by the human visual system as artefacts. In the absence of a visual reference (e.g. when the original scene is not available for comparison), one can improve the perceived quality of the representations by making the distortions less visible. This thesis proposes a number of signal processing techniques for decreasing the visibility of artefacts on 3D displays. The visual perception of depth is discussed, and the properties (depth cues) of a scene which the brain uses for assessing an image in 3D are identified. Following the physiology of vision, a taxonomy of 3D artefacts is proposed. The taxonomy classifies the artefacts based on their origin and on the way they are interpreted by the human visual system. The principles of operation of the most popular types of 3D displays are explained. Based on these operating principles, 3D displays are modelled as a signal processing channel. The model is used to explain the process of introducing distortions and to identify which optical properties of a display are most relevant to the creation of artefacts. A set of optical properties for dual-view and multiview 3D displays is identified, and a methodology for measuring them is introduced. The measurement methodology allows one to derive the angular visibility and crosstalk of each display element without the need for precision measurement equipment. Based on the measurements, a methodology for creating a quality profile of 3D displays is proposed. The quality profile can be either simulated using the angular brightness function or directly measured from a series of photographs. A comparative study is presented, introducing measurement results on the visual quality and sweet-spot positions of eleven 3D displays of different types. Knowing the sweet-spot position and the quality profile allows for easy comparison between 3D displays. The shape and size of the passband allow the depth and textures of 3D content to be optimized for a given 3D display. Based on knowledge of 3D artefact visibility and an understanding of the distortions introduced by 3D displays, a number of signal processing techniques for artefact mitigation are created. A methodology for creating anti-aliasing filters for 3D displays is proposed. For multiview displays, the methodology is extended towards so-called passband optimization, which addresses the Moiré, fixed-pattern-noise and ghosting artefacts characteristic of such displays. Additionally, the design of tuneable anti-aliasing filters is presented, along with a framework which allows the user to select the so-called 3D sharpness parameter according to his or her preferences. Finally, a set of real-time algorithms for viewpoint-based optimization is presented. These algorithms require active user tracking, which is implemented as a combination of face and eye tracking. Once the observer position is known, the image on a stereoscopic display is optimized for the derived observation angle and distance. For multiview displays, a combination of precise light redirection and less precise face tracking is used to extend the head parallax. For some user-tracking algorithms, implementation details are given regarding execution on a mobile device or on a desktop computer with a graphics accelerator.
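The crosstalk mentioned above can be illustrated with a simple linear mixing model for a dual-view display, where each eye receives a fraction of the view intended for the other (a sketch; real displays require the measured angular visibility profile, and the 5% default is an assumption):

```python
import numpy as np

def perceived_views(left, right, crosstalk=0.05):
    """Linear crosstalk model for a dual-view 3D display: each eye sees
    its intended view attenuated by (1 - crosstalk) plus a `crosstalk`
    fraction of the other view. Illustrative; actual crosstalk varies
    with viewing angle and must be measured per display element."""
    seen_l = (1 - crosstalk) * left + crosstalk * right
    seen_r = (1 - crosstalk) * right + crosstalk * left
    return seen_l, seen_r
```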