
    3D model reconstruction with noise filtering using boundary edges.

    Lau Tak Fu. Thesis (M.Phil.), Chinese University of Hong Kong, 2004; submitted October 2003. Includes bibliographical references (leaves 93-98). Abstracts in English and Chinese.
    Table of contents:
    Chapter 1 - Introduction
        1.1 Scope of the work
        1.2 Main contribution
        1.3 Outline of the thesis
    Chapter 2 - Background
        2.1 Three-dimensional models from images
        2.2 Un-calibrated 3D reconstruction
        2.3 Self-calibrated 3D reconstruction
        2.4 Initial model formation using image based
        2.5 Volumes from silhouettes
    Chapter 3 - Initial model reconstruction: the problem with mismatch noise
        3.1 Perspective camera model
        3.2 Intrinsic parameters, extrinsic parameters and camera motion
            3.2.1 Intrinsic parameters
            3.2.2 Extrinsic parameters and camera motion
        3.3 Lowe's method
        3.4 Interleaved bundle adjustment for structure and motion recovery from multiple images
        3.5 Feature point mismatch analysis
    Chapter 4 - Feature selection using look-forward silhouette clipping
        4.1 Introduction to silhouette clipping
        4.2 Silhouette clipping for 3D models
        4.3 Implementation
            4.3.1 Silhouette extraction program
            4.3.2 Feature filter for the alternative bundle adjustment algorithm
    Chapter 5 - Experimental data
        5.1 Simulation
            5.1.1 Input of the simulation
            5.1.2 Output of the simulation
                5.1.2.1 Radius distribution
                5.1.2.2 3D model output
                5.1.2.3 VRML plotting
        5.2 Real image testing
            5.2.1 Toy house on a turntable test
            5.2.2 Other tests on a turntable
    Chapter 6 - Conclusion and discussion
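    The perspective camera model and the intrinsic/extrinsic parameters covered in Chapters 3.1-3.2 follow the standard projection x ~ K[R|t]X. The short Python sketch below illustrates that generic formulation; the numeric values are illustrative and are not taken from the thesis.

        import numpy as np

        def project_point(X, K, R, t):
            """Standard perspective projection x ~ K [R | t] X: K holds the
            intrinsic parameters (focal lengths, principal point), while the
            rotation R and translation t are the extrinsic camera pose."""
            Xc = R @ X + t              # world -> camera coordinates
            x = K @ Xc                  # apply the intrinsic matrix
            return x[:2] / x[2]         # perspective divide -> pixel coordinates

        # Illustrative intrinsics: focal lengths fx = fy = 800, principal point (320, 240).
        K = np.array([[800.0,   0.0, 320.0],
                      [  0.0, 800.0, 240.0],
                      [  0.0,   0.0,   1.0]])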

    Hybridization of silhouette rendering and pen-and-ink illustration of non-photorealistic rendering technique for 3D object

    This study proposes a hybrid of non-photorealistic rendering techniques. Non-photorealistic rendering (NPR) is the area of computer graphics concerned with generating 2D digital art styles from 3D data, for instance output that looks like a painting or a drawing. NPR includes painterly, interpretative, expressive and artistic styles, among others. NPR research deals with issues such as stylization driven by human perception and the harmonization of science and art in the techniques used. Common NPR approaches include cartoon rendering, watercolour painting, silhouette rendering and pen-and-ink illustration. This study proposes a hybridization of two of these techniques: silhouette rendering and pen-and-ink illustration. The integration relies on lighting mapping and on constructing the colour regions of the model so that the pen-and-ink texture can be applied to the object. Evaluation is based on the visual quality of the images produced by the hybridization. The findings show that the hybrid NPR technique produces interesting results and can be considered an alternative way of creating new varieties of visualization images in NPR.
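    As a rough illustration of the two ingredients being hybridized, the sketch below combines a standard mesh silhouette-edge test with a simple tone-to-hatching-density mapping. The function names and the fixed number of hatching levels are illustrative assumptions and are not taken from the study.

        import numpy as np

        def silhouette_edges(faces, face_normals, view_dir):
            """Classify mesh edges as silhouettes: an edge is a silhouette when one
            adjacent face points toward the viewer and the other points away."""
            edge_faces = {}
            for fi, face in enumerate(faces):
                for a, b in ((0, 1), (1, 2), (2, 0)):
                    key = tuple(sorted((face[a], face[b])))
                    edge_faces.setdefault(key, []).append(fi)
            silhouettes = []
            for edge, adjacent in edge_faces.items():
                if len(adjacent) == 2:
                    d0 = np.dot(face_normals[adjacent[0]], view_dir)
                    d1 = np.dot(face_normals[adjacent[1]], view_dir)
                    if d0 * d1 < 0.0:          # front-facing / back-facing pair
                        silhouettes.append(edge)
            return silhouettes

        def hatch_level(shaded_intensity, levels=4):
            """Map a shaded intensity in [0, 1] to a pen-and-ink hatching density:
            darker regions receive more overlapping stroke layers."""
            return levels - int(round(shaded_intensity * levels))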

    A high performance vector rendering pipeline

    Vector images are images that encode the visible surfaces of a 3D scene in a resolution-independent format. Prior to this work, generating such an image was not possible in real time, so the benefits of using vector images in the graphics pipeline were not fully realized. In this thesis we address the following questions: how can we introduce vector images into the graphics pipeline, namely, how can we produce them in real time; how can we take advantage of resolution independence; and how can we render vector images to a pixel display as efficiently as possible and with the highest quality. There are three main contributions of this work. First, we have designed a real-time vector rendering system: a GPU-accelerated pipeline that takes a scene with 3D geometry as input and outputs a vector image. We call this system SVGPU: Scalable Vector Graphics on the GPU. Second, because vector images are resolution independent, we have designed a cloud pipeline for streaming them: a system design and optimizations for streaming vector images across interconnection networks that reduce the bandwidth required to transport real-time 3D content from server to client. Lastly, this thesis introduces a further benefit of vector images: a method for rendering them with the highest possible quality. We define a new set of operations on vector images that allows us to anti-alias them as they are rendered to a canonical 2D image. Our contributions provide the system design, optimizations, and algorithms required to bring the utilization and benefits of vector images much closer to the real-time graphics pipeline. Together they form an end-to-end pipeline for this purpose, i.e. "A High Performance Vector Rendering Pipeline."
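    As a toy illustration of the core idea only (not the SVGPU pipeline itself, which runs on the GPU and resolves visibility), the sketch below projects 3D triangles and emits them as resolution-independent SVG paths; all names and parameters are illustrative.

        import numpy as np

        def project(vertex, mvp):
            """Apply a 4x4 model-view-projection matrix and do the perspective divide."""
            v = mvp @ np.append(vertex, 1.0)
            return v[:2] / v[3]

        def triangles_to_svg(vertices, faces, mvp, width=800, height=600):
            """Emit projected triangles as an SVG document. No occlusion handling is
            attempted here; a real pipeline must resolve visibility first."""
            paths = []
            for face in faces:
                pts = [project(vertices[i], mvp) for i in face]
                # Map normalized device coordinates [-1, 1] to SVG pixel units.
                pts = [((x + 1) * 0.5 * width, (1 - y) * 0.5 * height) for x, y in pts]
                d = "M " + " L ".join(f"{x:.2f} {y:.2f}" for x, y in pts) + " Z"
                paths.append(f'<path d="{d}" fill="gray" stroke="black"/>')
            return (f'<svg xmlns="http://www.w3.org/2000/svg" '
                    f'width="{width}" height="{height}">' + "".join(paths) + "</svg>")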

    Feature-Based Textures

    This paper introduces feature-based textures, a new image representation that combines features and texture samples for high-quality texture mapping. Features identify boundaries within a texture where samples change discontinuously. They can be extracted from vector graphics representations, or explicitly added to raster images to improve sharpness. Texture lookups are then interpolated from samples while respecting these boundaries. We present results from a software implementation of this technique demonstrating quality, efficiency and low memory overhead.
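    A minimal 1D sketch of the boundary-respecting lookup idea follows: interpolation proceeds as usual unless a feature boundary lies between the two bracketing samples, in which case the lookup clamps to the sample on the same side. This is a simplification for illustration only; the paper's actual 2D representation and lookup are more involved.

        import bisect

        def fbt_lookup_1d(samples, boundaries, u):
            """Boundary-respecting lookup on a 1D texture.

            samples    -- texel values at integer positions 0..N-1
            boundaries -- sorted feature positions where the texture may change
                          discontinuously
            u          -- continuous lookup coordinate
            """
            i = max(0, min(int(u), len(samples) - 2))
            t = u - i
            # Feature boundaries lying strictly between sample i and sample i + 1.
            crossing = boundaries[bisect.bisect_right(boundaries, i):
                                  bisect.bisect_left(boundaries, i + 1)]
            if not crossing:
                return (1 - t) * samples[i] + t * samples[i + 1]       # ordinary lerp
            return samples[i] if u < crossing[0] else samples[i + 1]   # clamp at feature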

    An Upper Bound on the Average Size of Silhouettes

    It is a widely observed phenomenon in computer graphics that the size of the silhouette of a polyhedron is much smaller than the size of the whole polyhedron. This paper provides, for the first time, theoretical evidence supporting this for a large class of objects, namely for polyhedra that approximate surfaces in some reasonable way; the surfaces may be non-convex and non-differentiable and they may have boundaries. We prove that such polyhedra have silhouettes of expected size O(\sqrt{n}), where the average is taken over all points of view and n is the complexity of the polyhedron.
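    Written out, the result bounds the expected silhouette size over viewpoints (a restatement of the claim above in LaTeX, not an additional result):

        \mathbb{E}_{v}\big[\,|\mathrm{silhouette}(P, v)|\,\big] \;=\; O(\sqrt{n}),

    where the expectation is taken over all viewpoints v and n is the complexity of the polyhedron P.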

    Discontinuity Edge Overdraw

    Aliasing is an important problem when rendering triangle meshes. Efficient antialiasing techniques such as mipmapping greatly improve the filtering of textures defined over a mesh. A major component of the remaining aliasing occurs along discontinuity edges such as silhouettes, creases, and material boundaries. Framebuffer supersampling is a simple remedy, but 2x2 supersampling leaves behind significant temporal artifacts, while greater supersampling demands even more fill-rate and memory. We present an alternative that focuses effort on discontinuity edges by overdrawing such edges as antialiased lines. Although the idea is simple, several subtleties arise. Visible silhouette edges must be detected efficiently. Discontinuity edges need consistent orientations. They must be blended as they approach the silhouette to avoid popping. Unfortunately, edge blending results in blurriness. Our technique balances these two competing objectives of temporal smoothness and spatial sharpness. Finally, the best results are obtained when discontinuity edges are sorted by depth. Our approach proves surprisingly effective at reducing temporal artifacts commonly referred to as "crawling jaggies," with little added cost.
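    The blending and depth-sorting subtleties mentioned above could look roughly like the sketch below; the falloff parameter and helper names are illustrative assumptions rather than the paper's actual formulation.

        import numpy as np

        def edge_blend_alpha(n0, n1, view_dir, falloff=0.2):
            """Fade an overdrawn discontinuity edge in as its two adjacent faces
            approach the silhouette condition, so the antialiased line does not
            pop when the edge becomes a true silhouette."""
            s = float(np.dot(n0, view_dir)) * float(np.dot(n1, view_dir))
            if s <= 0.0:
                return 1.0                      # already a silhouette: fully opaque
            return max(0.0, 1.0 - s / falloff)  # fade out away from the silhouette

        def overdraw_order(edges, depth_of):
            """Sort discontinuity edges by a representative depth before overdraw
            (here back-to-front, assuming larger depth values are farther away)."""
            return sorted(edges, key=depth_of, reverse=True)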

    Non-photorealistic volume rendering using stippling techniques

    Simulating hand-drawn illustration techniques can succinctly express information in a manner that is communicative and informative. We present a framework for an interactive direct volume illustration system that simulates traditional stipple drawing. By combining the principles of artistic and scientific illustration, we explore several feature enhancement techniques to create effective, interactive visualizations of scientific and medical datasets. We also introduce a rendering mechanism that generates appropriate point lists at all resolutions during an automatic preprocess, and modifies rendering styles through different combinations of these feature enhancements. The new system is an effective way to interactively preview large, complex volume datasets in a concise, meaningful, and illustrative manner. Volume stippling is effective for many applications and provides a quick and efficient method to investigate volume models.
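    A much-simplified sketch of the point-list generation step is shown below: each voxel receives a number of jittered stipple points proportional to its normalized scalar value. The actual system builds point lists at multiple resolutions and applies several feature enhancements; the function below is only an illustrative stand-in.

        import numpy as np

        def stipple_points(volume, max_points_per_voxel=8, seed=0):
            """Generate a stipple point list for a 3D scalar volume: each voxel gets
            a number of randomly jittered points proportional to its normalized
            value, so denser regions appear darker when the points are drawn."""
            rng = np.random.default_rng(seed)
            vmin, vmax = float(volume.min()), float(volume.max())
            normalized = (volume - vmin) / (vmax - vmin + 1e-12)
            points = []
            for idx in np.ndindex(volume.shape):
                count = int(round(normalized[idx] * max_points_per_voxel))
                if count:
                    jitter = rng.random((count, 3))   # offsets inside the voxel
                    points.append(np.asarray(idx) + jitter)
            return np.concatenate(points) if points else np.empty((0, 3))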

    Interruptible rendering
