
    Polygonization of Multi-Component Non-Manifold Implicit Surfaces through A Symbolic-Numerical Continuation Algorithm

    In computer graphics, most algorithms for sampling implicit surfaces use a two-point numerical method. If the surface-describing function evaluates positive at the first point and negative at the second, we can say that the surface lies somewhere between them. Surfaces detected this way are called sign-variant implicit surfaces. However, two-point numerical methods may fail to detect and sample the surface, because the functions of many implicit surfaces evaluate either positive or negative everywhere around them. These surfaces are here called sign-invariant implicit surfaces. In this paper, instead of a two-point numerical method, we use a one-point numerical method to guarantee that our algorithm detects and samples both sign-variant and sign-invariant surface components or branches correctly. The algorithm follows a continuation approach to tessellate implicit surfaces: it applies symbolic factorization to decompose the function expression into symbolic components, and then samples each symbolic function component separately. This ensures that our algorithm detects, samples, and triangulates most components of implicit surfaces.
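    The sign-variant vs. sign-invariant distinction described above can be illustrated with a minimal sketch (not the paper's algorithm): a unit sphere given by f changes sign across the surface, while its squared form g is nonnegative everywhere yet has the same zero set, so a two-point sign test misses it and a one-point proximity test is needed. The epsilon tolerance below is an assumption for illustration.

```python
def sign_change(f, p, q):
    """Two-point test: a surface lies between p and q iff f changes sign."""
    return f(*p) * f(*q) < 0

def near_zero(f, p, eps=1e-3):
    """One-point test: detects sign-invariant components where f touches zero."""
    return abs(f(*p)) < eps

# Sign-variant sphere: f < 0 inside, f > 0 outside.
f = lambda x, y, z: x*x + y*y + z*z - 1.0
# Sign-invariant variant: g >= 0 everywhere, zero exactly on the same sphere.
g = lambda x, y, z: (x*x + y*y + z*z - 1.0) ** 2

inside, outside, on_surface = (0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 0.0, 0.0)
print(sign_change(f, inside, outside))  # True  -- two-point test succeeds
print(sign_change(g, inside, outside))  # False -- two-point test misses g
print(near_zero(g, on_surface))         # True  -- one-point test finds it
```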

    Normal Umbrella: A new primitive for triangulating parametric surfaces

    Typical methods for the triangulation of parametric surfaces use a sampling of the parameter space, and a poor choice of parameterization can spoil the triangulation or even cause the algorithm to fail. We present a new method that uses a local tessellation primitive to sample and triangulate a surface almost uniformly, so that its parameterization becomes irrelevant. If sampling density or triangle shape must be adaptive, the uniform mesh can be used either as an initial coarse mesh for a refinement process, or as a fine mesh to be reduced.

    Towards shape representation using trihedral mesh projections

    This paper explores the possibility of approximating a surface by a trihedral polygonal mesh plus some triangles at strategic places. The presented approximation has attractive properties. It turns out that the Z-coordinates of the vertices are completely governed by the Z-coordinates assigned to four selected ones. This allows describing the spatial polygonal mesh with just its 2D projection plus the heights of four vertices. As a consequence, these projections essentially capture the “spatial meaning” of the given surface, in the sense that, whatever spatial interpretations are drawn from them, they all exhibit essentially the same shape. This work was supported by the project 'Resolución de sistemas de ecuaciones cinemáticas para la simulación de mecanismos, posicionado interactivo de objetos y conformación de moléculas' (070-722). Peer Reviewed

    Piecewise Linear Approximations of Digitized Space Curves with Applications


    Image Processing Applications in Real Life: 2D Fragmented Image and Document Reassembly and Frequency Division Multiplexed Imaging

    In this era of modern technology, image processing is one of the most studied disciplines of signal processing, and its applications can be found in every aspect of our daily life. In this work three main applications of image processing have been studied. In chapter 1, frequency division multiplexed imaging (FDMI), a novel idea in the field of computational photography, is introduced. Using FDMI, multiple images are captured simultaneously in a single shot and can later be extracted from the multiplexed image. This is achieved by spatially modulating the images so that they are placed at different locations in the Fourier domain. Finally, a Texas Instruments digital micromirror device (DMD) based implementation of FDMI is presented and results are shown. Chapter 2 discusses the problem of image reassembly, which is to restore an image back to its original form from its pieces after it has been fragmented for different destructive reasons. We propose an efficient algorithm for the 2D image fragment reassembly problem based on solving a variation of the Longest Common Subsequence (LCS) problem. Our processing pipeline has three steps. First, the boundary of each fragment is extracted automatically; second, a novel boundary matching is performed by solving LCS to identify the best possible adjacency relationship among image fragment pairs; finally, a multi-piece global alignment is used to filter out incorrect pairwise matches and compose the final image. We perform experiments on complicated image fragment datasets and compare our results with existing methods to show the improved efficiency and robustness of our method. The problem of reassembling a hand-torn or machine-shredded document back to its original form is another useful version of the image reassembly problem.
    Reassembling a shredded document is different from reassembling an ordinary image, because the geometric shapes of the fragments do not carry much valuable information if the document has been machine-shredded rather than hand-torn. On the other hand, matching words and context can be used as an additional tool to help improve the task of reassembly. In the final chapter, the document reassembly problem is addressed through solving a graph optimization problem.
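    The boundary-matching step above builds on the Longest Common Subsequence problem. As a minimal sketch (the paper solves a variation of LCS over richer boundary descriptors; the symbol encoding below is a hypothetical simplification), the classic dynamic-programming solution is:

```python
def lcs_length(a, b):
    """Classic dynamic-programming LCS; O(len(a) * len(b)) time and space."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # symbols match: extend
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # skip one symbol
    return dp[m][n]

# Toy fragment boundaries encoded as sequences of quantized curvature symbols.
edge_a = "CCLRRCLC"
edge_b = "CLRRCC"
print(lcs_length(edge_a, edge_b))  # 6 -- a long common run suggests adjacency
```

A long common subsequence relative to the boundary lengths indicates a likely adjacent fragment pair, which the global alignment step can then confirm or reject.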

    Context-based coding of bilevel images enhanced by digital straight line analysis


    Object polygonization in traffic scenes using small Eigenvalue analysis

    Shape polygonization is an effective and convenient method to compress the storage requirements of a shape curve. Polygonal approximation offers an invariant representation of local properties even after digitization of a shape curve. In this paper, we propose a universal threshold for the polygonal approximation of any two-dimensional object boundary by exploiting the strength of small eigenvalues. We also propose to adapt the Jaccard index as a metric to measure the effectiveness of shape polygonization. In the context of this paper, we have conducted extensive experiments on semantically segmented images from the Cityscapes dataset to polygonize the objects in traffic scenes. Further, to corroborate the efficacy of the proposed method, experiments on the MPEG-7 shape database are conducted. Results obtained by the proposed technique are encouraging and can enable greater compression of annotation documents. This is particularly critical in the domain of instrumented vehicles, where large volumes of high-quality video must be exhaustively annotated without loss of accuracy and with minimal man-hours.
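    The Jaccard index used as the evaluation metric above is the ratio of intersection to union between the original shape region and its polygonized version. A minimal sketch over pixel sets (the example pixel grids are hypothetical, not from the paper):

```python
def jaccard(a, b):
    """Jaccard index |A intersect B| / |A union B| between two pixel sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical example: pixels covered by the original object mask
# vs. the filled polygonal approximation.
original = {(x, y) for x in range(10) for y in range(10)}     # 100 pixels
polygon  = {(x, y) for x in range(1, 10) for y in range(10)}  #  90 pixels
print(jaccard(original, polygon))  # 0.9 -- 90 shared pixels over a 100-pixel union
```

A value of 1.0 means the polygon reproduces the mask exactly; values near 1.0 indicate a faithful but more compact boundary.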

    Analysis of Diagnostic Images of Artworks and Feature Extraction: Design of a Methodology

    Digital images represent the primary tool for diagnostics and documentation of the state of preservation of artifacts. Today the interpretive filters that allow one to characterize information and communicate it are extremely subjective. Our research goal is to study a quantitative analysis methodology to facilitate and semi-automate the recognition and polygonization of areas corresponding to the characteristics searched for. To this end, several algorithms have been tested that allow for separating the characteristics and creating binary masks to be statistically analyzed and polygonized. Since our methodology aims to offer a conservator-restorer model to obtain useful graphic documentation in a short time that is usable for design and statistical purposes, this process has been implemented in a single Geographic Information Systems (GIS) application. Amura, Annamaria; Aldini, Alessandro; Pagnotta, Stefano; Salerno, Emanuele; Tonazzini, Anna; Triolo, Paolo
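    The binary masks mentioned above can be produced by thresholding a grayscale diagnostic image. A minimal sketch, assuming a simple global threshold (the paper tests several separation algorithms; the image values and threshold below are hypothetical):

```python
def binary_mask(image, threshold):
    """Threshold a grayscale image (list of rows of 0-255 values) into a 0/1 mask."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

# Hypothetical 2x3 grayscale patch from a diagnostic image.
gray = [[10, 200, 180],
        [ 0,  90, 255]]
print(binary_mask(gray, 128))  # [[0, 1, 1], [0, 0, 1]]
```

The resulting mask can then be polygonized and analyzed statistically, e.g. inside a GIS layer, as the abstract describes.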