
    Development and evaluation of a digital tool for virtual reconstruction of historic Islamic geometric patterns

    For the purpose of cultural heritage preservation, the task of recording and reconstructing visually complicated architectural geometric patterns faces many practical challenges. Existing traditional technologies rely heavily on the subjective nature of human perception in understanding the complexity of these patterns and depicting their color differences. This study explores one possible solution: utilizing digital techniques for reconstructing detailed historical Islamic geometric patterns. Its main hypothesis is that digital techniques offer many advantages over the human eye in recognizing subtle differences in light and color. The objective of the study is to design, test and evaluate an automatic visual tool for identifying deteriorated or incomplete archaeological Islamic geometric patterns captured in digital images, and then restoring them digitally, in order to produce accurate 2D reconstructed metric models. An experimental approach is used to develop, test and evaluate the specialized software. The goal of the experiment is to analyze the reconstructed output patterns in order to evaluate the digital tool with respect to reliability and structural accuracy, from the point of view of the researcher in the context of historic preservation. The research encapsulates two approaches within its methodology: a qualitative approach is evident in the process of program design, algorithm selection and evaluation, while a quantitative approach is manifested in the use of mathematical knowledge of pattern generation to interpret the available data and to simulate the rest based on it. The reconstruction process involves induction, deduction and analogy. The proposed method proved successful in capturing the accurate structural geometry of deteriorated straight-line patterns generated from the octagon-square basic grid. This research also concluded that the same conceptual method can be applied to reconstruct all two-dimensional Islamic geometric patterns; moreover, the same methodology can be applied to reconstruct many other pattern systems. The conceptual framework proposed by this study can serve as a platform for developing professional software for historic documentation. Future research should be directed towards artificial intelligence and pattern recognition techniques that can supplement human capabilities in accomplishing difficult tasks.
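    The abstract does not give implementation details, but the octagon-square basic grid it refers to is the 4.8.8 (truncated square) tiling, in which regular octagons sit on a square lattice and small squares fill the diagonal gaps. A minimal sketch of generating that underlying grid, assuming a unit edge length and using only NumPy (names and parameters are illustrative, not the study's tool):

```python
import numpy as np

def octagon(center, edge=1.0):
    """Vertices of a regular octagon with axis-aligned flat sides around `center`."""
    circumradius = edge / (2 * np.sin(np.pi / 8))
    angles = np.deg2rad(np.arange(22.5, 360.0, 45.0))   # 22.5, 67.5, ... degrees
    return center + circumradius * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def octagon_square_grid(nx=3, ny=3, edge=1.0):
    """Octagons of the 4.8.8 tiling on an nx-by-ny lattice.

    Adjacent octagons share their horizontal/vertical edges; the leftover
    diagonal gaps are the small squares of the tiling.
    """
    pitch = edge * (1 + np.sqrt(2))   # distance across flats = lattice spacing
    return [octagon(np.array([i, j], dtype=float) * pitch, edge)
            for i in range(nx) for j in range(ny)]
```

    A reconstruction pass could then snap line segments detected in the image to the nearest edges of this generated grid, which is roughly the "simulate the rest from mathematical knowledge of pattern generation" step described above.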

    Understanding the Structure of 3D Shapes

    Compact representations of three-dimensional objects are very often used in computer graphics to create effective ways to analyse, manipulate and transmit 3D models. Their ability to abstract from concrete shapes and expose their structure is important in a number of applications, spanning from computer animation, to medicine, to physical simulations. This thesis investigates new methods for the generation of compact shape representations. In the first part, the problem of computing optimal PolyCube base complexes is considered. PolyCubes are orthogonal polyhedra used in computer graphics to map both surfaces and volumes. Their ability to resemble the original models while exposing a very simple and regular structure is important in a number of applications, such as texture mapping, spline fitting and hex-meshing. The second part focuses on medial descriptors. In particular, two new algorithms for the generation of curve-skeletons are presented. These methods are based entirely on the visual appearance of the input; they are therefore independent of the type, number and quality of the primitives used to describe a shape, thus advancing the state of the art in the field.
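    The abstract does not spell out how the PolyCube base complexes are built, but a common first step in PolyCube construction is to label every surface facet with the axis direction closest to its normal; contiguous patches of equal label then become candidate PolyCube faces. A minimal sketch of that labelling step (a generic illustration, not this thesis' algorithm):

```python
import numpy as np

# The six axis-aligned directions a PolyCube face can take.
AXIS_DIRECTIONS = np.array([
    [ 1, 0, 0], [-1, 0, 0],
    [ 0, 1, 0], [ 0,-1, 0],
    [ 0, 0, 1], [ 0, 0,-1],
], dtype=float)

def polycube_labels(face_normals):
    """Assign each (unit) face normal the closest of the six axis directions.

    Returns an index 0..5 per face; contiguous patches of equal label are the
    candidate faces of a PolyCube base complex.
    """
    face_normals = np.asarray(face_normals, dtype=float)
    scores = face_normals @ AXIS_DIRECTIONS.T   # cosine similarity, shape (n_faces, 6)
    return np.argmax(scores, axis=1)
```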

    Document preprocessing and fuzzy unsupervised character classification

    This dissertation presents document preprocessing and fuzzy unsupervised character classification for automatically reading daily-received office documents that have complex layout structures, such as multiple columns and mixed-mode contents of text, graphics and half-tone pictures. First, the block segmentation algorithm is performed based on a simple two-step run-length smoothing to decompose a document into single-mode blocks. Next, block classification is performed based on clustering rules to classify each block into one of several types, such as text, horizontal or vertical lines, graphics, and pictures. The mean white-to-black transition is shown to be an invariant for textual blocks, and is useful for block discrimination. A fuzzy model for unsupervised character classification is designed to improve the robustness, correctness, and speed of the character recognition system. The classification procedure is divided into two stages. The first stage separates the characters into seven typographical categories based on the word structures of a text line. The second stage uses pattern matching to classify the characters in each category into a set of fuzzy prototypes based on a nonlinear weighted similarity function. A fuzzy model of unsupervised character classification, which is more natural in the representation of prototypes for character matching, is defined and the weighted fuzzy similarity measure is explored. The characteristics of the fuzzy model are discussed and used to speed up the classification process. After classification, the character recognition procedure is applied only to the limited set of fuzzy prototypes. To avoid information loss and extra distortion, a topography-based approach is proposed that is applied directly to the fuzzy prototypes to extract the skeletons. First, a convolution with a bell-shaped function is performed to obtain a smooth surface. Second, the ridge points are extracted by rule-based topographic analysis of the structure. Third, a membership function is assigned to ridge points, with values indicating the degrees of membership with respect to the skeleton of an object. Finally, the significant ridge points are linked to form the strokes of the skeleton, and cues from eigenvalue variation are used to deal with degradation and preserve connectivity. Experimental results show that our algorithm can reduce the deformation of junction points and correctly extract the whole skeleton even when a character is broken into pieces. For characters merged together, the breaking candidates can be easily located by searching for saddle points. A pruning algorithm is then applied at each breaking position. Finally, multiple context confirmation can be applied to increase the reliability of the breaking hypotheses.
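    The two-step run-length smoothing mentioned above follows the classic RLSA idea: white runs shorter than a threshold are filled in, once along rows and once along columns, and the two results are combined so that text lines merge into solid blocks. A rough sketch on a 2-D 0/1 array with 1 = ink (the thresholds here are illustrative, not the dissertation's values):

```python
import numpy as np

def smooth_runs(row, threshold):
    """Fill white (0) runs shorter than `threshold` with black (1) in a 1-D array."""
    out = row.copy()
    run_start = None
    for i, v in enumerate(row):
        if v == 0 and run_start is None:
            run_start = i
        elif v == 1 and run_start is not None:
            if i - run_start < threshold:
                out[run_start:i] = 1
            run_start = None
    return out

def rlsa(binary, h_thresh=30, v_thresh=20):
    """Two-step run-length smoothing: combine horizontal and vertical smoothing."""
    horiz = np.apply_along_axis(smooth_runs, 1, binary, h_thresh)
    vert = np.apply_along_axis(smooth_runs, 0, binary, v_thresh)
    return horiz & vert   # a pixel stays black only where both passes agree
```

    Block classification can then measure features such as the mean white-to-black transition count inside each connected smoothed block, as described in the abstract.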

    Image Segmentation using Human Visual System Properties with Applications in Image Compression

    In order to represent a digital image, a very large number of bits is required. For example, a 512 × 512 pixel, 256 gray level image requires over two million bits. This large number of bits is a substantial drawback when it is necessary to store or transmit a digital image. Image compression, often referred to as image coding, attempts to reduce the number of bits used to represent an image, while keeping the degradation in the decoded image to a minimum. One approach to image compression is segmentation-based image compression. The image to be compressed is segmented, i.e. the pixels in the image are divided into mutually exclusive spatial regions based on some criteria. Once the image has been segmented, information is extracted describing the shapes and interiors of the image segments. Compression is achieved by efficiently representing the image segments. In this thesis we propose an image segmentation technique which is based on centroid-linkage region growing and takes advantage of human visual system (HVS) properties. We systematically determine through subjective experiments the parameters for our segmentation algorithm which produce the most visually pleasing segmented images, and demonstrate the effectiveness of our method. We also propose a method for the quantization of segmented images based on HVS contrast sensitivity, and investigate the effect of quantization on segmented images.
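    Centroid-linkage region growing, as used above, compares each pixel against the running mean (centroid) gray level of an adjacent region and merges the pixel when the difference is below a threshold; otherwise a new region is started. A rough single-pass sketch (the fixed threshold here is arbitrary, not the HVS-derived parameter determined in the thesis):

```python
import numpy as np

def centroid_linkage_segment(image, threshold=10.0):
    """Single raster-scan centroid-linkage region growing on a grayscale image.

    Each pixel joins the left or upper neighbouring region whose running mean
    gray level is closest and within `threshold`; otherwise a new region starts.
    """
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)
    sums, counts = {}, {}            # per-region gray-level sum and pixel count
    next_label = 1
    for y in range(h):
        for x in range(w):
            g = float(image[y, x])
            candidates = []          # regions of the left and upper neighbours
            if x > 0:
                candidates.append(labels[y, x - 1])
            if y > 0:
                candidates.append(labels[y - 1, x])
            best, best_diff = None, threshold
            for lab in candidates:
                diff = abs(g - sums[lab] / counts[lab])
                if diff <= best_diff:
                    best, best_diff = lab, diff
            if best is None:         # no compatible neighbour: start a new region
                best = next_label
                sums[best], counts[best] = 0.0, 0
                next_label += 1
            labels[y, x] = best
            sums[best] += g
            counts[best] += 1
    return labels
```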

    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure, and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently only supports static scenes. This is because it is difficult to accurately deform massive numbers of volume elements and reconstruct the scene hierarchy in real time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and similarly gaps occur where deformation stretches the elements further than one discrete location apart. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example a cage or skeleton) that maps to a set of features to accelerate the deformation process. The problems with this technique are that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; therefore the control structure must also hierarchically capture features according to the varying volumetric resolution. This thesis investigates the deformation and rendering of massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in the fields of real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real time. This is achieved by automatically generating a control skeleton which is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
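    The hierarchy described above, with volumetric information stored at progressive resolutions in a tree and sampled by region of interest, is in the spirit of a sparse voxel octree. A toy sketch of such a structure; the names and layout are purely illustrative, not the thesis' data structure:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class OctreeNode:
    """One cube of the hierarchy: a leaf with a density value, or eight children."""
    value: float = 0.0
    children: Optional[List["OctreeNode"]] = None

def insert(node, x, y, z, value, depth, max_depth):
    """Insert a sample at normalized coordinates (x, y, z) in [0, 1)^3."""
    if depth == max_depth:
        node.value = value
        return
    if node.children is None:
        node.children = [OctreeNode() for _ in range(8)]
    # Pick the child octant and recurse into its half of the cube.
    ix, iy, iz = int(x >= 0.5), int(y >= 0.5), int(z >= 0.5)
    child = node.children[ix + 2 * iy + 4 * iz]
    insert(child, 2 * x - ix, 2 * y - iy, 2 * z - iz, value, depth + 1, max_depth)

def sample(node, x, y, z, max_depth):
    """Walk down only as far as the tree is refined and return the stored value."""
    for _ in range(max_depth):
        if node.children is None:
            break
        ix, iy, iz = int(x >= 0.5), int(y >= 0.5), int(z >= 0.5)
        node = node.children[ix + 2 * iy + 4 * iz]
        x, y, z = 2 * x - ix, 2 * y - iy, 2 * z - iz
    return node.value
```

    Because unrefined regions terminate early, only the branches covering the region of interest are ever touched, which is the property the abstract relies on for rendering massive scenes.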

    Numerical methods for polyline‐to‐point‐cloud registration with applications to patient‐specific stent reconstruction

    We present novel numerical methods for polyline-to-point-cloud registration and their application to patient-specific modeling of deployed coronary artery stents from image data. Patient-specific coronary stent reconstruction is an important challenge in computational hemodynamics and relevant to the design and improvement of the prostheses. It is an invaluable tool in large-scale clinical trials that computationally investigate the effect of new generations of stents on hemodynamics and eventually tissue remodeling. Given a point cloud of strut positions, which can be extracted from images, our stent reconstruction method aims at finding a geometrical transformation that aligns a model of the undeployed stent to the point cloud. Mathematically, we describe the undeployed stent as a polyline, which is a piecewise linear object defined by its vertices and edges. We formulate the nonlinear registration as an optimization problem whose objective function consists of a similarity measure, quantifying the distance between the polyline and the point cloud, and a regularization functional, penalizing undesired transformations. Using projections of points onto the polyline structure, we derive novel distance measures. Our formulation supports most commonly used transformation models, including very flexible nonlinear deformations. We also propose two regularization approaches ensuring the smoothness of the estimated nonlinear transformation. We demonstrate the potential of our methods using an academic 2D example and a real-life 3D bioabsorbable stent reconstruction problem. Our results show that the registration problem can be solved to sufficient accuracy within seconds using only a few Gauss-Newton iterations. In summary, we design a general and mathematically sound framework that includes novel (almost everywhere) differentiable distance measures and two new regularization approaches to overcome the ill-posedness and enable robust registration in the presence of outliers, and we demonstrate that the 3D registration problem arising in stent reconstruction can be solved within seconds using only a small number of Gauss-Newton iterations.
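    The formulation above (one residual per point, given by its projection distance onto the transformed polyline, minimized with a Gauss-Newton-type solver) can be illustrated with a small 2D sketch. The rigid transform, the synthetic data and SciPy's least_squares solver are stand-ins for the paper's much more general transformation models and regularizers:

```python
import numpy as np
from scipy.optimize import least_squares

def point_to_segment(p, a, b):
    """Distance from point p to the segment a-b via projection onto the segment."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def point_to_polyline(p, vertices):
    """Distance from p to the closest segment of the polyline `vertices`."""
    return min(point_to_segment(p, vertices[i], vertices[i + 1])
               for i in range(len(vertices) - 1))

def residuals(params, polyline, cloud):
    """Rigid transform (angle, tx, ty) of the polyline; one residual per cloud point."""
    theta, tx, ty = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    moved = polyline @ R.T + np.array([tx, ty])
    return np.array([point_to_polyline(p, moved) for p in cloud])

# Tiny example: samples along a rotated and translated copy of an L-shaped polyline.
polyline = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
t = np.linspace(0.0, 1.0, 10)[:, None]
samples = np.vstack([polyline[0] * (1 - t) + polyline[1] * t,
                     polyline[1] * (1 - t) + polyline[2] * t])
theta0 = 0.3
R0 = np.array([[np.cos(theta0), -np.sin(theta0)], [np.sin(theta0), np.cos(theta0)]])
cloud = samples @ R0.T + [0.5, -0.2]
fit = least_squares(residuals, x0=np.zeros(3), args=(polyline, cloud))
print(fit.x)   # should recover roughly (0.3, 0.5, -0.2)
```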

    Automatic skeletonization and skin attachment for realistic character animation.

    The realism of character animation is associated with a number of tasks ranging from modelling, skin deformation and motion generation to rendering. In this research we are concerned with two of them: skeletonization and weight assignment for skin deformation. The former is to generate a skeleton, which is placed within the character model and links the motion data to the skin shape of the character. The latter assists the modelling of realistic skin shape when a character is in motion. In current animation production practice, the task of skeletonization is primarily undertaken by hand, i.e. the animator produces an appropriate skeleton and binds it to the skin model of a character. This is inevitably very time-consuming and labour-intensive. To address this issue, in this thesis we present an automatic skeletonization framework. It aims at producing high-quality animatable skeletons without heavy human involvement while allowing the animator to maintain overall control of the process. In the literature, the term skeletonization can have different meanings. Most existing research on skeletonization is in the remit of CAD (Computer Aided Design). Although existing research is of significant reference value to animation, its downside is that the skeleton generated is either not appropriate for the particular needs of animation, or the methods are computationally expensive. Although some purpose-built animation skeleton generation techniques exist, unfortunately they rely on complicated post-processing procedures, such as thinning and pruning, which again can be undesirable. The proposed skeletonization framework makes use of a new geometric entity known as the 3D silhouette, which is an ordinary silhouette with its depth information recorded. We extract a curve skeleton from two 3D silhouettes of a character detected from its two perpendicular projections. The skeletal joints are identified by downsampling the curve skeleton, leading to the generation of the final animation skeleton. Efficiency and quality are the major performance indicators in animation skeleton generation. Our framework achieves the former by providing a 2D solution to the 3D skeletonization problem; the reduction in dimension brings much faster performance. Experiments and comparisons are carried out to demonstrate the computational simplicity, and the accuracy of the framework is also verified via these experiments and comparisons. To link a skeleton to the skin, we accordingly present a skin attachment framework aiming at automatic and reasonable weight distribution. It differs from conventional algorithms in taking topological information into account during weight computation. An effective range is defined for each joint: skin vertices located outside the effective range will not be affected by that joint. By this means, we provide a solution to remove the influence of a topologically distant, and hence most likely irrelevant, joint on a vertex. A user-defined parameter is also provided in this algorithm, which allows different deformation effects to be obtained according to the user's needs. Experiments and comparisons show that the presented framework results in weight distribution of good quality, thus freeing animators from tedious manual weight editing. Furthermore, it is flexible enough to be used with various deformation algorithms.
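    The skin-attachment idea above, where a joint influences only vertices inside its effective range and a user parameter controls the deformation falloff, can be sketched roughly as follows. Plain Euclidean distances are used here for brevity, whereas the thesis restricts influence using topological information; all names and defaults are illustrative:

```python
import numpy as np

def skin_weights(vertices, joints, effective_range, falloff=2.0):
    """Per-vertex, per-joint weights: inverse-distance falloff, zero outside the range.

    vertices:        (n, 3) skin vertex positions
    joints:          (m, 3) joint positions
    effective_range: scalar or (m,) radius beyond which a joint has no influence
    falloff:         user parameter controlling how quickly influence decays
    """
    d = np.linalg.norm(vertices[:, None, :] - joints[None, :, :], axis=2)   # (n, m)
    w = np.where(d < effective_range, 1.0 / (d + 1e-8) ** falloff, 0.0)
    # A vertex outside every effective range falls back to its nearest joint.
    orphan = w.sum(axis=1) == 0
    w[orphan, np.argmin(d[orphan], axis=1)] = 1.0
    return w / w.sum(axis=1, keepdims=True)    # rows sum to 1
```

    Cutting the influence to zero outside the effective range is what removes the contribution of distant (and hence likely irrelevant) joints, which is the behaviour the abstract attributes to the topological restriction.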

    Framework for Automatic Identification of Paper Watermarks with Chain Codes

    In this dissertation, I present a new framework for automated description, archiving, and identification of paper watermarks found in historical documents and manuscripts. The early manufacturers of paper introduced the embedding of identifying marks and patterns as a sign of a distinct origin and perhaps as a signature of quality. Thousands of watermarks have been studied, classified, and archived. Most of the classification categories are based on image similarity and are searchable through a set of defined contextual descriptors. The novel method presented here is for automatic classification, identification (matching) and retrieval of watermark images based on chain code (CC) descriptors. The approach for generating unique CCs includes a novel image preprocessing method that provides a rotation- and scale-invariant representation of watermarks. The unique codes are truly reversible, providing high-ratio lossless compression, fast searching, and image matching. The development of a novel distance measure for CC comparison is also presented. Examples of the complete process are given using recently acquired watermarks digitized with hyper-spectral imaging of Summa Theologica, the work of Antonino Pierozzi (1389–1459). The performance of the algorithm on large datasets is demonstrated using watermark datasets from well-known library catalogue collections. (Ph.D. thesis, School of Computing and Engineering, University of Missouri–Kansas City, 2017; advisor: Reza Derakhshani.) Contents: Introduction; Paper and paper watermarks; Automatic identification of paper watermarks; Rotation, scale and translation invariant chain code; Comparison of the RST-invariant chain code; Automatic identification of watermarks with chain codes; Watermark composite feature vector; Summary; Appendix A: Watermarks from the Bernstein Collection used in this study; Appendix B: The original and transformed images of watermarks; Appendix C: The transformed and scaled images of watermarks; Appendix D: Example of chain code.
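    Chain code (CC) descriptors encode a traced contour as the sequence of directions between successive boundary pixels (Freeman's 8-direction code); taking first differences of that sequence is the usual inexpensive route to rotation robustness, which gives a flavour of the rotation- and scale-invariant representation described above. A minimal sketch, not the dissertation's preprocessing or matching pipeline:

```python
import numpy as np

# Freeman 8-connectivity directions, numbered 0..7 counter-clockwise from east.
DIRECTIONS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_code(contour):
    """Freeman chain code of an ordered list of (x, y) boundary pixels."""
    code = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        step = (int(np.sign(x1 - x0)), int(np.sign(y1 - y0)))
        code.append(DIRECTIONS.index(step))
    return code

def first_difference(code):
    """First differences (mod 8): a global shift of all direction labels cancels
    out, which is the usual cheap route to rotation robustness."""
    return [(b - a) % 8 for a, b in zip(code, code[1:])]

# Example: a small square traced counter-clockwise, back to its start point.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0)]
print(chain_code(square))                      # [0, 0, 2, 2, 4, 4, 6, 6]
print(first_difference(chain_code(square)))    # [0, 2, 0, 2, 0, 2, 0]
```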