
    Multi-texture image segmentation

    Visual perception of images is closely related to the recognition of the different texture areas within an image. Identifying the boundaries of these regions is an important step in image analysis and image understanding. This thesis presents supervised and unsupervised methods that allow efficient segmentation of the texture regions within multi-texture images. The features used by the methods are based on a measure of the fractal dimension of surfaces in several directions, which allows the image to be transformed into a set of feature images; no direct measurement of the fractal dimension is made, however. Using this set of features, supervised and unsupervised statistical processing schemes are presented which produce low classification error rates. Natural texture images are examined, with particular application to the analysis of sonar images of the seabed. A number of processes based on fractal models for texture synthesis are also presented. These are used to produce realistic images of natural textures, again with particular reference to sonar images of the seabed, and they show the importance of phase and directionality in our perception of texture. A further extension suggests possible uses for image coding and object identification.
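    The abstract does not spell out the feature computation, but one common way to obtain directional, fractal-dimension-style features without measuring the fractal dimension directly is to fit the log-log slope of mean intensity increments against lag distance along each direction; for fractional-Brownian-like surfaces that slope is the Hurst exponent H, with fractal dimension 3 - H. A minimal Python sketch under that assumption (all names illustrative, not the thesis's exact method):

    import numpy as np

    def directional_fractal_feature(img, direction, lags=(1, 2, 4, 8)):
        """Hurst-like slope of log(mean increment) vs log(lag) along (dy, dx)."""
        dy, dx = direction
        means = []
        for s in lags:
            oy, ox = dy * s, dx * s
            # Pair each pixel with its neighbour at offset (oy, ox).
            a = img[max(oy, 0):img.shape[0] + min(oy, 0),
                    max(ox, 0):img.shape[1] + min(ox, 0)].astype(float)
            b = img[max(-oy, 0):img.shape[0] + min(-oy, 0),
                    max(-ox, 0):img.shape[1] + min(-ox, 0)].astype(float)
            means.append(np.abs(a - b).mean())
        slope, _ = np.polyfit(np.log(lags), np.log(means), 1)
        return slope  # Hurst-like exponent; fractal dimension ~ 3 - slope

    # One feature image per direction (here 0, 90, 45, 135 degrees) could
    # then feed a per-pixel supervised or unsupervised classifier.
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]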

    Comparative performance analysis of texture characterization models in DIRSIG

    The analysis and quantitative measurement of image texture is a complex and intriguing problem that has recently received a considerable amount of attention from the diverse fields of computer graphics, human vision, biomedical imaging, computer science, and remote sensing. In particular, textural feature quantification and extraction are crucial tasks for each of these disciplines, and as such, numerous techniques have been developed to segment or classify images based on texture, as well as to synthesize textures. However, validation and performance analysis of these texture characterization models has been largely qualitative, based on visual inspection of synthetic textures to judge their degree of similarity to the original sample texture imagery. In this work, four fundamentally different texture modeling algorithms have been implemented as necessary into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. Two of the models tested are variants of a statistical Z-Score selection model, while the remaining two involve a texture synthesis and a spectral end-member fractional abundance map approach, respectively. A detailed validation and comparative performance analysis of each model was then carried out on several texturally significant regions of two counterpart real and synthetic DIRSIG images which have differing spatial and spectral resolutions. The quantitative assessment of each model utilized a set of four performance metrics derived from spatial Gray Level Co-occurrence Matrix (GLCM) analysis, hyperspectral Signal-to-Clutter Ratio (SCR) measures, mean filter (MF) spatial metrics, and a new concept termed the Spectral Co-occurrence Matrix (SCM) metric, which permits the simultaneous measurement of spatial and spectral texture. In combination, these performance measures attempt to determine which texture characterization model best captures the correct statistical and radiometric attributes of the corresponding real image textures in both the spatial and spectral domains. The motivation for this work is to refine our understanding of the complexities of texture phenomena so that an optimal texture characterization model that accurately accounts for these complexities can eventually be implemented into a synthetic image generation (SIG) model. Further, conclusions are drawn regarding which of the existing texture models achieve realistic levels of spatial and spectral clutter, thereby permitting more effective and robust testing of hyperspectral algorithms in synthetic imagery.
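    As one concrete illustration of the spatial GLCM part of such a metric set, the scikit-image sketch below compares co-occurrence statistics of matching real and synthetic regions. The particular statistics (contrast, homogeneity, correlation), offsets, and the relative-error comparison are illustrative assumptions, not the study's exact metric definitions; 8-bit grayscale region crops are assumed.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_stats(region, distances=(1, 2, 4), angles=(0, np.pi / 4, np.pi / 2)):
        """Mean GLCM statistics over the given pixel offsets (region: uint8)."""
        glcm = graycomatrix(region, distances, angles,
                            levels=256, symmetric=True, normed=True)
        return {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "homogeneity", "correlation")}

    def texture_fidelity(real_region, synth_region):
        """Relative error of the synthetic region's GLCM statistics."""
        real, synth = glcm_stats(real_region), glcm_stats(synth_region)
        return {k: abs(synth[k] - real[k]) / abs(real[k]) for k in real}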

    Performance Analysis of Improved Methodology for Incorporation of Spatial/Spectral Variability in Synthetic Hyperspectral Imagery

    Synthetic imagery has traditionally been used to support sensor design by enabling design engineers to pre-evaluate image products during the design and development stages. Increasingly, exploitation analysts are looking to synthetic imagery as a way to develop and test exploitation algorithms before image data are available from new sensors. Even when sensors are available, synthetic imagery can significantly aid algorithm development by providing a wide range of ground-truthed images with varying illumination, atmospheric, viewing, and scene conditions. One limitation of synthetic data is that the background variability is often too bland: it does not exhibit the spatial and spectral variability present in real data. In this work, four fundamentally different texture modeling algorithms will first be implemented as necessary into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model environment. Two of the models to be tested are variants of a statistical Z-Score selection model, while the remaining two involve a texture synthesis and a spectral end-member fractional abundance map approach, respectively. A detailed comparative performance analysis of each model will then be carried out on several texturally significant regions of the resultant synthetic hyperspectral imagery. The quantitative assessment of each model will utilize a set of three performance metrics derived from spatial Gray Level Co-occurrence Matrix (GLCM) analysis, hyperspectral Signal-to-Clutter Ratio (SCR) measures, and a new concept termed the Spectral Co-occurrence Matrix (SCM) metric, which permits the simultaneous measurement of spatial and spectral texture. Previous research efforts on the validation and performance analysis of texture characterization models have been largely qualitative, based on visual inspection of synthetic textures to judge their degree of similarity to the original sample texture imagery. The quantitative measures used in this study will, in combination, attempt to determine which texture characterization models best capture the correct statistical and radiometric attributes of the corresponding real image textures in both the spatial and spectral domains. The motivation for this work is to refine our understanding of the complexities of texture phenomena so that an optimal texture characterization model that accurately accounts for these complexities can eventually be implemented into a synthetic image generation (SIG) model. Further, conclusions will be drawn regarding which of the candidate texture models are able to achieve realistic levels of spatial and spectral clutter, thereby permitting more effective and robust testing of hyperspectral algorithms in synthetic imagery.
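    The hyperspectral SCR measure is not defined in the abstract; one common Mahalanobis-style formulation contrasts a target (or region-mean) spectrum against the background mean, normalized by the clutter covariance. A minimal sketch under that assumption, which may differ from the variant used in this study:

    import numpy as np

    def signal_to_clutter_ratio(target_spectrum, clutter_pixels):
        """clutter_pixels: (n_pixels, n_bands); target_spectrum: (n_bands,)."""
        mu = clutter_pixels.mean(axis=0)
        cov = np.cov(clutter_pixels, rowvar=False)
        # Small ridge term keeps the clutter covariance invertible.
        cov += 1e-6 * np.trace(cov) / cov.shape[0] * np.eye(cov.shape[0])
        d = target_spectrum - mu
        return float(np.sqrt(d @ np.linalg.solve(cov, d)))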

    Visual Object Networks: Image Generation with Disentangled 3D Representation

    Recent progress in deep generative models has led to tremendous breakthroughs in image generation. However, while existing models can synthesize photorealistic images, they lack an understanding of our underlying 3D world. We present a new generative model, Visual Object Networks (VON), synthesizing natural images of objects with a disentangled 3D representation. Inspired by classic graphics rendering pipelines, we unravel our image formation process into three conditionally independent factors (shape, viewpoint, and texture) and present an end-to-end adversarial learning framework that jointly models 3D shapes and 2D images. Our model first learns to synthesize 3D shapes that are indistinguishable from real shapes. It then renders the object's 2.5D sketches (i.e., silhouette and depth map) from its shape under a sampled viewpoint. Finally, it learns to add realistic texture to these 2.5D sketches to generate natural images. The VON not only generates images that are more realistic than state-of-the-art 2D image synthesis methods, but also enables many 3D operations such as changing the viewpoint of a generated image, editing of shape and texture, linear interpolation in texture and shape space, and transferring appearance across different objects and viewpoints.
    Comment: NeurIPS 2018. Code: https://github.com/junyanz/VON Website: http://von.csail.mit.edu
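    The shape-viewpoint-texture factorization can be read as a three-stage composition; the PyTorch skeleton below is a schematic restatement of that order only, with every module name a placeholder rather than the released VON code (see the linked repository for the real implementation):

    import torch.nn as nn

    class VONSketch(nn.Module):
        """Schematic shape -> viewpoint -> texture pipeline (placeholders)."""
        def __init__(self, shape_net, projector, texture_net):
            super().__init__()
            self.shape_net = shape_net      # z_shape -> 3D voxel grid
            self.projector = projector      # (voxels, viewpoint) -> 2.5D sketch
            self.texture_net = texture_net  # (sketch, z_texture) -> RGB image

        def forward(self, z_shape, z_texture, viewpoint):
            voxels = self.shape_net(z_shape)            # learned 3D shape prior
            sketch = self.projector(voxels, viewpoint)  # silhouette + depth map
            return self.texture_net(sketch, z_texture)  # add realistic texture

    # Disentanglement means z_shape, viewpoint, and z_texture can each be
    # resampled or interpolated while the other two are held fixed.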

    TextureGAN: Controlling Deep Image Synthesis with Texture Patches

    In this paper, we investigate deep image synthesis guided by sketch, color, and texture. Previous image synthesis methods can be controlled by sketch and color strokes, but we are the first to examine texture control. We allow a user to place a texture patch on a sketch at arbitrary locations and scales to control the desired output texture. Our generative network learns to synthesize objects consistent with these texture suggestions. To achieve this, we develop a local texture loss in addition to adversarial and content losses to train the generative network. We conduct experiments using sketches generated from real images and textures sampled from a separate texture database, and the results show that our proposed algorithm is able to generate plausible images that are faithful to user controls. Ablation studies show that our proposed pipeline can generate more realistic images than adapting existing methods directly.
    Comment: CVPR 2018 spotlight
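    The exact form of the local texture loss is not given in the abstract; a common way to realize one is a Gram-matrix comparison between deep features of the generated region and the user's texture patch. The sketch below makes that assumption, and the feature extractor, crop scheme, and function names are illustrative rather than the paper's definitions:

    import torch.nn.functional as F

    def gram(feats):
        """Channel correlation matrix of a (B, C, H, W) feature map."""
        b, c, h, w = feats.shape
        f = feats.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def local_texture_loss(feat_extractor, generated, texture_patch, box):
        """box = (top, left, height, width): region the patch should control."""
        t, l, h, w = box
        gen_patch = generated[:, :, t:t + h, l:l + w]
        if texture_patch.shape[-2:] != (h, w):
            texture_patch = F.interpolate(texture_patch, size=(h, w),
                                          mode="bilinear", align_corners=False)
        return F.mse_loss(gram(feat_extractor(gen_patch)),
                          gram(feat_extractor(texture_patch)))

    # Trained jointly with the adversarial and content terms, this pushes
    # feature statistics inside the box toward those of the input patch.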