
    Feature Match for Medical Images

    This paper presents Feature Match, a generalized approximate nearest neighbor field (ANNF) computation framework between a source and a target image. The proposed algorithm can estimate ANNF maps between any pair of images, not necessarily related, and this generalization is achieved through appropriate spatial-range transforms. To compute ANNF maps, global color adaptation is applied as a range transform on the source image. Image patches from the image pair are approximated using low-dimensional features, which are used along with a KD-tree to estimate the ANNF map. This ANNF map is further refined based on image coherency and spatial transforms. The proposed generalization enables a wider range of vision applications that have not previously been handled within the ANNF framework. One such application is illustrated here, namely optic disk detection. This application concerns medical imaging, where optic disks are located in retinal images using a healthy optic disk image as a common target image. ANNF mappings are used in this application, and experiments show that the proposed approach is faster and more accurate than state-of-the-art techniques.
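    As a rough illustration of the patch-feature/KD-tree step described above (not the authors' implementation), the Python sketch below estimates a nearest-neighbor field between two grayscale images by summarizing each patch with low-dimensional block-mean features and indexing the target features with SciPy's cKDTree; the patch size, the block-mean features, and the grayscale assumption are illustrative choices.

        # Minimal ANNF sketch: low-dimensional patch features + KD-tree lookup.
        # Assumptions: grayscale float images, 8x8 patches, and features taken
        # as 2x2 block means (a stand-in for the paper's low-dimensional features).
        import numpy as np
        from scipy.spatial import cKDTree

        def patch_features(img, p=8):
            """Extract every p x p patch and summarize it by its 2x2 block means."""
            H, W = img.shape
            feats, coords = [], []
            for y in range(H - p + 1):
                for x in range(W - p + 1):
                    patch = img[y:y+p, x:x+p]
                    f = patch.reshape(2, p // 2, 2, p // 2).mean(axis=(1, 3)).ravel()
                    feats.append(f)
                    coords.append((y, x))
            return np.array(feats), np.array(coords)

        def annf(source, target, p=8):
            """For each source patch, return the (y, x) of its matched target patch."""
            sf, sc = patch_features(source, p)
            tf, tc = patch_features(target, p)
            tree = cKDTree(tf)             # index target features once
            _, idx = tree.query(sf, k=1)   # nearest neighbor in feature space
            return sc, tc[idx]             # source coords -> matched target coords

    The approximation here comes entirely from the low-dimensional features; the paper additionally refines the map using image coherency and spatial transforms, which this sketch omits.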

    A New Texture Synthesis Algorithm Based on Wavelet Packet Tree

    This paper presents an efficient texture synthesis algorithm based on the wavelet packet tree (TSWPT). It has the advantage of using a multiresolution representation with a greater diversity of basis functions for nonlinear time series applications such as fractal images. The input image is decomposed into wavelet packet coefficients, which are rearranged and organized to form hierarchical trees called wavelet packet trees. A two-step matching, that is, coarse matching based on low-frequency wavelet packet coefficients followed by fine matching based on middle- and high-frequency wavelet packet coefficients, is proposed for texture synthesis. Experimental results show that the TSWPT algorithm is preferable, especially in terms of computation time.
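    A minimal Python sketch of the two-step matching idea, assuming candidate patches are same-sized NumPy arrays: candidates are first ranked by their low-frequency wavelet packet coefficients, and the shortlist is then re-ranked by the remaining coefficients. It uses PyWavelets for the packet decomposition; the wavelet, decomposition level, and shortlist size are assumptions, and the full TSWPT synthesis loop is not shown.

        # Two-step patch matching on wavelet packet coefficients (illustrative only).
        import numpy as np
        import pywt  # PyWavelets

        def packet_coeffs(patch, wavelet="db1", level=2):
            """Split a patch into low-frequency and middle/high-frequency coefficients."""
            wp = pywt.WaveletPacket2D(data=patch, wavelet=wavelet, maxlevel=level)
            nodes = wp.get_level(level, order="natural")
            low = nodes[0].data.ravel()                       # approximation band
            high = np.concatenate([n.data.ravel() for n in nodes[1:]])
            return low, high

        def best_match(query, candidates, k=5):
            """Coarse match on low-frequency coefficients, fine match on the rest."""
            q_low, q_high = packet_coeffs(query)
            lows, highs = zip(*(packet_coeffs(c) for c in candidates))
            coarse = np.array([np.sum((q_low - l) ** 2) for l in lows])
            shortlist = np.argsort(coarse)[:k]                # keep k coarse matches
            fine = [np.sum((q_high - highs[i]) ** 2) for i in shortlist]
            return shortlist[int(np.argmin(fine))]            # index of best candidate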

    Transfer of albedo and local depth variation to photo-textures

    Acquisition of displacement and albedo maps for full building façades is a difficult problem and traditionally achieved through a labor-intensive artistic process. In this paper, we present a material appearance transfer method, Transfer by Analogy, designed to infer surface detail and diffuse reflectance for textured surfaces like those present in building façades. We begin by acquiring small exemplars (displacement and albedo maps) in accessible areas, where capture conditions can be controlled. We then transfer these properties to a complete photo-texture constructed from reference images and captured under diffuse daylight illumination. Our approach allows super-resolution inference of albedo and displacement from information in the photo-texture. When transferring appearance from multiple exemplars to façades containing multiple materials, our approach also sidesteps the need for segmentation. We show how we use these methods to create relightable models with a high degree of texture detail, reproducing the visually rich self-shadowing effects that would normally be difficult to capture using just simple consumer equipment. Copyright © 2012 by the Association for Computing Machinery, Inc.
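    The following toy Python sketch conveys the general analogy-style transfer idea (find the most similar exemplar photo patch, copy its albedo) rather than the paper's actual method, which also handles displacement, super-resolution inference, and multiple materials; the grayscale inputs, patch size, and function name are assumptions.

        # Toy analogy-style transfer: copy the albedo of the most similar exemplar patch.
        import numpy as np
        from scipy.spatial import cKDTree

        def transfer_albedo(exemplar_photo, exemplar_albedo, photo_texture, p=8):
            """All inputs are 2D float arrays; exemplar_photo and exemplar_albedo are aligned."""
            H, W = exemplar_photo.shape
            keys, vals = [], []
            for y in range(0, H - p + 1, p):
                for x in range(0, W - p + 1, p):
                    keys.append(exemplar_photo[y:y+p, x:x+p].ravel())
                    vals.append(exemplar_albedo[y:y+p, x:x+p])
            tree = cKDTree(np.array(keys))
            out = np.zeros_like(photo_texture)
            Ht, Wt = photo_texture.shape
            for y in range(0, Ht - p + 1, p):
                for x in range(0, Wt - p + 1, p):
                    _, i = tree.query(photo_texture[y:y+p, x:x+p].ravel())
                    out[y:y+p, x:x+p] = vals[i]   # paste matched albedo patch
            return out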

    Optimized synthesis of art patterns and layered textures


    PatchNet: a patch-based image representation for interactive library-driven image editing

    We introduce PatchNets, a compact, hierarchical representation describing structural and appearance characteristics of image regions, for use in image editing. In a PatchNet, an image region with coherent appearance is summarized by a graph node, associated with a single representative patch, while geometric relationships between different regions are encoded by labelled graph edges giving contextual information. The hierarchical structure of a PatchNet allows a coarse-to-fine description of the image. We show how this PatchNet representation can be used as a basis for interactive, library-driven, image editing. The user draws rough sketches to quickly specify editing constraints for the target image. The system then automatically queries an image library to find semantically-compatible candidate regions to meet the editing goal. Contextual image matching is performed using the PatchNet representation, allowing suitable regions to be found and applied in a few seconds, even from a library containing thousands of images.
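    A minimal Python sketch of the node/edge structure the abstract describes: one representative patch per coherent region, labelled edges for contextual relations, and children for the coarse-to-fine hierarchy. The class and field names are illustrative assumptions, not the authors' API.

        # Illustrative PatchNet-style node: representative patch, labelled edges, children.
        from dataclasses import dataclass, field
        from typing import List, Tuple
        import numpy as np

        @dataclass
        class PatchNode:
            region_mask: np.ndarray                  # pixels belonging to the region
            representative_patch: np.ndarray         # single patch summarizing appearance
            children: List["PatchNode"] = field(default_factory=list)   # finer sub-regions
            # contextual edges: (neighbor node, relation label such as "above" / "left-of")
            edges: List[Tuple["PatchNode", str]] = field(default_factory=list)

            def add_edge(self, other: "PatchNode", label: str) -> None:
                """Record a symmetric, labelled contextual relation between two regions."""
                self.edges.append((other, label))
                other.edges.append((self, label))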

    Synthesis of Pattern Images Using Kosode Byobu Images

    Digital archiving of historical materials has made it easy to work with precious historical materials on computers, and advanced studies such as highly intelligent applications of historical materials and new exhibition techniques using technologies such as computer graphics and virtual reality are actively being conducted. This research focuses on the digital data of Kosode Byobu, a historical material in which real Kosode (Japanese clothes) are pasted onto Byobu (folding screens). The various patterns in the Kosode are one of the elements that attract visitors' interest. If a method for easily generating a user's original pattern images from the existing patterns in Kosode Byobu could be realized, it would lead to a new Kosode Byobu display system using digital data. Against this background, this research proposes a method that automatically generates a new pattern image from existing patterns in Kosode Byobu selected by the user. Since the target users are general museum visitors, complicated operations such as image editing should not be required in the pattern-generation procedure. Combining Poisson Image Editing and texture synthesis technologies, we develop a method that automatically generates a new pattern image carrying the features of the Kosode Byobu pattern images selected by the user; the user is only required to select the images to be synthesized. The proposed method consists of three steps: (1) create a background fabric image with the patterns removed, (2) paste the patterns onto the background fabric image using Poisson Image Editing, and (3) generate a new synthetic pattern image combining multiple patterns by texture synthesis. Furthermore, we conducted an experiment in which subjects used image-editing software to create a pattern image similar to one generated by the proposed method. The results showed that the proposed method can synthesize images in a consistently shorter time than manual creation by the subjects, regardless of the number of input images.
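    As an illustration of step (2) alone, the sketch below pastes a selected pattern onto a background fabric image with Poisson blending via OpenCV's seamlessClone; the file names, the paste location, and the use of a full mask are assumptions, and steps (1) and (3) of the proposed method are not shown.

        # Step (2) illustration: Poisson blending of a pattern onto the background fabric.
        import cv2
        import numpy as np

        background = cv2.imread("background_fabric.png")   # step (1) output (assumed file)
        pattern = cv2.imread("kosode_pattern.png")          # pattern cut from Kosode Byobu
        mask = np.full(pattern.shape, 255, np.uint8)         # blend the whole pattern patch

        # Paste at the center of the background; the pattern is assumed to fit inside it.
        center = (background.shape[1] // 2, background.shape[0] // 2)
        composite = cv2.seamlessClone(pattern, background, mask, center, cv2.NORMAL_CLONE)
        cv2.imwrite("composite_pattern.png", composite)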

    Synthesizing and Editing Photo-realistic Visual Objects

    In this thesis we investigate novel methods of synthesizing new images of a deformable visual object using a collection of images of the object. We investigate both parametric and non-parametric methods, as well as a combination of the two, for the problem of image synthesis. Our main focus is complex visual objects, specifically deformable objects and objects with varying numbers of visible parts. We first introduce a sketch-driven image synthesis system, which allows the user to draw ellipses and outlines in order to sketch a rough shape of animals as a constraint on the synthesized image. This system interactively provides feedback in the form of ellipse and contour suggestions to the partial sketch of the user. The user's sketch guides the non-parametric synthesis algorithm that blends patches from two exemplar images in a coarse-to-fine fashion to create a final image. We evaluate the method and synthesized images through two user studies. Instead of non-parametric blending of patches, a parametric model of the appearance is more desirable, as its appearance representation is shared between all images of the dataset. Hence, we propose Context-Conditioned Component Analysis (C-CCA), a probabilistic generative parametric model, which describes images as a linear combination of basis functions. The basis functions are evaluated for each pixel using a context vector computed from the local shape information. We evaluate C-CCA qualitatively and quantitatively on inpainting, appearance transfer and reconstruction tasks. Drawing samples of C-CCA generates novel, globally-coherent images, which, unfortunately, lack high-frequency details due to dimensionality reduction and misalignment. We develop a non-parametric model that enhances the samples of C-CCA with locally-coherent, high-frequency details. The non-parametric model efficiently finds patches from the dataset that match the C-CCA sample and blends the patches together. We analyze the results of the combined method on the datasets of horse and elephant images.
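    A toy NumPy sketch of the stated model form, pixel appearance as a linear combination of basis functions evaluated on a per-pixel context vector; it only illustrates the shape of the computation, not the probabilistic C-CCA model itself, and every dimension and the tanh basis are assumptions.

        # Context-conditioned linear appearance model (toy illustration only).
        import numpy as np

        rng = np.random.default_rng(0)
        n_pixels, d_context, n_basis = 1000, 8, 16

        context = rng.normal(size=(n_pixels, d_context))        # per-pixel shape context
        basis_params = rng.normal(size=(d_context, n_basis))    # maps context -> basis responses
        coefficients = rng.normal(size=(n_basis,))              # shared linear coefficients

        basis_responses = np.tanh(context @ basis_params)       # basis functions of the context
        pixel_values = basis_responses @ coefficients           # reconstructed pixel appearance
        print(pixel_values.shape)                                # (1000,)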

    Synthesizing structured image hybrids

    [Figure 1: Image hybrids. Given a set of input exemplar images (left), our algorithm automatically produces arbitrarily many hybrid images (right).]
    Example-based texture synthesis algorithms generate novel texture images from example data. A popular hierarchical pixel-based approach uses spatial jitter to introduce diversity, at the risk of breaking coarse structure beyond repair. We propose a multiscale descriptor that enables appearance-space jitter, which retains structure. This idea enables repurposing of existing texture synthesis implementations for a qualitatively different problem statement and class of inputs: generating hybrids of structured images.
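    A rough Python sketch of what jitter in appearance space (as opposed to coordinate space) could look like: perturb patch descriptors with noise and snap them back to the nearest exemplar descriptor, so that the structure encoded by the descriptor is preserved. The descriptor array, noise scale, and snapping strategy are assumptions, not the paper's algorithm.

        # Jitter applied in descriptor (appearance) space rather than pixel-coordinate space.
        import numpy as np
        from scipy.spatial import cKDTree

        def appearance_space_jitter(descriptors, jitter_scale=0.1, rng=None):
            """descriptors: (N, D) array of exemplar patch descriptors.
            Returns indices of exemplar patches after jitter-and-snap."""
            rng = rng or np.random.default_rng(0)
            tree = cKDTree(descriptors)
            noisy = descriptors + jitter_scale * rng.normal(size=descriptors.shape)
            _, idx = tree.query(noisy, k=1)   # snap each jittered descriptor back
            return idx                         # indices of valid exemplar patches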