15,647 research outputs found

    Multi-Object Shape Retrieval Using Curvature Trees

    This work presents a geometry-based image retrieval approach for multi-object images. We begin by developing an effective shape matching method for closed boundaries. A structured representation, called the curvature tree (CT), is then introduced to extend the shape matching approach to images containing multiple objects with possible holes. We also propose an algorithm, based on Gestalt principles, to detect and extract high-level boundaries (or envelopes) that may emerge from the spatial arrangement of a group of image objects.

    First, a shape retrieval method using the triangle-area representation (TAR) is presented for non-rigid shapes with closed boundaries. This representation captures both local and global characteristics of a shape, is invariant to translation, rotation, scaling and shear, and is robust against noise and moderate amounts of occlusion. For matching, two algorithms are introduced. The first matches concavity maxima points extracted from the TAR image obtained by thresholding the TAR. The second employs dynamic space warping (DSW) to search efficiently for the optimal (least-cost) correspondence between the points of two shapes. Experimental results on the MPEG-7 CE-1 database of 1400 shapes show the superiority of our method over other recent methods.

    Next, a geometry-based image retrieval system is developed for multi-object images. We model both the shape and the topology of image objects, including holes, using the curvature tree (CT). To facilitate shape-based matching, the TAR of each object and hole is stored at the corresponding node of the CT. The similarity between two CTs is measured by maximum similarity subtree isomorphism (MSSI), in which a one-to-one correspondence is established between the nodes of the two trees. Our matching scheme agrees with many recent findings in psychology about human perception of multi-object images. Two algorithms are introduced to solve the MSSI problem, an approximate one and an exact one. Both algorithms have polynomial-time computational complexity and use DSW as the similarity measure between the attributed nodes. Experiments on a database of 13500 medical images and a database of 1580 logo images show the effectiveness of the proposed method.

    The purpose of the last part is to enable high-level shape retrieval in multi-object images by detecting and extracting the envelope of high-level object groupings in the image. Motivated by studies in Gestalt theory, a new envelope-extraction algorithm is proposed that works in two stages. The first stage detects the envelope (if one exists) and groups its objects using hierarchical clustering. In the second stage, each grouping is merged using morphological operations and then refined using concavity tree reconstruction to eliminate odd concavities in the extracted envelope. Experiments on a set of 110 logo images demonstrate the feasibility of our approach.
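    The triangle-area representation (TAR) described above lends itself to a short illustration. The sketch below is not the authors' code; function and variable names are illustrative. For every point of a closed, ordered boundary and every separation scale ts, it computes the signed area of the triangle formed by the point and its two neighbours ts steps away along the contour; the sign of the area distinguishes convex from concave boundary segments at that scale.

    import numpy as np

    def triangle_area_representation(contour, max_ts=None):
        """contour: (N, 2) array of ordered points on a closed boundary.
        Returns an (N, T) array of signed triangle areas, one column per scale ts."""
        n = len(contour)
        if max_ts is None:
            max_ts = (n - 1) // 2                      # largest useful separation on a closed curve
        x, y = contour[:, 0].astype(float), contour[:, 1].astype(float)
        tar = np.zeros((n, max_ts))
        for ts in range(1, max_ts + 1):
            xp, yp = np.roll(x, ts), np.roll(y, ts)    # neighbour ts steps behind
            xn, yn = np.roll(x, -ts), np.roll(y, -ts)  # neighbour ts steps ahead
            # Signed area of the triangle (previous, current, next); positive for
            # one turning direction of the boundary, negative for the other.
            tar[:, ts - 1] = 0.5 * (xp * (y - yn) + x * (yn - yp) + xn * (yp - y))
        return tar

    Matching two shapes then amounts to aligning the rows of their TAR matrices, which is where the dynamic space warping step mentioned in the abstract comes in.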

    DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image

    3D reconstruction from a single image is a key problem in multiple applications ranging from robotic manipulation to augmented reality. Prior methods have tackled this problem with generative models that predict 3D reconstructions as voxels or point clouds. However, these methods can be computationally expensive and miss fine details. We introduce a new differentiable layer for 3D data deformation and use it in DeformNet to learn a model for 3D reconstruction-through-deformation. DeformNet takes an image as input, retrieves the nearest shape template from a database, and deforms the template to match the query image. We evaluate our approach on the ShapeNet dataset and show that: (a) the free-form deformation (FFD) layer is a powerful new building block for deep learning models that manipulate 3D data; (b) DeformNet combines this FFD layer with shape retrieval for smooth, detail-preserving 3D reconstruction of qualitatively plausible point clouds with respect to a single query image; and (c) compared to other state-of-the-art 3D reconstruction methods, DeformNet quantitatively matches or outperforms their benchmarks by significant margins. For more information, visit https://deformnet-site.github.io/DeformNet-website/.
    Comment: 11 pages, 9 figures, NIP
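    The differentiable free-form deformation layer at the core of DeformNet can be sketched in plain NumPy. The snippet below is an illustrative example rather than the DeformNet layer itself: a regular lattice of control points spanning the unit cube is displaced, and each template point is moved by a Bernstein-weighted blend of the displaced control points. The 4x4x4 lattice size and all names are assumptions made for the example; in DeformNet the control-point offsets would be predicted by the network.

    import numpy as np
    from math import comb

    def bernstein(n, i, t):
        """Bernstein basis polynomial B_i^n evaluated at t (array-valued)."""
        return comb(n, i) * (t ** i) * ((1.0 - t) ** (n - i))

    def ffd_deform(points, control_offsets, lattice=(4, 4, 4)):
        """points: (P, 3) template points, assumed normalised to the unit cube [0, 1]^3.
        control_offsets: (l, m, n, 3) displacements of the lattice control points.
        Returns the deformed (P, 3) points."""
        l, m, n = lattice
        # Rest positions of the control points on a regular grid in the unit cube.
        grid = np.stack(np.meshgrid(np.linspace(0, 1, l),
                                    np.linspace(0, 1, m),
                                    np.linspace(0, 1, n), indexing="ij"), axis=-1)
        control = grid + control_offsets
        s, t, u = points[:, 0], points[:, 1], points[:, 2]
        deformed = np.zeros((len(points), 3))
        for i in range(l):
            for j in range(m):
                for k in range(n):
                    # Separable Bernstein blend weight of control point (i, j, k).
                    w = bernstein(l - 1, i, s) * bernstein(m - 1, j, t) * bernstein(n - 1, k, u)
                    deformed += w[:, None] * control[i, j, k]
        return deformed

    Because the Bernstein weights sum to one and reproduce linear functions, zero offsets leave the template unchanged, so the deformation moves smoothly away from the retrieved template as the predicted offsets grow.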

    3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks

    We propose a method for reconstructing 3D shapes from 2D sketches in the form of line drawings. Our method takes as input a single sketch, or multiple sketches, and outputs a dense point cloud representing a 3D reconstruction of the input sketch(es). The point cloud is then converted into a polygon mesh. At the heart of our method lies a deep encoder-decoder network. The encoder converts the sketch into a compact representation encoding shape information. The decoder converts this representation into depth and normal maps capturing the underlying surface from several output viewpoints. The multi-view maps are then consolidated into a 3D point cloud by solving an optimization problem that fuses depth and normals across all viewpoints. In our experiments, compared to other methods such as volumetric networks, our architecture offers several advantages, including more faithful reconstruction, higher output surface resolution, and better preservation of topology and shape structure.
    Comment: 3DV 2017 (oral)
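    The consolidation step described above starts from per-view depth maps. The sketch below is not the paper's code and shows only the back-projection part: each predicted depth map is lifted into world space using its view's camera intrinsics and pose, and the per-view point sets are concatenated into one cloud. The optimization that additionally fuses normals across views is omitted, and all names are illustrative assumptions.

    import numpy as np

    def backproject_depth(depth, K, cam_to_world):
        """depth: (H, W) depth map; K: (3, 3) intrinsics; cam_to_world: (4, 4) pose.
        Returns an (H*W, 3) array of world-space points."""
        h, w = depth.shape
        v, u = np.mgrid[0:h, 0:w]
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
        rays = pix @ np.linalg.inv(K).T              # camera-space rays at unit depth
        pts_cam = rays * depth.reshape(-1, 1)        # scale each ray by its predicted depth
        pts_hom = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)
        return (pts_hom @ cam_to_world.T)[:, :3]     # transform into world coordinates

    def fuse_views(depth_maps, intrinsics, poses):
        """Concatenate the back-projected points of every output viewpoint."""
        clouds = [backproject_depth(d, K, T) for d, K, T in zip(depth_maps, intrinsics, poses)]
        return np.concatenate(clouds, axis=0)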