
    Learning View-Model Joint Relevance for 3D Object Retrieval

    3D object retrieval has attracted extensive research effort and become an important task in recent years, yet how to measure the relevance between 3D objects remains a difficult issue. Most existing methods employ either model-based or view-based approaches alone, which may lead to incomplete information for 3D object representation. In this paper, we propose to jointly learn the view-model relevance among 3D objects for retrieval, formulating the 3D objects in different graph structures. With the view information, the multiple views of the 3D objects are employed to formulate the object relationships in an object hypergraph structure. With the model data, model-based features are extracted to construct an object graph that describes the relationships among the 3D objects. Learning on the two graphs estimates the relevance among the 3D objects, and the view/model graph weights can also be optimized during the learning process. This is the first work to jointly explore view-based and model-based relevance among 3D objects in a graph-based framework. The proposed method has been evaluated on three datasets. The experimental results and comparisons with state-of-the-art methods demonstrate the retrieval accuracy of the proposed 3D object retrieval method.
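    As a rough illustration of the fusion idea above (not the paper's hypergraph learning algorithm — a simplified sketch in which the graph weight alpha is hand-set rather than learned), two precomputed similarity matrices can be blended and used for ranking:

```python
import numpy as np

def fuse_relevance(view_sim, model_sim, alpha=0.5):
    """Blend view-based and model-based similarity matrices.

    A simplified stand-in for joint graph learning: each matrix is
    row-normalized, then combined with a fixed weight alpha.
    """
    def normalize(s):
        s = np.asarray(s, dtype=float)
        row_sums = s.sum(axis=1, keepdims=True)
        return s / np.where(row_sums == 0, 1, row_sums)

    return alpha * normalize(view_sim) + (1 - alpha) * normalize(model_sim)

def retrieve(fused, query_idx, k=3):
    """Return the indices of the k objects most relevant to the query."""
    scores = fused[query_idx].copy()
    scores[query_idx] = -np.inf  # exclude the query itself
    return np.argsort(scores)[::-1][:k]
```

    In the full method, alpha would be optimized jointly with the relevance estimates instead of being fixed in advance.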

    A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation

    To help physicians quickly find a required 3D model among the mass of medical models, we propose a novel retrieval method, called DRFVT, which combines dimensionality reduction (DR) and feature vector transformation (FVT). The DR method reduces the dimensionality of the feature vector: only the top M low-frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to address the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval than other proposed methods.
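    The DR step above can be sketched as follows (a minimal illustration; the feature extraction and the FVT transformation are omitted, and the choice of M here is arbitrary):

```python
import numpy as np

def reduce_dft(feature_vec, m):
    """Keep only the top M low-frequency DFT coefficients of a
    feature vector, mirroring the DR step described above."""
    coeffs = np.fft.fft(np.asarray(feature_vec, dtype=float))
    return coeffs[:m]

def dft_distance(a, b):
    """Compare two reduced descriptors by Euclidean distance
    on the retained complex coefficients."""
    return float(np.linalg.norm(a - b))
```

    Truncating to low frequencies discards high-frequency detail, which is also where most additive noise lives — hence the noise robustness claimed for the reduced descriptor.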

    From 3D Point Clouds to Pose-Normalised Depth Maps

    We consider the problem of generating either pairwise-aligned or pose-normalised depth maps from noisy 3D point clouds in relatively unrestricted poses. Our system is deployed in a 3D face alignment application and consists of the following four stages: (i) data filtering, (ii) nose tip identification and sub-vertex localisation, (iii) computation of the (relative) face orientation, and (iv) generation of either a pose-aligned or a pose-normalised depth map. We generate an implicit radial basis function (RBF) model of the facial surface, which is employed within all four stages of the process. For example, in stage (ii), construction of novel invariant features is based on sampling this RBF over a set of concentric spheres to give a spherically-sampled RBF (SSR) shape histogram. In stage (iii), a second novel descriptor, called an isoradius contour curvature signal, is defined, which allows rotational alignment to be determined using a simple process of 1D correlation. We test our system on both the University of York (UoY) 3D face dataset and the Face Recognition Grand Challenge (FRGC) 3D data. For the more challenging UoY data, our SSR descriptors significantly outperform three variants of spin images, successfully identifying nose vertices at a rate of 99.6%. Nose localisation performance on the higher-quality FRGC data, which has only small pose variations, is 99.9%. Our best system successfully normalises the pose of 3D faces at rates of 99.1% (UoY data) and 99.6% (FRGC data).
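    The 1D-correlation alignment mentioned in stage (iii) can be illustrated with circular cross-correlation computed via the FFT (a generic sketch, not the paper's exact isoradius-contour pipeline):

```python
import numpy as np

def circular_shift_between(a, b):
    """Recover the circular shift s such that b ≈ np.roll(a, s),
    by locating the peak of the circular cross-correlation.
    For signals sampled around an isoradius contour, this shift
    corresponds to a rotation about the contour's axis."""
    fa = np.fft.fft(np.asarray(a, dtype=float))
    fb = np.fft.fft(np.asarray(b, dtype=float))
    corr = np.fft.ifft(fa * np.conj(fb)).real
    return (-int(np.argmax(corr))) % len(a)
```

    Computing the correlation in the frequency domain makes the search over all rotational offsets O(n log n) instead of O(n²).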

    Three-dimensional shape descriptors and matching procedures

    Shape descriptors are used to identify objects in much the same way that human fingerprints are used to identify people. Features of an object are extracted by applying functions to the digital representation of the object. These features are structured as a vector known as the shape descriptor (feature vector) of that object. The objective when constructing a shape descriptor is to find functions that yield descriptors able to uniquely identify, or at least classify, an object. A measure of similarity is required to identify or classify an object: the similarity between two objects is computed by applying a distance function to their shape descriptors. The objective of this paper is to examine two possible techniques for three-dimensional shape descriptor construction based on Fourier analysis, and to find a descriptor that can not only classify but also identify objects.
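    A minimal sketch of a Fourier-based shape descriptor and its distance function, in the spirit described above (the 1D shape signature and the coefficient count k are illustrative assumptions, not the paper's construction):

```python
import numpy as np

def fourier_descriptor(samples, k=8):
    """Build a simple Fourier-based shape descriptor: the
    magnitudes of the first k DFT coefficients of a 1D shape
    signature (e.g. centroid distances sampled along the
    boundary). Using magnitudes makes the descriptor invariant
    to where the sampling starts."""
    coeffs = np.fft.fft(np.asarray(samples, dtype=float))
    mags = np.abs(coeffs[:k])
    return mags / (mags[0] if mags[0] != 0 else 1.0)  # scale normalization

def descriptor_distance(d1, d2):
    """Similarity measure: Euclidean distance between descriptors."""
    return float(np.linalg.norm(d1 - d2))
```

    Two objects are then declared similar when the distance between their descriptors falls below a chosen threshold.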

    3D Model Retrieval Using Probability Density-Based Shape Descriptors


    3D Shape Knowledge Graph for Cross-domain and Cross-modal 3D Shape Retrieval

    With the development of 3D modeling and fabrication, 3D shape retrieval has become a hot topic. In recent years, several strategies have been put forward to address this retrieval problem, but they struggle with cross-modal 3D shape retrieval because of the natural differences between modalities. In this paper, we propose an innovative concept, the geometric word, regarded as a basic element that can represent any 3D or 2D entity by combination and with which we can handle cross-domain and cross-modal retrieval problems simultaneously. First, to construct the knowledge graph, we use geometric words as nodes and bridge them using the categories of the 3D shapes as well as the attributes of the geometry. Second, based on the knowledge graph, we provide a unique way to learn each entity's embedding. Finally, we propose an effective similarity measure to handle cross-domain and cross-modal 3D shape retrieval. Specifically, every 3D or 2D entity can locate its geometric words in the knowledge graph, which serve as a link between cross-domain and cross-modal data, so our approach can perform cross-domain and cross-modal 3D shape retrieval at the same time. We evaluated the proposed method on the ModelNet40 and ShapeNetCore55 datasets for both the 3D shape retrieval and cross-domain 3D shape retrieval tasks, and on the classic cross-modal dataset MI3DOR for cross-modal 3D shape retrieval. Experimental results and comparisons with state-of-the-art methods illustrate the superiority of our approach.
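    Once entities from different modalities are embedded in a shared space, cross-modal retrieval reduces to nearest-neighbour ranking. A generic sketch (cosine similarity over hypothetical embeddings, not the paper's learned similarity measure):

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine similarity between two embedding vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def cross_modal_retrieve(query_emb, gallery_embs, k=3):
    """Rank gallery entities (e.g. 3D shapes) against a query
    embedding (e.g. from a 2D image), assuming both were mapped
    into the same shared embedding space."""
    scores = [cosine_sim(query_emb, g) for g in gallery_embs]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
```

    The key property is that the ranking function is modality-agnostic: only the embedding step knows whether the input was a 2D image or a 3D shape.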

    Retrieval of 3-Dimensional Rigid and Non-Rigid Objects

    This dissertation focuses on the problem of 3D object retrieval from large datasets in a near real-time manner. 
    In order to address this task, we focus on three major subproblems of the field: (i) pose normalization of rigid 3D models with applications to 3D object retrieval, (ii) non-rigid 3D object description, and (iii) search over rigid 3D object datasets based on 2D image queries. Regarding the first subproblem, 3D model pose normalization, three novel pose normalization methods are presented, based on: (i) 3D Reflective Object Symmetry (ROSy) and (ii, iii) 2D Reflective Object Symmetry computed on Panoramic Views (SymPan and SymPan+). Considering the second subproblem, a non-rigid 3D object retrieval methodology based on the properties of conformal geometry and graph-based topological information (ConTopo++) has been developed. Furthermore, a string-matching strategy for comparing the graphs that describe 3D objects is proposed. Regarding the third subproblem, a 3D object retrieval method based on 2D range image queries that represent partial views of real 3D objects is presented. The complete 3D objects of the database are described by a set of panoramic views, and a Bag-of-Visual-Words model is built using SIFT features extracted from them. The methodologies developed and described in this dissertation are evaluated in terms of retrieval accuracy and demonstrated using both quantitative and qualitative measures via an extensive, consistent evaluation against state-of-the-art methods on standard datasets.
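    The Bag-of-Visual-Words step can be sketched as follows (a generic BoVW histogram builder; the codebook would normally come from k-means over training SIFT descriptors, which is omitted here):

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Build a Bag-of-Visual-Words histogram: assign each local
    descriptor (e.g. a SIFT vector from a panoramic view) to its
    nearest codebook word and count occurrences, then normalize
    so histograms of views with different feature counts are
    comparable."""
    descriptors = np.asarray(descriptors, dtype=float)
    codebook = np.asarray(codebook, dtype=float)
    # squared distance from every descriptor to every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                 # nearest word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

    A query image's histogram can then be matched against the histograms of the database objects' panoramic views with any standard histogram distance.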

    A new protein binding pocket similarity measure based on comparison of clouds of atoms in 3D: application to ligand prediction

    Background: Predicting which molecules can bind to a given binding site of a protein with known 3D structure is important for deciphering the protein's function, and useful in drug design. A classical assumption in structural biology is that proteins with similar 3D structures have related molecular functions and therefore may bind similar ligands. However, proteins that do not display any overall sequence or structure similarity may also bind similar ligands if they contain similar binding sites. Quantitatively assessing the similarity between binding sites may therefore be useful for proposing new ligands for a given pocket, based on those known for similar pockets.
    Results: We propose a new method to quantify the similarity between binding pockets and explore its relevance for ligand prediction. We represent each pocket by a cloud of atoms, and assess the similarity between two pockets by aligning their atoms in 3D space and comparing the resulting configurations with a convolution kernel. Pocket alignment and comparison are possible even when the corresponding proteins share no sequence or overall structure similarity. To predict ligands for a given target pocket, we compare it to an ensemble of pockets with known ligands to identify the most similar pockets. We discuss two criteria to evaluate the performance of a binding pocket similarity measure in the context of ligand prediction, namely area under the ROC curve (AUC scores) and classification-based scores. We show that the latter is better suited to evaluating methods with respect to ligand prediction, and demonstrate the relevance of our new binding site similarity compared to existing similarity measures.
    Conclusions: This study demonstrates the relevance of the proposed method for identifying ligands that bind to known binding pockets. We also provide a new benchmark for future work in this field. The new method and the benchmark are available at http://cbio.ensmp.fr/paris/.
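    The AUC criterion discussed in the Results can be computed with the rank-sum (Mann-Whitney) formulation, sketched here generically:

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney rank-sum formulation: the
    probability that a randomly chosen positive (true ligand)
    is scored above a randomly chosen negative, with ties
    counted as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    As the abstract notes, AUC only measures ranking quality; the classification-based scores the authors favour additionally account for whether a usable decision threshold exists.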