Enhanced 2D/3D Approaches Based on Relevance Index for 3D-Shape Retrieval
We present a new approach for 3D model indexing and retrieval using 2D/3D shape descriptors based on silhouettes or depth-buffer images. To account for the dispersion of information across the views, we associate with each view a relevance index, which is subsequently used in the dissimilarity computation. The performance of this new approach is evaluated on the Princeton 3D Shape Benchmark database.
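The relevance-weighted dissimilarity described above can be sketched roughly as follows; the descriptor layout, the per-view L2 distance, and the normalized weighting scheme are assumptions for illustration, not details from the paper:

```python
import numpy as np

def weighted_dissimilarity(desc_a, desc_b, relevance_a):
    """Aggregate per-view distances between two 3D models, weighting
    each view of model A by its relevance index (hypothetical scheme).

    desc_a, desc_b: (n_views, d) arrays of 2D-view descriptors,
                    views assumed to be matched by index.
    relevance_a:    (n_views,) relevance indices for model A's views.
    """
    per_view = np.linalg.norm(desc_a - desc_b, axis=1)  # L2 distance per view pair
    w = relevance_a / relevance_a.sum()                 # normalize weights to sum to 1
    return float(np.dot(w, per_view))                   # relevance-weighted average
```

With uniform relevance indices this reduces to the plain mean of per-view distances; the relevance index only changes the ranking when some views carry more information than others.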
The Effectiveness of Ellipsoidal Shape Representation Technique for 3D Object Recognition System
Shape representation methods play an important role in 3D shape recognition systems. Three-dimensional shape recognition is widely used in 3D search engines, gravitational fields, medical imaging, computer vision, and face recognition. In this paper we propose an ellipsoidal shape representation technique for 3D shape recognition. We present experimental and comparison results of our approach for shape matching on a standard database, the Princeton Shape Benchmark. The effectiveness of our proposed algorithm is measured using the nearest-neighbor criterion. We then introduce a possible extension of the proposed approach and evaluate its results against human observation.
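The nearest-neighbor effectiveness measure mentioned above is, in its common form, the fraction of queries whose single closest match belongs to the same class. A minimal sketch, where the flat feature vectors and L2 distance are assumptions:

```python
import numpy as np

def nearest_neighbor_score(features, labels):
    """Fraction of queries whose closest *other* model shares their class.

    features: (n, d) array of shape descriptors, one row per model.
    labels:   sequence of n class labels.
    """
    n = len(features)
    hits = 0
    for i in range(n):
        d = np.linalg.norm(features - features[i], axis=1)
        d[i] = np.inf  # exclude the query itself
        if labels[int(np.argmin(d))] == labels[i]:
            hits += 1
    return hits / n
```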
Improving 3D Shape Retrieval Methods based on Bag-of-Feature Approach by using Local Codebooks
Also available online at http://www.sersc.org/journals/IJFGCN/vol5_no4/3.pdf
Recent investigations show that view-based methods with pose-normalization pre-processing achieve better performance in retrieving rigid models than other approaches, and they remain the most popular and practical methods in the field of 3D shape retrieval. In this paper we present an improvement of 3D shape retrieval methods based on the bag-of-features approach. These methods integrate a set of features, extracted from 2D views of the 3D objects using the SIFT (Scale-Invariant Feature Transform) algorithm, into histograms via vector quantization against a global visual codebook. To improve retrieval performance, we propose to associate with each 3D object its own local visual codebook instead of a single global codebook. Experimental results on the Princeton Shape Benchmark database, for the BF-SIFT method proposed by Ohbuchi et al. and the CM-BOF method proposed by Zhouhui et al., show that the proposed approach performs better than the original approaches.
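The bag-of-features quantization step described in the abstract can be sketched as follows; the paper's contribution would substitute a per-object local codebook for the codebook passed in here. The hard-assignment quantization and L2 distance are assumptions for illustration:

```python
import numpy as np

def bof_histogram(descriptors, codebook):
    """Quantize local descriptors (e.g. SIFT vectors) against a visual
    codebook and return a normalized bag-of-features histogram.

    descriptors: (n, d) array of local feature vectors from one object.
    codebook:    (k, d) array of visual words (global or, as the paper
                 proposes, local to the object).
    """
    # pairwise distances between every descriptor and every visual word
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = np.argmin(d, axis=1)                         # hard assignment
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                             # normalize to sum to 1
```

Two objects are then compared by a distance between their histograms; with local codebooks, the comparison must additionally account for the differing codebooks, a detail the sketch omits.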
Rethinking benchmark dates in international relations
International Relations has an 'orthodox set' of benchmark dates by which much of its research and teaching is organized: 1500, 1648, 1919, 1945 and 1989. This article argues that International Relations scholars need to question the ways in which these orthodox dates serve as internal and external points of reference, think more critically about how benchmark dates are established, and generate a revised set of benchmark dates that better reflects macro-historical international dynamics. The first part of the article questions the appropriateness of the orthodox set of benchmark dates as ways of framing the discipline's self-understanding. The second and third sections look at what counts as a benchmark date, and why. We systematize benchmark dates drawn from mainstream International Relations theories (realism, liberalism, constructivism/English School and sociological approaches) and then aggregate their criteria. The fourth section of the article uses this exercise to construct a revised set of benchmark dates which can widen the discipline's theoretical and historical scope. We outline a way of ranking benchmark dates and suggest a means of assessing recent candidates for benchmark status. Overall, the article delivers two main benefits: first, an improved heuristic by which to think critically about foundational dates in the discipline; and, second, a revised set of benchmark dates which can help shift International Relations' centre of gravity away from dynamics of war and peace, and towards a broader range of macro-historical dynamics.
Efficient Decomposition of Image and Mesh Graphs by Lifted Multicuts
Formulations of the Image Decomposition Problem as a Multicut Problem (MP)
w.r.t. a superpixel graph have received considerable attention. In contrast,
instances of the MP w.r.t. a pixel grid graph have received little attention,
firstly, because the MP is NP-hard and instances w.r.t. a pixel grid graph are
hard to solve in practice, and, secondly, due to the lack of long-range terms
in the objective function of the MP. We propose a generalization of the MP with
long-range terms (LMP). We design and implement two efficient algorithms
(primal feasible heuristics) for the MP and LMP which allow us to study
instances of both problems w.r.t. the pixel grid graphs of the images in the
BSDS-500 benchmark. The decompositions we obtain do not differ significantly
from the state of the art, suggesting that the LMP is a competitive formulation
of the Image Decomposition Problem. To demonstrate the generality of the LMP,
we apply it also to the Mesh Decomposition Problem posed by the Princeton
benchmark, obtaining state-of-the-art decompositions.
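For orientation, the (lifted) multicut objective scores a decomposition, i.e. a partition of the graph's nodes, by summing the costs of all edges whose endpoints land in different components; in the lifted variant, long-range node pairs contribute as well. This minimal sketch only evaluates the objective, not the primal feasible heuristics the paper designs, and the dict-based representation is an assumption:

```python
def multicut_cost(partition, edge_costs):
    """Objective value of a (lifted) multicut for a given node partition.

    partition:  dict mapping node -> component id.
    edge_costs: dict mapping (u, v) -> cost; for the lifted multicut (LMP)
                the dict also contains long-range, non-adjacent pairs.
    A pair contributes its cost exactly when its endpoints are separated.
    """
    return sum(cost for (u, v), cost in edge_costs.items()
               if partition[u] != partition[v])
```

Negative costs reward cutting a pair and positive costs penalize it, so minimizing this sum trades off both kinds of evidence; the long-range terms are what let distant pixels vote on whether they belong to the same segment.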
Sketch-based 3D Shape Retrieval using Convolutional Neural Networks
Retrieving 3D models from 2D human sketches has received considerable
attention in the areas of graphics, image retrieval, and computer vision.
State-of-the-art approaches almost always compute a large number of "best views"
for 3D models, in the hope that the query sketch matches one of these 2D
projections of the 3D models under predefined features.
We argue that this two-stage approach (view selection -- matching) is
pragmatic but also problematic, because the "best views" are subjective and
ambiguous, which makes the matching inputs obscure. This imprecise nature of
matching further makes it challenging to choose features manually. Instead of
relying on the elusive concept of "best views" and on hand-crafted features,
we propose to define our views using a minimalist approach and to learn
features for both sketches and views. Specifically, we drastically reduce the
number of views to only two predefined directions for the whole dataset. Then,
we learn two Siamese Convolutional Neural Networks (CNNs), one for the views
and one for the sketches. The loss function is defined on within-domain as
well as cross-domain similarities. Our experiments on three benchmark datasets
demonstrate that our method is significantly better than state-of-the-art
approaches, outperforming them on all conventional metrics.
Comment: CVPR 201
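The within-domain and cross-domain similarity terms of a Siamese loss are commonly instances of the contrastive loss; a minimal sketch, assuming the standard contrastive form rather than the paper's exact formulation:

```python
import numpy as np

def contrastive_loss(f1, f2, same, margin=1.0):
    """Contrastive loss on a pair of embeddings (standard form, assumed):
    pull same-class pairs together, push different-class pairs apart
    until their distance exceeds the margin.

    f1, f2: embedding vectors (e.g. outputs of the two Siamese CNNs).
    same:   True if the pair should match (same class / sketch-view pair).
    """
    d = np.linalg.norm(f1 - f2)
    if same:
        return 0.5 * d**2                     # penalize any separation
    return 0.5 * max(0.0, margin - d)**2      # penalize only pairs inside the margin
```

In the cross-domain case, `f1` would come from the sketch network and `f2` from the view network; the total training loss would sum this term over within-sketch, within-view, and sketch-view pairs.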
- …