
    Modal Similarity

    Just as Boolean rules define Boolean categories, the Boolean operators define higher-order Boolean categories referred to as modal categories. We examine the similarity order between these categories and the standard category of logical identity (i.e. the modal category defined by the biconditional or equivalence operator). Our goal is fourfold: first, to introduce a similarity measure for determining this similarity order; second, to show that such a measure is a good predictor of the similarity assessment behaviour observed in our experiment involving key modal categories; third, to argue that, as far as the modal categories are concerned, configural similarity assessment may be componential or analytical in nature; and lastly, to draw attention to the intimate interplay that may exist between deductive judgments, similarity assessment and categorisation.
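
    As a loose illustration only (not the measure proposed in the paper, which concerns higher-order modal categories), ordinary two-place Boolean connectives can be ordered by how often their truth tables agree with the biconditional; the Python sketch below does exactly that, with an arbitrary choice of connectives.

```python
from itertools import product

# Truth functions for a few two-place connectives; the biconditional defines
# the reference category of "logical identity".
connectives = {
    "biconditional": lambda p, q: p == q,
    "conjunction":   lambda p, q: p and q,
    "disjunction":   lambda p, q: p or q,
    "exclusive-or":  lambda p, q: p != q,
}

def table_agreement(f, g):
    """Fraction of truth-table rows on which two connectives agree: one very
    simple way to order Boolean-defined categories by similarity."""
    rows = list(product([False, True], repeat=2))
    return sum(f(p, q) == g(p, q) for p, q in rows) / len(rows)

reference = connectives["biconditional"]
for name, f in connectives.items():
    print(name, table_agreement(reference, f))
```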

    Quantum similarity of isosteres: coordinate versus momentum space and influence of alignment

    Molecular quantum similarity was studied for a set of peptide isosteres analyzed before by Boon et al. (Chem. Phys. Lett., 1998, 295, 122). Overlap and Coulomb similarity measures in coordinate space were calculated using the TGSA (Topo-Geometrical Superposition Algorithm) approach for the alignment of the molecules instead of the one used in the previous work, and a comparison between the superposition methods was made. Overlap and first-order moment similarity indices in momentum space were computed for the same alignment. The results illustrate the importance of the alignment algorithm for the evaluation of molecular similarity in a given set of molecules and show that the degree of similarity depends dramatically on the similarity measure used and the space in which the similarity is computed. For a small set of propane derivatives whose similarity ranking is known from drug design, only the momentum-space similarity integrals give the expected similarity ordering.
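
    A minimal numerical sketch of an overlap-based similarity in this spirit, assuming two electron densities that are already aligned and sampled on a common grid; the function names and the Carbó-style normalisation are illustrative assumptions, not the authors' code, and Coulomb or momentum-space measures would require different integrals.

```python
import numpy as np

def overlap_measure(rho_a, rho_b, dv):
    """Overlap similarity measure Z_AB = integral of rho_A(r) * rho_B(r) dr,
    approximated as a sum over a common grid with volume element dv."""
    return np.sum(rho_a * rho_b) * dv

def similarity_index(rho_a, rho_b, dv):
    """Carbo-style normalised index Z_AB / sqrt(Z_AA * Z_BB), in [0, 1]."""
    z_ab = overlap_measure(rho_a, rho_b, dv)
    z_aa = overlap_measure(rho_a, rho_a, dv)
    z_bb = overlap_measure(rho_b, rho_b, dv)
    return z_ab / np.sqrt(z_aa * z_bb)

# Toy 1-D example: two Gaussian "densities" on the same grid.
r = np.linspace(-5.0, 5.0, 1001)
dv = r[1] - r[0]
rho_1 = np.exp(-r**2)
rho_2 = np.exp(-(r - 0.5)**2)
print(similarity_index(rho_1, rho_2, dv))
```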

    Conditional Similarity Networks

    What makes images similar? To measure the similarity between images, they are typically embedded in a feature-vector space in which distances preserve relative dissimilarity. However, when learning such similarity embeddings, the simplifying assumption is commonly made that images are compared with respect to only one measure of similarity. A main reason for this is that contradicting notions of similarity cannot be captured in a single space. To address this shortcoming, we propose Conditional Similarity Networks (CSNs), which learn embeddings differentiated into semantically distinct subspaces that capture the different notions of similarity. CSNs jointly learn a disentangled embedding, in which features for different similarities are encoded in separate dimensions, as well as masks that select and reweight the relevant dimensions to induce a subspace encoding a specific similarity notion. We show that our approach learns interpretable image representations with visually relevant semantic subspaces. Further, when evaluated on triplet questions from multiple similarity notions, our model even outperforms the accuracy obtained by training individual specialized networks for each notion separately. Comment: CVPR 2017.
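
    A minimal sketch of the masking idea, assuming fixed embeddings and masks; in the actual CSNs both are learned jointly end-to-end, and the dimensions and variable names below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for learned quantities: an embedding dimension, a few similarity
# notions ("conditions"), and one non-negative mask per notion.
embed_dim, n_conditions = 64, 4
masks = rng.random((n_conditions, embed_dim))

def masked_distance(x, y, condition):
    """Distance between two image embeddings inside one condition-specific
    subspace: the mask selects and reweights the dimensions relevant to that
    notion of similarity."""
    m = masks[condition]
    return np.linalg.norm(m * x - m * y)

x, y = rng.standard_normal(embed_dim), rng.standard_normal(embed_dim)
print([round(masked_distance(x, y, c), 3) for c in range(n_conditions)])
```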

    Text categorization and similarity analysis: similarity measure, literature review

    Document classification and provenance have become important areas of computer science as the amount of digital information grows significantly. Organisations are storing documents on computers rather than in paper form. Software is now required that shows the similarities between documents (i.e. document classification) and points out duplicates and, possibly, the history of each document (i.e. provenance). Poor organisation is common and leads to situations like these. A number of software solutions already exist in this area, designed to make document organisation as simple as possible. I am doing my project with Pingar, a company based in Auckland that aims to help organise the growing amount of unstructured digital data. This report analyses the existing literature in the area to determine what already exists and how my project will differ from existing solutions.
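
    One common baseline for document similarity, and a likely starting point for such a literature review, is TF-IDF vectors compared with cosine similarity. The sketch below shows that baseline on made-up documents; it is not Pingar's method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Quarterly financial report for the Auckland office.",
    "Financial report, Auckland office, fourth quarter (revised).",
    "Minutes of the software architecture meeting.",
]

# Pairwise cosine similarity over TF-IDF vectors: high off-diagonal scores
# flag likely duplicates or revisions of the same document.
tfidf = TfidfVectorizer().fit_transform(docs)
print(cosine_similarity(tfidf))
```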

    Music Similarity Estimation

    Music is a complicated form of communication, through which creators and cultures communicate and expose their individuality. Since the digitalization of music, recommendation systems and other online services have become indispensable in the field of Music Information Retrieval (MIR). To build these systems and recommend the right songs to users, songs must first be classified. In this paper, we propose an approach for finding similarity between pieces of music based on mid-level attributes such as pitch, the MIDI value corresponding to pitch, interval, contour and duration, and on applying text-based classification techniques. For Western music, our system predicts the genres jazz, metal and ragtime. The genre-prediction experiment is conducted on 450 music files, and the maximum accuracy achieved across different n-grams is 95.8%. We have also analyzed Indian classical Carnatic music and classify it by raga; our system predicts the Sankarabharam, Mohanam and Sindhubhairavi ragas. The raga-prediction experiment is conducted on 95 music files, and the maximum accuracy achieved across different n-grams is 90.3%. Performance is evaluated using the accuracy score from scikit-learn.
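
    A minimal sketch of this kind of pipeline: symbol sequences treated as text, word n-gram features, and a classifier evaluated with scikit-learn's accuracy score. The interval strings, labels and the choice of Naive Bayes are illustrative assumptions, not the paper's actual data or model.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

# Each piece is reduced to a space-separated string of mid-level symbols
# (toy interval sequences here); the labels are the genres to be predicted.
train_seqs = ["+2 +2 -4 +5 +2", "0 0 +12 -12 0", "+3 -3 +3 -3 +4"]
train_labels = ["jazz", "metal", "ragtime"]
test_seqs = ["+2 +2 -4 +7 +2"]
test_labels = ["jazz"]

# Word n-grams over the symbol stream feed an ordinary text classifier.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 3), token_pattern=r"\S+"),
    MultinomialNB(),
)
model.fit(train_seqs, train_labels)
print(accuracy_score(test_labels, model.predict(test_seqs)))
```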

    Fast Similarity Sketching

    We consider the Similarity Sketching problem: given a universe $[u]=\{0,\ldots,u-1\}$, we want a random function $S$ mapping subsets $A\subseteq[u]$ into vectors $S(A)$ of size $t$, such that similarity is preserved. More precisely: given sets $A,B\subseteq[u]$, define $X_i=[S(A)[i]=S(B)[i]]$ and $X=\sum_{i\in[t]}X_i$. We want $E[X]=t\cdot J(A,B)$, where $J(A,B)=|A\cap B|/|A\cup B|$, and furthermore strong concentration guarantees (i.e. Chernoff-style bounds) for $X$. This is a fundamental problem which has found numerous applications in data mining, large-scale classification, computer vision, similarity search, etc. via the classic MinHash algorithm. The vectors $S(A)$ are also called sketches. The seminal $t\times$MinHash algorithm uses $t$ random hash functions $h_1,\ldots,h_t$, and stores $\left(\min_{a\in A}h_1(a),\ldots,\min_{a\in A}h_t(a)\right)$ as the sketch of $A$. The main drawback of MinHash is, however, its $O(t\cdot|A|)$ running time, and finding a sketch with similar properties and faster running time has been the subject of several papers. Addressing this, Li et al. [NIPS'12] introduced one permutation hashing (OPH), which creates a sketch of size $t$ in $O(t+|A|)$ time, but with the drawback that possibly some of the $t$ entries are "empty" when $|A|=O(t)$. One could argue that sketching is not necessary in this case; however, the desire in most applications is to have one sketching procedure that works for sets of all sizes. Therefore, filling out these empty entries is the subject of several follow-up papers initiated by Shrivastava and Li [ICML'14]. However, these "densification" schemes fail to provide good concentration bounds exactly in the case $|A|=O(t)$, where they are needed. (continued...)
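
    For contrast with the faster scheme the paper develops, here is a straightforward sketch of the classic t×MinHash baseline described above, with the hash functions simulated by salted Python hashing; this is the baseline, not the paper's algorithm.

```python
import random

def minhash_sketch(A, t, seed=0):
    """Classic t x MinHash: t independent hash functions, keeping the minimum
    hash value of the set under each one. Runs in O(t * |A|) time, which is
    exactly the cost the faster sketching schemes aim to beat."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(t)]
    return [min(hash((salt, a)) for a in A) for salt in salts]

def estimate_jaccard(sketch_a, sketch_b):
    """Fraction of agreeing sketch entries; its expectation is J(A, B)."""
    return sum(x == y for x, y in zip(sketch_a, sketch_b)) / len(sketch_a)

A, B = set(range(0, 800)), set(range(400, 1000))
sa, sb = minhash_sketch(A, 256), minhash_sketch(B, 256)
print(estimate_jaccard(sa, sb), len(A & B) / len(A | B))  # both near 0.4
```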

    Empirical Similarity

    An agent is asked to assess a real-valued variable $Y_p$ based on certain characteristics $X_p=(X_p^1,\ldots,X_p^m)$ and on a database consisting of observations $(X_i^1,\ldots,X_i^m,Y_i)$ for $i=1,\ldots,n$. A possible approach to combining past observations of $X$ and $Y$ with the current values of $X$ to generate an assessment of $Y$ is similarity-weighted averaging. It suggests that the predicted value of $Y$, $Y_p^s$, be the weighted average of all previously observed values $Y_i$, where the weight of $Y_i$, for every $i=1,\ldots,n$, is the similarity between the vector $X_p^1,\ldots,X_p^m$ associated with $Y_p$ and the previously observed vector $X_i^1,\ldots,X_i^m$. We axiomatize this rule. We assume that, given every database, a predictor has a ranking over possible values, and we show that certain reasonable conditions on these rankings imply that they are determined by proximity to a similarity-weighted average for a certain similarity function. The axiomatization does not suggest a particular similarity function, or even a particular functional form of this function. We therefore proceed to suggest that the similarity function be estimated from past observations. We develop tools of statistical inference for parametric estimation of the similarity function, for the case of a continuous as well as a discrete variable. Finally, we discuss the relationship of the proposed method to other methods of estimation and prediction. Keywords: similarity, estimation.
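
    A minimal sketch of the similarity-weighted averaging rule, using an exponential similarity kernel as one common parametric choice; the weight parameter would in practice be estimated from the data, as the paper proposes, and the numbers below are made up.

```python
import numpy as np

def similarity_weighted_average(X_past, y_past, x_new, w=1.0):
    """Empirical-similarity prediction: the assessment of Y for x_new is the
    average of past Y_i, weighted by the similarity of x_new to each past X_i.
    Here similarity is exp(-w * squared distance); w is a free parameter."""
    d2 = np.sum((X_past - x_new) ** 2, axis=1)
    s = np.exp(-w * d2)                      # similarity of x_new to each past case
    return float(np.dot(s, y_past) / s.sum())

X_past = np.array([[1.0, 2.0], [2.0, 0.5], [0.0, 1.0]])
y_past = np.array([10.0, 14.0, 8.0])
print(similarity_weighted_average(X_past, y_past, np.array([1.5, 1.5])))
```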