
    Modal Similarity

    Just as Boolean rules define Boolean categories, the Boolean operators define higher-order Boolean categories referred to as modal categories. We examine the similarity order between these categories and the standard category of logical identity (i.e. the modal category defined by the biconditional or equivalence operator). Our goal is fourfold: first, to introduce a similarity measure for determining this similarity order; second, to show that such a measure is a good predictor of the similarity assessment behaviour observed in our experiment involving key modal categories; third, to argue that, as far as the modal categories are concerned, configural similarity assessment may be componential or analytical in nature; and lastly, to draw attention to the intimate interplay that may exist between deductive judgments, similarity assessment and categorisation.
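
    The abstract does not spell out the measure itself, so the following is only a toy sketch of the general idea: treat each binary Boolean operator as a category defined by its truth table, and order operators by how often they agree with logical identity (the biconditional). The choice of operators and the agreement score are illustrative assumptions, not the paper's measure.

```python
from itertools import product

# Truth tables for some binary Boolean operators (illustrative choices only;
# the paper's actual similarity measure is not given in the abstract).
OPERATORS = {
    "biconditional": lambda p, q: p == q,
    "xor":           lambda p, q: p != q,
    "and":           lambda p, q: p and q,
    "or":            lambda p, q: p or q,
    "implies":       lambda p, q: (not p) or q,
}

def truth_table(op):
    """Return the operator's output for all four (p, q) assignments."""
    return [bool(op(p, q)) for p, q in product([False, True], repeat=2)]

def agreement_similarity(op_a, op_b):
    """Proportion of truth-table rows on which two operators agree (0..1)."""
    ta, tb = truth_table(op_a), truth_table(op_b)
    return sum(a == b for a, b in zip(ta, tb)) / len(ta)

# Order the remaining operators by decreasing agreement with the biconditional.
ref = OPERATORS["biconditional"]
order = sorted(
    (name for name in OPERATORS if name != "biconditional"),
    key=lambda name: agreement_similarity(ref, OPERATORS[name]),
    reverse=True,
)
print(order)  # operators ranked by agreement with logical identity (toy measure)
```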

    Text categorization and similarity analysis: similarity measure, literature review

    Document classification and provenance have become important areas of computer science as the amount of digital information grows significantly. Organisations now store documents on computers rather than in paper form, and poor organisation is common. Software is therefore required that can show the similarities between documents (i.e. document classification), point out duplicates, and reconstruct the history of each document (i.e. provenance). A number of software solutions in this area are designed to make document organisation as simple as possible. I am doing my project with Pingar, a company based in Auckland that aims to help organise the growing amount of unstructured digital data. This report analyses the existing literature in this area to determine what already exists and how my project will differ from existing solutions.
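
    As a point of reference for the kind of document-similarity software such a review surveys, the sketch below computes pairwise similarity with TF-IDF vectors and cosine similarity, a common baseline in this literature. The example documents are invented, and this is not any particular vendor's method.

```python
# Minimal illustration of pairwise document similarity with TF-IDF and cosine
# similarity (a common baseline, not a specific product's method).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Quarterly financial report for 2014, draft version.",
    "Quarterly financial report for 2014, final version.",
    "Minutes of the engineering team meeting.",
]

vectors = TfidfVectorizer().fit_transform(documents)   # sparse TF-IDF matrix
scores = cosine_similarity(vectors)                    # pairwise similarity matrix

# High off-diagonal scores flag likely duplicates, e.g. draft vs. final report.
print(scores.round(2))
```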

    Conditional Similarity Networks

    What makes images similar? To measure the similarity between images, they are typically embedded in a feature-vector space in which their distances preserve the relative dissimilarity. However, when learning such similarity embeddings, the simplifying assumption is commonly made that images are compared with respect to only one unique measure of similarity. A main reason for this is that contradicting notions of similarity cannot be captured in a single space. To address this shortcoming, we propose Conditional Similarity Networks (CSNs), which learn embeddings differentiated into semantically distinct subspaces that capture the different notions of similarity. CSNs jointly learn a disentangled embedding, where features for different similarities are encoded in separate dimensions, as well as masks that select and reweight relevant dimensions to induce a subspace encoding a specific similarity notion. We show that our approach learns interpretable image representations with visually relevant semantic subspaces. Further, when evaluated on triplet questions from multiple similarity notions, our model even outperforms the accuracy obtained by training individual specialized networks for each notion separately. Comment: CVPR 201
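
    The following is a compressed PyTorch-style sketch of the mechanism described above: a shared embedding plus one learned mask per similarity notion that selects and reweights dimensions. The backbone, dimensions and loss shown are assumptions for illustration, not the paper's exact architecture or training setup.

```python
import torch
import torch.nn as nn

class ConditionalSimilaritySketch(nn.Module):
    """Toy version of the CSN idea: one shared embedding, plus a learned mask
    per similarity notion that selects/reweights embedding dimensions.
    Sizes and the backbone are illustrative, not the paper's exact setup."""

    def __init__(self, backbone: nn.Module, embed_dim: int = 64, n_notions: int = 4):
        super().__init__()
        self.backbone = backbone                                     # e.g. a CNN feature extractor
        self.masks = nn.Parameter(torch.rand(n_notions, embed_dim))  # one learnable mask per notion

    def forward(self, images: torch.Tensor, notion: int) -> torch.Tensor:
        x = self.backbone(images)                 # shared embedding, shape (batch, embed_dim)
        m = torch.relu(self.masks[notion])        # keep the mask non-negative
        x = x * m                                 # project into this notion's subspace
        return nn.functional.normalize(x, dim=1)  # unit norm for triplet comparison

# Training would use a triplet loss per notion, e.g.:
# loss = nn.TripletMarginLoss()(model(anchor, c), model(positive, c), model(negative, c))
```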

    Fast Similarity Sketching

    We consider the Similarity Sketching problem: given a universe $[u]=\{0,\ldots,u-1\}$, we want a random function $S$ mapping subsets $A\subseteq[u]$ into vectors $S(A)$ of size $t$, such that similarity is preserved. More precisely: given sets $A,B\subseteq[u]$, define $X_i=[S(A)[i]=S(B)[i]]$ and $X=\sum_{i\in[t]}X_i$. We want to have $E[X]=t\cdot J(A,B)$, where $J(A,B)=|A\cap B|/|A\cup B|$, and furthermore to have strong concentration guarantees (i.e. Chernoff-style bounds) for $X$. This is a fundamental problem which has found numerous applications in data mining, large-scale classification, computer vision, similarity search, etc. via the classic MinHash algorithm. The vectors $S(A)$ are also called sketches. The seminal $t\times$MinHash algorithm uses $t$ random hash functions $h_1,\ldots,h_t$, and stores $\left(\min_{a\in A}h_1(a),\ldots,\min_{a\in A}h_t(a)\right)$ as the sketch of $A$. The main drawback of MinHash is, however, its $O(t\cdot|A|)$ running time, and finding a sketch with similar properties and faster running time has been the subject of several papers. Addressing this, Li et al. [NIPS'12] introduced one permutation hashing (OPH), which creates a sketch of size $t$ in $O(t+|A|)$ time, but with the drawback that possibly some of the $t$ entries are "empty" when $|A|=O(t)$. One could argue that sketching is not necessary in this case; however, the desire in most applications is to have one sketching procedure that works for sets of all sizes. Therefore, filling out these empty entries is the subject of several follow-up papers initiated by Shrivastava and Li [ICML'14]. However, these "densification" schemes fail to provide good concentration bounds exactly in the case $|A|=O(t)$, where they are needed. (continued...)
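
    For context, the sketch below implements the classic $t\times$MinHash baseline described above (the $O(t\cdot|A|)$-time scheme), not the paper's faster construction; the linear hash functions and parameters are toy choices for illustration.

```python
import random

def make_minhash(t: int, seed: int = 0):
    """Classic t x MinHash (the O(t*|A|) baseline described above, not the
    paper's faster sketch). Uses t random linear hash functions modulo a
    large prime; the parameters here are toy choices."""
    rng = random.Random(seed)
    p = 2_147_483_647                                  # Mersenne prime larger than the universe
    coeffs = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(t)]

    def sketch(A):
        # S(A)[i] = min over a in A of h_i(a), where h_i(x) = (a_i*x + b_i) mod p
        return [min((a_i * x + b_i) % p for x in A) for (a_i, b_i) in coeffs]

    return sketch

def estimate_jaccard(sa, sb):
    """X / t: the fraction of agreeing sketch entries estimates J(A, B)."""
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)

sketch = make_minhash(t=128)
A, B = set(range(0, 600)), set(range(300, 900))
print(estimate_jaccard(sketch(A), sketch(B)))          # true J(A,B) = 300/900 ~ 0.33
```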

    Music Similarity Estimation

    Music is a complex form of communication through which creators and cultures express their individuality. Since music became digital, recommendation systems and other online services have become indispensable in the field of Music Information Retrieval (MIR). To build these systems and recommend the right song to the user, classification of songs is required. In this paper, we propose an approach for finding similarity between pieces of music based on mid-level attributes such as pitch, the MIDI value corresponding to pitch, interval, contour and duration, and then applying text-based classification techniques. Our system predicts the genres jazz, metal and ragtime for Western music. The genre-prediction experiment is conducted on 450 music files, and the maximum accuracy achieved is 95.8% across different n-grams. We have also analyzed Indian classical Carnatic music and classify pieces by raga; our system predicts the Sankarabharam, Mohanam and Sindhubhairavi ragas. The raga-prediction experiment is conducted on 95 music files, and the maximum accuracy achieved is 90.3% across different n-grams. Performance is evaluated using scikit-learn's accuracy score.
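
    A hedged illustration of the general recipe the abstract describes: encode mid-level attributes as token sequences and apply n-gram text classification. The token scheme, classifier choice and tiny dataset below are invented for illustration and are not the authors' pipeline or data.

```python
# Encode symbolic music attributes (here, toy interval tokens) as text and
# classify genre with n-gram features. Tokens, classifier and data are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each "document" is a sequence of interval tokens extracted from one piece.
train_sequences = [
    "u2 u2 d1 u4 d2 u2",    # toy jazz-like piece
    "u7 d7 u7 d7 u12 d12",  # toy metal-like piece
    "u4 d2 u2 d4 u4 d2",    # toy ragtime-like piece
]
train_labels = ["jazz", "metal", "ragtime"]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 3)),  # unigram to trigram token features
    MultinomialNB(),
)
model.fit(train_sequences, train_labels)
print(model.predict(["u7 d7 u7 d7"]))     # -> ['metal'] on this toy data
```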

    Asymmetric Empirical Similarity

    The paper offers a formal model of analogical legal reasoning and takes the model to data. Under the model, the outcome of a new case is a weighted average of the outcomes of prior cases. The weights capture precedential influence and depend on fact similarity (distance in fact space) and precedential authority (position in the judicial hierarchy). The empirical analysis suggests that the model plausibly describes the time series of U.S. maritime salvage cases. Moreover, the results indicate that prior cases decided by inferior courts have less influence than prior cases decided by superior courts.
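
    A small numeric sketch of the model's form: the predicted outcome of a new case is a weighted average of prior outcomes, with weights that decay with distance in fact space and scale with precedential authority. The Gaussian kernel, authority weights and numbers below are illustrative assumptions, not the paper's estimated specification.

```python
import numpy as np

def predicted_outcome(new_facts, prior_facts, prior_outcomes, authority, bandwidth=1.0):
    """Weighted average of prior outcomes. Weights decay with distance in fact
    space and scale with precedential authority. The Gaussian kernel and the
    authority weights are illustrative choices, not the paper's estimated form."""
    d = np.linalg.norm(prior_facts - new_facts, axis=1)   # distance in fact space
    w = authority * np.exp(-(d / bandwidth) ** 2)         # similarity x authority
    return float(np.sum(w * prior_outcomes) / np.sum(w))

# Three hypothetical prior salvage cases described by two numeric fact dimensions.
prior_facts = np.array([[0.2, 1.0], [0.8, 0.4], [0.5, 0.5]])
prior_outcomes = np.array([0.30, 0.15, 0.22])             # e.g. salvage award fractions
authority = np.array([1.0, 0.5, 1.5])                     # superior courts weigh more
print(predicted_outcome(np.array([0.4, 0.6]), prior_facts, prior_outcomes, authority))
```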