The Google Similarity Distance
Words and phrases acquire meaning from the way they are used in society, from
their relative semantics to other words and phrases. For computers the
equivalent of `society' is `database,' and the equivalent of `use' is `way to
search the database.' We present a new theory of similarity between words and
phrases based on information distance and Kolmogorov complexity. To fix
thoughts we use the world-wide-web as database, and Google as search engine.
The method is also applicable to other search engines and databases. This
theory is then applied to construct a method to automatically extract
similarity, the Google similarity distance, of words and phrases from the
world-wide-web using Google page counts. The world-wide-web is the largest
database on earth, and the context information entered by millions of
independent users averages out to provide automatic semantics of useful
quality. We give applications in hierarchical clustering, classification, and
language translation. We give examples to distinguish between colors and
numbers, cluster names of paintings by 17th century Dutch masters and names of
books by English novelists, the ability to understand emergencies, and primes,
and we demonstrate the ability to do a simple automatic English-Spanish
translation. Finally, we use the WordNet database as an objective baseline
against which to judge the performance of our method. We conduct a massive
randomized trial in binary classification using support vector machines to
learn categories based on our Google distance, resulting in a mean
agreement of 87% with the expert-crafted WordNet categories.
Comment: 15 pages, 10 figures; changed some text/figures/notation/part of
theorem. Incorporated referees' comments. This is the final published
version up to some minor changes in the galley proof.
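The page-count construction in the abstract above can be sketched as a small function built from the standard NGD formula; the counts and the total-page constant N below are hypothetical stand-ins for real search-engine results, not data from the paper.

```python
import math

def ngd(f_x, f_y, f_xy, n):
    # f_x, f_y: page counts for each term alone; f_xy: page count for
    # pages containing both terms; n: total indexed pages (normalizer).
    lx, ly, lxy = math.log(f_x), math.log(f_y), math.log(f_xy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# Terms that always co-occur are at distance 0; rarer co-occurrence
# pushes the distance up (hypothetical counts, N = 10**9 pages).
print(ngd(1000, 1000, 1000, 10**9))  # 0.0
print(ngd(1000, 1000, 10, 10**9))    # 1/3 for these counts
```

Note that the distance depends only on aggregate counts, which is why any search engine returning page-count estimates can serve as the backend.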
Normalized Web Distance and Word Similarity
There is a great deal of work in cognitive psychology, linguistics, and
computer science, about using word (or phrase) frequencies in context in text
corpora to develop measures for word similarity or word association, going back
to at least the 1960s. The goal of this chapter is to introduce the
normalized web distance (NWD) method to determine similarity between words
and phrases. It is a general way to tap the amorphous low-grade knowledge
available for free on the Internet, typed in by local users aiming at
personal gratification of diverse objectives, and yet globally achieving
what is effectively the largest semantic electronic database in the world.
Moreover, this database is available for all by using any search engine
that can return aggregate page-count estimates for a large range of
search queries. In the paper introducing the NWD it was called the
`normalized Google distance (NGD),' but since Google doesn't allow
computer searches anymore, we opt for the more neutral and descriptive NWD.
Comment: Latex, 20 pages, 7 figures, to appear in: Handbook of Natural
Language Processing, Second Edition, Nitin Indurkhya and Fred J. Damerau
Eds., CRC Press, Taylor and Francis Group, Boca Raton, FL, 2010, ISBN
978-142008592
LIPN: Introducing a new Geographical Context Similarity Measure and a Statistical Similarity Measure based on the Bhattacharyya coefficient
This paper describes the system used by the LIPN team in task 10, Multilingual Semantic Textual Similarity, at SemEval 2014, in both the English and Spanish sub-tasks. The system uses a support vector regression model, combining different text similarity measures as features. With respect to our 2013 participation, we included a new feature to take into account the geographical context and a new semantic distance based on the Bhattacharyya distance calculated on co-occurrence distributions derived from the Spanish Google Books n-grams dataset.
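The Bhattacharyya distance between co-occurrence distributions that the abstract above mentions can be illustrated on toy data; the distributions below are made-up dicts, not actual Google Books n-gram statistics.

```python
import math

def bhattacharyya_distance(p, q):
    # p, q: dicts mapping events to probabilities (each summing to 1).
    # The Bhattacharyya coefficient is sum_i sqrt(p_i * q_i); the
    # distance is its negative logarithm.
    bc = sum(math.sqrt(p.get(e, 0.0) * q.get(e, 0.0)) for e in set(p) | set(q))
    return -math.log(bc) if bc > 0 else math.inf

# Hypothetical co-occurrence distributions for two terms.
p = {"river": 0.5, "bank": 0.3, "water": 0.2}
q = {"river": 0.4, "bank": 0.4, "money": 0.2}
print(bhattacharyya_distance(p, p))  # ~0.0 for identical distributions
print(bhattacharyya_distance(p, q))  # small positive value
```

Distributions with disjoint support have coefficient 0 and hence infinite distance, which is why smoothing is often applied before comparing sparse n-gram counts.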
Automatic Meaning Discovery Using Google
We survey a new area of parameter-free similarity distance measures useful
in data mining, pattern recognition, learning, and automatic semantics
extraction. Given a family of distances on a set of objects, a distance is
universal up to a certain precision for that family if it minorizes every
distance in the family between every two objects in the set, up to the
stated precision (we do not require the universal distance to be an element
of the family). We consider similarity distances for two types of objects:
literal objects that as such contain all of their meaning, like genomes or
books, and names for objects. The latter may have literal embodiments like
the first type, but may also be abstract like ``red'' or ``christianity.''
For the first type we consider a family of computable distance measures
corresponding to parameters expressing similarity according to particular
features between pairs of literal objects. For the second type we consider
similarity distances generated by web users corresponding to particular
semantic relations between (names for) the designated objects. For both
families we give universal similarity distance measures, incorporating all
particular distance measures in the family. In the first case the universal
distance is based on compression and in the second case it is based on
Google page counts related to search terms. In both cases experiments on a
massive scale give evidence of the viability of the approaches.
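The compression-based universal distance for literal objects described above is commonly instantiated as the normalized compression distance (NCD); a minimal sketch, using zlib as a practical stand-in for the ideal (uncomputable) Kolmogorov-complexity compressor:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized Compression Distance: how much does knowing x help
    # compress y (and vice versa), relative to their own complexity?
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = bytes(range(256)) * 4  # dissimilar byte content
print(ncd(a, a))  # near 0: the second copy adds almost nothing
print(ncd(a, b))  # near 1: the two strings share little structure
```

The quality of the approximation depends on the compressor; a real compressor only minorizes the computable distances in the family up to its own inefficiency.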
Just an Update on PMING Distance for Web-based Semantic Similarity in Artificial Intelligence and Data Mining
One of the main problems that emerges in the classic approach to semantics is
the difficulty in acquisition and maintenance of ontologies and semantic
annotations. On the other hand, the Internet explosion and the massive
diffusion of mobile smart devices have led to the creation of a worldwide
system whose information is checked and fueled daily by the contribution of
millions of users who interact in a collaborative way. Search engines, continually
exploring the Web, are a natural source of information on which to base a
modern approach to semantic annotation. A promising idea is that it is possible
to generalize the semantic similarity, under the assumption that semantically
similar terms behave similarly, and define collaborative proximity measures
based on the indexing information returned by search engines. The PMING
Distance is a proximity measure used in data mining and information
retrieval, whose collaborative information expresses the degree of
relationship between two terms, using only the number of documents returned
as the result of a query on a search engine. In this work, the PMING
Distance is updated, providing a novel formal algebraic definition, which
corrects previous works. The novel point of view underlines that the PMING
is a locally normalized linear combination of the Pointwise Mutual
Information and the Normalized Google Distance. The analyzed measure
dynamically reflects the collaborative changes made on the web resources.
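The abstract above characterizes PMING as a locally normalized linear combination of Pointwise Mutual Information (PMI) and the Normalized Google Distance (NGD). A sketch of the two ingredients follows; the blend `pming_like`, its weight `mu`, and the normalization used here are purely illustrative assumptions, not the paper's actual definition.

```python
import math

def pmi(f_x, f_y, f_xy, n):
    # Pointwise Mutual Information from document counts (n = corpus size).
    return math.log(f_xy * n / (f_x * f_y))

def ngd(f_x, f_y, f_xy, n):
    lx, ly, lxy = math.log(f_x), math.log(f_y), math.log(f_xy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

def pming_like(f_x, f_y, f_xy, n, mu=0.5):
    # Hypothetical convex combination for illustration only: PMI is
    # flipped into a distance by dividing by its upper bound (reached
    # when the rarer term always co-occurs with the other).
    pmi_max = math.log(n / max(f_x, f_y))
    return mu * (1 - pmi(f_x, f_y, f_xy, n) / pmi_max) + (1 - mu) * ngd(f_x, f_y, f_xy, n)

# Perfectly co-occurring terms sit at distance 0 (hypothetical counts).
print(pming_like(100, 100, 100, 10**6))  # 0.0
```

Both ingredients need only document counts from a search engine, which is what makes such measures cheap to recompute as the web changes.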
Mobile Application to Identify Indonesian Flowers on Android Platform
Although many people love flowers, they do not know their names. In particular, many people do not recognize local flowers. To find a flower image, we can use a search engine such as Google, but it does not give much help in finding the name of a local flower. Sometimes, Google cannot show the correct name of local flowers. This study proposes an application to identify Indonesian flowers that runs on the Android platform for easy use anywhere. Flower recognition is based on color features using the Hue-Index, a shape feature using Centroid Contour Distance (CCD), and similarity measurement using entropy calculations. The outputs of this application are information about the inputted flower image, including Latin name, local name, description, distribution, and ecology. Based on tests performed on 44 types of flowers with 181 images in the database, the best similarity percentage is 97.72%. With this application, people are expected to know more about Indonesian flowers.
Keywords: Indonesian flowers, android, hue-index, CCD, entropy
A hybrid approach for paraphrase identification based on knowledge-enriched semantic heuristics
In this paper, we propose a hybrid approach for sentence paraphrase identification. The proposal addresses the problem of evaluating sentence-to-sentence semantic similarity when the sentences contain a set of named-entities. The essence of the proposal is to distinguish the computation of the semantic similarity of named-entity tokens from the rest of the sentence text. More specifically, this is based on the integration of word semantic similarity derived from WordNet taxonomic relations, and named-entity semantic relatedness inferred from Wikipedia entity co-occurrences and underpinned by Normalized Google Distance. In addition, the WordNet similarity measure is enriched with word part-of-speech (PoS) conversion aided with a Categorial Variation database (CatVar), which enhances the lexico-semantics of words. We validated our hybrid approach using two different datasets; Microsoft Research Paraphrase Corpus (MSRPC) and TREC-9 Question Variants. In our empirical evaluation, we showed that our system outperforms baselines and most of the related state-of-the-art systems for paraphrase detection. We also conducted a misidentification analysis to disclose the primary sources of our system errors
Web similarity in sets of search terms using database queries
Normalized web distance (NWD) is a similarity or normalized semantic distance based on the World Wide Web or another large electronic database, for instance Wikipedia, and a search engine that returns reliable aggregate page counts. For sets of search terms the NWD gives a common similarity (common semantics) on a scale from 0 (identical) to 1 (completely different). The NWD approximates the similarity of members of a set according to all (upper semi) computable properties. We develop the theory and give applications of classifying using Amazon, Wikipedia, and the NCBI website from the National Institutes of Health. The last gives new correlations between health hazards. A restriction of the NWD to a set of two yields the earlier normalized Google distance (NGD), but no combina