An Evaluation of Link Neighborhood Lexical Signatures to Rediscover Missing Web Pages
Lexical signatures, which consist of a small number of words chosen to represent the "aboutness" of a page, have previously been proposed for discovering the new URI of a missing web page. However, prior methods relied on computing the lexical signature before the page was lost, or on using cached or archived versions of the page to calculate one. We demonstrate a system for constructing a lexical signature for a page from its link neighborhood, that is, its "backlinks": the pages that link to the missing page. After testing various methods, we show that one can construct a lexical signature for a missing web page using only ten backlink pages. Further, we show that only the first level of backlinks is useful in this effort. The text that the backlinks use to point to the missing page serves as input for the creation of a four-word lexical signature. That lexical signature successfully finds the target URI in over half of the test cases.

Comment: 24 pages, 13 figures, 8 tables, technical report
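A minimal sketch of the approach the abstract describes: harvest the anchor text that backlink pages use when linking to the missing URI, rank the terms by frequency, and keep the top four as the lexical signature. The HTML parsing, stopword list, and function names below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: build a four-word lexical signature from backlink anchor text.
# Assumes the backlink pages have already been fetched as HTML strings.
import re
from collections import Counter
from html.parser import HTMLParser

class AnchorTextExtractor(HTMLParser):
    """Collects the anchor text of links whose href equals the target URI."""
    def __init__(self, target_uri):
        super().__init__()
        self.target_uri = target_uri
        self.in_target_link = False
        self.anchor_texts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a" and dict(attrs).get("href") == self.target_uri:
            self.in_target_link = True
            self.anchor_texts.append("")

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_target_link = False

    def handle_data(self, data):
        if self.in_target_link:
            self.anchor_texts[-1] += data

# Tiny illustrative stopword list; a real one would be much larger.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "on", "for"}

def lexical_signature(backlink_pages, target_uri, k=4):
    """Rank anchor-text terms by frequency and keep the top k."""
    counts = Counter()
    for html in backlink_pages:
        parser = AnchorTextExtractor(target_uri)
        parser.feed(html)
        for text in parser.anchor_texts:
            for term in re.findall(r"[a-z]+", text.lower()):
                if term not in STOPWORDS:
                    counts[term] += 1
    return [term for term, _ in counts.most_common(k)]

# The signature then becomes a search-engine query for the missing page:
# " ".join(lexical_signature(pages, "http://example.com/missing"))
```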
Clustering documents with active learning using Wikipedia
Wikipedia has been applied as a background knowledge base to various text mining problems, but very few attempts have been made to utilize it for document clustering. In this paper we propose to exploit the semantic knowledge in Wikipedia for clustering, enabling the automatic grouping of documents with similar themes. Although clustering is intrinsically unsupervised, recent research has shown that incorporating supervision improves clustering performance, even when only limited supervision is provided. The approach presented in this paper applies supervision using active learning. We first utilize Wikipedia to create a concept-based representation of a text document, with each concept associated with a Wikipedia article. We then exploit the semantic relatedness between Wikipedia concepts to find pairwise instance-level constraints for supervised clustering, guiding the clustering in the direction indicated by the constraints. We test our approach on three standard text document datasets. Empirical results show that our basic document representation strategy yields performance comparable to previous attempts, and that adding constraints improves clustering performance further by up to 20%.
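As a sketch of how such instance-level constraints can steer a clusterer: below, pairwise similarity between concept vectors (one weight per Wikipedia concept) is thresholded into must-link and cannot-link constraints, and a COP-KMeans-style check rejects cluster assignments that would violate them. The thresholds are illustrative, and simple thresholding stands in here for the paper's active-learning selection of constraint pairs.

```python
# Sketch of constraint generation and enforcement for supervised clustering.
import numpy as np

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def build_constraints(concept_vectors, upper=0.8, lower=0.1):
    """concept_vectors: (n_docs, n_concepts) array of Wikipedia-concept
    weights per document. Highly related pairs become must-link
    constraints; clearly unrelated pairs become cannot-link."""
    must_link, cannot_link = [], []
    n = len(concept_vectors)
    for i in range(n):
        for j in range(i + 1, n):
            s = cosine(concept_vectors[i], concept_vectors[j])
            if s >= upper:
                must_link.append((i, j))
            elif s <= lower:
                cannot_link.append((i, j))
    return must_link, cannot_link

def violates(i, cluster, labels, must_link, cannot_link):
    """COP-KMeans-style check: assigning doc i to `cluster` must not
    break a constraint against an already-labelled document."""
    for a, b in must_link:
        if i in (a, b):
            other = b if a == i else a
            if labels[other] is not None and labels[other] != cluster:
                return True
    for a, b in cannot_link:
        if i in (a, b):
            other = b if a == i else a
            if labels[other] == cluster:
                return True
    return False
```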
An effective, low-cost measure of semantic relatedness obtained from Wikipedia links
This paper describes a new technique for obtaining measures of semantic relatedness. Like other recent approaches, it uses Wikipedia to provide structured world knowledge about the terms of interest. Our approach is unique in that it does so using the hyperlink structure of Wikipedia rather than its category hierarchy or textual content. Evaluation against manually defined measures of semantic relatedness reveals this to be an effective compromise between the ease of computation of the former approach and the accuracy of the latter.
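A link-based measure of this kind can be sketched in a few lines. Assuming it follows the Normalized Google Distance adapted to Wikipedia's hyperlink graph, two articles are related to the degree that the sets of articles linking in to them overlap:

```python
# Sketch of a link-overlap relatedness measure in the style the abstract
# describes, modelled on the Normalized Google Distance.
import math

def link_relatedness(links_a, links_b, total_articles):
    """links_a, links_b: sets of article IDs that link *in* to each
    article. total_articles: |W|, the number of articles in Wikipedia.
    Returns a relatedness score in [0, 1]."""
    common = links_a & links_b
    if not common:
        return 0.0
    bigger = max(len(links_a), len(links_b))
    smaller = min(len(links_a), len(links_b))
    distance = (math.log(bigger) - math.log(len(common))) / (
        math.log(total_articles) - math.log(smaller))
    return max(0.0, 1.0 - distance)
```

Because it needs only the incoming-link sets and never touches article text, the measure is cheap to compute at scale, which is the "low-cost" compromise the abstract claims.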
Investigating the unofficial factors in Google ranking
This paper evaluates the effectiveness of some "unofficial" factors in Search Engine Optimisation. A summary of official Google guidelines is given, followed by a review of "unofficial" ranking factors as reported by a number of experts in the field of Search Engine Optimisation. These opinions vary and do not always agree. Experiments on keyword density, web page titles, and the use of outbound links were conducted to investigate the experts' hypotheses by analysing Google result pages. The results demonstrate that webmasters should avoid unnecessary outbound links, while attempting to repeat the important keywords of each page once in their titles, in order to increase the pages' ranking in the results page.
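The two page-level quantities these experiments vary are straightforward to measure; a sketch of how one might compute them when analysing pages (the tokenisation is an illustrative choice, not the paper's procedure):

```python
# Sketch: the quantities probed in the experiments above.
import re

def keyword_density(body_text, keyword):
    """Fraction of body tokens equal to the keyword."""
    tokens = re.findall(r"[a-z0-9]+", body_text.lower())
    return tokens.count(keyword.lower()) / len(tokens) if tokens else 0.0

def title_occurrences(title, keyword):
    """Occurrences of the keyword in the page title; the finding above
    suggests exactly one occurrence is the target."""
    return re.findall(r"[a-z0-9]+", title.lower()).count(keyword.lower())
```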
University of Glasgow at WebCLEF 2005: experiments in per-field normalisation and language specific stemming
We participated in the WebCLEF 2005 monolingual task. In this task, a search system aims to retrieve relevant documents from a multilingual corpus of Web documents from Web sites of European governments. Both the documents and the queries are written in a wide range of European languages. A challenge in this setting is to detect the language of documents and topics, and to process them appropriately. We develop a language-specific technique for applying the correct stemming approach, as well as for removing the correct stopwords from the queries. We represent documents using three fields, namely content, title, and anchor text of incoming hyperlinks. We use a technique called per-field normalisation, which extends the Divergence From Randomness (DFR) framework, to normalise the term frequencies and to combine them across the three fields. We also employ the length of the URL path of Web documents. The ranking is based on combinations of both the language-specific stemming, if applied, and the per-field normalisation. We use our Terrier platform for all our experiments. The overall performance of our techniques is outstanding, achieving the overall top four performing runs, as well as the top performing run without metadata in the monolingual task. The best run only uses per-field normalisation, without applying stemming.
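A sketch of what per-field normalisation looks like under DFR's Normalisation 2, applied independently to the content, title, and anchor-text fields and then combined with field weights. The c parameters and weights are illustrative placeholders, not the run settings from the paper:

```python
# Sketch: per-field term-frequency normalisation and combination.
import math

def normalised_tf(tf, field_len, avg_field_len, c):
    """DFR Normalisation 2 for one field: tf scaled by document length
    relative to the average length of that field."""
    return tf * math.log2(1.0 + c * avg_field_len / field_len)

def combined_tf(field_tfs, field_lens, avg_lens, cs, weights):
    """All arguments are dicts keyed by field name, e.g.
    {"content": ..., "title": ..., "anchor": ...}. Each field is
    normalised with its own c parameter, then weighted and summed."""
    return sum(
        weights[f] * normalised_tf(field_tfs[f], field_lens[f],
                                   avg_lens[f], cs[f])
        for f in field_tfs
        if field_lens[f] > 0
    )
```

Normalising each field with its own parameter before combining, rather than pooling the fields into one bag of words, is what lets short but high-signal fields like titles and anchor text contribute without being swamped by body content.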
Using the Web Infrastructure for Real Time Recovery of Missing Web Pages
Given the dynamic nature of the World Wide Web, missing web pages, or 404 "Page Not Found" responses, are part of our web browsing experience. It is our intuition that information on the web is rarely completely lost; it is just missing. In whole or in part, content often moves from one URI to another and hence just needs to be (re-)discovered. We evaluate several methods for a just-in-time approach to web page preservation. We investigate the suitability of lexical signatures and web page titles to rediscover missing content. It is understood that web pages change over time, which implies that the performance of these two methods depends on the age of the content. We therefore conduct a temporal study of the decay of lexical signatures and titles and estimate their half-life. We further propose the use of tags that users have created to annotate pages, as well as the most salient terms derived from a page's link neighborhood. We utilize the Memento framework to discover previous versions of web pages and to execute the above methods. We provide a workflow, including a set of parameters, that is most promising for the (re-)discovery of missing web pages. We introduce Synchronicity, a web browser add-on that implements this workflow. It works while the user is browsing and detects the occurrence of 404 errors automatically. When activated by the user, Synchronicity offers a total of six methods to either rediscover the missing page at its new URI or discover an alternative page that satisfies the user's information need. Synchronicity depends on user interaction, which enables it to provide results in real time.
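A sketch of the Memento step of that workflow: content negotiation in the datetime dimension against a TimeGate returns an archived copy (a memento) of the missing URI, from which a title or lexical-signature query can then be derived. The public Time Travel aggregator endpoint and the fixed Accept-Datetime value below are illustrative choices, not the add-on's configuration:

```python
# Sketch: fetch an archived copy of a missing URI via a Memento TimeGate.
import requests

TIMEGATE = "http://timetravel.mementoweb.org/timegate/"

def find_memento(uri, accept_datetime="Thu, 01 Jan 2015 00:00:00 GMT"):
    """Ask the TimeGate for a memento of `uri` near the given datetime.
    Returns the memento's URI, or None if no archived copy is found."""
    resp = requests.get(
        TIMEGATE + uri,
        headers={"Accept-Datetime": accept_datetime},
        allow_redirects=True,  # the TimeGate redirects to the memento
        timeout=30,
    )
    if resp.status_code == 200:
        return resp.url  # final URL after redirect: the archived copy
    return None
```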