Profiling Web Archive Coverage for Top-Level Domain and Content Language
The Memento aggregator currently polls every known public web archive when
serving a request for an archived web page, even though some web archives focus
on only specific domains and ignore the others. Similar to query routing in
distributed search, we investigate the impact on aggregated Memento TimeMaps
(lists of when and where a web page was archived) by only sending queries to
archives likely to hold the archived page. We profile twelve public web
archives using data from a variety of sources (the web, archives' access logs,
and full-text queries to archives) and discover that only sending queries to
the top three web archives (i.e., a 75% reduction in the number of queries) for
any request produces the full TimeMap in 84% of cases.
Comment: Appeared in TPDL 2013
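To make the routing idea concrete, here is a minimal sketch of profile-based query routing, assuming a precomputed per-archive coverage table keyed by top-level domain. The archive names, the scores, and the fetch_timemap stub are illustrative stand-ins, not the aggregator's actual interface or the paper's profiling method.

```python
# A minimal sketch of profile-based query routing for a Memento
# aggregator. The coverage table is hypothetical: P(archive holds
# a page | TLD of the requested URI), made-up numbers.
from urllib.parse import urlparse

COVERAGE = {
    "archive.org":       {"com": 0.9, "uk": 0.6, "jp": 0.4},
    "webarchive.org.uk": {"com": 0.1, "uk": 0.8, "jp": 0.0},
    "archive.is":        {"com": 0.5, "uk": 0.3, "jp": 0.2},
    "ndl.go.jp":         {"com": 0.0, "uk": 0.0, "jp": 0.9},
}

def tld(uri: str) -> str:
    """Last label of the hostname, e.g. 'uk' for www.bbc.co.uk."""
    return urlparse(uri).hostname.rsplit(".", 1)[-1]

def route(uri: str, k: int = 3) -> list[str]:
    """Rank archives by profiled coverage for this URI's TLD and
    return only the top k, instead of polling every archive."""
    t = tld(uri)
    ranked = sorted(COVERAGE, key=lambda a: COVERAGE[a].get(t, 0.0),
                    reverse=True)
    return ranked[:k]

def fetch_timemap(archive: str, uri: str) -> list[tuple[str, str]]:
    """Stub: would issue the TimeMap request to this archive and
    return (datetime, memento-URI) pairs."""
    return []

def aggregate(uri: str) -> list[tuple[str, str]]:
    """Aggregated TimeMap built from the top-k archives only."""
    entries = []
    for archive in route(uri):
        entries.extend(fetch_timemap(archive, uri))
    return sorted(entries)  # order mementos by datetime

print(route("http://www.bbc.co.uk/news"))  # top-3 archives for a .uk page
```

Querying three archives instead of twelve is the 75% reduction the abstract reports; the routing table is where the paper's profiling data would plug in.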
Dominant Search Engines: An Essential Cultural & Political Facility
When American lawyers talk about essential facilities, they are usually referring to antitrust doctrine that has required certain platforms to provide access on fair and nondiscriminatory terms to all comers. Some have recently characterized Google as an essential facility. Antitrust law may shape the search engine industry in positive ways. However, scholars and activists must move beyond the crabbed vocabulary of competition policy to develop a richer normative critique of search engine dominance.
In this chapter, I sketch a new concept of essential cultural and political facility, which can help policymakers recognize and address situations where a bottleneck has become important enough that special scrutiny is warranted. This scrutiny may not always culminate in regulation. However, it clearly suggests a need for publicly funded alternatives to the concentrated conduits and content providers colonizing the web.
Report of MIRACLE team for the Ad-Hoc track in CLEF 2006
This paper presents the MIRACLE team's 2006 approach to the Ad-Hoc Information Retrieval track. The experiments for this campaign continue to test our IR approach. First, a baseline set of runs is obtained, including standard components: stemming, transforming, filtering, entity detection and extraction, and others. Then, an extended set of runs is obtained using several types of combinations of these baseline runs. Only a few improvements were introduced for this campaign: we incorporated a prototype entity recognition and indexing tool into our tokenization scheme, and we ran more combining experiments for the robust multilingual case than in previous campaigns. However, no significant improvements have been achieved. For this campaign, runs were submitted for the following languages and tracks:
- Monolingual: Bulgarian, French, Hungarian, and Portuguese.
- Bilingual: English to Bulgarian, French, Hungarian, and Portuguese; Spanish to French and Portuguese; and French to Portuguese.
- Robust monolingual: German, English, Spanish, French, Italian, and Dutch.
- Robust bilingual: English to German, Italian to Spanish, and French to Dutch.
- Robust multilingual: English to robust monolingual languages.
We still need to work harder to improve some aspects of our processing scheme, the most important being, to our knowledge, entity recognition and normalization.
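The "combinations of these baseline runs" step can be illustrated with a small sketch. Below is a minimal CombSUM-style fusion with min-max score normalisation; the abstract does not specify the combination operators, so this is one standard choice, not the MIRACLE implementation.

```python
# A minimal sketch of combining baseline retrieval runs, assuming a
# CombSUM-style fusion over min-max normalised scores (an assumed,
# standard operator; the paper's actual combinations may differ).
from collections import defaultdict

def minmax(run: dict[str, float]) -> dict[str, float]:
    """Normalise one run's scores to [0, 1] so runs are comparable."""
    lo, hi = min(run.values()), max(run.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in run.items()}

def combsum(runs: list[dict[str, float]]) -> list[tuple[str, float]]:
    """Sum normalised scores per document across runs; rank by total."""
    fused = defaultdict(float)
    for run in runs:
        for doc, score in minmax(run).items():
            fused[doc] += score
    return sorted(fused.items(), key=lambda x: x[1], reverse=True)

# Two toy baseline runs (e.g., a stemmed run and an entity-indexed run):
stem_run = {"d1": 7.2, "d2": 5.1, "d3": 0.4}
entity_run = {"d2": 3.3, "d4": 2.9}
print(combsum([stem_run, entity_run]))  # d2 rises: it scores in both runs
```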
Evaluation of MIRACLE approach results for CLEF 2003
This paper describes the MIRACLE (Multilingual Information RetrievAl for the CLEf campaign) approach and results for the mono-, bi-, and multilingual Cross-Language Evaluation Forum tasks. The approach is based on the combination of linguistic and statistical techniques to perform indexing and retrieval tasks.
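As an illustration of pairing a linguistic step with a statistical one, here is a toy sketch: a crude suffix stemmer (a stand-in for a real stemmer) feeding TF-IDF weighting. It is illustrative only, not the MIRACLE pipeline.

```python
# A toy sketch: linguistic normalisation (crude suffix stemming)
# combined with statistical term weighting (TF-IDF). Illustrative
# stand-ins only; not the techniques the MIRACLE team used.
import math
from collections import Counter

def stem(token: str) -> str:
    """Toy stemmer: strip a few English suffixes."""
    for suf in ("ing", "ed", "s"):
        if token.endswith(suf) and len(token) > len(suf) + 2:
            return token[: -len(suf)]
    return token

def tokenize(text: str) -> list[str]:
    return [stem(t) for t in text.lower().split()]

docs = {"d1": "retrieving indexed documents",
        "d2": "indexing multilingual documents"}
tf = {d: Counter(tokenize(t)) for d, t in docs.items()}  # term frequencies
df = Counter(term for counts in tf.values() for term in counts)
N = len(docs)

def score(query: str, doc: str) -> float:
    """Sum of TF-IDF weights of the query's stemmed terms in the doc."""
    return sum(tf[doc][t] * math.log(N / df[t])
               for t in tokenize(query) if t in df)

# 'retrieving' stems to 'retriev', so d1 ranks first after stemming:
print(sorted(docs, key=lambda d: score("retrieving documents", d),
             reverse=True))
```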
Global disease monitoring and forecasting with Wikipedia
Infectious disease is a leading threat to public health, economic stability,
and other key social structures. Efforts to mitigate these impacts depend on
accurate and timely monitoring to measure the risk and progress of disease.
Traditional, biologically-focused monitoring techniques are accurate but costly
and slow; in response, new techniques based on social internet data such as
social media and search queries are emerging. These efforts are promising, but
important challenges in the areas of scientific peer review, breadth of
diseases and countries, and forecasting hamper their operational usefulness.
We examine a freely available, open data source for this use: access logs
from the online encyclopedia Wikipedia. Using linear models, language as a
proxy for location, and a systematic yet simple article selection procedure, we
tested 14 location-disease combinations and demonstrate that these data
feasibly support an approach that overcomes these challenges. Specifically, our
proof-of-concept yields models with r² up to 0.92, forecasting value up to
the 28 days tested, and several pairs of models similar enough to suggest that
transferring models from one location to another without re-training is
feasible.
Based on these preliminary results, we close with a research agenda designed
to overcome these challenges and produce a disease monitoring and forecasting
system that is significantly more effective, robust, and globally comprehensive
than the current state of the art.
Comment: 27 pages; 4 figures; 4 tables. Version 2: cite McIver & Brownstein and adjust novelty claims accordingly; revise title; various revisions for clarity
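A minimal sketch of the lagged linear-model idea, using made-up numbers: one article's weekly access counts predict official case counts one week ahead. The data, the single-predictor setup, and the one-week horizon are all illustrative; the paper fits models per location-disease pair and tests forecast horizons up to 28 days.

```python
# A minimal sketch (Python 3.10+) of a lagged linear model from
# Wikipedia access logs to disease incidence. All numbers are made
# up; the single-article predictor is an illustrative simplification.
from statistics import linear_regression, correlation

views = [120, 180, 260, 400, 650, 900, 700, 450]  # article hits per week
cases = [  8,  11,  15,  30,  48,  80,  95,  60]  # reported cases per week
horizon = 1  # forecast one step ahead: views now ~ cases next week

x = views[:-horizon]          # predictor: current access counts
y = cases[horizon:]           # target: cases `horizon` weeks later
slope, intercept = linear_regression(x, y)
r2 = correlation(x, y) ** 2   # in-sample goodness of fit

latest = views[-1]
print(f"r^2 = {r2:.2f}; next-week forecast: "
      f"{slope * latest + intercept:.0f} cases")
```

Transferring a model between locations, as the abstract suggests, would amount to reusing the fitted slope and intercept on another language edition's access counts.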
Freeing up access to learning: the role for Open Educational Resources
The internet revolution of the last few years has had an impact on how we all live our lives. So it is not surprising that this is also a time of change in attitudes towards how we learn. Free access to information through computer networks has expanded, and part of that information flow consists of materials designed to help people learn. In addition there are many further online resources that help the learning process, even if that was not their original aim. However, there are risks in this evolution in access to information, both for end users, who can be confused by the options available to them, and for those involved in providing education, who may see their traditional role changing and becoming harder to perform. This situation provides the background for a growing movement to consider directly how education can be provided in a freer and more open way. This has been termed “Open Educational Resources” (OER). The exact definition of the term depends on interpretation; however, a useful statement was provided as an outcome of an event organized by UNESCO in 2002:
“OER are teaching, learning, and research resources that reside in the public domain or have been released under an intellectual property license that permits their free use or re-purposing by others. Open educational resources include full courses, course materials, modules, textbooks, streaming videos, tests, software, and any other tools, materials, or techniques used to support access to knowledge (Atkins, Brown and Hammond, 2007, p4).”
Arguably the only difference between an online learning object and an open educational resource is the declaration that it is open. This may be true, but it turns out to be a powerful difference. Because it is open, the content can be accessed by any learner able to reach it, taken and run in new contexts, reworked by others and adapted for local needs (with the result shared back if desired), made part of a shared pool of resources, used as a shared point of reference for collaboration, and made the key to building policies that work in different domains.