13 research outputs found

    Google Web APIs - an Instrument for Webometric Analyses?

    Get PDF
    This paper introduces Google Web APIs (Google APIs) as an instrument and playground for webometric studies. Several examples of Google APIs implementations are given. Our examples show that this Google Web Service can be used successfully for informetric Internet-based studies, albeit with some restrictions. Comment: 2 pages, 2 figures, 10th International Conference of the International Society for Scientometrics and Informetrics
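
    As a rough illustration of how such API-based studies work, the sketch below queries a search API for the estimated number of results per query, the basic raw datum of many webometric analyses. The endpoint URL and response field name are hypothetical stand-ins, since the original SOAP-based Google Web APIs service has long been retired.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical JSON endpoint standing in for the retired SOAP-based
# Google Web APIs; the URL and response field name are assumptions.
SEARCH_URL = "https://api.example.org/search?q={query}"

def estimated_hits(query: str) -> int:
    """Return the engine's estimated result count for a query."""
    url = SEARCH_URL.format(query=urllib.parse.quote(query))
    with urllib.request.urlopen(url) as response:
        return int(json.load(response)["estimatedTotalResults"])

# Typical webometric use: compare the web visibility of two domains.
for site in ("site:uni-bielefeld.de", "site:example.edu"):
    print(site, estimated_hits(site))
```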

    Comparing webometric with web-independent rankings: a case study with German universities

    Get PDF
    In this paper we examine whether hyperlink-based (webometric) indicators can be used to rank academic websites. To this end, we analyzed the interlinking structure of German university websites and compared our simple hyperlink-based ranking with official, web-independent rankings of universities. We found that link impact could not easily be interpreted as a prestige factor for universities. Comment: 3 pages, ACM Web Science 201
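
    A minimal sketch of the ranking idea, assuming the sites have already been crawled and the hyperlinks between them counted; the site names and link counts below are invented for illustration.

```python
# Sketch of the hyperlink-based ranking: count the inlinks each
# university site receives from the other sites in the sample.
# Site names and link counts are invented for illustration.
interlinks = {
    # (source site, target site): hyperlinks observed
    ("uni-a.de", "uni-b.de"): 120,
    ("uni-a.de", "uni-c.de"): 45,
    ("uni-b.de", "uni-a.de"): 80,
    ("uni-c.de", "uni-a.de"): 200,
    ("uni-c.de", "uni-b.de"): 15,
}

def inlink_ranking(links):
    """Rank target sites by total inlinks received."""
    totals = {}
    for (_source, target), count in links.items():
        totals[target] = totals.get(target, 0) + count
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

for site, inlinks in inlink_ranking(interlinks):
    print(f"{site}: {inlinks} inlinks")
# uni-a.de: 280, uni-b.de: 135, uni-c.de: 45
```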

    Constructing experimental indicators for Open Access documents

    Get PDF
    The ongoing paradigm change in the scholarly publication system ('science is turning to e-science') makes it necessary to construct alternative evaluation criteria/metrics which appropriately take into account the unique characteristics of electronic publications and other research output in digital formats. Today, major parts of scholarly Open Access (OA) publications and the self-archiving area are not well covered in the traditional citation and indexing databases. The growing share and importance of freely accessible research output demand new approaches/metrics for measuring and evaluating these new types of scientific publications. In this paper we propose a simple quantitative method which establishes indicators by measuring the access/download patterns of OA documents and other web entities on a single web server. The experimental indicators (search engine, backlink and direct access indicator) are constructed from standard local web usage data. This new type of web-based indicator is developed to meet the demand for better study/evaluation of the accessibility, visibility and interlinking of openly accessible documents. We conclude that e-science will need new, stable e-indicators. Comment: 9 pages, 3 figures
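
    A minimal sketch of how the three indicators could be derived from a server's access log, assuming Combined Log Format input and a small, illustrative list of search-engine referrer domains; the classification rules are a simplified reading of the approach, not the authors' exact method.

```python
import re
from collections import Counter

# Sketch of the three experimental indicators, assuming Combined Log
# Format input. The engine list is illustrative, and internal
# referrals are lumped in with direct access for simplicity.
SEARCH_ENGINES = ("google.", "bing.", "yahoo.", "duckduckgo.")
LOG_LINE = re.compile(
    r'"(?:GET|POST) \S+[^"]*" \d{3} \S+ "(?P<referrer>[^"]*)"'
)

def classify(referrer: str, own_host: str) -> str:
    """Bucket one access by its referrer."""
    if referrer in ("", "-") or own_host in referrer:
        return "direct access"
    if any(engine in referrer for engine in SEARCH_ENGINES):
        return "search engine"
    return "backlink"

def indicators(log_lines, own_host="repository.example.org"):
    """Count accesses per indicator class over an access log."""
    counts = Counter()
    for line in log_lines:
        match = LOG_LINE.search(line)
        if match:
            counts[classify(match.group("referrer"), own_host)] += 1
    return counts

sample = ['127.0.0.1 - - [01/Jan/2024:00:00:00 +0000] '
          '"GET /doc.pdf HTTP/1.1" 200 1234 '
          '"https://www.google.com/search?q=oa" "Mozilla/5.0"']
print(indicators(sample))  # Counter({'search engine': 1})
```

    Under these rules, a hit with an empty referrer counts toward the direct access indicator, a hit referred from a known engine toward the search engine indicator, and a hit referred from any other external page toward the backlink indicator.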

    Webometrics - Development and Perspectives

    Get PDF
    The research field of webometrics investigates quantitative aspects of the creation and use of information resources on the Web, drawing on bibliometric and informetric methods. Instead of relying on bibliographic databases that collect scholarly publications, it uses web search engines such as Google, Yahoo! or Bing to make the structure-building processes of science accessible to analysis. For libraries and their digital services on the Web, webometrics opens up new perspectives: it not only allows the (re)use of web offerings to be observed and, building on that, optimized, but also provides methods for comparing one institution with others. The talk first gives a brief introduction to the basic concepts and methods of webometrics and its ranking procedures. It then presents practical tools and strategies with which the UB Bielefeld meets these new challenges, focusing on the Ranking Web of World Repositories as well as a link analysis of the websites of the UB Bielefeld and the BiPrints repository.

    A three-year study on the freshness of Web search engine databases

    Get PDF
    This paper deals with one aspect of the index quality of search engines: index freshness. The purpose is to analyse the update strategies of the major Web search engines Google, Yahoo, and MSN/Live.com. We tracked the updates of 40 daily updated pages and 30 irregularly updated pages, using data from a time span of six weeks in the years 2005, 2006, and 2007. We found that the best search engine in terms of up-to-dateness changes over the years and that none of the engines has an ideal solution for index freshness. Frequency distributions for the pages’ ages are skewed, which means that search engines do differentiate between often- and seldom-updated pages. This is confirmed by the difference between the average ages of daily updated pages and our control group of pages. Indexing patterns are often irregular, and there seems to be no clear policy regarding when to revisit Web pages. A major problem identified in our research is the delay in making crawled pages available for searching, which differs from one engine to another.
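
    The core measurement behind such a study can be illustrated with a small sketch: for each check of a daily updated page, compare the date of the copy the engine serves with the date of the check. The observations below are invented; a real study would collect them over the full test period.

```python
from datetime import date

# Sketch of the age measure: days between the date of the copy a
# search engine serves and the date of the check. Data invented.
observations = [
    # (date of check, date of the indexed copy)
    (date(2007, 2, 1), date(2007, 1, 31)),
    (date(2007, 2, 2), date(2007, 2, 2)),
    (date(2007, 2, 3), date(2007, 1, 28)),
]

ages = [(checked - indexed).days for checked, indexed in observations]
up_to_date = sum(1 for age in ages if age == 0)

print(f"mean age: {sum(ages) / len(ages):.1f} days")   # 2.3 days
print(f"up-to-date copies: {up_to_date}/{len(ages)}")  # 1/3
```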

    An exploratory study of Google Scholar

    Get PDF
    Purpose – This paper discusses the new scientific search service Google Scholar (GS). This search engine, intended exclusively for searching scholarly documents, is described with its most important functionality and then empirically tested. The focus is on an exploratory study which investigates the coverage of scientific serials in GS.
    Design/methodology/approach – The study is based on queries against different journal lists: international scientific journals from Thomson Scientific (SCI, SSCI, AH), Open Access journals from the DOAJ list, and journals of the German social sciences literature database SOLIS, as well as the analysis of result data from GS. All data gathering took place in August 2006.
    Findings – The study shows deficiencies in the coverage and up-to-dateness of the GS index. Furthermore, it highlights which web servers are the most important data providers for this search service and which information sources are strongly represented. We show that there is a relatively large gap in Google Scholar’s coverage of German literature as well as weaknesses in the accessibility of Open Access content. Major commercial academic publishers are currently the main data providers.
    Research limitations/implications – Five different journal lists were analyzed, comprising approximately 9,500 individual titles. The lists are from different fields and of various sizes, which limits comparability. There were also some problems matching the journal titles of the original lists to the journal title data provided by Google Scholar. We were only able to analyze the top 100 Google Scholar hits per journal.
    Practical implications – We conclude that Google Scholar has some interesting advantages (such as citation analysis and free materials), but due to various weaknesses (such as transparency, coverage and up-to-dateness) the service cannot be seen as a substitute for special abstracting and indexing databases and library catalogues.
    Originality/value – We do not know of any other study using such a brute-force approach on such a large empirical basis: we gathered a large amount of data from Google and then analyzed it in a macroscopic way.
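
    The title-matching problem mentioned under the research limitations can be illustrated with a short sketch: both the source journal lists and the titles extracted from Google Scholar result data are normalized before comparison, since the two sides rarely agree character for character. The normalization rules and sample titles are illustrative assumptions, not the study's exact procedure.

```python
import re

# Sketch of a title-matching step: normalize both sides before
# comparing, since source lists and result data rarely agree
# character for character. Sample titles are illustrative.
def normalize(title: str) -> str:
    """Lowercase, drop punctuation, and collapse whitespace."""
    title = re.sub(r"[^\w\s]", " ", title.lower())
    return " ".join(title.split())

journal_list = ["Journal of Informetrics", "Scientometrics"]
scholar_titles = ["journal of informetrics", "SCIENTOMETRICS."]

known = {normalize(title) for title in journal_list}
matches = [title for title in scholar_titles if normalize(title) in known]
print(matches)  # both titles match after normalization
```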

    Does it matter which search engine is used? A user study using post-task relevance judgments

    Full text link
    The objective of this research was to find out how the two search engines Google and Bing perform when users work freely on pre-defined tasks and judge the relevance of the results immediately after finishing their search session. In a user study, 64 participants conducted two search tasks each and then judged the results on the following: (1) the quality of the results they selected in their search sessions, (2) the quality of the results they were presented with in their search sessions but did not click on, and (3) the quality of the results from the competing search engine for their queries, which they did not see in their search session. We found that users relied heavily on Google, that Google produced more relevant results than Bing, that users were well able to select relevant results from the results lists, and that users judged the relevance of results lower when they regarded a task as difficult and did not find the correct information.
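
    A sketch of the kind of aggregation such a post-task design ends in: each judgment pairs the engine that produced a result with the participant's relevance score, and mean scores are compared per engine. The data and field names below are invented for illustration.

```python
from statistics import mean

# Sketch of the post-task comparison: each judgment records the
# engine that produced the result and the user's relevance score.
judgments = [
    {"engine": "Google", "relevance": 4},
    {"engine": "Google", "relevance": 5},
    {"engine": "Bing", "relevance": 3},
    {"engine": "Bing", "relevance": 4},
]

for engine in ("Google", "Bing"):
    scores = [j["relevance"] for j in judgments if j["engine"] == engine]
    print(f"{engine}: mean relevance {mean(scores):.2f} "
          f"over {len(scores)} judgments")
```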

    ... is used. A. uses Boolean logic (algebra) with the operators AND, OR and NOT. The logical operators yield the following sets: A AND B

    Get PDF
    Printers that use a process for the fast and inexpensive production of three-dimensional physical models (prototypes). They use powder materials that are solidified by injecting a binder. 3D printers process CAD data and can now be purchased for rapid prototyping for under $5,000.
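
    The Boolean retrieval fragment in the entry above maps directly onto set operations; a quick illustration with invented document-ID sets:

```python
# Boolean retrieval over invented document-ID sets: AND, OR and NOT
# correspond to set intersection, union and difference.
docs_with_a = {1, 2, 3, 5}  # documents containing term A
docs_with_b = {2, 3, 4}     # documents containing term B

print(docs_with_a & docs_with_b)  # A AND B -> {2, 3}
print(docs_with_a | docs_with_b)  # A OR B  -> {1, 2, 3, 4, 5}
print(docs_with_a - docs_with_b)  # A NOT B -> {1, 5}
```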