21 research outputs found

    Complete Issue 15, 1997

    Get PDF

    Crossing the academic ocean? Judit Bar-Ilan's oeuvre on search engines studies

    Full text link
    The main objective of this work is to analyse Judit Bar-Ilan's contributions to search engine studies. To do this, two complementary approaches were taken. First, a systematic literature review of 47 publications authored or co-authored by Judit and devoted to this topic. Second, an interdisciplinarity analysis based on the cited references (publications cited by Judit) and citing documents (publications that cite Judit's work), carried out through Scopus. The systematic literature review reveals a large number of search engines studied (43) and indicators measured (especially technical precision, overlap, and fluctuation over time). It also detects an evolution over the years from descriptive statistical studies towards empirical user studies that mix quantitative and qualitative methods. Furthermore, the interdisciplinarity analysis shows that a significant portion of Judit's oeuvre was intellectually grounded in computer science, achieving a significant, though not exclusive, impact on library and information science.
    Orduña-Malea, E. (2020). Crossing the academic ocean? Judit Bar-Ilan's oeuvre on search engines studies. Scientometrics, 123(3), 1317-1340. https://doi.org/10.1007/s11192-020-03450-4

    The Full Value of the Nobel Prize - Part 1: Mining “Data Without Theory”

    Get PDF
    This paper comes in two parts, this being the first. Part 1 is not a research paper in the sense of the Scientific Method; it is rather unsophisticated data mining, and a cheap data-mining exercise at that, because it does not follow any received economic, or other, theory. In the sense of Ed E. Leamer, it is “data without theory,” and data without theory does not speak for itself, despite the common cliché of “letting the data speak for itself.” The objective here is to adjust the money value of the Nobel Prize to include the values of the Nobel Prize medal and diploma. It is an arithmetic exercise that reveals that Alfred Nobel’s monetary contribution to humanity is huge. More importantly, the calculations generate data that make it possible to focus on the economic implications of Nobel’s bequest for human capital accumulation, technological progress, and long-run economic growth, which are the subject of a separate effort in Part 2. In this “paper” I indicate some basic relationships among key variables in Section 4, and remark in the last section that the Nobel Prize is a massive contribution, even without taking into account the time value of money. For instance, the unadjusted value of the Economics Nobel Prize awarded to Ragnar Frisch and Jan Tinbergen in 1969 was only 2.92 million SEK (US$0.57 million), but adjusted for the medal and diploma values the award was 5.85 million SEK (US$1.14 million).
    Keywords: Nobel Prize full value, Nobel Prize and human development, Nobel Prize and human capital, Nobel Prize and technological change, Nobel Prize and economic performance
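    The adjustment described in the abstract is straightforward arithmetic: add the estimated medal and diploma values to the cash award. Below is a minimal sketch using only the 1969 figures quoted above; the implied medal-plus-diploma value and the exchange rate are derived from those figures for illustration, not taken from the paper.

```python
# Sketch of the Nobel Prize value adjustment described in the abstract.
# Cash award and adjusted total are the 1969 figures quoted there; the
# implied medal-plus-diploma value is derived, not taken from the paper.

cash_award_sek = 2.92e6                 # 1969 Economics Prize cash component (SEK)
adjusted_total_sek = 5.85e6             # cash + medal + diploma, per the abstract (SEK)
sek_per_usd = cash_award_sek / 0.57e6   # implied 1969 exchange rate (~5.1 SEK/USD)

medal_and_diploma_sek = adjusted_total_sek - cash_award_sek
print(f"Implied medal + diploma value: {medal_and_diploma_sek / 1e6:.2f} million SEK")
print(f"Adjusted award: {adjusted_total_sek / 1e6:.2f} million SEK "
      f"(US${adjusted_total_sek / sek_per_usd / 1e6:.2f} million)")
```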

    Scatter matters: Regularities and implications for the scatter of healthcare information on the Web

    Full text link
    Despite the development of huge healthcare Web sites and powerful search engines, many searchers end their searches prematurely with incomplete information. Recent studies suggest that users often retrieve incomplete information because of the complex scatter of relevant facts about a topic across Web pages. However, little is understood about regularities underlying such information scatter. To probe regularities within the scatter of facts across Web pages, this article presents the results of two analyses: (a) a cluster analysis of Web pages that reveals the existence of three page clusters that vary in information density and (b) a content analysis that suggests the role each of the above-mentioned page clusters plays in providing comprehensive information. These results provide implications for the design of Web sites, search tools, and training to help users find comprehensive information about a topic, and for a hypothesis describing the underlying mechanisms causing the scatter. We conclude by briefly discussing how the analysis of information scatter, at the granularity of facts, complements existing theories of information-seeking behavior.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/69202/1/21217_ftp.pd
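    As an illustration of the kind of cluster analysis referred to in (a), here is a minimal sketch using scikit-learn's KMeans on invented page-level fact counts; the page names, feature choice, and values are hypothetical and only mirror the three-cluster finding, they are not the study's data or method.

```python
# Hypothetical sketch: group Web pages into three clusters by information
# density, loosely mirroring the result described in the abstract.
import numpy as np
from sklearn.cluster import KMeans

pages = ["overview", "faq", "symptoms", "forum-post", "glossary", "treatment"]
# Features per page: [number of distinct facts, facts per 1000 words]
X = np.array([[24, 8.0], [15, 6.5], [9, 3.1], [2, 0.4], [3, 0.9], [18, 7.2]])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for page, label in zip(pages, labels):
    print(f"{page:12s} -> cluster {label}")
```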

    The Appearance of Statistical Ideas in Prose, Poetry, and Drama: A Dictionary of Quotations, Aphorisms, Apothegms, Excerpts and Epigrams

    Get PDF
    It is not always easy to understand ideas that are statistical or probabilistic in character. It is even less easy to explain those ideas well. The quotations in this collection were assembled partly to understand how to understand, at least insofar as words (rather than statistical models) permit, and how writers think and explain. One of the motives here is also to assure that readers know where the quotations come from. Anybody nowadays can do a Google search and get what is alleged to be a quote by someone famous, but the Googler might never know where it came from. Here, the intent is to ensure that the right sources are properly cited and that page numbers and the like are identified. The final motive is amusement. If these quotations amuse and entice others' interest, that would be lovely.

    Why is it difficult to find comprehensive information? Implications of information scatter for search and design

    Full text link
    The rapid development of Web sites providing extensive coverage of a topic, coupled with the development of powerful search engines (designed to help users find such Web sites), suggests that users can easily find comprehensive information about a topic. In domains such as consumer healthcare, finding comprehensive information about a topic is critical as it can improve a patient's judgment in making healthcare decisions, and can encourage higher compliance with treatment. However, recent studies show that despite using powerful search engines, many healthcare information seekers have difficulty finding comprehensive information even for narrow healthcare topics because the relevant information is scattered across many Web sites. To date, no studies have analyzed how facts related to a search topic are distributed across relevant Web pages and Web sites. In this study, the distribution of facts related to five common healthcare topics across high-quality sites is analyzed, and the reasons underlying those distributions are explored. The analysis revealed the existence of a few pages that had many facts, many pages that had few facts, and no single page or site that provided all the facts. While such a distribution conforms to other information-related phenomena, a deeper analysis revealed that the distributions were caused by a trade-off between depth and breadth, leading to the existence of general, specialized, and sparse pages. Furthermore, the results helped to make explicit the knowledge searchers need to find comprehensive healthcare information, and suggested the value of exploring distribution-conscious approaches in the development of future search systems, search interfaces, Web page designs, and training.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/48701/1/20189_ftp.pd
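    To make the "a few pages with many facts, many pages with few facts" pattern concrete, here is a small sketch that counts how many pages a searcher must visit to cover every fact about a topic; the page/fact data and the greedy coverage strategy are illustrative assumptions, not the study's data or method.

```python
# Hypothetical sketch: facts per page follow a skewed distribution, and no
# single page carries all facts, so comprehensive coverage requires many pages.
pages = {
    "page_a": {"f1", "f2", "f3", "f4", "f5", "f6", "f7"},  # a "general" page
    "page_b": {"f1", "f2", "f8", "f9"},                    # a "specialized" page
    "page_c": {"f3", "f10"},
    "page_d": {"f1"},                                      # "sparse" pages
    "page_e": {"f2"},
}
all_facts = set().union(*pages.values())

covered, visited = set(), []
# Greedily visit the page adding the most new facts until everything is covered.
while covered != all_facts:
    best = max(pages, key=lambda p: len(pages[p] - covered))
    visited.append(best)
    covered |= pages[best]

print(f"{len(all_facts)} facts total; no single page has more than "
      f"{max(len(f) for f in pages.values())}; "
      f"{len(visited)} pages needed for full coverage: {visited}")
```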

    TME Volume 9, Numbers 1 and 2

    Get PDF

    The Rock, Fall 2011 (vol. 82, no. 1)

    Get PDF