
    Distribution and Use of Knowledge under the “Laws of the Web”

    Empirical evidence shows that the perception of information is strongly concentrated in those environments in which a mass of producers and users of knowledge interact through a distribution medium. This paper considers the consequences of this fact for economic equilibrium analysis. In particular, it examines how the ranking schemes applied by the distribution technology affect the use of knowledge, and it then describes the characteristics of an optimal ranking scheme. The analysis is carried out using a model in which agents’ productivity is based on the stock of knowledge used. The value of a piece of information is assessed in terms of its contribution to productivity.
    Keywords: global rankings, information and internet services, limited attention, diversity, knowledge society
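
    A minimal toy sketch of the kind of mechanism this abstract describes (not the paper's formal model; the attention-decay form and all parameters are illustrative assumptions): a ranking scheme allocates limited attention over items of knowledge, and aggregate productivity depends on which items end up being used.

        # Toy sketch: attention is allocated by rank, so the ranking scheme
        # determines which pieces of knowledge contribute to productivity.
        import numpy as np

        rng = np.random.default_rng(0)
        values = rng.lognormal(0.0, 1.0, size=1000)   # hypothetical productivity contributions

        def aggregate_productivity(scores, budget=100, attention_decay=1.1):
            # Only the top `budget` items by score receive attention,
            # with attention share falling off with rank.
            order = np.argsort(scores)[::-1][:budget]
            weights = 1.0 / np.arange(1, budget + 1) ** attention_decay
            return float(np.sum(values[order] * weights / weights.sum()))

        # Ranking by a noisy observation of value versus ranking at random.
        noisy = values + rng.normal(scale=values.std(), size=values.size)
        print("rank by noisy value:", aggregate_productivity(noisy))
        print("random ranking:     ", aggregate_productivity(rng.permutation(values.size)))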

    Experience versus Talent Shapes the Structure of the Web

    We use sequential large-scale crawl data to empirically investigate and validate the dynamics that underlie the evolution of the structure of the web. We find that the overall structure of the web is defined by an intricate interplay between experience or entitlement of the pages (as measured by the number of inbound hyperlinks a page already has), inherent talent or fitness of the pages (as measured by the likelihood that someone visiting the page would give a hyperlink to it), and the continual high rates of birth and death of pages on the web. We find that the web is conservative in judging talent and the overall fitness distribution is exponential, showing low variability. The small variance in talent, however, is enough to lead to experience distributions with high variance: The preferential attachment mechanism amplifies these small biases and leads to heavy-tailed power-law (PL) inbound degree distributions over all pages, as well as over pages that are of the same age. The balancing act between experience and talent on the web allows newly introduced pages with novel and interesting content to grow quickly and surpass older pages. In this regard, it is much like what we observe in high-mobility and meritocratic societies: People with entitlement continue to have access to the best resources, but there is just enough screening for fitness that allows for talented winners to emerge and join the ranks of the leaders. Finally, we show that the fitness estimates have potential practical applications in ranking query results.
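
    A minimal simulation sketch of the mechanism described above (fitness-weighted preferential attachment with exponentially distributed fitness); the growth rule and parameters are illustrative assumptions, not the authors' estimation procedure.

        # Small fitness differences ("talent") are amplified by preferential
        # attachment ("experience") into a heavy-tailed in-degree distribution.
        import random

        random.seed(1)
        fitness, indeg = [], []
        for t in range(3000):
            fitness.append(random.expovariate(1.0))   # exponential, low-variance fitness
            indeg.append(0)
            if t == 0:
                continue
            # The new page links to an existing page with probability
            # proportional to fitness * (in-degree + 1).
            weights = [f * (k + 1) for f, k in zip(fitness[:-1], indeg[:-1])]
            target = random.choices(range(t), weights=weights, k=1)[0]
            indeg[target] += 1

        print("max in-degree:", max(indeg), "median in-degree:", sorted(indeg)[len(indeg) // 2])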

    The egalitarian effect of search engines

    Search engines have become key media for our scientific, economic, and social activities by enabling people to access information on the Web in spite of its size and complexity. On the down side, search engines bias the traffic of users according to their page-ranking strategies, and some have argued that they create a vicious cycle that amplifies the dominance of established and already popular sites. We show that, contrary to these prior claims and our own intuition, the use of search engines actually has an egalitarian effect. We reconcile theoretical arguments with empirical evidence showing that the combination of retrieval by search engines and search behavior by users mitigates the attraction of popular pages, directing more traffic toward less popular sites, even in comparison to what would be expected from users randomly surfing the Web.
    Comment: 9 pages, 8 figures, 2 appendices. The final version of this e-print has been published in Proc. Natl. Acad. Sci. USA 103(34), 12684-12689 (2006), http://www.pnas.org/cgi/content/abstract/103/34/1268
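
    A stylized illustration of the effect described above (the query model, subset size, and click-decay exponent are assumptions of ours, not the paper's model): when users reach pages through topical queries, even a steep click bias toward the top of each result list can spread traffic more evenly than popularity-proportional surfing.

        # Compare traffic concentration under popularity-proportional surfing
        # versus search mediated by topical queries with rank-biased clicks.
        import numpy as np

        rng = np.random.default_rng(2)
        n_pages = 2000
        popularity = rng.pareto(1.1, n_pages) + 1             # heavy-tailed page popularity

        surf_traffic = popularity / popularity.sum()           # surfing: traffic ~ popularity

        search_traffic = np.zeros(n_pages)
        clicks = 1.0 / np.arange(1, 21) ** 1.5                 # clicks decay sharply with rank
        for _ in range(10000):
            hits = rng.choice(n_pages, size=20, replace=False)     # pages matching one query
            order = hits[np.argsort(popularity[hits])[::-1]]       # result list ranked by popularity
            search_traffic[order] += clicks / clicks.sum()
        search_traffic /= search_traffic.sum()

        top = np.argsort(popularity)[::-1][: n_pages // 100]       # top 1% most popular pages
        print("traffic share of top 1% pages, surfing:", surf_traffic[top].sum())
        print("traffic share of top 1% pages, search: ", search_traffic[top].sum())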

    Quantifying Biases in Online Information Exposure

    Our consumption of online information is mediated by filtering, ranking, and recommendation algorithms that introduce unintentional biases as they attempt to deliver relevant and engaging content. It has been suggested that our reliance on online technologies such as search engines and social media may limit exposure to diverse points of view and make us vulnerable to manipulation by disinformation. In this paper, we mine a massive dataset of Web traffic to quantify two kinds of bias: (i) homogeneity bias, which is the tendency to consume content from a narrow set of information sources, and (ii) popularity bias, which is the selective exposure to content from top sites. Our analysis reveals different bias levels across several widely used Web platforms. Search exposes users to a diverse set of sources, while social media traffic tends to exhibit high popularity and homogeneity bias. When we focus our analysis on traffic to news sites, we find higher levels of popularity bias, with smaller differences across applications. Overall, our results quantify the extent to which our choices of online systems confine us inside "social bubbles."
    Comment: 25 pages, 10 figures, to appear in the Journal of the Association for Information Science and Technology (JASIST).
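
    A simplified sketch of how the two bias measures could be computed from a per-user traffic log (the metric definitions, domain names, and top-site list here are assumptions for illustration; the paper's exact measures may differ).

        # Homogeneity bias: how concentrated a user's visits are over sources.
        # Popularity bias: how much of a user's traffic goes to globally top sites.
        from collections import Counter
        import math

        traffic = {  # hypothetical user -> list of visited domains
            "u1": ["news-a.com"] * 8 + ["blog-b.org"] * 2,
            "u2": ["news-a.com", "blog-b.org", "wiki-c.org", "site-d.net", "news-a.com"],
        }

        def homogeneity_bias(domains):
            # 1 minus normalized entropy over sources; higher = more homogeneous.
            counts = Counter(domains)
            total = sum(counts.values())
            entropy = -sum(c / total * math.log2(c / total) for c in counts.values())
            max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
            return 1.0 - entropy / max_entropy

        def popularity_bias(domains, top_sites):
            # Fraction of visits that go to top-ranked sites.
            return sum(d in top_sites for d in domains) / len(domains)

        top_sites = {"news-a.com"}  # assumed set of top sites by global traffic
        for user, visits in traffic.items():
            print(user, round(homogeneity_bias(visits), 2), round(popularity_bias(visits, top_sites), 2))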

    Study, Analysis and Comparison between Amazon A10 and A11 Search Algorithm

    Amazon Search powers the sales of Amazon, one of the leading e-commerce platforms around the globe. As a result, even slight improvements in relevance can have a major impact on revenue as well as on the shopping experience of millions of users. In the beginning, Amazon’s product search engine consisted of a number of manually tuned ranking functions that used a limited set of input features; a great deal has changed since then. Many people overlook the fact that Amazon is a search engine, and indeed the largest one for e-commerce: it currently serves 54% of all product queries, and it deserves to be treated as the top e-commerce search engine in the world. In this paper, the authors consider the two most important Amazon search engine algorithms, A10 and A11, and present a comparative study of them.

    Shuffling a Stacked Deck: The Case for Partially Randomized Ranking of Search Engine Results

    In-degree, PageRank, number of visits and other measures of Web page popularity significantly influence the ranking of search results by modern search engines. The assumption is that popularity is closely correlated with quality, a more elusive concept that is difficult to measure directly. Unfortunately, the correlation between popularity and quality is very weak for newly-created pages that have yet to receive many visits and/or in-links. Worse, since discovery of new content is largely done by querying search engines, and because users usually focus their attention on the top few results, newly-created but high-quality pages are effectively "shut out," and it can take a very long time before they become popular. We propose a simple and elegant solution to this problem: the introduction of a controlled amount of randomness into search result ranking methods. Doing so offers new pages a chance to prove their worth, although clearly using too much randomness will degrade result quality and annul any benefits achieved. Hence there is a tradeoff between exploration to estimate the quality of new pages and exploitation of pages already known to be of high quality. We study this tradeoff both analytically and via simulation, in the context of an economic objective function based on aggregate result quality amortized over time. We show that a modest amount of randomness leads to improved search results.
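
    One simple way to realize the idea sketched above (an assumption for illustration, not the authors' exact scheme or analysis): reserve a few of the top-k slots for randomly chosen low-evidence pages so they can accumulate the visits needed to estimate their quality.

        # Partially randomized ranking: most slots exploit known popularity,
        # a few slots explore pages that have too little evidence so far.
        import random

        def rank_with_exploration(pages, popularity, k=10, explore_slots=2, seed=None):
            """pages: list of ids; popularity: dict id -> score (visits or in-links)."""
            rng = random.Random(seed)
            by_popularity = sorted(pages, key=lambda p: popularity.get(p, 0), reverse=True)
            exploit = by_popularity[: k - explore_slots]
            unproven = [p for p in by_popularity[k - explore_slots:] if popularity.get(p, 0) < 5]
            explore = rng.sample(unproven, min(explore_slots, len(unproven)))
            return exploit + explore

        pages = [f"page{i}" for i in range(100)]
        popularity = {p: max(0, 90 - i) for i, p in enumerate(pages)}   # newest pages have ~0 popularity
        print(rank_with_exploration(pages, popularity, seed=42))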
