    Dominant Search Engines: An Essential Cultural & Political Facility

    When American lawyers talk about essential facilities, they are usually referring to antitrust doctrine that has required certain platforms to provide access on fair and nondiscriminatory terms to all comers. Some have recently characterized Google as an essential facility. Antitrust law may shape the search engine industry in positive ways. However, scholars and activists must move beyond the crabbed vocabulary of competition policy to develop a richer normative critique of search engine dominance. In this chapter, I sketch a new concept of essential cultural and political facility, which can help policymakers recognize and address situations where a bottleneck has become important enough that special scrutiny is warranted. This scrutiny may not always culminate in regulation. However, it clearly suggests a need for publicly funded alternatives to the concentrated conduits and content providers colonizing the web.

    Search engine bias: the structuration of traffic on the World-Wide Web

    Search engines are essential components of the World Wide Web; both commercially and in terms of everyday usage, their importance is hard to overstate. This thesis examines the question of why there is bias in search engine results – bias that invites users to click on links to large websites, commercial websites, websites based in certain countries, and websites written in certain languages. The thesis traces the historical development of the search engine industry: search engines first emerged as prototypical technological startups emanating from Silicon Valley, followed by the acquisition of search engine companies by major US media corporations and their development into portals. The subsequent development of pay-per-click advertising is central to the current industry structure, an oligopoly of vertically integrated companies managing networks of syndicated advertising and traffic distribution. The study also shows a global landscape in which search production is concentrated in, and caters for, large global advertising markets, leaving the rest of the world with patchy and uneven coverage of search results. The analysis of interviews with senior search engine engineers indicates that issues of quality are addressed in their discourse in terms of customer service and relevance, while the analysis of documents, interviews with search marketers, and participant observation within a search engine marketing firm shows that producers and marketers have complex relationships combining aspects of collaboration, competition, and indifference. The results of the study offer a basis for synthesising insights from the political economy of media and communication with those of the social studies of technology tradition, emphasising the importance of culture in constructing and maintaining both local structures and wider systems. In the case of search engines, the evidence indicates that the culture of the technological entrepreneur is very effective in creating a new megabusiness, but less successful in encouraging debate on issues of the public good or public responsibility as they relate to the search engine industry.

    A Novel Method to Calculate Click Through Rate for Sponsored Search

    Sponsored search uses the generalized second price (GSP) auction, a pay-per-click mechanism that is most commonly used to allocate slots on the results page. The two main quantities in GSP are the bid amount and the click-through rate (CTR). The CTR learning algorithms currently in use work on the basic principle of #clicks_i / #impressions_i within a fixed window of clicks, impressions, or time. CTR estimated this way is vulnerable to fraudulent clicks, which produce sudden spikes in CTR; current algorithms cannot prevent this, although machine learning methods can detect that fraudulent clicks are being generated. In this paper, we use the concept of relative ranking, which works on the basic principle of #clicks_i / #clicks_t. In this algorithm the numerator and the denominator are linked: because #clicks_t is large and is tied to #clicks_i (unlike the impression counts used as denominators by previous algorithms), the small click fluctuations that occur in normal operation produce only a very small change in the result, whereas a burst of fraudulent clicks adds rapidly to the normal clicks in the denominator, thereby decreasing the computed CTR.
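    The relative-ranking formulation can be made concrete with a minimal Python sketch (not taken from the paper; the function names, variable names, and numbers are illustrative assumptions):

    # Minimal sketch of the two CTR estimates discussed above; all function
    # names, variable names, and numbers are illustrative assumptions, not
    # taken from the paper.

    def conventional_ctr(clicks_i: int, impressions_i: int) -> float:
        # Standard estimate: #clicks_i / #impressions_i over a fixed window.
        return clicks_i / impressions_i if impressions_i else 0.0

    def relative_ctr(clicks_i: int, clicks_t: int) -> float:
        # Relative-ranking estimate: #clicks_i / #clicks_t, where clicks_t is
        # the total number of clicks observed in the window, so the
        # denominator moves whenever the overall click volume moves.
        return clicks_i / clicks_t if clicks_t else 0.0

    # Illustration: ad i keeps 50 genuine clicks out of 1000 impressions, but
    # a burst of 500 fraudulent clicks in the same window raises the total
    # click count from 400 to 900.
    print(conventional_ctr(50, 1000))   # 0.05, unchanged by the burst
    print(relative_ctr(50, 400))        # 0.125 before the burst
    print(relative_ctr(50, 900))        # ~0.056 after the burst

    Because the denominator tracks total click volume, a burst of fraudulent clicks inflates it and lowers the computed CTR rather than letting it spike, which is the damping effect the abstract describes.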

    Searching for an Answer: Can Google Legally Manipulate Search Engine Results?

    New perspectives on Web search engine research

    Purpose – The purpose of this chapter is to give an overview of the context of Web search and search engine-related research, as well as to introduce the reader to the sections and chapters of the book. Methodology/approach – We review literature dealing with various aspects of search engines, with special emphasis on emerging areas of Web searching, search engine evaluation going beyond traditional methods, and new perspectives on Web searching. Findings – The approaches to studying Web search engines are manifold. Given the importance of Web search engines for knowledge acquisition, research from different perspectives needs to be integrated into a more cohesive perspective. Research limitations/implications – The chapter suggests a basis for research in the field and also introduces further research directions. Originality/value of paper – The chapter gives a concise overview of the topics dealt with in the book and also shows directions for researchers interested in Web search engines.

    Wrong Turn on the Information Superhighway: Education and the Commercialization of the Internet, by Bettina Fabos

    Wrangling the Web: Advanced Tools for Effective Internet Searching

    Describes how to conduct effective Internet searches for legal information, focusing on advanced Google tools but also covering Bing, Wolfram|Alpha, the Legal Research Engine at Cornell, DocStoc, and Scribd.

    Establishing Secondary Liability with a Higher Degree of Culpability: Redefining Chinese Internet Copyright Law to Encourage Technology Development

    While enjoying the tremendous economic benefits the Internet has brought to the nation, China has been attempting to update its intellectual property law to address online copyright infringement. The current legal framework, which premises copyright liability upon direct infringement and a joint liability theory, has unfortunately produced considerable ambiguity both within the judiciary and in the affected industries. As recent cases show, the joint liability theory, combined with the broad scope of Chinese copyright law, has been particularly troublesome for China’s technology industry. Given China’s priority on technology innovation, its current copyright law sets too low a threshold for liability on the part of Internet service and technology providers. To better facilitate its national technology development strategy, Chinese copyright law should redefine the balance between copyright protection and the encouragement of technology innovation. It needs to establish safe harbors that shield technology providers from the broad statutory rights enjoyed by copyright holders. More importantly, a secondary liability theory that requires a higher-than-negligence degree of culpability would provide a better legal platform for online copyright adjudication.