
    The Role of Switching Costs in Antitrust Analysis: A Comparison of Microsoft and Google

    Recently there has been a chorus of competition complaints asserting that Google's conduct and position today parallel Microsoft's position in the “Microsoft case,” the antitrust case brought by the Department of Justice in 1998. Any monopolization case against Google Search would have to be very different from the Microsoft browser case, because the cost to a user of switching away from Google Search is much lower than the cost, in the 1990s or today, of switching away from the Microsoft operating system. It would likewise need to be different because Google has not attempted to manipulate the cost of switching away from Google Search, at least not to a significant degree. Low switching costs should, and likely will, have important implications for antitrust analysis of Google.

    The Internet Ecosystem: The Potential for Discrimination

    Symposium: Rough Consensus and Running Code: Integrating Engineering Principles into Internet Policy Debates, held at the University of Pennsylvania's Center for Technology Innovation and Competition on May 6-7, 2010. This Article explores how the emerging Internet architecture of cloud computing, content distribution networks, private peering, and data-center services can simultaneously foster a perception of unfair network access while at the same time enabling significant competition for services, content, and innovation. A key enabler of these changes is the emergence of technologies that lower the barrier to entry for developing and deploying new services. Another is the design of successful Internet applications, which already accommodate the variation in service afforded by the current Internet. Regulators should be aware of the potential for anti-competitive practices in this broader Internet Ecosystem, but should carefully consider the effects of regulation on that ecosystem.

    Web Search Engines and the Need for Complex Information

    The electronic version of the dissertation does not include the publications. Web search engines have become the primary means of obtaining information from the Internet. Along with the growing popularity of search engines, their area of use has grown from simple queries to the need to search for rather complex information. At the same time, academic interest in search has begun to move from the analysis of simple queries towards considerably more complex activities that also span longer time frames. Current search tools do not support such activities as well as they support simple queries. This holds especially for support for aggregating the results of several queries, synthesizing the results of different simple searches into a single new document. Such an approach is still in its early stages, which motivates researchers to develop tools to support such information seeking tasks. This dissertation presents a series of research results aimed at improving support for complex searches with present-day search engines. The sub-goals were: (a) to develop a model of complex searches, (b) to create metrics for the complex search model, (c) to distinguish complex search tasks from simple ones and to determine whether they can be measured, while also finding simple metrics to describe their complexity, (d) to analyze how differently users behave when performing complex search tasks with Web search engines, (e) to study the correlation between people's ordinary Web usage habits and their search performance, (f) to examine how well people estimate in advance the difficulty of a search task and the effort it requires, and (g) to determine the effect of gender and age on search performance. Complex Web search tasks are successfully decomposed into a three-step process. A model of this process is presented; the process can also be measured. Further, the innate characteristics of complex search that distinguish it from simpler cases are shown, and an experimental method for carrying out complex search user studies is presented. The main steps in applying the Search-Logger framework (the technical implementation of the aforementioned methodology) in user studies are demonstrated, and the results of studies carried out in this way are presented. Finally, the realization and application of the ATMS method for improving support for complex search needs in modern search engines are presented.
    Search engines have become the primary means of searching for information on the Internet. Along with the increasing popularity of these search tools, the areas of their application have grown from simple look-up to rather complex information needs. Academic interest in search has likewise started to shift from analyzing simple query and response patterns to examining more sophisticated activities covering longer time spans. Current search tools do not support those activities as well as they do simple look-up tasks. In particular, support for aggregating search results from multiple queries, taking into account discoveries made along the way and synthesizing them into a newly compiled document, is still in its beginnings, which motivates researchers to develop new tools for supporting such information seeking tasks. In this dissertation I present the results of empirical research with a focus on evaluating search engines and developing a theoretical model of the complex search process that can be used to better support this special kind of search with existing search tools. It is not the goal of the thesis to implement a new search technology.
    Therefore, performance benchmarks against established systems such as question answering systems are not part of this thesis. I present a model that decomposes complex Web search tasks into a measurable, three-step process. I show the innate characteristics of complex search tasks that make them distinguishable from their less complex counterparts and showcase an experimentation method for carrying out complex-search-related user studies. I demonstrate the main steps taken during the development and implementation of the Search-Logger study framework (the technical manifestation of the aforementioned method) to carry out search user studies. I present the results of user studies carried out with this approach. Finally, I present the development and application of the ATMS (awareness-task-monitor-share) model to improve support for complex search needs in current Web search engines.
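
    As an illustration of the kind of tool support argued for above, the following minimal Python sketch (my own, not taken from the dissertation; all class and function names are hypothetical) logs the individual queries of one complex search task and synthesizes the results the user kept into a single compiled document, in the spirit of the Search-Logger idea.

    # A minimal sketch (not code from the dissertation; every name below is hypothetical):
    # logging the queries of one complex search task and aggregating the saved results.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class SearchEvent:
        """One look-up step: a query and the results the user chose to keep."""
        query: str
        saved_results: List[str]
        timestamp: datetime = field(default_factory=datetime.now)

    @dataclass
    class ComplexSearchTask:
        """A multi-query search task whose findings are synthesized into one document."""
        description: str
        events: List[SearchEvent] = field(default_factory=list)

        def log(self, query: str, saved_results: List[str]) -> None:
            # record one query together with the results the user marked as relevant
            self.events.append(SearchEvent(query, saved_results))

        def compile_document(self) -> str:
            # aggregate the saved results of all queries into a single text document
            lines = [f"Task: {self.description}"]
            for event in self.events:
                lines.append(f"- {event.query}: " + ", ".join(event.saved_results))
            return "\n".join(lines)

    task = ComplexSearchTask("plan a research stay abroad")
    task.log("visa requirements estonia", ["https://example.org/visa"])
    task.log("tartu housing for students", ["https://example.org/housing"])
    print(task.compile_document())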

    A Model for Managing Information Flow on the World Wide Web

    This thesis considers the nature of information management on the World Wide Web. The web has evolved into a global information system that is completely unregulated, permitting anyone to publish whatever information they wish. However, this information is almost entirely unmanaged, which, together with the enormous number of users who access it, places immense strain on the web's architecture. This has led to the exposure of inherent flaws, which reduce its effectiveness as an information system. The thesis presents a thorough analysis of the state of this architecture and identifies three flaws that could render the web unusable: link rot; a shrinking namespace; and the inevitable increase of noise in the system. A critical examination of existing solutions to these flaws is provided, together with a discussion of why those solutions have not been deployed or adopted. The thesis determines that they have failed to take into account the nature of the information flow between information provider and consumer, or the open philosophy of the web. The overall aim of the research has therefore been to design a new solution to these flaws in the web, based on a greater understanding of the nature of the information that flows upon it. The realization of this objective has included the development of a new model for managing information flow on the web, which is used to develop a solution to the flaws. The solution comprises three new additions to the web's architecture: a temporal referencing scheme; an Oracle Server Network for more effective web browsing; and a Resource Locator Service, which provides automatic, transparent resource migration. The thesis describes their design and operation, and presents the concept of the Request Router, which provides a new way of integrating such distributed systems into the web's existing architecture without breaking it. The design of the Resource Locator Service, including the development of new protocols for resource migration, is covered in great detail, and a prototype system that has been developed to prove the effectiveness of the design is presented. The design is further validated by comprehensive performance measurements of the prototype, which show that it will scale to manage a web whose size is orders of magnitude greater than it is today.
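
    To make the idea of transparent resource migration concrete, here is a small Python sketch (my own illustration, not the thesis's actual protocol; all identifiers are hypothetical) of a locator service that keeps a mapping from stable resource identifiers to current URLs, so a Request-Router-style front end can keep resolving old links after a resource moves.

    # Hypothetical sketch only: identifiers and behaviour are assumptions, not the
    # thesis's actual Resource Locator Service protocol.
    from typing import Dict, Optional

    class ResourceLocatorService:
        """Maps stable resource identifiers to their current locations."""

        def __init__(self) -> None:
            self._locations: Dict[str, str] = {}  # stable id -> current URL

        def register(self, resource_id: str, url: str) -> None:
            # publish the initial location of a resource
            self._locations[resource_id] = url

        def migrate(self, resource_id: str, new_url: str) -> None:
            # record that a resource has moved; existing links keep resolving
            self._locations[resource_id] = new_url

        def resolve(self, resource_id: str) -> Optional[str]:
            # return the current URL, or None if the resource is unknown
            return self._locations.get(resource_id)

    rls = ResourceLocatorService()
    rls.register("doc-42", "http://host-a.example/papers/doc-42.html")
    rls.migrate("doc-42", "http://host-b.example/archive/doc-42.html")
    print(rls.resolve("doc-42"))  # the old identifier still resolves after migration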

    Internet Marketing for Profit Organizations: A framework for the implementation of strategic internet marketing

    The development of the Internet has significantly changed the face of established markets and operating approaches across a tremendous spectrum of industries. Within the competitive environment of those industries, the opportunities and risks derived from the new platform are so ubiquitous that unused opportunities quickly translate into potential risks. Those opportunities and risks demand a structured approach to implementing a sustainable Internet marketing strategy that targets clear business objectives. Marketing and strategic management theory describes clear structural principles for operational implementation. Based on those principles, an extensive literature review was conducted, which confirms the finding of representative statistics that a comprehensive framework for strategic Internet marketing is lacking. The distinct result of this research is such a comprehensive framework, derived directly from the illustrated principles of strategic management and Internet marketing. All major components of this generic framework are designed, evaluated in dedicated surveys and validated in extensive case studies. The main achievements of the research are:
    • A comprehensive review of current state-of-the-art Internet marketing strategies
    • Conceptual specification of a strategic Internet marketing framework with generic applicability to profit organizations
    • Demonstration of the practical feasibility of the proposed framework at the implementation level (via several examples such as the SIMTF and SIMPF)
    • Confirmation of the applicability of the framework based upon a survey of potential beneficiaries
    • Validation of the effectiveness of the approach via case study scenarios
    Changing the understanding of a formerly technical discipline, the thesis describes how Internet marketing becomes a precise strategic instrument for profit organizations. The new structured, complete and self-similar framework enables sales organizations to significantly increase the effectiveness and efficiency of their marketing operations. Furthermore, the framework ensures a high level of transparency about the impact and benefit of individual activities. The new model explicitly answers concerns and problems raised and documented in existing research and accommodates the current limitations of strategic Internet marketing. The framework allows evaluating existing as well as future Internet marketing tactics and provides a reference model for all other definitions of objectives, KPIs and work packages. Finally, this thesis also matures the subject of Internet marketing as a discipline of independent scientific research, providing an underlying structure for subsequent studies.

    Internet search techniques: using word count, links and directory structure as internet search tools

    A thesis submitted for the degree of Doctor of Philosophy of the University of Luton. As the Web grows in size it becomes increasingly important that ways are developed to maximise the efficiency of the search process and to index its contents with minimal human intervention. An evaluation is undertaken of current popular search engines, which use a centralised index approach. Using a number of search terms and metrics that measure similarity between sets of results, it was found that there is very little commonality between the outcomes of the same search performed using different search engines. A semi-automated system for searching the web is presented, the Internet Search Agent (ISA), which employs a method for indexing based upon the idea of "fingerprint types". These fingerprint types are based upon the text and links contained in the web pages being indexed. Three examples of fingerprint type are developed: the first concentrates upon the textual content of the indexed files, while the other two augment this with the use of links to and from those files. By examining the results returned as a search progresses, in terms of the number of results and measures of their content for the effort expended, comparisons can be made between the three fingerprint types. The ISA model allows the searcher to be presented with results in context and potentially allows distributed searching to be implemented.
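
    As a rough illustration of a purely textual fingerprint and its link-augmented variant (my own sketch under assumptions; this is not the ISA's actual indexing method), the following Python snippet builds a simple fingerprint from a page's word counts and its outgoing links.

    # Hypothetical sketch only: not the ISA's actual indexing algorithm.
    import re
    from collections import Counter
    from typing import Dict, List, NamedTuple

    class Fingerprint(NamedTuple):
        word_counts: Dict[str, int]  # textual component of the fingerprint
        out_links: List[str]         # link component of the fingerprint

    def fingerprint(html_text: str) -> Fingerprint:
        # crude link extraction; a real indexer would use a proper HTML parser
        links = re.findall(r'href="([^"]+)"', html_text)
        text = re.sub(r"<[^>]+>", " ", html_text)  # strip markup, keep visible text
        words = re.findall(r"[a-z]+", text.lower())
        return Fingerprint(dict(Counter(words)), links)

    page = '<p>search <a href="http://example.org/a">agents</a> index the web</p>'
    print(fingerprint(page))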

    Technological Impediments to B2C Electronic Commerce: An Update

    In 1999, Rose et al. identified six categories of technological impediments inhibiting the growth of electronic commerce: (1) download delays, (2) interface limitations, (3) search problems, (4) inadequate measures of Web application success, (5) security, and (6) a lack of Internet standards. This paper updates the findings of the original paper by surveying the practitioner literature for the five-year period from June 1999 to June 2004. We identify how advances in technology both partially resolve concerns with the original technological impediments and inhibit their full resolution. We find that, despite five years of technological progress, the six categories of technological impediments remain relevant. Furthermore, the maturation of e-Commerce has increased the Internet's complexity, making these impediments harder to address. Two kinds of complexity are especially relevant: evolutionary complexity and skill complexity. Evolutionary complexity refers to the need to preserve the existing Internet and resolve impediments simultaneously. Unfortunately, because the Internet consists of multiple incompatible technologies, philosophies, and attitudes, additions to the Internet infrastructure are difficult to integrate. Skill complexity refers to the skill sets necessary for managing e-Commerce change. As the Internet evolves, more skills become relevant. Unfortunately, individuals, companies and organizations are unable to master and integrate all necessary skills. As a result, new features added to the Internet do not consider all relevant factors and are thus sub-optimal.

    The XPSL Query component: a framework for pattern searches in code

    This thesis describes the tool support for the query component of the eXtensible Pattern Specification Language (XPSL). The XPSL framework is part of the Knowledge-Centric Software (KCS) platform of tools for software analysis and transformation. XPSL provides a language for the specification of patterns, but currently there is no tool support for executing the analysis and transformation patterns specified through XPSL. The objective of this research is to provide tool support for analysis. The tool views an analysis task as a query that can be executed to produce the appropriate results, and the goal is a tool that is extensible and easily maintainable. This thesis outlines the framework design of the query component of XPSL, which is presented as a library of basic queries on patterns in code, together with a composition mechanism for writing queries of greater sophistication. The tool is implemented as a translator that takes an XPSL specification as input and converts it into an equivalent query in a target language of choice. We consider XQuery and XSLT as possible target languages, discuss their comparative merits and demerits, and explain why XQuery is the preferable choice. The pattern search is then performed by an XQuery engine. The translation mechanism precisely defines the semantics of query execution and fixes the data formats and technologies used at each of its stages; these are discussed in the thesis. We also conduct an empirical study of the efficacy and efficiency of the approach. Some of the executed queries demonstrate that queries composed in XPSL and executed using the tool can go beyond what is possible in current aspect-oriented languages. We discuss the applicability of the tool to various software engineering paradigms, explore future extensions to the querying mechanism, and discuss the issues that may arise in adding a transformation component to the current framework.
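
    To illustrate the general shape of such a translator (a hypothetical sketch only; the element names and query templates below are assumptions, not XPSL's or the tool's actual syntax), the following Python snippet maps a named basic query from a small library onto an XQuery expression over an XML representation of source code; the generated query string would then be handed to an XQuery engine.

    # Hypothetical sketch only: element names and templates are assumptions,
    # not XPSL's or the tool's actual syntax.
    from typing import Dict

    # a tiny "library of basic queries": pattern name -> XQuery template
    BASIC_QUERIES: Dict[str, str] = {
        "calls-to": 'for $c in //call[name = "{arg}"] return $c',
        "methods-named": 'for $m in //method[name = "{arg}"] return $m',
    }

    def translate(pattern: str, arg: str) -> str:
        # translate one named basic query into an XQuery string
        return BASIC_QUERIES[pattern].format(arg=arg)

    # the generated query string would then be executed by an XQuery engine
    print(translate("calls-to", "malloc"))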