
    The Economics of Open Source Hijacking and Declining Quality of Digital Information Resources: A Case for Copyleft

    The economics of information goods suggests the need for institutional intervention to address the problem of extracting revenue from investments in resources characterized by high fixed costs of production and low marginal costs of reproduction and distribution. Solutions to the appropriation problem, such as copyright, are supposed to guarantee an incentive for innovative activity, at the price of a few vices marring their rationale. In the case of digital information resources, apart from conventional inefficiencies, copyright shows an extra vice, since it might be used perversely as a tool to hijack and privatise collectively provided open source and open content knowledge assemblages. Whilst the impact of hijacking on open source software development may be uncertain or uneven, some risks are clear in the case of open content works. The paper presents evidence of the malicious effects of hijacking in the Internet search market by discussing the case of The Open Directory Project. Furthermore, it calls for wider use of novel institutional remedies such as copyleft and Creative Commons licensing, built upon the paradigm of copyright customisation.
    Keywords: economics of information and knowledge, intellectual property rights, copyright, copyleft, public domain, open source, open content, hijacking, customisation, Creative Commons, DMOZ, search engine, directory.

    Federating Heterogeneous Digital Libraries by Metadata Harvesting

    This dissertation studies the challenges and issues faced in federating heterogeneous digital libraries (DLs) by metadata harvesting. The objective of federation is to provide high-level services (e.g. transparent search across all DLs) on the collective metadata from different digital libraries. There are two main approaches to federating DLs: the distributed searching approach and the harvesting approach. As the distributed searching approach relies on executing queries against digital libraries in real time, it has problems with scalability. The difficulty of creating a distributed searching service for a large federation is the motivation behind the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). OAI-PMH supports both data providers (repositories, archives) and service providers. Service providers develop value-added services based on the information collected from data providers. Data providers are simply collections of harvestable metadata. This dissertation examines the application of the metadata harvesting approach in DL federations. It addresses the following problems: (1) Whether or not metadata harvesting provides a realistic and scalable solution for DL federation. (2) What is the status of and problems with current data provider implementations, and how to solve these problems. (3) How to synchronize data providers and service providers. (4) How to build different types of federation services over harvested metadata. (5) How to create a scalable and reliable infrastructure to support federation services. The work done in this dissertation is based on OAI-PMH, and the results have influenced the evolution of OAI-PMH. However, the results are not limited to the scope of OAI-PMH. Our approach is to design and build key services for metadata harvesting and to deploy them on the Web. Implementing a publicly available service allows us to demonstrate that these approaches are practical.
The problems posed above are evaluated by performing experiments over these services. To summarize the results of this thesis, we conclude that the metadata harvesting approach is a realistic and scalable approach to federating heterogeneous DLs. We present two models of building federation services: a centralized model and a replicated model. Our experiments also demonstrate that the repository synchronization problem can be addressed by push, pull, and hybrid push/pull models; each model has its strengths and weaknesses and fits a specific scenario. Finally, we present a scalable and reliable infrastructure to support the applications of metadata harvesting.
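The incremental, paged nature of OAI-PMH harvesting described in this abstract can be sketched in a few lines. This is an illustrative sketch, not the dissertation's implementation: the sample XML and the endpoint URL are made up, and only the ListRecords request/resumption-token loop defined by the OAI-PMH specification is shown.

```python
# Minimal sketch of the OAI-PMH ListRecords harvesting cycle.
# The base URL and the sample response below are illustrative assumptions.
import urllib.parse
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def build_request(base_url, token=None):
    """Build a ListRecords request URL. Per the OAI-PMH spec, after the
    first page only the resumptionToken is passed back to the repository."""
    if token:
        params = {"verb": "ListRecords", "resumptionToken": token}
    else:
        params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    return base_url + "?" + urllib.parse.urlencode(params)

def parse_page(xml_text):
    """Extract record identifiers and the resumption token (if any)
    from one ListRecords response page."""
    root = ET.fromstring(xml_text)
    ids = [h.findtext(OAI_NS + "identifier")
           for h in root.iter(OAI_NS + "header")]
    token = root.findtext(".//" + OAI_NS + "resumptionToken")
    return ids, token or None

# A fabricated one-page response, standing in for a data provider.
sample = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><header><identifier>oai:example.org:1</identifier></header></record>
    <record><header><identifier>oai:example.org:2</identifier></header></record>
    <resumptionToken>page2</resumptionToken>
  </ListRecords>
</OAI-PMH>"""

ids, token = parse_page(sample)
print(ids, token)
```

A real harvester would loop, fetching `build_request(base, token)` until `parse_page` returns no token; that stateless paging is what makes the harvesting approach scale better than distributed real-time search.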

    Globalization and E-Commerce VII: Environment and Policy in the U.S.

    The United States is a global leader in both Business-to-Consumer (B2C) and Business-to-Business (B2B) electronic commerce. This leadership comes in part from historical US strengths in information technology, telecommunications, financial services, and transportation - all of which are essential enabling components of e-commerce. The size and strength of the US economy, the wealth of its consumer base, and the relatively open access to venture capital create an attractive environment for e-commerce investment. Official US Government policy toward e-commerce is to let the private sector take the lead, with government helping to make the business climate right for innovation and investment. Prior US Government investments in essential e-commerce infrastructure for military purposes (e.g., digital computing, the Internet) and for civilian purposes (e.g., interstate highways, air transport) played an important role in the US lead in e-commerce. US Government policies favoring widespread economic liberalization since the 1970s in areas such as financial services, transportation, and telecommunications helped enable and stimulate private sector investment and innovation in e-commerce. The collapse of the dot-com era in the late 1990s hit key sectors of e-commerce hard, suggesting that some of the more dramatic and positive predictions of e-commerce growth and impact will either be delayed substantially or will not come to pass. The strength of surviving e-commerce companies (e.g., Amazon and eBay), as well as the relative stability of the technology sector (e.g., Cisco Systems, Dell, Intel, IBM) and the continued investment of large industry sectors (e.g., autos, finance), suggests that e-commerce is still growing and is here to stay. Consumers are intrigued by B2C e-commerce, and many have used such services, but serious concerns related to privacy and transaction security remain obstacles to universal adoption of B2C e-commerce.

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as domain experts who participated in the CHORUS Think-Tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and a socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines. From a socio-economic perspective, we take stock of the impact and legal consequences of these technical advances and point out future directions of research.

    Handling Information Overload on Usenet : Advanced Caching Methods for News

    Usenet is the name of a worldwide network of servers for group communication between people. From 1979 onwards, it has seen near exponential growth in the amount of data transported, which has been a strain on bandwidth and storage. There has been a wide range of academic research with a focus on the WWW, but Usenet has been neglected. Instead, Usenet's evolution has been dominated by practical solutions. This thesis describes the history of Usenet in a growth perspective, and introduces methods for collecting and analysing statistical data to test the usefulness of various caching strategies. A set of different caching strategies is proposed and examined in light of bandwidth and storage demands as well as user-perceived performance. I have shown that advanced caching methods for news offer relief for reading servers' storage and bandwidth capacity by exploiting usage patterns to fetch or prefetch articles the users may want to read, but they will not solve the problem of near exponential growth nor the problems of Usenet's backbone peers.
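The combination of eviction and read-ahead prefetching that such caching strategies rest on can be sketched as follows. This is a generic illustration, not the thesis's actual strategies: the class and the sequential-read-ahead policy are assumptions chosen to show the idea.

```python
# Illustrative sketch: an LRU article cache with simple read-ahead
# prefetching, exploiting the fact that news readers tend to read
# articles of a group in sequence. Names and policy are hypothetical.
from collections import OrderedDict

class ArticleCache:
    def __init__(self, fetch, capacity=1000, lookahead=2):
        self.fetch = fetch          # backend: (group, number) -> article body
        self.capacity = capacity
        self.lookahead = lookahead  # how many following articles to prefetch
        self.store = OrderedDict()  # (group, number) -> body, LRU order
        self.hits = self.misses = 0

    def _put(self, key, body):
        self.store[key] = body
        self.store.move_to_end(key)
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used

    def get(self, group, number):
        key = (group, number)
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)
            return self.store[key]
        self.misses += 1
        body = self.fetch(group, number)
        self._put(key, body)
        # Prefetch the next few articles, anticipating sequential reading.
        for n in range(number + 1, number + 1 + self.lookahead):
            self._put((group, n), self.fetch(group, n))
        return body

# Toy backend standing in for a feeder server.
backend_calls = []
def fake_fetch(group, number):
    backend_calls.append((group, number))
    return f"{group}:{number}"

cache = ArticleCache(fake_fetch, capacity=10, lookahead=2)
cache.get("comp.lang.c", 100)   # miss: fetches 100, prefetches 101 and 102
cache.get("comp.lang.c", 101)   # hit: served from cache, no backend traffic
```

The trade-off the abstract describes is visible here: prefetching spends backbone bandwidth speculatively to improve user-perceived latency on the reading server, which helps the reading side but does nothing to curb overall growth.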

    A Model for Managing Information Flow on the World Wide Web

    This thesis considers the nature of information management on the World Wide Web. The web has evolved into a global information system that is completely unregulated, permitting anyone to publish whatever information they wish. However, this information is almost entirely unmanaged, which, together with the enormous number of users who access it, places enormous strain on the web's architecture. This has led to the exposure of inherent flaws, which reduce its effectiveness as an information system. The thesis presents a thorough analysis of the state of this architecture, and identifies three flaws that could render the web unusable: link rot; a shrinking namespace; and the inevitable increase of noise in the system. A critical examination of existing solutions to these flaws is provided, together with a discussion on why the solutions have not been deployed or adopted. The thesis determines that they have failed to take into account the nature of the information flow between information provider and consumer, or the open philosophy of the web. The overall aim of the research has therefore been to design a new solution to these flaws in the web, based on a greater understanding of the nature of the information that flows upon it. The realization of this objective has included the development of a new model for managing information flow on the web, which is used to develop a solution to the flaws. The solution comprises three new additions to the web's architecture: a temporal referencing scheme; an Oracle Server Network for more effective web browsing; and a Resource Locator Service, which provides automatic transparent resource migration.
The thesis describes their design and operation, and presents the concept of the Request Router, which provides a new way of integrating such distributed systems into the web's existing architecture without breaking it. The design of the Resource Locator Service, including the development of new protocols for resource migration, is covered in great detail, and a prototype system that has been developed to prove the effectiveness of the design is presented. The design is further validated by comprehensive performance measurements of the prototype, which show that it will scale to manage a web whose size is orders of magnitude greater than it is today.
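The core idea of transparent resource migration - a locator service that keeps forwarding records so old URLs still resolve after a resource moves - can be sketched in miniature. This is a hypothetical illustration of the concept only; the class name, the forwarding-table design, and the hop limit are assumptions, not the thesis's actual protocols.

```python
# Hypothetical sketch of a resource locator service that resolves a URL
# through a chain of migration records, analogous to following redirects.
class ResourceLocatorService:
    def __init__(self):
        self.forwards = {}   # old URL -> new URL (one migration record each)

    def migrate(self, old, new):
        """Record that a resource has moved from `old` to `new`."""
        self.forwards[old] = new

    def resolve(self, url, max_hops=10):
        """Follow the chain of migration records to the current location.
        A hop limit guards against cyclic or runaway chains."""
        hops = 0
        while url in self.forwards:
            url = self.forwards[url]
            hops += 1
            if hops > max_hops:
                raise RuntimeError("migration chain too long or cyclic")
        return url

rls = ResourceLocatorService()
rls.migrate("http://old.example/p1", "http://mid.example/p1")
rls.migrate("http://mid.example/p1", "http://new.example/p1")
current = rls.resolve("http://old.example/p1")   # follows two hops
```

The point of interposing such a service (via something like the Request Router) is that clients keep using the original URL, so link rot is repaired without any change to the web's existing naming scheme.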

    Plan for an Aquatic Invasive Species Web Portal

    This report, prepared for NOAA, presents recommendations for an aquatic invasive species (AIS) central web portal and an analysis of existing AIS databases. We gathered data from federal agencies and private entities through interviews, questionnaires, and surveys. The information compiled from our research was used to determine and justify the details of our recommendations.

    Sept. 1999


    Economic Trends in Enterprise Search Solutions

    Enterprise search technology retrieves information within organizations. This data can be proprietary or public, and access to it may or may not be restricted. Enterprise search solutions render business processes more efficient, particularly in data-intensive companies. This technology is key to increasing the competitiveness of the digital economy; thus it constitutes a strategic market for the European Union. The Enterprise Search Solution (ESS) market was worth close to one billion USD in 2008 and is expected to grow faster than the overall market for information and knowledge management systems. Optimistic market forecasts expect market size to exceed 1,200 million USD by the end of 2010. Other market analyses see the growth rate slowing down and stabilizing at around 10% a year in 2010. Even in the least favourable case, enterprise search remains an attractive market, particularly because of the opportunities expected to arise from the convergence of ESS and information systems. This report looks at the demand and supply side of ESS and provides data about the market. It presents the evolution of market dynamics over the past decade and describes the current situation. Our main thesis is that ESS is currently placed at the point where two established markets, namely web search and the management of information systems, overlap. The report offers evidence that these two markets are converging and discusses the role of the different stakeholders (providers of web search engines, enterprise resource management tools, pure enterprise search tools, etc.) in this changing context.