
    Application-Centered Internet Analysis

    There is a now-standard debate about law and the Internet. One side asserts that the Internet is so new and different that it calls for new legal approaches, even its own sovereign law. The other side argues that, although it is a new technology, the Internet nonetheless presents familiar legal problems. It is a battle of analogies: one side refers to Cyberspace as a place, while the other essentially equates the Internet and the telephone. In my view, these two positions are both wrong and right: wrong in their characterization of the Internet as a whole, yet potentially right about particular ways of using the Internet. The real problem is that both sides (and indeed, most legal writing) rely on a singular model of the Internet. They take one way of using the Internet as a proxy for the whole thing and conclude that the Internet is this or the Internet is that.

    Proceedings of the 2011 Great Lakes Connections Conference : Discourse & Illumination, May 20-21, 2011, School of Information Studies, University of Wisconsin-Milwaukee

    The 2011 Great Lakes Connections Conference was a conference for all Library and Information Science (LIS) doctoral students and candidates. It was a student-focused conference intended to provide an opportunity for LIS doctoral students to share and exchange ideas and research. The conference was open to all LIS doctoral students and included both works in progress and full papers. The accepted papers and works in progress were selected through a double-blind review process.

    Privacy and Security in the Cloud: Some Realism About Technical Solutions to Transnational Surveillance in the Post-Snowden Era

    Since June 2013, the leak of thousands of classified documents regarding highly sensitive U.S. surveillance activities by former National Security Agency (NSA) contractor Edward Snowden has greatly intensified discussions of privacy, trust, and freedom in relation to the use of global computing and communication services. This is happening during a period of ongoing transition to cloud computing services by organizations, businesses, and individuals. There has always been a question inherent in this transition: are cloud services sufficiently able to guarantee the security of their customers’ data as well as the proper restrictions on access by third parties, including governments? While worries over government access to data in the cloud are a predominant part of the ongoing debate over the use of cloud services, the Snowden revelations highlight that intelligence agency operations pose a unique threat to the ability of services to keep their customers’ data out of the hands of domestic as well as foreign governments. The search for a proper response is ongoing, from the perspective of market players, governments, and civil society. At the technical and organizational level, industry players are responding with the wider and more sophisticated deployment of encryption as well as a new emphasis on the use of privacy enhancing technologies and innovative architectures for securing their services. These responses are the focus of this Article, which contributes to the discussion of transnational surveillance by looking at the interaction between the relevant legal frameworks on the one hand, and the possible technical and organizational responses of cloud service providers to such surveillance on the other. While the Article’s aim is to contribute to the debate about government surveillance with respect to cloud services in particular, much of the discussion is relevant for Internet services more broadly.
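
    The encryption-based responses discussed above can be illustrated with a minimal sketch of client-side encryption, in which data is encrypted before it ever reaches a cloud provider, so the provider (and any party compelling access to it) holds only ciphertext. The sketch below uses Python's cryptography package; the upload_to_cloud function is a hypothetical placeholder standing in for any provider's upload API, and the example illustrates the general technique rather than the practice of any particular service.

        # Minimal sketch: encrypt data locally before handing it to a cloud provider.
        # upload_to_cloud is a hypothetical placeholder, not a real provider API.
        from cryptography.fernet import Fernet

        def upload_to_cloud(name: str, blob: bytes) -> None:
            # Placeholder: a real implementation would call the provider's API here.
            print(f"uploading {len(blob)} bytes as {name!r}")

        # The key is generated and kept by the client; the provider never sees it.
        key = Fernet.generate_key()
        cipher = Fernet(key)

        document = b"records the provider should not be able to read"
        ciphertext = cipher.encrypt(document)   # only this ever leaves the client
        upload_to_cloud("records.enc", ciphertext)

        # Only the key holder can recover the plaintext.
        assert cipher.decrypt(ciphertext) == document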

    Conundrum

    Using Graphic Turing Tests To Counter Automated DDoS Attacks Against Web Servers

    We present WebSOS, a novel overlay-based architecture that provides guaranteed access to a web server that is targeted by a denial of service (DoS) attack. Our approach exploits two key characteristics of the web environment: its design around a human-centric interface, and the extensibility inherent in many browsers through downloadable "applets." We guarantee access to a web server for a large number of previously unknown users, without requiring pre-existing trust relationships between users and the system. Our prototype requires no modifications to either servers or browsers, and makes use of graphical Turing tests, web proxies, and client authentication using the SSL/TLS protocol, all readily supported by modern browsers. We use the WebSOS prototype to conduct a performance evaluation over the Internet using PlanetLab, a testbed for experimentation with network overlays. We determine the end-to-end latency using both a Chord-based approach and our shortcut extension. Our evaluation shows that latency increases by factors of 7 and 2, respectively, confirming our simulation results.
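
    The access-control idea behind such an architecture can be sketched as a small token gate: a user who solves the graphical Turing test is issued a short-lived signed token, and the overlay forwards only requests that carry a valid token, so automated attack traffic never reaches the protected server. The token format, lifetime, and shared secret below are illustrative assumptions of this sketch, not the WebSOS protocol itself.

        # Sketch: issue a short-lived HMAC-signed token after a graphical Turing
        # test is solved; overlay nodes forward only requests with a valid token.
        import hashlib
        import hmac
        import time

        SECRET = b"overlay-shared-secret"   # assumed shared among overlay nodes
        TOKEN_LIFETIME = 300                # seconds of access per solved test

        def issue_token(client_id: str) -> str:
            expiry = str(int(time.time()) + TOKEN_LIFETIME)
            msg = f"{client_id}:{expiry}".encode()
            sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
            return f"{client_id}:{expiry}:{sig}"

        def token_is_valid(token: str) -> bool:
            try:
                client_id, expiry, sig = token.split(":")
            except ValueError:
                return False
            msg = f"{client_id}:{expiry}".encode()
            expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
            return hmac.compare_digest(sig, expected) and int(expiry) > time.time()

        # A client that passes the test gets a token; requests without one are dropped.
        token = issue_token("client-42")
        assert token_is_valid(token)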

    An Economic Analysis of Domain Name Policy

    One of the most important features of the architecture of the Internet is the Domain Name System (DNS), which is administered by the Internet Corporation for Assigned Names and Numbers (ICANN). Logically, the DNS is organized into Top Level Domains (such as .com), Second Level Domains (such as amazon.com), and third, fourth, and higher level domains (such as www.amazon.com). The physical infrastructure of the DNS consists of name servers, including the Root Server System which provides the information that directs name queries for each Top Level Domain to the appropriate server. ICANN is responsible for the allocation of the root and the creation or reallocation of Top Level Domains. The Root Server System and associated name space are scarce resources in the economic sense. The root servers have a finite capacity and expansion of the system is costly. The name space is scarce, because each string (or set of characters) can only be allocated to one Registry (or operator of a Top Level Domain). In addition, name service is not a public good in the economic sense, because it is possible to exclude strings from the DNS and because the allocation of a string to one firm results in the inability of other firms to use that name string. From the economic perspective, therefore, the question arises: what is the most efficient method for allocating the root resource? There are only five basic options available for allocation of the root: (1) a static root, equivalent to a decision to waste the currently unallocated capacity; (2) public interest hearings (or beauty contests); (3) lotteries; (4) a queuing mechanism; or (5) an auction. The fundamental economic question about the Domain Name System is which of these provides the most efficient mechanism for allocating the root resource. This resource allocation problem is analogous to problems raised in the telecommunications sector, where the Federal Communications Commission has a long history of attempting to allocate broadcast spectrum and the telephone number space. This experience reveals that a case-by-case allocation on the basis of ad hoc judgments about the public interest is doomed to failure, and that auctions (as opposed to lotteries or queues) provide the best mechanism for ensuring that such public-trust resources find their highest and best use. Based on the telecommunications experience, the best method for ICANN to allocate new Top Level Domains would be to conduct an auction. Many auction designs are possible. One proposal is to auction a fixed number of new Top Level Domain slots each year. This proposal would both expand the root resource at a reasonable pace and ensure that the slots went to their highest and best use. Public interest Top Level Domains could be allocated by another mechanism such as a lottery, and their costs to ICANN could be subsidized by the proceeds of the auction.
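
    One possible auction design of the kind discussed above, in which a fixed number of new Top Level Domain slots is sold per round to the highest bidders, can be sketched as follows. The slot count, the bid figures, and the uniform-price rule (winners pay the highest losing bid) are illustrative assumptions of this sketch, not the article's concrete proposal.

        # Sketch: allocate a fixed number of new TLD slots per round to the
        # highest sealed bids. Slot count, bids, and pricing rule are illustrative.
        SLOTS_PER_ROUND = 3

        def allocate_slots(bids: dict[str, float], slots: int = SLOTS_PER_ROUND):
            """Return (winners, clearing_price) for one auction round."""
            ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
            winners = [registry for registry, _ in ranked[:slots]]
            if len(ranked) > slots:
                clearing_price = ranked[slots][1]    # highest losing bid
            else:
                clearing_price = min(bids.values())  # no losers: pay the lowest bid
            return winners, clearing_price

        bids = {".shop": 4.0, ".blog": 2.5, ".bank": 6.0, ".web": 3.1, ".art": 1.2}
        winners, price = allocate_slots(bids)
        print(winners, price)   # ['.bank', '.shop', '.web'] 2.5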

    Bandwidth management and monitoring for IP network traffic : an investigation

    Bandwidth management is a topic which is often discussed, but on which relatively little work has been done with regard to compiling a comprehensive set of techniques and methods for managing traffic on a network. What work has been done has concentrated on higher-end networks, rather than the low-bandwidth links which are commonly available in South Africa and other areas outside the United States. With more organisations increasingly making use of the Internet on a daily basis, the demand for bandwidth is outstripping the ability of providers to upgrade their infrastructure. This resource is therefore in need of management. In addition, for Internet access to become economically viable for widespread use by schools, NGOs and other academic institutions, the associated costs need to be controlled. Bandwidth management not only impacts on direct cost control, but encompasses the process of engineering a network and network resources in order to ensure the provision of as optimal a service as possible. Included in this is the provision of user education. Software has been developed for the implementation of traffic quotas, dynamic firewalling and visualisation. The research investigates various methods for monitoring and management of IP traffic with particular applicability to low-bandwidth links. Several forms of visualisation for the analysis of historical and near-realtime traffic data are also discussed, including the use of three-dimensional landscapes. A number of bandwidth management practices are proposed, and the advantages of their combination and complementary use are highlighted. By implementing these suggested policies, a holistic approach can be taken to the issue of bandwidth management on Internet links.
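
    The traffic-quota idea described above is commonly implemented as a token bucket, which caps a user's average rate while still allowing short bursts, a useful property on low-bandwidth links. The sketch below is a generic illustration of that mechanism rather than the software developed in this work, and the rate and burst figures are arbitrary assumptions.

        # Sketch: a token-bucket quota that caps average bandwidth while allowing
        # short bursts. Rate and burst values are arbitrary illustrative figures.
        import time

        class TokenBucket:
            def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
                self.rate = rate_bytes_per_s    # long-term average allowance
                self.capacity = burst_bytes     # maximum burst size
                self.tokens = burst_bytes
                self.last = time.monotonic()

            def allow(self, packet_bytes: int) -> bool:
                now = time.monotonic()
                # Refill tokens for the elapsed time, up to the burst cap.
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if packet_bytes <= self.tokens:
                    self.tokens -= packet_bytes
                    return True                 # forward the packet
                return False                    # quota exceeded: drop or delay

        # e.g. a 16 kB/s per-user quota with a 64 kB burst allowance
        bucket = TokenBucket(rate_bytes_per_s=16_000, burst_bytes=64_000)
        print(bucket.allow(1500))   # True: a 1500-byte packet fits within the burst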