
    Internet Filters: A Public Policy Report (Second edition; fully revised and updated)

    No sooner was the Internet upon us than anxiety arose over the ease of accessing pornography and other controversial content. In response, entrepreneurs soon developed filtering products. By the end of the decade, a new industry had emerged to create and market Internet filters... Yet filters were highly imprecise from the beginning. The sheer size of the Internet meant that identifying potentially offensive content had to be done mechanically, by matching "key" words and phrases; hence the blocking of Web sites for "Middlesex County," or of words such as "magna cum laude." Internet filters are crude and error-prone because they categorize expression without regard to its context, meaning, and value. Yet these sweeping censorship tools are now widely used in companies, homes, schools, and libraries. Internet filters remain a pressing public policy issue for all those concerned about free expression, education, culture, and democracy. This fully revised and updated report surveys tests and studies of Internet filtering products from the mid-1990s through 2006. It provides an essential resource for the ongoing debate.
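    The overblocking described above follows directly from context-free substring matching. A minimal Python sketch (the keyword list is a hypothetical stand-in for a commercial filter's word list) reproduces the "Middlesex County" and "magna cum laude" failures:

        # Naive context-free keyword filter; BLOCKED_KEYWORDS is a
        # hypothetical stand-in for a commercial filter's word list.
        BLOCKED_KEYWORDS = ["sex", "cum"]

        def is_blocked(text: str) -> bool:
            # Any occurrence of a keyword anywhere in the text triggers a
            # block, even inside harmless words and phrases.
            lowered = text.lower()
            return any(keyword in lowered for keyword in BLOCKED_KEYWORDS)

        for phrase in ["Middlesex County", "magna cum laude", "weather report"]:
            print(phrase, "->", "BLOCKED" if is_blocked(phrase) else "allowed")
        # "Middlesex County" is blocked because it contains "sex";
        # "magna cum laude" because it contains "cum".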

    Regulating Access to Adult Content (with Privacy Preservation)

    In the physical world we have well-established mechanisms for keeping children out of adult-only areas. In the virtual world these are generally replaced by self-declaration. Some service providers resort to heavyweight identification mechanisms, judging adulthood as a side effect thereof. Collecting identification data arguably constitutes an unwarranted privacy invasion in this context if carried out merely to estimate adulthood. This paper presents a mechanism that exploits adults' more extensive exposure to public media, relying on the likelihood that they will be able to recall details when cued by a carefully chosen picture. We conducted an online study to gauge the viability of this scheme. With our prototype we were able to predict that a user was a child 99% of the time. Unfortunately, the scheme also misclassified too many adults. We discuss our results and suggest directions for future research.
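    A minimal sketch of the cued-recall mechanism, assuming a hypothetical cue bank, answer format, and threshold (the study's actual cues and scoring are not reproduced here):

        # Each picture cue maps to details an adult is likely to recall;
        # both the cues and the threshold below are illustrative assumptions.
        CUES = {
            "moon_landing.jpg": {"armstrong", "apollo", "1969"},
            "berlin_wall.jpg": {"1989", "germany", "reunification"},
        }

        def recall_score(answers: dict) -> float:
            """Fraction of cues for which at least one correct detail was recalled."""
            hits = sum(1 for cue, details in CUES.items()
                       if answers.get(cue, set()) & details)
            return hits / len(CUES)

        def classify(answers: dict, threshold: float = 0.5) -> str:
            # Users below the threshold are classified as children; as the
            # study found, a strict threshold catches children reliably but
            # also misclassifies adults who cannot recall the cued details.
            return "adult" if recall_score(answers) >= threshold else "child"

        print(classify({"moon_landing.jpg": {"apollo"}}))  # -> adult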

    An investigation into the efficacy of URL content filtering systems

    Content filters are used to restrict minors from accessing online content deemed inappropriate. While much research and evaluation has been done on the efficiency of content filters, there is little empirical research on their efficacy. That minors access inappropriate material, and that content filtering systems can prevent this, is largely assumed with little or no evidence. This thesis investigates whether a content filter implemented with the stated aim of restricting specific Internet content from high school students achieved the goal of stopping students from accessing the identified material. The case is a high school in Western Australia, where the logs of a proxy content filter that captured all Internet traffic requested by students were examined to determine the efficacy of the content filter. Using text extraction and pattern matching techniques to look for evidence of access to restricted content, the results demonstrate that the belief that content filtering systems reliably prevent access to restricted content is misplaced: in this study there is direct evidence of circumvention of the content filter. This is a single case study in one school, so the results are not generalisable to all schools, or even to the subsequent systems that replaced the content filter examined here, but it does raise the question of whether these content filtering systems can restrict content from high school students. Further studies across multiple schools, covering more sophisticated circumvention methods, would be required to establish whether circumvention of content filters is a widespread issue.
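    As an illustration of the text extraction and pattern matching step, the sketch below scans squid-style proxy logs for restricted URL patterns. The log layout and the pattern list are assumptions for the example, not the thesis's actual rules:

        import re

        # Hypothetical restricted patterns; real deployments would use the
        # filter's own category lists.
        RESTRICTED = [re.compile(p, re.IGNORECASE)
                      for p in (r"proxy.?bypass", r"anonymi[sz]er", r"poker")]

        def flag_requests(log_lines):
            """Yield (line_no, url) for requests matching a restricted pattern."""
            for n, line in enumerate(log_lines, start=1):
                fields = line.split()
                if len(fields) < 7:
                    continue          # skip malformed lines
                url = fields[6]       # URL column in the default squid log layout
                if any(p.search(url) for p in RESTRICTED):
                    yield n, url

        sample = ["1696118400.123 45 10.0.0.5 TCP_MISS/200 2048 GET "
                  "http://example.com/anonymizer/home - DIRECT/93.184.216.34 text/html"]
        print(list(flag_requests(sample)))  # flags the anonymizer request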

    Collaborative Tagging and Taxonomy by Vector Space Approach

    Collaborative tagging, or group tagging, is tagging performed by a group of users, usually to support re-finding items. The flexibility of tagging allows users to classify their collections of items in ways they find useful, but the personalized variety of expressions can present challenges when searching and browsing. When users can freely choose tags (creating and applying public tags to online items rather than selecting terms from a controlled vocabulary), the resulting metadata can contain homonyms (the same tag used with different meanings) and synonyms (multiple tags for the same concept), which may lead to inappropriate connections between items and inefficient searches for information about a subject. Collaborative tagging therefore requires mechanisms that enable users to protect their privacy by hiding certain user-generated content without making it useless for the purposes for which it was provided in a given online service. This means that privacy-preserving mechanisms must not harmfully affect the truthfulness and usefulness of the service. The proposed approach protects user privacy to a certain degree by reducing the tags that would make a user profile reveal a bias toward certain categories of interest or feedback.
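    A minimal sketch of the vector space view of tag profiles, plus one simple privacy step (capping per-tag counts so no category dominates a profile). The data and the suppression rule are illustrative assumptions, not the paper's exact method:

        import math
        from collections import Counter

        def cosine(a: Counter, b: Counter) -> float:
            """Cosine similarity between two tag-frequency vectors."""
            dot = sum(a[t] * b[t] for t in a)
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        def suppress(profile: Counter, cap: int = 2) -> Counter:
            # Privacy step: cap each tag's count so the profile no longer
            # reveals a strong bias toward one category, while keeping
            # enough signal for searching and browsing.
            return Counter({t: min(c, cap) for t, c in profile.items()})

        user = Counter({"jazz": 9, "vinyl": 1, "python": 1})
        item = Counter({"jazz": 1, "blues": 1})
        print(cosine(user, item), cosine(suppress(user), item))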

    Filtering in Oz: Australia's Foray into Internet Censorship


    Data Management Challenges for Internet-scale 3D Search Engines

    This paper describes the most significant data-related challenges involved in building internet-scale 3D search engines. The discussion centers on the most pressing data management issues in this domain, including model acquisition, support for multiple file formats, asset versioning, data integrity errors, the data lifecycle, intellectual property, and the legality of web crawling. The paper also discusses numerous issues that fall under the rubric of trustworthy computing, including privacy, security, inappropriate content, and the copying/remixing of assets. The goal of the paper is to provide an overview of these general issues, illustrated by empirical data drawn from the internet's largest operational 3D search engine. While numerous works have been published on 3D information retrieval, this paper is the first to discuss the real-world challenges that arise in building practical search engines at scale. Comment: Second version, distributed by SIGIR Forum.
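    As a concrete illustration of the asset versioning and data integrity issues listed above, here is a minimal content-addressed store sketch; the structure is an assumption for the example, not the engine's actual design:

        import hashlib

        def content_id(data: bytes) -> str:
            """Stable identifier derived from the bytes of a 3D asset file."""
            return hashlib.sha256(data).hexdigest()

        class AssetStore:
            def __init__(self):
                self._blobs = {}      # content_id -> raw bytes
                self._versions = {}   # asset name -> list of content_ids

            def put(self, name: str, data: bytes) -> str:
                cid = content_id(data)
                self._blobs.setdefault(cid, data)       # dedupe identical uploads
                history = self._versions.setdefault(name, [])
                if not history or history[-1] != cid:   # new version only on change
                    history.append(cid)
                return cid

            def verify(self, cid: str) -> bool:
                # Integrity check: recompute the hash of the stored bytes.
                return content_id(self._blobs[cid]) == cid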

    Web page multi-label classification for filtering content from the web

    In this paper, we describe a simple approach to filtering unwanted web pages according to their content. The result of this work is a demo application usable both for real-time filtering and for non-real-time indexing of any given web pages. We describe the proposed technique step by step, discussing possible alternatives for each part. Finally, we discuss the overall quality and propose next steps that could lead to a fully usable business application.
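    A minimal sketch of a multi-label page classifier of this kind, built with scikit-learn on toy data (not the authors' implementation):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.preprocessing import MultiLabelBinarizer

        # Toy training data: each page carries zero or more category labels.
        pages = [
            "poker chips and betting odds",
            "celebrity gossip and red carpet photos",
            "online casino slots and gossip columns",
            "recipes for vegetable soup",
        ]
        labels = [["gambling"], ["tabloid"], ["gambling", "tabloid"], []]

        mlb = MultiLabelBinarizer()
        Y = mlb.fit_transform(labels)       # one binary column per label
        vec = TfidfVectorizer()
        X = vec.fit_transform(pages)

        # One independent binary classifier per label.
        clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)

        def page_labels(text: str) -> set:
            """Predicted label set; an empty set means the page is not filtered."""
            y = clf.predict(vec.transform([text]))
            return set(mlb.inverse_transform(y)[0])

        print(page_labels("casino betting odds tonight"))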

    Report of the Internet Content Governance Advisory Group


    Advanced quantum based neural network classifier and its application for objectionable web content filtering

    © 2013 IEEE. In this paper, an Advanced Quantum-based Neural Network Classifier (AQNN) is proposed and used to build an objectionable Web content filtering system (OWF). The aim is to design a neural network with a small number of hidden-layer neurons and optimal connection weights and neuron thresholds. The proposed algorithm uses concepts from quantum computing and genetic algorithms to evolve the connection weights and neuron thresholds. The qubit, the smallest unit of information in quantum computing, serves as a probabilistic representation. The algorithm also introduces a threshold boundary parameter to find the optimal value of the neuron threshold. The resulting neural network architecture forms an objectionable Web content filtering system that detects objectionable Web requests by the user. To judge the performance of the proposed AQNN, the contents of 2000 websites (1000 objectionable + 1000 non-objectionable) were used. The results of the AQNN are also compared with QNN-F and well-known classifiers such as backpropagation, support vector machines (SVM), multilayer perceptrons, decision tree algorithms, and artificial neural networks. The results show that the AQNN performs better than the existing classifiers. The performance of the proposed OWF is also compared with well-known objectionable Web filtering software and existing models; the proposed OWF performs better than existing solutions in terms of filtering objectionable content.
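    A minimal sketch of the qubit-style probabilistic encoding the abstract refers to, in the spirit of quantum-inspired evolutionary algorithms rather than the paper's exact AQNN update rules:

        import math
        import random

        class Qubit:
            """One bit of a candidate solution held as a probability amplitude."""

            def __init__(self):
                self.theta = math.pi / 4          # start with P(0) = P(1) = 0.5

            def observe(self) -> int:
                # Collapse: returns 1 with probability sin^2(theta).
                return 1 if random.random() < math.sin(self.theta) ** 2 else 0

            def rotate(self, toward_one: bool, step: float = 0.05 * math.pi):
                # Rotation-gate update: shift probability mass toward the bit
                # value observed in the best solution found so far.
                self.theta += step if toward_one else -step
                self.theta = min(max(self.theta, 0.0), math.pi / 2)

        # Observing a register yields one candidate bit string, which would be
        # decoded into connection weights and neuron thresholds for evaluation.
        register = [Qubit() for _ in range(8)]
        candidate = [q.observe() for q in register]
        print(candidate)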