5,317 research outputs found

    Active Learning Strategies for Technology Assisted Sensitivity Review

    Government documents must be reviewed to identify and protect any sensitive information, such as personal information, before they can be released to the public. However, in the era of digital government documents, such as e-mail, traditional sensitivity review procedures are no longer practical, for example due to the volume of documents to be reviewed. Therefore, there is a need for new technology-assisted review protocols that integrate automatic sensitivity classification into the sensitivity review process. Moreover, to effectively assist sensitivity review, such assistive technologies must incorporate reviewer feedback so that sensitivity classifiers can quickly learn and adapt to the sensitivities within a collection, when the types of sensitivity are not known a priori. In this work, we present a thorough evaluation of active learning strategies for sensitivity review. Furthermore, we present an active learning strategy that integrates reviewer feedback, from sensitive text annotations, to identify features of sensitivity that enable us to learn an effective sensitivity classifier (0.7 Balanced Accuracy) using significantly less reviewer effort, according to the sign test (p < 0.01). This approach results in a 51% reduction in the number of documents that must be reviewed to achieve the same level of classification accuracy, compared with deploying the approach without annotation features.
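
    For illustration, a minimal sketch of a pool-based active learning loop with uncertainty sampling, of the general kind evaluated in this line of work; it omits the paper's annotation features, and the dataset variables, feature extraction and batch size are assumptions, not the authors' implementation.

    # Illustrative pool-based active learning loop with uncertainty sampling for
    # sensitivity classification. `docs` and `labels` are assumed to be available;
    # the paper's annotation-derived features are not modelled here.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def uncertainty_sample(model, X_pool, batch_size=10):
        # Pick the documents whose predicted sensitivity probability is closest
        # to 0.5, i.e. those the current classifier is least certain about.
        probs = model.predict_proba(X_pool)[:, 1]
        return np.argsort(np.abs(probs - 0.5))[:batch_size]

    vectoriser = TfidfVectorizer(max_features=20000)
    X = vectoriser.fit_transform(docs)                 # docs: list[str]
    labelled = list(range(50))                         # small seed set reviewed up front
    pool = [i for i in range(len(docs)) if i not in labelled]

    model = LogisticRegression(max_iter=1000)
    for _ in range(20):                                # number of review rounds is arbitrary
        model.fit(X[labelled], [labels[i] for i in labelled])
        picked = uncertainty_sample(model, X[pool])
        labelled.extend(pool[p] for p in picked)       # simulate reviewer feedback
        pool = [i for i in pool if i not in labelled]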

    H2O: An Autonomic, Resource-Aware Distributed Database System

    This paper presents the design of an autonomic, resource-aware distributed database which enables data to be backed up and shared without complex manual administration. The database, H2O, is designed to make use of unused resources on workstation machines. Creating and maintaining highly-available, replicated database systems can be difficult for untrained users, and costly for IT departments. H2O reduces the need for manual administration by autonomically replicating data and load-balancing across machines in an enterprise. Provisioning hardware to run a database system can be unnecessarily costly as most organizations already possess large quantities of idle resources in workstation machines. H2O is designed to utilize this unused capacity by using resource availability information to place data and plan queries over workstation machines that are already being used for other tasks. This paper discusses the requirements for such a system and presents the design and implementation of H2O.
    Comment: Presented at SICSA PhD Conference 2010 (http://www.sicsaconf.org/
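
    As a rough illustration of resource-aware placement of the sort described, the sketch below ranks candidate workstations by a simple availability score; the Machine fields and the weighting are assumptions for illustration, not H2O's actual placement policy.

    # Illustrative ranking of workstations for replica placement by spare capacity.
    # The fields and scoring weights are hypothetical, not H2O's policy.
    from dataclasses import dataclass

    @dataclass
    class Machine:
        name: str
        cpu_idle: float      # fraction of CPU currently idle, 0..1
        free_disk_gb: float  # spare disk space
        uptime_hours: float  # rough proxy for how stable the workstation is

    def placement_score(m: Machine) -> float:
        # Prefer machines that are idle, have spare disk and tend to stay online.
        return (0.5 * m.cpu_idle
                + 0.3 * min(m.free_disk_gb / 100.0, 1.0)
                + 0.2 * min(m.uptime_hours / 168.0, 1.0))

    def choose_replica_sites(machines, replication_factor=3):
        return sorted(machines, key=placement_score, reverse=True)[:replication_factor]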

    Towards Maximising Openness in Digital Sensitivity Review using Reviewing Time Predictions

    The adoption of born-digital documents, such as email, by governments, such as in the UK and USA, has resulted in a large backlog of born-digital documents that must be sensitivity reviewed before they can be opened to the public, to ensure that no sensitive information is released, e.g. personal or confidential information. However, it is not practical to review all of the backlog with the available reviewing resources and, therefore, there is a need for automatic techniques to increase the number of documents that can be opened within a fixed reviewing time budget. In this paper, we conduct a user study and use the log data to build models to predict reviewing times for an average sensitivity reviewer. Moreover, we show that using our reviewing time predictions to select the order in which documents are reviewed can markedly increase the ratio of reviewed documents that are released to the public, e.g. +30% for collections with high levels of sensitivity, compared to reviewing by shortest document first. This, in turn, increases the total number of documents that are opened to the public within a fixed reviewing time budget, e.g. an extra 200 documents in 100 hours of reviewing.
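
    A small sketch of the scheduling idea: given per-document reviewing-time predictions, choose an order that fits as many documents as possible into a fixed budget. The predictor and field names are placeholders, not the models built from the user study.

    # Illustrative greedy ordering of documents by predicted reviewing time so that
    # as many as possible fit into a fixed reviewing budget. `predict_minutes` is a
    # placeholder for a learned reviewing-time model.
    def schedule_reviews(documents, predict_minutes, budget_minutes):
        ordered = sorted(documents, key=predict_minutes)   # shortest predicted time first
        scheduled, used = [], 0.0
        for doc in ordered:
            cost = predict_minutes(doc)
            if used + cost > budget_minutes:
                break
            scheduled.append(doc)
            used += cost
        return scheduled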

    Spark for Social Science

    Urban has developed an elastic and powerful approach to the analysis of massive datasets using Amazon Web Services’ Elastic MapReduce (EMR) and the Spark framework for distributed memory and processing. The goal of the project is to deliver powerful and elastic Spark clusters to researchers and data analysts with as little setup time and effort as possible, and at low cost. To do that, at the Urban Institute, we use two critical components: (1) an Amazon Web Services (AWS) CloudFormation script to launch AWS Elastic MapReduce (EMR) clusters, and (2) a bootstrap script that runs on the Master node of the new cluster to install statistical programs and development environments (RStudio and Jupyter Notebooks). The Urban Institute’s Spark for Social Science GitHub page holds the code used to set up the cluster and tutorials for learning how to program in R and Python.
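
    For illustration only, a boto3 sketch of launching an EMR cluster with Spark and a bootstrap action of the kind described; the bucket, script path, instance types and release label are hypothetical, and the project's actual CloudFormation and bootstrap scripts live in the Spark for Social Science GitHub repository.

    # Illustrative boto3 call that launches an EMR cluster with Spark and runs a
    # bootstrap script on the master node. All names, paths and sizes below are
    # placeholders, not the Urban Institute's configuration.
    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    response = emr.run_job_flow(
        Name="spark-for-social-science",
        ReleaseLabel="emr-5.36.0",
        Applications=[{"Name": "Spark"}],
        Instances={
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "InstanceCount": 4,
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        BootstrapActions=[{
            "Name": "install-rstudio-and-jupyter",
            "ScriptBootstrapAction": {"Path": "s3://example-bucket/bootstrap.sh"},
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    print(response["JobFlowId"])   # cluster identifier for later monitoring/teardown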

    Towards a classifier for digital sensitivity review

    The sensitivity review of government records is essential before they can be released to the official government archives, to prevent sensitive information (such as personal information, or information that is prejudicial to international relations) from being released. As records are typically reviewed and released after a period of decades, sensitivity review practices are still based on paper records. The transition to digital records brings new challenges, e.g. the increased volume of digital records, making current practices impractical. In this paper, we describe our current work towards developing a sensitivity review classifier that can identify and prioritise potentially sensitive digital records for review. Using a test collection built from government records with real sensitivities identified by government assessors, we show that considering the entities present in each record can markedly improve upon a text classification baseline.
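
    A minimal sketch of combining text features with entity-based features for sensitivity classification, as the abstract suggests; the entity extraction, feature design and classifier choice are illustrative assumptions rather than the authors' exact pipeline.

    # Illustrative pipeline that augments a bag-of-words representation with simple
    # named-entity count features. Requires spaCy's en_core_web_sm model; the
    # feature design and classifier are assumptions, not the evaluated pipeline.
    import spacy
    from scipy.sparse import hstack, csr_matrix
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    nlp = spacy.load("en_core_web_sm")

    def entity_counts(text):
        # Count person, organisation and place mentions as crude sensitivity cues.
        counts = {"PERSON": 0, "ORG": 0, "GPE": 0}
        for ent in nlp(text).ents:
            if ent.label_ in counts:
                counts[ent.label_] += 1
        return [counts["PERSON"], counts["ORG"], counts["GPE"]]

    def build_features(texts, vectoriser, fit=False):
        text_feats = vectoriser.fit_transform(texts) if fit else vectoriser.transform(texts)
        ent_feats = csr_matrix([entity_counts(t) for t in texts])
        return hstack([text_feats, ent_feats])

    vectoriser = TfidfVectorizer(max_features=20000)
    X_train = build_features(train_texts, vectoriser, fit=True)   # train/test data assumed
    clf = LinearSVC().fit(X_train, train_labels)
    predictions = clf.predict(build_features(test_texts, vectoriser))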

    An Approach to Ad hoc Cloud Computing

    We consider how underused computing resources within an enterprise may be harnessed to improve utilization and create an elastic computing infrastructure. Most current cloud provision involves a data center model, in which clusters of machines are dedicated to running cloud infrastructure software. We propose an additional model, the ad hoc cloud, in which infrastructure software is distributed over resources harvested from machines already in existence within an enterprise. In contrast to the data center cloud model, resource levels are not established a priori, nor are resources dedicated exclusively to the cloud while in use. A participating machine is not dedicated to the cloud, but has some other primary purpose such as running interactive processes for a particular user. We outline the major implementation challenges and one approach to tackling them.
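
    As a crude illustration of the ad hoc model, a host agent might only accept cloud work while the workstation's primary user leaves enough idle capacity; the thresholds and the use of psutil here are assumptions, not the authors' implementation.

    # Illustrative admission check for an ad hoc cloud host agent: accept cloud work
    # only while the workstation is lightly loaded. Thresholds are arbitrary and
    # psutil is just one way to read resource usage.
    import psutil

    CPU_THRESHOLD = 50.0      # percent; above this the machine is considered busy
    MEMORY_THRESHOLD = 75.0   # percent of RAM in use

    def can_accept_cloud_work() -> bool:
        cpu = psutil.cpu_percent(interval=1.0)   # sample CPU usage over one second
        mem = psutil.virtual_memory().percent
        return cpu < CPU_THRESHOLD and mem < MEMORY_THRESHOLD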

    A Study of SVM Kernel Functions for Sensitivity Classification Ensembles with POS Sequences

    Freedom of Information (FOI) laws legislate that government documents should be opened to the public. However, many government documents contain sensitive information, such as confidential information, that is exempt from release. Therefore, government documents must be sensitivity reviewed prior to release, to identify and close any sensitive information. With the adoption of born-digital documents, such as email, there is a need for automatic sensitivity classification to assist digital sensitivity review. SVM classifiers and Part-of-Speech sequences have separately been shown to be promising for sensitivity classification. However, sequence classification methodologies, and specifically SVM kernel functions, have not been fully investigated for sensitivity classification. Therefore, in this work, we present an evaluation of five SVM kernel functions for sensitivity classification using POS sequences. Moreover, we show that an ensemble classifier that combines POS sequence classification with text classification can significantly improve sensitivity classification effectiveness (+6.09% F2) compared with a text classification baseline, according to McNemar's test of significance.
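
    A rough sketch of comparing SVM kernels on POS-sequence features and combining a POS classifier with a text classifier; the POS n-gram representation, the kernel list and the soft-voting scheme are illustrative assumptions, not the configuration evaluated in the paper.

    # Illustrative comparison of SVM kernels on part-of-speech n-gram features, plus
    # a simple soft-vote ensemble with a text classifier. Requires NLTK's punkt and
    # averaged_perceptron_tagger resources; train/test data variables are assumed.
    import nltk
    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVC

    def pos_sequence(text):
        # Replace each token with its part-of-speech tag, e.g. "NNP VBD DT NN".
        tokens = nltk.word_tokenize(text)
        return " ".join(tag for _, tag in nltk.pos_tag(tokens))

    pos_train = [pos_sequence(t) for t in train_texts]
    pos_test = [pos_sequence(t) for t in test_texts]

    text_clf = Pipeline([("tfidf", TfidfVectorizer()),
                         ("svm", SVC(kernel="linear", probability=True))])
    text_clf.fit(train_texts, train_labels)

    for kernel in ["linear", "poly", "rbf", "sigmoid"]:
        pos_clf = Pipeline([("ngrams", CountVectorizer(ngram_range=(1, 3))),
                            ("svm", SVC(kernel=kernel, probability=True))])
        pos_clf.fit(pos_train, train_labels)
        # Soft-vote ensemble: average the two classifiers' sensitivity probabilities.
        ensemble_probs = (pos_clf.predict_proba(pos_test)[:, 1]
                          + text_clf.predict_proba(test_texts)[:, 1]) / 2.0
        predictions = ensemble_probs > 0.5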

    Study in epidemiology: illustrated by a diphtheria epidemic

    The consideration of Epidemic disease here set out, while illustrated by the study of a Diphtheria Epidemic, represents rather the conclusions arrived at after some twenty years' experience of disease in general gained in the practice of Medicine and Surgery, amplified by two years recently devoted to the study of Preventive Medicine and Bacteriology in particular. Little of originality is claimed; but the exceptional advantage of being able to discuss an epidemic from personal investigation thereof, from the clinical and epidemiological at the same time as from the Bacteriological standpoint, is undoubted. Personal experience would stand for little, unless it had been aided by the study of the recent and current literature on the subject of Diphtheria and Epidemics in general. Contributions to the various Medical Journals, 'Public Health,' the Journal of Hygiene, and different publications and text books have all been taken advantage of; and amongst others contributing to these may be mentioned the names of Loeffler, Newsholme, Newman, Theobald Smith, Cobbett, Graham Smith, Delépine, Woodhead. I may add that I owe much to Professor Delépine of Manchester, under whom I spent various periods of study from 1908 to 1910; for no one can come in contact with him without being infected with his enthusiasm in the cause of Preventive Medicine and trying to emulate in some degree his extraordinary precision and thoroughness and attempting to adopt his breadth of outlook. The personal friendship as well as continuous help and suggestion of Dr. Buckley, Medical Officer of Health for Crewe, greatly assisted my work at Crewe.
