9,720 research outputs found

    Balancing Access to Data and Privacy: A review of the issues and approaches for the future

    Access to sensitive microdata should be provided using remote access data enclaves. These enclaves should be built to facilitate the productive, high-quality usage of microdata. In other words, they should support a collaborative environment that facilitates the development and exchange of knowledge about data among data producers and consumers. The experience of the physical and life sciences has shown that it is possible to develop a research community and a knowledge infrastructure around both research questions and the different types of data necessary to answer policy questions. In sum, establishing a virtual organization approach would provide the research community with the ability to move away from individual, or artisan, science towards the more generally accepted community-based approach. Enclaves should include a number of features: metadata documentation capacity so that knowledge about data can be shared; capacity to add data so that the data infrastructure can be augmented; communication capacity, such as wikis, blogs and discussion groups, so that knowledge about the data can be deepened; and incentives for information sharing so that a community of practice can be built. The opportunity to transform microdata-based research through such an organizational infrastructure could potentially be as far-reaching as the changes that have taken place in the biological and astronomical sciences. It is, however, an open research question how such an organization should be established: whether the approach should be centralized or decentralized. Similarly, it is an open research question as to the appropriate metrics of success and the best incentives to put in place to achieve success.

    Methodology for Collecting, Estimating, Organizing Microeconomic Data

    A model for digital preservation repository risk relationships

    The paper introduces the Preserved Object and Repository Risk Ontology (PORRO), a model that relates preservation functionality with associated risks and opportunities for their mitigation. Building on work undertaken in a range of EU and UK funded research projects (including the Digital Curation Centre, DigitalPreservationEurope and DELOS), this ontology illustrates relationships between fundamental digital library goals and their parameters; associated rights and responsibilities; practical activities and resources involved in their accomplishment; and risks facing digital libraries and their collections. Its purpose is to facilitate a comprehensive understanding of risk causality and to illustrate opportunities for mitigation and avoidance. The ontology reflects evidence accumulated from a series of institutional audits and evaluations, including a specific subset of digital libraries in the DELOS project which led to the definition of a digital library preservation risk profile. Its applicability is intended to be widespread, and its coverage is expected to evolve to reflect developments within the community. Attendees will gain an understanding of the model and learn how they can utilize this online resource to inform their own risk management activities.
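
    The abstract describes PORRO only at a high level. As a rough, hypothetical sketch of the kind of relationships such an ontology expresses (preservation goals, the risks that threaten them, and the activities and resources that mitigate them), the Python fragment below uses invented class and instance names; it is not the actual PORRO vocabulary.

    # Hypothetical sketch of risk relationships of the kind PORRO models;
    # the names below are illustrative, not PORRO terms.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MitigationActivity:
        name: str                              # practical activity that reduces a risk
        resources: List[str] = field(default_factory=list)

    @dataclass
    class Risk:
        name: str                              # threat facing a digital library or collection
        threatens_goal: str                    # the fundamental goal the risk puts at stake
        mitigations: List[MitigationActivity] = field(default_factory=list)

    # Example: format obsolescence threatens renderability and is mitigated by
    # migration, which in turn requires a format registry and migration tooling.
    obsolescence = Risk(
        name="file format obsolescence",
        threatens_goal="long-term renderability of collection objects",
        mitigations=[
            MitigationActivity(
                name="migrate objects to a sustainable format",
                resources=["format registry", "migration tooling"],
            )
        ],
    )

    for m in obsolescence.mitigations:
        print(f"Risk '{obsolescence.name}' -> mitigation '{m.name}' using {m.resources}")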

    Addressing Information Asymmetry In The Social Contract: An Archival-diplomatic Approach To Open Government Data Curation

    This thesis shows that the concepts and practices developed in the field of record-keeping can be applied to the curation of open government data to strengthen the trustworthiness of that data. It begins by situating open government data in the context of the social contract, which operates through the exchange of information. The thesis develops the notions of the ‘record-as-command’ and ‘data-as-command’ to explain the dialogical but asymmetrical information relationship between the individual and the state, which is modelled as a principal-agent problem. Using concepts from information economics, the study argues that open government data is the latest monitoring mechanism in a long history of government secrecy and openness. This establishes the significance of the curation of open government data beyond technical questions. The thesis then considers how trustworthiness has figured in thinking about record-keeping, identifying three core record-keeping controls: 1) metadata used to document 2) custodianship in 3) auditable systems. To show how these three broad controls have been put into practice, the study examines two examples of record-keeping guidance, one for paper records and one for hybrid records, which demonstrates the application of the three core controls across time and media. The study then looks for the presence or absence of these controls in government datasets published in Kenya and Australia. These analyses trace the datasets back to their source(s), at each step looking for evidence of custodial and curatorial interventions documented in metadata managed in auditable systems. The study’s contribution to open government data work is its demonstration of the value of record-keeping controls in the curation of data. Additionally, the study contributes new insights to information in the principal-agent problem of the social contract, contributes to archival theory and finds a need to foster critical data literacy in the body politic if open government data is to be read and used to correct information asymmetry.
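
    As a concrete illustration of the three controls the thesis names (metadata documenting custodianship in auditable systems), the sketch below records a dataset's custody and curation events in an append-only, hash-chained log. The field names, dataset identifier and actors are assumptions made for illustration, not the thesis's own schema or the Kenyan and Australian datasets it examines.

    # Illustrative sketch: metadata (1) documenting custodianship (2) in an
    # auditable, append-only log (3). Field names are assumptions, not a standard.
    import hashlib
    import json
    from datetime import datetime, timezone

    audit_log = []  # append-only record of custodial and curatorial interventions

    def record_event(dataset_id: str, actor: str, action: str, note: str = "") -> dict:
        """Append one custody/curation event, chained to the previous entry's hash."""
        prev_hash = audit_log[-1]["entry_hash"] if audit_log else ""
        entry = {
            "dataset_id": dataset_id,
            "actor": actor,                 # who held or acted on the data
            "action": action,               # e.g. "received", "cleaned", "published"
            "note": note,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        audit_log.append(entry)
        return entry

    record_event("health-facilities-2024", "Ministry of Health", "received")
    record_event("health-facilities-2024", "Open Data Team", "cleaned",
                 note="removed duplicate facility rows")
    record_event("health-facilities-2024", "Open Data Portal", "published")

    print(json.dumps(audit_log, indent=2))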

    Analytic Tradecraft in the U.S. Intelligence Community

    The Intelligence Reform and Terrorism Prevention Act of 2004 addressed the belief that weak analytic tradecraft had been an underlying cause of intelligence failures in the U.S. by requiring the Director of National Intelligence to establish and enforce tradecraft standards throughout the U.S. intelligence community (IC). However, analytic tradecraft (the innate abilities and learned skills of intelligence analysts, combined with the tools and technology needed to conduct analysis) is an understudied and poorly understood concept, and a decade later the frequency of intelligence failures has not improved. Using actor-network theory (ANT) as the foundation, the purpose of this qualitative narrative study was to gain greater clarity regarding the process of intelligence analysis and corresponding tradecraft. Data were collected through 7 semi-structured interviews from a purposely selected sample of U.S. intelligence analysts to determine how they understood and navigated the analytic process. These data were inductively coded and, following the tenets of ANT, the process and actors involved in transforming customer requirements and intelligence information into analytic products and refined collection requirements were identified and mapped. The central finding of this study is that current tradecraft standards address neither the full range of activities taking place nor the complete roster of actors involved in the analytic process. With this knowledge, the U.S. IC may be better positioned to identify specific training and equipment shortfalls, develop tailored reform efforts, and improve intelligence operations, resulting in potential positive social change.

    Trustworthiness Requirements in Information Systems Design: Lessons Learned from the Blockchain Community

    In modern society, where digital security is a major preoccupation, the perception of trust is undergoing fundamental transformations. The blockchain community has created a substantial body of knowledge on the design and development of trustworthy information systems and digital trust, yet little research is focused on the broader scope and other forms of trust. In this study, we review the research literature reporting on the design and development of blockchain solutions and focus on the trustworthiness requirements that drive these solutions. Our findings show that digital trust is not the only form of trust that organizations seek to reinforce: trust in technology and social trust remain powerful drivers in decision making. We analyze 56 primary studies and extract and formulate a set of 21 trustworthiness requirements. While they originate from the blockchain literature, the formulated requirements are technology-neutral: they aim at supporting business and technology experts in translating their trust issues into specific design decisions and in rationalizing their technological choices. To bridge the gap between the social and technological domains, we associate the trustworthiness requirements with three trustworthiness factors defined in the social sciences: ability, benevolence and integrity.
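
    The abstract does not list the 21 requirements themselves, so the fragment below is only a hypothetical sketch of how individual trustworthiness requirements might be associated with the three factors (ability, benevolence, integrity); the example requirements are invented for illustration and do not come from the paper.

    # Hypothetical sketch: associating trustworthiness requirements with the three
    # factors from social science (ability, benevolence, integrity). The example
    # requirements are invented; the paper's actual 21 requirements are not shown.
    from enum import Enum

    class TrustFactor(Enum):
        ABILITY = "ability"
        BENEVOLENCE = "benevolence"
        INTEGRITY = "integrity"

    # requirement -> factors it primarily supports (illustrative mapping)
    requirements = {
        "The system shall make recorded transactions tamper-evident": {TrustFactor.INTEGRITY},
        "The system shall process transactions within an agreed latency": {TrustFactor.ABILITY},
        "The system shall disclose how participants' data is used": {
            TrustFactor.BENEVOLENCE,
            TrustFactor.INTEGRITY,
        },
    }

    for factor in TrustFactor:
        supported = [r for r, fs in requirements.items() if factor in fs]
        print(f"{factor.value}: {len(supported)} requirement(s)")
        for r in supported:
            print(f"  - {r}")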

    Invest to Save: Report and Recommendations of the NSF-DELOS Working Group on Digital Archiving and Preservation

    Digital archiving and preservation are important areas for research and development, but there is no agreed-upon set of priorities or coherent plan for research in this area. Research projects in this area tend to be small and driven by particular institutional problems or concerns. As a consequence, proposed solutions from experimental projects and prototypes tend not to scale to millions of digital objects, nor do the results from disparate projects readily build on each other. It is also unclear whether it is worthwhile to seek general solutions or whether different strategies are needed for different types of digital objects and collections. The lack of coordination in both research and development means that there are some areas where researchers are reinventing the wheel while other areas are neglected. Digital archiving and preservation is an area that will benefit from an exercise in analysis, priority setting, and planning for future research. The WG aims to survey current research activities, identify gaps, and develop a white paper proposing future research directions in the area of digital preservation. Some of the potential areas for research include repository architectures and interoperability among digital archives; automated tools for capture, ingest, and normalization of digital objects; and harmonization of preservation formats and metadata. There may also be opportunities for development of commercial products in the areas of mass storage systems, repositories and repository management systems, and data management software and tools.
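
    One research area the report names is automated tools for capture, ingest, and normalization of digital objects. Purely as an illustration of that area, and not anything proposed in the report, the sketch below performs a minimal ingest step that records fixity and basic technical metadata for a single file; the metadata fields are assumptions rather than a preservation standard.

    # Illustrative only: a minimal ingest step of the kind covered by the
    # "automated tools for capture, ingest, and normalization" research area.
    # The metadata fields are assumptions, not a preservation standard.
    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def ingest(path: Path) -> dict:
        """Record fixity (SHA-256) and basic technical metadata for one object."""
        data = path.read_bytes()
        return {
            "identifier": path.name,
            "size_bytes": len(data),
            "sha256": hashlib.sha256(data).hexdigest(),
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            "original_format": path.suffix.lstrip(".").lower() or "unknown",
        }

    if __name__ == "__main__":
        sample = Path("report.pdf")   # hypothetical object; replace with a real file
        if sample.exists():
            print(json.dumps(ingest(sample), indent=2))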

    Packet analysis for network forensics: A comprehensive survey

    Packet analysis is a primary traceback technique in network forensics which, provided that the captured packets are sufficiently detailed, can play back even the entire network traffic for a particular point in time. This can be used to find traces of nefarious online behavior, data breaches, unauthorized website access, malware infection, and intrusion attempts, and to reconstruct image files, documents, email attachments, etc. sent over the network. This paper is a comprehensive survey of the utilization of packet analysis, including deep packet inspection, in network forensics, and provides a review of AI-powered packet analysis methods with advanced network traffic classification and pattern identification capabilities. Considering that not all network information can be used in court, the types of digital evidence that might be admissible are detailed. The properties of both hardware appliances and packet analyzer software are reviewed from the perspective of their potential use in network forensics.
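
    As a small illustration of the kind of packet analysis the survey covers, the sketch below reads a capture file and summarizes its largest TCP conversations. It assumes the third-party scapy library and a hypothetical capture file named evidence.pcap, and it is not a method taken from the paper.

    # Illustrative sketch, not a method from the survey: summarize TCP conversations
    # in a capture file. Assumes the third-party scapy package (pip install scapy)
    # and a hypothetical capture file "evidence.pcap".
    from collections import Counter

    from scapy.all import IP, TCP, rdpcap

    packets = rdpcap("evidence.pcap")          # load the capture into memory
    conversations = Counter()

    for pkt in packets:
        if pkt.haslayer(IP) and pkt.haslayer(TCP):
            key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
            conversations[key] += len(pkt)     # bytes observed per flow direction

    # Largest flows first: a starting point for spotting exfiltration or
    # unauthorized access before deeper (payload-level) inspection.
    for (src, sport, dst, dport), size in conversations.most_common(10):
        print(f"{src}:{sport} -> {dst}:{dport}  {size} bytes")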
