
    Transparent government, not transparent citizens: a report on privacy and transparency for the Cabinet Office

    1. Privacy is extremely important to transparency. The political legitimacy of a transparency programme will depend crucially on its ability to retain public confidence. Privacy protection should therefore be embedded in any transparency programme, rather than bolted on as an afterthought.
    2. Privacy and transparency are compatible, as long as the former is carefully protected and considered at every stage.
    3. Under the current transparency regime, in which public data is specifically understood not to include personal data, most data releases will not raise privacy concerns. However, some will, especially as we move toward a more demand-driven scheme.
    4. Discussion about deanonymisation has been driven largely by legal considerations, with a consequent neglect of the input of the technical community.
    5. There are no complete legal or technical fixes to the deanonymisation problem. We should continue to anonymise sensitive data, being initially cautious about releasing such data under the Open Government Licence while we continue to take steps to manage and research the risks of deanonymisation. Further investigation to determine the level of risk would be very welcome.
    6. There should be a focus on procedures to output an auditable debate trail. Transparency about transparency – metatransparency – is essential for preserving trust and confidence.
    Fourteen recommendations are made to address these conclusions.
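The report's concern about deanonymisation of released data can be illustrated with k-anonymity, a standard re-identification metric (the report itself does not prescribe it; the dataset and quasi-identifier columns below are invented for illustration):

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest equivalence-class size over the quasi-identifier columns.
    A release is k-anonymous if every combination of quasi-identifier
    values is shared by at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Toy release: age band and postcode district act as quasi-identifiers.
released = [
    {"age_band": "30-39", "district": "SW1"},
    {"age_band": "30-39", "district": "SW1"},
    {"age_band": "40-49", "district": "N1"},
]
print(k_anonymity(released, ["age_band", "district"]))  # the N1 record is unique, so k = 1
```

A record with k = 1 is uniquely identifiable from its quasi-identifiers alone, which is exactly the risk that motivates caution before release under an open licence.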

    Location Privacy in Spatial Crowdsourcing

    Spatial crowdsourcing (SC) is a new platform that engages individuals in collecting and analyzing environmental, social and other spatiotemporal information. With SC, requesters outsource their spatiotemporal tasks to a set of workers, who will perform the tasks by physically traveling to the tasks' locations. This chapter identifies privacy threats toward both workers and requesters during the two main phases of spatial crowdsourcing, tasking and reporting. Tasking is the process of identifying which tasks should be assigned to which workers. This process is handled by a spatial crowdsourcing server (SC-server). The latter phase is reporting, in which workers travel to the tasks' locations, complete the tasks and upload their reports to the SC-server. The challenge is to enable effective and efficient tasking as well as reporting in SC without disclosing the actual locations of workers (at least until they agree to perform a task) and the tasks themselves (at least to workers who are not assigned to those tasks). This chapter aims to provide an overview of the state-of-the-art in protecting users' location privacy in spatial crowdsourcing. We provide a comparative study of a diverse set of solutions in terms of task publishing modes (push vs. pull), problem focuses (tasking and reporting), threats (server, requester and worker), and underlying technical approaches (from pseudonymity, cloaking, and perturbation to exchange-based and encryption-based techniques). The strengths and drawbacks of the techniques are highlighted, leading to a discussion of open problems and future work.
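Of the approaches the survey compares, spatial cloaking is the simplest to sketch: the worker reports only a coarse grid cell rather than an exact position. A minimal sketch (the cell size and coordinates are illustrative, not taken from any surveyed system):

```python
import math

def cloak(lat, lon, cell_deg=0.01):
    """Snap a coordinate to the centre of its grid cell (about 1 km across
    at mid-latitudes for 0.01 degrees), so the SC-server learns only which
    cell a worker is in, never the worker's exact position."""
    snap = lambda v: (math.floor(v / cell_deg) + 0.5) * cell_deg
    return snap(lat), snap(lon)

# Two nearby workers report the same cloaked location (same cell centre).
print(cloak(51.5074, -0.1278))
print(cloak(51.5099, -0.1221))
```

The trade-off the chapter discusses follows directly: larger cells give stronger privacy but make task assignment less efficient, because the server can no longer pick the truly nearest worker.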

    A survey of machine and deep learning methods for privacy protection in the Internet of things

    Recent advances in hardware and information technology have accelerated the proliferation of smart and interconnected devices, facilitating the rapid development of the Internet of Things (IoT). IoT applications and services are widely adopted in environments such as smart cities, smart industry, autonomous vehicles, and eHealth. As such, IoT devices are ubiquitously connected, transferring sensitive and personal data without requiring human interaction. Consequently, it is crucial to preserve data privacy. This paper presents a comprehensive survey of recent Machine Learning (ML)- and Deep Learning (DL)-based solutions for privacy in IoT. First, we present an in-depth analysis of current privacy threats and attacks. Then, for each ML architecture proposed, we present the implementation details and the published results. Finally, we identify the most effective solutions for the different threats and attacks. This work is partially supported by the Generalitat de Catalunya under grant 2017 SGR 962 and the HORIZON-GPHOENIX (101070586) and HORIZON-EUVITAMIN-V (101093062) projects.
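A common building block in privacy-preserving IoT analytics, including the ML-based pipelines such surveys cover, is differential privacy via the Laplace mechanism. A minimal sketch for releasing a noisy mean of bounded sensor readings (the readings and bounds are invented; this is a generic technique, not a specific method from the survey):

```python
import math, random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise by inverse-transform sampling."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_mean(readings, lo, hi, epsilon):
    """Epsilon-differentially-private mean of bounded sensor readings
    (Laplace mechanism). Clipping to [lo, hi] bounds each record's
    influence, so the mean has sensitivity (hi - lo) / n."""
    clipped = [min(max(r, lo), hi) for r in readings]
    n = len(clipped)
    scale = (hi - lo) / (n * epsilon)
    return sum(clipped) / n + laplace_noise(scale)

temps = [21.3, 22.1, 20.8, 23.4, 21.9]   # toy smart-home temperature readings
print(private_mean(temps, lo=0.0, hi=40.0, epsilon=1.0))
```

Smaller epsilon injects more noise and hides any single device's contribution more strongly, at the cost of a less accurate aggregate.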

    Entire Issue Volume 23, Number 2

    Complete issue of Vol. 23, No. 2 of The Primary Source.

    Advancing Microdata Privacy Protection: A Review of Synthetic Data

    Synthetic data generation is a powerful tool for privacy protection when considering public release of record-level data files. Initially proposed about three decades ago, it has generated significant research and application interest. To meet the pressing demand of data privacy protection in a variety of contexts, the field needs more researchers and practitioners. This review provides a comprehensive introduction to synthetic data, including technical details of their generation and evaluation. Our review also addresses the challenges and limitations of synthetic data, discusses practical applications, and provides thoughts for future work.
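The idea of generation can be sketched at its most naive: fit a distribution to the confidential microdata and release draws from the model instead of the records. The sketch below fits an independent Gaussian per column, which is a deliberately simplified assumption (practical synthesizers reviewed in this literature model the joint distribution, e.g. via sequential regression or copulas); the microdata are invented:

```python
import random, statistics

def synthesize(records, n_out):
    """Naive fully-synthetic generator: fit an independent Gaussian to each
    numeric column of the confidential records and resample. This preserves
    each column's mean and spread but, unlike real synthesizers, ignores
    correlations between columns."""
    cols = list(records[0])
    fit = {c: (statistics.mean([r[c] for r in records]),
               statistics.stdev([r[c] for r in records])) for c in cols}
    return [{c: random.gauss(*fit[c]) for c in cols} for _ in range(n_out)]

microdata = [{"income": 31_000, "age": 42}, {"income": 58_500, "age": 37},
             {"income": 24_900, "age": 55}, {"income": 47_200, "age": 29}]
synthetic = synthesize(microdata, n_out=100)
```

No synthetic record corresponds to a real person, which is the privacy appeal; the evaluation challenge the review discusses is measuring how much analytic utility (and how much residual disclosure risk) such draws retain.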

    Towards a Digital Ecosystem of Trust: Ethical, Legal and Societal Implications

    The European vision of a digital ecosystem of trust rests on innovation, powerful technological solutions, a comprehensive regulatory framework and respect for the core values and principles of ethics. Innovation in the digital domain strongly relies on data, as has become obvious during the current pandemic. Successful data science, especially where health data are concerned, necessitates establishing a framework where data subjects can feel safe to share their data. In this paper, methods for facilitating data sharing, privacy-preserving technologies, decentralization, data altruism, as well as the interplay between the Data Governance Act and the GDPR, are presented and discussed by reference to use cases from the largest pan-European social science data research project, SoBigData++. In doing so, we argue that innovation can be turned into responsible innovation and Europe can make its ethics work in digital practice.

    Transgendered in Alaska: Navigating the Changing Legal Landscape for Change in Gender Petitions

    Background: Detecting intracellular bacterial symbionts can be challenging when they persist at very low densities. Wolbachia, a widespread bacterial endosymbiont of invertebrates, is particularly challenging. Although it persists at high titers in many species, in others its densities are far below the detection limit of classic end-point Polymerase Chain Reaction (PCR). These low-titer infections can be reliably detected by combining PCR with DNA hybridization, but less elaborate strategies based on end-point PCR alone have proven less sensitive or less general. Results: We introduce a multicopy PCR target that allows fast and reliable detection of A-supergroup Wolbachia, even at low infection titers, with standard end-point PCR. The target is a multicopy motif (designated ARM: A-supergroup repeat motif) discovered in the genome of wMel (the Wolbachia in Drosophila melanogaster). ARM is found in at least seven other Wolbachia A-supergroup strains infecting various Drosophila, the wasp Muscidifurax and the tsetse fly Glossina. We demonstrate that end-point PCR targeting ARM can reliably detect both high- and low-titer Wolbachia infections in Drosophila, Glossina and interspecific hybrids. Conclusions: Simple end-point PCR of ARM facilitates detection of low-titer Wolbachia A-supergroup infections. Detecting these infections previously required more elaborate procedures. Our ARM target seems to be a general feature of Wolbachia A-supergroup genomes, unlike other multicopy markers such as insertion sequences (IS).

    Capability Challenges in Transforming Government through Open and Big Data: Tales of Two Cities

    Hyper-connected and digitized governments increasingly advance a vision of data-driven government as both producer and consumer of big data in the big data ecosystem. Despite growing interest in the potential power of big data, we found a paucity of empirical research on big data use in government. This paper explores organizational capability challenges in transforming government through big data use. Using a systematic literature review approach, we developed an initial framework for examining the impacts of socio-political, strategic change, analytical, and technical capability challenges on enhancing public policy and services through big data. We then applied the framework in case study research on the big data use of two large city governments. The findings indicate the framework's usefulness, shedding new insight into the unique government context. Consequently, the framework was revised by adding big data public policy, political leadership structure, and organizational culture to further explain the impacts of organizational capability challenges in transforming government.