11 research outputs found

    Re-Identification Attacks – A Systematic Literature Review

    The publication of increasing amounts of anonymised open source data has resulted in a worrying rise in the number of successful re-identification attacks. This has privacy and security implications at both the individual and the corporate level. This paper uses a Systematic Literature Review to investigate the depth and extent of this problem as reported in peer-reviewed literature. Using a detailed protocol, seven research portals were explored and 10,873 database entries were searched, from which a subset of 220 papers was selected for further review. From this total, 55 papers were judged to be within scope and included in the final review. The main review findings are that 72.7% of all successful re-identification attacks have taken place since 2009, that most attacks use multiple datasets, and that the majority have been carried out on global datasets such as social networking data and conducted by US-based researchers. Furthermore, the number of datasets can be used as an attribute. Because privacy breaches have security, policy and legal implications (e.g. data protection, Safe Harbor), the work highlights the need for new and improved anonymisation techniques or, indeed, a fresh approach to open source publishing.
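    As a quick sanity check on the screening funnel reported above, the sketch below recomputes the selection proportions from the figures quoted in the abstract; the variable names are illustrative and only the numbers themselves come from the text.

    # Screening funnel from the abstract: portal search -> screened subset -> final review.
    entries_searched = 10_873      # database entries returned across the seven portals
    screened_papers = 220          # papers selected for further review
    included_papers = 55           # papers judged in scope for the final review

    screened_share = screened_papers / entries_searched
    included_share = included_papers / screened_papers

    print(f"Screened for review: {screened_share:.1%} of all entries")    # ~2.0%
    print(f"Included in final review: {included_share:.0%} of screened")  # 25%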

    Incorporating contextual integrity into privacy decision making: a risk based approach.

    This work sought to create a privacy assessment framework that would encompass legal, policy and contextual considerations to provide a practical decision support tool, or prototype, for determining privacy risks, thereby integrating the privacy decision-making function into organisational decision-making by default. This was achieved by way of a meta-model from which two separate privacy assessment frameworks were derived, each represented as a stand-alone prototype spreadsheet tool for privacy assessment, before being amalgamated into the main contribution of this work, the PACT (PrivACy Throughout) framework, also presented as a prototype spreadsheet. Thus, this work makes four contributions. First, a meta-model of Contextual Integrity (CI) (Nissenbaum 2010) is presented, in which CI has been broken down into its component parts to provide an easy-to-interpret visual representation of CI. Second, a practical privacy decision support framework for assessing data suitability for publication as open data, the ContextuaL Integrity For Open Data (CLIFOD) questionnaire, is presented. Third, the scope of the framework is expanded to include other industry sectors and domains. To this end, a data protection impact assessment (DPIA), the DPIA Data Wheel, is exhibited that integrates the provisions brought in by the General Data Protection Regulation (GDPR) with CI and a revised version of CLIFOD. This framework is applied and evaluated in the charity sector to demonstrate the applicability of the concepts derived in CLIFOD to any domain where data is processed or shared. Finally, this work culminates in its main contribution, one overarching framework, PrivACy Throughout (PACT). PACT is a privacy decision framework for assessing privacy risks throughout the data lifecycle. It is derived from and underpinned by existing theory through the amalgamation of CLIFOD and the DPIA Data Wheel, and extended to include a privacy lifecycle plan (PLAN) for managing the data throughout its lifecycle. PACT incorporates context (using CI) with contemporary legislation, in particular the General Data Protection Regulation (GDPR), to facilitate consistent and repeatable privacy risk assessment from both the perspective of the data subject and that of the organisation, thereby supporting organisational decision making around privacy risk for both existing and new projects, systems, data and processes.
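    To make the meta-model's decomposition of CI more concrete, the following sketch shows one way its core components could be represented in code. The five flow parameters (data subject, sender, recipient, information type, transmission principle) follow Nissenbaum's formulation of CI; the class and function names are illustrative assumptions and are not taken from the CLIFOD, DPIA Data Wheel or PACT prototypes.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class InformationFlow:
        subject: str                 # whom the data is about
        sender: str                  # who discloses the data
        recipient: str               # who receives the data
        information_type: str        # what attribute is transmitted
        transmission_principle: str  # constraint under which the flow occurs

    def violates_contextual_integrity(flow: InformationFlow,
                                      entrenched_norms: set[InformationFlow]) -> bool:
        """Flag a prospective flow as a privacy risk if it does not match any
        entrenched informational norm for the context."""
        return flow not in entrenched_norms

    # Example: health data shared with an insurer rather than a clinician.
    norm = InformationFlow("patient", "patient", "GP", "diagnosis", "confidentiality")
    proposed = InformationFlow("patient", "GP", "insurer", "diagnosis", "confidentiality")
    print(violates_contextual_integrity(proposed, {norm}))  # True -> assess further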

    DPIA in Context: Applying DPIA to Assess Privacy Risks of Cyber Physical Systems

    Cyber Physical Systems (CPS) seamlessly integrate physical objects with technology, thereby blurring the boundaries between the physical and virtual environments. While this brings many opportunities for progress, it also adds a new layer of complexity to the risk assessment process when attempting to ascertain what privacy risks this might pose for an organisation. In addition, privacy regulations such as the General Data Protection Regulation (GDPR) mandate assessment of privacy risks, including making Data Protection Impact Assessments (DPIAs) compulsory. We present the DPIA Data Wheel, a holistic privacy risk assessment framework based on Contextual Integrity (CI), that practitioners can use to inform decision making around the privacy risks of CPS. This framework facilitates comprehensive contextual inquiry into privacy risk that accounts for both the elicitation of privacy risks and the identification of appropriate mitigation strategies. Further, by using this DPIA framework we also provide organisations with a means of assessing privacy from both the perspective of the organisation and that of the individual, thereby facilitating GDPR compliance. We empirically evaluate this framework in three different real-world settings. In doing so, we demonstrate how CI can be incorporated into the privacy risk decision-making process in a usable, practical manner that will aid decision makers in making informed privacy decisions.
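    A minimal sketch of the kind of record a DPIA exercise for a CPS might capture is given below: an elicited risk, the perspective it was raised from, and candidate mitigations. The fields, scoring scale and example entries are illustrative assumptions, not the DPIA Data Wheel's actual structure.

    from dataclasses import dataclass, field

    @dataclass
    class PrivacyRisk:
        description: str
        perspective: str          # "organisation" or "data subject"
        likelihood: int           # 1 (rare) .. 5 (almost certain)
        impact: int               # 1 (negligible) .. 5 (severe)
        mitigations: list[str] = field(default_factory=list)

        @property
        def score(self) -> int:
            return self.likelihood * self.impact

    risks = [
        PrivacyRisk("Sensor data reveals occupancy patterns", "data subject", 4, 4,
                    ["aggregate readings", "shorten retention period"]),
        PrivacyRisk("Regulatory fine for unlawful processing", "organisation", 2, 5,
                    ["document lawful basis", "complete DPIA before deployment"]),
    ]

    # Review risks from both perspectives, highest score first.
    for r in sorted(risks, key=lambda r: r.score, reverse=True):
        print(f"[{r.score:>2}] ({r.perspective}) {r.description} -> {r.mitigations}")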

    Translating contextual integrity into practice using CLIFOD.

    Public open data increases transparency, but raises questions about the privacy implications for affected individuals. We present a case for using CLIFOD (ContextuaL Integrity for Open Data), a step-by-step privacy decision framework derived from contextual integrity, to assess the hidden risks of publishing data obtained from Internet of Things (IoT) and Smart City devices before any data is released and made openly available. We believe CLIFOD helps reduce the risk of any personal or sensitive data being inadvertently published or made available by guiding decision makers to think about privacy in context, what privacy risks might be associated with making the data available, and how this might impact prosumers.
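    The sketch below illustrates the general shape of such a step-by-step, questionnaire-driven pre-publication check. The questions are illustrative stand-ins informed by contextual integrity, not the actual CLIFOD questionnaire items.

    QUESTIONS = [
        "Does the dataset contain attributes about identifiable individuals?",
        "Could the data be linked with other open datasets to re-identify someone?",
        "Would release change who receives the data compared with the original context?",
        "Would release breach the transmission principle the data was collected under?",
    ]

    def assess_for_open_release(answers: list[bool]) -> str:
        """Answers are True where a privacy concern was identified."""
        flagged = [q for q, a in zip(QUESTIONS, answers) if a]
        if not flagged:
            return "No contextual-integrity concerns identified: candidate for release."
        return "Withhold pending review:\n- " + "\n- ".join(flagged)

    # Example run for a smart-city sensor dataset.
    print(assess_for_open_release([False, True, True, False]))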

    Privacy Risk Assessment in Context: A Meta-Model based on Contextual Integrity

    Publishing data in open format is a growing trend, particularly for public bodies who have a legal obligation to make data available as open data. We look at the privacy implications of publishing open data and, in particular, how organisations can make informed decisions about the privacy risks of open data publishing before publication occurs. Using a well-established theoretical privacy assessment framework, Contextual Integrity, we illustrate how this can be translated into a practical metamodel that can assist public bodies in assessing what privacy implications or risks might be associated with making a particular dataset available as open data. We validate the metamodel by providing a worked example and illustrate its effectiveness by reference to a case study application where the metamodel was successfully applied in practice.

    Privacy Goals for the Data Lifecycle

    The introduction of Data Protection by Default and Design (DPbDD), brought in as part of the General Data Protection Regulation (GDPR) in 2018, has necessitated that businesses review how best to incorporate privacy into their processes in a transparent manner, so as to build trust and improve decisions around privacy best practice. To address this issue, this paper presents a 7-stage data lifecycle, supported by nine privacy goals that together will help practitioners manage data holdings throughout the data lifecycle. The resulting data lifecycle (7-DL) was created as part of the Ideal-Cities project, a Horizon 2020 smart-city initiative that seeks to facilitate data re-use and/or repurposing. We evaluate 7-DL through peer review and an exemplar worked example that applies the data lifecycle to a real-time life-logging fire incident scenario, one of the Ideal-Cities use cases, to demonstrate the applicability of the framework.
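    The sketch below shows how a stage-to-goal mapping of this kind might be expressed in code. The stage labels and goal names are generic placeholders; the actual 7-DL stages and the nine privacy goals are defined in the paper and are not reproduced here.

    from enum import Enum

    class Stage(Enum):            # placeholder stage names, not the 7-DL labels
        COLLECT = 1
        STORE = 2
        USE = 3
        SHARE = 4
        ARCHIVE = 5
        REPURPOSE = 6
        DESTROY = 7

    GOALS_BY_STAGE = {            # illustrative goal assignments only
        Stage.COLLECT: ["lawfulness", "data minimisation"],
        Stage.STORE: ["confidentiality", "integrity"],
        Stage.SHARE: ["transparency", "purpose limitation"],
        Stage.DESTROY: ["secure erasure"],
    }

    # Walk the lifecycle and list which privacy goals apply at each stage.
    for stage in Stage:
        print(f"{stage.name:<9} -> {GOALS_BY_STAGE.get(stage, ['(assess per project)'])}")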

    Investigating IPTV Malware in the Wild

    Technologies providing copyright-infringing IPTV content are commonly used as an illegal alternative to legal IPTV subscriptions and services, as they usually have lower monetary costs and can be more convenient for users who follow content from different sources. These infringing IPTV technologies may include websites, software, software add-ons, and physical set-top boxes. Because of the free or low cost of illegal IPTV technologies, illicit IPTV content providers often resort to intrusive advertising, scams, and the distribution of malware to increase their revenue. We developed an automated solution for collecting and analysing malware from illegal IPTV technologies and used it to analyse a sample of illicit IPTV websites, application (app) stores, and software. Our results show that our IPTV Technologies Malware Analysis Framework (IITMAF) classified 32 of the 60 sample URLs tested as malicious, compared with running the same test using publicly available online antivirus solutions, which detected only 23 of the 60 sample URLs as malicious. Moreover, the IITMAF also detected malicious URLs and files from 31 of the sample’s websites, one of which exhibited ransomware behaviour.
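    A minimal sketch of an automated URL-checking loop in the spirit of the pipeline described above is shown below. It simply downloads content and compares its hash against a local set of known-bad hashes; the IITMAF's actual collection and detection logic is not reproduced, and the URL and hash values are placeholders.

    import hashlib
    import requests

    KNOWN_BAD_SHA256 = {
        # populate from a threat-intelligence feed of your choice (placeholder value)
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def check_url(url: str) -> str:
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException as exc:
            return f"{url}: unreachable ({exc.__class__.__name__})"
        digest = hashlib.sha256(resp.content).hexdigest()
        verdict = "MALICIOUS" if digest in KNOWN_BAD_SHA256 else "no match"
        return f"{url}: sha256={digest[:12]}... -> {verdict}"

    for url in ["https://example.com/player.apk"]:   # sample URLs to test
        print(check_url(url))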

    Data Sanitisation and Redaction for Cyber Threat Intelligence Sharing Platforms

    Recent technological advances and changes in daily human activities have increased the production and sharing of data. In an ecosystem of interconnected systems, data is circulated among systems for various reasons, which can lead to the exchange of private or sensitive information between entities. Data sanitisation involves processes and practices that remove sensitive and private information from documents before sharing them with entities that should not be exposed to the removed information. This paper presents the design and development of a data sanitisation and redaction solution for a Cyber Threat Intelligence (CTI) sharing platform. The Data Sanitisation and Redaction Plugin has been designed to operate as a plugin for the ECHO Project’s Early Warning System platform and to enhance its operational capabilities during information sharing. The plugin aims to provide automated security and privacy-based controls for CTI sharing over a ticketing system. The plugin has been successfully tested and the results are presented in this paper.
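    The sketch below illustrates the kind of rule-based redaction such a plugin might apply before a CTI ticket is shared. The patterns and replacement tokens are illustrative assumptions; the ECHO plugin's actual sanitisation rules are not shown here.

    import re

    REDACTION_RULES = [
        (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[REDACTED_EMAIL]"),
        (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "[REDACTED_IP]"),
        (re.compile(r"\b(?:\+?\d[\d\s-]{8,}\d)\b"), "[REDACTED_PHONE]"),
    ]

    def sanitise(text: str) -> str:
        """Apply each redaction rule in turn and return the cleaned text."""
        for pattern, token in REDACTION_RULES:
            text = pattern.sub(token, text)
        return text

    ticket = "Analyst j.doe@example.org observed beaconing from 203.0.113.42."
    print(sanitise(ticket))
    # Analyst [REDACTED_EMAIL] observed beaconing from [REDACTED_IP].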

    Privacy Essentials!

    A privacy framework that guides the user through a decision tree artefact, not only to produce the documentation pack required to satisfy UK GDPR compliance, but also to highlight key responsibilities when handling sensitive data. The artefact has been evaluated and the initial survey results are encouraging.

    Privacy Essentials

    Following a series of legislative changes around privacy over the past 25 years, this study highlights data protection regulations and the complexities of applying these frameworks. To address this, we created a privacy framework to guide organisations through the steps they need to undertake to achieve compliance with the UK GDPR, highlighting the existing privacy frameworks for best practice and the requirements of the Information Commissioner's Office. We applied our framework to the UK charity sector; to account for the specific nuances that working in a charity brings, we worked closely with local charities to understand their requirements, and interviewed privacy experts to develop a framework that is readily accessible and provides genuine value. Feeding these results into our privacy framework, we developed a decision tree artefact for compliance. The artefact has been tested against black-box tests, System Usability Tests and UX Honeycomb tests. Results show that Privacy Essentials! provides the foundation of a data protection management framework and offers organisations the catalyst to start, enhance, or even validate a solid and effective data privacy programme.
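    The sketch below shows one way a yes/no decision-tree node of the kind the artefact is built around could be modelled. The question wording and recommended actions are illustrative stand-ins, not the Privacy Essentials! content.

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Node:
        question: str
        if_yes: Union["Node", str]   # next node, or a recommended action/document
        if_no: Union["Node", str]

    tree = Node(
        "Do you process personal data of UK residents?",
        if_yes=Node(
            "Do you process special category data?",
            if_yes="Complete a DPIA and record your Article 9 condition.",
            if_no="Record your lawful basis and maintain a processing register.",
        ),
        if_no="UK GDPR documentation pack not required for this activity.",
    )

    def walk(node: Union[Node, str], answers: list[bool]) -> str:
        """Follow the recorded yes/no answers until an action is reached."""
        for answer in answers:
            if isinstance(node, str):
                break
            node = node.if_yes if answer else node.if_no
        return node if isinstance(node, str) else node.question

    print(walk(tree, [True, False]))
    # Record your lawful basis and maintain a processing register.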