    Measuring violence to end violence: mainstreaming gender

    Mainstreaming gender into the measurement of violence, in order to assist the development of the theory of change needed to support actions to end violence, is the aim of this paper. It addresses the division between gender-neutral and women-only strategies of data collection, which is failing to deliver the quality evidence needed to address the extent and distribution of violence. It develops a better operationalisation of the concepts of gender and violence for purposes of statistical analysis, and produces a checklist of criteria to assess the quality of statistics on gendered violence. It assesses the strengths and weaknesses of surveys linked to two contrasting theoretical perspectives: the Fundamental Rights Agency (FRA) Survey of Violence against Women, and the Crime Survey for England and Wales (CSEW). It shows how the FRA survey fails, and how the ONS has limited the potential of the CSEW. It offers a solution: a short questionnaire that is fit for purpose, and ways of analysing data that escape the current polarisation.

    Monitoring and evaluation of family intervention projects and services to March 2011

    On the Measurement of Privacy as an Attacker's Estimation Error

    A wide variety of privacy metrics have been proposed in the literature to evaluate the level of protection offered by privacy-enhancing technologies. Most of these metrics are specific to concrete systems and adversarial models, and are difficult to generalize or translate to other contexts. Furthermore, a better understanding of the relationships between the different privacy metrics is needed to enable a more grounded and systematic approach to measuring privacy, as well as to assist system designers in selecting the most appropriate metric for a given application. In this work we propose a theoretical framework for privacy-preserving systems, endowed with a general definition of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. We show that our framework permits interpreting and comparing a number of well-known metrics under a common perspective. The arguments behind these interpretations are based on fundamental results from information theory, probability theory, and Bayes decision theory.
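
    To make the paper's central definition concrete, here is a minimal numerical sketch (not the authors' formal framework): a Gaussian prior, an additive-noise release mechanism, and privacy read off as the mean squared error of the Bayes-optimal attacker. The prior, noise scale, and estimator are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Private value the system is designed to conceal: X ~ N(0, 1).
    x = rng.normal(loc=0.0, scale=1.0, size=n)

    # Privacy-enhancing mechanism: release Y = X + Gaussian noise.
    sigma = 2.0
    y = x + rng.normal(scale=sigma, size=n)

    # Bayes-optimal attacker in this Gaussian setup: the posterior-mean estimator.
    x_hat = y / (1.0 + sigma**2)

    # Privacy as the attacker's expected estimation error (here, MSE);
    # a larger error means the mechanism conceals X more effectively.
    mse = np.mean((x - x_hat) ** 2)
    print(f"attacker's MSE at sigma={sigma}: {mse:.3f}")  # ~ sigma^2 / (1 + sigma^2)

    Under this reading, different privacy metrics correspond to different attacker estimators and error functions, which is what lets the framework compare them on a common footing.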

    User's Privacy in Recommendation Systems Applying Online Social Network Data, A Survey and Taxonomy

    Recommender systems have become an integral part of many social networks, and they extract knowledge from a user's personal and sensitive data both explicitly, with the user's knowledge, and implicitly, without it. This trend has created major privacy concerns, as users are mostly unaware of what data is collected, how much of it is used, and how securely it is handled. In this context, several works have addressed privacy concerns around the use of online social network data by recommender systems. This paper surveys the main privacy concerns, privacy measurements, and privacy-preserving techniques used in large-scale online social networks and recommender systems. It builds on prior work on security, privacy preservation, statistical modeling, and datasets to provide an overview of the technical difficulties and problems associated with privacy preservation in online social networks.
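
    As one concrete illustration of the privacy-preserving techniques such surveys cover, the sketch below perturbs a user's ratings with Laplace noise before they are shared with a recommender, in the style of local differential privacy. The epsilon value, rating scale, and function name are illustrative assumptions, not a method taken from this survey.

    import numpy as np

    rng = np.random.default_rng(42)

    def privatize_ratings(ratings, epsilon=1.0, lo=1.0, hi=5.0):
        # Add Laplace noise calibrated to the rating range, then clip back
        # to the valid scale; smaller epsilon means stronger privacy.
        sensitivity = hi - lo  # the most one rating change can move the input
        noise = rng.laplace(scale=sensitivity / epsilon, size=len(ratings))
        return np.clip(np.asarray(ratings, dtype=float) + noise, lo, hi)

    # The user's true ratings never leave the device unperturbed.
    print(privatize_ratings([5, 3, 4, 1], epsilon=0.5))

    The trade-off is direct: smaller epsilon adds more noise, protecting the individual rating at the cost of recommendation quality.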

    Literature Overview - Privacy in Online Social Networks

    In recent years, Online Social Networks (OSNs) have become an important part of daily life for many. Users build explicit networks to represent their social relationships, either existing or new. Users also often upload and share a plethora of information related to their personal lives. The potential privacy risks of such behavior are often underestimated or ignored. For example, users often disclose personal information to a larger audience than intended. Users may even post information about others without their consent. A lack of experience and awareness among users, as well as a lack of proper tools and careful design in the OSNs themselves, perpetuates the situation. This paper aims to provide insight into such privacy issues and looks at OSNs, their associated privacy risks, and existing research into solutions. The final goal is to help identify the research directions for the Kindred Spirits project.

    Algorithms that Remember: Model Inversion Attacks and Data Protection Law

    Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms. The EU's recent General Data Protection Regulation (GDPR) has been seen as a core tool for achieving better governance of this area. While the GDPR does apply to the use of models in some limited situations, most of its provisions relate to the governance of personal data, while models have traditionally been seen as intellectual property. We present recent work from the information security literature on `model inversion' and `membership inference' attacks, which indicates that the process of turning training data into machine-learned systems is not one-way, and demonstrate how this could lead some models to be legally classified as personal data. Taking this as a probing experiment, we explore the different rights and obligations this would trigger and their utility, and posit future directions for algorithmic governance and regulation.
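
    To make the attack vector concrete, below is a minimal sketch of a loss-threshold membership inference attack of the kind this literature describes: records on which an overfitted model achieves unusually low loss are guessed to be training-set members. The dataset, model, and threshold rule are illustrative assumptions, not the specific attacks from the papers the authors cite.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

    # Deliberately overfit; overfitting widens the train/non-train loss gap.
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_in, y_in)

    def per_record_loss(model, X, y):
        # Negative log-likelihood of the true label for each record.
        p = model.predict_proba(X)[np.arange(len(y)), y]
        return -np.log(np.clip(p, 1e-12, None))

    loss_in = per_record_loss(model, X_in, y_in)     # training members
    loss_out = per_record_loss(model, X_out, y_out)  # non-members
    threshold = np.median(np.concatenate([loss_in, loss_out]))

    # Guess "member" when loss falls below the threshold; report balanced accuracy.
    acc = 0.5 * ((loss_in < threshold).mean() + (loss_out >= threshold).mean())
    print(f"membership inference accuracy: {acc:.2f} (0.5 = no leakage)")

    An attack accuracy noticeably above 0.5 means the model reveals which records it was trained on; it is exactly this kind of leakage that motivates treating some models as personal data.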