
    Application of Machine Learning Methods to De-anonymization (Gépi tanulási módszerek alkalmazása deanonimizálásra)

    Numerous datasets are available to us that carry significant business and research potential. However, consider for example the health data collected by wearable devices: alongside their utility, a major risk factor is the violation of privacy, which is mitigated by, among other measures, anonymization algorithms. In this study, we review algorithms specialized in "reversing" anonymization, the so-called de-anonymization methods, and in particular a special and relatively new segment of them in which machine learning techniques are applied to increase robustness and efficiency. We also point out the similarity between privacy-violating attacks carried out for business purposes and security applications: depending on the context, the same algorithm can work against privacy under a security justification.
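
    To make the surveyed attack class concrete, the sketch below shows, under assumed pairwise-matching features and toy data, how a machine-learning classifier could rank candidate matches between an anonymized record and auxiliary records. All feature names, thresholds, and the scikit-learn model choice are hypothetical illustrations, not a method taken from the surveyed papers.

# Minimal sketch (assumption: a pairwise-matching formulation) of ML-aided
# de-anonymization: a classifier scores candidate (anonymized record,
# auxiliary record) pairs; high scores suggest possible re-identification.
from sklearn.ensemble import RandomForestClassifier
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pairwise features, e.g. similarity of quasi-identifiers,
# overlap of behavioural traces, temporal correlation of activity.
X_train = rng.random((1000, 3))
y_train = (X_train.sum(axis=1) > 1.8).astype(int)  # toy labels: "same person"

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Rank auxiliary candidates for one anonymized record by match probability.
candidate_features = rng.random((50, 3))
scores = clf.predict_proba(candidate_features)[:, 1]
best_match = int(np.argmax(scores))
print(f"most likely re-identification: candidate {best_match}, score {scores[best_match]:.2f}")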

    Controlled Functional Encryption

    Motivated by privacy and usability requirements in various scenarios where existing cryptographic tools (like secure multi-party computation and functional encryption) are not adequate, we introduce a new cryptographic tool called Controlled Functional Encryption (C-FE). As in functional encryption, C-FE allows a user (client) to learn only certain functions of encrypted data, using keys obtained from an authority. However, we allow (and require) the client to send a fresh key request to the authority every time it wants to evaluate a function on a ciphertext. We obtain efficient solutions by carefully combining CCA2 secure public-key encryption (or rerandomizable RCCA secure public-key encryption, depending on the nature of security desired) with Yao's garbled circuits. Our main contributions in this work include developing and formally defining the notion of C-FE; designing theoretical and practical constructions of C-FE schemes achieving these definitions for specific and general classes of functions; and evaluating the performance of our constructions on various application scenarios.
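
    A minimal, non-cryptographic sketch of the C-FE interaction pattern described above: the client must contact the authority with a fresh key request for every function evaluation, so the authority can enforce a per-request policy. The class and method names are illustrative stand-ins, and the stubs evaluate functions in the clear instead of using CCA2-secure encryption and garbled circuits.

# Minimal interaction sketch of the Controlled Functional Encryption (C-FE)
# workflow. Real constructions combine CCA2-secure public-key encryption with
# Yao's garbled circuits; here the cryptography is replaced by trivial stubs so
# only the message flow is visible. All names are illustrative, not the paper's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Ciphertext:
    payload: dict          # stands in for an encryption under the authority's key

class Authority:
    """Holds the decryption capability and enforces a per-request policy."""
    def __init__(self, policy: Callable[[str], bool]):
        self.policy = policy

    def encrypt(self, data: dict) -> Ciphertext:
        return Ciphertext(payload=dict(data))   # stub: no real encryption

    def answer_key_request(self, ct: Ciphertext, f_name: str, f: Callable):
        # C-FE: the client must come back for EVERY function evaluation,
        # so the authority can apply its policy per request.
        if not self.policy(f_name):
            raise PermissionError(f"function {f_name!r} not allowed")
        # In the real scheme the authority would return a garbled circuit /
        # function key; the stub simply evaluates f on the plaintext.
        return f(ct.payload)

# Usage: the client may learn the average age but not the raw record.
authority = Authority(policy=lambda f_name: f_name in {"mean_age"})
ct = authority.encrypt({"ages": [34, 41, 29]})

mean_age = authority.answer_key_request(
    ct, "mean_age", lambda rec: sum(rec["ages"]) / len(rec["ages"]))
print(mean_age)                                   # 34.666...
# authority.answer_key_request(ct, "raw_dump", lambda rec: rec)  # -> PermissionError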

    Smartphone owners need security advice. How can we ensure they get it?

    Computer users often behave insecurely, and do not take the precautions they ought to. One reads almost daily about people not protecting their devices, not making backups and falling for phishing messages. This impacts all of society since people increasingly carry a computer in their pockets: their smartphones. It could be that smartphone owners simply do not know enough about security threats or precautions. To address this, many official bodies publish advice online. For such a broadcast-type educational approach to work, two assumptions must be satisfied. The first is that people will deliberately seek out security-related information, and the second is that they will consult official sources to satisfy their information needs. Assumptions such as these ought to be verified, especially with the number of cyber attacks on the rise. We explored the validity of these assumptions by surveying students at a South African university, including both Computer Science and non-Computer Science students. The intention was to explore levels of awareness of smartphone security practice, the sources of advice the students used, and the impact of a Computer Science education on awareness and information-seeking behaviours. Awareness, it was found, was variable across the board but poorer amongst students without a formal computing education. Moreover, it became clear that students often found Facebook more helpful than public media for obtaining security advice.

    Using Metrics Suites to Improve the Measurement of Privacy in Graphs

    Social graphs are widely used in research (e.g., epidemiology) and business (e.g., recommender systems). However, sharing these graphs poses privacy risks because they contain sensitive information about individuals. Graph anonymization techniques aim to protect individual users in a graph, while graph de-anonymization aims to re-identify users. The effectiveness of anonymization and de-anonymization algorithms is usually evaluated with privacy metrics. However, it is unclear how strong existing privacy metrics are when they are used in graph privacy. In this paper, we study 26 privacy metrics for graph anonymization and de-anonymization and evaluate their strength in terms of three criteria: monotonicity indicates whether the metric reports lower privacy for stronger adversaries; for within-scenario comparisons, evenness indicates whether metric values are spread evenly; and for between-scenario comparisons, shared value range indicates whether metrics use a consistent value range across scenarios. Our extensive experiments indicate that no single metric fulfills all three criteria perfectly. We therefore use methods from multi-criteria decision analysis to aggregate multiple metrics into a metrics suite, and we show that these metrics suites improve monotonicity compared to the best individual metric. This important result enables more monotonic, and thus more accurate, evaluations of new graph anonymization and de-anonymization algorithms.
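
    As a rough illustration of the metrics-suite idea, the sketch below min-max-normalises several hypothetical privacy metrics and combines them with a weighted sum, one simple instance of multi-criteria decision analysis; the paper's actual aggregation method and metric values may differ.

# Minimal sketch of combining several privacy metrics into a "metrics suite".
# A weighted sum over min-max-normalised metric values is shown as one simple
# instance of multi-criteria decision analysis (the authors' exact aggregation
# may differ). All metric names and values below are made up for illustration.
import numpy as np

def minmax_normalise(values: np.ndarray) -> np.ndarray:
    lo, hi = values.min(), values.max()
    return np.zeros_like(values) if hi == lo else (values - lo) / (hi - lo)

def metrics_suite(metric_values: dict[str, np.ndarray],
                  weights: dict[str, float]) -> np.ndarray:
    """Aggregate per-adversary scores of several metrics into one suite score."""
    total = sum(weights.values())
    return sum(weights[name] * minmax_normalise(vals)
               for name, vals in metric_values.items()) / total

# Hypothetical scores of three metrics for adversaries of increasing strength;
# all three are "higher = more privacy". Monotonicity requires the aggregated
# privacy score to decrease as the adversary gets stronger.
adversary_strength = ["weak", "medium", "strong"]
metrics = {
    "anonymity_set_size": np.array([120.0, 80.0, 90.0]),   # not monotonic on its own
    "adversary_error":    np.array([0.45, 0.30, 0.10]),
    "entropy":            np.array([6.2, 5.1, 4.9]),
}
suite = metrics_suite(metrics, weights={"anonymity_set_size": 1.0,
                                        "adversary_error": 1.0,
                                        "entropy": 1.0})
print(dict(zip(adversary_strength, suite.round(2))))   # aggregated scores decrease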

    Hypothesis Testing Interpretations and Renyi Differential Privacy

    Differential privacy is a de facto standard in data privacy, with applications in the public and private sectors. A way to explain differential privacy that is particularly appealing to statisticians and social scientists is by means of its statistical hypothesis-testing interpretation. Informally, one cannot effectively test whether a specific individual has contributed her data by observing the output of a private mechanism: no test can have both high significance and high power. In this paper, we identify some conditions under which a privacy definition given in terms of a statistical divergence satisfies a similar interpretation. These conditions are useful to analyze the distinguishability power of divergences, and we use them to study the hypothesis testing interpretation of some relaxations of differential privacy based on Renyi divergence. This analysis also results in an improved conversion rule between these definitions and differential privacy.
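
    For context, the standard definitions behind this interpretation are recalled below (the paper's improved conversion rule itself is not reproduced); the last line states the previously known conversion from Renyi differential privacy to (epsilon, delta)-differential privacy.

% Renyi divergence of order \alpha > 1 between distributions P and Q:
\[
  D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1}
  \log \mathbb{E}_{x \sim Q}\!\left[ \left( \frac{P(x)}{Q(x)} \right)^{\alpha} \right]
\]
% A mechanism M is (\alpha, \varepsilon)-Renyi differentially private if, for all
% neighbouring datasets D and D':
\[
  D_\alpha\bigl( M(D) \,\|\, M(D') \bigr) \le \varepsilon
\]
% Hypothesis-testing view of (\varepsilon, \delta)-DP: any test of "individual
% absent" vs. "individual present" based on the mechanism's output, with type I
% error \alpha_{\mathrm{I}} and type II error \beta, must satisfy
\[
  \alpha_{\mathrm{I}} + e^{\varepsilon} \beta \ge 1 - \delta
  \qquad \text{and} \qquad
  e^{\varepsilon} \alpha_{\mathrm{I}} + \beta \ge 1 - \delta
\]
% Previously known conversion (Mironov, 2017): (\alpha, \varepsilon)-RDP implies
% \bigl(\varepsilon + \tfrac{\log(1/\delta)}{\alpha - 1},\, \delta\bigr)-DP for every \delta \in (0, 1).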

    A Survey on Routing in Anonymous Communication Protocols

    The Internet has undergone dramatic changes in the past 15 years, and now forms a global communication platform that billions of users rely on for their daily activities. While this transformation has brought tremendous benefits to society, it has also created new threats to online privacy, ranging from profiling of users for monetizing personal information to nearly omnipotent governmental surveillance. As a result, public interest in systems for anonymous communication has drastically increased. Several such systems have been proposed in the literature, each of which offers anonymity guarantees in different scenarios and under different assumptions, reflecting the plurality of approaches for how messages can be anonymously routed to their destination. Understanding this space of competing approaches, with their different guarantees and assumptions, is vital if users are to grasp the consequences of different design options. In this work, we survey previous research on designing, developing, and deploying systems for anonymous communication. To this end, we provide a taxonomy for clustering all prevalently considered approaches (including Mixnets, DC-nets, onion routing, and DHT-based protocols) with respect to their unique routing characteristics, deployability, and performance. This, in particular, encompasses the topological structure of the underlying network; the routing information that has to be made available to the initiator of the conversation; the underlying communication model; and performance-related indicators such as latency and communication layer. Our taxonomy and comparative assessment provide important insights into the differences between the existing classes of anonymous communication protocols, and they also help to clarify the relationship between the routing characteristics of these protocols and their performance and scalability.
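
    As a pointer to one of the protocol families in this taxonomy, the sketch below illustrates the layered-encryption idea behind onion routing: the sender wraps a message once per relay, and each hop removes exactly one layer. It uses the Fernet primitive from the Python cryptography package purely as a stand-in; deployed systems such as Tor use a different, circuit-based construction.

# Minimal sketch of onion routing's layered encryption. Each relay's key adds
# one layer; every hop removes exactly one layer, so no single relay sees both
# the source and the final plaintext destination. Fernet is used only for
# illustration, not as a model of any real deployed protocol.
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]   # entry, middle, exit

def wrap(message: bytes, keys) -> bytes:
    # Encrypt for the exit relay first, so the entry relay's layer is outermost.
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def unwrap_one_layer(onion: bytes, key: bytes) -> bytes:
    return Fernet(key).decrypt(onion)

onion = wrap(b"GET /index.html", relay_keys)
for key in relay_keys:
    onion = unwrap_one_layer(onion, key)      # each relay peels one layer
print(onion)                                   # b'GET /index.html'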

    Practical Multi-party Private Set Intersection Protocols

    Privacy-preserving techniques for processing sets of information have attracted the research community's attention in recent years due to society's increasing dependency on the availability of data at any time. One of the fundamental problems in set operations is known as Private Set Intersection (PSI). The problem requires two parties to compute the intersection between their sets while preserving correctness and privacy. Although several efficient two-party PSI protocols already exist, protocols for PSI in the multi-party setting (MPSI) currently scale poorly with a growing number of parties, even though many real-life scenarios involve more than two parties. This paper fills this gap by proposing two multi-party protocols based on Bloom filters and threshold homomorphic PKEs, which are secure in the semi-honest model. The first protocol is a multi-party PSI, whereas the second provides a more subtle functionality, threshold multi-party PSI (T-MPSI), which outputs items of the server that appear in at least some number of the other private sets. The protocols are inspired by the Davidson-Cid protocol based on Bloom filters. We compare our MPSI protocol against that of Kolesnikov et al., which is among the fastest known MPSI protocols. Our MPSI protocol performs better in terms of run time when the sets are small and the number of parties is large. Our T-MPSI protocol performs better than other existing works: the computational and communication complexities are linear in the number of elements in the largest set for a fixed number of colluding parties. We conclude that our MPSI and T-MPSI protocols are practical solutions suitable for emerging use-case scenarios with many parties, where previous solutions did not scale well.
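
    To illustrate the data structure at the core of these protocols, the sketch below builds plain (unencrypted) Bloom filters for each party, combines them with a bitwise AND, and lets the server test its own items against the combined filter. The threshold homomorphic encryption layer that hides the filters in the actual protocols is deliberately omitted, and all parameters are toy values.

# Minimal, non-cryptographic sketch of the Bloom-filter idea behind MPSI:
# each client encodes its set in a Bloom filter, the filters are ANDed, and
# the server keeps the items of its own set that hit the combined filter.
import hashlib

M_BITS = 1024          # filter length
K_HASHES = 4           # hash functions per item

def positions(item: str):
    """Yield the K_HASHES bit positions an item maps to."""
    for i in range(K_HASHES):
        digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
        yield int(digest, 16) % M_BITS

def bloom(items):
    bits = [0] * M_BITS
    for item in items:
        for pos in positions(item):
            bits[pos] = 1
    return bits

party_sets = [{"alice", "bob", "carol"}, {"bob", "carol", "dave"}, {"carol", "bob"}]
filters = [bloom(s) for s in party_sets[1:]]                    # clients' filters
combined = [all(f[i] for f in filters) for i in range(M_BITS)]  # bitwise AND

# The "server" (first party) keeps an item if all its positions are set.
intersection = {x for x in party_sets[0]
                if all(combined[p] for p in positions(x))}
print(intersection)   # likely {'bob', 'carol'}; Bloom filters may over-approximate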

    “I thought you were okay”: Participatory Design with Young Adults to Fight Multiparty Privacy Conflicts in Online Social Networks

    While sharing multimedia content on Online Social Networks (OSNs) has many benefits, exposing other people without obtaining their permission can cause Multiparty Privacy Conflicts (MPCs). Earlier studies developed technical solutions and dissuasive approaches to address MPCs. However, none of these studies involved OSN users who have experienced MPCs in the design process, possibly overlooking the valuable experience these individuals might have accrued. To fill this gap, we recruited participants specifically from this population of users and involved them in participatory design sessions aimed at ideating solutions to reduce the incidence of MPCs. To frame the activities of our participants, we borrowed terminology and concepts from a well-known framework used in justice systems. Over the course of several design sessions, our participants designed 10 solutions to mitigate MPCs. The designed solutions leverage different mechanisms, including preventing MPCs from happening, dissuading users from sharing, mending the harm, and educating users about community standards. We discuss the open design and research opportunities suggested by these solutions, and we contribute an ideal workflow that synthesizes the best of each solution. This study contributes to the innovation of privacy-enhancing technologies that limit the incidence of MPCs in OSNs.