5 research outputs found
Physical Randomness Extractors: Generating Random Numbers with Minimal Assumptions
How to generate provably true randomness with minimal assumptions? This
question is important not only for the efficiency and the security of
information processing, but also for understanding how extremely unpredictable
events are possible in Nature. All current solutions require special structures
in the initial source of randomness, or a certain independence relation among
two or more sources. Both types of assumptions are impossible to test and
difficult to guarantee in practice. Here we show how this fundamental limit can
be circumvented by extractors that base security on the validity of physical
laws and extract randomness from untrusted quantum devices. In conjunction with
the recent work of Miller and Shi (arXiv:1402.0489), our physical randomness
extractor uses just a single, general weak source, produces an arbitrarily
long and near-uniform output with close-to-optimal error, is secure against
all-powerful quantum adversaries, and tolerates a constant level of
implementation imprecision. The source necessarily needs to be unpredictable to
the devices, but otherwise can even be known to the adversary.
Our central technical contribution, the Equivalence Lemma, provides a general
principle for proving composition security of untrusted-device protocols. It
implies that unbounded randomness expansion can be achieved simply by
cross-feeding any two expansion protocols. In particular, such an unbounded
expansion can be made robust, which was not previously known. Another
significant implication is that it enables secure randomness generation and key
distribution using public randomness, such as that broadcast by NIST's
Randomness Beacon. Our protocol also provides a method for refuting local
hidden variable theories under a weak assumption on the available randomness
for choosing the measurement settings.
Comment: A substantial rewriting of V2, especially on model definitions. An
abstract model of robustness is added and the robustness claim in V2 is made
rigorous. Focuses on quantum security. A future update is planned to address
non-signaling security.
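The cross-feeding idea above can be illustrated with a toy sketch. The hash-based `expand` function below is a deterministic placeholder standing in for an untrusted-device expansion protocol (it is not the actual quantum protocol); the point is only the composition pattern, in which two expanders alternately seed each other so the output length grows without bound:

```python
import hashlib

def expand(seed: bytes, tag: bytes) -> bytes:
    """Placeholder expander: deterministically doubles the seed length.
    A real protocol would query untrusted quantum devices here."""
    out = b""
    counter = 0
    while len(out) < 2 * len(seed):
        out += hashlib.sha256(tag + counter.to_bytes(4, "big") + seed).digest()
        counter += 1
    return out[:2 * len(seed)]

def cross_feed(seed: bytes, rounds: int) -> bytes:
    """Alternate two expanders (tagged A and B), feeding each one's
    output back in as the other's seed."""
    current = seed
    for i in range(rounds):
        current = expand(current, b"A" if i % 2 == 0 else b"B")
    return current

out = cross_feed(b"weak-seed", rounds=4)  # 9 bytes -> 9 * 2**4 = 144 bytes
```

Each round doubles the length, so a short seed yields arbitrarily long output after enough alternations; the Equivalence Lemma is what makes this composition secure for real untrusted-device protocols.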
EmPoWeb: Empowering Web Applications with Browser Extensions
Browser extensions are third-party programs, tightly integrated with browsers,
where they execute with elevated privileges in order to provide users with
additional functionality. Unlike web applications, extensions are not subject
to the Same Origin Policy (SOP) and therefore can read and write user data on
any web application. They also have access to sensitive user information
including browsing history, bookmarks, cookies and list of installed
extensions. Extensions have persistent storage in which they can keep data,
and they can trigger the download of arbitrary files onto the user's device. For
security reasons, browser extensions and web applications are executed in
separate contexts. Nonetheless, in all major browsers, extensions and web
applications can interact by exchanging messages. Through these communication
channels, a web application can exploit extension privileged capabilities and
thereby access and exfiltrate sensitive user information. In this work, we
analyzed the communication interfaces exposed to web applications by Chrome,
Firefox and Opera browser extensions. As a result, we identified many
extensions that web applications can exploit to access privileged capabilities.
Through extensions' APIs, web applications can bypass the SOP, access user cookies,
browsing history, bookmarks, list of installed extensions, extensions storage,
and download arbitrary files on the user's device. Our results demonstrate that
the communications between browser extensions and web applications pose serious
security and privacy threats to browsers, web applications and more importantly
to users. We discuss countermeasures and proposals, and believe that our study
and in particular the tool we used to detect and exploit these threats, can be
used as part of extensions review process by browser vendors to help them
identify and fix the aforementioned problems in extensions.Comment: 40th IEEE Symposium on Security and Privacy May 2019 Application
security; Attacks and defenses; Malware and unwanted software; Mobile and Web
security and privacy; Privacy technologies and mechanism
Enhancing Privacy and Fairness in Search Systems
Following a period of expedited progress in the capabilities of digital systems, society has begun to realize that systems designed to assist people in various tasks can also harm individuals and society. Mediating access to information and explicitly or implicitly ranking people in increasingly many applications, search systems have a substantial potential to contribute to such unwanted outcomes. Since they collect vast amounts of data about both searchers and search subjects, they have the potential to violate the privacy of both of these groups of users. Moreover, in applications where rankings influence people's economic livelihood outside of the platform, such as sharing-economy or hiring-support websites, search engines wield immense economic power over their users in that they control user exposure in ranked results.
This thesis develops new models and methods broadly covering different aspects of privacy and fairness in search systems for both searchers and search subjects. Specifically, it makes the following contributions:
(1) We propose a model for computing individually fair rankings where search subjects get exposure proportional to their relevance. The exposure is amortized over time using constrained optimization to overcome searcher attention biases while preserving ranking utility.
(2) We propose a model for computing sensitive search exposure where each subject gets to know the sensitive queries that lead to her profile in the top-k search results. The problem of finding exposing queries is technically modeled as reverse nearest neighbor search, followed by a weakly supervised learning-to-rank model ordering the queries by privacy sensitivity.
(3) We propose a model for quantifying privacy risks from textual data in online communities. The method builds on a topic model where each topic is annotated by a crowdsourced sensitivity score, and privacy risks are associated with a user's relevance to sensitive topics. We propose relevance measures capturing different dimensions of user interest in a topic and show how they correlate with human risk perceptions.
(4) We propose a model for privacy-preserving personalized search where search queries of different users are split and merged into synthetic profiles. The model mediates the privacy-utility trade-off by keeping semantically coherent fragments of search histories within individual profiles, while trying to minimize the similarity of any of the synthetic profiles to the original user profiles.
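Contribution (1)'s exposure amortization can be sketched with a toy greedy heuristic: at each ranking round, subjects are ordered by their exposure deficit, i.e. the gap between the exposure share their relevance entitles them to and the share they have accumulated so far. This greedy loop and the logarithmic position-bias weights are illustrative stand-ins for the thesis's constrained-optimization formulation:

```python
import math

def position_weight(rank: int) -> float:
    # Common position-bias assumption: attention decays logarithmically
    # with rank (rank 0 is the top slot).
    return 1.0 / math.log2(rank + 2)

def amortized_ranking(relevance, rounds):
    """Each round, order subjects by exposure deficit:
    relevance-entitled share minus realized share of exposure."""
    n = len(relevance)
    target_share = [r / sum(relevance) for r in relevance]
    exposure = [0.0] * n
    rankings = []
    for _ in range(rounds):
        total = sum(exposure) or 1.0
        deficit = [target_share[i] - exposure[i] / total for i in range(n)]
        order = sorted(range(n), key=lambda i: deficit[i], reverse=True)
        for rank, i in enumerate(order):
            exposure[i] += position_weight(rank)
        rankings.append(order)
    return rankings, exposure

rankings, exposure = amortized_ranking([0.4, 0.35, 0.25], rounds=6)
# The top slot rotates over time: subject 1's deficit after round one
# pushes it above subject 0 in round two.
```

Because deficits are recomputed from cumulative exposure, no subject monopolizes the top slot, which is the amortization effect the contribution describes.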
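Contribution (2)'s reverse nearest neighbor formulation can be sketched in a few lines: a query "exposes" a subject if the subject's profile appears in that query's top-k results, so the exposing queries for a subject are exactly the reverse k-NN of her profile. The vectors and names below are made-up toy data, and the brute-force scan stands in for a proper reverse-NN index:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def top_k(query_vec, subjects, k):
    # Rank subject profiles by similarity to the query vector.
    return sorted(subjects, key=lambda s: cosine(query_vec, subjects[s]),
                  reverse=True)[:k]

def exposing_queries(subject, queries, subjects, k):
    """Reverse k-NN: every query whose top-k result list contains `subject`."""
    return [q for q in queries if subject in top_k(queries[q], subjects, k)]

subjects = {"alice": [1, 0, 1], "bob": [0, 1, 1], "carol": [1, 1, 0]}
queries = {"q_health": [1, 0, 1], "q_travel": [0, 1, 0], "q_mixed": [1, 1, 1]}
exposed = exposing_queries("alice", queries, subjects, k=1)
```

In the thesis the candidate queries found this way are then ordered by a weakly supervised learning-to-rank model; here `exposed` simply contains every query that surfaces alice's profile at rank 1, such as `q_health`.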
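Contribution (3)'s risk score has a simple shape: a user's privacy risk is the sensitivity-weighted sum of her relevance to each topic. The topic names, relevance values, and sensitivity scores below are invented for illustration; in the thesis, relevance comes from a topic model over the user's posts and sensitivities are crowdsourced:

```python
def privacy_risk(topic_relevance, topic_sensitivity):
    """Aggregate risk: a user's relevance to each topic, weighted by that
    topic's crowdsourced sensitivity score."""
    return sum(topic_relevance[t] * topic_sensitivity.get(t, 0.0)
               for t in topic_relevance)

sensitivity = {"health": 0.9, "finance": 0.8, "sports": 0.1}
user = {"health": 0.5, "sports": 0.5}    # half the posts touch health topics
other = {"sports": 0.9, "finance": 0.1}  # mostly sports posts

risk_user = privacy_risk(user, sensitivity)    # 0.5*0.9 + 0.5*0.1 = 0.50
risk_other = privacy_risk(other, sensitivity)  # 0.9*0.1 + 0.1*0.8 = 0.17
```

The user with substantial relevance to the sensitive "health" topic scores far higher than the sports-focused user, which is the intended behavior of a sensitivity-weighted relevance measure.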
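Contribution (4)'s split-and-merge idea can be sketched as a two-step toy procedure: first group queries into coherent fragments, then distribute whole fragments across synthetic profiles so that no single profile resembles the full original history. Grouping by shared terms is a crude stand-in for the semantic coherence used in the thesis, and round-robin assignment stands in for the similarity-minimizing optimization:

```python
from itertools import cycle

def fragment(queries):
    """Group queries that share a term into one coherent fragment
    (toy proxy for semantic coherence)."""
    fragments = []
    for q in queries:
        terms = set(q.split())
        for frag in fragments:
            if terms & frag["terms"]:
                frag["queries"].append(q)
                frag["terms"] |= terms
                break
        else:
            fragments.append({"queries": [q], "terms": terms})
    return fragments

def split_profiles(queries, n_profiles):
    """Assign whole fragments to synthetic profiles round-robin, keeping
    coherent fragments intact while spreading the history out."""
    profiles = [[] for _ in range(n_profiles)]
    slots = cycle(range(n_profiles))
    for frag in fragment(queries):
        profiles[next(slots)].extend(frag["queries"])
    return profiles

history = ["flu symptoms", "flu treatment", "cheap flights",
           "flights paris", "python tutorial"]
profiles = split_profiles(history, n_profiles=3)
```

Each synthetic profile keeps a semantically coherent slice (here, the two flu queries stay together) while the union of profiles, not any single one, reconstructs the original history, which is the privacy-utility trade-off the contribution targets.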
The models are evaluated using information retrieval techniques and user studies over a variety of datasets, ranging from query logs, through social media and community question answering postings, to item listings from sharing-economy platforms.
Deconstructing the right to privacy considering the impact of fashion recommender systems on an individual's autonomy and identity
Computing "fashion" into a system of algorithms that personalise an individual's shopping journey is not without risks to the way we express, assess, and develop aspects of our identity. This study uses an interdisciplinary research approach to examine how an individual's interaction with algorithms in the fashion domain shapes our understanding of an individual's privacy, autonomy, and identity. Using fashion theory and psychology, I make two contributions to the meaning of privacy to protect notions of identity and autonomy, and develop a more nuanced perspective on this concept using "fashion identity". First, a more varied outlook on privacy allows us to examine how algorithmic constructions impose inherent reductions on individual sense-making in developing and reinventing personal fashion choices. A "right to not be reduced" allows us to focus on the individual's practice of identity and choice with regard to the algorithmic entities incorporating imperfect semblances of the personal and social aspects of fashion. Second, I submit that we need a new perspective on the right to privacy to address the risks of algorithmic personalisation systems in fashion. There are gaps in the law regarding capturing the impact of algorithmic personalisation systems on an individual's inference of knowledge about fashion, as well as the associations of fashion applied to individual circumstances. Focusing on the case law of the European Court of Human Rights (ECtHR) and the General Data Protection Regulation (GDPR), as well as aspects of EU non-discrimination and consumer law, I underline that we need to develop a proactive approach to the right to privacy entailing the incorporation of new values. I define these values to include an individual's perception and self-relationality, describing the impact of algorithmic personalisation systems on an individual's inference of knowledge about fashion, as well as the associations of fashion applied to individual circumstances.
The study concludes with recommendations regarding the use of AI techniques in fashion using an international human rights approach. I argue that the "right to not be reduced" requires new interpretative guidance informing international human rights standards, including Article 17 of the International Covenant on Civil and Political Rights (ICCPR). Moreover, I consider that the "right to not be reduced" requires us to consider novel choices that inform the design and deployment of algorithmic personalisation systems in fashion, considering the UN Guiding Principles on Business and Human Rights and the EU Commission's Proposal for an AI Act.