10 research outputs found

    Privacy-Preserving Group Discounts

    Get PDF
    How can a buyer legitimately benefit from group discounts while preserving their privacy? We show how this can be achieved when buyers can use their own computing device (e.g. smartphone or computer) to perform a purchase. Specifically, we present a protocol for privacy-preserving group discounts. The protocol allows a group of buyers to prove how many they are without disclosing their identities. Coupled with an anonymous payment system, this makes group discounts compatible with buyer privacy. This work was partly funded by Google through a Faculty Research Award to the first author, who is also partially supported by the Government of Catalonia through an ICREA Acadèmia Prize. The following partial supports are also gratefully acknowledged: the Spanish Government under projects TIN2011-27076-C03-01 “CO-PRIVACY” and CONSOLIDER INGENIO 2010 CSD2007-00004 “ARES”, and the European Commission under FP7 projects “DwB” and “Inter-Trust”.
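    A minimal sketch of the flavor of such a proof of group size, assuming a trusted issuer and plain RSA blind signatures rather than the paper's actual scheme; key sizes are toys, and the one-token-per-buyer restriction a real protocol would enforce is not modeled:

```python
import hashlib, secrets

# Toy RSA key for the token issuer; real deployments need >= 2048-bit keys.
p, q = 2**61 - 1, 2**64 - 59
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def h(token: bytes) -> int:
    # Hash a token into the RSA message space.
    return int.from_bytes(hashlib.sha256(token).digest(), "big") % n

def blind(token: bytes):
    # Buyer blinds h(token) so the issuer signs without seeing it.
    r = secrets.randbelow(n - 2) + 2
    return r, h(token) * pow(r, e, n) % n

def issue(blinded: int) -> int:
    # Issuer signs blindly; it cannot link the signature to the token.
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    # Buyer recovers an ordinary RSA signature on h(token).
    return blind_sig * pow(r, -1, n) % n

# Four buyers each obtain one unlinkable signed token ...
tokens = []
for _ in range(4):
    t = secrets.token_bytes(16)
    r, b = blind(t)
    tokens.append((t, unblind(issue(b), r)))

# ... and the vendor counts distinct valid tokens: group size, no identities.
size = len({t for t, s in tokens if pow(s, e, n) == h(t)})
print("accredited group size:", size)  # 4
```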

    Cryptographic protocols for privacy-aware and secure e-commerce

    Get PDF
    This thesis is about the research and development of privacy-enhancing techniques to empower consumers of electronic commerce services with control over how much private information they want to share with service providers. We make use of known and newly developed technologies to protect users against excessive data collection by service providers in specific applications. Namely, we use a novel identity-based dynamic threshold signature scheme and a novel key management scheme to implement a group size accreditation mechanism, which reveals nothing about the group members except the size of the group, to support group discounts. Next, we use a novel construction based on blind signatures, zero-knowledge proofs and generalization techniques to implement a privacy-preserving loyalty program. Finally, we use multiparty computation protocols to implement two implicit authentication mechanisms that do not disclose private information about the users to the service providers.
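    To make the generalization component of the loyalty-program construction concrete, a small sketch with a hypothetical item taxonomy (the hierarchy and level semantics are illustrative, not taken from the thesis):

```python
# Hypothetical taxonomy: item -> [level-1 ancestor, level-2 ancestor].
TAXONOMY = {
    "organic whole milk": ["dairy", "groceries"],
    "cheddar cheese":     ["dairy", "groceries"],
    "sci-fi paperback":   ["books", "leisure"],
    "yoga mat":           ["sports", "leisure"],
}

def generalize(purchases, level):
    """Replace each item by its ancestor at `level` (0 = exact item)."""
    out = []
    for item in purchases:
        out.append(item if level == 0 else TAXONOMY[item][level - 1])
    return sorted(set(out))

history = ["organic whole milk", "cheddar cheese", "yoga mat"]
print(generalize(history, 0))  # exact purchases (no privacy)
print(generalize(history, 1))  # ['dairy', 'sports']
print(generalize(history, 2))  # ['groceries', 'leisure']
```

    The coarser the level, the less the vendor learns about the exact purchase history while still being able to grant interest-based loyalty rewards.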

    GRAIMATTER Green Paper:Recommendations for disclosure control of trained Machine Learning (ML) models from Trusted Research Environments (TREs)

    Get PDF
    TREs are widely and increasingly used to support statistical analysis of sensitive data across a range of sectors (e.g., health, police, tax and education), as they enable secure and transparent research whilst protecting data confidentiality. There is an increasing desire from academia and industry to train AI models in TREs. The field of AI is developing quickly, with applications including spotting human errors, streamlining processes, task automation and decision support. These complex AI models require more information to describe and reproduce, increasing the possibility that sensitive personal data can be inferred from such descriptions. TREs do not have mature processes and controls against these risks. This is a complex topic, and it is unreasonable to expect all TREs to be aware of all risks or to expect TRE researchers to have addressed these risks in AI-specific training. GRAIMATTER has developed a draft set of usable recommendations for TREs to guard against the additional risks when disclosing trained AI models from TREs. The development of these recommendations has been funded by the GRAIMATTER UKRI DARE UK sprint research project. This version of our recommendations was published at the end of the project in September 2022. During the course of the project, we identified many areas for future investigation to expand and test these recommendations in practice, so we expect this document to evolve over time. The GRAIMATTER DARE UK sprint project has also developed a minimum viable product (MVP), a suite of attack simulations that TREs can apply, available at https://github.com/AI-SDC/AI-SDC. If you would like to provide feedback or would like to learn more, please contact Smarti Reel ([email protected]) and Emily Jefferson ([email protected]). The summary of our recommendations for a general public audience can be found at DOI: 10.5281/zenodo.708951
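    To make the inference risk concrete, here is a self-contained sketch of one of the simplest attacks such simulations cover, a loss-threshold membership-inference attack against a deliberately overfitted model; all data are synthetic and this is illustrative, not the AI-SDC tooling itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification task; labels depend mostly on feature 0.
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)
X_in, y_in = X[:100], y[:100]    # training records ("members")
X_out, y_out = X[100:], y[100:]  # unseen records ("non-members")

def sigmoid(z):
    ez = np.exp(-np.abs(z))  # overflow-safe formulation
    return np.where(z >= 0, 1 / (1 + ez), ez / (1 + ez))

# Deliberately overfit an unregularized logistic model.
w = np.zeros(20)
for _ in range(5000):
    w -= 0.5 * X_in.T @ (sigmoid(X_in @ w) - y_in) / len(y_in)

def loss(X, y):
    p = np.clip(sigmoid(X @ w), 1e-9, 1 - 1e-9)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Attack: guess "member" whenever the per-record loss is below a threshold.
thr = loss(X_in, y_in).mean()
tpr = (loss(X_in, y_in) < thr).mean()    # members correctly flagged
fpr = (loss(X_out, y_out) < thr).mean()  # non-members wrongly flagged
print(f"membership advantage (TPR - FPR): {tpr - fpr:.2f}")
```

    A positive advantage means the released model leaks which records were in the training data, which is exactly the kind of disclosure a TRE's output checks need to catch.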

    Dynamic group size accreditation and group discounts preserving anonymity

    No full text
    Group discounts are used by vendors and authorities to encourage certain behaviors. For example, group discounts can be applied to highway tolls to encourage ride sharing, or by museum managers to ensure a minimum number of visitors and plan guided tours more efficiently. We show how group discounts can be offered without forcing customers to surrender their anonymity, as long as customers are equipped with some form of autonomous computing device (e.g. smartphone, tablet or computer). Specifically, we present a protocol suite for privacy-aware group discounts that allows a group of customers to prove how many they are without disclosing their identities. The group does not need to be a stable one, but can have been formed on the fly. Coupled with an anonymous payment system, this makes group discounts compatible with buyer privacy (in this case, buyer anonymity). We present a detailed complexity analysis, give simulation results, and report on a pilot implementation.
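    A back-of-the-envelope simulation of the linear verification cost one would expect on the vendor side; toy RSA signature checks stand in for the suite's actual cryptographic operations:

```python
import hashlib, secrets, time

# Toy RSA parameters standing in for the real scheme's operations.
p, q = 2**61 - 1, 2**64 - 59
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def member():
    # One signed proof contribution per group member.
    m = int.from_bytes(hashlib.sha256(secrets.token_bytes(16)).digest(), "big") % n
    return m, pow(m, d, n)

for k in (100, 200, 400, 800):       # groups can be formed on the fly
    contributions = [member() for _ in range(k)]
    t0 = time.perf_counter()
    ok = all(pow(s, e, n) == m for m, s in contributions)
    print(f"k={k:4d}: all valid={ok}, verified in {time.perf_counter() - t0:.4f} s")
```

    Doubling the group size roughly doubles the verification time, consistent with one check per member.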

    Cálculo privado de distancias entre funciones de preferencia

    No full text
    Consider the following scenario: two entities want to know how similar they are to each other. Their profiles can be described through preference functions, and they would like to compute the distance between these functions without having to reveal them. This scenario seems especially relevant in the context of social, political or business networks, where one wishes to find friends or partners with similar interests without having to disclose those interests to anyone. In this work, we provide protocols that solve the above problem for several types of functions. Moreover, experiments show that these computations can be performed privately and efficiently, without significantly reducing the accuracy of the computed distances, thus preserving their utility. This work was partly funded by the Generalitat de Catalunya under grant 2009 SGR 1135, by the Spanish Government through projects TIN2011-27076-C01-01 “CO-PRIVACY”, TIN2012-32757 “ICWT”, IPT-2012-0603-430000 “BallotNext” and CONSOLIDER INGENIO 2010 CSD2007-00004 “ARES”, and by the European Commission under FP7 projects “DwB” and “Inter-Trust”. J. Domingo-Ferrer is partly supported as an ICREA Acadèmia researcher by the Generalitat de Catalunya.
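    A minimal sketch of one classic approach to this problem setting, computing a squared Euclidean distance between two private vectors under additively homomorphic (Paillier) encryption; this illustrates the problem, not the paper's own protocols, and the hardcoded primes are insecure toys:

```python
import secrets

# Toy Paillier keypair; real keys need >= 2048-bit moduli.
p, q = 2**61 - 1, 2**64 - 59
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1)
mu = pow(lam, -1, n)

def enc(m: int) -> int:
    r = secrets.randbelow(n - 2) + 2
    return (1 + m * n) * pow(r, n, n2) % n2

def dec(c: int) -> int:
    return (pow(c, lam, n2) - 1) // n * mu % n

# Private integer preference vectors of Alice and Bob.
x = [3, 1, 4, 1, 5]
y = [2, 7, 1, 8, 2]

# Alice -> Bob: Enc(x_i) for each i, plus Enc(||x||^2).
cx = [enc(v) for v in x]
c = enc(sum(v * v for v in x))

# Bob assembles Enc(||x||^2 - 2<x,y> + ||y||^2) homomorphically.
for ci, yi in zip(cx, y):
    c = c * pow(ci, (-2 * yi) % n, n2) % n2  # adds -2 * x_i * y_i
c = c * enc(sum(v * v for v in y)) % n2      # adds ||y||^2, rerandomizes

# Bob -> Alice: only the squared distance is revealed on decryption.
d2 = dec(c)
assert d2 == sum((a - b) ** 2 for a, b in zip(x, y))
print("squared distance:", d2)  # 104
```

    Neither party sees the other's vector: Bob only handles ciphertexts, and Alice decrypts nothing but the final (rerandomized) distance.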

    Defending against the Label-flipping Attack in Federated Learning

    Full text link
    Federated learning (FL) provides autonomy and privacy by design to participating peers, who cooperatively build a machine learning (ML) model while keeping their private data on their devices. However, that same autonomy opens the door for malicious peers to poison the model by conducting either untargeted or targeted poisoning attacks. The label-flipping (LF) attack is a targeted poisoning attack in which the attackers poison their training data by flipping the labels of some examples from one class (i.e., the source class) to another (i.e., the target class). Unfortunately, this attack is easy to perform, hard to detect, and negatively impacts the performance of the global model. Existing defenses against LF are limited by assumptions on the distribution of the peers' data and/or do not perform well with high-dimensional models. In this paper, we deeply investigate the LF attack behavior and find that the contradicting objectives of attackers and honest peers on the source class examples are reflected in the parameter gradients corresponding to the neurons of the source and target classes in the output layer, making those gradients good discriminative features for attack detection. Accordingly, we propose a novel defense that first dynamically extracts those gradients from the peers' local updates, then clusters the extracted gradients, analyzes the resulting clusters, and filters out potentially bad updates before model aggregation. Extensive empirical analysis on three data sets shows the proposed defense's effectiveness against the LF attack regardless of the data distribution or model dimensionality. The proposed defense also outperforms several state-of-the-art defenses by offering lower test error, higher overall accuracy, higher source class accuracy, lower attack success rate, and higher stability of the source class accuracy.
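    A hedged sketch of the gradient-clustering idea, not the authors' released implementation: peers' output-layer gradient rows for hypothetical source/target classes are extracted, clustered with a tiny 2-means, and the minority cluster is dropped before aggregation; all gradients below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
n_peers, n_hidden = 20, 32
attackers = {0, 1, 2}  # hypothetical label-flipping peers

# Honest peers' gradient rows for the source/target output neurons share a
# common direction; label flippers push the opposite way on those rows.
base = rng.normal(size=(2, n_hidden))
feats = np.empty((n_peers, 2 * n_hidden))
for i in range(n_peers):
    sign = -1.0 if i in attackers else 1.0
    feats[i] = (sign * base + rng.normal(0, 0.3, size=(2, n_hidden))).ravel()

# Tiny 2-means over the extracted features, seeded with the farthest pair.
pd = np.linalg.norm(feats[:, None] - feats[None], axis=2)
i0, j0 = np.unravel_index(pd.argmax(), pd.shape)
centers = feats[[i0, j0]].copy()
for _ in range(20):
    labels = np.linalg.norm(feats[:, None] - centers[None], axis=2).argmin(axis=1)
    centers = np.array([feats[labels == k].mean(axis=0) for k in (0, 1)])

# Keep the majority cluster; aggregate only those peers' updates.
bad = np.bincount(labels, minlength=2).argmin()
kept = np.flatnonzero(labels != bad)
print("peers kept for aggregation:", kept.tolist())  # attackers 0-2 dropped
```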

    Literatur

    No full text