13,765 research outputs found

    FTA: Stealthy and Robust Backdoor Attack with Flexible Trigger on Federated Learning

    Full text link
    Current backdoor attacks against federated learning (FL) rely heavily on universal triggers or semantic patterns, which can be easily detected and filtered by defense mechanisms such as norm clipping and comparison of parameter divergences among local updates. In this work, we propose a new stealthy and robust backdoor attack with flexible triggers against FL defenses. To achieve this, we build a generative trigger function that learns to manipulate benign samples with an imperceptible, flexible trigger pattern while making the trigger pattern include the most significant hidden features of the attacker-chosen label. Moreover, our trigger generator keeps learning and adapting across rounds, allowing it to adjust to changes in the global model. By hiding the otherwise distinguishable difference (the mapping between the trigger pattern and the target label), we make our attack naturally stealthy. Extensive experiments on real-world datasets verify the effectiveness and stealthiness of our attack compared to prior attacks, on a decentralized learning framework with eight well-studied defenses.
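
    As a rough illustration of the idea in this abstract (a learned, sample-specific trigger rather than a fixed universal patch), the following PyTorch sketch pairs a small generator with a target-label objective. It is a minimal sketch under assumed names and hyperparameters (TriggerGenerator, epsilon, target_label); it is not the authors' implementation.

    ```python
    # Illustrative sketch only: a learned, bounded, sample-specific trigger trained
    # against the current global model, loosely following the idea in the abstract.
    # Names and hyperparameters are assumptions, not the paper's implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TriggerGenerator(nn.Module):
        """Maps a benign image to a small additive perturbation (the flexible trigger)."""
        def __init__(self, channels: int = 3, epsilon: float = 8 / 255):
            super().__init__()
            self.epsilon = epsilon
            self.net = nn.Sequential(
                nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, channels, 3, padding=1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # tanh bounds the trigger to [-epsilon, epsilon], keeping it imperceptible
            return x + self.epsilon * torch.tanh(self.net(x))

    def poison_step(generator, classifier, images, target_label, optimizer):
        """One generator update against the current global model (re-run each FL round)."""
        optimizer.zero_grad()
        triggered = generator(images).clamp(0.0, 1.0)
        targets = torch.full((images.size(0),), target_label,
                             dtype=torch.long, device=images.device)
        # push triggered samples toward the attacker-chosen label
        loss = F.cross_entropy(classifier(triggered), targets)
        loss.backward()
        optimizer.step()
        return loss.item()
    ```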

    Guided self-organisation in open distributed systems

    Get PDF
    [no abstract]

    ENHANCING PRIVACY IN MULTI-AGENT SYSTEMS

    Full text link
    Loss of privacy is becoming one of the greatest problems in computing. Indeed, most Internet users (who today number around 2 billion worldwide) are concerned about their privacy. These concerns also carry over to the new branches of computing that have emerged in recent years. In particular, this thesis focuses on privacy in Multi-Agent Systems. In these systems, several agents (which may be intelligent and/or autonomous) interact to solve problems. These agents typically encapsulate personal information about the users they represent (names, preferences, credit cards, roles, etc.), and they usually exchange this information when they interact with one another. All of this can result in privacy loss for users and, in turn, make users reluctant to adopt these technologies. This thesis focuses on preventing the collection and processing of personal information in Multi-Agent Systems. To prevent information collection, we propose a model that enables an agent to decide which attributes (of the personal information it holds about the user it represents) to disclose to other agents. We also provide a secure agent infrastructure so that, once an agent decides to disclose an attribute to another agent, only the latter can access that attribute, preventing third parties from accessing it. To prevent the processing of personal information, we propose a model for managing agent identities. This model allows agents to use different identities to reduce the risk of information processing. Finally, the thesis describes the implementation of this model on an agent platform. Such Aparicio, JM. (2011). ENHANCING PRIVACY IN MULTI-AGENT SYSTEMS [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/13023
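
    As a toy illustration of the kind of disclosure decision described above (an agent choosing which personal attributes to reveal to another agent), here is a minimal Python sketch. The sensitivity scores, trust values, and threshold are invented for the example and are not the model proposed in the thesis.

    ```python
    # Minimal, invented sketch of an attribute-disclosure decision: an agent reveals a
    # personal attribute only when the requester's trust outweighs the attribute's
    # sensitivity. Scores and threshold are illustrative, not the thesis's model.
    from dataclasses import dataclass

    @dataclass
    class Attribute:
        name: str
        value: str
        sensitivity: float  # 0.0 (public) .. 1.0 (highly sensitive)

    def attributes_to_reveal(attributes, requester_trust: float, threshold: float = 0.0):
        """Return the attributes whose disclosure risk is acceptable for this requester."""
        return [a for a in attributes if requester_trust - a.sensitivity >= threshold]

    profile = [
        Attribute("name", "Alice", 0.3),
        Attribute("role", "buyer", 0.1),
        Attribute("credit_card", "****", 0.9),
    ]
    print([a.name for a in attributes_to_reveal(profile, requester_trust=0.5)])
    # -> ['name', 'role']  (the credit card stays undisclosed)
    ```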

    Argument-based agreements in agent societies

    Full text link
    In this paper, we present an abstract argumentation framework for the support of agreement processes in agent societies. It takes into account arguments, attacks among them, and the social context of the agents that put forward arguments. We then define the semantics of the framework, providing a mechanism to evaluate arguments in view of other arguments posed in the argumentation process. We also provide a translation of the framework into a neural network that computes the set of acceptable arguments and can be tuned to give more or less importance to argument attacks. Finally, the framework is illustrated with an example in a real domain of a water-rights transfer market. © 2011 Elsevier B.V. All rights reserved. This work is supported by the Spanish government grants CONSOLIDER INGENIO 2010 CSD2007-00022, TIN2008-04446 and TIN2009-13839-C03-01, and by the GVA project PROMETEO 2008/051. Heras Barberá, SM.; Botti Navarro, VJ.; Julian Inglada, VJ. (2012). Argument-based agreements in agent societies. Neurocomputing. 75(1):156-162. doi:10.1016/j.neucom.2011.02.022
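
    For readers unfamiliar with argument acceptability, the following generic Python sketch computes the grounded extension of a Dung-style abstract argumentation framework by iterating its characteristic function. It only illustrates the standard notion of acceptable arguments; it is not the paper's neural-network translation and ignores its social-context weighting.

    ```python
    # Generic sketch: grounded extension of an abstract argumentation framework,
    # computed by iterating the characteristic function. Standard Dung-style
    # acceptability, not the paper's neural translation or social weights.
    def grounded_extension(arguments, attacks):
        """arguments: iterable of labels; attacks: set of (attacker, attacked) pairs."""
        attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}
        accepted = set()
        while True:
            # F(S): arguments whose every attacker is counter-attacked by S
            new = {a for a in arguments
                   if all(any((s, b) in attacks for s in accepted)
                          for b in attackers_of[a])}
            if new == accepted:
                return accepted
            accepted = new

    # a attacks b, b attacks c: 'a' and 'c' are accepted, 'b' is rejected
    print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
    ```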

    Agents for educational games and simulations

    Get PDF
    This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulation (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaption and convergence, and agent applications.

    Punishing Artificial Intelligence: Legal Fiction or Science Fiction

    Get PDF
    Whether causing flash crashes in financial markets, purchasing illegal drugs, or running over pedestrians, AI is increasingly engaging in activity that would be criminal for a natural person, or even an artificial person like a corporation. We argue that criminal law falls short in cases where an AI causes certain types of harm and there are no practically or legally identifiable upstream criminal actors. This Article explores potential solutions to this problem, focusing on holding AI directly criminally liable where it is acting autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement of a guilty mind. Drawing on analogies to corporate and strict criminal liability, as well as familiar imputation principles, we show how a coherent theoretical case can be constructed for AI punishment. AI punishment could result in general deterrence and expressive benefits, and it need not run afoul of negative limitations such as punishing in excess of culpability. Ultimately, however, punishing AI is not justified, because it might entail significant costs and it would certainly require radical legal changes. Modest changes to existing criminal laws that target persons, together with potentially expanded civil liability, are a better solution to AI crime

    Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations

    Full text link
    We study the effect of adversarial perturbations of images on the estimates of disparity by deep learning models trained for stereo. We show that imperceptible additive perturbations can significantly alter the disparity map, and correspondingly the perceived geometry of the scene. These perturbations not only affect the specific model they are crafted for, but transfer to models with different architecture, trained with different loss functions. We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust, without sacrificing overall accuracy of the model. This is unlike what has been observed in image classification, where adding the perturbed images to the training set makes the model less vulnerable to adversarial perturbations, but to the detriment of overall accuracy. We test our method using the most recent stereo networks and evaluate their performance on public benchmark datasets
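
    As a hedged illustration of the kind of additive attack studied here, the sketch below applies a single FGSM-style step to both views of a stereo pair; `stereo_model` is a stand-in for any network mapping (left, right) images to a disparity map, and the exact perturbation procedure used in the paper may differ.

    ```python
    # Illustrative FGSM-style sketch of an additive adversarial perturbation on a
    # stereo pair. `stereo_model` is an assumed stand-in for a disparity network;
    # this is not necessarily the paper's exact attack.
    import torch
    import torch.nn.functional as F

    def fgsm_stereo(stereo_model, left, right, reference_disparity, epsilon=2 / 255):
        """Perturb both views so the predicted disparity moves away from the reference."""
        left = left.detach().clone().requires_grad_(True)
        right = right.detach().clone().requires_grad_(True)
        pred = stereo_model(left, right)
        # maximize the disparity error w.r.t. the (pseudo) ground truth
        loss = F.l1_loss(pred, reference_disparity)
        loss.backward()
        adv_left = (left + epsilon * left.grad.sign()).clamp(0.0, 1.0).detach()
        adv_right = (right + epsilon * right.grad.sign()).clamp(0.0, 1.0).detach()
        return adv_left, adv_right
    ```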

    A survey on vulnerability of federated learning: A learning algorithm perspective

    Get PDF
    Federated Learning (FL) has emerged as a powerful paradigm for training Machine Learning (ML), particularly Deep Learning (DL), models on multiple devices or servers while keeping data localized at owners’ sites. Because it does not centralize data, FL holds promise for scenarios where data integrity, privacy, and security are critical. However, this decentralized training process also opens up new avenues for adversaries to launch unique attacks, making it urgent to understand the vulnerabilities and corresponding defense mechanisms from a learning-algorithm perspective. This review paper takes a comprehensive look at malicious attacks against FL, categorizing them from new perspectives on attack origins and targets, and providing insights into their methodology and impact. We focus on threat models targeting the learning process of FL systems. Based on the source and target of the attack, we categorize existing threat models into four types: Data to Model (D2M), Model to Data (M2D), Model to Model (M2M), and composite attacks. For each attack type, we discuss the defense strategies proposed, highlighting their effectiveness, assumptions, and potential areas for improvement. Defense strategies have evolved from using a single metric to exclude malicious clients toward multifaceted approaches that examine client models at various phases. Our research indicates that the training data, the learning gradients, and the learned model at different stages can all be manipulated to initiate malicious attacks, ranging from undermining model performance and reconstructing private local data to inserting backdoors. We have also seen these threats become more insidious: while earlier studies typically amplified malicious gradients, recent work subtly alters the least significant weights in local models to bypass defense measures. This literature review provides a holistic understanding of the current FL threat landscape and highlights the importance of developing robust, efficient, and privacy-preserving defenses to ensure the safe and trusted adoption of FL in real-world applications. The categorized bibliography can be found at: https://github.com/Rand2AI/Awesome-Vulnerability-of-Federated-Learning
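
    One simple defense family the survey discusses is bounding client influence before aggregation. The sketch below clips the norm of each (flattened) client update and then averages; the `clip_norm` value and the flat-vector update format are assumptions for the example, not a specific method from the survey.

    ```python
    # Illustrative sketch of norm-clipped aggregation: shrinking over-sized client
    # updates bounds the influence of amplified malicious gradients. The update
    # format and clip_norm value are assumptions for this example.
    import numpy as np

    def clipped_mean_aggregate(client_updates, clip_norm: float = 1.0):
        """client_updates: list of 1-D numpy arrays (flattened model deltas)."""
        clipped = []
        for update in client_updates:
            norm = np.linalg.norm(update)
            scale = min(1.0, clip_norm / (norm + 1e-12))
            clipped.append(update * scale)  # shrink over-sized (possibly malicious) updates
        return np.mean(clipped, axis=0)

    # toy example: two benign updates and one amplified (malicious-looking) update;
    # after clipping, the amplified update's contribution to the mean is bounded
    benign = [np.array([0.1, -0.2]), np.array([0.05, -0.1])]
    malicious = [np.array([50.0, 50.0])]
    print(clipped_mean_aggregate(benign + malicious, clip_norm=0.5))
    ```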