24 research outputs found

    FedReview: A Review Mechanism for Rejecting Poisoned Updates in Federated Learning

    Federated learning has recently emerged as a decentralized approach to learn a high-performance model without access to user data. Despite its effectiveness, federated learning gives malicious users opportunities to manipulate the model by uploading poisoned model updates to the server. In this paper, we propose a review mechanism called FedReview to identify and reject potentially poisoned updates in federated learning. Under our mechanism, the server randomly assigns a subset of clients as reviewers to evaluate the model updates on their training datasets in each round. The reviewers rank the model updates based on the evaluation results and count the number of updates with relatively low quality as the estimated number of poisoned updates. Based on the review reports, the server employs a majority voting mechanism to integrate the rankings and remove potentially poisoned updates in the model aggregation process. Extensive evaluations on multiple datasets demonstrate that FedReview can help the server learn a well-performing global model in an adversarial environment.
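
    The description above amounts to a concrete aggregation rule. Below is a minimal sketch of one such review-based round, assuming reviewers score each update by the loss it yields on their local data and flag the apparently worst updates with a simple threshold heuristic; the paper's exact ranking, estimation, and voting rules may differ.

```python
# Minimal sketch of a FedReview-style aggregation round (the threshold rule and
# loss-based scoring are assumptions, not the paper's exact procedure).
import numpy as np

def review_and_aggregate(updates, reviewer_losses):
    """updates: list of model-update vectors (np.ndarray).
    reviewer_losses: array of shape (n_reviewers, n_updates); entry [r, u] is
    the loss reviewer r measured for update u on its own training data."""
    n_reviewers, n_updates = reviewer_losses.shape

    # Each reviewer ranks updates from best (lowest loss) to worst and
    # estimates how many look poisoned (here: a hypothetical threshold rule).
    votes = np.zeros(n_updates, dtype=int)
    for r in range(n_reviewers):
        order = np.argsort(reviewer_losses[r])           # best -> worst
        threshold = np.median(reviewer_losses[r]) * 1.5  # hypothetical rule
        est_poisoned = int(np.sum(reviewer_losses[r] > threshold))
        for u in order[n_updates - est_poisoned:]:       # worst est_poisoned updates
            votes[u] += 1                                 # reviewer flags update u

    # Majority vote: drop updates flagged by more than half of the reviewers,
    # then average the remaining updates.
    keep = [u for u in range(n_updates) if votes[u] <= n_reviewers // 2]
    return np.mean([updates[u] for u in keep], axis=0), keep

# Toy usage: 5 clients, the last update is an obvious outlier.
rng = np.random.default_rng(0)
updates = [rng.normal(0, 0.1, size=8) for _ in range(4)] + [np.full(8, 5.0)]
losses = np.abs(rng.normal(0.5, 0.05, size=(3, 5)))
losses[:, 4] = 3.0                                        # reviewers rate it poorly
agg, kept = review_and_aggregate(updates, losses)
print("kept updates:", kept)
```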

    FID: Function Modeling-based Data-Independent and Channel-Robust Physical-Layer Identification

    Trusted identification is critical to secure IoT devices. However, the limited memory and computation power of low-end IoT devices prevent the direct usage of conventional identification systems. RF fingerprinting is a promising technique to identify low-end IoT devices since it only requires the RF signals that most IoT devices can produce for communication. However, most existing RF fingerprinting systems are data-dependent and/or not robust to impacts from wireless channels. To address the above problems, we propose to exploit the mathematical expression of the physical-layer process, regarded as a function $\mathcal{F}(\cdot)$, for device identification. $\mathcal{F}(\cdot)$ is not directly derivable, so we further propose a model to learn it and employ this function model as the device fingerprint in our system, namely $\mathcal{F}$ID. Our proposed function model characterizes the unique physical-layer process of a device that is independent of the transmitted data, and hence, our system $\mathcal{F}$ID is data-independent and thus resilient against signal replay attacks. Modeling and further separating channel effects from the function model makes $\mathcal{F}$ID channel-robust. We evaluate $\mathcal{F}$ID on thousands of random signal packets from 33 different devices in different environments and scenarios, and the overall identification accuracy is over 99%. Comment: Accepted to INFOCOM201
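
    As a hedged sketch of the core idea, the snippet below uses a per-device linear least-squares model as a stand-in for the learned function model $\mathcal{F}(\cdot)$ and ignores the channel-separation step; the function names, the linear form, and the toy data are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch of function-model-based device identification in the spirit of
# FID: fit a per-device mapping from transmitted to received samples, then
# identify a device by which stored mapping best explains a new packet.
import numpy as np

def fit_function_model(tx, rx):
    """Fit a per-device model W so that rx ~= tx @ W (least squares)."""
    W, *_ = np.linalg.lstsq(tx, rx, rcond=None)
    return W

def identify(tx, rx, fingerprints):
    """Return the device whose stored function model best explains (tx, rx)."""
    residuals = {dev: np.linalg.norm(rx - tx @ W) for dev, W in fingerprints.items()}
    return min(residuals, key=residuals.get)

# Toy usage: two devices distort the same transmitted samples differently.
rng = np.random.default_rng(1)
tx = rng.normal(size=(200, 4))                       # transmitted feature vectors
true_W = {"dev_a": np.eye(4) * 1.00 + 0.05, "dev_b": np.eye(4) * 0.95 - 0.05}
fingerprints = {d: fit_function_model(tx, tx @ W + rng.normal(0, 0.01, (200, 4)))
                for d, W in true_W.items()}
new_rx = tx @ true_W["dev_b"] + rng.normal(0, 0.01, (200, 4))
print(identify(tx, new_rx, fingerprints))            # expected: dev_b
```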

    Distributionally Adversarial Attack

    Recent work on adversarial attack has shown that the Projected Gradient Descent (PGD) adversary is a universal first-order adversary, and a classifier adversarially trained with PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution $p(\mathbf{x})$, typically in the form of risk maximization/minimization, e.g., $\max/\min \mathbb{E}_{p(\mathbf{x})}\mathcal{L}(\mathbf{x})$, with $p(\mathbf{x})$ some unknown data distribution and $\mathcal{L}(\cdot)$ a loss function. However, since PGD generates attack samples independently for each data sample based on $\mathcal{L}(\cdot)$, the procedure does not necessarily lead to good generalization in terms of risk optimization. In this paper, we pursue this goal by proposing the distributionally adversarial attack (DAA), a framework that solves for an optimal adversarial-data distribution: a perturbed distribution that satisfies the $L_\infty$ constraint but deviates from the original data distribution to maximally increase the generalization risk. Algorithmically, DAA performs optimization over the space of potential data distributions, which introduces direct dependency between all data points when generating adversarial samples. DAA is evaluated by attacking state-of-the-art defense models, including the adversarially trained models provided by MIT MadryLab. Notably, DAA ranks first on MadryLab's white-box leaderboards, reducing the accuracy of their secret MNIST model to 88.79% (with $l_\infty$ perturbations of $\epsilon = 0.3$) and the accuracy of their secret CIFAR model to 44.71% (with $l_\infty$ perturbations of $\epsilon = 8.0$). Code for the experiments is released at https://github.com/tianzheng4/Distributionally-Adversarial-Attack. Comment: accepted to AAAI-1
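
    To illustrate how an attack can act on a batch as a distribution rather than on each sample independently, here is a hedged sketch using a fixed logistic-regression "model", an analytic loss gradient, and a simple pairwise-repulsion term as a stand-in for DAA's distributional objective; the actual method optimizes over data distributions and is considerably more involved.

```python
# Hedged sketch of a distribution-aware PGD-style attack in the spirit of DAA.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def daa_like_attack(X, y, w, eps=0.3, alpha=0.05, steps=40, lam=0.1):
    """Jointly perturb a batch X within an L_inf ball of radius eps."""
    X_adv = X.copy()
    for _ in range(steps):
        # Gradient of the per-sample cross-entropy loss w.r.t. the inputs.
        p = sigmoid(X_adv @ w)
        grad_loss = (p - y)[:, None] * w[None, :]

        # Coupling term: push perturbed samples apart so the attack treats the
        # batch as a distribution, not as independent points (a crude stand-in
        # for the paper's distributional objective).
        diffs = X_adv[:, None, :] - X_adv[None, :, :]
        grad_spread = diffs.sum(axis=1) / len(X_adv)

        X_adv = X_adv + alpha * np.sign(grad_loss + lam * grad_spread)
        X_adv = np.clip(X_adv, X - eps, X + eps)   # project back to the L_inf ball
    return X_adv

# Toy usage: attack a fixed linear classifier on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 5)); y = (X @ np.ones(5) > 0).astype(float)
w = np.ones(5)
X_adv = daa_like_attack(X, y, w)
acc = np.mean((sigmoid(X_adv @ w) > 0.5) == y.astype(bool))
print("accuracy after attack:", acc)
```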

    Fair Text-to-Image Diffusion via Fair Mapping

    In this paper, we address the limitations of existing text-to-image diffusion models in generating demographically fair results when given human-related descriptions. These models often struggle to disentangle the target language context from sociocultural biases, resulting in biased image generation. To overcome this challenge, we propose Fair Mapping, a flexible, model-agnostic, and lightweight approach that modifies a pre-trained text-to-image diffusion model by controlling the prompt to achieve fair image generation. One key advantage of our approach is its high efficiency: it only requires updating an additional linear network with a few parameters at low computational cost. By developing a linear network that maps conditioning embeddings into a debiased space, we enable the generation of relatively balanced demographic results for the specified text condition. With comprehensive experiments on face image generation, we show that our method significantly improves image generation fairness with almost the same image quality as conventional diffusion models when prompted with human-related descriptions. By effectively addressing the issue of implicit language bias, our method produces fairer and more diverse image outputs.
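
    As an illustration of mapping conditioning embeddings into a debiased space, the sketch below uses a fixed linear projection that removes an estimated demographic direction; the paper instead learns a small linear network, so the functions and toy data here are illustrative assumptions only.

```python
# Hedged sketch of a linear mapping applied to prompt embeddings before a
# text-to-image model, in the spirit of Fair Mapping.
import numpy as np

def estimate_bias_direction(emb_group_a, emb_group_b):
    """Difference of group means, e.g. embeddings of contrasting demographic prompts."""
    d = emb_group_a.mean(axis=0) - emb_group_b.mean(axis=0)
    return d / np.linalg.norm(d)

def fair_mapping(embedding, bias_dir):
    """Linear map: remove the embedding's component along the bias direction."""
    return embedding - (embedding @ bias_dir) * bias_dir

# Toy usage with random stand-in embeddings (a real system would take the text
# encoder's conditioning embedding and feed the mapped result to the diffusion model).
rng = np.random.default_rng(0)
emb_a, emb_b = rng.normal(size=(32, 16)), rng.normal(1.0, 1.0, size=(32, 16))
bias_dir = estimate_bias_direction(emb_a, emb_b)
prompt_emb = rng.normal(size=16)
debiased = fair_mapping(prompt_emb, bias_dir)
print(abs(debiased @ bias_dir))   # ~0: no component left along the bias direction
```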

    A63: Exercise Improves Appetite and Heart Function in High Fat Drosophila

    Purpose: High-fat diets cause obesity and disease, leading to excess appetite and cardiovascular disease. Existing literature shows that exercise improves obesity and related diseases. To explore more deeply the mechanism by which exercise improves appetite and heart function under a high-fat diet, we used a fruit fly motility model. Methods: A total of 300 wild-type W1118 virgin flies that matured within 12 hours were collected. They were randomly divided into a normal-diet control group (NFD, n = 100), a high-fat-diet group (HFD, n = 100), and a high-fat-diet exercise group (HFD+E, n = 100). The exercise intervention was applied to 7-day-old flies for 5 consecutive days. An EM-CCD high-speed camera was used to record the fly heartbeat (130 fps video for 30 s), and HC Image software was used to record the cardiogram data. Semi-automated Optical Heartbeat Analysis (SOHA) was used to quantify heart rate (HR), heart period (HP), diastolic intervals (DI), systolic intervals (SI), arrhythmia index (AI), diastolic diameter (DD), systolic diameter (SD), fractional shortening (FS), and fibrillations (FL). Food intake was measured using the FlyPAD high-throughput Drosophila quantitative feeding system. All flies were first fed normal medium for 5 days and then transferred on day 6 to fresh normal medium or high-fat medium for another 2 days. NFD flies were kept in a constant-temperature, constant-humidity incubator (25 °C, 50% humidity, 12 h light/dark cycle), and HFD flies were housed at 22-24 °C and 50% relative humidity; the high-fat medium was made by mixing 30% coconut oil with 70% standard medium. Results: The HFD group showed an increase in AI, HR, and HP, constant SD, decreased DD, and decreased FS. After exercise, the HFD+E group showed a decrease in HR, an increase in HP, an improvement in AI, an improvement in DD, and no change in SD. Food intake and sipping frequency in the HFD group were significantly higher than in the NFD group, and over the same period, food intake and the number of sips in the HFD+E group decreased significantly after exercise compared with the HFD group. Conclusion: Exercise improved excess appetite and cardiac dysfunction under a high-fat diet.
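
    For reference, fractional shortening in optical heartbeat analyses is conventionally derived from the diastolic and systolic diameters; this standard definition is not stated in the abstract, but it explains why a decreased DD with a constant SD implies a decreased FS.

```latex
% Conventional definition of fractional shortening (FS); DD = diastolic
% diameter, SD = systolic diameter. Not stated in the abstract.
\[
  \mathrm{FS} \;=\; \frac{\mathrm{DD} - \mathrm{SD}}{\mathrm{DD}}
\]
% Worked example (illustrative numbers): DD = 80, SD = 60 gives
% FS = (80 - 60)/80 = 0.25; if DD falls to 70 with SD unchanged,
% FS = (70 - 60)/70 ~ 0.14, i.e., FS decreases.
```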

    Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus

    Large Language Models (LLMs) have gained significant popularity for their impressive performance across diverse fields. However, LLMs are prone to hallucinate untruthful or nonsensical outputs that fail to meet user expectations in many real-world applications. Existing works for detecting hallucinations in LLMs either rely on external knowledge for reference retrieval or require sampling multiple responses from the LLM for consistency verification, making these methods costly and inefficient. In this paper, we propose a novel reference-free, uncertainty-based method for detecting hallucinations in LLMs. Our approach imitates human focus in factuality checking from three aspects: 1) focus on the most informative and important keywords in the given text; 2) focus on the unreliable tokens in the historical context, which may lead to a cascade of hallucinations; and 3) focus on token properties such as token type and token frequency. Experimental results on relevant datasets demonstrate the effectiveness of our proposed method, which achieves state-of-the-art performance across all the evaluation metrics and eliminates the need for additional information. Comment: Accepted by EMNLP 2023 (main conference)
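
    A minimal sketch of the three "focus" ideas follows, assuming access to the token log-probabilities of the generated text; the stop-word list, propagation factor, and capitalization heuristic are stand-ins for the paper's keyword, cascade, and token-property signals, not its actual formulation.

```python
# Hedged sketch of keyword-focused, uncertainty-based hallucination scoring.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "is", "was", "and", "to"}

def hallucination_score(tokens, logprobs, freq_penalty=0.1, propagate=0.5):
    """tokens: generated tokens; logprobs: their log-probabilities under the LLM."""
    score, carried_unreliability = 0.0, 0.0
    for tok, lp in zip(tokens, logprobs):
        uncertainty = -lp                                   # low probability -> high uncertainty
        weight = 0.2 if tok.lower() in STOPWORDS else 1.0   # focus on keywords
        # Unreliable earlier tokens make later tokens more suspect (cascade).
        token_score = weight * (uncertainty + propagate * carried_unreliability)
        # Crude token-type/frequency signal: capitalized tokens are often entities.
        token_score += freq_penalty * (1.0 if tok.istitle() else 0.0)
        carried_unreliability = uncertainty
        score += token_score
    return score / max(len(tokens), 1)

# Toy usage: a confident sentence vs. one with an uncertain named entity.
sent_a = (["The", "capital", "of", "France", "is", "Paris"],
          [-0.1, -0.3, -0.05, -0.2, -0.05, -0.1])
sent_b = (["The", "capital", "of", "France", "is", "Lyon"],
          [-0.1, -0.3, -0.05, -0.2, -0.05, -3.5])
print(hallucination_score(*sent_a), hallucination_score(*sent_b))
```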