
    Offset quantum-well method for tunable distributed Bragg reflector lasers and electro-absorption modulated distributed feedback lasers

    A two-section tunable laser with a tuning range of 7 nm was fabricated using the offset quantum-well method. The distributed Bragg reflector (DBR) was realized simply by selectively wet-etching the multi-quantum-well (MQW) layer above the quaternary lower waveguide. A threshold current of 32 mA and an output power of 9 mW at 100 mA were achieved. Furthermore, with this offset-structure method, a distributed feedback (DFB) laser was integrated with an electro-absorption modulator (EAM) capable of producing 20 dB of optical extinction.

    Towards Adversarially Robust Continual Learning

    Recent studies show that models trained by continual learning can achieve performance comparable to standard supervised learning, and the flexibility of continual learning models enables their wide application in the real world. Deep learning models, however, are known to be vulnerable to adversarial attacks. Although model robustness has been studied extensively in the context of standard supervised learning, protecting continual learning from adversarial attacks has not yet been investigated. To fill this research gap, we are the first to study adversarial robustness in continual learning, and we propose a novel method called Task-Aware Boundary Augmentation (TABA) to boost the robustness of continual learning models. With extensive experiments on CIFAR-10 and CIFAR-100, we show the efficacy of adversarial training and TABA in defending against adversarial attacks. Comment: ICASSP 202
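    The adversarial-training backbone this abstract builds on can be sketched with the classic fast gradient sign method (FGSM). The snippet below is a generic illustration on a logistic-regression model, not the paper's TABA method; the model form, `eps`, and `lr` are assumptions for the sketch.

    ```python
    import numpy as np

    def fgsm_example(w, b, x, y, eps):
        """FGSM for a logistic model p = sigmoid(w.x + b): perturb x by eps
        in the sign direction of the loss gradient for label y (0 or 1)."""
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        grad_x = (p - y) * w  # d(cross-entropy)/dx
        return x + eps * np.sign(grad_x)

    def adversarial_training_step(w, b, x, y, eps=0.1, lr=0.05):
        """One adversarial-training step: craft an FGSM example, then take
        a gradient step on the loss evaluated at the perturbed point.
        (eps and lr are illustrative, not values from the paper.)"""
        x_adv = fgsm_example(w, b, x, y, eps)
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        w_new = w - lr * (p - y) * x_adv
        b_new = b - lr * (p - y)
        return w_new, b_new
    ```

    Training on the perturbed point rather than the clean one is what makes the resulting decision boundary locally robust within the eps-ball.
    
    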

    Privacy-Preserving Blockchain-Based Federated Learning for IoT Devices

    Home appliance manufacturers strive to obtain feedback from users to improve their products and services and to build smart home systems. To help manufacturers develop a smart home system, we design a federated learning (FL) system that leverages a reputation mechanism to assist home appliance manufacturers in training a machine learning model on customers' data. Manufacturers can then predict customers' future requirements and consumption behaviors. The system works in two stages. In the first stage, customers train the initial model provided by the manufacturer using both their mobile phones and a mobile edge computing (MEC) server: customers collect data from various home appliances using their phones, then download and train the initial model on their local data. After deriving local models, customers sign their models and send them to the blockchain. In case customers or manufacturers are malicious, we use the blockchain to replace the centralized aggregator of the traditional FL system; since records on the blockchain are tamper-proof, the activities of malicious customers or manufacturers are traceable. In the second stage, manufacturers select customers or organizations as miners to compute the averaged model from the models received from customers. At the end of the crowdsourcing task, one of the miners, selected as the temporary leader, uploads the model to the blockchain. To protect customers' privacy and improve test accuracy, we enforce differential privacy on the extracted features and propose a new normalization technique. We experimentally demonstrate that our normalization technique outperforms batch normalization when features are under differential privacy protection. In addition, to attract more customers to participate in the crowdsourcing FL task, we design an incentive mechanism to reward participants. Comment: This paper appears in IEEE Internet of Things Journal (IoT-J).
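    The miners' averaging step in the second stage corresponds to the standard FedAvg aggregation. Below is a minimal sketch of that weighted average; representing each client model as a list of numpy arrays is an assumption for illustration, not the paper's implementation.

    ```python
    import numpy as np

    def federated_average(client_models, client_sizes):
        """FedAvg aggregation: average client model parameters, weighting
        each client by the size of its local data set. Each model is a
        list of numpy arrays (one per layer)."""
        total = sum(client_sizes)
        weights = [n / total for n in client_sizes]
        return [
            sum(w * layers[i] for w, layers in zip(weights, client_models))
            for i in range(len(client_models[0]))
        ]
    ```

    In the paper's setting this computation is carried out by the elected miner rather than a central server, so the aggregation itself is unchanged; only who runs it and where the result is stored (the blockchain) differ.
    
    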

    Privacy and Robustness in Federated Learning: Attacks and Defenses

    As data are increasingly stored in separate silos and society becomes more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering 1) threat models, 2) poisoning attacks and defenses against robustness, and 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning. Comment: arXiv admin note: text overlap with arXiv:2003.02133; text overlap with arXiv:1911.11815 by other authors.

    Privacy-preserving Anomaly Detection in Cloud Manufacturing via Federated Transformer

    With the rapid development of cloud manufacturing, industrial production with edge computing as its core architecture has advanced greatly. However, edge devices often suffer from abnormalities and failures in industrial production, so detecting these abnormal situations in a timely and accurate manner is crucial for cloud manufacturing. A straightforward solution is for the edge device to upload its data to the cloud for anomaly detection; however, Industry 4.0 imposes higher requirements on data privacy and security, making it unrealistic to upload data from edge devices directly to the cloud. Considering these challenges, this paper customizes a weakly supervised edge computing anomaly detection framework, the Federated Learning-based Transformer framework (FedAnomaly), to address the anomaly detection problem in cloud manufacturing. Specifically, we introduce a federated learning (FL) framework that allows edge devices to train an anomaly detection model in collaboration with the cloud without compromising privacy. To strengthen the privacy guarantees of the framework, we add differential privacy noise to the uploaded features. To further improve the ability of edge devices to extract abnormal features, we use a Transformer to extract feature representations of abnormal data. In this context, we design a novel collaborative learning protocol to promote efficient collaboration between FL and the Transformer. Furthermore, extensive case studies on four benchmark data sets verify the effectiveness of the proposed framework. To the best of our knowledge, this is the first work to integrate FL and Transformers to address anomaly detection in cloud manufacturing.
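    Adding differential privacy noise to uploaded features typically follows a clip-then-perturb pattern. The sketch below uses the standard Gaussian mechanism with L2 clipping; the mechanism choice, clipping norm, and noise calibration are illustrative assumptions, not FedAnomaly's exact design.

    ```python
    import numpy as np

    def privatize_features(feats, clip=1.0, eps=1.0, delta=1e-5):
        """Clip each feature vector to L2 norm `clip`, then add Gaussian
        noise calibrated for (eps, delta)-differential privacy using the
        classic sigma = clip * sqrt(2 ln(1.25/delta)) / eps bound."""
        norms = np.linalg.norm(feats, axis=1, keepdims=True)
        clipped = feats * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
        sigma = clip * np.sqrt(2 * np.log(1.25 / delta)) / eps
        return clipped + np.random.normal(0.0, sigma, feats.shape)
    ```

    Clipping bounds each device's contribution (the sensitivity), which is what lets the noise scale be set independently of the raw feature magnitudes.
    
    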

    Local Differential Privacy based Federated Learning for Internet of Things

    The Internet of Vehicles (IoV) is a promising branch of the Internet of Things. IoV supports a large variety of crowdsourcing applications such as Waze, Uber, and Amazon Mechanical Turk. Users of these applications report real-time traffic information to a cloud server, which trains a machine learning model on the reported traffic information for intelligent traffic management. However, crowdsourcing application owners can easily infer users' location information, which raises severe location privacy concerns. In addition, as the number of vehicles increases, frequent communication between vehicles and the cloud server incurs a large communication cost. To avoid the privacy threat and reduce the communication cost, we propose in this paper to integrate federated learning and local differential privacy (LDP) so that crowdsourcing applications can obtain the machine learning model. Specifically, we propose four LDP mechanisms to perturb the gradients generated by vehicles. The Three-Outputs mechanism introduces three different output possibilities to deliver high accuracy when the privacy budget is small; its output possibilities can be encoded with two bits to reduce the communication cost. To maximize performance when the privacy budget is large, an optimal piecewise mechanism (PM-OPT) is proposed. We further propose a suboptimal mechanism (PM-SUB) with a simple formula and utility comparable to PM-OPT. We then build a novel hybrid mechanism by combining Three-Outputs and PM-SUB. Comment: This paper appears in IEEE Internet of Things Journal (IoT-J).
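    For intuition about LDP gradient perturbation, the classic one-bit mechanism of Duchi et al. for a value in [-1, 1] can be sketched as follows. This is a simpler two-output baseline for illustration, not the paper's Three-Outputs, PM-OPT, or PM-SUB mechanisms.

    ```python
    import numpy as np

    def duchi_ldp(x, eps):
        """One-bit eps-LDP perturbation of values x in [-1, 1] (Duchi et
        al.): report +B or -B, with the probability of +B chosen so the
        report is an unbiased estimate of x and the two output
        distributions for any pair of inputs differ by at most e^eps."""
        x = np.clip(x, -1.0, 1.0)
        e = np.exp(eps)
        B = (e + 1) / (e - 1)
        p_plus = (x * (e - 1) + e + 1) / (2 * (e + 1))
        return np.where(np.random.rand(*np.shape(x)) < p_plus, B, -B)
    ```

    Each report is a single bit (which of ±B was sent), which is why such mechanisms also cut the vehicle-to-cloud communication cost; the paper's Three-Outputs mechanism adds a third possibility, encodable in two bits, to improve accuracy at small privacy budgets.
    
    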

    Reduced Annexin A1 Secretion by ABCA1 Causes Retinal Inflammation and Ganglion Cell Apoptosis in a Murine Glaucoma Model

    Variants near the ATP-binding cassette transporter A1 (ABCA1) gene are associated with elevated intraocular pressure and are newly discovered risk factors for glaucoma. Previous studies have shown an association between ABCA1 deficiency and retinal inflammation. Using a mouse model of ischemia-reperfusion (IR) induced by acute intraocular pressure elevation, we found that retinal expression of the ABCA1 protein was decreased. Induction of ABCA1 expression by the liver X receptor agonist TO901317 reduced retinal ganglion cell (RGC) apoptosis after IR and promoted membrane translocation and secretion of the anti-inflammatory factor annexin A1 (ANXA1). Moreover, ABCA1 and ANXA1 co-localized in cell membranes, with the interaction domain lying within amino acids 196 to 274 of ANXA1. TO901317 also reduced microglial migration and activation and decreased the expression of the pro-inflammatory cytokines interleukin (IL)-17A and IL-1β; these effects were reversed by the ANXA1 receptor blocker Boc2. Overexpression of TANK-binding kinase 1 (TBK1) increased ABCA1 degradation, which was reversed by the proteasome inhibitor carbobenzoxy-L-leucyl-L-leucyl-L-leucinal (MG132). Silencing Tbk1 with siRNA increased ABCA1 expression and promoted ANXA1 membrane translocation. These results indicate a novel IR mechanism that leads, via TBK1 activation, to ABCA1 ubiquitination; the resulting degradation decreases ANXA1 secretion, thus facilitating retinal inflammation and RGC apoptosis. Our findings suggest a potential treatment strategy to prevent RGC apoptosis in retinal ischemia and glaucoma.