
    Attacks on Robust Distributed Learning Schemes via Sensitivity Curve Maximization

    Distributed learning paradigms, such as federated or decentralized learning, allow a collection of agents to solve global learning and optimization problems through limited local interactions. Most such strategies rely on a mixture of local adaptation and aggregation steps, either among peers or at a central fusion center. Classically, aggregation in distributed learning is based on averaging, which is statistically efficient but susceptible to attacks by even a small number of malicious agents. This observation has motivated a number of recent works, which develop robust aggregation schemes by employing robust variations of the mean. We present a new attack based on sensitivity curve maximization (SCM) and demonstrate that it is able to disrupt existing robust aggregation schemes by injecting small but effective perturbations.
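    The sensitivity curve measures how much a single injected value shifts an estimator; an SCM-style attack searches for the injected value that maximizes this shift. The following is a minimal illustrative sketch, not the paper's implementation: the trimmed mean stands in for a generic robust aggregator, and a simple grid search stands in for the maximization.

```python
import numpy as np

def trimmed_mean(values, trim_frac=0.2):
    """Robust aggregation: drop the smallest/largest fraction, average the rest."""
    v = np.sort(values)
    k = int(len(v) * trim_frac)
    return v[k:len(v) - k].mean()

def sensitivity_curve(estimator, sample, x):
    """SC_n(x) = n * (T(sample with x injected) - T(sample)):
    the (scaled) effect of one injected value x on the aggregate."""
    n = len(sample)
    return n * (estimator(np.append(sample, x)) - estimator(sample))

def scm_attack(estimator, sample, candidates):
    """Grid search for the injected value that maximizes the sensitivity curve."""
    scores = [sensitivity_curve(estimator, sample, x) for x in candidates]
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(0)
honest = rng.normal(0.0, 1.0, size=20)   # honest agents' model updates (toy data)
grid = np.linspace(-10.0, 10.0, 401)     # candidate malicious values
x_star = scm_attack(trimmed_mean, honest, grid)
```

    Even though the trimmed mean discards extreme values, the maximizer `x_star` still shifts the aggregate, which illustrates why small, carefully placed perturbations can defeat robust-mean schemes.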

    Dif-MAML: Decentralized multi-agent meta-learning

    The objective of meta-learning is to exploit knowledge obtained from observed tasks to improve adaptation to unseen tasks. Meta-learners generalize better when they are trained with a larger number of observed tasks and with a larger amount of data per task. Given the amount of resources that are needed, it is generally difficult to expect the tasks, their respective data, and the necessary computational capacity to be available at a single central location. It is more natural to encounter situations where these resources are spread across several agents connected by some graph topology. The formalism of meta-learning is actually well-suited for this decentralized setting, where the learner benefits from information and computational power spread across the agents. Motivated by this observation, we propose a cooperative fully-decentralized multi-agent meta-learning algorithm, referred to as Diffusion-based MAML or Dif-MAML. Decentralized optimization algorithms are superior to centralized implementations in terms of scalability, robustness, avoidance of communication bottlenecks, and privacy guarantees. The work provides a detailed theoretical analysis to show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective even in non-convex environments. Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
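    Diffusion strategies typically alternate a local adaptation step with a combination step over the graph. The toy sketch below is an assumption-laden simplification, not the paper's algorithm: `grads` is a hypothetical per-agent gradient of a quadratic stand-in for the meta-loss, and the combination matrix `A` encodes a fully connected topology with uniform weights.

```python
import numpy as np

def dif_maml_round(weights, grads_fn, A, inner_lr=0.1):
    """One adapt-then-combine diffusion round (simplified sketch).

    weights:  (K, d) array, one parameter vector per agent
    grads_fn: callable (agent index k, w) -> gradient of agent k's local objective
    A:        (K, K) doubly stochastic combination matrix matching the topology
    """
    K = weights.shape[0]
    # Adaptation: each agent takes a gradient step on its local objective.
    adapted = np.stack([weights[k] - inner_lr * grads_fn(k, weights[k])
                        for k in range(K)])
    # Combination: each agent averages with its neighbors according to A.
    return A @ adapted

K, d = 3, 2
targets = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])  # per-agent optima (toy)
A = np.full((K, K), 1.0 / K)          # fully connected, uniform averaging
grads = lambda k, w: w - targets[k]   # gradient of quadratic local objective
w = np.zeros((K, d))
for _ in range(500):
    w = dif_maml_round(w, grads, A)
```

    Under these assumptions the agents reach agreement and converge to the minimizer of the aggregate objective (the mean of the per-agent optima), which mirrors the agreement and stationary-point guarantees stated in the abstract.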

    Essential Medicines at the National Level: The Global Asthma Network's Essential Asthma Medicines Survey 2014

    Patients with asthma need uninterrupted supplies of affordable, quality-assured essential medicines. However, access in many low- and middle-income countries (LMICs) is limited. The World Health Organization (WHO) Non-Communicable Disease (NCD) Global Action Plan 2013-2020 sets an 80% target for essential NCD medicines' availability. Poor access is partly due to medicines not being included on the national Essential Medicines Lists (EML) and/or National Reimbursement Lists (NRL), which guide the provision of free/subsidised medicines. We aimed to determine how many countries have essential asthma medicines on their EML and NRL, which essential asthma medicines, and whether surveys might monitor progress. A cross-sectional survey in 2013-2015 of Global Asthma Network principal investigators generated 111/120 (93%) responses: 41 high-income countries and territories (HICs); 70 LMICs. Patients in HICs with an NRL are best served (91% of HICs included inhaled corticosteroids (ICS) and salbutamol). Patients in the 24 (34%) LMICs with no NRL, and in the 14 (30%) LMICs with an NRL but no ICS, are likely to have very poor access to affordable, quality-assured ICS. Many LMICs do not have essential asthma medicines on their EML or NRL. Technical guidance and advocacy for policy change is required. Improving access to these medicines will improve the health system's capacity to address NCDs.
    Peer reviewed