
    A survey on vulnerability of federated learning: A learning algorithm perspective

    Federated Learning (FL) has emerged as a powerful paradigm for training Machine Learning (ML), particularly Deep Learning (DL), models on multiple devices or servers while keeping data localized at owners’ sites. Because it does not centralize data, FL holds promise for scenarios where data integrity, privacy, and security are critical. However, this decentralized training process also opens up new avenues for adversaries to launch unique attacks, making it urgent to understand the vulnerabilities and corresponding defense mechanisms from a learning algorithm perspective. This review paper takes a comprehensive look at malicious attacks against FL, categorizing them from new perspectives on attack origins and targets, and providing insights into their methodology and impact. In this survey, we focus on threat models targeting the learning process of FL systems. Based on the source and target of the attack, we categorize existing threat models into four types: Data to Model (D2M), Model to Data (M2D), Model to Model (M2M), and composite attacks. For each attack type, we discuss the proposed defense strategies, highlighting their effectiveness, assumptions, and potential areas for improvement. Defense strategies have evolved from excluding malicious clients on the basis of a single metric to multifaceted approaches that examine client models at various phases. Our research indicates that the to-learn data, the learning gradients, and the learned model at different stages can all be manipulated to initiate malicious attacks, ranging from undermining model performance and reconstructing private local data to inserting backdoors. These threats are also becoming more insidious: while earlier studies typically amplified malicious gradients, recent efforts subtly alter the least significant weights in local models to bypass defense measures.
This literature review provides a holistic understanding of the current FL threat landscape and highlights the importance of developing robust, efficient, and privacy-preserving defenses to ensure the safe and trusted adoption of FL in real-world applications. The categorized bibliography can be found at: https://github.com/Rand2AI/Awesome-Vulnerability-of-Federated-Learning
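
The gradient-amplification behaviour described above can be illustrated with a toy sketch of federated averaging, where a single malicious client submits a scaled-up, sign-flipped update. Everything here (the scalar "model", the client counts, and the amplification factor) is illustrative and not taken from the survey.

```python
# Toy Model-to-Model (M2M) style attack on federated averaging:
# one malicious client amplifies a sign-flipped update so that it
# dominates the server-side average of nine honest clients.

def local_update(weights, data_grad, lr=0.1):
    """One step of local training; returns the client's weight delta.
    `weights` is unused in this scalar toy but kept for realism."""
    return [-lr * g for g in data_grad]

def fedavg(updates):
    """Server aggregates client deltas by simple averaging."""
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]

honest = [local_update([0.0], [1.0]) for _ in range(9)]    # deltas of -0.1 each
malicious = [w * 50 for w in local_update([0.0], [-1.0])]  # flipped and amplified to +5.0
aggregate = fedavg(honest + [malicious])
# The single amplified update overwhelms the honest majority, so the
# aggregate moves in the attacker's direction despite 9-to-1 odds.
```

Robust aggregators (e.g., coordinate-wise median or norm clipping) target exactly this failure mode, which is why, as the survey notes, newer attacks avoid such conspicuous amplification.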

    Meta-learning algorithms and applications

    Meta-learning in the broader context concerns how an agent learns about its own learning, allowing it to improve its learning process. Learning how to learn is not only beneficial for humans; it has also shown vast benefits for improving how machines learn. In the context of machine learning, meta-learning enables models to improve their learning process by selecting suitable meta-parameters that influence the learning. For deep learning specifically, the meta-parameters typically describe details of the training of the model but can also include a description of the model itself: the architecture. Meta-learning is usually done with specific goals in mind, for example improving the ability to generalize or to learn new concepts from only a few examples. Meta-learning can be powerful, but it comes with a key downside: it is often computationally costly. If these costs were alleviated, meta-learning could become more accessible to developers of new artificial intelligence models, allowing them to achieve greater goals or save resources. As a result, one key focus of our research is on significantly improving the efficiency of meta-learning. We develop two approaches: EvoGrad and PASHA, both of which significantly improve meta-learning efficiency in two common scenarios. EvoGrad allows us to efficiently optimize the value of a large number of differentiable meta-parameters, while PASHA enables us to efficiently optimize any type of meta-parameters, but fewer in number. Meta-learning is a tool that can be applied to solve various problems. Most commonly it is applied to learning new concepts from only a small number of examples (few-shot learning), but other applications exist too. To showcase the practical impact that meta-learning can make in the context of neural networks, we use meta-learning as a novel solution for two selected problems: more accurate uncertainty quantification (calibration) and general-purpose few-shot learning.
Both are practically important problems, and using meta-learning approaches we can obtain better solutions than those obtained using existing approaches. Calibration is important for safety-critical applications of neural networks, while general-purpose few-shot learning tests a model's ability to generalize few-shot learning abilities across diverse tasks such as recognition, segmentation and keypoint estimation. More efficient algorithms, as well as novel applications, enable the field of meta-learning to make a more significant impact on the broader area of deep learning and potentially solve problems that were too challenging before. Ultimately, both allow us to better utilize the opportunities that artificial intelligence presents.
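
As a rough illustration of budget-aware meta-parameter search in the spirit of successive halving (the family of schedulers that PASHA builds on), the sketch below tunes a single meta-parameter, the learning rate, against a synthetic score function. It is a generic illustration, not PASHA or EvoGrad itself, and the score function is made up.

```python
import math

def train_score(lr, budget):
    """Stand-in for validation accuracy after `budget` units of training
    at learning rate `lr`. Purely synthetic: it peaks near lr = 0.1."""
    return (1 - math.exp(-budget / 4)) * math.exp(-(math.log10(lr) + 1) ** 2)

def successive_halving(candidates, budget=1, rounds=3):
    """Keep the better half of meta-parameter candidates each round,
    doubling the training budget for the survivors."""
    for _ in range(rounds):
        ranked = sorted(candidates, key=lambda lr: train_score(lr, budget), reverse=True)
        candidates = ranked[: max(1, len(ranked) // 2)]
        budget *= 2
    return candidates[0]

best = successive_halving([1.0, 0.3, 0.1, 0.03, 0.01, 0.003, 0.001, 0.0003])
```

The efficiency gain comes from spending large budgets only on configurations that already look promising at small budgets, rather than training every candidate to completion.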

    Linking language and emotion: how emotion is understood in language comprehension, production and prediction using psycholinguistic methods

    Emotions are an integral part of why and how we use language in everyday life. We communicate our concerns, express our woes, and share our joy through the use of non-verbal and verbal language. Yet there is a limited understanding of when and how emotional language is processed differently to neutral language, or of how emotional information facilitates or inhibits language processing. Indeed, various efforts have been made over the last decade to bring emotion back into the discipline of psycholinguistics. This can be seen in many interdisciplinary models focusing on the role played by emotion in each aspect of linguistic experience. In this thesis, I answer this call and pursue questions that remain unanswered in psycholinguistics regarding language's interaction with emotion. The general approach that I take to bring emotion into psycholinguistic research is straightforward. Where applicable and relevant, I use well-established tasks or paradigms to investigate the effects of emotional content in language processing. Hence, I focused on three main areas of language processing: comprehension, production and prediction. The first experimental chapter includes a series of experiments utilising the Modality Switching Paradigm to investigate whether sentences describing emotional states are processed differently from sentences describing cognitive states. No switching effects were found consistently across my three experiments. My results suggest that these distinct classes of interoceptive concepts, such as ‘thinking’ or ‘being happy’, are not processed differently from each other, suggesting that people do not switch attention between different interoceptive systems when comprehending emotional or cognitive sentences. I discuss the implications for grounded cognition theory in the embodiment literature.
In my second experimental chapter, I used the Cumulative Semantic Interference Paradigm to investigate two questions: (1) whether emotion concepts interfere with one another when repeatedly retrieved (emotion label objects), and (2) whether similar interference occurs for concrete objects that share similar valence associations (emotion-laden objects). This could indicate that people use information such as valence and arousal to group objects in semantic memory. I found that interference occurs when people retrieve direct emotion labels repeatedly (e.g., “happy” and “sad”) but not when they retrieve the names of concrete objects that have similar emotion connotations (e.g., “puppy” and “rainbow”). I discuss my findings in terms of the different types of information that support representation of abstract vs. concrete concepts. In my final experimental chapter, I used the Visual World Paradigm to investigate whether the emotional state of an agent is used to inform predictions during sentence processing. I found that people do use the description of the emotional state of an agent (e.g., “The boy is happy”) to predict the cause of that affective state during sentence processing (e.g., “because he was given an ice-cream”). A key result here is that people were more likely to fixate on emotionally congruent objects (e.g., ice-cream) than on incongruent objects (e.g., broccoli). This suggests that people rapidly and automatically inform predictions about upcoming sentence information based on the emotional state of the agent. I discuss my findings as a novel contribution to the Visual World literature. I conducted a diverse set of experiments using a range of established psycholinguistic methods to investigate the roles of emotional information in language processing. I found clear results in the eye-tracking study but inconsistent effects in both the switching and interference studies.
I interpret these mixed findings in the following way: emotional content does not always have an effect in language processing, and effects are most likely in tasks that explicitly require participants to simulate emotional states in some way. Regardless, not only was I successful in finding some novel results by extending previous tasks, but I was also able to show that this is an avenue that can be explored further to advance the field of affective psycholinguistics.

    An examination of the verbal behaviour of intergroup discrimination

    This thesis examined relationships between psychological flexibility, psychological inflexibility, prejudicial attitudes, and dehumanization across three cross-sectional studies, with an additional proposed experimental study. Psychological flexibility refers to mindful attention to the present moment, willing acceptance of private experiences, and engaging in behaviours congruent with one’s freely chosen values. Inflexibility, on the other hand, indicates a tendency to suppress unwanted thoughts and emotions, entanglement with one’s thoughts, and rigid behavioural patterns. Study 1 found limited correlations between inflexibility and sexism, racism, homonegativity, and dehumanization. Study 2 demonstrated more consistent positive associations between inflexibility and prejudice. Study 3 controlled for right-wing authoritarianism and social dominance orientation, finding that inflexibility predicted hostile sexism and racism beyond these factors. While showing some relationships, particularly with sexism and racism, psychological inflexibility did not consistently correlate with varied prejudices across studies. The proposed randomized controlled trial aims to evaluate an Acceptance and Commitment Therapy intervention to reduce sexism through enhanced psychological flexibility. Overall, the findings provide mixed support for the utility of flexibility-based skills in addressing complex societal prejudices. Research should continue examining flexibility integrated with socio-cultural approaches to promote equity.

    Improving Cross-Lingual Transfer Learning for Event Detection

    The widespread adoption of applications powered by Artificial Intelligence (AI) backbones has unquestionably changed the way we interact with the world around us. Applications such as automated personal assistants, automatic question answering, and machine-based translation systems have become mainstays of modern culture thanks to the recent considerable advances in Natural Language Processing (NLP) research. Nonetheless, with over 7,000 spoken languages in the world, there still remain a considerable number of marginalized communities that are unable to benefit from these technological advancements, largely due to the language they speak. Cross-Lingual Learning (CLL) looks to address this issue by transferring the knowledge acquired from a popular, high-resource source language (e.g., English, Chinese, or Spanish) to a less favored, lower-resourced target language (e.g., Urdu or Swahili). This dissertation leverages the Event Detection (ED) sub-task of Information Extraction (IE) as a testbed and presents three novel approaches that improve cross-lingual transfer learning from distinct perspectives: (1) direct knowledge transfer, (2) hybrid knowledge transfer, and (3) few-shot learning.

    Self-supervised learning for transferable representations

    Machine learning has undeniably achieved remarkable advances thanks to large labelled datasets and supervised learning. However, this progress is constrained by the labour-intensive annotation process. It is not feasible to generate extensive labelled datasets for every problem we aim to address. Consequently, there has been a notable shift in recent times toward approaches that solely leverage raw data. Among these, self-supervised learning has emerged as a particularly powerful approach, offering scalability to massive datasets and showcasing considerable potential for effective knowledge transfer. This thesis investigates self-supervised representation learning with a strong focus on computer vision applications. We provide a comprehensive survey of self-supervised methods across various modalities, introducing a taxonomy that categorises them into four distinct families while also highlighting practical considerations for real-world implementation. From there, our focus is on the computer vision modality, where we perform a comprehensive benchmark evaluation of state-of-the-art self-supervised models across many diverse downstream transfer tasks. Our findings reveal that self-supervised models often outperform supervised learning across a spectrum of tasks, albeit with correlations weakening as tasks transition beyond classification, particularly for datasets with distribution shifts. Digging deeper, we investigate the influence of data augmentation on the transferability of contrastive learners, uncovering a trade-off between spatial and appearance-based invariances that generalise to real-world transformations. This begins to explain the differing empirical performances achieved by self-supervised learners on different downstream tasks, and it showcases the advantages of specialised representations produced with tailored augmentation.
Finally, we introduce a novel self-supervised pre-training algorithm for object detection, aligning pre-training with downstream architecture and objectives, leading to reduced localisation errors and improved label efficiency. In conclusion, this thesis contributes a comprehensive understanding of self-supervised representation learning and its role in enabling effective transfer across computer vision tasks.
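
For readers unfamiliar with the contrastive learners discussed above, a minimal InfoNCE-style loss can be sketched as follows. The vectors, similarity function, and temperature are illustrative stand-ins for real embedding networks and augmented views.

```python
import math

def info_nce(anchor, candidates, temperature=0.1):
    """Toy InfoNCE loss: `candidates[0]` is the positive (an augmented
    view of the anchor), the rest are negatives. Vectors are plain
    lists; similarity is the dot product scaled by a temperature."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    logits = [dot(anchor, z) / temperature for z in candidates]
    m = max(logits)  # subtract the max for numerical stability
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_sum)  # negative log-softmax of the positive

anchor = [1.0, 0.0]
views = [[0.9, 0.1], [-1.0, 0.0], [0.0, 1.0]]  # positive first, then negatives
loss = info_nce(anchor, views)
# Loss is near zero because the positive is far more similar to the
# anchor than either negative; it grows as the views drift apart.
```

Because the loss rewards invariance to whatever transformations generated the positive pair, the choice of augmentations directly shapes which invariances (spatial vs. appearance) the learned representation acquires, which is the trade-off examined above.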

    Location Reference Recognition from Texts: A Survey and Comparison

    A vast amount of location information exists in unstructured texts, such as social media posts, news stories, scientific articles, web pages, travel blogs, and historical archives. Geoparsing refers to recognizing location references from texts and identifying their geospatial representations. While geoparsing can benefit many domains, a summary of its specific applications is still missing. Further, there is a lack of a comprehensive review and comparison of existing approaches for location reference recognition, which is the first and core step of geoparsing. To fill these research gaps, this review first summarizes seven typical application domains of geoparsing: geographic information retrieval, disaster management, disease surveillance, traffic management, spatial humanities, tourism management, and crime management. We then review existing approaches for location reference recognition by categorizing these approaches into four groups based on their underlying functional principle: rule-based, gazetteer matching–based, statistical learning–based, and hybrid approaches. Next, we thoroughly evaluate the correctness and computational efficiency of the 27 most widely used approaches for location reference recognition based on 26 public datasets with different types of texts (e.g., social media posts and news stories) containing 39,736 location references worldwide. Results from this thorough evaluation can help inform future methodological developments and can help guide the selection of proper approaches based on application needs.
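
Of the four groups, gazetteer matching is the simplest to illustrate: a longest-match lookup of token n-grams against a list of place names. The tiny gazetteer and sentence below are purely illustrative.

```python
# Minimal gazetteer-matching sketch for location reference recognition.
# Multi-word names are preferred over their sub-spans ("new york" over
# "york") by trying the longest n-gram first at each position.

GAZETTEER = {"new york", "york", "paris", "london"}

def recognize_locations(text, max_ngram=3):
    tokens = text.lower().replace(",", "").split()
    found, i = [], 0
    while i < len(tokens):
        for n in range(min(max_ngram, len(tokens) - i), 0, -1):  # longest first
            candidate = " ".join(tokens[i:i + n])
            if candidate in GAZETTEER:
                found.append(candidate)
                i += n  # skip past the matched span
                break
        else:
            i += 1  # no match at this position
    return found

matches = recognize_locations("Flights from New York to Paris were delayed")
```

This simplicity is also the approach's weakness: it cannot resolve ambiguous names ("Paris, Texas") or recognize places missing from the gazetteer, which is where the statistical and hybrid approaches reviewed above come in.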

    A cost focused framework for optimizing collection and annotation of ultrasound datasets

    Machine learning for medical ultrasound imaging encounters a major challenge: the prohibitive costs of producing and annotating clinical data. The issue of cost versus size is well understood in the context of clinical trials, and the same methods can be applied to optimize the data collection and annotation process, ultimately reducing machine learning project cost and time in feasibility studies. This paper presents a two-phase framework for quantifying the cost of data collection, using iterative accuracy/sample-size predictions and active learning to guide and optimize full human annotation in medical ultrasound imaging for machine learning purposes. The paper demonstrates potential cost reductions using public breast, fetal, and lung ultrasound datasets and a practical case study on Breast Ultrasound. The results show that, just as with clinical trials, the relationship between dataset size and final accuracy can be predicted, with the majority of accuracy improvements occurring using only 40-50% of the data, depending on the tolerance measure. Manual annotation can be reduced further using active learning, resulting in a representative cost reduction of 66% with a tolerance measure of around a 4% accuracy drop from the theoretical maximum. The significance of this work lies in its ability to quantify how much additional data and annotation will be required to achieve a specific research objective. These methods are already well understood by clinical funders and so provide a valuable and effective framework for feasibility and pilot studies where machine learning will be applied within a fixed budget to maximize predictive gains, informing resourcing and further clinical study.
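
The idea of predicting final accuracy from pilot subsets is commonly modelled with an inverse power-law learning curve, acc(n) ≈ a - b * n**(-c). The sketch below fits such a curve to made-up pilot results and extrapolates to a larger dataset size; it is a generic illustration of the technique, not the paper's exact procedure, and all numbers are invented.

```python
# Fit acc(n) = a - b * n**(-c) to pilot (size, accuracy) pairs by
# grid-searching the exponent c and solving a, b by least squares,
# then extrapolate to estimate accuracy at a larger dataset size.

def fit_power_law(sizes, accs):
    """Return (a, b, c) minimising squared error over a grid of c."""
    best = None
    for c in [x / 100 for x in range(5, 151)]:  # c in [0.05, 1.50]
        xs = [n ** (-c) for n in sizes]
        k = len(xs)
        mx, my = sum(xs) / k, sum(accs) / k
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, accs))
        b = -sxy / sxx              # slope of acc vs n^(-c) is -b
        a = my + b * mx             # intercept gives the asymptote a
        err = sum((a - b * x - y) ** 2 for x, y in zip(xs, accs))
        if best is None or err < best[0]:
            best = (err, a, b, c)
    return best[1], best[2], best[3]

sizes = [100, 200, 400, 800]          # invented pilot subset sizes
accs = [0.70, 0.78, 0.83, 0.86]       # invented pilot accuracies
a, b, c = fit_power_law(sizes, accs)
predicted_full = a - b * 2000 ** (-c)  # extrapolate to the full dataset
```

The fitted asymptote `a` bounds the achievable accuracy, so the gap between the extrapolated value and `a` quantifies how much a given amount of extra data and annotation can still buy, which is the budgeting question the framework addresses.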