14 research outputs found

    Negational symmetry of quantum neural networks for binary pattern classification

    Although quantum neural networks (QNNs) have recently shown promising results on simple machine learning tasks, the behavior of QNNs in binary pattern classification is still underexplored. In this work, we find that QNNs have an Achilles’ heel in binary pattern classification. To illustrate this point, we provide a theoretical insight into the properties of QNNs by presenting and analyzing a new form of symmetry embedded in a family of QNNs with full entanglement, which we term negational symmetry. Due to negational symmetry, QNNs cannot differentiate between a quantum binary signal and its negational counterpart. We empirically evaluate the negational symmetry of QNNs in binary pattern classification tasks using Google’s quantum computing framework. Both theoretical and experimental results suggest that negational symmetry is a fundamental property of QNNs that is not shared by classical models. Our findings also imply that negational symmetry is a double-edged sword in practical quantum applications.
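    A minimal sketch of how this symmetry can be probed numerically, assuming Cirq (Google's open-source quantum framework) is installed; the basis encoding and the fully entangling ZZ layer below are illustrative choices, not the paper's exact architecture:

```python
import cirq
import numpy as np

n = 4                                  # toy pattern length (illustrative)
data = cirq.LineQubit.range(n)
readout = cirq.LineQubit(n)

def encode(bits):
    # Basis encoding: flip the data qubits where the pattern is 1.
    return [cirq.X(q) for q, b in zip(data, bits) if b]

def ansatz(thetas):
    # Illustrative fully entangling layer: a ZZ coupling from every
    # data qubit to the readout qubit.
    return [cirq.ZZPowGate(exponent=t).on(q, readout) for q, t in zip(data, thetas)]

def readout_expectation(bits, thetas):
    circuit = cirq.Circuit(
        encode(bits),
        cirq.H(readout),   # prepare the readout in |+>
        ansatz(thetas),
        cirq.H(readout),   # rotate back so <Z> captures the accumulated phase
    )
    value = cirq.Simulator().simulate_expectation_values(
        circuit, observables=[cirq.Z(readout)])[0]
    return value.real

thetas = np.random.default_rng(0).uniform(-1.0, 1.0, size=n)
pattern = [1, 0, 1, 1]
negated = [1 - b for b in pattern]

# For this toy circuit the two values coincide: negating every input bit only
# flips the sign of the phase accumulated on the readout, and <Z> is even in
# that phase, a small instance of the negational symmetry described above.
print(readout_expectation(pattern, thetas))
print(readout_expectation(negated, thetas))
```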

    zkFL: Zero-Knowledge Proof-based Gradient Aggregation for Federated Learning

    Federated Learning (FL) is a machine learning paradigm that enables multiple decentralized clients to collaboratively train a model under the orchestration of a central aggregator. Traditional FL solutions rely on a trust assumption about the centralized aggregator, namely that it forms cohorts of clients in a fair and honest manner. In reality, however, a malicious aggregator could abandon or replace the clients' training models, or launch Sybil attacks to insert fake clients. Such malicious behaviors give the aggregator more power to control clients in the FL setting and to determine the final training results. In this work, we introduce zkFL, which leverages zero-knowledge proofs (ZKPs) to tackle the issue of a malicious aggregator during the model aggregation process. To guarantee correct aggregation results, the aggregator needs to provide a proof per round, which demonstrates to the clients that the aggregator executed the intended behavior faithfully. To further reduce the verification cost on the clients, we employ a blockchain to handle the proof in a zero-knowledge way, where miners (i.e., the nodes validating and maintaining the blockchain data) can verify the proof without knowing the clients' local and aggregated models. Theoretical analysis and empirical results show that zkFL achieves better security and privacy than traditional FL, without modifying the underlying FL network structure or heavily compromising the training speed.
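    A schematic sketch of the per-round flow in plain Python; the commit/verify pair below is a transparent placeholder (hash commitments plus recomputation) standing in for a real zero-knowledge proof system, and all names are hypothetical:

```python
import hashlib
import numpy as np

def commit(update: np.ndarray) -> str:
    # Placeholder commitment: a hash of the serialized model update.
    return hashlib.sha256(update.tobytes()).hexdigest()

class Aggregator:
    def aggregate(self, updates):
        # The honest behavior the per-round proof is supposed to attest to:
        # average the client updates committed for this round.
        aggregated = np.mean(updates, axis=0)
        proof = {
            "commitments": [commit(u) for u in updates],
            "claimed_aggregate": commit(aggregated),
        }
        return aggregated, proof

def verify(proof, updates, aggregated) -> bool:
    # Stand-in for verification: real zkFL miners would check a succinct ZKP
    # without ever seeing the clients' local or aggregated models.
    if [commit(u) for u in updates] != proof["commitments"]:
        return False
    return commit(np.mean(updates, axis=0)) == proof["claimed_aggregate"] == commit(aggregated)

updates = [np.random.default_rng(i).normal(size=8) for i in range(3)]
aggregated, proof = Aggregator().aggregate(updates)
print(verify(proof, updates, aggregated))   # True for an honest aggregator
```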

    FLock: Defending Malicious Behaviors in Federated Learning with Blockchain

    Federated learning (FL) is a promising way to allow multiple data owners (clients) to collaboratively train machine learning models without compromising data privacy. Yet, existing FL solutions usually rely on a centralized aggregator for model weight aggregation, while assuming clients are honest. Even if data privacy can still be preserved, the problems of single-point failure and data poisoning attacks from malicious clients remain unresolved. To tackle this challenge, we propose to use distributed ledger technology (DLT) to build FLock, a secure and reliable decentralized Federated Learning system on blockchain. To guarantee model quality, we design a novel peer-to-peer (P2P) review and reward/slash mechanism to detect and deter malicious clients, powered by on-chain smart contracts. The reward/slash mechanism also serves as an incentive for participants to honestly upload and review model parameters in the FLock system. FLock thus improves the performance and the robustness of FL systems in a fully P2P manner. Comment: Accepted by NeurIPS 2022 Workshop.
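    A toy, off-chain simulation of the review and reward/slash idea; the stake amounts, quorum threshold, and class names below are hypothetical, and the on-chain smart-contract logic is only mimicked in ordinary Python:

```python
from dataclasses import dataclass, field

@dataclass
class Participant:
    name: str
    stake: float = 100.0          # hypothetical stake locked in the contract

@dataclass
class ProposedUpdate:
    proposer: Participant
    votes: dict = field(default_factory=dict)   # reviewer name -> approve?

def settle(update, reviewers, reward=5.0, slash=20.0, quorum=0.5):
    """Mimic on-chain settlement: reward honest behavior, slash the rest."""
    approvals = sum(update.votes.get(r.name, False) for r in reviewers)
    accepted = approvals / len(reviewers) > quorum
    if accepted:
        update.proposer.stake += reward
    else:
        update.proposer.stake -= slash        # deter poisoned or lazy updates
    for r in reviewers:
        voted_with_majority = update.votes.get(r.name, False) == accepted
        r.stake += reward if voted_with_majority else -slash
    return accepted

alice, bob, carol, dave = (Participant(n) for n in ["alice", "bob", "carol", "dave"])
update = ProposedUpdate(proposer=alice, votes={"bob": True, "carol": True, "dave": False})
print(settle(update, reviewers=[bob, carol, dave]))        # True: update accepted
print({p.name: p.stake for p in [alice, bob, carol, dave]})
```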

    Unsupervised Domain Adaptation for Automatic Estimation of Cardiothoracic Ratio

    The cardiothoracic ratio (CTR), a clinical metric of heart size in chest X-rays (CXRs), is a key indicator of cardiomegaly. Manual measurement of CTR is time-consuming and can be affected by human subjectivity, making it desirable to design computer-aided systems that assist clinicians in the diagnosis process. Automatic CTR estimation through chest organ segmentation, however, requires large amounts of pixel-level annotated data, which are often unavailable. To alleviate this problem, we propose an unsupervised domain adaptation framework based on adversarial networks. The framework learns domain-invariant feature representations from openly available data sources to produce accurate chest organ segmentation for unlabeled datasets. Specifically, we propose a model that enforces our intuition that prediction masks should be domain independent; hence, we introduce a discriminator that distinguishes segmentation predictions from ground truth masks. We evaluate our system's predictions against the assessment of radiologists and demonstrate the clinical practicability for the diagnosis of cardiomegaly. We finally illustrate on the JSRT dataset that the semi-supervised performance of our model is also very promising. Comment: Accepted by MICCAI 2018.
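    Once heart and lung masks are available, the CTR itself reduces to a ratio of two widths. A small numpy sketch, assuming binary masks (1 inside the organ) and approximating the thoracic diameter by the width of the lung fields; the function names are illustrative:

```python
import numpy as np

def horizontal_extent(mask: np.ndarray) -> int:
    """Widest horizontal span (in pixels) covered by a binary mask."""
    cols = np.where(mask.any(axis=0))[0]
    return int(cols.max() - cols.min() + 1) if cols.size else 0

def cardiothoracic_ratio(heart_mask: np.ndarray, lung_mask: np.ndarray) -> float:
    # CTR = maximal horizontal cardiac diameter / maximal horizontal thoracic diameter.
    return horizontal_extent(heart_mask) / horizontal_extent(lung_mask)

# Tiny synthetic example; a CTR above roughly 0.5 is the usual cardiomegaly flag.
heart = np.zeros((10, 20), dtype=np.uint8); heart[4:8, 7:14] = 1
lungs = np.zeros((10, 20), dtype=np.uint8); lungs[2:9, 2:18] = 1
print(cardiothoracic_ratio(heart, lungs))    # 7 / 16 = 0.4375
```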

    Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data

    The data-driven nature of deep learning (DL) models for semantic segmentation requires a large number of pixel-level annotations. However, large-scale, fully labeled medical datasets are often unavailable for practical tasks. Recently, partially supervised methods have been proposed to utilize images with incomplete labels in the medical domain. To bridge the methodological gaps in partially supervised learning (PSL) under data scarcity, we propose Vicinal Labels Under Uncertainty (VLUU), a simple yet efficient framework that utilizes the structural similarity of human anatomy for partially supervised medical image segmentation. Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels. We systematically evaluate VLUU under the challenges of small-scale data, dataset shift, and class imbalance on two commonly used segmentation datasets, for the tasks of chest organ segmentation and optic disc-and-cup segmentation. The experimental results show that VLUU consistently outperforms previous partially supervised models in these settings. Our research suggests a new research direction in label-efficient deep learning with partial supervision. Comment: Accepted by Applied Soft Computing.
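    A rough sketch of the vicinal-label idea in numpy: two partially labeled examples whose annotated structures complement each other are mixed into one fully labeled vicinal example. This is a mixup-style illustration under stated assumptions, not the paper's exact algorithm, and all names are hypothetical:

```python
import numpy as np

def vicinal_example(img_a, labels_a, img_b, labels_b, lam=0.5):
    """Mix two partially labeled examples into one fully labeled vicinal example.

    labels_* map structure name -> binary mask and are assumed to cover
    complementary subsets of the structures of interest.
    """
    mixed_img = lam * img_a + (1.0 - lam) * img_b
    vicinal_labels = {}
    for name in set(labels_a) | set(labels_b):
        # A structure missing from one example falls back to an all-background
        # mask here; the paper treats this uncertainty more carefully.
        mask_a = labels_a.get(name, np.zeros_like(img_a))
        mask_b = labels_b.get(name, np.zeros_like(img_b))
        vicinal_labels[name] = lam * mask_a + (1.0 - lam) * mask_b
    return mixed_img, vicinal_labels

rng = np.random.default_rng(0)
img_a, img_b = rng.random((2, 64, 64))
labels_a = {"heart": (rng.random((64, 64)) > 0.8).astype(float)}   # only the heart labeled
labels_b = {"lungs": (rng.random((64, 64)) > 0.5).astype(float)}   # only the lungs labeled
img, labels = vicinal_example(img_a, labels_a, img_b, labels_b)
print(sorted(labels))                                              # ['heart', 'lungs']
```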

    Learning with partial labels: an exploratory study of data scarcity for medical images

    The data-driven nature of deep learning models is often associated with the use of large-scale, fully labeled datasets. However, the high cost of annotation often makes these datasets unavailable for standard supervised learning. This thesis explores an emerging learning paradigm called partially supervised learning (PSL), which deals with partially labeled examples. Unlike semi-supervised learning, where an example is either fully labeled or unlabeled, PSL addresses a similar but distinct problem: each example is labeled for a subset of sub-tasks within the context of multi-task learning. The utilization of multiple small-scale partially labeled datasets in PSL is still a challenging research question. Therefore, in this thesis, I provide a formal definition of PSL for the first time. Partially labeled datasets are frequently encountered in the medical field due to the high cost of annotation and the specific clinical objectives they serve. In this regard, I use partially supervised multi-label classification (PSMLC) on medical images as the task of interest to exemplify the aforementioned concept. This thesis aims to offer empirical insights into enhancing the performance of deep learning models trained on small-scale partially labeled datasets by presenting the following methodological advancements under four different PSL setups.

    This thesis first proposes a practical solution to PSL under data scarcity based on vicinal risk minimization (VRM). While VRM was originally designed as a simple technique to enhance the generalization ability of deep learning models in supervised learning, I demonstrate that, by leveraging the principle of maximum entropy, VRM can be successfully applied to PSL. Through experimental evidence, I illustrate that VRM shows advantages over semi-supervised learning methods in both accuracy and computational complexity, particularly in practical and challenging scenarios. This thesis represents the first application of VRM to PSL, offering both theoretical insights and empirical findings. Furthermore, I provide an explanation for the limitations of semi-supervised learning methods in this context.

    Fueled by the recent renaissance of self-supervised learning, this thesis also explores the potential of utilizing unlabeled data to enhance the performance of deep learning models trained on partially labeled data. While self-supervised learning has played a crucial role in pre-training deep learning models for fully supervised downstream tasks, its impact on downstream tasks involving only partially labeled data remains unclear. Therefore, this thesis presents the first empirical study investigating the contributions of self-supervised learning to PSL problems and proposes a novel pretext task based on VRM.

    While previous studies have focused on situations where partially labeled datasets are collected in a centralized fashion or can be accessed remotely without communication constraints, this thesis takes a step further and discusses situations where partially labeled datasets are decentralized under data governance constraints. Specifically, the partially labeled datasets are stored separately on different clients within a federated system, and the raw data cannot be exchanged between the clients or uploaded to the central server. This motivates the definition of a new research problem called federated partially supervised learning (FPSL). In this thesis, I theoretically discuss the challenges of FPSL and provide a robust federated solution to address this problem.

    Moreover, I observe that in practical scenarios, classes often follow a long-tailed distribution, which means that, in addition to common classes, there are also rare classes. Consequently, I discuss a special case of FPSL known as federated partially supervised learning with underrepresented classes. In this case, a few clients store a limited number of partially labeled examples related to the rare classes, while the other clients have a much larger number of partially labeled examples related to common classes. To address this extreme class imbalance, I propose, for the first time, an energy-based federated solution that effectively learns to recognize the rare classes without compromising the performance on common classes.

    In summary, this thesis explores four unexplored research directions of PSL under data scarcity. For each of the four research directions, a novel solution is proposed to address the problem of interest and evaluated with simulated experiments. The empirical results not only provide insights for understanding PSL but also pose new research directions on PSL.
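    For the PSMLC setting above, the core mechanical difference from fully supervised training is that the loss must ignore label positions that were never annotated. A minimal PyTorch-style sketch, assuming a multi-label classifier with C outputs and a per-example binary mask marking which labels are annotated; the names are illustrative, not the thesis's exact implementation:

```python
import torch
import torch.nn.functional as F

def partially_supervised_bce(logits, targets, label_mask):
    """Binary cross-entropy over annotated label positions only.

    logits, targets, label_mask: tensors of shape (batch, C); label_mask is 1
    where a label was actually annotated and 0 where it is missing.
    """
    per_label = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    per_label = per_label * label_mask
    # Average over annotated entries only, guarding against an empty mask.
    return per_label.sum() / label_mask.sum().clamp_min(1.0)

# Toy batch: 3 chest X-rays, 4 candidate findings, each labeled for a subset.
logits = torch.randn(3, 4)
targets = torch.tensor([[1., 0., 0., 0.],
                        [0., 0., 1., 0.],
                        [0., 1., 0., 0.]])
label_mask = torch.tensor([[1., 1., 0., 0.],   # only findings 0-1 annotated
                           [0., 0., 1., 1.],   # only findings 2-3 annotated
                           [1., 0., 1., 0.]])
print(partially_supervised_bce(logits, targets, label_mask))
```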