FedABC: Targeting Fair Competition in Personalized Federated Learning
Federated learning aims to collaboratively train models without accessing
clients' local private data. The data may be Non-IID across clients, which
results in poor performance. Recently, personalized federated learning (PFL)
has achieved great success in handling Non-IID data by enforcing
regularization in local optimization or improving the model aggregation
scheme on the server. However, most PFL approaches do not take into account
the unfair competition caused by imbalanced data distributions and the lack
of positive samples for some classes in each client. To address this issue,
we propose a novel and generic PFL framework, Federated Averaging via Binary
Classification, dubbed FedABC. In particular, we adopt a ``one-vs-all''
training strategy in each client to alleviate the unfair competition between
classes by constructing a personalized binary classification problem for
each class. Since this may aggravate the class-imbalance challenge, we
design a novel personalized binary classification loss that incorporates
both under-sampling and hard-sample-mining strategies. Extensive experiments
on two popular datasets under different settings demonstrate that FedABC
significantly outperforms existing counterparts.
Comment: 9 pages, 5 figures
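The abstract does not give the loss in closed form; below is a minimal sketch of a ``one-vs-all'' binary loss combining negative under-sampling with hard-negative mining. The function name, the sampling ratio, and every other detail are our illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def one_vs_all_loss(scores, labels, neg_keep=0.25):
    """Per-class binary cross-entropy with hard-negative mining.

    For each class k, a binary problem is built: samples of class k are
    positives, all others negatives. Only the hardest `neg_keep` fraction
    of negatives (largest loss) is kept, which under-samples easy
    negatives. Illustrative sketch only.
    """
    n, c = scores.shape
    probs = 1.0 / (1.0 + np.exp(-scores))        # per-class sigmoid
    total = 0.0
    for k in range(c):
        pos = labels == k
        neg = ~pos
        # every positive sample contributes
        pos_loss = -np.log(probs[pos, k] + 1e-12)
        # hard-negative mining: keep the largest negative losses
        neg_losses = -np.log(1.0 - probs[neg, k] + 1e-12)
        keep = max(1, int(neg_keep * neg_losses.size))
        hard = np.sort(neg_losses)[-keep:]
        total += pos_loss.sum() + hard.sum()
    return total / n
```

Keeping only the highest-loss negatives focuses each binary problem on confusable samples and counteracts the positive/negative imbalance that one-vs-all training itself creates.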
Improving Heterogeneous Model Reuse by Density Estimation
This paper studies multiparty learning, aiming to learn a model using the
private data of different participants. Model reuse is a promising solution for
multiparty learning, assuming that a local model has been trained for each
party. Considering the potential sample selection bias among different parties,
some heterogeneous model reuse approaches have been developed. However,
although pre-trained local classifiers are utilized in these approaches, the
characteristics of the local data are not well exploited. This motivates us to
estimate the density of local data and design an auxiliary model together with
the local classifiers for reuse. To address the scenarios where some local
models are not well pre-trained, we further design a multiparty cross-entropy
loss for calibration. Building on existing works, we address the challenging problem of
heterogeneous model reuse from a decision theory perspective and take advantage
of recent advances in density estimation. Experimental results on both
synthetic and benchmark data demonstrate the superiority of the proposed
method.
Comment: 9 pages, 5 figures. Accepted by IJCAI 202
Focus+Context visualization based on optimal mass transportation
On display devices with limited resolution, Focus+Context techniques can be used to visualize large and complex models. We propose a Focus+Context visualization method based on optimal mass transportation. Through an optimal mass transport map, the volume is deformed onto itself, transforming the source measure (the voxels) into a target measure at minimal transport cost; solving the optimal mass transport problem is equivalent to a convex optimization, which reduces to the classical power Voronoi diagram computation in computational geometry. Compared with existing methods, the proposed method has a solid theoretical foundation guaranteeing the existence, uniqueness, and smoothness of the solution; it allows users to precisely control the target measure and to select multiple irregularly shaped focus regions, and the resulting deformation is globally smooth and can be freely inverted. Experiments on several volume datasets from medical applications and scientific simulations demonstrate that the proposed method is both effective and efficient.
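The volumetric transport map itself requires a power Voronoi solver, but a drastically simplified one-dimensional analogue illustrates the idea: for convex costs, the optimal map between two 1D point sets is the monotone (sorted) matching. This toy code is ours, not the paper's method:

```python
import numpy as np

def ot_map_1d(src, dst):
    """Optimal 1D transport between equal-size point sets.

    For convex costs (e.g. squared distance) the optimal assignment
    pairs the i-th smallest source point with the i-th smallest target
    point. Returns `mapping` with src point i sent to dst[mapping[i]].
    """
    order_s = np.argsort(src)
    order_d = np.argsort(dst)
    mapping = np.empty_like(order_s)
    mapping[order_s] = order_d
    return mapping
```

In the paper's setting the "points" are voxel measures and the matching becomes a continuous, globally smooth volume deformation; the monotone structure is what the power-Voronoi computation generalizes to higher dimensions.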
Dual Mutual Information Constraints for Discriminative Clustering
Deep clustering is a fundamental task in machine learning and data mining that aims at learning clustering-oriented feature representations. In previous studies, most deep clustering methods follow the idea of self-supervised representation learning, maximizing the consistency of all similar instance pairs while ignoring the effect of feature redundancy on clustering performance. In this paper, to address this issue, we design a dual mutual information constrained clustering method named DMICC, built on a deep contrastive clustering architecture, in which the dual mutual information constraints are employed with solid theoretical guarantees and experimental validation. Specifically, at the feature level, we reduce redundancy among features by minimizing the mutual information across all dimensions, encouraging the neural network to extract more discriminative features. At the instance level, we maximize the mutual information of similar instance pairs to obtain more unbiased and robust representations. The two mutual information constraints are applied simultaneously and thus complement each other to jointly optimize features better suited to the clustering task. We also prove that the adopted mutual information constraints are superior for feature extraction, and that the proposed dual mutual information constraints are clearly bounded and thus solvable. Extensive experiments on five benchmark datasets show that our proposed approach outperforms most other clustering algorithms. The code is available at https://github.com/Li-Hyn/DMICC.
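For jointly Gaussian features, mutual information between two dimensions vanishes exactly when they are decorrelated, so a correlation-based score gives a tractable toy proxy for the feature-level redundancy that DMICC penalizes. This simplification is ours, not the paper's MI estimator:

```python
import numpy as np

def feature_redundancy(z):
    """Sum of squared off-diagonal correlations across feature dimensions.

    z has shape (n_samples, n_features). Duplicated or highly correlated
    feature dimensions inflate this score; independent dimensions keep it
    near zero. Used here only as an illustrative proxy for pairwise
    feature-level mutual information.
    """
    c = np.corrcoef(z, rowvar=False)
    off = c - np.diag(np.diag(c))        # zero out the diagonal
    return float((off ** 2).sum())
```

Minimizing such a score during training pushes the network toward non-redundant dimensions, which is the intuition behind the feature-level constraint.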
Efficient Interaction Recognition through Positive Action Representation
This paper proposes a novel approach that decomposes a two-person interaction into a Positive Action and a Negative Action for more efficient behavior recognition. The Positive Action plays the decisive role in a two-person exchange, so interaction recognition can be simplified to Positive Action-based recognition, focusing on an action representation of just one person. Recently, a new depth sensor, the Microsoft Kinect camera, has become widely available, providing RGB-D data with 3D spatial information for quantitative analysis. However, there are few publicly accessible test datasets captured with this camera for assessing two-person interaction recognition approaches. Therefore, we created a new dataset, named K3HI, with six types of complex human interactions: kicking, pointing, punching, pushing, exchanging an object, and shaking hands. Three types of features were extracted for each Positive Action: joint, plane, and velocity features. We used continuous Hidden Markov Models (HMMs) to evaluate both the Positive Action-based interaction recognition method and the traditional two-person interaction recognition approach on our test dataset. Experimental results show that the proposed technique is more accurate than the traditional method and shortens the sample training time, achieving comprehensive superiority.
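Recognition with per-class HMMs reduces to scoring a feature sequence under each class's model and taking the argmax. A minimal discrete-emission forward algorithm (a simplification of the continuous-observation HMMs used in the paper; names are ours) sketches the likelihood computation:

```python
import numpy as np

def forward_loglik(log_init, log_trans, log_emit):
    """Log-likelihood of an observation sequence under an HMM.

    log_init[s]     -- log P(first hidden state = s)
    log_trans[s, t] -- log P(next state = t | current state = s)
    log_emit[t, s]  -- log P(observation at time t | state = s)
    Runs the forward algorithm with log-sum-exp for stability.
    """
    alpha = log_init + log_emit[0]
    for t in range(1, log_emit.shape[0]):
        m = alpha.max()
        # sum over previous states for each current state, in log space
        alpha = m + np.log(np.exp(alpha - m) @ np.exp(log_trans)) + log_emit[t]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())
```

Classification then picks the interaction class whose trained HMM assigns the sequence the highest log-likelihood.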
Burn image segmentation based on Mask Regions with Convolutional Neural Network deep learning framework: more accurate and more convenient
Background: Burns are life-threatening, with high morbidity and mortality. Reliable diagnosis supported by accurate burn area and depth assessment is critical to the success of the treatment decision and, in some cases, can save the patient's life. Current techniques such as the straight-ruler method, the aseptic film trimming method, and digital camera photography are neither repeatable nor comparable, which leads to great differences in the judgment of burn wounds and impedes the establishment of a common evaluation criterion. Hence, in order to semi-automate the burn diagnosis process, reduce the impact of human error, and improve the accuracy of burn diagnosis, we incorporate deep learning into the diagnosis of burns.
Method: This article proposes a novel method employing a state-of-the-art deep learning technique to segment the burn wounds in images. We designed this segmentation framework based on the Mask Regions with Convolutional Neural Network (Mask R-CNN) architecture. For training, we labeled 1150 pictures in the format of the Common Objects in Context (COCO) data set and trained our model on 1000 of them. In the evaluation, we compared different backbone networks in our framework: Residual Network-101 with Atrous Convolution in Feature Pyramid Network (R101FA), Residual Network-101 with Atrous Convolution (R101A), and InceptionV2-Residual Network with Atrous Convolution (IV2RA). Finally, we used the Dice coefficient (DC) to assess model accuracy.
Result: The R101FA backbone network achieves the highest accuracy, 84.51%, on the 150 evaluation pictures. Moreover, we chose pictures of different burn depths to evaluate the three backbone networks. The R101FA backbone gives the best segmentation for superficial, superficial-thickness, and deep partial-thickness burns, while the R101A backbone gives the best segmentation for full-thickness burns.
Conclusion: This deep learning framework shows excellent segmentation of burn wounds and is extremely robust across different burn depths. Moreover, the framework requires only a suitable burn wound image for analysis, making it more convenient and better suited to clinical use than traditional methods. It also contributes to the calculation of the total body surface area (TBSA) burned.
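The Dice coefficient used for evaluation can be computed directly from two binary masks; a minimal sketch (function name is ours):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).

    Returns 1.0 for identical non-empty masks and approaches 0 for
    disjoint ones; eps guards against division by zero on empty masks.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```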
An MPCA/LDA Based Dimensionality Reduction Algorithm for Face Recognition
We propose a face recognition algorithm based on both multilinear principal component analysis (MPCA) and linear discriminant analysis (LDA). Compared with existing face recognition methods, our approach treats face images as multidimensional tensors in order to find the optimal tensor subspace for dimensionality reduction. LDA is then used to project samples into a new discriminant feature space, and the K-nearest-neighbor (KNN) rule is adopted for classification. The algorithm is validated on the ORL, FERET, and YALE face databases and compared with the PCA, MPCA, and PCA + LDA methods, demonstrating an improvement in face recognition accuracy.
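As a simplified baseline for the pipeline described, dimensionality reduction followed by KNN classification, here is a plain PCA + nearest-neighbor sketch (ordinary PCA on vectorized samples, not the paper's tensor-based MPCA/LDA combination; names are ours):

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD: returns the data mean and the top principal axes."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def knn_predict(train_z, train_y, test_z, k=1):
    """Majority-vote K-nearest-neighbor classification in reduced space."""
    preds = []
    for z in test_z:
        d = np.linalg.norm(train_z - z, axis=1)
        idx = np.argsort(d)[:k]
        vals, counts = np.unique(train_y[idx], return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)
```

MPCA differs by operating on the image tensor mode-by-mode instead of a flattened vector, and LDA further rotates the subspace to maximize class separability before the KNN step.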