
    Rigidity of 3D spherical caps via μ-bubbles

    By using Gromov's μ-bubble technique, we show that 3-dimensional spherical caps are rigid under perturbations that do not reduce the metric, the scalar curvature, or the mean curvature along the boundary. Several generalizations of this result are also discussed. Comment: 20 pages, 1 figure, all comments are welcome
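    As a hedged formalization of the comparison conditions (my reading, under the assumed normalization that the cap sits in the unit round 3-sphere, whose scalar curvature is 3 · 2 = 6; the paper's precise hypotheses may differ): for a cap Ω ⊂ S³ with round metric g₀, rigidity asserts

        \[
          g \ge g_0, \qquad R(g) \ge R(g_0) = 6, \qquad
          H_g(\partial\Omega) \ge H_{g_0}(\partial\Omega)
          \quad\Longrightarrow\quad g = g_0 \ \text{(up to isometry)}.
        \]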

    Rigidity and non-rigidity of ℍ^n/ℤ^{n-2} with scalar curvature bounded from below

    We show that the hyperbolic manifold ℍ^n/ℤ^{n-2} is not rigid under all compactly supported deformations that preserve the scalar curvature lower bound −n(n−1), and that it is rigid under deformations that are further constrained by certain topological conditions. In addition, we prove two related splitting results. Comment: 29 pages, 3 figures, all comments are welcome
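    For reference, the lower bound −n(n−1) is exactly the scalar curvature of the hyperbolic metric itself: a space form of constant sectional curvature κ has scalar curvature n(n−1)κ, so

        \[
          R\!\left(g_{\mathbb{H}^n}\right) \;=\; n(n-1)\cdot(-1) \;=\; -n(n-1),
        \]

    and the deformations considered preserve R(g) ≥ R(g_{ℍⁿ}).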

    The Annealing Temperature Effect on BTO and NZFO Films and Their Capacitance-Inductance Integrated Device

    In this paper, a novel capacitor-inductor integrated structure was proposed. The dielectric material BaTiO3 (BTO) and the ferromagnetic material Ni0.5Zn0.5Fe2O4 (NZFO) were prepared by the sol-gel method. The phase composition and morphology of the thin films were characterized by XRD, SEM, and AFM. The effect of annealing temperature on film crystallinity, surface morphology, dielectric properties, and ferromagnetism was investigated. When the annealing temperature was 700 °C, the BTO and NZFO films exhibited the best dielectric and ferromagnetic properties. The BTO thin film was then spin-coated on the substrate, and the NZFO thin film was sintered in situ on the BTO film. The composite film possessed both ferromagnetic and dielectric properties. Finally, an inductive coil was fabricated on the BTO/NZFO composite film to produce a capacitance-inductance integrated device.

    Personalized Federated Learning with Hidden Information on Personalized Prior

    Federated learning (FL) is a distributed machine learning technique that uses global servers and collaborating clients to train a global model in a privacy-preserving way, without direct data sharing. However, data heterogeneity, one of FL's main challenges, makes it difficult for the global model to perform well on each client's local data. Personalized federated learning (PFL) therefore aims to improve the model's performance on local data as much as possible. Bayesian learning, in which the model's parameters are treated as random variables under a prior assumption, is a natural fit for the heterogeneous-data problem: the more local data the model uses, the more it relies on that data, and otherwise it falls back on the prior. When Bayesian learning is applied to PFL, the global model supplies global knowledge as a prior for the local training process. In this paper, we model PFL with Bayesian learning by assuming a prior in the scaled exponential family, and propose pFedBreD, a framework that solves the resulting problem with Bregman-divergence regularization. Empirically, under a spherical Gaussian prior and a first-order mean-selection strategy, our proposal significantly outperforms other PFL algorithms on multiple public benchmarks. Comment: 19 pages, 6 figures, 3 tables
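    To make the regularization concrete, here is a minimal sketch (illustrative names, not the authors' code): for a spherical Gaussian prior, the Bregman divergence generated by φ(θ) = ½‖θ‖² reduces to the squared Euclidean distance, so each client's local objective is its data loss plus a proximal term pulling the personal model toward the global prior mean.

        import torch

        def local_objective(model, global_mean, batch, loss_fn, lam=1.0):
            """Bregman-regularized local loss. With a spherical Gaussian
            prior, the Bregman divergence is 0.5 * ||theta - mu||^2, an L2
            proximal term toward the global prior mean. `global_mean` is a
            list of tensors matching model.parameters()."""
            x, y = batch
            data_loss = loss_fn(model(x), y)
            prox = sum(((p - m) ** 2).sum()
                       for p, m in zip(model.parameters(), global_mean))
            return data_loss + 0.5 * lam * prox

    The paper's mean-selection strategies (e.g. the first-order strategy mentioned above) would choose `global_mean` more carefully; taking the raw global model weights is the simplest assumption.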

    Minimizing the Maximum Flow Time in the Online Food Delivery Problem

    We study a delivery problem common to today's online food-ordering platforms: customers order dishes online, and the restaurant delivers the food after receiving the order. Specifically, we study a problem where k vehicles of capacity c serve a set of requests ordering food from one restaurant. After a request arrives, it can be served by a vehicle moving from the restaurant to its delivery location. We are interested in serving all requests while minimizing the maximum flow time, i.e., the maximum time a customer waits to receive the food after submitting the order. We show that the problem is hard in both the offline and online settings, even when k = 1 and c = ∞: there is an Ω(n) hardness of approximation for the offline problem, and an Ω(n) lower bound on the competitive ratio of any online algorithm, where n is the number of points in the metric. We circumvent these strong negative results in two directions. Our main result is an O(1)-competitive online algorithm for the uncapacitated (i.e., c = ∞) food delivery problem on tree metrics; we also give a negative result showing that the condition c = ∞ is needed. We then explore the speed-augmentation model, where our online algorithm may use vehicles with faster speed. We show that a moderate speeding factor leads to a constant competitive ratio, and we prove a tight trade-off between the speeding factor and the competitive ratio.
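    To pin down the objective, a minimal sketch (illustrative, not from the paper) of the maximum flow time of a fixed schedule, given each request's arrival time and delivery completion time:

        def max_flow_time(arrivals, completions):
            """Maximum flow time: the longest wait between a customer
            submitting an order and receiving it."""
            return max(c - a for a, c in zip(arrivals, completions))

        # Requests arriving at t = 0, 2, 5 and delivered at t = 4, 9, 6
        # have flow times 4, 7, 1, so the objective value is 7.
        assert max_flow_time([0, 2, 5], [4, 9, 6]) == 7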

    The Model Inversion Eavesdropping Attack in Semantic Communication Systems

    In recent years, semantic communication has become a popular research topic for its superior communication efficiency. Because semantic communication relies on deep learning to extract meaning from raw messages, it is vulnerable to attacks targeting deep learning models. In this paper, we introduce the model inversion eavesdropping attack (MIEA) to reveal the risk of privacy leakage in semantic communication systems. In MIEA, the attacker first eavesdrops on the signal being transmitted by the semantic communication system and then performs a model inversion attack to reconstruct the raw message; both white-box and black-box settings are considered. Evaluation results show that MIEA can successfully reconstruct the raw message with good quality under different channel conditions. We then propose a defense method based on random permutation and substitution to defend against MIEA and achieve secure semantic communication. Our experimental results demonstrate the effectiveness of the proposed defense in preventing MIEA. Comment: Accepted by the 2023 IEEE Global Communications Conference (GLOBECOM)
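    A hedged sketch of the general idea behind such a defense (the paper's exact scheme may differ): the transmitter scrambles the encoded features with a key-seeded random permutation and value substitution; the legitimate receiver, sharing the key, inverts the scrambling, while an eavesdropper's inversion model sees only shuffled features.

        import numpy as np

        def protect(features, key):
            """Key-seeded random permutation plus sign-flip substitution
            on a 1-D feature vector (assumed form of the defense)."""
            rng = np.random.default_rng(key)
            perm = rng.permutation(features.size)
            subst = rng.choice([-1.0, 1.0], size=features.size)
            return features[perm] * subst

        def recover(protected, key):
            """Re-derive the same permutation and substitution from the
            shared key, then invert both steps."""
            rng = np.random.default_rng(key)
            perm = rng.permutation(protected.size)
            subst = rng.choice([-1.0, 1.0], size=protected.size)
            out = np.empty_like(protected)
            out[perm] = protected / subst  # undo substitution, then un-permute
            return out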

    DBS: Dynamic Batch Size For Distributed Deep Neural Network Training

    Synchronous strategies with data parallelism, such as synchronous stochastic gradient descent (S-SGD) and model averaging, are widely used in distributed training of deep neural networks (DNNs), largely owing to their easy implementation and promising performance. In these strategies, each worker in the cluster hosts a copy of the DNN and an evenly divided share of the dataset, with a fixed mini-batch size to keep the training convergent. Because of synchronization and network-transmission delays, workers with different computational capabilities must wait for each other, so high-performance workers inevitably waste computation and the utilization of the cluster is relatively low. To alleviate this issue, we propose the Dynamic Batch Size (DBS) strategy for distributed training of DNNs. Specifically, the performance of each worker is first evaluated based on its behavior in the previous epoch, and then the batch size and dataset partition are dynamically adjusted according to the worker's current performance, thereby improving the utilization of the cluster. To verify the effectiveness of the proposed strategy, extensive experiments were conducted; the results indicate that the proposed strategy can fully utilize the performance of the cluster, reduce the training time, and remain robust under disturbance from irrelevant tasks. Furthermore, a rigorous theoretical analysis is provided to prove the convergence of the proposed strategy. Comment: The latest version of this article has been accepted by IEEE TETC
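    A minimal sketch of the adjustment rule as described (details assumed): measure each worker's throughput over the previous epoch and split the next epoch's global batch in proportion, so per-step times roughly equalize across heterogeneous workers.

        def rebalance_batch_sizes(batch_sizes, epoch_times, total_batch):
            """Dynamic Batch Size sketch: throughput of worker i is samples
            processed per unit time in the previous epoch; the next epoch's
            batch sizes are proportional to throughput, so faster workers
            get more samples and stragglers get fewer. (Rounding may make
            the sum drift slightly from total_batch.)"""
            throughputs = [b / t for b, t in zip(batch_sizes, epoch_times)]
            total = sum(throughputs)
            return [round(total_batch * tp / total) for tp in throughputs]

        # Two workers with equal 64-sample batches that took 1.0 s and
        # 2.0 s per epoch: the faster worker gets 85 samples, the slower 43.
        print(rebalance_batch_sizes([64, 64], [1.0, 2.0], 128))  # [85, 43]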