    On Stability and Convergence of Distributed Filters

    Recent years have borne witness to the proliferation of distributed filtering techniques, in which a collection of agents communicating over an ad-hoc network aims to collaboratively estimate and track the state of a system. These techniques form the enabling technology of modern multi-agent systems and have gained great importance in the engineering community. Although most distributed filtering techniques come with a set of stability and convergence criteria, the conditions imposed are found to be unnecessarily restrictive. The paradigm of stability and convergence in distributed filtering is revisited in this manuscript. Accordingly, a general distributed filter is constructed and its estimation-error dynamics are formulated. The conducted analysis demonstrates that the conditions for achieving stable filtering operations are the same as those required in the centralized filtering setting. Finally, the concepts are demonstrated in a Kalman filtering framework and validated using simulation examples.
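    To make the setting concrete, the following is a minimal Python sketch, not the paper's algorithm, of a consensus-based distributed Kalman filter: each agent runs a local Kalman filter on its own measurement of a shared scalar state and then averages estimates with its network neighbors. The scalar model, the three-agent topology, and the plain averaging step are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        A, C, Q, R = 0.95, 1.0, 0.01, 0.25                # scalar state/measurement model
        neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}  # ad-hoc network, self-inclusive

        x_true = 1.0
        x_hat = {i: 0.0 for i in neighbors}               # local state estimates
        P = {i: 1.0 for i in neighbors}                   # local error covariances

        for t in range(100):
            x_true = A * x_true + rng.normal(0.0, np.sqrt(Q))
            for i in neighbors:
                # local time update
                x_pred = A * x_hat[i]
                P_pred = A * P[i] * A + Q
                # local measurement update with agent i's own observation
                y = C * x_true + rng.normal(0.0, np.sqrt(R))
                K = P_pred * C / (C * P_pred * C + R)
                x_hat[i] = x_pred + K * (y - C * x_pred)
                P[i] = (1 - K * C) * P_pred
            # consensus step: average pre-consensus estimates over each neighborhood
            x_hat = {i: np.mean([x_hat[j] for j in neighbors[i]]) for i in neighbors}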

    Personalized Graph Federated Learning with Differential Privacy

    This paper presents a personalized graph federated learning (PGFL) framework in which distributed interconnected servers and their respective edge devices collaboratively learn device- or cluster-specific models while maintaining the privacy of every individual device. The proposed approach exploits similarities among different models to provide a more relevant experience for each device, even in situations with diverse data distributions and disproportionate datasets. Furthermore, to ensure a secure and efficient approach to collaborative personalized learning, we study a variant of the PGFL implementation that utilizes differential privacy, specifically zero-concentrated differential privacy, in which a noise sequence perturbs model exchanges. Our mathematical analysis shows that the proposed privacy-preserving PGFL algorithm converges to the optimal cluster-specific solution for each cluster in linear time. It also shows that exploiting similarities among clusters leads to an alternative output whose distance to the original solution is bounded, and that this bound can be adjusted by modifying the algorithm's hyperparameters. Further, our analysis shows that the algorithm ensures local differential privacy for all clients in terms of zero-concentrated differential privacy. Finally, the performance of the proposed PGFL algorithm is examined through numerical experiments on regression and classification tasks using synthetic data and the MNIST dataset.
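    As an illustration of the kind of perturbation described above, the sketch below privatizes a model exchange with the Gaussian mechanism, which satisfies rho-zCDP with rho = clip^2 / (2 sigma^2) for an update clipped to L2 norm clip. The clipping threshold, noise scale, and model dimension are illustrative assumptions, not the paper's parameters.

        import numpy as np

        rng = np.random.default_rng(0)

        def privatize_update(update, clip=1.0, sigma=0.5):
            # Clip the update to L2 norm `clip`, then add Gaussian noise; this
            # Gaussian mechanism satisfies rho-zCDP with rho = (clip / sigma)**2 / 2.
            norm = np.linalg.norm(update)
            clipped = update * min(1.0, clip / max(norm, 1e-12))
            return clipped + rng.normal(0.0, sigma, size=update.shape)

        local_update = rng.normal(size=10)       # stand-in for a local model delta
        shared = privatize_update(local_update)  # what leaves the device
        rho = (1.0 / 0.5) ** 2 / 2               # rho = 2.0 zCDP per exchange here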

    Asynchronous Online Federated Learning with Reduced Communication Requirements

    Online federated learning (FL) enables geographically distributed devices to learn a global shared model from locally available streaming data. Most of the online FL literature considers a best-case scenario regarding the participating clients and the communication channels. However, these assumptions are often not met in real-world applications. Asynchronous settings reflect a more realistic environment, capturing heterogeneous client participation due to available computational power and battery constraints, as well as delays caused by communication channels or straggler devices. Further, in most applications, energy efficiency must be taken into consideration. Using the principles of partial-sharing-based communications, we propose a communication-efficient asynchronous online federated learning (PAO-Fed) strategy. By reducing the communication overhead of the participants, the proposed method renders participation in the learning task more accessible and efficient. In addition, the proposed aggregation mechanism accounts for random participation, handles delayed updates, and mitigates their effect on accuracy. We prove the first- and second-order convergence of the proposed PAO-Fed method and obtain an expression for its steady-state mean square deviation. Finally, we conduct comprehensive simulations to study the performance of the proposed method on both synthetic and real-life datasets. The simulations reveal that, in asynchronous settings, the proposed PAO-Fed achieves the same convergence properties as the online federated stochastic gradient method while reducing the communication overhead by 98 percent.
    Comment: A conference precursor of this work appears in the 2022 IEEE IC
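    The following sketch illustrates the general idea behind partial-sharing-based communications, assuming a simple rotating coordinate block; it is not the PAO-Fed specification. Sending 2 of 100 entries per round cuts upload volume by 98 percent, mirroring the scale of savings the abstract reports for its own experiments.

        import numpy as np

        def partial_share(model, round_idx, block_size):
            # Select a rotating block of coordinates to transmit this round.
            n = model.size
            start = (round_idx * block_size) % n
            idx = np.arange(start, start + block_size) % n   # wrap around the vector
            return idx, model[idx]

        def server_merge(global_model, idx, values, weight=0.5):
            # Blend only the received coordinates into the global model.
            global_model[idx] = (1 - weight) * global_model[idx] + weight * values
            return global_model

        client = np.random.default_rng(0).normal(size=100)   # local model parameters
        glob = np.zeros(100)                                 # server-side global model
        for r in range(50):
            i, v = partial_share(client, r, block_size=2)    # 2% of entries per round
            glob = server_merge(glob, i, v)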

    Quantum Cellular Neural Networks

    We have previously proposed a way of using coupled quantum dots to construct digital computing elements: quantum-dot cellular automata (QCA). Here we consider a different approach to using coupled quantum-dot cells in an architecture which, rather than reproducing Boolean logic, uses physical near-neighbor connectivity to construct an analog Cellular Neural Network (CNN).
    Comment: 7 pages including 3 figures
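    For reference, the sketch below simulates the standard (Chua-Yang) cellular neural network dynamics that such an architecture would realize physically: each cell's state evolves under near-neighbor feedback through a piecewise-linear output. The 3x3 template, grid size, and Euler step are illustrative assumptions and say nothing about the quantum-dot implementation.

        import numpy as np
        from scipy.signal import convolve2d

        def cnn_step(x, feedback, bias=0.0, dt=0.05):
            # Chua-Yang cell dynamics: x' = -x + (A * y) + bias, with the
            # piecewise-linear output y = 0.5 * (|x + 1| - |x - 1|).
            y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))
            dx = -x + convolve2d(y, feedback, mode="same", boundary="fill") + bias
            return x + dt * dx                               # explicit Euler step

        feedback = np.array([[0.0, 0.25, 0.0],
                             [0.25, 2.0, 0.25],
                             [0.0, 0.25, 0.0]])              # near-neighbor template
        x = np.random.default_rng(0).uniform(-0.1, 0.1, size=(16, 16))
        for _ in range(200):
            x = cnn_step(x, feedback)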

    Crystallization of Adenylylsulfate Reductase from Desulfovibrio gigas: A Strategy Based on Controlled Protein Oligomerization

    Adenylylsulfate reductase (adenosine 5′-phosphosulfate reductase, APS reductase or APSR, EC 1.8.99.2) catalyzes the conversion of APS to sulfite in dissimilatory sulfate reduction. APSR was isolated and purified directly from large-scale anaerobic cultures of Desulfovibrio gigas, a strict anaerobe, for structure and function investigation. Oligomerization of APSR to form dimers (α₂β₂), tetramers (α₄β₄), hexamers (α₆β₆), and larger oligomers was observed during purification of the protein. Dynamic light scattering and ultracentrifugation revealed that the addition of adenosine monophosphate (AMP) or adenosine 5′-phosphosulfate (APS) disrupts the oligomerization, indicating that AMP or APS binding to APSR dissociates the inactive hexamers into functional dimers. Treatment of APSR with β-mercaptoethanol decreased the enzyme size from a hexamer to a dimer, probably by disrupting the Cys156–Cys162 disulfide toward the C-terminus of the β-subunit. Alignment of the APSR sequences from D. gigas and A. fulgidus revealed the largest differences in this region of the β-subunit, with the D. gigas APSR containing 16 additional amino acids that include the Cys156–Cys162 disulfide. Studies in a pH gradient showed that the diameter of APSR decreased progressively as the pH became more acidic. To crystallize APSR for structure determination, we optimized conditions to generate a homogeneous and stable form of the enzyme, combining dynamic light scattering, ultracentrifugation, and electron paramagnetic resonance methods to analyze its various oligomeric states in varied environments.