
    Network Lifetime Maximization With Node Admission in Wireless Multimedia Sensor Networks

    Wireless multimedia sensor networks (WMSNs) are expected to support multimedia services such as the delivery of video and audio streams. However, owing to the relatively stringent quality-of-service (QoS) requirements of multimedia services (e.g., high transmission rates and timely delivery) and the limited wireless resources, not all candidate sensor nodes can necessarily be admitted into the network. Node admission is therefore essential for WMSNs and is the focus of this paper. Specifically, we study node admission and its interaction with power allocation and link scheduling. A cross-layer design is formulated as a two-stage optimization problem: the first stage maximizes the number of admitted sensor nodes, and the second stage maximizes the network lifetime. Interestingly, we prove that the two-stage problem can be converted into a one-stage optimization problem with a more compact and concise mathematical form. Numerical results demonstrate the effectiveness of both the two-stage and one-stage optimization frameworks.
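    To make the two-stage structure concrete, here is a minimal sketch in Python. The model is entirely hypothetical: scalar rate demands, one shared capacity constraint, and lifetime taken as the earliest battery depletion; the paper's actual formulation (with power allocation and link scheduling) is far richer, and the brute-force subset search below only illustrates the lexicographic order of the two objectives, admitted-node count first and lifetime second.

```python
from itertools import combinations

def two_stage_admission(rates, batteries, powers, capacity):
    """Stage 1: admit as many nodes as fit within the shared capacity.
    Stage 2: among maximum-size admissions, maximize the network lifetime,
    taken here as the lifetime of the first admitted node to die."""
    n = len(rates)
    for k in range(n, 0, -1):                      # try largest subsets first
        feasible = [s for s in combinations(range(n), k)
                    if sum(rates[i] for i in s) <= capacity]
        if feasible:
            # Lifetime of node i under this toy model: battery / drain rate.
            best = max(feasible,
                       key=lambda s: min(batteries[i] / powers[i] for i in s))
            lifetime = min(batteries[i] / powers[i] for i in best)
            return best, lifetime
    return (), 0.0                                 # no node can be admitted

admitted, life = two_stage_admission(
    rates=[3, 2, 4, 1], batteries=[10, 8, 12, 6],
    powers=[2, 1, 3, 1], capacity=7)
print(admitted, life)    # -> (0, 1, 3) admitted, lifetime 5.0
```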

    Over-the-Air Computation Aided Federated Learning with the Aggregation of Normalized Gradient

    Over-the-air computation is a communication-efficient solution for federated learning (FL). In such a system, an iterative procedure is performed: every mobile device updates the local gradient of its private loss function, amplifies it, and transmits it; the server receives the aggregated gradient all at once, then generates and broadcasts updated model parameters to every mobile device. Regarding the choice of amplification factor, most related works assume the local gradient always attains its maximal norm, although the norm actually fluctuates over iterations, which may degrade convergence. To circumvent this problem, we propose normalizing each local gradient before amplifying it. We prove that when the loss function is smooth, the proposed method converges to a stationary point at a sub-linear rate, and that when the loss function is smooth and strongly convex, it achieves the minimal training loss at a linear rate up to an arbitrarily small positive tolerance. Moreover, a tradeoff between the convergence rate and the tolerance is revealed. To speed up convergence, we also formulate system-parameter optimization problems for the above two cases; although these problems are non-convex, we derive their optimal solutions with polynomial complexity. Experimental results show that the proposed method outperforms benchmark methods in convergence performance.
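    The normalized-gradient idea can be illustrated with a toy simulation, sketched below under assumed simplifications: per-device quadratic losses, unit channel gains, and a decaying step size. None of these choices comes from the paper's system model; the point is only that each device transmits a unit-norm gradient, so the amplification factor no longer hinges on a worst-case gradient norm.

```python
import numpy as np

# Toy over-the-air FL loop with normalized local gradients (assumed setup:
# device i holds the quadratic loss f_i(w) = 0.5 * ||w - centers[i]||^2).
rng = np.random.default_rng(0)
centers = rng.normal(size=(5, 3))   # per-device loss minimizers
w = np.zeros(3)                     # global model at the server
noise_std = 0.01                    # additive receiver noise on the channel

for t in range(500):
    eta = 1.0 / (t + 1)             # decaying step size
    grads = w - centers             # row i: device i's local gradient
    # Each device unit-normalizes its gradient before transmission.
    sent = grads / (np.linalg.norm(grads, axis=1, keepdims=True) + 1e-12)
    # Over-the-air aggregation: the transmitted signals superpose on the
    # channel, so the server observes their sum plus noise in one shot.
    aggregated = sent.sum(axis=0) + noise_std * rng.normal(size=3)
    w -= (eta / len(centers)) * aggregated

print(w)  # settles where the per-device unit gradient directions balance
```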

    Federated Learning Robust to Byzantine Attacks: Achieving Zero Optimality Gap

    In this paper, we propose a robust aggregation method for federated learning (FL) that can effectively tackle malicious Byzantine attacks. At each user, the model parameters are first updated through multiple local steps, the number of which can be adjusted over iterations, and then pushed directly to the aggregation center. This decreases the number of interactions between the aggregation center and the users, allows each user to set its training parameters flexibly, and reduces the computation burden compared with existing works that must combine multiple historical model parameters. At the aggregation center, the geometric median is leveraged to combine the model parameters received from the users. Rigorous analysis shows that the proposed method achieves a zero optimality gap with linear convergence, as long as the fraction of Byzantine attackers is below one half. Numerical results verify the effectiveness of the proposed method.
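    For intuition, the sketch below computes a geometric-median aggregate with Weiszfeld's algorithm (an assumed solver choice; the abstract does not specify one) and shows its robustness when 3 of 10 users are Byzantine, i.e., the attacker fraction stays below one half.

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-8):
    """Weiszfeld iteration: find the point minimizing the sum of Euclidean
    distances to the rows of `points` (shape: n_users x dim)."""
    z = points.mean(axis=0)                       # start from the plain mean
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - z, axis=1), eps)
        w = 1.0 / d                               # inverse-distance weights
        z_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < eps:
            break
        z = z_new
    return z

rng = np.random.default_rng(1)
honest = rng.normal(loc=1.0, scale=0.1, size=(7, 4))   # 7 honest updates
byzantine = np.full((3, 4), 100.0)                     # 3 attackers (< half)
agg = geometric_median(np.vstack([honest, byzantine]))
print(agg)   # stays near the honest cluster (~1.0), unlike the mean (~30.7)
```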