
    On the Capacity Region for Secure Index Coding

    We study the index coding problem in the presence of an eavesdropper, where the aim is to communicate without allowing the eavesdropper to learn any single message aside from the messages it may already know as side information. We establish an outer bound on the underlying secure capacity region of the index coding problem, which includes polymatroidal and security constraints, as well as the set of additional decoding constraints for legitimate receivers. We then propose a secure variant of the composite coding scheme, which yields an inner bound on the secure capacity region of the index coding problem. For the achievability of secure composite coding, a secret key with vanishingly small rate may be needed to ensure that each legitimate receiver that wants the same message as the eavesdropper knows at least two more messages than the eavesdropper. For all securely feasible index coding problems with four or fewer messages, our numerical results establish the secure index coding capacity region.
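
    As a concrete toy illustration of why a shared secret key helps (a minimal sketch, not the paper's composite coding scheme; the 8-bit message width and variable names are our own), consider the classic two-receiver instance where each receiver knows the other's message. An XOR broadcast serves both receivers, but if the eavesdropper holds one message as side information, a key shared only with the legitimate receivers is needed to hide the other:

    ```python
    import secrets

    # Toy two-message index coding instance: receiver 1 wants m1 and knows
    # m2; receiver 2 wants m2 and knows m1; messages are 8-bit for clarity.
    m1, m2 = secrets.randbelow(256), secrets.randbelow(256)

    # Without security, broadcasting m1 ^ m2 serves both receivers, but an
    # eavesdropper holding m2 as side information recovers m1 directly.
    insecure = m1 ^ m2
    assert insecure ^ m2 == m1  # the eavesdropper wins

    # A secret key k, shared only with the legitimate receivers, acts as a
    # one-time pad: given m2, the eavesdropper's view of m1 stays uniform.
    k = secrets.randbelow(256)
    secure = m1 ^ m2 ^ k

    assert secure ^ m2 ^ k == m1  # receiver 1 still decodes m1
    assert secure ^ m1 ^ k == m2  # receiver 2 still decodes m2
    ```

    In this toy the key is as long as a message; the point of the secure composite coding result is that, asymptotically, a key of vanishingly small rate suffices.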

    Large Steklov eigenvalues on hyperbolic surfaces

    In this paper, we first construct a sequence of hyperbolic surfaces with connected geodesic boundary such that the first normalized Steklov eigenvalue $\tilde{\sigma}_1$ tends to infinity. We then prove that as $g\rightarrow\infty$, a generic $\Sigma\in\mathcal{M}_{g,n}(L_g)$ satisfies $\tilde{\sigma}_1(\Sigma)>C\cdot\|L_g\|_1$, where $C$ is a positive universal constant. Here $\mathcal{M}_{g,n}(L_g)$ is the moduli space of hyperbolic surfaces of genus $g$ with $n$ boundary components of length $L_g=(L_g^1,\cdots,L_g^n)$, endowed with the Weil-Petersson metric, where $\|L_g\|_1\rightarrow\infty$ satisfies certain conditions. Comment: 20 pages, new results added, second theorem improved.
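
    For context, the following block records the standard Steklov eigenvalue problem and the variational characterization of $\sigma_1$; the boundary-length normalization in the last display is one common convention and is our assumption, as the abstract does not define $\tilde{\sigma}_1$ explicitly.

    ```latex
    % Steklov problem on a compact surface \Sigma with boundary:
    % harmonic in the interior, spectral condition on the boundary.
    \[
      \Delta u = 0 \ \text{in } \Sigma,
      \qquad
      \partial_\nu u = \sigma u \ \text{on } \partial\Sigma .
    \]
    % Variational characterization of the first nonzero eigenvalue:
    \[
      \sigma_1(\Sigma)
      = \inf\left\{
          \frac{\int_\Sigma |\nabla u|^2 \, dA}
               {\int_{\partial\Sigma} u^2 \, ds}
          \;:\;
          u \in C^\infty(\Sigma),\
          \int_{\partial\Sigma} u \, ds = 0
        \right\}.
    \]
    % One common scale-invariant normalization (our assumption):
    \[
      \tilde{\sigma}_1(\Sigma) = \sigma_1(\Sigma)\cdot \ell(\partial\Sigma).
    \]
    ```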

    Towards Fairness-Aware Federated Learning

    Recent advances in Federated Learning (FL) have brought large-scale collaborative machine learning opportunities for massively distributed clients with performance and data privacy guarantees. However, most current works focus on the interest of the central controller in FL, and overlook the interests of the FL clients. This may result in unfair treatment of clients, which discourages them from actively participating in the learning process and damages the sustainability of the FL ecosystem. Therefore, the topic of ensuring fairness in FL is attracting a great deal of research interest. In recent years, diverse Fairness-Aware FL (FAFL) approaches have been proposed in an effort to achieve fairness in FL from different perspectives. However, there is no comprehensive survey that helps readers gain insight into this interdisciplinary field. This paper aims to provide such a survey. By examining the fundamental and simplifying assumptions, as well as the notions of fairness adopted by the existing literature in this field, we propose a taxonomy of FAFL approaches covering major steps in FL, including client selection, optimization, contribution evaluation and incentive distribution. In addition, we discuss the main metrics for experimentally evaluating the performance of FAFL approaches, and suggest promising future research directions towards fairness-aware federated learning. Comment: 16 pages, 4 figures.
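
    As a concrete example of the kind of evaluation metric such a survey catalogues, here is a minimal sketch of Jain's fairness index over per-client accuracies, a widely used measure of how evenly a federated model serves its clients (the function name and numbers are illustrative, not from the paper):

    ```python
    from typing import Sequence

    def jains_index(values: Sequence[float]) -> float:
        """Jain's fairness index: 1.0 when all clients fare equally,
        approaching 1/n when a single client dominates."""
        n = len(values)
        total = sum(values)
        if n == 0 or total == 0:
            raise ValueError("need at least one client with a nonzero value")
        return total ** 2 / (n * sum(v * v for v in values))

    # Per-client test accuracies after federated training (made-up numbers):
    accuracies = [0.91, 0.88, 0.90, 0.62, 0.89]
    print(f"Jain's index: {jains_index(accuracies):.3f}")  # ~0.983
    ```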

    Dadu-RBD: Robot Rigid Body Dynamics Accelerator with Multifunctional Pipelines

    Rigid body dynamics is a key technology in the robotics field. Trajectory optimization and model predictive control algorithms typically involve a large number of rigid body dynamics computing tasks; processing these tasks on CPUs consumes substantial time, which degrades the real-time performance of robots. To this end, we propose a multifunctional robot rigid body dynamics accelerator, named RBDCore, to address this performance bottleneck. By analyzing the different functions commonly used in robot dynamics calculations, we summarize their reuse relationships and optimize them according to the hardware. Based on this, RBDCore can fully reuse common hardware modules when processing different computing tasks. By dynamically switching the dataflow path, RBDCore can accelerate various dynamics functions without reconfiguring the hardware. We design Structure-Adaptive Pipelines for RBDCore, which greatly improve the throughput of the accelerator, and robots with different structures and parameters can be optimized specifically. Compared with state-of-the-art CPU and GPU dynamics libraries and an FPGA accelerator, RBDCore significantly improves performance.
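
    The reuse relationship among dynamics functions can be sketched in software: a single inverse dynamics routine, called with different inputs, also yields the bias forces and the mass matrix needed for forward dynamics. The toy single-pendulum model below illustrates this standard trick; it is our own minimal sketch, not RBDCore's actual dataflow.

    ```python
    import math

    # Toy 1-DOF model: point mass M at distance L from a revolute joint,
    # angle q measured from the downward vertical, gravity G.
    M, L, G = 1.0, 0.5, 9.81

    def inverse_dynamics(q: float, qd: float, qdd: float) -> float:
        """Closed-form inverse dynamics: tau = (M*L^2)*qdd + M*G*L*sin(q)."""
        return (M * L * L) * qdd + M * G * L * math.sin(q)

    q, qd = 0.7, 0.3

    # Reuse pattern: the same routine, fed different inputs, produces the
    # other quantities a dynamics pipeline needs.
    bias = inverse_dynamics(q, qd, 0.0)            # gravity/Coriolis bias
    mass = inverse_dynamics(q, 0.0, 1.0) - inverse_dynamics(q, 0.0, 0.0)
    tau = 2.0                                      # an applied joint torque
    qdd = (tau - bias) / mass                      # forward dynamics
    print(f"bias={bias:.3f}  M={mass:.3f}  qdd={qdd:.3f}")
    ```

    For a full kinematic chain the same identities hold column-wise, which is why one hardware pipeline can serve inverse dynamics, bias-force and mass-matrix computations.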

    Fairness-Aware Client Selection for Federated Learning

    Federated learning (FL) has enabled multiple data owners (a.k.a. FL clients) to train machine learning models collaboratively without revealing private data. Since the FL server can only engage a limited number of clients in each training round, FL client selection has become an important research problem. Existing approaches generally focus on either enhancing FL model performance or enhancing the fair treatment of FL clients. The problem of balancing performance and fairness considerations when selecting FL clients remains open. To address this problem, we propose the Fairness-aware Federated Client Selection (FairFedCS) approach. Based on Lyapunov optimization, it dynamically adjusts FL clients' selection probabilities by jointly considering their reputations, times of participation in FL tasks and contributions to the resulting model performance. By not using threshold-based reputation filtering, it provides FL clients with opportunities to redeem their reputations after perceived poor performance, thereby further enhancing fair client treatment. Extensive experiments based on real-world multimedia datasets show that FairFedCS achieves 19.6% higher fairness and 0.73% higher test accuracy on average than the best-performing state-of-the-art approach. Comment: Accepted by ICME 202
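
    The abstract does not spell out FairFedCS's update rules, so the following is only a generic Lyapunov drift-plus-penalty sketch of fairness-aware client selection: each client carries a virtual queue tracking its participation deficit, and per-round scores trade queue backlog against reputation-weighted contribution. All weights, names and numbers are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_clients, k_per_round, rounds = 10, 3, 50
    target = k_per_round / n_clients   # fair long-run participation share
    V = 1.0                            # performance-vs-fairness trade-off

    reputation = rng.uniform(0.5, 1.0, n_clients)    # made-up scores
    contribution = rng.uniform(0.0, 1.0, n_clients)
    queue = np.zeros(n_clients)        # virtual participation-deficit queues
    picks = np.zeros(n_clients)

    for _ in range(rounds):
        # Drift-plus-penalty: backlog pushes fairness, V weighs performance.
        score = queue + V * reputation * contribution
        chosen = np.argsort(score)[-k_per_round:]
        served = np.zeros(n_clients)
        served[chosen] = 1.0
        queue = np.maximum(queue + target - served, 0.0)
        picks += served

    print("empirical participation rates:", picks / rounds)
    ```

    Because no hard reputation threshold is applied, a client with a low score still accumulates queue backlog and eventually gets selected again, mirroring the redemption property described above.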

    Beyond Reverse KL: Generalizing Direct Preference Optimization with Diverse Divergence Constraints

    The increasing capabilities of large language models (LLMs) raise opportunities for artificial general intelligence but concurrently amplify safety concerns, such as the potential misuse of AI systems, necessitating effective AI alignment. Reinforcement Learning from Human Feedback (RLHF) has emerged as a promising pathway towards AI alignment but brings forth challenges due to its complexity and dependence on a separate reward model. Direct Preference Optimization (DPO) has been proposed as an alternative, and it remains equivalent to RLHF under the reverse KL regularization constraint. This paper presents $f$-DPO, a generalized approach to DPO that incorporates diverse divergence constraints. We show that under certain $f$-divergences, including Jensen-Shannon divergence, forward KL divergence and $\alpha$-divergences, the complex relationship between the reward and the optimal policy can also be simplified by addressing the Karush-Kuhn-Tucker conditions. This eliminates the need for estimating the normalizing constant in the Bradley-Terry model and enables a tractable mapping between the reward function and the optimal policy. Our approach optimizes LLMs to align with human preferences in a more efficient and supervised manner under a broad set of divergence constraints. Empirically, adopting these divergences ensures a balance between alignment performance and generation diversity. Importantly, $f$-DPO outperforms PPO-based methods in divergence efficiency, and divergence constraints directly influence expected calibration error (ECE). Comment: Preprint.
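
    To make the construction concrete, here is a minimal PyTorch-style sketch of the pairwise Bradley-Terry loss with the reward induced by an $f$-divergence regularizer, $r \propto f'(\pi_\theta/\pi_{\mathrm{ref}})$: reverse KL recovers standard DPO's log-ratio reward, and swapping in forward KL or Jensen-Shannon only changes $f'$. Shapes, names and constants are our assumptions, not the paper's code.

    ```python
    import torch
    import torch.nn.functional as F

    # f'(u) for a few f-divergences, with u = pi_theta(y|x) / pi_ref(y|x).
    # Additive constants cancel in the pairwise margin; scales fold into beta.
    F_PRIME = {
        "reverse_kl": lambda u: torch.log(u),                  # standard DPO
        "forward_kl": lambda u: -1.0 / u,
        "jsd":        lambda u: 0.5 * torch.log(2.0 * u / (1.0 + u)),
    }

    def f_dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l,
                   beta: float = 0.1, divergence: str = "reverse_kl"):
        """Pairwise loss -log sigmoid(beta * (f'(u_w) - f'(u_l))), where
        logp_* are summed log-probs of the chosen (w) / rejected (l)
        responses under the policy and the frozen reference model."""
        fp = F_PRIME[divergence]
        u_w = torch.exp(logp_w - ref_logp_w)  # policy/reference ratios
        u_l = torch.exp(logp_l - ref_logp_l)
        margin = beta * (fp(u_w) - fp(u_l))
        return -F.logsigmoid(margin).mean()

    # Toy usage with made-up log-probabilities for a batch of 4 pairs.
    lw, ll = torch.randn(4) - 1.0, torch.randn(4) - 2.0
    rw, rl = lw - 0.1, ll + 0.1
    print(f_dpo_loss(lw, ll, rw, rl, divergence="jsd"))
    ```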
    • …