
    Edge-Fault Tolerance of Hypercube-like Networks

    This paper considers a generalized measure λ_s^(h) of fault tolerance in hypercube-like graphs G_n, a class that includes several well-known interconnection networks such as hypercubes, varietal hypercubes, twisted cubes, crossed cubes, and Möbius cubes, and proves that λ_s^(h)(G_n) = 2^h(n−h) for any h with 0 ≤ h ≤ n−1, by induction on n together with a new technique. This result shows that at least 2^h(n−h) edges of G_n must be removed to obtain a disconnected graph in which no vertex has degree less than h. Compared with previous results, this theoretically strengthens the fault-tolerant ability of the above-mentioned networks.
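The value λ_s^(h)(Q_n) = 2^h(n−h) can be sanity-checked on small ordinary hypercubes by exhaustive search (an illustrative brute-force sketch, not the paper's inductive proof; the function names are my own):

```python
from itertools import combinations

def hypercube_edges(n):
    # Vertices are labels 0..2^n - 1; edges join labels differing in one bit.
    return [(u, u ^ (1 << b)) for u in range(2 ** n)
            for b in range(n) if u < u ^ (1 << b)]

def is_connected(vertices, edges):
    # Depth-first search from an arbitrary start vertex.
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {vertices[0]}, [vertices[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vertices)

def lambda_s(n, h):
    # Smallest number of edges whose removal disconnects Q_n while
    # leaving every vertex with degree at least h (brute force; only
    # feasible for very small n).
    vertices = list(range(2 ** n))
    edges = hypercube_edges(n)
    for k in range(1, len(edges) + 1):
        for removed in combinations(edges, k):
            kept = [e for e in edges if e not in removed]
            deg = {v: 0 for v in vertices}
            for u, v in kept:
                deg[u] += 1
                deg[v] += 1
            if min(deg.values()) >= h and not is_connected(vertices, kept):
                return k
```

For Q_3 this confirms λ_s^(0) = 3 (ordinary edge connectivity, isolate one vertex), λ_s^(1) = 4 (isolate an edge), and λ_s^(2) = 4 (cut the perfect matching between the two 4-cycles), matching 2^h(3−h) for h = 0, 1, 2.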

    Shares in the EMCA: the time is ripe for true no par value shares in the EU, and the 2nd directive is not an obstacle

    The most interesting proposal in the draft European Model Companies Act (EMCA) concerning shares, and the focus of this Article, is the recommendation to introduce true no par value shares, as they have long been in use in the US and were more recently introduced in Australia, New Zealand, and Finland. Contrary to what has often been assumed, the 2nd EU Company Law Directive does not preclude no par value shares. There is nothing in the wording of the Directive to suggest otherwise, and the reference in the Directive to shares without a nominal value is a reference to Belgian law, which has allowed true no par value shares in all but name since at least 1913. EU member states could therefore introduce such shares even for public companies. True no par value shares offer a far more flexible framework for capital increases and mergers, but since under a no par value system there is no longer a par value to which shareholder rights are tied, additional disclosure about these rights might be warranted. Traditional par value shares offer no protection to creditors, shareholders, or other stakeholders, so their abolition should not be mourned. The threat of new share issues at an unacceptably high discount is more efficiently countered by disclosure and shareholder decision rights.

    Federated Learning in the Presence of Adversarial Client Unavailability

    Federated learning is a decentralized machine learning framework that enables collaborative model training without revealing raw data. Due to diverse hardware and software limitations, a client may not always be available to serve computation requests from the parameter server. An emerging line of research tackles arbitrary client unavailability, but existing work still imposes structural assumptions on the unavailability patterns, limiting its applicability in challenging scenarios where these patterns are beyond the control of the parameter server. Moreover, in harsh environments such as battlefields, adversaries can selectively and adaptively silence specific clients. In this paper, we relax the structural assumptions and consider adversarial client unavailability. To quantify the degree of client unavailability, we use the notion of an ε-adversary dropout fraction. We show that simple variants of FedAvg and FedProx, albeit completely agnostic to ε, converge to an estimation error on the order of ε(G² + σ²) for non-convex global objectives and ε(G² + σ²)/μ² for μ-strongly convex global objectives, where G is a heterogeneity parameter and σ² is the noise level. Conversely, we prove that any algorithm must suffer an estimation error of at least ε(G² + σ²)/8 for non-convex global objectives and ε(G² + σ²)/(8μ²) for μ-strongly convex global objectives. Furthermore, the convergence speeds of the FedAvg and FedProx variants are O(1/√T) for non-convex objectives and O(1/T) for strongly convex objectives, both of which are the best possible for any first-order method that has access only to noisy gradients.
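The ε-dependent bias can be seen in a minimal toy simulation (my own one-dimensional quadratic setup, not the paper's experiments): when an adversary silences an ε-fraction of clients every round, the FedAvg fixed point shifts away from the true global optimum by an amount that grows with ε and with client heterogeneity.

```python
def fedavg_with_dropout(targets, eps, rounds=200, lr=0.5, local_steps=1):
    # Toy scalar model: client i holds f_i(w) = 0.5 * (w - targets[i])^2,
    # so the global minimizer is the mean of all targets.  Each round an
    # adversary silences the int(eps * m) clients with the largest
    # targets (one illustrative worst-case dropout policy).
    m = len(targets)
    drop = int(eps * m)
    w = 0.0
    for _ in range(rounds):
        available = sorted(targets)[: m - drop]   # adversary hides the rest
        updates = []
        for c in available:
            w_local = w
            for _ in range(local_steps):
                w_local -= lr * (w_local - c)     # exact local gradient step
            updates.append(w_local)
        w = sum(updates) / len(updates)           # FedAvg aggregation
    return w

targets = [float(i) for i in range(10)]           # true optimum: 4.5
biased = fedavg_with_dropout(targets, eps=0.2)    # converges near 3.5
```

With ε = 0.2 the iterate settles at the mean of the surviving clients (3.5) rather than the global optimum (4.5), so the estimation error is bounded away from zero no matter how long training runs, consistent with an ε-dependent lower bound.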