    Cross cultural perspectives of decision-making and control in multinational corporations operating in ASEAN

    This study discusses cultural-environment issues in ASEAN countries and the challenges they pose to managers of multinational corporations. It reports findings on patterns of decision-making, control, and cultural management, as well as behavioural aspects. Although these findings share similarities with other studies, differences were found in the patterns of decision-making, control, and cultural management, as well as in behavioural aspects.

    Multi-party Poisoning through Generalized $p$-Tampering

    In a poisoning attack against a learning algorithm, an adversary tampers with a fraction of the training data $T$ with the goal of increasing the classification error of the constructed hypothesis/model over the final test distribution. In the distributed setting, $T$ might be gathered gradually from $m$ data providers $P_1,\dots,P_m$ who generate and submit their shares of $T$ in an online way. In this work, we initiate a formal study of $(k,p)$-poisoning attacks in which an adversary controls $k \in [m]$ of the parties, and even for each corrupted party $P_i$, the adversary submits some poisoned data $T'_i$ on behalf of $P_i$ that is still "$(1-p)$-close" to the correct data $T_i$ (e.g., a $1-p$ fraction of $T'_i$ is still honestly generated). For $k = m$, this model becomes the traditional notion of poisoning, and for $p = 1$ it coincides with the standard notion of corruption in multi-party computation. We prove that if there is an initial constant error for the generated hypothesis $h$, there is always a $(k,p)$-poisoning attacker who can decrease the confidence of $h$ (in having a small error), or alternatively increase the error of $h$, by $\Omega(p \cdot k/m)$. Our attacks can be implemented in polynomial time given samples from the correct data, and they use no wrong labels if the original distributions are not noisy. At a technical level, we prove a general lemma about biasing bounded functions $f(x_1,\dots,x_n) \in [0,1]$ through an attack model in which each block $x_i$ might be controlled by an adversary with marginal probability $p$ in an online way. When the probabilities are independent, this coincides with the model of $p$-tampering attacks; thus we call our model generalized $p$-tampering. We prove the power of such attacks by incorporating ideas from the context of coin-flipping attacks into the $p$-tampering model and generalize the results in both of these areas.
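    To make the $\Omega(p \cdot k/m)$ bias concrete, below is a minimal toy simulation, not the paper's actual attack construction (which works against general learning algorithms). It takes $f$ to be the mean of $m$ fair bits, one per party, and lets an adversary controlling $k$ parties tamper with each controlled bit with marginal probability $p$; all parameter values and names here are illustrative assumptions.

```python
import random

def simulate(m=1000, k=300, p=0.5, trials=2000, seed=0):
    """Toy (k,p)-poisoning simulation, illustrative only.

    Each of m parties submits one fair bit; f is the mean of the bits.
    An adversary controls the first k parties and, for each controlled
    bit, replaces it with 1 with probability p (the direction that
    raises f), so each corrupted submission stays (1-p)-close to the
    honest data in expectation. Returns the clean and attacked E[f].
    """
    rng = random.Random(seed)
    clean_total = attacked_total = 0.0
    for _ in range(trials):
        bits = [rng.randint(0, 1) for _ in range(m)]  # honest submissions
        clean_total += sum(bits) / m
        tampered = list(bits)
        for i in range(k):            # the k corrupted parties
            if rng.random() < p:      # tamper with marginal probability p
                tampered[i] = 1       # push f upward
        attacked_total += sum(tampered) / m
    return clean_total / trials, attacked_total / trials

clean, attacked = simulate()
print(f"clean E[f]    ~ {clean:.3f}")
print(f"attacked E[f] ~ {attacked:.3f}")
# For this toy f, the expected bias is p*k/(2m), an instance of Omega(p*k/m).
print(f"bias ~ {attacked - clean:.3f} (toy theory: {0.5 * 300 / (2 * 1000):.3f})")
```

    With the defaults above, the clean mean sits near 0.5 and the attacked mean near 0.575, matching the $p \cdot k/(2m)$ prediction for this toy choice of $f$ and showing the bias scaling linearly in both $p$ and $k/m$.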