To address gradient privacy leakage, server-side inference attacks, and the accuracy loss caused by client data poisoning in federated learning, a secure Byzantine-resilient federated learning scheme based on secure multi-party computation was proposed for the two-layer server-client architecture. First, a two-party ciphertext computation method based on additive secret sharing was proposed to split the local model gradients into shares, so that the server cannot mount inference attacks on individual updates. Second, a poisoning detection algorithm and a client screening mechanism that operate on the confidential (secret-shared) data were designed to resist poisoning attacks. Finally, experiments on the MNIST and CIFAR-10 datasets verified the feasibility of the scheme. Compared with the traditional Trim-mean and Median methods, when the proportion of Byzantine participants reaches 40%, the accuracy of the global model improves by 3% to 6%. In summary, the proposed scheme not only resists inference and poisoning attacks but also improves the accuracy of the global model, demonstrating its effectiveness.
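The abstract does not specify the protocol details, but the gradient-splitting step it describes rests on standard additive secret sharing. The following is a minimal NumPy sketch of that primitive, not the paper's implementation: the fixed-point encoding, the ring size Z_{2^64}, and all function names are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (not from the paper): 16 fractional bits of
# fixed-point precision, shares living in the ring Z_{2^64}.
FRAC_BITS = 16
RING = 2 ** 64

def encode(x):
    """Fixed-point encode float gradients as ring elements of Z_{2^64}."""
    return np.round(x * (1 << FRAC_BITS)).astype(np.int64).view(np.uint64)

def decode(x):
    """Invert the encoding: reinterpret as two's complement, then rescale."""
    return x.view(np.int64).astype(np.float64) / (1 << FRAC_BITS)

def share(grad, rng):
    """Split an encoded gradient into two additive shares; each share
    alone is uniformly random and reveals nothing about the gradient."""
    enc = encode(grad)
    s0 = rng.integers(0, RING, size=enc.shape, dtype=np.uint64)
    s1 = enc - s0  # uint64 arithmetic wraps mod 2^64
    return s0, s1

def reconstruct(s0, s1):
    """Recombine the two shares, e.g. after aggregation on ciphertexts."""
    return decode(s0 + s1)  # wraps back mod 2^64

rng = np.random.default_rng(0)
grad = np.array([0.25, -1.5, 3.0])
a, b = share(grad, rng)
assert np.allclose(reconstruct(a, b), grad)
```

Because the sharing is linear, each of the two computing parties can sum its shares across all clients locally, so only the aggregated gradient is ever reconstructed; this is what lets the scheme resist server inference on individual client updates.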
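For reference, the Trim-mean and Median baselines named in the comparison are coordinate-wise robust aggregators. A minimal plaintext sketch (the trim fraction and the toy data are illustrative, and the paper's own detection algorithm additionally runs on secret-shared data, which is not reproduced here):

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median over stacked client updates (n_clients, dim)."""
    return np.median(updates, axis=0)

def trimmed_mean(updates, trim_frac=0.2):
    """Coordinate-wise trimmed mean: drop the trim_frac smallest and
    largest values in each coordinate, then average the remainder."""
    n = updates.shape[0]
    k = int(np.floor(trim_frac * n))
    srt = np.sort(updates, axis=0)
    return srt[k:n - k].mean(axis=0)

# Toy run: 10 clients, 4 of them Byzantine (the 40% setting from the abstract).
rng = np.random.default_rng(1)
honest = rng.normal(0.0, 0.1, size=(6, 3))
byzantine = np.full((4, 3), 10.0)            # adversarial, far-off updates
updates = np.vstack([honest, byzantine])
print(coordinate_median(updates))            # stays near the honest mean
print(trimmed_mean(updates, trim_frac=0.4))  # also robust in this setting
```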