Integrating machine learning into Automated Control Systems (ACS) enhances
decision-making in industrial process management. A key barrier to the
widespread industrial adoption of these technologies, however, is the
vulnerability of neural networks to adversarial attacks. This study examines
the threats involved in deploying deep learning models for fault diagnosis in
ACS, using the Tennessee
Eastman Process dataset. We evaluate three neural networks with different
architectures, subjecting each to six types of adversarial attacks and testing
five defense methods against them. Our results highlight the models' strong
vulnerability to adversarial samples and the varying effectiveness of defense
strategies. We also propose a novel protection approach that combines multiple
defense methods and demonstrate its efficacy. This research offers several
insights into securing machine learning within ACS, supporting robust fault
diagnosis in industrial processes.
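
To make the threat model concrete, the sketch below illustrates one common adversarial attack, the Fast Gradient Sign Method (FGSM), on a toy logistic classifier over hypothetical sensor readings. The model, weights, and data are illustrative assumptions, not the networks or the Tennessee Eastman data used in the study; the point is only how a small, gradient-aligned perturbation can flip a fault-diagnosis score.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Craft an adversarial input x' = x + eps * sign(dL/dx) for a
    logistic model p = sigmoid(w.x + b) with cross-entropy loss L."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted fault probability
    grad_x = (p - y) * w                    # gradient of loss w.r.t. input
    return x + eps * np.sign(grad_x)        # step along the gradient sign

# Hypothetical linear "fault detector" and a normal (y = 0) sensor reading
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.4, -0.1])

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x_adv = fgsm_perturb(x, w, b, y=0.0, eps=0.3)
print(predict(x))      # below 0.5: classified as normal
print(predict(x_adv))  # pushed above 0.5: misclassified as faulty
```

Even though each feature moves by at most `eps`, the perturbation is aligned with the loss gradient, so the classifier's output shifts far more than a random disturbance of the same magnitude would cause; the attacks evaluated in the study exploit the same principle against deep networks.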